Cloudda
Abstract:
NFS is a common file-sharing protocol used to access remote data efficiently. It provides a versatile form of remote file access by allowing a user to work with remote data as if it were local. NFS excels at writing files with small record sizes, whereas iSCSI excels at writing files with larger record sizes; for read activity, there is no discernible difference between the two. Overall, iSCSI is the better protocol to employ in an OpenStack Cinder implementation for cloud storage. Both NFS and iSCSI can be configured in the Cinder configuration file on the client side, and on a Synology NAS storage system on the server side.
Introduction:
Data storage is one of the most important parts of a networked system; a file system provides named, logically organized files on which operations such as data access can be performed[1]. NFS (Network File System), designed by Sun Microsystems, is a client-server application that provides a shared file system to clients across a network[2]. NFS offers transparent, remote access to file systems. File servers are becoming central to modern networks, acting as centralized data-sharing and storage stations[1]. Cloud computing is now in widespread use around the world and is employed by many companies. Because of this heavy usage, a need for cloud storage has emerged. Cloud storage not only uses resources more effectively but also makes it much easier to utilize a virtual machine[4]. Several alternative protocols can be implemented for cloud storage, and each protocol has its advantages; this research aims to determine the most suitable one. Cloud storage services such as Dropbox, Google Drive, iCloud Drive, and Microsoft OneDrive have revolutionized the way people store and synchronize their files across different devices. Dropbox stands out due to its delta sync feature, which improves network-level efficiency. These services are highly popular and greatly simplify the task of accessing files from multiple devices[3].
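To make the idea behind delta sync concrete, the following is a minimal Python sketch (not Dropbox's actual implementation; the 4 MiB chunk size is an assumption): the client splits a file into fixed-size chunks, hashes each chunk, and transfers only the chunks whose hashes differ from the server's copy.

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # illustrative 4 MiB chunk size

def chunk_hashes(data: bytes) -> list[str]:
    """Split data into fixed-size chunks and return one SHA-256 digest per chunk."""
    return [
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    ]

def delta_sync(local: bytes, remote_hashes: list[str]) -> dict[int, bytes]:
    """Return only the chunks (index -> bytes) that differ from the remote copy."""
    changed = {}
    for i, digest in enumerate(chunk_hashes(local)):
        if i >= len(remote_hashes) or remote_hashes[i] != digest:
            changed[i] = local[i * CHUNK_SIZE:(i + 1) * CHUNK_SIZE]
    return changed
```

Only the chunks in the returned dictionary need to cross the network, which is why delta sync saves bandwidth when files change incrementally.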
Figure: the basic architecture of NFS, comprising a client and a server [1].
On the server side, each file is assigned a file handle (fhandle) that uniquely identifies it; the fhandle includes the inode number, the inode generation number, and the filesystem id. The client uses the fhandle to reference the file in subsequent requests. The inode number and generation number identify the file, while the filesystem id distinguishes files on the server that may share an inode number and generation number but reside on different filesystems.
However, a file can be deleted and a new file later created with the same inode number and generation number, which can confuse the server if it receives a request with an fhandle that references the original file. To handle this, the server must maintain a cache of recently deleted inodes and check whether the inode referred to by an fhandle has recently been deleted and reused. If it has, the server returns an error to the client indicating that the file no longer exists. This ensures that the server handles fhandles correctly even when inode numbers and generation numbers are reused.
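A file handle can be pictured as a small record, and staleness detection as a generation check. The Python sketch below is illustrative only (it is not the NFS wire format; the names and the in-memory inode table are assumptions for this example):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FileHandle:
    filesystem_id: int   # distinguishes filesystems on the same server
    inode_number: int    # identifies the file within a filesystem
    generation: int      # changes each time an inode is reused

class StaleHandleError(Exception):
    """Raised when an fhandle refers to a deleted (and possibly reused) inode."""

def resolve(fh: FileHandle, inode_table: dict[tuple[int, int], int]) -> None:
    """Look up (filesystem_id, inode_number); the stored generation must match."""
    current_gen = inode_table.get((fh.filesystem_id, fh.inode_number))
    if current_gen is None or current_gen != fh.generation:
        # The inode was deleted, or deleted and reused for a new file:
        # report "file no longer exists" rather than serving the wrong file.
        raise StaleHandleError(f"stale file handle: {fh}")
```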
The Sun version of client-side NFS provides a transparent interface for remote file access. Achieving this required a way of locating remote files that did not change the structure of path names, since some Unix-based solutions require changes to existing programs. Instead of late binding, the hostname lookup and file-address binding are performed only once per filesystem, when the client mounts a remote filesystem onto a directory. This approach has the advantage that the client deals with hostnames only once, at mount time, and that the server can limit filesystem access by verifying client credentials. The downside is that remote files are not accessible until a mount has completed. To provide transparent access to different types of filesystems mounted on a single machine, a new filesystem interface was developed in the kernel that supports both a Virtual Filesystem (VFS) interface and a Virtual Node (vnode) interface[7].
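The VFS/vnode split can be sketched as one common interface with interchangeable local and remote implementations behind a mount table. The Python below is a conceptual sketch of that structure, not the kernel API; all class and method names are assumptions for illustration.

```python
from abc import ABC, abstractmethod

class VNode(ABC):
    """Conceptual stand-in for a vnode: one file object, local or remote."""
    @abstractmethod
    def read(self, offset: int, size: int) -> bytes: ...

class VFS(ABC):
    """Conceptual stand-in for one mounted filesystem."""
    @abstractmethod
    def lookup(self, path: str) -> VNode: ...

class LocalFS(VFS):
    def lookup(self, path: str) -> VNode:
        ...  # resolve against local on-disk inodes

class RemoteNFS(VFS):
    def __init__(self, server: str, export: str):
        # Hostname lookup and binding happen once here, at mount time,
        # rather than on every path lookup (no late binding).
        self.server, self.export = server, export

    def lookup(self, path: str) -> VNode:
        ...  # send a lookup request and wrap the returned fhandle in a VNode

# A mount table maps directories to filesystems; path resolution picks the
# right VFS, so applications see a single uniform namespace.
mounts = {"/": LocalFS(), "/mnt/data": RemoteNFS("fileserver", "/export/data")}
```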
The Ficus file system is unique in its ability to allow updates during network partition if any
copy of the file is accessible. Any updates made to files or directories are automatically
propagated to accessible replicas. Conflicting updates to directories are detected and repaired
automatically, while conflicting updates to ordinary files are reported to the owner. The
optimistic scheme of the Ficus file system is attractive due to the rarity of conflicting updates
and the infrequency of communication outages that render replicas inaccessible.
Additionally, the Ficus file system uses stackable layers to add new features to an existing
file system without having to re-implement existing functions. This is done by structuring the
file system as a stack of modules with the same interface, allowing for transparent
augmentation of existing services. The implementation of Ficus using this layered architecture is described in [9].
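The stackable-layer idea can be illustrated with modules that share a single interface and delegate to the layer below, so a new feature slots in without re-implementing what already exists. The sketch below shows the structure only; it is not Ficus code, and the layer names are invented for the example.

```python
class Layer:
    """Every layer exposes the same interface and delegates to the layer below."""
    def __init__(self, below: "Layer | None" = None):
        self.below = below

    def write(self, path: str, data: bytes) -> None:
        if self.below:
            self.below.write(path, data)

class BaseStore(Layer):
    """Bottom of the stack: a plain in-memory store."""
    def __init__(self):
        super().__init__(None)
        self.files: dict[str, bytes] = {}

    def write(self, path: str, data: bytes) -> None:
        self.files[path] = data

class ReplicationLayer(Layer):
    """A new feature added on top without changing the layers below."""
    def __init__(self, below: Layer, replicas: list[Layer]):
        super().__init__(below)
        self.replicas = replicas

    def write(self, path: str, data: bytes) -> None:
        self.below.write(path, data)
        for r in self.replicas:      # propagate updates to accessible replicas
            r.write(path, data)

# Same interface at every level, so stacking is transparent to callers.
fs = ReplicationLayer(BaseStore(), replicas=[BaseStore()])
fs.write("/doc.txt", b"hello")
```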
TOFF aims to achieve two primary goals, namely replication transparency and failure
transparency. Replication transparency refers to the ability to replicate data without the
clients being aware of it or needing to maintain a list of replicated servers. In other words, the
replication process should be seamless and invisible to the clients. Meanwhile, failure
transparency pertains to the ability of the system to mask any failure, such that the clients are
not aware of the failure and can continue to access the system as if nothing happened. These
two goals are essential to ensuring a robust and reliable distributed system [10].
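Failure transparency of this kind can be pictured as a client-side proxy that silently retries an operation against other replicas when one server fails. The Python below is a minimal illustration of that pattern under assumed interfaces, not TOFF's actual mechanism.

```python
import random

class ServerDown(Exception):
    pass

class ReplicatedClient:
    """Callers invoke read(); they never learn which replica served or failed."""
    def __init__(self, replicas):
        self.replicas = replicas  # the replica list is hidden from applications

    def read(self, path: str) -> bytes:
        # Try replicas in random order until one answers; individual
        # failures are masked from the caller (failure transparency).
        for server in random.sample(self.replicas, len(self.replicas)):
            try:
                return server.read(path)
            except ServerDown:
                continue  # fail over to the next replica
        raise ServerDown("all replicas unavailable")
```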
2. Parallel NFS:
The NFS protocol enables file sharing across multiple systems. The pNFS extension was introduced to improve performance by distributing file access across multiple servers. pNFS decouples the data path of NFS from its control path to allow concurrent I/O streams from multiple client processes to storage servers. This approach is unique when compared with other parallel file systems such as GPFS, Panasas, Lustre, and PVFS; pNFS is the only open standard being developed and approved by various commercial and nonprofit organizations [11]. Parallel NFS (pNFS) is a protocol designed for parallel I/O access in various storage environments. It has been validated through several prototypes, and efforts to improve its performance have focused on exposing the full bandwidth potential of the underlying file and storage systems. The pNFS architecture is discussed, and a direct I/O request-flow protocol is proposed to further enhance its performance. Overall, pNFS is an emerging standard protocol with potential for efficient parallel I/O access.
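The control/data split can be sketched as follows: the client first asks the metadata server for a layout over the control path, then reads stripes directly and concurrently from the data servers over the data path. This Python sketch is conceptual only; the layout_get and read calls are assumed stand-ins, and real pNFS layouts are defined by NFSv4.1.

```python
from concurrent.futures import ThreadPoolExecutor

def get_layout(metadata_server, path: str) -> list[tuple[str, int, int]]:
    """Control path: ask the MDS which (data_server, offset, length) holds each stripe."""
    return metadata_server.layout_get(path)  # illustrative call

def read_stripe(data_servers, stripe) -> tuple[int, bytes]:
    server, offset, length = stripe
    # Data path: read directly from the data server, bypassing the MDS.
    return offset, data_servers[server].read(offset, length)

def pnfs_read(metadata_server, data_servers, path: str) -> bytes:
    layout = get_layout(metadata_server, path)
    # Stripes are fetched concurrently from multiple servers, which is
    # where the parallel I/O bandwidth comes from.
    with ThreadPoolExecutor(max_workers=len(layout)) as pool:
        parts = dict(pool.map(lambda s: read_stripe(data_servers, s), layout))
    return b"".join(parts[off] for off in sorted(parts))
```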
gVault is a cryptographic network file system that uses Gmail's storage to
provide users with a free and easily accessible network drive. gVault offers benefits such as
secure remote access, 24/7 availability, and large storage capacity. The design and
implementation of gVault address challenges to ensure data confidentiality and integrity, with
a novel encrypted storage model and key management techniques. An initial prototype is
implemented and experiments show that the cost of security is negligible compared to data
transfer costs [15]. gVault operates as a client-side application that prompts users to enter
their Gmail login credentials and a master password. Once this information is entered, gVault
establishes a session with the Gmail server and operates as an HTTP client. All file
operations performed by the user are translated into equivalent HTTP requests and the
resulting HTML responses from the Gmail servers are parsed to retrieve the user's data. To
ensure the security of user data, gVault employs various cryptographic techniques in this
process. The gVault design assumes that everything outside the client machine's security
perimeter is untrusted, with no restrictions on potential attacks.
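As a rough illustration of this client-side model (this is not gVault's code; the key-derivation parameters are assumptions, the upload step is omitted, and the example uses the widely available cryptography package), a client could derive a key from the master password and encrypt each file before anything leaves the machine:

```python
import hashlib, os, secrets
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(master_password: str, salt: bytes) -> bytes:
    # PBKDF2 parameters here are illustrative, not gVault's actual choices.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 600_000)

def encrypt_for_upload(master_password: str, plaintext: bytes) -> bytes:
    salt = os.urandom(16)
    nonce = secrets.token_bytes(12)
    key = derive_key(master_password, salt)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    # Only ciphertext crosses the client's security perimeter; the salt
    # and nonce are not secret and travel alongside it.
    return salt + nonce + ciphertext
```

The returned blob would then be sent over the HTTP session in place of the plaintext, which is why the untrusted server never sees user data in the clear.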
Conclusion:
In conclusion, the NFS protocol offers an efficient and flexible way to access remote data that
can be indistinguishable from local data to the user. Various implementations of NFS exist
for different situations and underlying communication technologies, making it one of the
most versatile methods of remote file access available today. Cloud sync services are
experiencing an upsurge in popularity, which makes efficient service provisioning on both
client and server sides more and more crucial [3]. In terms of writing small record size files,
NFS has an advantage, while iSCSI is better for larger files. For reading, there is no
noticeable difference between the two protocols. Overall, iSCSI has a slight throughput
advantage, making it the preferred protocol for OpenStack Cinder. The client-side
configuration can be done through the Cinder configuration file, and the server-side
configuration can be done on Synology NAS storage for both NFS and iSCSI.
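As a hedged illustration of the client-side step, the snippet below generates a minimal cinder.conf with an NFS backend. The option names follow the upstream Cinder NFS driver, but the backend name, share list path, and values are assumptions; verify them against your Cinder release's documentation, and configure the exports or iSCSI targets on the Synology side in DSM.

```python
import configparser

# Illustrative backend definition; not a drop-in production configuration.
conf = configparser.ConfigParser()
conf["DEFAULT"] = {"enabled_backends": "synology_nfs"}
conf["synology_nfs"] = {
    "volume_backend_name": "synology_nfs",
    "volume_driver": "cinder.volume.drivers.nfs.NfsDriver",
    # File listing the exported shares, e.g. "192.0.2.10:/volume1/cinder"
    "nfs_shares_config": "/etc/cinder/nfs_shares",
}

with open("cinder.conf", "w") as f:
    conf.write(f)
```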
References: