Seminar Mahesh

The document discusses cloud storage in cloud computing. It provides an introduction to cloud computing basics, including types of cloud, stakeholders, and advantages. It then covers cloud architecture, comparing cloud and grid computing, the relation to utility computing, and types of cloud services. The document also includes a case study on popular cloud applications like Amazon EC2 and S3, and Google App Engine.


CLOUD STORAGE IN CLOUD COMPUTING

A Technical Seminar Report submitted in partial fulfillment of


the requirements for the award of the Degree of

BACHELOR OF TECHNOLOGY

In

INFORMATION TECHNOLOGY

Submitted By

PETA MAHESH
19WJ1A1240

DEPARTMENT OF INFORMATION TECHNOLOGY


GURU NANAK INSTITUTIONS TECHNICAL CAMPUS (Autonomous)
(Affiliated to JNTUH, Accredited by NBA)
Ibrahimpatnam, R.R. District, Telangana - 501506.
(2019-2023)
DEPARTMENT OF INFORMATION TECHNOLOGY
CERTIFICATE
This is to certify that this technical seminar entitled “CLOUD STORAGE IN CLOUD
COMPUTING” submitted by Peta Mahesh (19WJ1A1240) in partial fulfillment of the
requirements for the award of the degree of Bachelor of Technology in Information Technology
prescribed by Guru Nanak Institutions Technical Campus (Autonomous), affiliated to
Jawaharlal Nehru Technological University, Hyderabad, during the academic year 2021-2022. The
seminar report has been approved as it satisfies the academic requirements in respect of the
Technical Seminar prescribed for the Bachelor degree.

Coordinator HOD Associate Director


Mr. K. Anil Kumar Dr. M. I. Thariq Hussan Dr. Rishi Sayal

External Examiner
ACKNOWLEDGEMENT
I wish to express my sincere thanks to Chairman Sardar T.S. Kohli and Vice-Chairman Sardar
G.S. Kohli of Guru Nanak Institutions for providing all the facilities, resources and
infrastructure necessary for completing this technical seminar work.

I express wholehearted gratitude to our Managing Director Dr. H.S. Saini for providing a strong
vision in engineering education through accreditation policies and regular training to upgrade
the education offered by the faculty members.

I express wholehearted gratitude to our Director Dr. K. Venkata Rao for providing us a
constructive platform to launch our ideas in the area of Information Technology and improve
our academic standards.

I express wholehearted gratitude to our Associate Director Dr. Rishi Sayal for providing a
conducive environment for carrying out our academic schedules and projects with ease.

I have been truly blessed to have a wonderful advisor, Dr. M.I. Thariq Hussan, HOD, Department
of Information Technology, for guiding me to explore the ramifications of this work, and I express
my sincere gratitude towards him for leading me throughout the technical seminar work.

I especially thank our Coordinator, Mr. K. Anil Kumar, Assistant Professor, GNITC, for his
suggestions and constant guidance in the seminar presentation. I also express sincere thanks to all
the faculty of the Information Technology department who helped at every stage of this technical
seminar with their valuable suggestions and support.
DECLARATION
I hereby declare that the work presented in this thesis entitled “CLOUD
STORAGE IN CLOUD COMPUTING” by PETA MAHESH (19WJ1A1240), in partial
fulfillment of the requirements for the award of the degree of B.TECH (INFORMATION
TECHNOLOGY), submitted to the Department of IT, is an authentic record of my own work
carried out during the period from March 2023 to June 2023. The matter presented in this thesis has
not been submitted by me to any other University/Institute for the award of any degree.

Date:
Place: GNITC
Thanking you

PETA MAHESH (19WJ1A1240)
Vision of the Department
To be a premier Information Technology department in the region by providing high quality
education, research and employability.
Mission of the Department

M1: Nurture young individuals into knowledgeable, skillful and ethical professionals in their
pursuit of Information Technology.
M2: Transform the students through soft skills and an excellent teaching-learning process, and
sustain high performance through innovation.
M3: Build extensive partnerships and collaborations with foreign universities to enrich knowledge
and research.
M4: Develop industry interaction for innovation and product development to provide good
placements.
DEPARTMENT OF INFORMATION TECHNOLOGY
PROGRAMME EDUCATIONAL OBJECTIVES (PEOs)

PEO 1: Produce industry-ready graduates with the ability to apply academic knowledge across
disciplines and in emerging areas of Information Technology for higher studies, research,
employability and product development, and to handle realistic problems.

PEO 2: Graduates will have good communication skills, ethical conduct, and a sense of
responsibility to serve society and protect the environment.

PEO 3: Graduates will have excellence in soft skills, managerial skills, leadership qualities and
understand the need for lifelong learning for a successful professional career.

PROGRAMME OUTCOMES (POs)


Engineering Graduates will be able to:

1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals,
and an engineering specialization to the solution of complex engineering problems.

2. Problem analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of mathematics,
natural sciences, and engineering sciences.

3. Design/development of solutions: Design solutions for complex engineering problems and
design system components or processes that meet the specified needs with appropriate
consideration for public health and safety, and cultural, societal, and environmental
considerations.
4. Conduct investigations of complex problems: Use research-based knowledge and research
methods including design of experiments, analysis and interpretation of data, and synthesis of the
information to provide valid conclusions.

5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex engineering activities with
an understanding of the limitations.

6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to the
professional engineering practice.
7. Environment and sustainability: Understand the impact of the professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for
sustainable development.

8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.

9. Individual and team work: Function effectively as an individual, and as a member or leader in
diverse teams, and in multidisciplinary settings.

10. Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as being able to comprehend and write
effective reports and design documentation, make effective presentations, and give and receive
clear instructions.

11. Project management and finance: Demonstrate knowledge and understanding of engineering
and management principles and apply these to one’s own work, as a member and leader in a team,
to manage projects in multidisciplinary environments.

12. Life-long learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological change.

PROGRAMME SPECIFIC OUTCOMES (PSOs)

1. Ability to design, develop, test and debug software applications, evaluate and recognize
potential risks and provide innovative solutions.

2. Explore technical knowledge in diverse areas of Information Technology for the upliftment of
society, a successful career, entrepreneurship and higher studies.
Abstract
The term “cloud computing” is a recent buzzword in the IT world. Behind this fancy poetic phrase
lies a true picture of the future of computing, from both a technical and a social perspective.
Though the term “cloud computing” is recent, the idea of centralizing computation and storage in
distributed data centers maintained by third-party companies is not new; it emerged back in the
1990s along with distributed computing approaches like grid computing. Cloud computing aims
to provide IT as a service to cloud users on demand, with greater flexibility, availability,
reliability and scalability, using a utility computing model. This new paradigm of computing has
immense potential for use in e-governance and in rural development in developing countries like
India.
Contents

1 Introduction
2 Cloud Computing Basics
2.1 Types of Cloud
2.2 Cloud Stakeholders
2.3 Advantages of using Cloud
3 Motivation towards Cloud in recent time
4 Cloud Architecture
4.1 Comparison between Cloud Computing and Grid Computing
4.2 Relation between Cloud Computing and Utility Computing
4.3 Types of utility cloud services
5 Popular Cloud Applications: A Case study
5.1 Amazon EC2 and S3 Services
5.2 Google App-Engine
5.3 Windows Azure
6 Cloud Computing Applications in the Indian context
6.1 E-Governance
6.2 Rural development
7 Conclusion
8 References
LIST OF FIGURES

FIGURE NO NAME OF THE FIGURE
1 Interconnection between cloud stakeholders
2 Basic cloud computing architecture
3 Virtualization basics
4 Cloud service stack
5 Windows Azure component architecture
1 Introduction
Cloud computing is a recently developed paradigm of distributed computing, though it is not an
idea that emerged only recently. In 1969 [16] L. Kleinrock anticipated, “As of now, computer
networks are still in their infancy. But as they grow up and become more sophisticated, we will
probably see the spread of ’computer utilities’ which, like present electric and telephone utilities,
will service individual homes and offices across the country.” His vision was a true indication
of today’s utility-based computing paradigm. One of the giant steps towards this world was taken
in the mid 1990s, when the term grid computing was first coined to describe consumers obtaining
computing power on demand. The origin of cloud computing can be seen as an evolution of grid
computing technologies. The term cloud computing was first given prominence by Google’s CEO
Eric Schmidt in late 2006. So the birth of cloud computing is a very recent phenomenon, although
its roots belong to some old ideas with new business, technical and social perspectives. From the
architectural point of view, a cloud is naturally built on an existing grid-based architecture: it uses
grid services and adds technologies like virtualization, along with some business models.
In brief, a cloud is essentially a collection of commodity computers, networked together in the
same or different geographical locations, operating together to serve a number of customers with
different needs and workloads on demand with the help of virtualization. Cloud services are
provided to cloud users as utility services, like water, electricity or telephone, using a
pay-as-you-use business model. These utility services are generally described as XaaS (X as a
Service), where X can be Software, Platform, Infrastructure, etc. Cloud users use these services
to build their applications on the Internet and deliver them to their end users. The cloud users
thus don’t have to worry about installing and maintaining the hardware and software needed, and
they can afford these services because they pay only for what they use. So cloud users can reduce
their expenditure and effort on IT by using cloud services instead of establishing the IT
infrastructure themselves. The cloud is essentially provided by large distributed data centers.
These data centers are often organized as grids, and the cloud is built on top of grid services.
Cloud users are provided with virtual images of the physical machines in the data centers. This
virtualization is one of the key concepts of cloud computing, as it builds an abstraction over the
physical system. Many cloud applications are gaining popularity day by day for their
availability, reliability, scalability and utility model.

These applications make distributed computing easy, as the critical aspects are handled by the
cloud provider itself. Cloud computing is growing nowadays in the interest of technical and
business organizations, but it can also be beneficial for solving social issues. In recent times
e-governance has been implemented in developing countries to improve the efficiency and
effectiveness of governance. This approach can be improved considerably by using cloud
computing instead of traditional ICT. In India, the economy is agriculture-based and most
citizens live in rural areas. The standard of living, agricultural productivity, etc., can be
enhanced by utilizing cloud computing in a proper way.

2 Cloud Computing Basics
Cloud computing is a paradigm of distributed computing that provides customers on-demand,
utility-based computing services. Cloud users can in turn provide more reliable, available and
updated services to their own clients. The cloud itself consists of physical machines in the data
centers of cloud providers; virtualization is provided on top of these physical machines, and the
resulting virtual machines are provided to the cloud users. Different cloud providers provide
cloud services at different abstraction levels, e.g. Amazon EC2 enables users to handle very
low-level details, whereas Google App-Engine provides a development platform for developers to
build their applications. So cloud services are divided into types such as Software as a Service,
Platform as a Service and Infrastructure as a Service. These services are available over the
Internet across the whole world, with the cloud acting as a single point of access for serving all
customers. Cloud computing architecture addresses the difficulties of large-scale data processing.

2.1 Types of Cloud

A cloud can be of three types:

1. Private Cloud – This type of cloud is maintained within an organization and used solely
for its internal purposes, so the utility model is not a big factor in this scenario. Many companies
are moving towards this setting, and experts consider it the first step for an organization moving
into the cloud. Security and network bandwidth are not critical issues for a private cloud.
2. Public Cloud – In this type, an organization rents cloud services from cloud providers on
demand. Services are provided to the users using a utility computing model.
3. Hybrid Cloud – This type of cloud is composed of multiple internal and/or external clouds.
This is the scenario when an organization moves to the public cloud domain from its internal
private cloud.

2.2 Cloud Stakeholders

To understand why cloud computing is used, let’s first look at who uses it, and then discuss the
advantages they get from the cloud. There are three types of stakeholders: cloud providers, cloud
users and end users [Figure 1]. Cloud providers provide cloud services to the cloud users. These
cloud services take the form of utility computing, i.e. the cloud users use these services on a
pay-as-you-go model. The cloud users develop their products using these services and deliver the
products to the end users.

Figure 1: Interconnection between cloud stakeholders

2.3 Advantages of using Cloud


The advantages of using cloud services can be technical, architectural, business-related, etc.

1. Cloud Providers’ point of view


(a) Most data centers today are underutilized — typically only around 15% utilized. These data
centers need spare capacity just to cope with the huge spikes that sometimes occur in server
usage. Large companies owning those data centers can easily rent that computing power to
other organizations, profit from it, and also make proper use of the resources (like power)
needed for running a data center.
(b) Companies having large data centers have already deployed the resources, so to provide
cloud services they would need very little additional investment and the cost would be incremental.

2. Cloud Users’ point of view


(a) Cloud users need not take care of the hardware and software they use, and they don’t
have to worry about maintenance. The users are no longer tied to a single traditional system.
(b) Virtualization technology gives users the illusion that they have all the resources
available.
(c) Cloud users can use the resources on demand and pay only for what they use, so they
can plan to reduce their usage and minimize their expenditure.

(d) Scalability is one of the major advantages for cloud users. Scalability is provided dynamically:
users get as many resources as they need. Thus this model perfectly fits the management of rare
spikes in demand.

3. Motivation towards Cloud in recent time
Cloud computing is not a new idea; it is an evolution of older paradigms of distributed
computing. The recent enthusiasm about cloud computing is due to some recent technology
trends and business models:

1. High demand for interactive applications –

Applications with real-time response, capable of providing information from other users or
from non-human sensors, are gaining more and more popularity today. These are attracted to
the cloud not only because of its high availability but also because such services are generally
data-intensive and require analyzing data across different sources.
2. Parallel batch processing –
The cloud inherently supports batch processing and analyzing terabytes of data very efficiently.
Programming models like Google’s MapReduce and Yahoo!’s open-source counterpart
Hadoop can be used for this, hiding the operational complexity of parallel processing across
hundreds of cloud computing servers.
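The map-reduce idea above can be sketched in miniature. This is a toy, single-process illustration of the programming model (word counting, the classic example), not Hadoop itself; in a real framework the map calls run in parallel on many machines:

```python
from collections import defaultdict

def map_phase(document):
    # Mapper: emit a (word, 1) pair for every word in the input split.
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # Reducer: sum the emitted counts for each distinct key.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

splits = ["the cloud serves the user", "the user pays per use"]
# A real framework shuffles mapper output across machines;
# here we simply concatenate the outputs of all mappers.
intermediate = [pair for doc in splits for pair in map_phase(doc)]
word_counts = reduce_phase(intermediate)
print(word_counts["the"])  # 3
```

The operational complexity the text mentions (scheduling, shuffling, fault tolerance) lives entirely in the framework; the programmer writes only the two small functions.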
3. New trends in the business world and the scientific community –
In recent times business enterprises have been interested in discovering customer needs, buying
patterns and supply chains to support top-management decisions. This requires analysis of very
large amounts of online data, which can be done very easily with the help of the cloud. The
Yahoo! homepage is a good example: it shows the hottest news in the country, and changes the
ads and other sections of the page according to the users’ interests. Besides these, many
scientific experiments, such as the LHC (Large Hadron Collider), need very time-consuming
data processing jobs, which can also be done on the cloud.
4. Extensive desktop applications –
Some desktop applications like Matlab and Mathematica are becoming so compute-intensive that
a single desktop machine is no longer enough to run them. So they are being developed to be
capable of using cloud computing to perform extensive evaluations.

4 Cloud Architecture
The cloud providers actually have the physical data centers that provide virtualized services to
their users through the Internet. The cloud providers often provide separation between application
and data; this scenario is shown in Figure 2. The underlying physical machines are generally
organized in grids and are usually geographically distributed. Virtualization plays an important
role in the cloud scenario: the data center hosts provide the physical hardware on which the
virtual machines reside, and users can potentially use any OS supported by the virtual machines.

Operating systems are designed for specific hardware and software, which results in a lack of
portability of operating systems and software from one machine to another machine that uses a
different instruction set architecture. The concept of the virtual machine solves this problem: a
system VM acts as an interface between the hardware and the operating system. Another category
of virtual machine, the process virtual machine, acts as an abstraction layer between the
operating system and applications. Virtualization can very roughly be described as software
translating the hardware instructions generated by conventional software into a format
understandable by the physical hardware. Virtualization also includes the mapping of virtual
resources, like registers and memory, to real hardware resources. The underlying platform in
virtualization is generally referred to as the host, and the software that runs in the VM
environment is called the guest. Figure 3 shows the very basics of virtualization. Here the
virtualization layer covers the physical hardware; the operating system accesses the physical
hardware through the virtualization layer, while applications can issue instructions using the OS
interface as well as directly through the virtualization layer interface. This design enables users
to run applications not compatible with the operating system. Virtualization also enables the
migration of a virtual image from one physical machine to another, a feature useful for the cloud:
data locality makes many optimizations possible, backups can be taken in different locations,
and the provider can shut down some of the data center’s physical machines to reduce power
consumption.

Figure 3: Virtualization basic

4.1 Comparison between Cloud Computing and Grid Computing

Most cloud architectures are built on grid architecture and utilize its services. Grid is also a
form of distributed computing architecture where organizations owning data centers collaborate
with each other for mutual benefit. Although at first look it may seem that cloud computing is no
different from its originator, there are substantial differences between them despite the many
similarities. The relation between grid and cloud computing is discussed in Table 1.

4.2 Relation between Cloud Computing and Utility Computing
Cloud users enjoy a utility computing model when interacting with cloud service providers, but
utility computing is not essentially the same as cloud computing. Utility computing is the
aggregation of computing resources, such as computation and storage, as a metered service,
similar to a traditional public utility like electricity, water or the telephone network. Such a
service might be provided by a dedicated computer cluster specifically built for the purpose of
being rented out, or even by an under-utilized supercomputer. The cloud is one such option for
providing utility computing to users.
4.3 Types of utility cloud services
Utility computing services provided by the cloud provider can be classified by the type of
service. These services are typically represented as XaaS, where we can replace X by
Infrastructure, Platform, Hardware, Software, Desktop, Data, etc. The three most widely accepted
types of services are Software as a Service, Platform as a Service and Infrastructure as a Service.
These services provide different levels of abstraction and flexibility to the cloud users.

Figure 4: Cloud Service stack

We’ll now discuss some salient features of these models –

1. SaaS (Software as a service) –

Delivers a single application through the web browser to thousands of customers using a
multitenant architecture. On the customer side, this means no upfront investment in servers or
software licensing; on the provider side, with just one application to maintain, costs are low
compared to conventional hosting. Under SaaS, the software publisher (seller) runs and maintains
all the necessary hardware and software, and the customer accesses the applications through the
Internet. For example, Salesforce.com, with yearly revenues of over $300M, offers on-demand
Customer Relationship Management software solutions. This application runs on Salesforce.com’s
own infrastructure and is delivered directly to users over the Internet. Salesforce does not sell
perpetual licenses; it charges a monthly subscription fee starting at $65/user/month. Google Docs
is also a very nice example of SaaS, where users can create, edit, delete and share their
documents, spreadsheets or presentations, while Google has the responsibility of maintaining the
software and hardware.
E.g. - Google Apps, Zoho Office
2. PaaS (Platform as a service) –

Delivers a development environment as a service. One can build one’s own applications that run
on the provider’s infrastructure, which supports transactions, uniform authentication, robust
scalability and availability. The applications built using PaaS are offered as SaaS and consumed
directly from the end users’ web browsers. This gives the ability to integrate or consume
third-party web services from other service platforms.
E.g. - Google App Engine.

3. IaaS (Infrastructure as a Service) –

IaaS provides cloud users flexibility at a lower level than the other services: it gives developers
even raw CPU capacity with OS-level control.
E.g. - Amazon EC2 and S3.

5 Popular Cloud Applications: A Case study
Applications using cloud computing are gaining popularity day by day for their high
availability, reliability and utility service model. Today many cloud providers are in the IT
market. Among them, Google App-Engine, Windows Azure and Amazon EC2/S3 are prominent
for their popularity and technical merits.

5.1 Amazon EC2 and S3 Services


Amazon Elastic Compute Cloud (EC2) [13] is one of the biggest offerings of Infrastructure as a
Service. It provides the computer architecture with XEN virtual machines; Amazon EC2 is one
of the biggest deployments of the XEN architecture to date. Clients can install their preferred
operating system on the virtual machine. EC2 uses the Simple Storage Service (S3) for data
storage. Users can hire a suitable amount of CPU power, storage and memory without any
upfront commitment, and can control the entire software stack from the kernel upwards. The
architecture has two components: EC2 for computing purposes and S3 for storage purposes.
• Simple Storage Service:
S3 can be thought of as a globally available distributed hash table with high-level access control.
Data is stored in name/value pairs. Names are like UNIX file names, and a value can be an object
of up to 5 GB in size, with up to 4K of metadata for each object. All objects in Amazon’s S3 must
fit into the global namespace, which consists of a “bucket name” and an “object name”. Bucket
names are like user names in a traditional email account and are provided by Amazon on a
first-come, first-served basis. An AWS (Amazon Web Services) account can have a maximum of
100 buckets. Data can be sent to S3 using a SOAP-based API or with raw HTTP “PUT”
commands, and can be retrieved using SOAP, HTTP or BitTorrent. When BitTorrent is used, the
S3 system operates as both the tracker and the initial seeder. There are also tools available that
let users view S3 as a remote file system. Upload and download rates to and from S3 are not that
exciting: one developer from Germany reported experiencing 10-100 KBps, and this rate can go
up to 1-2 MBps on the higher side depending on the time of day. Although the speed is not
fascinating, it is good enough for delivering web objects and for backup purposes, though not for
computation. Amazon S3 has very impressive support for privacy, integrity and short-term
availability. Long-term availability is unknown, as it depends on the internal commitment of
Amazon’s data centers. Data privacy can be obtained by encrypting the data to be stored, but this
encryption has to be done by the user before storing the data in S3. One can use SSL with HTTPS
to connect to S3 for more security, but using SSL also increases upload/download time. Data
integrity can be achieved by end-to-end MD5 checking: when an object is stored into S3, S3
returns the MD5 of that object, which can easily be checked against a previously computed hash
value to guarantee data integrity. Short-term availability depends upon Amazon’s connectivity
and the load on its servers at that instant. Once the data is actually in S3, it is Amazon’s
responsibility to take care of its availability. They claim that the data is backed up on multiple
hard drives in multiple data centers, but don’t guarantee this in any Service Level Agreement.
There is no backup or recovery
mechanism if the user accidentally deletes any data. Amazon has a very impressive scheme of
authentication in comparison to other cloud services. Every AWS account has an Access Key ID
and a Secret Key.
The ID is 20 characters and the Key is a 41-character string. When signing, an HMAC is first
computed over the request parameters using that Key. The Amazon server recomputes the HMAC
and compares it with the value computed on the client side. Requests also include a timestamp to
prevent replay attacks.
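The two checks just described, HMAC request signing and end-to-end MD5 verification, can be sketched as follows. This is an illustration of the mechanism only: the parameter names and the string layout are simplified, not Amazon's exact string-to-sign specification, and the secret key is a made-up placeholder:

```python
import base64
import hashlib
import hmac

SECRET_KEY = b"example-secret-key"  # hypothetical; a real AWS Secret Key is 41 chars

def sign_request(params, secret_key):
    """Compute an HMAC-SHA1 signature over sorted request parameters.
    (Simplified layout; S3's real string-to-sign is more elaborate.)"""
    string_to_sign = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    mac = hmac.new(secret_key, string_to_sign.encode(), hashlib.sha1)
    return base64.b64encode(mac.digest()).decode()

def verify_integrity(local_object, md5_returned_by_store):
    """End-to-end integrity check: compare our own MD5 of the object
    with the hash the storage service returned after the PUT."""
    return hashlib.md5(local_object).hexdigest() == md5_returned_by_store

params = {"Action": "PUT", "Bucket": "demo", "Timestamp": "2009-01-01T00:00:00Z"}
sig = sign_request(params, SECRET_KEY)
print(len(sig))  # 28: base64 of a 20-byte SHA-1 digest

data = b"object body"
print(verify_integrity(data, hashlib.md5(data).hexdigest()))  # True
```

The server holds the same secret key, recomputes the signature over the received parameters, and rejects the request on any mismatch; the timestamp parameter is what defeats replaying a captured signed request later.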

• Elastic Compute Cloud:


As the name implies, EC2 rents a cloud of computers to users, with the flexibility of choosing the
configuration of the virtual machine: RAM size, local disk size, processor speed, etc. The
machines that deliver EC2 services are actually virtual machines running on top of the XEN
platform. Users can store a disk image inside S3 and create a virtual machine in EC2 from it
using tools provided by Amazon. This virtual machine can easily be instantiated using a Java
program and can also be monitored. As EC2 is based on XEN, it supports any Linux distribution
as well as other OSs. Amazon makes no promises about the reliability of the EC2 computers: any
machine can crash at any moment, and they are not backed up. Although, in users’ experience,
these machines generally don’t crash, it is safer to store information in S3, which is a more
reliable and replicated service. The EC2 security model is similar to that of S3; the only
difference is that commands are signed with an X.509 private key. But since this key is
downloaded from the AWS account, security still depends fundamentally on the AWS username
and password.

5.2 Google App-Engine


Google App-Engine is a platform for developing and deploying web applications on Google’s
architecture, providing Platform as a Service to cloud users. Google App-Engine was first
released as a beta version in 2008. The languages supported are Python, Java and extensions of
JVM languages; App-Engine requires developers to use only the languages, APIs and frameworks
it supports. Google App-Engine allows storing and retrieving data from a BigTable non-relational
database. App Engine applications are expected to be request-reply based. Google App Engine
provides automatic scalability and a persistent data storage service; the datastore features a query
engine and transaction capabilities. These applications are easy to scale as traffic and data
storage needs grow, so the cloud user doesn’t have to worry about spikes in traffic or data. Such
applications are generally suitable for social networking start-ups and event-based websites
catering to seasonal demand.
5.3 Windows Azure


Windows Azure sits at an intermediate point on the spectrum of flexibility versus programmer
convenience: between a complete application framework like Google App-Engine and hardware virtual
machines like EC2. It uses the .NET libraries to provide a language-independent managed environment,
and falls under the category of Platform as a Service. Azure applications run on machines in
Microsoft data centers; customers can use the service to run applications and store data on
Internet-accessible machines owned by Microsoft. The Windows Azure platform provides three
fundamental components - a compute component, a storage component and a fabric component. The basic
components of Windows Azure are shown in Figure 5.

• The Compute Service:


The primary goal of this platform is to support a large number of simultaneous users. (Microsoft
also said that it would use Azure to build its own SaaS applications, which motivated many
potential users.) To allow applications to scale out, Microsoft runs multiple instances of an
application on virtual machines provided by a hypervisor. Developers use the Windows Azure portal
through a web browser, signing in with a Windows Live ID to access a hosting account, a storage
account, or both. Two different types of Azure instance are available: Web role instances and
Worker role instances.
– Web role instance:
As the name implies, this type of instance can accept HTTP or HTTPS requests. For this, Microsoft
runs IIS (Internet Information Services) as a web server inside the provided VM. Developers can
build applications using ASP.NET, Windows Communication Foundation (WCF) or any other .NET
technology, as well as native code such as C++; PHP- and Java-based technologies are also supported.
Azure scales an application by running multiple instances with no affinity to a particular Web role
instance, so it is perfectly natural for an Azure application to serve multiple requests from a
single user through different instances. This requires applications to write client state to Azure
storage after each request.
– Worker role instance:
This type of instance is very similar to a Web role instance, but unlike Web role instances they do
not have IIS configured. They can be configured to run executables of the user's choice; a Worker
role instance typically functions like a background job.

Figure 5: Windows Azure component architecture
Web role instances can accept requests from users, which can then be processed by Worker role
instances at a later point in time. For compute-intensive work, many Worker role instances can run
in parallel. Logging and monitoring of Azure applications is made easy by the provision of an
application-wide log: a developer can collect performance-related information such as CPU usage and
store crash dumps in storage. Azure doesn't give the developer the freedom to use his or her own VM
image for Windows Azure; the platform maintains its own Windows image. Applications in Azure run
only in user mode - no administrative access is allowed. Windows Azure can therefore update the
operating system in each VM without any concern about affecting the applications running on it.
This approach separates administrative work from the user domain.

• The Storage Service: Applications running in Azure use several types of storage:

– Blobs:
These are used for storing binary data in a simple hierarchy, and can have metadata associated with
them. A user account can have one or more containers, and each container holds one or more blobs.
– Storage tables:
Blobs provide a mechanism for unstructured data, but for more structured purposes tables are more
suitable. These tables are nothing like tables in a traditional database: each table is actually a
group of entities, accessed using ADO.NET Data Services. SQL is avoided here because of scale-out
issues.
– Queues:
Unlike tables and blobs, queues are not a structure for storing data; they hold messages describing
tasks to be performed by Worker role instances. These messages are written by Web role instances on
receiving requests from clients; a Worker role instance waiting on the queue can read a message and
perform the task it specifies.
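The Web role/Worker role division of labor amounts to a producer-consumer pattern. A toy sketch (role names and message fields are invented for illustration, not Azure's queue API):

```python
import queue

# Shared task queue standing in for an Azure storage queue.
task_queue = queue.Queue()

def web_role_handle(request_id):
    """Web role: accept a client request and record the work as a message."""
    task_queue.put({"task": "resize-image", "request": request_id})
    return "202 Accepted"

def worker_role_poll():
    """Worker role: take the next message off the queue and process it."""
    msg = task_queue.get()
    return f"processed {msg['task']} for request {msg['request']}"

status = web_role_handle(42)
result = worker_role_poll()
print(status, result)
```

The queue decouples the two roles: the Web role can acknowledge the client immediately, and any number of Worker role instances can drain the queue in parallel for compute-intensive jobs.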

All data in Windows Azure storage is replicated three times to provide fault tolerance, and Azure
also keeps backups in geographically distributed data centers. Windows Azure storage can be
accessed by any Windows Azure application as well as by applications hosted on other cloud
platforms. All blobs, tables and queues are named using URIs and can be accessed through HTTP
method calls. Some applications have an inherent need for relational databases; this is provided in
the form of SQL Azure, which is built on Microsoft SQL Server. Its data can be accessed via
ADO.NET or other Windows data access interfaces.
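The account → container → blob hierarchy described above can be modeled in a few lines. This is a toy in-memory sketch; the class and method names are invented for illustration and are not Azure's actual storage API.

```python
class BlobStore:
    """Toy model of one storage account: containers mapping to blobs,
    each blob carrying binary data plus optional metadata."""
    def __init__(self):
        self.containers = {}

    def put_blob(self, container, name, data, metadata=None):
        # Create the container on first use, then store the blob under it.
        self.containers.setdefault(container, {})[name] = {
            "data": data,
            "metadata": metadata or {},
        }

    def get_blob(self, container, name):
        return self.containers[container][name]

store = BlobStore()
store.put_blob("photos", "puppy.jpg", b"\x89PNG...", {"owner": "alice"})
blob = store.get_blob("photos", "puppy.jpg")
print(blob["metadata"]["owner"])
```

In the real service each of these levels is addressable by URI (account, container, blob), which is what makes the HTTP-based access described above possible.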

• The Fabric: All Windows Azure applications and all data stored in Azure Storage physically live
inside data centers operated by Microsoft. In these data centers the set of machines dedicated to
Azure is organized into a fabric, managed by a fabric controller that is itself replicated across
five to seven machines. The controllers are aware of every Windows Azure application running in the
fabric and own all of its resources: computers, switches, load balancers and so on. A controller
decides which resources to allocate to a new application by examining the configuration file that
accompanies it, and it also monitors the running applications.

6 Cloud Computing Applications in the Indian Context
Today most studies of cloud computing focus on its commercial benefits, but the idea can also be
applied successfully to non-profit organizations and for social benefit. In developing countries
like India, cloud computing can bring about a revolution in low-cost computing with greater
efficiency, availability and reliability. E-governance has recently started to flourish in these
countries, and experts envision that utility-based computing has a great future in governance.
Cloud computing can also be applied to the development of rural life in India by building
information hubs that give the people concerned greater access to required information and enable
them to share their experiences to build new knowledge bases.

6.1 E-Governance
E-Governance is an interface between government and the public; it can also be an interface between
two governments, or between government and business organizations. The objectives are generally to
improve efficiency and effectiveness in serving public demand and to save costs through online
services. This requires the government to be willing to decentralize responsibilities and processes,
and to start placing faith in electronic and Internet systems. E-government is a form of e-business
in governance: it refers to the processes and structures needed to deliver electronic services to
the public (citizens and businesses), collaborate with business partners, and conduct electronic
transactions within an organizational entity. E-governance can be greatly improved by utility
computing.

Impact of Technology in E-governance –

• 24/7 Service Model – Systems and services require high availability, making citizens feel that
the government is always at their service.
• Need for Content – Web content should be regularly updated, and the information provided to the
public should be sufficient. The respective departments should be responsible for providing the
information.
• Human Resources – Building these IT-skilled resources requires properly trained personnel. This
would enable the government to compete with other private organizations.
• Security – Sensitive government data must be highly secured; security policies must be carefully
designed and maintained.
• Privacy – Personal data should be given sufficient privacy. This can be a difficult issue when
data is stored across different departments and computer systems.

Recently the Government of India has taken the initiative and launched several projects, such as
Gyan Ganga and e-Gram, to give people better mechanisms of governance using IT as a tool and to
leverage the strength of connectivity. Gyan Ganga is an initiative of the Government of Gujarat to
ensure wireless Internet connectivity to 18,000 villages in Gujarat. The project is based on
corDECT, a Wireless Local Loop (WLL) technology. Rural citizens are provided with facilities such
as email, Internet browsing, land records, rural job opportunities, the status of various
government projects, and information about local weather and soil; they can also consult experts
to increase agricultural productivity and get answers to their queries about veterinary issues and
health care. Gyan Ganga also offers online registration of various applications, an online public
grievance form, information on government projects, and more.

Another Government of India initiative is e-Gram, the computerization of local gram panchayats, now
implemented in the villages of Gujarat. E-Gram provides rural people with services like birth and
death certification, property assessment, tax collection, and accounts of gram panchayats.

Why traditional systems are not sufficient – Maintaining traditional systems for e-governance has
many disadvantages:
• Application life cycle management – Applications are generally developed in an evolutionary
manner; changes must be kept consistent across all departments, and upgrades must be performed
while the system is functioning.
• Software licensing – Software must be licensed for each and every departmental terminal, which
incurs a large establishment cost.
• Scalability – Traditional centralized systems have an inherent weakness with respect to
scalability.
• Security – This is the most crucial aspect of e-governance. Government information is highly
sensitive and must be strongly secured; with traditional systems, every system across every
department must provide sufficient security on its own.
Cloud computing, by contrast, addresses these concerns:
• Scalability – Cloud computing supports scalability by design. The data centers have enough
computing and storage capacity to cope with spikes in demand.
• Modifiability – Applications hosted in the cloud can be modified internally without much
disruption to end users. A change made in one place is reflected everywhere, consistently.
• Data logging – This central facility can be very useful for locating faults in the system.
Logging can also be used to detect unauthorized usage or compromise.
• Availability – Cloud services are well known for high availability: if any data center goes down
for any reason, a hot backup is ready to take over immediately. Virtual machine migration is used
to great extent in this situation to balance load when some systems fail.
• Reliability – Replication and migration of instances across data centers make the reliability of
the system very high in the cloud scenario.
• Physical disaster recovery – Backup policies, inherent to the cloud system, are very useful for
surviving physical disasters. Data is stored in different physical locations so that a hot backup
can be provided whenever needed.
• Policy management – Policies can be managed in a centralized fashion, which helps roll out
government policies readily, unlike the present scenario.
• Legacy software – Already-developed software can sometimes be moved to the cloud with only minor
changes, so the government does not incur the cost of redeveloping applications it already has.
• Pay model – The cloud providers' pay-as-you-use model enables the customer (the government) to
reduce deployment cost and control usage.
• Reduced power consumption – Adoption of the cloud reduces power consumption in individual
offices, concentrating power usage in the data center; even that is not the government's concern,
since the data centers are operated by the third party providing the cloud services.
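The pay-as-you-use point can be made concrete with a toy bill calculation. The rates below are invented placeholders for illustration, not any provider's actual pricing; the point is that cost scales with measured usage rather than with up-front capacity.

```python
def monthly_bill(hours_used, gb_stored, rate_per_hour=0.10, rate_per_gb=0.15):
    """Illustrative pay-as-you-use bill: charges scale with actual
    compute hours and storage consumed (placeholder rates)."""
    return round(hours_used * rate_per_hour + gb_stored * rate_per_gb, 2)

# A department that used 100 compute-hours and stored 20 GB this month.
print(monthly_bill(100, 20))  # 100*0.10 + 20*0.15 = 13.0
```

An idle month costs nothing, which is exactly the contrast with licensing and provisioning hardware per department regardless of use.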

6.2 Rural development
In the context of rural development, cloud computing can also be applied successfully thanks to its
centralized storage and computing facilities and its utility-based pay model. About 72.2% of the
total Indian population resides in rural areas. According to a survey conducted by the "Hole in
the Wall" project, computer literacy among boys and girls of age group 8-14 in rural areas varies
across the regions of India; it is 40-50% in most regions. So computer literacy is not a major
barrier in rural India, and the survey also showed that the learning rate for computer skills is
quite high. Agriculture is India's biggest employment source, accounting for 52% of employment,
and the agricultural sector contributes about 20% of the country's total GDP. It is therefore very
important to make a serious attempt to develop rural India. Rural development can take the form of
education, agriculture, health, culture or other fields. Nowadays most villages have some access to
electricity and cellular phones, so establishing computer systems is technically feasible. But
attitudes have not changed much, which is why personal computers have not spread significantly in
villages. We think this growth rate can be improved if the computing system is really cheap, easy
to operate with a minimum level of knowledge, requires no upfront commitment and, most essentially,
helps enhance people's lifestyle. The main aim of such a system is to give people in rural areas
access to recent technology, to enhance their standard of living with the help of the computing
system, and thereby to serve the greater goal of developing the nation.

Several issues stand in the way of a traditional approach:
• Availability – Many services, such as health care, must be available at all times. These
availability issues are not well handled by traditional web services: a service is typically served
by a single server, so server downtime is always a possibility.
• The villagers have to own a PC – To use traditional web services over the Internet, villagers
would need to own PCs, increasing their investment. Technical experts would then be needed for
software and hardware installation and maintenance, but such experts are scarce in remote villages,
and upgrading software or hardware would be a problem both economically and technically.

Cloud computing can address these problems. We now discuss the technological and economic
advantages of using the cloud.
• No upfront commitment – Villagers need not invest heavily in computing systems up front; instead
they can use very low cost terminals with basic I/O functionality and network access.
• No maintenance issues – Users need not be maintenance experts. This overcomes the scarcity of
technical experts in remote villages, since maintenance is handled by the cloud provider.
• Upgraded versions of hardware and software – Users always work with the upgraded versions of
software and hardware maintained by the cloud provider, which reduces the cost of upgrades.
• On-demand resource allocation – Virtual resources can be extended as needed; if a user needs more
resources, they are provided on demand.

• Utility computing model – The economic model used by the cloud is pay-as-you-use, which lets
users control what they pay. By adopting the cloud computing model, improvements to the current
system can bring about social as well as economic progress in rural India.
• Share knowledge and build knowledge bases – Most agriculture-related issues are local and cannot
be solved by general expertise. It often happens that the so-called experts are not the right
people to answer a problem; local farmers understand it better, and in such situations a better
solution can be given by local experts. If these local experts can access a common space to share
their knowledge, others eventually learn of the solution. Thus a knowledge base can be built
representing the issues of that local scenario, much like building a Wikipedia.
• Health and medical services – In developing countries like India, one concern of rural health
care is that, despite the best intentions of both medical professionals and patients, communication
among the interested parties remains a practical challenge. Cloud computing can help solve this
issue. Consultation among doctors around the world makes sharing of knowledge possible and takes
telemedicine to the next level, creating a network that goes beyond one-to-one, patient-to-patient,
patient-to-doctor or doctor-to-doctor interactions. In this way a patient suffering from a
particular disease can be better treated by consulting doctors within the region and beyond who
may have more experience with such a case.
• Education in remote areas – Education in rural areas can be enhanced through distance education.
It can be provided in different languages and for different curricula with the aid of e-learning
components. Students can be encouraged to build their own multimedia presentations, which can be
hosted in the cloud. This approach encourages students to concentrate more on learning and
presenting the material, and also builds up knowledge in the cloud for other students to refer to.
Cloud computing makes this possible with greater reliability and availability.
• Government decision making – By looking at the common knowledge base, the government can gain a
fair understanding of the local situation and take appropriate steps.
• Access to information hubs – The government can provide relevant information such as land
revenue data, weather data and soil information through these cloud services to the people
concerned.

All of these things are possible with the right initiative, though they may require customizing the
original cloud services. Some generally less popular services, such as Desktop as a Service, which
essentially provides users with a virtual desktop environment, may make sense in this scenario.
Deployment of cloud services in rural areas does, however, have some associated issues.

7 Conclusion
Cloud computing is a newly developing paradigm of distributed computing. Virtualization in
combination with the utility computing model can make a difference in the IT industry as well as
in a social perspective. Though cloud computing is still in its infancy, it is clearly gaining
momentum. Organizations like Google, Yahoo and Amazon are already providing cloud services, and
products like Google App-Engine, Amazon EC2 and Windows Azure are capturing the market with their
ease of use, availability and utility computing model. Users don't have to worry about the
intricacies of distributed programming, as these are taken care of by the cloud providers; they
can devote more effort to their own domain work rather than to administration. Business
organizations are also showing increasing interest in using cloud services. There are many open
research issues in this domain, such as security in the cloud, virtual machine migration, and
dealing with large data for analysis purposes. In developing countries like India, cloud computing
can be applied to e-governance and rural development with great success. As we have seen, some
crucial issues must be solved before cloud computing can be successfully deployed for these social
purposes, but they can be addressed by detailed study of the subject.

8 References
[1] Google App Engine. http://code.google.com/appengine/.
[2] Cloud computing for e-governance. White paper, IIIT-Hyderabad, January 2010. Available online
(13 pages).
[3] Demographics of India. http://en.wikipedia.org/wiki/Demographics_of_India, April 2010.
[4] Economy of India. http://en.wikipedia.org/wiki/Economy_of_India, April 2010.
[5] Michael Armbrust, Armando Fox, Rean Griffith, Anthony D. Joseph, Randy H. Katz, Andrew
Konwinski, Gunho Lee, David A. Patterson, Ariel Rabkin, Ion Stoica, and Matei Zaharia. Above the
clouds: A Berkeley view of cloud computing. Technical Report UCB/EECS-2009-28, EECS Department,
University of California, Berkeley, Feb 2009.
[6] F. M. Aymerich, G. Fenu, and S. Surcis. An approach to a cloud computing network. Applications
of Digital Information and Web Technologies, 2008 (ICADIWT 2008), pages 113-118, August 2008.
[7] M. Backus. E-governance in developing countries. IICD Research Brief, 1, 2001.
[8] Jaijit Bhattacharya and Sushant Vashistha. Utility computing-based framework for e-governance,
pages 303-309. ACM, New York, NY, USA, 2008.
[9] D. Chappell. Introducing Windows Azure. http://go.microsoft.com/, December 2009.
[10] Vidyanand Choudhary. Software as a service: Implications for investment in software
development. In HICSS '07: Proceedings of the 40th Annual Hawaii International Conference on
System Sciences, page 209a, Washington, DC, USA, 2007. IEEE Computer Society.
[11] Ritu Dangwal. Public computing, computer literacy and educational outcome: Children and
computers in rural India, pages 59-66. IOS Press, Amsterdam, The Netherlands, 2005.
[12] I. Foster, Yong Zhao, I. Raicu, and S. Lu. Cloud computing and grid computing 360-degree
compared. Technical report, Grid Computing Environments Workshop, 2008.
[13] Simson L. Garfinkel. An evaluation of Amazon's grid computing services: EC2, S3 and SQS.
Technical report, 2007.
[14] S. L. Garfinkel. Commodity grid computing with Amazon's S3 and EC2.
https://www.usenix.org/publications/login/2007-02/openpdfs/garfinkel.pdf, 2007.
[15] K. I. Juster. Cloud computing can close the development gap.
http://www.salesforce.com/assets/pdf/misc/IT-development-paper.pdf.
[16] Leonard Kleinrock. An Internet vision: the invisible global infrastructure. Ad Hoc Networks,
1(1):3-11, 2003.
[17] P. Kokil. SAP-LAP analysis: Gyan Ganga, E-Gram and Communication Information Centers (CIC).
http://www.iceg.net/2007/books/3/29_290_3.pdf.
[18] Ralf Lämmel. Google's MapReduce programming model - revisited. Sci. Comput. Program.,
70(1):1-30, 2008.
[19] R. Patra, S. Nedevschi, S. Surana, A. Sheth, L. Subramanian, and E. Brewer. WiLDNet: Design
and implementation of high performance WiFi based long distance networks. In USENIX NSDI, pages
87-100, 2007.
[20] Bhaskar Prasad Rimal, Eunmi Choi, and Ian Lumb. A taxonomy and survey of cloud computing
systems. Networked Computing and Advanced Information Management, International Conference on,
0:44-51, 2009.
[21] J. E. Smith and R. Nair. An overview of virtual machine architectures. Pages 1-20, October
2001. http://www.ece.wisc.edu/~jes/902/papers/intro.pdf.
[22] Persistent Systems. Google App Engine.
http://www.persistentsys.com/newpspl/pdf/CMS_1741820566_Google%20Apps%20Engine_WP_010909.pdf, 2009.

