Real World Challenges of Cloud Computing
1. Introduction
Cloud computing model
In the cloud deployment model, networking, platform, storage, and software infrastructure
are provided as services that scale up or down depending on demand. The cloud computing model has three main deployment models:
1.1 PRIVATE CLOUD
Private cloud is a term that some vendors have recently used to describe offerings that
emulate cloud computing on private networks. It is set up within an organization's internal
enterprise datacenter. In the private cloud, scalable resources and virtual applications
provided by the cloud vendor are pooled together and made available for cloud users to share and
use. It differs from the public cloud in that all the cloud resources and applications are
managed by the organization itself, similar to Intranet functionality. Utilization of the private
cloud can be much more secure than that of the public cloud because of its restricted internal
exposure. Only the organization and designated stakeholders may have access to operate on a
specific private cloud.
1.2 PUBLIC CLOUD
Public cloud describes cloud computing in the traditional mainstream sense, whereby
resources are dynamically provisioned on a fine-grained, self-service basis over the Internet,
via web applications/web services, from an off-site third-party provider who shares resources
and bills on a fine-grained utility computing basis. It is typically based on a pay-per-use
model, similar to a prepaid electricity metering system which is flexible enough to cater for
spikes in demand for cloud optimization. Public clouds are less secure than the other cloud
models because they place an additional burden of ensuring that all applications and data accessed
on the public cloud are not subjected to malicious attacks.
1.3 HYBRID CLOUD
Hybrid cloud is a private cloud linked to one or more external cloud services, centrally
managed, provisioned as a single unit, and circumscribed by a secure network. It provides
virtual IT solutions through a mix of both public and private clouds. Hybrid Cloud provides
more secure control of the data and applications and allows various parties to access
information over the Internet. It also has an open architecture that allows interfaces with other
management systems. Hybrid cloud can also describe configurations combining a local device,
such as a plug computer, with cloud services, or configurations combining virtual and physical,
co-located assets - for example, a mostly virtualized environment that requires physical servers,
routers, or other hardware such as a network appliance acting as a firewall or spam filter.
Cloud computing in these forms is being adopted across many industries, but it can also create
some major problems under rare circumstances; such issues and challenges of cloud computing are
characterized as "ghosts in the cloud".
2. Literature Survey
Several studies have been carried out relating to security issues in cloud computing. Gartner, for
example, identified seven security issues that need to be addressed before enterprises consider
switching to the cloud computing model: (i) privileged user access - information transmitted from
the client through the Internet poses a certain degree of risk because of issues of data ownership;
enterprises should spend time getting to know their providers and their regulations as much as
possible, assigning some trivial applications first to test the water; (ii) regulatory compliance -
clients are accountable for the security of their solution, and they can choose between providers
that allow themselves to be audited by third-party organizations that check levels of security and
providers that do not; (iii) data location - depending on contracts, some clients might never know
in what country or jurisdiction their data is located; (iv) data segregation - encrypted
information from multiple companies may be stored on the same hard disk, so a mechanism to separate
data should be deployed by the provider; (v) recovery - every provider should have a disaster
recovery protocol to protect user data; (vi) investigative support - if a client suspects faulty
activity by the provider, it may not have many legal ways to pursue an investigation; (vii)
long-term viability - the ability to retract a contract and all data if the current provider is
bought out by another firm.
The Cloud Computing Use Case Discussion Group discusses the different use case scenarios
and related requirements that may exist in the cloud model, considering use cases from different
perspectives including customers, developers and security engineers. ENISA investigated the
different security risks related to adopting cloud computing, along with the affected assets, the
risks' likelihood and impact, and the vulnerabilities in cloud computing that may lead to such
risks. In 2009, Balachandra et al. discussed security SLA specification and objectives related to
data location, segregation and data recovery. In 2010, Bernd et al. discussed the security
vulnerabilities existing in the cloud platform, grouping the possible vulnerabilities into
technology-related, cloud-characteristics-related and security-controls-related. Subashini et al.
discuss the security challenges of the cloud service delivery model, focusing on the SaaS model.
Ragovind et al. discussed the management of security in cloud computing, focusing on Gartner's list
of cloud security issues and the findings from the International Data Corporation enterprise panel.
Morsy et al. in 2010 investigated cloud computing problems from the perspectives of the cloud
architecture, the cloud's offered characteristics, cloud stakeholders, and the cloud service
delivery models. A recent survey by the Cloud Security Alliance (CSA) and IEEE indicates that
enterprises across sectors are eager to adopt cloud computing, but that security assurances are
needed both to accelerate cloud adoption on a wide scale and to respond to regulatory drivers. It
also details that cloud computing is shaping the future of IT, but the absence of a compliance
environment is having a dramatic impact on cloud computing growth. Although several studies have
been carried out relating to security issues in cloud computing, this work presents a detailed
analysis of the cloud computing security issues and challenges, focusing on the cloud computing
deployment types and the service delivery types.
Data security is another important research topic in cloud computing. Since service providers
do not have access to the physical security systems of data centers, they must depend on the
infrastructure provider for full data security. Even in a virtual private cloud environment, the
service provider can only specify security settings remotely, without knowing exactly whether they
are fully implemented. In this process, the infrastructure provider must reach the following
objectives:
(1) confidentiality, for secure data transfer and access, and
(2) auditability, for verifying whether the security settings of applications have been tampered
with or not.
Confidentiality is usually achieved using cryptographic protocols; unencrypted data in a local data
center is less secure than encrypted data placed into the cloud. Auditability can be achieved using
remote attestation techniques, and it can be added as an extra layer beyond the virtualized guest
operating system, so that a single logical layer maintains the software responsible for
confidentiality and auditability. Remote attestation typically requires a trusted platform module
(TPM) to generate a non-forgeable system summary as proof of system security. In a virtualized
environment, however, virtual machines can dynamically migrate from one location to another, so it
is very difficult to build such a trust mechanism into every architectural layer of the cloud; VM
migration should take place only if both the source and destination servers are trusted.
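As an illustration of the confidentiality objective above, the following is a minimal sketch of client-side encryption before data is placed in cloud storage, so that the infrastructure provider only ever handles ciphertext. It assumes the third-party Python `cryptography` package; the `upload_to_cloud` function is a hypothetical stand-in for any provider's object-storage call.

```python
# Minimal sketch: encrypt data on the client before placing it in cloud storage,
# so the infrastructure provider only ever sees ciphertext.
# Assumes the third-party `cryptography` package; upload_to_cloud() is a
# hypothetical stand-in for a real provider's object-storage API.
from cryptography.fernet import Fernet

def upload_to_cloud(name: str, blob: bytes) -> None:
    # Placeholder for a real provider call (e.g. an object-store PUT).
    print(f"uploading {name}: {len(blob)} bytes of ciphertext")

key = Fernet.generate_key()          # kept by the data owner, never sent to the cloud
cipher = Fernet(key)

record = b"customer ledger, Q3"
token = cipher.encrypt(record)       # ciphertext is what gets stored in the cloud
upload_to_cloud("ledger-q3", token)

# Later, after downloading the object again:
assert cipher.decrypt(token) == record
```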
d) Launch pad for brute force and other attacks
There have also been suggestions that the virtualized infrastructure can be used as a
launching pad for new attacks. A security consultant recently suggested that it may be
possible to abuse cloud computing services to launch a brute force attack (a strategy used to
break encrypted data by trying all possible decryption key or password combinations) on
various types of passwords. Using Amazon EC2 as an example, the consultant estimated that
based on the hourly fees Amazon charges for its EC2 web service, it would cost more than
$1.5m to brute force a 12-character password containing nothing more than lower-case letters
a through z; an 11-character code costs less than $60,000 to crack, and a 10-letter phrase
costs less than $2,300 (Goodin 2009: np). Although it is still relatively expensive to perform
brute-force online password-guessing attacks (also known as online dictionary attacks), this
could have broad implications for systems using password-based authentication, and it may not
take long for attackers to devise more practical and economical ways of mounting such attacks.
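The quoted figures can be understood with simple back-of-envelope arithmetic: each additional lower-case character multiplies the keyspace, and hence the rental cost, by 26. The sketch below redoes that arithmetic with assumed, purely illustrative values for guesses per instance-hour and price per hour, so its outputs will not match the quoted dollar amounts exactly.

```python
# Back-of-envelope sketch of the brute-force cost argument above.
# guesses_per_hour and price_per_hour are illustrative assumptions,
# not actual EC2 performance or pricing figures.
def brute_force_cost(length: int, alphabet: int = 26,
                     guesses_per_hour: float = 1e9,
                     price_per_hour: float = 0.10) -> float:
    """Estimated cost (USD) to try every password of the given length."""
    keyspace = alphabet ** length
    instance_hours = keyspace / guesses_per_hour
    return instance_hours * price_per_hour

for n in (10, 11, 12):
    print(f"{n}-char lower-case password: ~${brute_force_cost(n):,.0f}")
```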
e) Rogue clouds
Just like entrepreneurs, cyber criminals and organised crime groups are always on the lookout
for new markets and with the rise of cloud computing, a new sector for exploitation now
exists. Rogue cloud service providers based in jurisdictions with lax cybercrime legislation
can provide confidential hosting and data storage services for a usually steep fee. Such
services could potentially be abused by organised crime groups to store and distribute
criminal data (eg child abuse materials for commercial purposes) to avoid the scrutiny of law
enforcement agencies. Hosting confidential business data with cloud service providers also carries
risks: in one reported case, the FBI raided a data centre and seized servers as part of a criminal
investigation (Lemos 2009: np). This had the unintended consequence of disrupting the continuity
of businesses whose data and information were hosted on the seized hardware. For LiquidMotors,
a company that provides inventory management to car dealers, the seized servers held its client
data and hosted its managed inventory services. The FBI seizure of the servers in the data center
rack effectively shut down the company, which filed a lawsuit against the FBI the same day
to get its data back (Lemos 2009: np). While the above example may be an isolated case, it
raised concerns about unauthorised access to seized data not related to the warrant, which can
result in the unintended disclosure of data to unwanted parties, particularly in authoritarian
countries.
f) Denial of service attacks
There have been a number of reported
incidents of cloud services being taken offline due to DDoS attacks (see Metz 2009).
Although DDoS attacks are not new, the cloud computing environment presents a new attack
vector that may have a more widespread impact on internet users. The security measures
adopted by different cloud service providers vary. If a cybercriminal can identify the
provider whose vulnerabilities are the easiest to exploit, then this entity becomes a highly
visible target. The lack of security associated with this single entity threatens the entire cloud
in which it resides (Kaufman 2009: 63).
g) Attacks targeting shared-tenancy environments
A virtual machine (VM) is a software implementation of a computer that runs its own
operating system and applications as if it were a physical machine (VMWare 2009). Multiple
VMs can concurrently run different software applications on different operating system
environments on a single physical machine. This reduces hardware costs and space
requirements. In a shared-tenancy cloud computing environment, data from different clients
can be hosted on separate VMs but reside on a single physical machine. This provides
maximum flexibility. Software applications running in one VM should not be able to impact
or influence software running in another VM. An individual VM should be unaware of the
other VMs running in the environment as all actions are confined to its own address space. In
a recent study, a team of computer scientists from the University of California, San Diego
and Massachusetts Institute of Technology examined the widely-used Amazon EC2 services.
They found that it is possible to map the internal cloud infrastructure, identify where a
particular target VM is likely to reside, and then instantiate new VMs until one is placed
co-resident with the target (Ristenpart et al. 2009: 199). This demonstrated that the research
team were able to load their eavesdropping software onto the same servers hosting targeted
websites (Hardesty 2009). By identifying the target VMs, attackers can potentially monitor
the cache (a small allotment of high-speed memory used to store frequently-used
information) in order to steal data hosted on the same
physical machine (Hardesty 2009). Such an attack is also known as a side-channel attack. The
findings from this research may only be a proof-of-concept at this stage, but it raises concerns
about the possibility of cloud computing servers being a central point of vulnerability that can
be criminally exploited. The Cloud Security Alliance, for example, listed this as one of the
top threats to cloud computing: "Attacks have surfaced in recent years that target the shared
technology inside Cloud Computing environments. Disk partitions, CPU caches, GPUs, and
other shared elements were never designed for strong compartmentalization. As a result,
attackers focus on how to impact the operations of other cloud customers, and how to gain
unauthorized access to data" (Cloud Security Alliance 2010: 11).
h) VM-based malware
Vulnerabilities in VMs can be exploited by malicious code (malware) such as VM-based
rootkits designed to infect both client and server machines in cloud services. Rootkits are
cloaking technologies usually employed by other malware programs to abuse compromised
systems by hiding files, registry keys and other operating system objects from diagnostic,
antivirus and security programs. For example, in April 2009, a security researcher pointed out
how a critical vulnerability in VMware's VM display function could be exploited to run
malware, allowing an attacker to read and write memory on the host operating system
(Keizer 2009: np). VM-based rootkits, as pointed out by Price (2008: 27), could be used by
attackers to gain complete control of the underlying OS without the compromised OS being
aware of their existence; they are especially dangerous because they also control all hardware
interfaces. Once VM-based rootkits are installed on a machine, they can view
keystrokes, network packets, disk state, and memory state, while the compromised OS
remains oblivious.
Software stacks have improved interoperability between platforms, but customers still find it
difficult to extract their programs and data from one location to run them in another. Some
organizations do not opt for cloud computing because they are concerned about the difficulty of
extracting their data from a cloud. Customer lock-in may seem attractive to cloud computing
providers, but cloud computing users are worried about price increases, reliability problems, or
even providers going out of business. SaaS developers could deploy their services and data on
multiple cloud computing providers so that the failure of a single company does not affect
customer data; the fear is that this would drive down cloud prices and flatten profits. Two
observations relieve this concern. First, quality matters as well as price, so customers will not
necessarily flock to the lowest-cost service. Second, concerning the data lock-in objection,
standardization of APIs would enable a new usage model in which a private cloud and a public cloud
share the same software infrastructure. Such an option enables "surge computing", in which extra
tasks that cannot easily be run in the private cloud due to temporarily heavy workloads are run in
the public cloud instead.
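As a rough illustration of the suggestion that SaaS developers spread services and data across multiple providers, the sketch below replicates each object to more than one store behind a common interface. Both provider classes are hypothetical in-memory stand-ins for real provider SDKs, not actual APIs.

```python
# Sketch of the multi-provider idea above: write each object to more than one
# cloud so the failure (or disappearance) of a single provider does not take
# the customer's data with it. ProviderA/ProviderB are hypothetical stand-ins.
from typing import Protocol

class ObjectStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class ProviderA:
    def __init__(self): self._data = {}
    def put(self, key, data): self._data[key] = data
    def get(self, key): return self._data[key]

class ProviderB:
    def __init__(self): self._data = {}
    def put(self, key, data): self._data[key] = data
    def get(self, key): return self._data[key]

class ReplicatedStore:
    """Writes go to every provider; reads fall back if one is unavailable."""
    def __init__(self, stores: list[ObjectStore]):
        self.stores = stores

    def put(self, key: str, data: bytes) -> None:
        for s in self.stores:
            s.put(key, data)

    def get(self, key: str) -> bytes:
        for s in self.stores:
            try:
                return s.get(key)
            except Exception:
                continue                    # try the next provider
        raise KeyError(key)

store = ReplicatedStore([ProviderA(), ProviderB()])
store.put("invoice-42", b"...")
print(store.get("invoice-42"))
```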
b) Data Transfer Bottlenecks
Applications continue to become more data-intensive. If applications are moved across the
boundaries of clouds, data placement and transport may become complicated, so cloud providers and
users have to think about minimizing costs based on the traffic and the implications of placement
at each level of the system. One way to overcome the high cost of bandwidth is to ship disks.
Another is to keep the data in the cloud: if the data is already in the cloud, moving it may not be
a bottleneck when enabling a new service. If archived data is in the cloud, it becomes possible to
sell cloud computing cycles along with new services, such as creating searchable indices of all
your archival data, or performing image recognition on all your archived photos to group the
images according to who appears in each one. A third opportunity is that the cost of network
bandwidth may fall more quickly than expected: one estimate is that two-thirds of the cost of WAN
bandwidth is the cost of the high-end routers, whereas only one-third is the fiber cost. If
centralized-control routers were deployed in the WAN instead of high-end distributed routers, WAN
costs could drop more quickly.
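The "ship disks" suggestion can be sanity-checked with simple arithmetic. The sketch below compares a WAN transfer against an overnight shipment under assumed, illustrative figures for dataset size, sustained throughput and courier time.

```python
# Rough comparison behind the "ship disks" suggestion above.
# All figures are illustrative assumptions, not provider quotes.
dataset_tb = 10                     # data to move into the cloud
wan_mbit_per_s = 100                # sustained WAN throughput
courier_hours = 24                  # overnight shipment of physical disks

dataset_bits = dataset_tb * 1e12 * 8
wan_hours = dataset_bits / (wan_mbit_per_s * 1e6) / 3600

print(f"WAN transfer:  ~{wan_hours:,.0f} hours")
print(f"Shipped disks: ~{courier_hours} hours")
```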
c) Traffic management and analysis
Analysis of data traffic is important for todays data centers. However, there are several
challenges for existing traffic measurement and analysis methods in Internet Service
Providers (ISPs) networks and enterprise to extend to data centers. Firstly, the density of links
is much higher than that in ISPs or enterprise networks, which makes the worst case scenario
for existing methods. Secondly, most existing methods can compute traffic matrices between
a few hundred end hosts, but even a modular data center can have several thousand servers.
Finally, existing methods usually assume some flow patterns that are reasonable in Internet
and enterprises networks, but the applications deployed on data centers significantly change
the traffic pattern. Further, there is tight coupling of applications use to network, computing,
and storage resources, than what is present in other settings. Currently, the work on
measurement and analysis of data center traffic is very less. Greenberg et al. report data
center traffic characteristics on flow sizes and concurrent flows, and use these to guide
network infrastructure design. Benson et al. perform a complementary study of traffic at the
edges of a data center by examining SNMP traces from routers.
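To make the notion of a traffic matrix concrete, the minimal sketch below aggregates per-flow byte counts into (source, destination) totals. The flow records are hypothetical; as noted above, real data-center measurement must scale to far denser link fabrics and thousands of servers.

```python
# Minimal illustration of a traffic matrix: aggregate per-flow byte counts
# into (source host, destination host) totals. Flow records are hypothetical.
from collections import defaultdict

flows = [
    ("10.0.0.1", "10.0.0.7", 12_000),   # (src, dst, bytes)
    ("10.0.0.1", "10.0.0.7", 3_500),
    ("10.0.0.2", "10.0.0.9", 80_000),
]

matrix = defaultdict(int)
for src, dst, nbytes in flows:
    matrix[(src, dst)] += nbytes

for (src, dst), total in sorted(matrix.items()):
    print(f"{src} -> {dst}: {total} bytes")
```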
d) Reputation Fate Sharing
Reputations may not suit well for virtualization. Reputation of the cloud as a whole may be
affected by one customers bad behavior. In hosted smaller ISPs, we are offering with some
cost is the trusted email services and with some experience in of this problem to create
reputation-guarding services. Another important concept in cloud computing providers legal
issue is transfer of legal liability and it maintain by customer.
In live VM migration, the workload's in-memory state must be transferred effectively from one
physical server to another; during the transfer, consistency must be maintained for applications
while taking account of the resources of the source and destination physical servers.
b) Server consolidation
In a cloud computing environment, server consolidation is an effective approach to minimizing
energy consumption while making the best use of resource utilization. Using live VM migration
technology, VMs residing on multiple under-utilized servers can be consolidated onto a single
server, and the remaining servers can be set to an energy-saving state. Server consolidation
should not hurt application performance. It is known that the resource usage of individual VMs
varies over time, and a change in the footprint of a VM on a server can result in resource
congestion; sharing of resources (i.e., disk I/O, bandwidth and memory cache) among VMs on a
server can also lead to congestion. Information about the fluctuations of VM footprints is
therefore useful for effective server consolidation, and when resource congestion occurs the
system must react quickly.
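A minimal sketch of the consolidation idea is given below: a first-fit-decreasing packing of VM demands onto as few servers as possible, leaving the rest free to enter an energy-saving state. The capacities, demands and headroom are assumptions, and a real consolidator would also track the fluctuating, multi-resource footprints described above.

```python
# First-fit-decreasing sketch of server consolidation: pack VM demands onto as
# few servers as possible so the rest can enter an energy-saving state.
# Capacities and demands are illustrative single-resource assumptions.
def consolidate(vm_demands, server_capacity=1.0, headroom=0.1):
    servers = []                                  # per-server used capacity
    placement = {}
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, used in enumerate(servers):
            if used + demand <= server_capacity - headroom:
                servers[i] += demand
                placement[vm] = i
                break
        else:                                     # no existing server fits
            servers.append(demand)
            placement[vm] = len(servers) - 1
    return placement, len(servers)

demands = {"vm1": 0.40, "vm2": 0.35, "vm3": 0.30, "vm4": 0.20, "vm5": 0.15}
placement, active = consolidate(demands)
print(placement, f"-> {active} active servers, the rest can sleep")
```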
c) Performance Unpredictability
Sharing I/O is complex in cloud computing: multiple virtual machines can share CPUs and main
memory easily, but I/O sharing is harder. Virtualizing I/O interrupts and channels is one
solution, along with improving operating system efficiency and architecture. Some technologies,
such as PCI Express, are critical to the cloud yet difficult to virtualize. Flash memory could be
an attractive alternative with reduced I/O interference: it can allow multiple virtual machines to
share many more I/O operations per second along with larger storage capacity, so that multiple
virtual machines with random I/O workloads can coexist on a single physical machine without
interference. Another unpredictability problem concerns the scheduling of virtual machines for
various classes of batch-processing programs, particularly for high-performance computing (HPC).
The complication in attracting HPC is not the use of clusters; the majority of parallel computing
today is done in huge clusters using the message-passing interface (MPI). The problem is that many
HPC applications need to ensure that all the threads of a program are running simultaneously, and
today's virtual machines and operating systems do not provide a programmer-visible way to ensure
this.
Gang scheduling can be used to overcome this obstacle in cloud computing.
d) Scalable Storage
Three important properties of cloud computing are infinite capacity on demand, no up-front cost,
and short-term usage. It remains an open research problem not only to create a storage system that
meets these expectations, but also to combine them with programmer expectations and with the cloud
advantage of scaling arbitrarily up and down on demand, with respect to resource organization for
high availability, data durability and scalability.
e) Bugs in Large-Scale Distributed Systems
Another challenging issue in cloud computing is removing errors in these large-scale distributed
systems. These bugs often cannot be reproduced in smaller configurations, so debugging has to be
done at scale in the production data centers. One opportunity may be the reliance on virtual
machines in cloud computing. Many SaaS providers developed their infrastructure without VMs,
either because they preceded the recent popularity of VMs or because they felt they could not
afford the performance cost of VMs. Since VMs are so important in utility computing, that level of
virtualization may make it possible to capture valuable information in ways that are unlikely
without VMs.
f) Scaling Quickly
Pay-as-you-go certainly applies to network bandwidth and storage, which are billed on the basis of
bytes used. Computation is slightly different, depending on the level of virtualization. Google
AppEngine automatically scales up and down in response to load, and users are charged by the
cycles used. AWS charges by the hour for the number of instances you occupy, whether your machine
is idle or not. Another opportunity is to automatically scale quickly up and down in response to
load in order to save money without violating service level agreements; a further reason for
scaling is to conserve resources as well as money, since an idle computer uses about two-thirds of
the power of a busy one. Datacenters are currently receiving a great deal of negative attention
for their environmental impact, but careful use of resources could reduce that impact. Cloud
computing providers already have low overhead and careful accounting of resource consumption. By
imposing per-hour and per-byte costs, utility computing encourages programmers to pay attention to
efficiency, exposes development inefficiencies, and allows more direct measurement of operational
costs.
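The "scale quickly up and down in response to load" idea can be expressed as a simple threshold policy, sketched below. The thresholds, bounds and utilization figures are assumptions; a production policy would also account for SLAs and instance start-up latency.

```python
# Toy threshold-based autoscaling rule, illustrating the "scale up and down in
# response to load" idea above. Thresholds and limits are assumptions.
def desired_instances(current: int, avg_utilization: float,
                      low: float = 0.3, high: float = 0.7,
                      min_n: int = 1, max_n: int = 20) -> int:
    if avg_utilization > high:        # overloaded: add capacity
        current += 1
    elif avg_utilization < low:       # mostly idle: release capacity (and cost)
        current -= 1
    return max(min_n, min(max_n, current))

n = 4
for util in (0.82, 0.75, 0.55, 0.20, 0.15):
    n = desired_instances(n, util)
    print(f"utilization {util:.2f} -> run {n} instances")
```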
g) Latency
Latency is a research issue on the Internet and in the cloud alike. Performance in the cloud
ultimately means the performance of the result delivered to the client, and the latency a cloud
introduces need not be prohibitive: with smartly-written applications and an intelligently planned
infrastructure, operators can understand how and where their workloads are running and keep
latency under control. In future, cloud computing capacity and cloud-based applications will grow
rapidly, and latency pressures will grow with them; cloud latency must therefore be improved, and,
as with the desktop PC, the largest bottlenecks lie in memory and storage.
a) Energy management
Techniques such as slowing down CPU speeds and turning off idle hardware components have become
commonplace. Energy-aware job scheduling and server consolidation are two other ways to reduce
power consumption by turning off unused machines, and there is also current research on
energy-efficient network protocols and infrastructures. Achieving a good trade-off between energy
savings and application performance is a key challenge, and some researchers have recently worked
on solutions for performance and power management in a dynamic cloud environment.
b) Software frameworks
Cloud computing provides a compelling platform for hosting significant data-intensive
applications. Such applications leverage the MapReduce framework concept, for example Hadoop, for
scalable and fault-tolerant data processing. The performance and resource consumption of a
MapReduce job are highly dependent on the type of application, and Hadoop nodes may have
heterogeneous characteristics when they are allocated as virtual machines. Hence, it is possible
to optimize the performance and cost of a MapReduce application by carefully selecting its
configuration parameter values and designing more efficient scheduling algorithms; relieving
bottleneck resources can significantly improve the execution time of applications. The design
challenges include performance modeling of Hadoop jobs in all possible cases and adaptive
scheduling under dynamic conditions. Another approach is to make the MapReduce framework
energy-aware, for example by putting a Hadoop node into sleep mode after it has finished its work
while waiting for new assignments; both Hadoop and HDFS would need to be made energy-aware, and
researchers are still working on the trade-off between performance and energy awareness.
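For readers unfamiliar with the MapReduce model mentioned above, the following is a minimal in-process illustration of its map, shuffle and reduce phases. It is a sketch of the programming model only, not Hadoop itself.

```python
# Minimal in-process illustration of the MapReduce model mentioned above
# (map -> shuffle/group by key -> reduce). A sketch of the programming model,
# not Hadoop itself.
from collections import defaultdict

def map_phase(doc: str):
    for word in doc.split():
        yield word.lower(), 1                 # emit (key, value) pairs

def reduce_phase(key, values):
    return key, sum(values)                   # aggregate per key

docs = ["the cloud scales", "the cloud is elastic"]

groups = defaultdict(list)                    # shuffle: group values by key
for doc in docs:
    for key, value in map_phase(doc):
        groups[key].append(value)

counts = dict(reduce_phase(k, vs) for k, vs in groups.items())
print(counts)   # {'the': 2, 'cloud': 2, 'scales': 1, 'is': 1, 'elastic': 1}
```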
c) Novel cloud architectures
Commercial clouds are implemented as large data centers and operated in a centralized fashion.
This design achieves high manageability and economy of scale, but constructing large data centers
has limitations such as high initial investment and high energy expense. Some researchers suggest
that small data centers can be more advantageous than big data centers in many cases: a small data
center consumes less power, so it does not require a powerful and expensive cooling system, and
building many geographically distributed small data centers can be cheaper than building a few
large ones. Geo-diversity also benefits time-critical services such as content delivery and
interactive gaming. For example, Valancius et al. studied the feasibility of hosting
video-streaming services using application gateways. Another related line of research is on using
voluntary resources (i.e. resources donated by users) for hosting cloud applications. Clouds built
from a mixture of voluntary and dedicated resources are much cheaper to operate and are more
suitable for non-profit applications such as scientific computing, but this architecture
introduces design challenges such as frequent churn events and managing heterogeneous resources.
d) Software Licensing
Existing commercial software licenses usually restrict the computers on which the software can
run: users pay for the software up front and then pay an annual maintenance fee. This licensing
approach is a poor fit for cloud computing applications, so many cloud computing providers rely on
open source software. The primary opportunity for commercial software companies is to change their
licensing structure to better fit cloud computing, for example by selling products into cloud
computing through supported sales services. Pay-as-you-go may not fit the sales analysis used to
evaluate usefulness, which is based on one-time purchases; one solution is for cloud providers to
offer discounted plans for bulk use.
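The tension between pay-as-you-go and one-time licence sales can be illustrated with a simple break-even comparison. Every figure below is invented for illustration and does not reflect any real vendor's pricing.

```python
# Break-even sketch for the licensing discussion above: a perpetual licence
# with annual maintenance versus a metered pay-as-you-go rate.
# All numbers are invented illustrations, not real vendor prices.
licence_upfront = 10_000.0       # one-time purchase
maintenance_per_year = 2_000.0
pay_per_hour = 1.50              # metered price that bundles the software

def cost_licence(years: int) -> float:
    return licence_upfront + maintenance_per_year * years

def cost_metered(years: int, hours_per_year: float) -> float:
    return pay_per_hour * hours_per_year * years

for hours in (1_000, 4_000, 8_000):
    cheaper = "metered" if cost_metered(3, hours) < cost_licence(3) else "licence"
    print(f"{hours} h/year over 3 years -> {cheaper} is cheaper")
```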
e) Client incomprehension
We are probably past the days when people thought that clouds were just big clusters of servers,
but that doesn't mean we are free of ignorance, even as the cloud moves forward. There are many
misunderstandings about how private and public clouds work together, how virtualization and cloud
computing overlap, how to move from one kind of infrastructure to another, and so on. A good way
to clear these up is to present users and customers with real-world examples of what is possible
and why. This builds their understanding on actual work that has been done, rather than on
hypotheticals where they are left to fill in the blanks themselves.
f) Ad-hoc standards as the only real standards
Amazon EC2 is the biggest example of this. As convenient as it is to develop for the cloud using
EC2, one of the most common types of deployment, it is also something to be cautious about. On the
positive side, ad-hoc standards bootstrap adoption: look how quickly a whole culture of cloud
computing has sprung up around EC2. On the negative side, they leave much less space for
innovators to create something open, to let things break away from the ad-hoc standards and be
adopted on their own.
The elastic resource pool has made cost analysis a lot more complicated than for regular data
centers, which often calculate their cost based on consumption of static computing resources.
Moreover, an instantiated virtual machine has become the unit of cost analysis rather than the
underlying physical server. For SaaS cloud providers, the cost of developing multi-tenancy within
their offering can be very substantial; it includes redesign and redevelopment of software that
was originally written for single-tenancy, the cost of providing new features that allow intensive
customization, performance and security enhancement for concurrent user access, and dealing with
the complexities induced by these changes. Therefore, a strategic and viable charging model is
crucial for the profitability and sustainability of SaaS cloud providers.
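To make the point that the instantiated VM becomes the unit of cost analysis, the short sketch below totals a month of instance-hours and compares it with a statically costed set of dedicated servers. All prices and usage figures are assumptions.

```python
# Sketch of the point above: the instantiated VM (instance-hours), not the
# physical server, becomes the unit of cost analysis.
# Prices and usage figures are assumptions, not real provider rates.
vm_hours = {"web": 720, "batch": 150, "test": 40}    # instance-hours this month
price_per_vm_hour = 0.12

elastic_cost = sum(hours * price_per_vm_hour for hours in vm_hours.values())

# A regular data centre is often costed statically: an amortized server price,
# paid whether or not the machines are busy.
server_monthly_cost = 300.0
static_cost = 3 * server_monthly_cost                # three dedicated servers

print(f"cost by instance-hours: ${elastic_cost:.2f}")
print(f"cost by static servers: ${static_cost:.2f}")
```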
c) Service Level Agreement (SLA):
Although cloud consumers do not have control over the underlying computing resources, they do need
to ensure the quality, availability, reliability, and performance of these resources once they
have migrated their core business functions onto the entrusted cloud. Typically, these assurances
are provided through Service Level Agreements (SLAs) negotiated between providers and consumers.
The very first issue is the definition of SLA specifications at an appropriate level of
granularity, namely the trade-off between expressiveness and complexity, so that they can cover
most of the consumer expectations while remaining relatively simple to weight, verify, evaluate,
and enforce by the resource allocation mechanism on the cloud. In addition, different cloud
offerings (IaaS, PaaS, and SaaS) will need to define different SLA meta-specifications, which
raises a number of implementation problems for the cloud providers. Furthermore, advanced SLA
mechanisms need to constantly incorporate user feedback and customization features into the SLA
evaluation framework.
d) Cloud interoperability issue:
Currently, each cloud offering has its own way for cloud clients/applications/users to interact
with the cloud, leading to the "Hazy Cloud" phenomenon. This severely hinders the development of
cloud ecosystems by forcing vendor lock-in, which prevents users from choosing among alternative
vendors/offerings simultaneously in order to optimize resources at different levels within an
organization. More importantly, proprietary cloud APIs make it very difficult to integrate cloud
services with an organization's own existing legacy systems (e.g. an on-premise data centre for
highly interactive modeling applications in a pharmaceutical company). The primary goal of
interoperability is to realize seamless, fluid data flow across clouds and between cloud and local
applications. There are a number of levels at which interoperability is essential for cloud
computing. First, to optimize IT assets and computing resources, an organization often needs to
keep in-house the IT assets and capabilities associated with its core competencies while
outsourcing marginal functions and activities (e.g. the human resource system) to the cloud.
Second, more often than not, for the purpose of optimization, an organization may need to
outsource a number of marginal functions to cloud services offered by different vendors.
Standardization appears to be a good solution to address the interoperability issue. However, as
cloud computing is just starting to take off, the interoperability problem has not yet appeared on
the pressing agenda of major industry cloud vendors.
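Returning to the SLA discussion above, one concretely verifiable SLA term is a monthly availability objective. The sketch below evaluates measured downtime against such a target; the 99.9% objective and the downtime figures are illustrative assumptions.

```python
# Minimal sketch of one measurable SLA term from the discussion above:
# monthly availability checked against a target. The 99.9% objective and
# the downtime minutes are illustrative assumptions.
def availability(downtime_minutes: float, days_in_month: int = 30) -> float:
    total_minutes = days_in_month * 24 * 60
    return 1.0 - downtime_minutes / total_minutes

target = 0.999
for downtime in (10, 40, 120):
    met = availability(downtime) >= target
    print(f"{downtime} min downtime -> {availability(downtime):.4%}, "
          f"SLA {'met' if met else 'violated'}")
```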
Customers also fear the unknown: how and where their data will be stored and processed. These
fears of unknown service providers must be dealt with amicably and eliminated from their minds.
i) Dependency on service providers
For uninterrupted services and proper working it is necessary to engage a vendor with proper
infrastructure and technical expertise: an authorized vendor who can meet the security standards
set by your company's internal policies and by government agencies. While selecting the service
provider you must carefully read the service level agreement and understand their policies and
terms, the provisions for compensation in case of any outage, and any lock-in clauses.
j) Cultural obstacles
The attitude of a company's top management and its organisational culture can also become big
obstacles to the proper implementation of cloud computing. Top management is often unwilling to
store important company data somewhere it cannot control or directly access, and holds the
misconception that cloud computing puts the organisation at risk by leaking important details.
This mindset puts the organisation on a risk-averse footing, which makes it more reluctant to
migrate to a cloud solution.
k) Cost barrier
For efficient working of cloud computing you have to bear high bandwidth charges. Businesses can
cut hardware costs, but they have to spend a large amount on bandwidth. For smaller applications
this cost is not a big issue, but for large and complex applications it is a major concern:
transferring complex and data-intensive workloads over the network requires sufficient bandwidth.
This is a major obstacle for small organisations and restricts them from implementing cloud
technology in their business.
l) Lack of knowledge and expertise
Not every organisation has sufficient knowledge about the implementation of cloud solutions; many
lack the expert staff and tools needed to use cloud technology properly. Delivering the
information and selecting the right cloud is quite difficult without the right direction, and
teaching staff about the processes and tools of cloud computing is a big challenge in itself.
Requiring an organisation to shift its business to cloud-based technology without proper knowledge
is asking for disaster, and such organisations will never use this technology for their business
functions.
m) Consumption basis services charges
Cloud computing services are on-demand services, so it is difficult to define a specific cost for
a particular quantity of services. These fluctuations and price differences make the
implementation of cloud computing very difficult and complicated. It is not easy for a normal
business owner to predict consistent demand and the fluctuations that come with seasons and
various events, so it is hard to budget for a service that could consume several months of budget
in a few days of heavy use.
n) Alleviate the threats risk
It is very complicated to certify that a cloud service provider meets the standards for security
and threat risk, and not every organisation has sufficient mechanisms to mitigate these types of
threats. Organisations should observe and examine the threats very seriously. There are mainly two
types of threat: internal threats, within the organisation, and external threats from professional
hackers who seek out the important information of your business. These threats and security risks
put a check on implementing cloud solutions.
o) Unauthorised service providers
Cloud computing is a new concept for most business organisations. A normal businessman cannot
verify the genuineness of a service provider agency, and it is very difficult to check whether a
vendor meets the security standards or not without an ICT consultant to evaluate the vendor
against worldwide criteria. It is necessary to verify that the vendor has been operating this
business for a sufficient time without any negative record in the past, has continued the business
without any data-loss complaint, and has a number of satisfied clients. The market reputation of
the vendor should be unblemished.
p) Hacking of brand
Cloud computing carries some major risk factors, such as hacking. Some professional hackers are
able to break through efficient firewalls, compromise applications, and steal the sensitive
information of organisations. A cloud provider hosts numerous clients, and each can be affected by
actions taken against any one of them: when a threat reaches the main server it affects all the
other clients as well, as in distributed denial of service attacks, where server requests inundate
a provider from widely distributed computers.
There are, then, many problems regarding the execution of cloud computing in real life, but the
benefits of cloud computing far outweigh these hazards. Organisations should therefore find the
right solutions and avail themselves of the huge benefits of cloud technology; it can take a
business to new heights.
5. Conclusion
Cloud computing has emerged as a major technology for providing services over the Internet in an
easy and efficient way. The main reason for the potential success of cloud computing, and for the
vast interest from organizations throughout the world, is the broad category of services provided
with the cloud. Cloud computing is making utility computing a reality, yet current technology does
not provide everything cloud computing requires, and many challenges must be addressed by
researchers to make cloud computing work well in practice. Some of the challenges, such as
security and data issues, matter most to the customers who use the services provided by the cloud;
others, such as performance issues and energy management, are important for service providers
seeking to improve their services. In this paper we have identified the challenges in terms of
security issues, data challenges, performance challenges and other design challenges, and have
provided an insight into possible solutions to these problems, even though much work remains to be
done in this regard.