UNIT-4 Security in Clouds
Cloud Security Fundamentals: A fundamental security concept employed in many
cloud installations is known as the defense-in-depth strategy. This involves using layers of security
technologies and business practices to protect data and infrastructure against threats in multiple ways.
For the purposes of this unit, we will focus on considerations for securing public cloud
platforms, since the challenges of private cloud align more closely with traditional
challenges in cybersecurity.
Cloud platform providers are responsible for safeguarding their physical infrastructure
and the basic compute, network, and storage services they provide. However,
their customers retain most or all of the responsibility for protecting their applications,
monitoring activities, and ensuring that security tools are correctly deployed and
configured. This division of responsibility is known as the Shared Responsibility Model.
That means customers must cope with:
Traditional cybersecurity issues as they affect workloads in the cloud, including vulnerability
management, application security, social engineering, and incident detection and response.
New challenges related to cloud platforms, such as lack of visibility into security events in the
cloud, rapid changes in infrastructure, continuous delivery of applications, and new threats
targeting cloud administrative tools.
The benefits of cloud security
Amazon Web Services (AWS) offers a feature-rich environment for hosting and
managing workloads in the cloud. What are some of the ways that organizations can
strengthen cloud security for workloads hosted on AWS?
Security teams can use a vulnerability management solution to discover and assess EC2
instances and scan them for vulnerabilities, misconfigurations, and policy violations.
A dynamic application security testing (DAST) solution can test web apps to discover
vulnerabilities in the OWASP Top Ten and other attacks and potential violations of PCI
DSS and other regulations. When a DAST solution is integrated with DevOps tools like
Jenkins, security testing can be triggered at specified milestones in the development
process to ensure that vulnerabilities and violations are detected and fixed before code is
put into production.
To detect indicators of attacks and data breaches, a SIEM solution can be integrated with
the management and security services provided by Amazon. This includes access to logs
created by AWS CloudTrail and CloudWatch, as well as services like Virtual Private
Cloud (VPC) flow logs and Amazon Route 53 DNS logs.
A SIEM solution designed to work with cloud platforms can enrich this log data with
additional context from other sources (including endpoints, on-premises systems, and
other cloud platforms), flag indicators of compromise, and use advanced security
analytics to detect attacks early and remediate quickly.
Security alerts from AWS GuardDuty and other AWS services can be fed directly to a
SIEM, allowing the enterprise security team to quickly investigate and respond.
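To make the GuardDuty-to-SIEM integration concrete, here is a minimal Python sketch that pulls recent GuardDuty findings with boto3 and hands them to a forwarding helper. The forward_to_siem() function and the assumption that AWS credentials are already configured are illustrative placeholders, not part of any specific SIEM product.

import json
import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")

def fetch_guardduty_findings():
    # Collect findings from every GuardDuty detector in the region.
    findings = []
    for detector_id in guardduty.list_detectors()["DetectorIds"]:
        finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
        if finding_ids:
            findings.extend(
                guardduty.get_findings(DetectorId=detector_id,
                                       FindingIds=finding_ids[:50])["Findings"])
    return findings

def forward_to_siem(finding):
    # Placeholder: a real integration would POST to the SIEM's collector API
    # or write to a file watched by a log-shipping agent.
    print(json.dumps({"id": finding["Id"], "type": finding["Type"],
                      "severity": finding["Severity"]}))

for finding in fetch_guardduty_findings():
    forward_to_siem(finding)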
Microsoft Azure is a powerful, flexible, scalable platform for hosting workloads in the
cloud. How can organizations enhance security for workloads running on Azure?
A SIEM solution can work with Azure Event Hubs, which aggregate cloud logs from
important Azure services such as Azure Active Directory, Azure Monitor, Azure
Resource Manager (ARM), Azure Security Center, and Office 365. The SIEM can
obtain log data from Azure Event Hubs in real time, combine this log data with information
from endpoints, networks, on-premises data centers, and other cloud platforms, and
perform analyses that uncover phishing attacks, active malware, the use of compromised
credentials, lateral movement by attackers, and other evidence of attacks.
The Azure Security Center also generates alerts, but lacks the data enrichment, analysis,
and workflow features of a full SIEM. However, security teams can arrange to send
Security Center alerts directly to a SIEM solution to take advantage of those advanced
capabilities.
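As a sketch of the Event Hubs integration described above, the following Python snippet (using the azure-eventhub package, version 5) streams records from an Event Hub and hands each one to a forwarding helper. The connection string, hub name, and send_to_siem() function are placeholders you would replace with your own values.

from azure.eventhub import EventHubConsumerClient

CONNECTION_STR = "<event-hub-namespace-connection-string>"   # placeholder
EVENTHUB_NAME = "insights-operational-logs"                  # placeholder hub name

def send_to_siem(record: str) -> None:
    # Placeholder: a real pipeline would forward to the SIEM's ingestion endpoint.
    print(record)

def on_event(partition_context, event):
    # Each event body contains one or more Azure log records in JSON form.
    send_to_siem(event.body_as_str())

client = EventHubConsumerClient.from_connection_string(
    CONNECTION_STR, consumer_group="$Default", eventhub_name=EVENTHUB_NAME)

with client:
    # "-1" starts from the earliest available event; "@latest" would tail new events.
    client.receive(on_event=on_event, starting_position="-1")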
Cloud security is not just about providing security for separate cloud platforms
independently. Rather, it is a matter of capturing, correlating, analyzing, and acting on
all the security data generated by the organization and its cloud service providers.
For these reasons, it is essential that organizations use security solutions that provide
visibility and monitoring across their entire IT footprint, including multiple cloud
platforms and on-premises data centers.
There are many vulnerability scanners available in the market. They can be free, paid, or
open-source, and most of the free and open-source tools are available on GitHub. Deciding which
tool to use depends on a few factors, such as the vulnerability types covered, budget, and how
frequently the tool is updated.
1. Nikto2
Nikto2 is open-source vulnerability scanning software that focuses on web application
security. Nikto2 can check for around 6,700 potentially dangerous files that cause issues for
web servers and can report outdated server versions. On top of that, Nikto2 can alert on server
configuration issues and perform web server scans within minimal time.
Nikto2 doesn't offer countermeasures for the vulnerabilities it finds, nor does it provide risk
assessment features. However, Nikto2 is a frequently updated tool, which enables broader
coverage of vulnerabilities.
2. Netsparker
Netsparker is another web application vulnerability scanner with an automation feature for
finding vulnerabilities. This tool is capable of finding vulnerabilities in thousands of web
applications within a few hours.
Although it is a paid, enterprise-level vulnerability tool, it has many advanced features. It uses
crawling technology to find vulnerabilities by crawling through the application. Netsparker
can describe and suggest mitigation techniques for the vulnerabilities it finds. Security
solutions for advanced vulnerability assessment are also available.
3. OpenVAS
OpenVAS is a powerful vulnerability scanning tool that supports large-scale scans suitable
for organizations. You can use this tool for finding vulnerabilities not only in web
applications and web servers but also in databases, operating systems, networks, and virtual
machines.
OpenVAS receives updates daily, which broadens the vulnerability detection coverage. It
also helps in risk assessment and suggests countermeasures for the vulnerabilities detected.
4. W3AF
W3AF, the Web Application Attack and Audit Framework, is a free and open-source
vulnerability scanning tool for web applications. It provides a framework that helps secure
web applications by finding and exploiting their vulnerabilities. This tool is known for its
user-friendliness. Along with vulnerability scanning options, W3AF has exploitation
facilities that can be used for penetration testing as well.
Moreover, W3AF covers a broad collection of vulnerabilities. Organizations whose domains
are attacked frequently, especially through newly identified vulnerabilities, may find this tool useful.
5. Arachni
Arachni is also a dedicated vulnerability scanning tool for web applications. This tool covers a variety
of vulnerabilities and is updated regularly. Arachni provides facilities for risk assessment as
well as tips and countermeasures for the vulnerabilities it finds.
Arachni is a free and open-source vulnerability tool that supports Linux, Windows, and
macOS. Arachni also assists in penetration testing through its ability to keep up with newly
identified vulnerabilities.
6. Acunetix
Acunetix is a paid web application security scanner (an open-source version is also available)
with many functionalities. It can scan for around 6,500 vulnerabilities. In addition to web
applications, it can also find vulnerabilities in the network.
Acunetix provides the ability to automate your scans and is suitable for large-scale
organizations, as it can handle many devices. HSBC, NASA, and the US Air Force are a few
of the large organizations that use Acunetix for vulnerability testing.
7. Nmap
Nmap is one of the best-known free and open-source network scanning tools among
security professionals. Nmap uses probing techniques to discover hosts on the network and
to identify operating systems.
This feature helps in detecting vulnerabilities in single or multiple networks. If you are new
to vulnerability scanning, Nmap is a good place to start.
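As a small illustration of the host and OS discovery described above, the sketch below uses the third-party python-nmap wrapper (which drives the nmap binary, assumed to be installed). The subnet is a placeholder, and OS detection (-O) generally requires root privileges.

import nmap

scanner = nmap.PortScanner()
# SYN scan plus OS detection across an example subnet.
scanner.scan(hosts="192.168.1.0/24", arguments="-sS -O")

for host in scanner.all_hosts():
    print(host, scanner[host].state())
    for match in scanner[host].get("osmatch", []):
        print("  possible OS:", match["name"], "accuracy:", match["accuracy"])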
8. OpenSCAP
OpenSCAP is a framework of tools that assist in vulnerability scanning, vulnerability
assessment, vulnerability measurement, and the creation of security measures. OpenSCAP is
a free and open-source tool developed by the community, and it supports Linux platforms only.
The OpenSCAP framework supports vulnerability scanning on web applications, web servers,
databases, operating systems, networks, and virtual machines. Moreover, it provides a
facility for risk assessment and support for counteracting threats.
9. GoLismero
GoLismero is a free and open-source tool used for vulnerability scanning. GoLismero focuses
on finding vulnerabilities in web applications but can also scan for vulnerabilities in the
network. GoLismero is a convenient tool that can work with the results produced by other
vulnerability tools such as OpenVAS, combining those results and providing feedback.
GoLismero covers a wide range of vulnerabilities, including database and network
vulnerabilities, and it suggests countermeasures for the vulnerabilities it finds.
10. Intruder
Intruder is a paid vulnerability scanner specifically designed to scan cloud-based storage.
Intruder starts scanning as soon as a new vulnerability is released, and its scanning
mechanism is automated and constantly monitors for vulnerabilities.
Intruder is suitable for enterprise-level vulnerability scanning, as it can manage many devices.
In addition to monitoring cloud storage, Intruder can help identify network vulnerabilities as
well as provide quality reporting and suggestions.
Buyers tend to feel safer when making a transaction with your business, and you should find
that this drives your revenue up. With the patent-pending scanning technology, SiteInspector,
you will enjoy a new level of security.
12. Aircrack
Aircrack also is known as Aircrack-NG, is a set of tools used for assessing the WiFi network
security. These tools can also be utilized in network auditing, and support multiple OS’s such
as Linux, OS X, Solaris, NetBSD, Windows, and more.
The tool will focus on different areas of WiFi security, such as monitoring the packets and
data, testing drivers and cards, cracking, replying to attacks, etc. This tool allows you to
retrieve the lost keys by capturing the data packets.
The tool is excellent for saving time, cost, and effort when it comes to managing your
network security. It features automated vulnerability assessment for databases, web
applications, workstations, and servers. Businesses and organizations get complete
support for virtual environments, with features such as virtual app scanning and vCenter
integration.
It's also excellent for helping you identify missing updates and security patches. Use the tool
to install new security updates on your computers. Small to medium-sized businesses find the
tool most useful, and its features help save the security department money. You
won't need to consult a security expert to resolve the vulnerabilities the tool finds.
15. Nexpose
Nexpose is an open-source tool that you can use at no cost. Security experts regularly use
this tool for vulnerability scanning. All newly discovered vulnerabilities are included in the
Nexpose database thanks to the GitHub community. You can use this tool with the Metasploit
Framework, and you can rely on it to provide detailed scanning of your web application.
Before generating the report, it will take various elements into account.
Vulnerabilities are categorized by the tool according to their risk level and ranked from low
to high. It’s capable of scanning new devices, so your network remains secure. Nexpose is
updated each week, so you know it will find the latest hazards.
The tool covers an extensive range of operating systems, databases, applications, and other
devices across cloud infrastructure and virtual and physical networks. Millions of users trust
Nessus for vulnerability assessment and configuration checking.
It integrates with the National Vulnerability Database and has access to the most current
CVEs to identify vulnerabilities in your Cisco devices. It works with any Cisco device
running ASA, IOS, or Nexus OS.
To implement a vulnerability assessment, you should follow a systematic process such as the
one outlined below (a short automation sketch follows the steps).
Step 1 – Begin the process by documenting the scope, deciding which tool or tools to use, and
obtaining the necessary permissions from stakeholders.
Step 2 – Perform vulnerability scanning using the relevant tools. Make sure to save all the
outputs from those vulnerability tools.
Step 3 – Analyse the output and decide which of the identified vulnerabilities could be a possible
threat. You can also prioritize the threats and devise a strategy to mitigate them.
Step 4 – Make sure you document all the outcomes and prepare reports for stakeholders.
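The following Python sketch shows one way Steps 2-4 could be automated. The scanner command (nmap is assumed to be installed), the severity field, and the sample findings are illustrative assumptions rather than any particular tool's output format.

import subprocess
from datetime import datetime
from pathlib import Path

def run_scan(target: str, outfile: Path) -> None:
    # Step 2: run the chosen tool and keep its raw output on record.
    result = subprocess.run(["nmap", "-sV", "--script", "vuln", target],
                            capture_output=True, text=True, check=False)
    outfile.write_text(result.stdout)

def prioritize(findings):
    # Step 3: highest-severity findings first.
    return sorted(findings, key=lambda f: f.get("severity", 0), reverse=True)

def write_report(findings, report: Path) -> None:
    # Step 4: a short, stakeholder-readable summary.
    lines = ["Vulnerability assessment report - " + datetime.utcnow().strftime("%Y-%m-%d")]
    lines += ["{:>4}  {}".format(f["severity"], f["title"]) for f in findings]
    report.write_text("\n".join(lines))

if __name__ == "__main__":
    run_scan("10.0.0.5", Path("raw_scan.txt"))          # placeholder target
    sample = [{"title": "Missing OS patch", "severity": 8},
              {"title": "Outdated TLS configuration", "severity": 5}]
    write_report(prioritize(sample), Path("report.txt"))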
The figure below highlights the layers, within a cloud service, that are secured by the
provider versus the customer.
Prior to signing up with a provider, it is important to perform a gap analysis on the cloud
service capabilities. This exercise should benchmark the cloud platform’s maturity,
transparency, compliance with enterprise security standards (e.g. ISO 27001) and regulatory
standards such as PCI DSS, HIPAA and SOX. Cloud security maturity models can help
accelerate the migration strategy of applications to the cloud. The following are a set of
principles you can apply when evaluating a cloud service provider’s security maturity:
Disclosure of security policies, compliance and practices: The cloud service provider
should demonstrate compliance with industry-standard frameworks such as ISO 27001, SSAE 16
and the CSA Cloud Controls Matrix. The controls certified by the provider should match the
control expectations of your enterprise data protection standard. When cloud services
are certified for ISO 27001 or SSAE 16, the scope of controls should be disclosed. Clouds
that host regulated data must meet compliance requirements such as PCI DSS, Sarbanes-
Oxley and HIPAA.
Disclosure when mandated: The cloud service provider should disclose relevant data when
disclosure is imperative due to legal or regulatory needs.
Security architecture: The cloud service provider should disclose security architectural
details that either help or hinder security management as per the enterprise standard. For
example, the architecture of virtualization that guarantees isolation between tenants should be
disclosed.
Security Automation – The cloud service provider should support security automation by
publishing APIs (HTTP/SOAP) that support (see the sketch after this list):
Export and import of security event logs, change management logs, user entitlements
(privileges), user profiles, firewall policies, and access logs in an XML or enterprise log
standard format.
Continuous security monitoring, including support for emerging standards such as
CloudAudit.
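As a rough illustration of such an export API, the Python sketch below pulls security events from a hypothetical provider endpoint and writes them out as XML. The endpoint URL, token, and field names are assumptions; a real provider's published API would define its own format.

import requests
import xml.etree.ElementTree as ET

API_URL = "https://cloud-provider.example.com/api/v1/security-events"   # hypothetical
TOKEN = "<api-token>"                                                    # placeholder

events = requests.get(API_URL, headers={"Authorization": "Bearer " + TOKEN},
                      timeout=30).json()

root = ET.Element("securityEvents")
for event in events:
    item = ET.SubElement(root, "event", id=str(event.get("id", "")))
    ET.SubElement(item, "timestamp").text = str(event.get("timestamp", ""))
    ET.SubElement(item, "action").text = str(event.get("action", ""))

# Persist in an XML format the enterprise log tooling can import.
ET.ElementTree(root).write("security-events.xml", encoding="utf-8",
                           xml_declaration=True)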
Governance and Security responsibility: Governance and security management
responsibilities of the customer versus those of the cloud provider should be clearly
articulated.
Threats and security automation measures also vary by cloud service model. For example:
· Security automation: automatic provisioning of firewall policies, privileged accounts, DNS,
and application identity (see patterns below).
· PaaS (in addition to the above): privilege escalation via APIs, and authorization weaknesses
in platform services such as Message Queue, NoSQL, and Blob services.
The following are cloud security best practices to mitigate risks to cloud services:
Architect for security-as-a-service – Application deployments in the cloud involve
orchestration of multiple services including automation of DNS, load balancer, network QoS,
etc. Security automation falls in the same category which includes automation of firewall
policies between cloud security zones, provisioning of certificates (for SSL), virtual machine
system configuration, privileged accounts and log configuration. Application deployment
processes that depend on security processes such as firewall policy creation, certificate
provisioning, key distribution and application pen testing should be migrated to a self-service
model. This approach will eliminate human touch points and will enable a security as a
service scenario. Ultimately this will mitigate threats due to human errors, improve
operational efficiency and embed security controls into the cloud applications.
Implement sound identity, access management architecture and practice – Scalable
cloud bursting and elastic architecture will rely less on network based access controls and
warrant strong user access management architecture. Cloud access control architecture should
address all aspects of user and access management lifecycles for both end users and
privileged users – user provisioning & deprovisioning, authentication, federation,
authorization and auditing. A sound architecture will enable reusability of identity and access
services for all use cases in public, private and hybrid cloud models. It is good practice to
employ secure token services along with proper user and entitlement provisioning with audit
trails. Federation architecture is the first step to extending enterprise SSO to cloud services.
Refer to the Cloud Security Alliance guidance, Domain 12, for detailed guidance.
Leverage APIs to automate safeguards – Any new security service should be deployed
with an API (REST/SOAP) to enable automation. APIs can help automate firewall policies,
configuration hardening, and access control at the time of application deployment. This can
be implemented using open-source tools such as Puppet in conjunction with the API supplied
by the cloud service provider.
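For example, the boto3 sketch below adds an ingress rule to an AWS security group at deployment time, which is one way a firewall policy can be automated through the provider's API. The security group ID and CIDR range are placeholders; an equivalent step could also be driven from Puppet or another configuration-management tool.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow HTTPS into the web tier from the application subnet only.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",              # placeholder security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.0.0/16",
                      "Description": "HTTPS from app tier"}],
    }],
)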
Always encrypt or mask sensitive data – Today’s private cloud applications are candidates
for tomorrow’s public cloud deployment. Hence architect applications to encrypt all sensitive
data irrespective of the future operational model.
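A minimal sketch of application-level encryption, using the third-party cryptography package: the sensitive value is encrypted before it is written anywhere. Key storage and rotation (for example, in a key management service) are assumed to be handled separately.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, fetch this from a key manager
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"4111-1111-1111-1111")   # example sensitive value
print(ciphertext)
print(cipher.decrypt(ciphertext))                      # recovers the original bytes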
Do not rely on an IP address for authentication services – IP addresses in clouds are
ephemeral in nature so you cannot solely rely on them for enforcing network access control.
Employ certificates (self-signed or from a trusted CA) to enable SSL between services
deployed on cloud.
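The snippet below is a small illustration of that point: the client authenticates the peer service through its certificate (and optionally presents its own), rather than trusting an IP address. The URL and certificate paths are placeholders.

import requests

response = requests.get(
    "https://internal-service.example.com/health",           # placeholder service URL
    verify="/etc/pki/internal-ca.pem",                        # CA that signed the server cert
    cert=("/etc/pki/client.pem", "/etc/pki/client.key"),      # optional mutual TLS
    timeout=10,
)
print(response.status_code)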
Log, Log, Log – Applications should centrally log all security events that will help create an
end-to-end transaction view with non-repudiation characteristics. In the event of a security
incident, logs and audit trails are the only reliable data leveraged by forensic engineers to
investigate and understand how an application was exploited. Clouds are elastic and logs are
ephemeral hence it is critical to periodically migrate log files to a different cloud or to the
enterprise data center.
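A minimal central-logging sketch using Python's standard logging module and a syslog handler: security events leave the (ephemeral) instance immediately. The collector hostname is a placeholder; a log-shipping agent or a SIEM ingestion API would serve the same purpose.

import logging
import logging.handlers

logger = logging.getLogger("app.security")
logger.setLevel(logging.INFO)

# Forward events to a central collector instead of keeping them on the instance.
handler = logging.handlers.SysLogHandler(address=("logs.example.com", 514))
handler.setFormatter(logging.Formatter(
    "%(asctime)s app.security %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("user=alice action=login result=success src=203.0.113.7")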
Continuously monitor cloud services – Monitoring is an important function given that
prevention controls may not meet all the enterprise standards. Security monitoring should
leverage logs produced by cloud services, APIs and hosted cloud applications to perform
security event correlation. CloudAudit (cloudaudit.org) from the CSA can be leveraged towards
this mission.
Security architecture patterns serve as the North Star and can accelerate application migration
to clouds while managing the security risks. In addition, cloud security architecture patterns
should highlight the trust boundary between various services and components deployed at
cloud services. These patterns should also point out standard interfaces, security protocols
(SSL, TLS, IPSEC, LDAPS, SFTP, SSH, SCP, SAML, OAuth, Tacacs, OCSP, etc.) and
mechanisms available for authentication, token management, authorization, encryption
methods (hash, symmetric, asymmetric), encryption algorithms (Triple DES, 128-bit AES,
Blowfish, RSA, etc.), security event logging, source-of-truth for policies and user attributes
and coupling models (tight or loose). Finally, the patterns should be leveraged to create
security checklists that need to be automated by configuration management tools such as Puppet.
In general, patterns should highlight the following attributes (but not limited to) for each of
the security services consumed by the cloud application:
Logical location – Native to cloud service, in-house, third party cloud. The location may have
an implication on the performance, availability, firewall policy as well as governance of the
service.
Protocol – What protocol(s) are used to invoke the service? For example REST with X.509
certificates for service requests.
Service function – What is the function of the service? For example, encryption of the artifact,
logging, authentication, and machine fingerprinting.
Input/Output – What are the inputs, including methods to the controls, and outputs from the
security service? For example, Input = XML doc and Output = XML doc with encrypted
attributes.
Control description – What security control does the security service offer? For example,
protection of information confidentiality at rest, authentication of user and authentication of
application.
Actor – Who are the users of this service? For example, End point, End user, Enterprise
administrator, IT auditor and Architect.
Here is a subset of the cloud security architecture pattern published by open security
architecture group (opensecurityarchitecturegroup.org).
This pattern illustrates the actors (architect, end user, business manager, IT manager),
interacting with systems (end point, cloud, applications hosted on the cloud, security
services) and the controls employed to protect the actors and systems (access enforcement,
DoS protection, boundary protection, cryptographic key management, etc.). Let's look at the
details communicated by the pattern.
Identity management and access controls
Access controls help us restrict whom and what accesses our information resources, and
they possess four general functions: identity verification, authentication, authorization, and
accountability. These functions work together to grant access to resources and constrain
what a subject can do with them.
This section reviews each access control function and four approaches to access control and
role management, and takes a brief look at the future of access controls.
Identity management
Identity management consists of one or more processes to verify the identity of a
subject attempting to access an object. However, it does not provide 100 percent
assurance of the subject’s identity. Rather, it provides a level of probability of
assurance. The level of probability depends on the identity verification processes
in place and their general trustworthiness.
Identity defined
A good electronic identity is something that is verifiable and difficult to
reproduce. It must also be easy to use: an identity that is difficult to use results in an
identity, or a related service or application, that simply is not used.
On the other end of the identity effectiveness spectrum might be a solution that
provides nearly 100% probability of a subject's identity but is frustrating and
close to unusable. For example, requiring a combination of a personal certificate, token,
password, and a voice print to access a financial application is a waste of
resources and a path to security team unemployment. Identity verification
process cost and complexity should mirror the risk associated with unauthorized
access and still make sense at the completion of a cost-benefit analysis.
Alex's security director decided to take a middle path. The director believes
strong passwords cause more problems than they prevent: a view supported by
business management. He also believes that lowering biometric false rejection
rates is necessary to maintain employee acceptance and productivity
levels. Instead of using only one less-than-optimum authentication factor, he
decided to layer two: passwords (something Alex knows) and biometrics
(something Alex is).
This demonstrates the significant reduction in identity theft risk when using two
factors, even if each by itself is relatively weak. In our example, a six percent
probability of successful unauthorized access might be acceptable. Acceptance
should depend largely on other controls in place, including what data Alex can
access and what she can do with it. However, I would not feel comfortable with
a 20 or 30 percent probability of access control failure regardless of other
existing controls. The actual percentages are not as important as using the model
to understand and explain identity verification risk mitigation.
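For illustration only, assuming the two factors fail independently and using the 20 and 30 percent single-factor figures mentioned above:

# Probability that BOTH a weak password and a weak biometric check are defeated,
# assuming independent failures (illustrative numbers only).
p_password_defeated = 0.20
p_biometric_defeated = 0.30
print(p_password_defeated * p_biometric_defeated)   # 0.06, i.e. about six percent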
Authentication, authorization, and accountability (AAA)
Identity management has become a separate consideration for access control.
However, the three pillars that support authorized access still define the tools
and techniques necessary to manage who gets access to what and what they can
do when they get there: authentication, authorization, and accountability. See
Figure 11-3.
Authentication
Identity management and authentication are inseparable. Identity management
includes assigning and managing a subject’s identity. Authentication is the
process of verifying a subject’s identity at the point of object access.
Authorization
Once a resource or network verifies a subject’s identity, the process of
determining what objects that subject can access begins. Authorization identifies
what systems, network resources, etc. a subject can access. Related processes
also enforce least privilege, need-to-know, and separation of duties.
Authorization is further divided into coarse and fine dimensions.
Coarse authorization
Coarse authorization determines at a high-level whether a subject is authorized
to use or access an object. It does not determine what the subject can do or see
once access is granted.
Fine authorization
Fine authorization further refines subject access. Often embedded in the object
itself, this process enforces least privilege, need-to-know, and separation of
duties, as defined in Chapter 1.
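A small Python sketch of the coarse/fine distinction, with invented roles, resources, and permissions; a real system would pull these from a policy store or from the application itself.

def coarse_authorize(role: str, resource: str) -> bool:
    # Coarse authorization: may this role touch the resource at all?
    allowed = {"payroll_clerk": {"payroll_app"},
               "auditor": {"payroll_app", "audit_logs"}}
    return resource in allowed.get(role, set())

def fine_authorize(role: str, resource: str, action: str) -> bool:
    # Fine authorization: given access, what may the role actually do (least privilege)?
    permissions = {("payroll_clerk", "payroll_app"): {"read", "update"},
                   ("auditor", "payroll_app"): {"read"}}
    return action in permissions.get((role, resource), set())

print(coarse_authorize("auditor", "payroll_app"))           # True: may access the app
print(fine_authorize("auditor", "payroll_app", "update"))   # False: auditors are read-only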
Accountability
Each step from identity presentation through authentication and authorization is
logged. Further, the object or some external resource logs all activity between
the subject and object. The logs are stored for audits, sent to a log management
solution, etc. They provide insight into how well the access control process is
working: whether or not subjects abuse their access.
DAC is often a good choice for very small businesses without an IT staff. It
allows a handful of users to share information throughout their day, allowing for
smooth operation of the business. This approach when applied to 10 or 20
employees lacks the complexity and oversight challenges associated with using
DAC in organizations with hundreds or thousands of users.
DAC lacks account onboarding and termination controls necessary to enforce
least privilege, need-to-know, and separation of duties for a large number of
employees. Further, job changes can result in "permissions creep": the retention
of rights and permissions associated with a previous position that are
inappropriate for the new position.
Step 1. The day before a new employee reports for work, an HR clerk enters her
information into the HR application. Part of this process is assigning the new
employee a job code representing her role in the business.
Step 3. If automated, the onboarding process uses the appropriate role definition
to create accounts in each relevant application and network access solution. A
manual process might include adding the new hire’s account to one or more
Active Directory security groups and creating accounts in applications using
application-resident role profiles.
When the employee eventually leaves the company, the process is similar. The
HR extract shows her as terminated, and based on her role all access is removed.
For a job change, the common approach is to remove all access from the
previous role (as if the employee was terminated) and reassign her to a new role.
This removes all previous access and helps prevent permissions creep.
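A hedged sketch of this role-driven onboarding and termination flow. The job code, role definition, and provisioning actions are invented for illustration; in practice they would map to your directory, HR feed, and application APIs.

ROLE_DEFINITIONS = {
    "HR-1020": {"role": "accounts_payable_clerk",
                "groups": ["AD-Finance-Users"],
                "apps": {"erp": "ap_clerk_profile"}},
}

def onboard(employee: dict) -> None:
    # Use the job code from the HR extract to grant exactly the role's access.
    role = ROLE_DEFINITIONS[employee["job_code"]]
    for group in role["groups"]:
        print("add", employee["id"], "to directory group", group)
    for app, profile in role["apps"].items():
        print("create", app, "account for", employee["id"], "with profile", profile)

def terminate(employee: dict) -> None:
    # The same role definition drives removal, which prevents permissions creep.
    role = ROLE_DEFINITIONS[employee["job_code"]]
    for group in role["groups"]:
        print("remove", employee["id"], "from directory group", group)
    for app in role["apps"]:
        print("disable", app, "account for", employee["id"])

onboard({"id": "e1234", "job_code": "HR-1020"})
terminate({"id": "e1234", "job_code": "HR-1020"})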
A manager, for example, has the ability to approve her employees' hours
worked. However, when she attempts to approve her own hours, a rule built into
the application compares the employee record and the user, sees they are the
same, and temporarily removes the approval privilege. Note that this is dynamic and
occurs at the time a transaction is attempted. This is also sometimes called dynamic
RBAC.
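A tiny sketch of such a dynamic separation-of-duties rule; the record layout and identifiers are illustrative.

def can_approve(approver_id: str, timesheet: dict) -> bool:
    # Dynamic rule: an approver may never approve her own hours.
    if approver_id == timesheet["employee_id"]:
        return False
    return True   # other role and permission checks would follow here

print(can_approve("mgr-007", {"employee_id": "emp-442"}))   # True
print(can_approve("mgr-007", {"employee_id": "mgr-007"}))   # False: same person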
Identify data by using automatic and manual classification of files. For example,
you could tag data in file servers across the organization.
Control access to files by applying safety-net policies that use central access
policies. For example, you could define who can access health information
within the organization.
Audit access to files by using central audit policies for compliance reporting and
forensic analysis. For example, you could identify who accessed highly sensitive
information.
Apply Rights Management Services (RMS) protection by using automatic RMS
encryption for sensitive Microsoft Office documents. For example, you could
configure RMS to encrypt all documents that contain Health Insurance
Portability and Accountability Act (HIPAA) information.
When used in conjunction with other Microsoft or compatible third-party
solutions, data is protected based on where it is accessed, who is accessing it,
and its overall classification with a combination of discretionary, role-based,
mandatory, and rules-based access management.
With the rising popularity of cloud storage, and its ever-increasing versatility, it's no surprise
that enterprises have jumped on the cloud bandwagon. This powerful tool not only meets
storage and computing needs, but also helps save businesses thousands of dollars in IT
investments. This high demand for storage has nurtured the growth of a thriving cloud service
industry that offers affordable, easy-to-use and remotely-accessible cloud services.
But as with every kind of new technology, whether physical or virtual, IT experts have
warned of the inherent security risks associated with using cloud storage and file sharing
apps. In fact, security or the lack thereof has restricted universal adoption of cloud services.
The main issue is that enterprises have to entrust the security of their sensitive business data
to third-parties, who may or may not be working in their best interest. There are several risks
associated with the use of third-party cloud services, here are six of them to focus on:
With cloud services like Google Drive, Dropbox, and Microsoft Azure becoming a regular
part of business processes, enterprises have to deal with newer security issues such as loss of
control over sensitive data. The problem here is that when using third-party file sharing
services, the data is typically taken outside of the company’s IT environment, and that means
that the data’s privacy settings are beyond the control of the enterprise. And because most
cloud services are designed to encourage users to back up their data in real-time, a lot of data
that wasn’t meant to be shared can end up being viewed by unauthorized personnel as well.
The best way to avoid such a risk is by ensuring that your provider encrypts your files both in
storage and in transit, using 128- to 256-bit encryption.
DATA LEAKAGE
Most of the businesses that have held back from adopting the cloud have done so out of the fear
of having their data leaked. This fear stems from the fact that the cloud is a multi-user
environment, wherein all the resources are shared. It is also a third-party service, which
means that data is potentially at risk of being viewed or mishandled by the provider. It is only
human nature to doubt the capabilities of a third-party, which seems like an even bigger risk
when it comes to businesses and sensitive business data. There are also a number of external
threats that can lead to data leakage, including malicious hacks of cloud providers or
compromises of cloud user accounts. The best strategy is to depend on file encryption
and stronger passwords, instead of the cloud service provider themselves.
BYOD
Another emerging security risk of using cloud storage and FSS is that they have given
employees the ability to work on a Bring Your Own Device (BYOD) basis. And this trend is
set to increase as more employees prefer to use their own devices at work, either because
they’re more used to their interfaces or have higher specs than company-provided devices.
Overall, BYOD has the potential to be a win-win situation for employees and employers,
saving employers the expense of having to buy IT equipment for employees while giving
employees more flexibility. However, BYOD also brings significant security risks if it’s not
properly managed. Stolen, lost or misused devices can mean that a business’ sensitive data is
now in the hands of a third-party who could breach the company’s network and steal valuable
information. Discovering a data breach on an external (BYOD) asset is also more difficult, as
it is nearly impossible to track and monitor employee devices without the proper tools in
place.
SNOOPING
Files in the cloud are among the most susceptible to being hacked without security measures
in place. The fact that they are stored and transmitted over the internet is also a major risk
factor. And even if the cloud service provides encryption for files, data can still be
intercepted en route to its destination. The best form of security against this threat is
to ensure that data is encrypted and transmitted over a secure connection, as this will
prevent outsiders from accessing the cloud's metadata as well.
KEY MANAGEMENT
The management of cryptographic keys has always been a security risk for enterprises, but its
effects have been magnified after the introduction of the cloud, which is why key
management needs to be performed effectively. This can only be done by securing the key
management process from the start and by being inconspicuous, automated, and active. This
is the only way to ensure that sensitive data isn’t vulnerable when it is going to the cloud.
Additionally, keys need to be jointly-secured, and the retrieval process should be difficult and
tedious, to make sure that data can never be accessed without authorization.
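One common key-management pattern that fits these goals is envelope encryption, sketched below with boto3 and AWS KMS as an assumed backend: the master key never leaves KMS, and only the encrypted copy of the data key is stored next to the data. The key alias is a placeholder.

import base64
import boto3
from cryptography.fernet import Fernet

kms = boto3.client("kms", region_name="us-east-1")

# Ask KMS for a fresh data key; keep only the encrypted copy at rest.
data_key = kms.generate_data_key(KeyId="alias/app-data", KeySpec="AES_256")
fernet_key = base64.urlsafe_b64encode(data_key["Plaintext"])
encrypted_key_blob = data_key["CiphertextBlob"]          # store this alongside the data

ciphertext = Fernet(fernet_key).encrypt(b"sensitive record")

# Later: have KMS decrypt the stored data key, then decrypt the data locally.
restored_key = kms.decrypt(CiphertextBlob=encrypted_key_blob)["Plaintext"]
print(Fernet(base64.urlsafe_b64encode(restored_key)).decrypt(ciphertext))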
CLOUD CREDENTIALS
The basic value proposition of the cloud is that it offers near-unlimited storage for everyone.
This means that even an enterprise’s data is usually stored along with other customers’ data,
leading to potential data breaches via third parties. This is mitigated - in theory - by the fact
that cloud access is restricted based on user credentials; however those credentials are also
stored on the cloud and can vary significantly in security strength based on individual users'
password habits, meaning that even the credentials are subject to compromise. While a
credential compromise may not give attackers access to the data within your files, it could
allow them to perform other tasks such as making copies or deleting them. The only way to
overcome this security threat is by encrypting your sensitive data and securing your own
unique credentials, which might require you to invest in a secure password management
service.
While the cloud storage and file sharing services can offer great value to enterprises for their
flexibility, scalability, and cost savings, it is critical that organizations address these security
concerns with the implementation of a comprehensive cloud security strategy before adoption
of or transition to cloud services.
This section addresses the security issues faced by the components of a virtualization environment and
the methods through which they can be mitigated or prevented.
Virtualization Security
Virtualization security is a broad concept that includes a number of different methods to
evaluate, implement, monitor and manage security within a virtualization infrastructure /
environment.
VM sprawl
VM sprawl is the terminology used to describe the situation where the
number of VMs on a network goes beyond the point where they can be
managed effectively. While setting up VMs can be easier than setting up
real physical machines, virtual machines have basically the same licensing,
security, and compliance requirements as real physical machines.
It is often seen that VMs are quickly set up but fall behind in terms of
having the most up-to-date patches and configuration. This can open up
security-related vulnerabilities.
There are several tools available for managing VM sprawl which provide
you a single point user interface from where all the VMs running on a
network can be monitored and managed. Applications such as V-
Commander from Embotics and some others host all the relevant
information including physical machine mapping, storage locations, and
software licenses.
Complexity of monitoring
One of the risks with virtualized platforms is that the number of layers
through which VM infrastructure is implemented is enormous.
Troubleshooting events, activity logs, and crashes can be quite difficult. It
is vital to set up software tools properly to ensure that all information
necessary for monitoring can be captured accurately.
Data loss, theft, and hacking
Just like physical machines, virtual machines also contain a lot of critical,
sensitive data such as personal data, user profiles, passwords, license keys,
and history. While the risk of data loss is immense with both physical and
virtual machines, the risk is much greater with virtual machines as it is
much easier to move files and images from virtual machines than it is to
hack into physical machines via network links.
Many images and snapshots are captured from virtual machines in order to
deploy new systems or perform system restores, and these can be prone to data theft.
There are a couple of ways in which such risks can be mitigated. Using a
private key-based encryption solution is one way and it is also vital to have
comprehensive policies and controls around the storage of images and
snapshots.
Lack of visibility into virtual network
traffic
One of the biggest challenges with virtualization is the lack of visibility
into virtual networks used for communications between virtual machines.
This poses problems when enforcing security policies since traffic flowing
via virtual networks may not be visible to devices such as intrusion-
detection systems installed on a physical network.
This is due to the nature of virtualized systems. Network traffic flowing
between virtual machines does not originate at a particular host and the
hypervisor is generally not able to monitor all communications happening
between virtual machines.
There are software tools available, such as Wireshark, that can monitor
virtual network traffic, and it is essential to use them. Also, you should
consider a hypervisor that can monitor each operating system instance
separately.
Offline and dormant VMs
One of the significant loopholes of virtualized systems is with offline or
dormant virtual machines. Virtual machines, by nature, can be provisioned
dynamically whenever needed. Similarly, virtual machines can also be
suspended (made dormant) or brought offline based on the resourcing
needs of the moment.
What happens with dormant or offline VMs is that security software
updates and deployment of critical code patches stop happening. This
makes them out of date for the period they are offline or dormant. So,
when they are again brought online and provisioned, a point of
vulnerability opens up until their patches and software updates are brought
up to date. This increases the risk of data theft from the relevant images.
To meet this challenge, you need to have specific policies set up to
manage offline and dormant VMs. It is also necessary to use software tools
that recognize the moment dormant or offline VMs are brought back
online and ensure that their configuration is brought back in sync
immediately.
Hypervisor security
The hypervisor is a software layer between the underlying hardware
platform and the virtual machines. It provides one more possible attack
point for hackers to gain access to VMs.
This is a potentially serious vulnerability as the hypervisor is the program
that controls the operation of the VMs. There can even be entry points via
the VMs themselves whereby malware that has infected one particular VM
is able to penetrate the hypervisor and by doing so, also compromise other
VMs that the hypervisor controls.
Risk mitigation against hypervisor attacks takes several forms:
Fine-tuning the hypervisor configuration to disable high-risk
features such as memory sharing between VMs running on the same
hypervisor, file-sharing services and clipboard sharing
Connecting only those physical devices that are actually being used
Analyzing hypervisor logs on a regular basis
Using hypervisor monitoring technologies such as Trusted Execution
Technology from Intel
Execution of VMs with different trust
levels
Typically, it is seen that VMs with varying trust levels are operated from
the same physical server. This can create potential attack points because
VMs with lower trust levels will typically have security controls applied
that are weaker than VMs with higher trust levels.
What this means is that there could be possible pathways to compromise
VMs with higher trust levels via VMs with lower trust levels.
Ideally, you should be able to run workloads of different trust levels on
different physical or logical networks and servers. Firewalls should be
used to isolate VM groups from other groups.
Pathways from public cloud in hybrid
cloud systems
Any hybrid cloud system is built with both public and private cloud
components. In such cases, weaknesses and risks can arise because even
though data is being exchanged between the public cloud and the private
cloud, the same authentication and encryption standards may not be
applied on the public cloud end.
To mitigate this risk, you should consider applying a common set of
enterprise level security and compliance standards to hybrid cloud
systems, be it on the public or the private cloud. Or you may wish to build
an identity-management service that provides one service to systems in
either cloud. The choice is yours.
VM Security Recommendations
1. We strongly recommend you treat each virtual machine as though it were a real machine for the purposes of security.
While MIT does its best to prevent virus attacks, no computer is immune to them. Anti-virus software needs to be
installed separately on the virtual machine, even if virus protection is already installed on the Macintosh operating
system itself. Sophos, the anti-virus software distributed and supported by IS&T at no cost, includes protection
against viruses, Trojans, worms and spyware, as well as adware.
2. While virus protection software offers some protection from spyware, we recommend using Windows Defender on
your Windows virtual machines for additional protection. Defender is included with Windows.
3. Weak passwords can be guessed, thus giving someone else access to your files and your system. Create passwords
that are at least eight characters long, containing numbers, upper- and lower-case letters, and symbols.
4. It is equally important to keep your host and virtual operating systems updated, as compromises can occur in either
kind of system. Install operating system security updates to keep your system current and protected from known
vulnerabilities. We strongly recommend utilizing automatic updates, but note that virtual systems can only take
updates when they are running. If your virtual system has not been started in some time (or is rarely left running
long enough to take an update), we recommend you run a manual update as soon as you start your virtual system.
For more information, see: MIT Windows Automatic Update Service, Red Hat Network.
5. Maintain Like Risk Postures for All Machines (Virtual and Host)
Your system is only as secure as the least secure virtual or host machine. All guests on a host machine should have a
like risk posture - the same level of accessibility, data sensitivity, and level of protection. If any guest is more
vulnerable than the other guests or your host, it could be an entry point to compromise the rest of your system.
6. When taking a snapshot of a virtual machine and then branching off, make sure to save the image at the instance
before the branch (the trunk) rather than at the branch level, to ensure security patches are most up to date.
Best Practices
1. Don't register a virtual machine for DHCP on wireless.
2. Do not copy the lockfile directory (the only subdirectory that ends in ".lck").
3. When restoring from backup, use move, not copy. This prevents issues with duplicate MAC addresses on the same
network.
4. Treat each VM as a standalone computer for security purposes. Install virus scanning software and take regular OS
updates.
5. Enable "Time synchronization between the virtual machine and the host operating system" via VMware Tools.
6. Networking: use NAT networking. This should be the default setting for your virtual machines. Advanced users,
particularly those running Linux guests, may discover they want or need to deal with the additional complexity of
other networking configurations.
7. Carefully plan your disk allocations. Do not over-allocate your disk. It is dangerous to tell VMware to make
images that, if they all grew to their full size, would take up more disk space than you have free. If this happens,
VMware may pop up an alert warning you when you're about to use up more space than you have, giving you a
chance to free up disk space or exit cleanly. We don't recommend relying on the warning; there's no guarantee it
will appear before bad things (data loss or corruption) happen.
Backups
The importance of backing up your data cannot be stressed enough. Virtual machines are at just as much risk, if not more,
for data loss due to hardware failure, file corruption, system compromise, and other events. If data loss happens, a backup
can make a world of difference in recovering from such an event. How you use your virtual machine (VM) will determine
the most appropriate backup method:
1. Install Code42 within your virtual machine and have it run regular backups of the data within your virtual
machine. This method does not preserve your virtual machine, just the data within it. For more information on
using Code42 for virtual machines, see: Code42 (Formerly Crashplan) Backup Accounts.
2. Your VM is an appliance:
We recommend that the system administrator manually makes backups. This preserves both the virtual machine
and your data within it. Simply drag and copy the VM somewhere (e.g., an external drive). Exclude your VM files
from regular backups via Code42; see items 2 and 3 below for the reasons. For more information, see: Q. I want to
1. A virtual machine image is actually comprised of several files. All of those have to be in sync or behavior is
erratic.
2. From outside the virtual machine (on the host machine), if a backup is made while the virtual machine is running,
the results are inconsistent. Back up your virtual machine files on the host machine when the virtual machine is not
running.
3. To back up virtual machines using Mac OS X 10.5's Time Machine, users will need to be running Mac OS X 10.5.2
or later. When backed up using Time Machine, virtual machines are duplicated and may take up considerable space.
Security issues
1. If a host is compromised, scripts can be run on the host that can interact with the guest at whatever privilege level
the guest is logged in as. This can result in malicious trojans being installed on the host and guest machines.
2. A virtual machine that is not virus protected, is compromised, and is in a shared networking configuration can be
used by an attacker to scan both the private and public address spaces. The other virtual machines on the host (if not
patched) can also be exploited via the network, so a software firewall on each of the guests is recommended.
3. (Enterprise version) When shared folders are turned on, they can be accessed through a compromised guest. Files can
then be placed on the host, and attackers can access other guests' file systems.