
UNIT-4

Security in clouds
Cloud Security Fundamentals: A fundamental security concept employed in many
cloud installations is known as the defense-in-depth strategy. This involves using layers of security
technologies and business practices to protect data and infrastructure against threats in multiple ways.

What is cloud security?

Cloud security is the application of cybersecurity practices and programs to the protection of data and applications on public and private cloud platforms. Cloud security helps organizations manage both traditional cybersecurity issues and new challenges related to cloud environments.

For the purposes of this page, we will focus on considerations for securing public cloud platforms, since the challenges of private cloud align more closely with traditional cybersecurity challenges.

Security challenges of cloud computing

Cloud platform providers are responsible for safeguarding their physical infrastructure and the basic compute, network, and storage services they provide. However, their customers retain most or all of the responsibility for protecting their applications, monitoring activities, and ensuring that security tools are correctly deployed and configured. This division of responsibility is known as the Shared Responsibility Model.
That means customers cope with:

 Traditional cybersecurity issues as they affect workloads in the cloud, including vulnerability
management, application security, social engineering, and incident detection and response.
 New challenges related to cloud platforms, such as lack of visibility into security events in the
cloud, rapid changes in infrastructure, continuous delivery of applications, and new threats
targeting cloud administrative tools.
The benefits of cloud security

Cloud security solutions allow organizations to take advantage of the flexibility, scalability, openness, and reduced operating costs of today’s cloud platforms without endangering confidential data, regulatory compliance, or continuous business operations.

The benefits of cloud security include being able to:

 Discover vulnerabilities and misconfigurations in cloud-based infrastructure
 Ensure software code undergoes security testing at every step in the development, test, and
deployment process
 Monitor for incidents in applications on cloud platforms, including workloads running on
virtual machines and in containers
 Detect indicators of advanced attacks, such as anomalous behaviors and evidence of
credential theft and lateral movement
 Stop attackers from taking control of cloud platform consoles and appropriating cloud
resources for criminal purposes like cryptojacking, hosting botnets, and launching denial-of-
service (DoS) attacks
Securing AWS environments

Amazon Web Services (AWS) offers a feature-rich environment for hosting and
managing workloads in the cloud. What are some of the ways that organizations can
strengthen cloud security for workloads hosted on AWS?

Security teams can use a vulnerability management solution to discover and assess EC2
instances and scan them for vulnerabilities, misconfigurations, and policy violations.

A dynamic application security testing (DAST) solution can test web apps to discover OWASP Top Ten vulnerabilities and other weaknesses, as well as potential violations of PCI DSS and other regulations. When a DAST solution is integrated with DevOps tools like Jenkins, security testing can be triggered at specified milestones in the development process to ensure that vulnerabilities and violations are detected and fixed before code is put into production.

To detect indicators of attacks and data breaches, a SIEM solution can be integrated with the management and security services provided by Amazon. This includes access to logs created by AWS CloudTrail and CloudWatch, as well as services like Virtual Private Cloud (VPC) flow logs and Amazon Route 53 DNS logs.

A SIEM solution designed to work with cloud platforms can enrich this log data with
additional context from other sources (including endpoints, on-premises systems, and
other cloud platforms), flag indicators of compromise, and use advanced security
analytics to detect attacks early and remediate quickly.

Security alerts from AWS GuardDuty and other AWS services can be fed directly to a
SIEM, allowing the enterprise security team to quickly investigate and respond.
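As an illustration of this kind of integration, the hedged sketch below uses the AWS boto3 SDK to pull recent high-severity GuardDuty findings so they could be forwarded to a SIEM. The region, severity threshold, and the simple print stand-in for forwarding are assumptions for the example; a real deployment would normally rely on the SIEM vendor's collector or AWS-native integrations.

    # Sketch: fetch recent high-severity GuardDuty findings for SIEM forwarding.
    # Assumes boto3 is installed, AWS credentials are configured, and GuardDuty
    # is enabled in the target region.
    import json
    import boto3

    guardduty = boto3.client("guardduty", region_name="us-east-1")

    # A single detector per region is typical; take the first one.
    detector_id = guardduty.list_detectors()["DetectorIds"][0]

    # List findings with severity >= 7 (GuardDuty's "High" band).
    finding_ids = guardduty.list_findings(
        DetectorId=detector_id,
        FindingCriteria={"Criterion": {"severity": {"Gte": 7}}},
    )["FindingIds"]

    findings = guardduty.get_findings(
        DetectorId=detector_id, FindingIds=finding_ids
    )["Findings"]

    for finding in findings:
        # Printing stands in for whatever transport the SIEM expects
        # (syslog, HTTPS collector, Kinesis, etc.).
        print(json.dumps({"type": finding["Type"], "severity": finding["Severity"]}))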

Securing Azure environments

Microsoft Azure is a powerful, flexible, scalable platform for hosting workloads in the
cloud. How can organizations enhance security for workloads running on Azure?

A vulnerability management solution can use Azure Discovery Connection to discover and scan virtual machines and other assets as soon as they are spun up in an Azure environment. The scanning can uncover vulnerabilities, misconfigurations, policy violations, and other security risks. It may be possible to import Azure tags and use them to organize assets into dynamic groups that can be assessed and reported on selectively.
A DAST solution can be integrated with Azure DevOps Pipelines, allowing it to automatically launch scans for vulnerabilities at each stage in Continuous Integration and Continuous Deployment (CI/CD) workflows. This helps enterprises eliminate vulnerabilities from web applications early in the development process, when they are easiest to fix.

A SIEM solution can work with Azure Event Hubs, which aggregate cloud logs from important Azure services such as Azure Active Directory, Azure Monitor, Azure Resource Manager (ARM), Azure Security Center, and Office 365. The SIEM can obtain log data from Azure Event Hubs in real time, combine it with information from endpoints, networks, on-premises data centers, and other cloud platforms, and perform analyses that uncover phishing attacks, active malware, the use of compromised credentials, lateral movement by attackers, and other evidence of attacks.
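The hedged sketch below shows how a collector might read diagnostic logs from an Event Hub using the azure-eventhub Python SDK. The connection string and hub name are placeholders; how events are parsed and forwarded depends entirely on the SIEM in use.

    # Sketch: consume Azure diagnostic logs from an Event Hub for SIEM ingestion.
    # Assumes the azure-eventhub package is installed and that Azure services
    # have been configured to stream their logs to this Event Hub.
    from azure.eventhub import EventHubConsumerClient

    CONNECTION_STR = "<event-hub-namespace-connection-string>"  # placeholder
    EVENTHUB_NAME = "insights-operational-logs"                 # placeholder

    def on_event(partition_context, event):
        # Each event body is typically a JSON document with a "records" array
        # of log entries from the Azure services streaming into this hub.
        print(event.body_as_str())

    client = EventHubConsumerClient.from_connection_string(
        conn_str=CONNECTION_STR,
        consumer_group="$Default",
        eventhub_name=EVENTHUB_NAME,
    )

    with client:
        # Read from the start of each partition ("-1"); a production collector
        # would use a checkpoint store so ingestion resumes where it left off.
        client.receive(on_event=on_event, starting_position="-1")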

The Azure Security Center also generates alerts, but lacks the data enrichment, analysis,
and workflow features of a full SIEM. However, security teams can arrange to send
Security Center alerts directly to a SIEM solution to take advantage of those advanced
capabilities.

Security for multi-cloud environments

Cloud security is not just about providing security for separate cloud platforms
independently. Rather, it is a matter of capturing, correlating, analyzing, and acting on
all the security data generated by the organization and its cloud service providers.

With today’s microservice-based apps and hybrid and multi-cloud architectures, applications can be spread across several cloud platforms and on-premises data centers.
Advanced attacks often start with endpoints or web apps and then move across multiple
computing environments. Attacks against one cloud platform are often followed by the
same type of attack against other cloud platforms.

For these reasons, it is essential that organizations use security solutions that provide
visibility and monitoring across their entire IT footprint, including multiple cloud
platforms and on-premises data centers.

Vulnerability assessment tools for the cloud:

Vulnerability Scanning Tools

Vulnerability scanning tools allow for the detection of vulnerabilities in applications in many ways. Code analysis vulnerability tools analyze coding bugs. Audit vulnerability tools can find well-known rootkits, backdoors, and trojans.

There are many vulnerability scanners available in the market. They can be free, paid, or open-source. Most of the free and open-source tools are available on GitHub. Deciding which tool to use depends on a few factors such as vulnerability type, budget, how frequently the tool is updated, etc.

1. Nikto2
Nikto2 is open-source vulnerability scanning software that focuses on web application security. Nikto2 can find around 6,700 dangerous files that cause issues to web servers and can report outdated server versions. On top of that, Nikto2 can alert on server configuration issues and perform web server scans within a minimal time.
Nikto2 doesn’t offer any countermeasures for vulnerabilities found nor provide risk
assessment features. However, Nikto2 is a frequently updated tool that enables a broader
coverage of vulnerabilities.

2. Netsparker
Netsparker is another web application vulnerability tool with an automation feature available
to find vulnerabilities. This tool is also capable of finding vulnerabilities in thousands of web
applications within a few hours.
Although it is a paid enterprise-level vulnerability tool, it has many advanced features. Its crawling technology finds vulnerabilities by crawling through the application. Netsparker can describe and suggest mitigation techniques for vulnerabilities found. Also, security solutions for advanced vulnerability assessment are available.

3. OpenVAS
OpenVAS is a powerful vulnerability scanning tool that supports large-scale scans which are
suitable for organizations. You can use this tool for finding vulnerabilities not only in the web
application or web servers but also in databases, operating systems, networks, and virtual
machines.
OpenVAS receives updates daily, which broadens the vulnerability detection coverage. It
also helps in risk assessment and suggests countermeasures for the vulnerabilities detected.

4. W3AF
W3AF is a free and open-source tool known as the Web Application Attack and Audit Framework. It is an open-source vulnerability scanner for web applications that creates a framework to help secure web applications by finding and exploiting their vulnerabilities. This tool is known for its user-friendliness. Along with vulnerability scanning options, W3AF also has exploitation facilities used for penetration testing work. Moreover, W3AF covers a broad collection of vulnerabilities. Organizations whose domains are attacked frequently, especially with newly identified vulnerabilities, can select this tool.

5. Arachni
Arachni is also a dedicated vulnerability tool for web applications. This tool covers a variety
of vulnerabilities and is updated regularly. Arachni provides facilities for risk assessment as
well as suggests tips and countermeasures for vulnerabilities found.
Arachni is a free and open-source vulnerability tool that supports Linux, Windows, and macOS. Arachni also assists in penetration testing with its ability to cope with newly identified vulnerabilities.

6. Acunetix
Acunetix is a paid web application security scanner (an open-source version is also available) with many functionalities provided. It can scan for around 6,500 vulnerabilities. In addition to web applications, it can also find vulnerabilities in the network.
Acunetix provides the ability to automate your scans. It is suitable for large-scale organizations as it can handle many devices. HSBC, NASA, and the US Air Force are a few industrial giants that use Acunetix for vulnerability tests.

7. Nmap
Nmap is one of the well-known free and open-source network scanning tools among many
security professionals. Nmap uses the probing technique to discover hosts in the network and
for operating system discovery.
This feature helps in detecting vulnerabilities in single or multiple networks. If you are new to or just learning vulnerability scanning, Nmap is a good starting point.
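For readers who want to try this, here is a minimal sketch that drives the nmap command line from Python and saves the XML output for later analysis. It assumes nmap is installed and that you have permission to scan the target; the target address is a placeholder from the documentation range.

    # Sketch: run an nmap service-detection scan and keep the XML report.
    # Assumes the nmap binary is on PATH and the target may be scanned legally.
    import subprocess

    target = "192.0.2.10"  # placeholder address (TEST-NET-1)

    # -sV probes open ports for service/version info; -oX writes an XML report
    # that other tools (or a small parser) can consume later.
    result = subprocess.run(
        ["nmap", "-sV", "-oX", "scan-report.xml", target],
        capture_output=True,
        text=True,
        check=True,
    )

    print(result.stdout)  # human-readable summary printed by nmap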

8. OpenSCAP
OpenSCAP is a framework of tools that assist in vulnerability scanning, vulnerability assessment, vulnerability measurement, and creating security measures. OpenSCAP is a free and open-source tool developed by the community. OpenSCAP only supports Linux platforms.
The OpenSCAP framework supports vulnerability scanning on web applications, web servers, databases, operating systems, networks, and virtual machines. Moreover, it provides a facility for risk assessment and support to counteract threats.

9. GoLismero
GoLismero is a free and open-source tool used for vulnerability scanning. GoLismero focuses on finding vulnerabilities in web applications but can also scan for vulnerabilities in the network. GoLismero is a convenient tool that works with results provided by other vulnerability tools such as OpenVAS, then combines the results and provides feedback. GoLismero covers a wide range of vulnerabilities, including database and network vulnerabilities. Also, GoLismero suggests countermeasures for vulnerabilities found.

10. Intruder
Intruder is a paid vulnerability scanner specifically designed to scan cloud-based storage. Intruder starts scanning as soon as a new vulnerability is released. The scanning mechanism in Intruder is automated and constantly monitors for vulnerabilities.
Intruder is suitable for enterprise-level vulnerability scanning as it can manage many devices. In addition to monitoring cloud storage, Intruder can help identify network vulnerabilities as well as provide quality reporting and suggestions.

11. Comodo HackerProof


With Comodo HackerProof you will be able to reduce cart abandonment, perform daily vulnerability scanning, and use the included PCI scanning tools. You can also utilize the drive-by attack prevention feature and build valuable trust with your visitors. Thanks to Comodo HackerProof, many businesses can convert more visitors into buyers.

Buyers tend to feel safer when making a transaction with your business, and you should find
that this drives your revenue up. With the patent-pending scanning technology, SiteInspector,
you will enjoy a new level of security.

12. Aircrack
Aircrack, also known as Aircrack-ng, is a set of tools used for assessing WiFi network security. These tools can also be utilized in network auditing and support multiple operating systems such as Linux, OS X, Solaris, NetBSD, Windows, and more.

The tool focuses on different areas of WiFi security, such as monitoring packets and data, testing drivers and cards, cracking, and replay attacks. This tool also allows you to retrieve lost keys by capturing data packets.

13. Retina CS Community


Retina CS Community is an open-source web-based console that enables you to build a more centralized and straightforward vulnerability management system. Retina CS Community has features like compliance reporting, patching, and configuration compliance, and because of this, you can perform cross-platform vulnerability assessments.

The tool is excellent for saving time, cost, and effort when it comes to managing your
network security. It features an automated vulnerability assessment for DBs, web
applications, workstations, and servers. Businesses and organizations will get complete
support for virtual environments with things like virtual app scanning and vCenter
integration.

14. Microsoft Baseline Security Analyzer (MBSA)


An entirely free vulnerability scanner created by Microsoft, it is used for testing your Windows server or Windows computer for vulnerabilities. The Microsoft Baseline Security Analyzer has several vital features, including checking for missing service packs, security updates, and other Windows updates, and more. It is the ideal tool for Windows users.

It’s excellent for helping you to identify missing updates or security patches. Use the tool to
install new security updates on your computer. Small to medium-sized businesses find the
tool most useful, and it helps save the security department money with its features. You
won’t need to consult a security expert to resolve the vulnerabilities that the tool finds.

15. Nexpose
Nexpose is an open-source tool that you can use at no cost. Security experts regularly use this tool for vulnerability scanning. All new vulnerabilities are included in the Nexpose database thanks to the GitHub community. You can use this tool with the Metasploit Framework, and you can rely on it to provide detailed scanning of your web application. Before generating the report, it takes various elements into account.

Vulnerabilities are categorized by the tool according to their risk level and ranked from low
to high. It’s capable of scanning new devices, so your network remains secure. Nexpose is
updated each week, so you know it will find the latest hazards.

16. Nessus Professional


Nessus is a branded and patented vulnerability scanner created by Tenable Network Security. Nessus helps protect networks from hacking attempts by identifying the vulnerabilities that permit remote compromise of sensitive data.

The tool covers an extensive range of operating systems, databases, applications, and several other devices across cloud infrastructure and virtual and physical networks. Millions of users trust Nessus for their vulnerability assessment and configuration issues.

17. SolarWinds Network Configuration Manager


SolarWinds Network Configuration Manager has consistently received high praise from users. The vulnerability assessment features it includes address a specific type of vulnerability that many other options do not: misconfigured networking equipment. This sets it apart from the rest. Its primary utility as a vulnerability scanning tool is in validating network equipment configurations for errors and omissions. It can also be used to check device configurations for changes periodically.

It integrates with the National Vulnerability Database and has access to the most current CVEs to identify vulnerabilities in your Cisco devices. It will work with any Cisco device running ASA, IOS, or Nexus OS.

Vulnerability Assessment Secures Your Network
If an attack starts by modifying device networking configuration, the tools will be able to
identify and put a stop to it. They assist you with regulatory compliance with their ability to
detect out-of-process changes, audit configurations, and even correct violations.

To implement a vulnerability assessment, you should follow a systematic process as the one
outlined below.

Step 1 – Begin the process by documenting it, deciding which tool or tools to use, and obtaining the necessary permission from stakeholders.

Step 2 – Perform vulnerability scanning using the relevant tools. Make sure to save all the
outputs from those vulnerability tools.

Step 3 – Analyse the output and decide which vulnerabilities identified could be a possible
threat. You can also prioritize the threats and find a strategy to mitigate them.

Step 4 – Make sure you document all the outcomes and prepare reports for stakeholders.

Step 5 – Fix the vulnerabilities identified.
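To make Steps 2 and 3 concrete, here is a small, hedged sketch that takes scanner output (represented as a list of findings with CVSS scores, purely an assumption for illustration) and prioritizes it for the report in Step 4 and the remediation in Step 5.

    # Sketch: prioritize scanner findings by severity for the assessment report.
    # The findings list stands in for output saved from whichever scanners were
    # used in Step 2; field names here are illustrative, not a scanner's format.
    findings = [
        {"host": "10.0.0.5", "issue": "Outdated OpenSSL", "cvss": 7.5},
        {"host": "10.0.0.9", "issue": "Directory listing enabled", "cvss": 5.3},
        {"host": "10.0.0.5", "issue": "Default admin credentials", "cvss": 9.8},
    ]

    def severity_band(cvss: float) -> str:
        # CVSS v3 qualitative ratings.
        if cvss >= 9.0:
            return "Critical"
        if cvss >= 7.0:
            return "High"
        if cvss >= 4.0:
            return "Medium"
        return "Low"

    # Highest-risk items first, so remediation (Step 5) starts with them.
    for f in sorted(findings, key=lambda f: f["cvss"], reverse=True):
        print(f"{severity_band(f['cvss']):8} {f['cvss']:>4}  {f['host']:<10} {f['issue']}")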


Advantages of Scanning for Vulnerabilities
Vulnerability scanning keeps systems secure from external threats. Other benefits include:

 Affordable – Many vulnerability scanners are available free of charge.
 Quick – An assessment takes only a few hours to complete.
 Automatable – You can use the automated functions available in vulnerability tools to perform scans regularly without manual involvement.
 Performance – Vulnerability scanners perform almost all of the well-known vulnerability scans.
 Cost/Benefit – Reduce costs and increase benefits by optimizing how security threats are addressed.
Privacy and security in cloud computing: security architecture

Cloud Security – Shared Responsibility


First, let’s talk about the cloud security operational model. By definition, cloud security responsibilities in a public cloud are shared between the cloud customer (your enterprise) and the cloud service provider, whereas in a private cloud the customer manages all aspects of the cloud platform. Cloud service providers are responsible for securing the shared infrastructure including routers, switches, load balancers, firewalls, hypervisors, storage networks, management consoles, DNS, directory services and cloud APIs.

The figure below highlights the layers, within a cloud service, that are secured by the
provider versus the customer.


Prior to signing up with a provider, it is important to perform a gap analysis on the cloud
service capabilities. This exercise should benchmark the cloud platform’s maturity,
transparency, compliance with enterprise security standards (e.g. ISO 27001) and regulatory
standards such as PCI DSS, HIPAA and SOX. Cloud security maturity models can help
accelerate the migration strategy of applications to the cloud. The following are a set of
principles you can apply when evaluating a cloud service provider’s security maturity:
 Disclosure of security policies, compliance and practices: The cloud service provider should demonstrate compliance with industry standard frameworks such as ISO 27001, SSAE 16 and the CSA Cloud Controls Matrix. Controls certified by the provider should match control expectations from your enterprise data protection standard standpoint. When cloud services are certified for ISO 27001 or SSAE 16, the scope of controls should be disclosed. Clouds that host regulated data must meet compliance requirements such as PCI DSS, Sarbanes-Oxley and HIPAA.
 Disclosure when mandated: The cloud service provider should disclose relevant data when
disclosure is imperative due to legal or regulatory needs.
 Security architecture: The cloud service provider should disclose security architectural
details that either help or hinder security management as per the enterprise standard. For
example, the architecture of virtualization that guarantees isolation between tenants should be
disclosed.
 Security Automation – The cloud service provider should support security automation by
publishing API(s) (HTTP/SOAP) that support:
 Export and import of security event logs, change management logs, user entitlements (privileges), user profiles, firewall policies, and access logs in an XML or enterprise log standard format.
 Continuous security monitoring including support for emerging standards such as Cloud
Audit.
 Governance and Security responsibility: Governance and security management
responsibilities of the customer versus those of the cloud provider should be clearly
articulated.

Cloud Security Threats and Mitigation


Does cloud computing exacerbate security threats to your application? Which emerging
threats are relevant? Which traditional threats are amplified or muted? Answers to these
questions are dependent on the combination of cloud service deployment and operational
models in play. The following table illustrates the dependencies which should be taken into
consideration when architecting security controls into applications for cloud deployments:
IaaS
Public/Hybrid Cloud – Threats: · OWASP Top 10 · Data leakage (inadequate ACLs) · Privilege escalation via management console misconfiguration · Exploiting VM weaknesses · DoS attack via API · Weak protection of privileged keys · VM isolation failure
Private Cloud – Threats: · OWASP Top 10 · Data theft (insiders) · Privilege escalation via management console misconfiguration
Mitigation: · Testing apps and APIs for OWASP Top 10 vulnerabilities · Hardening of VM images · Security controls including encryption, multi-factor authentication, fine-granular authorization, logging · Security automation – automatic provisioning of firewall policies, privileged accounts, DNS, application identity (see patterns below)

PaaS (in addition to the above)
Public/Hybrid Cloud – Threats: · Privilege escalation via API · Authorization weakness in platform services such as Message Queue, NoSQL, Blob services · Vulnerabilities in the runtime engine resulting in tenant isolation failure
Private Cloud – Threats: · Privilege escalation via API
Mitigation: · As above

In addition to the aforementioned threats to information confidentiality and integrity, threats to service availability need to be factored into the design. Please remember that the basic tenets of security architecture are the design controls that protect the confidentiality, integrity and availability (CIA) of information and services.
Threat to cloud service availability – Cloud services (SaaS, PaaS, IaaS) can be disrupted by DDoS attacks or misconfiguration errors by cloud service operators or customers. These errors have the potential to cascade across the cloud and disrupt the network, systems and storage hosting cloud applications. To achieve continuous availability, cloud applications should be architected to withstand disruptions to shared infrastructure located within a data center or a geographic region. This vulnerability is well illustrated by the Amazon outage in which Elastic Block Store (EBS) problems brought down customer applications deployed within a single availability zone in the US East region. However, applications that were architected to tolerate faults within a region were largely shielded from this outage and continued to be available to users. As a design principle, assume everything will fail in the cloud and design for failure. Applications should withstand underlying physical hardware failure as well as service disruption within a geographic region. Loose coupling of applications and components can help in the latter case.

Cloud Security Architecture – Plan

As a first step, architects need to understand what security capabilities are offered by cloud platforms (PaaS, IaaS). The figure below illustrates the architecture for building security into cloud services.
Security offerings and capabilities continue to evolve and vary between cloud providers. Hence you will often discover that security mechanisms such as key management and data encryption are not available. For example, you may need an AES 128-bit encryption service for encrypting security artifacts, with keys escrowed to a key management service. For such critical services, one will continue to rely on internal security services. A “hybrid cloud” deployment architecture pattern may be the only viable option for applications that depend on such internal services. Another common use case is Single Sign-On (SSO). SSO implemented within an enterprise may not be extensible to a cloud application unless a federation architecture using SAML 1.1 or 2.0 is supported by the cloud service provider.

The following are cloud security best practices to mitigate risks to cloud services:
 Architect for security-as-a-service – Application deployments in the cloud involve
orchestration of multiple services including automation of DNS, load balancer, network QoS,
etc. Security automation falls in the same category which includes automation of firewall
policies between cloud security zones, provisioning of certificates (for SSL), virtual machine
system configuration, privileged accounts and log configuration. Application deployment
processes that depend on security processes such as firewall policy creation, certificate
provisioning, key distribution and application pen testing should be migrated to a self-service
model. This approach will eliminate human touch points and will enable a security as a
service scenario. Ultimately this will mitigate threats due to human errors, improve
operational efficiency and embed security controls into the cloud applications.
 Implement sound identity, access management architecture and practice – Scalable
cloud bursting and elastic architecture will rely less on network based access controls and
warrant strong user access management architecture. Cloud access control architecture should
address all aspects of user and access management lifecycles for both end users and
privileged users – user provisioning & deprovisioning, authentication, federation,
authorization and auditing. A sound architecture will enable reusability of identity and access
services for all use cases in public, private and hybrid cloud models. It is good practice to
employ secure token services along with proper user and entitlement provisioning with audit
trails. Federation architecture is the first step to extending enterprise SSO to cloud services.
Refer to cloud security alliance, Domain 12 for detailed guidance here.
 Leverage APIs to automate safeguards – Any new security services should be deployed with an API (REST/SOAP) to enable automation. APIs can help automate firewall policies, configuration hardening, and access control at the time of application deployment (see the sketch after this list). This can be implemented using open source tools such as Puppet in conjunction with the API supplied by the cloud service provider.
 Always encrypt or mask sensitive data – Today’s private cloud applications are candidates
for tomorrow’s public cloud deployment. Hence architect applications to encrypt all sensitive
data irrespective of the future operational model.
 Do not rely on an IP address for authentication services – IP addresses in clouds are
ephemeral in nature so you cannot solely rely on them for enforcing network access control.
Employ certificates (self-signed or from a trusted CA) to enable SSL between services
deployed on cloud.
 Log, Log, Log – Applications should centrally log all security events that will help create an
end-to-end transaction view with non-repudiation characteristics. In the event of a security
incident, logs and audit trails are the only reliable data leveraged by forensic engineers to
investigate and understand how an application was exploited. Clouds are elastic and logs are
ephemeral hence it is critical to periodically migrate log files to a different cloud or to the
enterprise data center.
 Continuously monitor cloud services – Monitoring is an important function given that
prevention controls may not meet all the enterprise standards. Security monitoring should
leverage logs produced by cloud services, APIs and hosted cloud applications to perform
security event correlation. Cloud audit (cloudaudit.org) from CSA can be leveraged towards
this mission.
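As a concrete illustration of the "leverage APIs to automate safeguards" practice above, here is a hedged sketch that uses the AWS boto3 SDK to provision a firewall (security group) rule at deployment time. The group name, VPC ID, and allowed CIDR are placeholders; equivalent calls exist in most cloud providers' APIs and in tools like Puppet or Terraform.

    # Sketch: provision a firewall rule (AWS security group ingress) via API,
    # so the policy is applied automatically as part of application deployment.
    # Assumes boto3 is installed and AWS credentials are configured.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Create a security group for the new application tier (placeholder names).
    group = ec2.create_security_group(
        GroupName="app-tier-sg",
        Description="Automatically provisioned policy for the app tier",
        VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
    )

    # Allow HTTPS only from the load balancer subnet, nothing else.
    ec2.authorize_security_group_ingress(
        GroupId=group["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "10.0.1.0/24", "Description": "LB subnet"}],
        }],
    )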

Cloud Security Principles


Every enterprise has different levels of risk tolerance and this is demonstrated by the product
development culture, new technology adoption, IT service delivery models, technology
strategy, and investments made in the area of security tools and capabilities. When a business
unit within an enterprise decides to leverage SaaS for business benefits, the technology
architecture should lend itself to support that model. Additionally the security architecture
should be aligned with the technology architecture and principles. Following is a sample of
cloud security principles that an enterprise security architect needs to consider and customize:
 Services running in a cloud should follow the principles of least privileges.
 Isolation between various security zones should be guaranteed using layers of firewalls –
Cloud firewall, hypervisor firewall, guest firewall and application container. Firewall policies
in the cloud should comply with trust zone isolation standards based on data sensitivity.
 Applications should use end-to-end transport level encryption (SSL, TLS, IPSEC) to secure data in transit between applications deployed in the cloud as well as to the enterprise (see the sketch after this list).
 Applications should externalize authentication and authorization to trusted security services.
Single Sign-on should be supported using SAML 2.0.
 Data masking and encryption should be employed based on data sensitivity aligned with
enterprise data classification standard.
 Applications in a trusted zone should be deployed on authorized enterprise standard VM
images.
 Industry standard VPN protocols such as SSH, SSL and IPSEC should be employed when
deploying virtual private cloud (VPC).
 Security monitoring in the cloud should be integrated with existing enterprise security
monitoring tools using an API.
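The sketch below illustrates the transport-encryption principle above from a client's perspective, using Python's requests library to call a cloud-hosted service over TLS with certificate verification, and optionally a client certificate for mutual TLS. The URL and file paths are placeholders.

    # Sketch: call a cloud-hosted service over TLS, verifying the server
    # certificate against a trusted CA bundle and presenting a client
    # certificate for mutual TLS. Paths and URL are placeholders.
    import requests

    response = requests.get(
        "https://api.example.internal/orders",
        verify="/etc/pki/ca-bundle.pem",                      # trusted CA certificates
        cert=("/etc/pki/client.crt", "/etc/pki/client.key"),  # mutual TLS
        timeout=10,
    )
    response.raise_for_status()
    print(response.json())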

Cloud Security Architecture Patterns


Architecting appropriate security controls that protect the CIA of information in the cloud can
mitigate cloud security threats. Security controls can be delivered as a service (Security-as-a-
Service) by the provider or by the enterprise or by a 3rd party provider. Security architectural
patterns are typically expressed from the point of security controls (safeguards) – technology
and processes. These security controls and the service location (enterprise, cloud provider,
3rd party) should be highlighted in the security patterns.

Security architecture patterns serve as the North Star and can accelerate application migration
to clouds while managing the security risks. In addition, cloud security architecture patterns
should highlight the trust boundary between various services and components deployed at
cloud services. These patterns should also point out standard interfaces, security protocols
(SSL, TLS, IPSEC, LDAPS, SFTP, SSH, SCP, SAML, OAuth, Tacacs, OCSP, etc.) and
mechanisms available for authentication, token management, authorization, encryption
methods (hash, symmetric, asymmetric), encryption algorithms (Triple DES, 128-bit AES,
Blowfish, RSA, etc.), security event logging, source-of-truth for policies and user attributes
and coupling models (tight or loose). Finally, the patterns should be leveraged to create security checklists that can be automated by configuration management tools like Puppet.

In general, patterns should highlight the following attributes (but not limited to) for each of
the security services consumed by the cloud application:
 Logical location – Native to cloud service, in-house, third party cloud. The location may have
an implication on the performance, availability, firewall policy as well as governance of the
service.
 Protocol – What protocol(s) are used to invoke the service? For example REST with X.509
certificates for service requests.
 Service function – What is the function of the service? For example encryption of the artifact,
logging, authentication and machine finger printing.
 Input/Output – What are the inputs, including methods to the controls, and outputs from the
security service? For example, Input = XML doc and Output =XML doc with encrypted
attributes.
 Control description – What security control does the security service offer? For example,
protection of information confidentiality at rest, authentication of user and authentication of
application.
 Actor – Who are the users of this service? For example, End point, End user, Enterprise
administrator, IT auditor and Architect.

Here is a subset of the cloud security architecture pattern published by open security
architecture group (opensecurityarchitecturegroup.org).


This pattern illustrates the actors (architect, end user, business manager, IT manager),
interacting with systems (end point, cloud, applications hosted on the cloud, security
services) and the controls employed to protect the actors and systems (access enforcement,
DoS protection, boundary protection, cryptographic key & management, etc). Let’s look at
details communicated by the pattern.
Identity management and access controls
Access controls help us restrict whom and what accesses our information resources, and
they possess four general functions: identity verification, authentication, authorization, and
accountability. These functions work together to grant access to resources and constrain
what a subject can do with them.

This chapter reviews each access control function, four approaches to access control/role
management, and takes a brief look at the future of access controls.

Identity management
Identity management consists of one or more processes to verify the identity of a
subject attempting to access an object. However, it does not provide 100 percent
assurance of the subject’s identity. Rather, it provides a level of probability of
assurance. The level of probability depends on the identity verification processes
in place and their general trustworthiness.

Identity defined
A good electronic identity is something that is verifiable and difficult to reproduce. It must also be easy to use. An identity that is difficult to use is an identity, or a related service/application, that goes unused.

An example of an ineffective identity is an account ID and password combination. It is easy to use but also often easy to reproduce. Any single piece of information that is easily guessed or stolen cannot provide a high enough identity probability (IP). I will not go into the ubiquitous reasons passwords are weak forms of identity management. This is easily discovered with a quick Google search. For our purposes, we will simply accept their unsuitability when sensitive information is involved.

On the other end of the identity effectiveness spectrum might be a solution that provides nearly 100% probability of a subject’s identity but is frustrating and close to unusable. For example, a combination of a personal certificate, token, password, and a voice print to access a financial application is a waste of resources and a path to security team unemployment. Identity verification process cost and complexity should mirror the risk associated with unauthorized access and still make sense at the completion of a cost-benefit analysis.

Effective and reasonable identity solution characteristics


Identity verification, like any other control, is stronger when supported by other
controls. For example, risk of account ID and password access is mitigated by
strong enforcement of separation of duties, least privilege, and need-to-know.
Depending on the data involved, this might be enough. For more restricted data
classifications, we can use a little probability theory to demonstrate the
effectiveness of layered controls. First, however, let us take a look at one of the
most common multi-layer solutions: multi-factor authentication.

Multi-factor authentication (MFA)


MFA uses two of three dimensions, or factors:

 Something the subject knows
 Something the subject has
 Something the subject is
Examples of what a subject “knows” include passwords and PINs. Something a
subject “has” might be a smart card or a certificate issued by a trusted third
party. Finally, biometrics (fingerprints, facial features, vein patterns, etc.)
provides information about something the subject “is.” Using two of these
dimensions significantly increases the probability of correct identity verification.

Probability of identity verification


To demonstrate how a probability calculation might work, let’s use an example of
three approaches to restricting access to a patient care database, as shown in
Figure 11-1. Bella uses only password authentication, Olivia uses fingerprint
recognition biometrics only, and Alex uses both a password and fingerprint
recognition.

Because of the general environment and business culture restrictions at Bella’s workplace, security administrators do not require use of strong passwords. Consequently, we determine the probability that an unauthorized individual can access patient information as 30 percent (P = .30). You might rate this differently. However, the process for determining the relative effect of MFA is the same.

In Olivia’s workplace, the security director convinced management that biometrics by itself was strong enough to replace passwords and provide strong-enough identity verification. As we will see in Chapter 12, biometrics is not an identity panacea; it has its own set of challenges. In this case, management requires a low false rejection rate to reduce employee frustration. This results in a probability of 20 percent (P = .20) that someone could masquerade as Olivia and use her login account.

Alex’s security director decided to take a middle path. The director believes strong passwords cause more problems than they prevent: a view supported by business management. He also believes that lowering biometrics false rejection rates is necessary to maintain employee acceptance and maintain productivity levels. Instead of using only one less-than-optimum authentication factor, he decided to layer two: passwords (something Alex knows) and biometrics (something Alex is).

The probability of someone masquerading as Alex is very low. We can model this by applying probability theory to our example. As you might recall from our discussion of attack tree analysis, when two conditions must exist in order to achieve a desired state, we multiply the probability of one condition with that of the other. In this case, the desired condition is access to the patient database. The two conditions are knowledge of Alex’s password and counterfeiting her fingerprint. Consequently, the probability of an unauthorized person accessing the database as Alex is (.30 x .20) = .06, or six percent.
Figure 11-1: Authentication Probabilities

This demonstrates the significant reduction in identity theft risk when using two
factors, even if each by itself is relatively weak. In our example, a six percent
probability of successful unauthorized access might be acceptable. Acceptance
should depend largely on other controls in place, including what data Alex can
access and what she can do with it. However, I would not feel comfortable with
a 20 or 30 percent probability of access control failure regardless of other
existing controls. The actual percentages are not as important as using the model
to understand and explain identity verification risk mitigation.
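A short sketch of the same calculation, so the model can be rerun with whatever probabilities fit your own environment (the three example values come from the scenario above):

    # Sketch: combined probability that an attacker defeats layered factors,
    # assuming the factors fail independently (the simplifying assumption
    # used in the example above).
    def bypass_probability(*factor_failure_probs: float) -> float:
        result = 1.0
        for p in factor_failure_probs:
            result *= p
        return result

    print(bypass_probability(0.30))        # Bella: password only -> 0.30
    print(bypass_probability(0.20))        # Olivia: biometrics only -> 0.20
    print(bypass_probability(0.30, 0.20))  # Alex: both factors -> 0.06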
Authentication, authorization, and accountability (AAA)
Identity management has become a separate consideration for access control.
However, the three pillars that support authorized access still define the tools
and techniques necessary to manage who gets access to what and what they can
do when they get there: authentication, authorization, and accountability. See
Figure 11-3.

Figure 11- 3: Authentication, Authorization, and Accountability

Authentication
Identity management and authentication are inseparable. Identity management
includes assigning and managing a subject’s identity. Authentication is the
process of verifying a subject’s identity at the point of object access.

Authorization
Once a resource or network verifies a subject’s identity, the process of
determining what objects that subject can access begins. Authorization identifies
what systems, network resources, etc. a subject can access. Related processes
also enforce least privilege, need-to-know, and separation of duties.
Authorization is further divided into coarse and fine dimensions.

Coarse authorization
Coarse authorization determines at a high-level whether a subject is authorized
to use or access an object. It does not determine what the subject can do or see
once access is granted.

Fine authorization
Fine authorization further refines subject access. Often embedded in the object
itself, this process enforces least privilege, need-to-know, and separation of
duties, as defined in Chapter 1.
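A hedged sketch of the coarse-versus-fine distinction, with the role table, record ownership check, and function names invented purely for illustration:

    # Sketch: coarse authorization (may this role use the application at all?)
    # versus fine authorization (may this user act on this particular record?).
    # Roles, records, and rules are illustrative only.
    ROLE_APPS = {
        "nurse": {"patient-care"},
        "billing-clerk": {"billing"},
    }

    def coarse_authorize(role: str, application: str) -> bool:
        # High-level gate: is the role allowed into the application?
        return application in ROLE_APPS.get(role, set())

    def fine_authorize(user: str, role: str, record: dict, action: str) -> bool:
        # Enforced close to the object: least privilege and need-to-know.
        if role == "nurse" and action == "read":
            return user in record["care_team"]
        return False

    record = {"patient": "P-1001", "care_team": {"olivia", "alex"}}
    print(coarse_authorize("nurse", "patient-care"))          # True
    print(fine_authorize("olivia", "nurse", record, "read"))  # True
    print(fine_authorize("bella", "nurse", record, "read"))   # False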

Accountability
Each step from identity presentation through authentication and authorization is
logged. Further, the object or some external resource logs all activity between
the subject and object. The logs are stored for audits, sent to a log management
solution, etc. They provide insight into how well the access control process is
working: whether or not subjects abuse their access.

Approaches to access control


The method used to implement AAA varies, depending on data classification,
criticality of systems, available budget, and the difficulty associated with
managing subject/object relationships. Four common approaches exist to help
with access challenges: discretionary, role-based, mandatory, and rules-based.

Discretionary access control (DAC)

If you have given someone permission to access a file you created, you were likely practicing discretionary access control. DAC is the practice of allowing object owners, or anyone else authorized to do so, to grant access to objects at their discretion. In its purest form, DAC access is only restricted by the owner’s willingness to practice safe sharing. Microsoft Windows works this way out of the box, as does Active Directory. The only constraint on these actions is administrative: policy supported by procedures.

DAC is often a good choice for very small businesses without an IT staff. It
allows a handful of users to share information throughout their day, allowing for
smooth operation of the business. This approach, when applied to 10 or 20 employees, lacks the complexity and oversight challenges associated with using DAC in organizations with hundreds or thousands of users.
DAC lacks account onboarding and termination controls necessary to enforce
least privilege, need-to-know, and separation of duties for a large number of
employees. Further, job changes can result in “permissions creep:” the retention
of rights and permissions associated with a previous position that are
inappropriate for the new position.

Role-based access control (RBAC)


RBAC largely eliminates discretion when providing access to objects. Instead,
administrators or automated systems place subjects into roles. Subjects receive
only the rights and permissions assigned to those roles. When an employee
changes jobs, all previous access is removed, and the rights and permissions of
the new role are assigned. This becomes clearer as we step through Figure 11-4.
Before walking through our example, however, we need to understand the
various components.
Figure 11- 4: RBAC Model (Olzak, 2011, p. 9)

 Authoritative source. An authoritative source provides information concerning the status of each employee. In most cases, it is the human resources system, which provides business role, new hire, termination, job change, and other information relevant to RBAC management.
 Data owner. A data owner is the person within an organization responsible for
risk management for a specific set of data. For example, the vice-president of the
finance department would likely be the owner for all data in the financial
systems. The vice-president of sales might own responsibility for customer data.
 Role manager. A role manager reports to the data owner and obtains approval
for new roles or role changes affecting access to the owner’s data. It is the role
manager’s responsibility to work with security and other IT teams to determine
what rights and permissions for the data owner’s information a specific role
needs. This is often depicted in a matrix listing each user, the business process
tasks they perform, and the affected systems. If a role performs tasks in business
processes using multiple data types, role definition approval might require sign
off by multiple data owners.
 Role access definitions. Each role receives rights and permissions as defined by
the role managers and approved by the data owners.
 Provisioning process. Using information from the authoritative source, the
provisioning process places or removes subjects from roles. It is either
performed manually by administrators (as in many Active Directory
implementations) or automated with solutions like those provided by Courion
(http://www.courion.com/).
Initial role definition and process setup are not easy and can take weeks or
months. However, the results are easy to manage and prevent audit and
regulatory issues associated with “winging it.” Now we step through a new hire
example using our model in Figure 11-4.

Step 1. The day before a new employee reports for work, an HR clerk enters her
information into the HR application. Part of this process is assigning the new
employee a job code representing her role in the business.

Step 2. That night, a service application extracts changes to employee status from the HR system. The extract is sent to an automated provisioning system or a human administrator. If the process is automated, the new hire is quickly added to a role corresponding to her job code. Otherwise, it might take a few minutes after reporting to work for the administrator to drop the new hire’s account into the proper role.

Step 3. If automated, the onboarding process uses the appropriate role definition
to create accounts in each relevant application and network access solution. A
manual process might include adding the new hire’s account to one or more
Active Directory security groups and creating accounts in applications using
application-resident role profiles.

When the employee eventually leaves the company, the process is similar. The
HR extract shows her as terminated, and based on her role all access is removed.
For a job change, the common approach is to remove all access from the
previous role (as if the employee was terminated) and reassign her to a new role.
This removes all previous access and helps prevent permissions creep.
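The following hedged sketch models Steps 2 and 3 above, and the termination/job-change handling just described, as code. The HR extract format, role definitions, and the print stand-ins for directory calls are illustrative rather than any particular provisioning product's API.

    # Sketch: role-based provisioning driven by a nightly HR extract.
    # The extract format and role-to-permission mapping are illustrative.
    ROLE_DEFINITIONS = {
        "FIN-ANALYST": {"groups": ["finance-readers", "erp-users"]},
        "SALES-REP":   {"groups": ["crm-users"]},
    }

    def apply_change(change: dict) -> None:
        action = change["action"]          # "hire", "terminate", or "job_change"
        user = change["user"]
        role = ROLE_DEFINITIONS.get(change.get("job_code"), {"groups": []})

        if action in ("terminate", "job_change"):
            # Remove all prior access first, preventing permissions creep.
            print(f"remove {user} from all groups")
        if action in ("hire", "job_change"):
            for group in role["groups"]:
                print(f"add {user} to {group}")

    # Records extracted from the HR system overnight (illustrative).
    for change in [
        {"action": "hire", "user": "avery", "job_code": "FIN-ANALYST"},
        {"action": "job_change", "user": "jordan", "job_code": "SALES-REP"},
    ]:
        apply_change(change)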

Mandatory access control (MAC)


RBAC is a good solution for private industry. It provides everything needed to
enforce need-to-know, least privilege, and separation of duties. However, it does
not provide constraints necessary to prevent role errors associated with highly
classified military or government information. MAC fills the gaps.

Assume Adam takes on a new role at Fort Campbell. In this role, he is responsible for assessing intelligence classified as Secret. The RBAC solution used places him in the right role. However, a change in how data is stored occurred after the role was defined by the role manager and approved by the data owner (Adam’s commanding officer). This change inadvertently gives Adam access to a storage device containing Top Secret intelligence. If MAC were implemented in addition to RBAC constraints, Adam’s access to Top Secret data would be blocked.

When implementing MAC, administrators tag data elements with appropriate data classifications. Further, user roles or accounts have their own classifications based on clearance levels. For example, Adam’s role is classified as Secret. If a role classified as Secret attempts to access a Top Secret data element, it is blocked.

I often describe MAC as RBAC on steroids. While it enforces need-to-know, for example, it also prevents access to information above a user’s security classification. This is important when protecting national defense secrets. However, the cost of implementing and managing it usually causes MAC to fail a business’ cost/benefit analysis test.
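A minimal sketch of the label comparison MAC performs, with a simple ordered classification scale assumed for illustration:

    # Sketch: mandatory access control as a dominance check between the
    # subject's clearance label and the object's classification label.
    LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

    def mac_allows(subject_clearance: str, object_classification: str) -> bool:
        # Read access is permitted only if the subject's clearance dominates
        # (is at least as high as) the object's classification.
        return LEVELS[subject_clearance] >= LEVELS[object_classification]

    print(mac_allows("Secret", "Secret"))      # True  - Adam reads Secret data
    print(mac_allows("Secret", "Top Secret"))  # False - blocked despite RBAC role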

Rules-based access control (RAC)


RAC differs from other access control methods because it is largely context-
based. RBAC, for example, enforces static constraints based on a user’s role.
RAC, however, also takes into account the data affected, the identity attempting
to perform a task, and other triggers governed by business rules.

A manager, for example, has the ability to approve her employees’ hours worked. However, when she attempts to approve her own hours, a rule built into the application compares the employee record and the user, sees they are the same, and temporarily removes the approval privilege. Note that this is dynamic and occurs at the time a transaction is attempted. This is also sometimes called dynamic RBAC.

RAC is typically implemented in the application code. Developers apply business rules by including rule enforcement modules. Like MAC, RAC prevents role definition anomalies from producing unwanted results. Unlike MAC, however, RAC requires little management; cost is limited to additional development time or to purchasing an off-the-shelf solution that already provides transaction context checking.
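Here is a hedged sketch of the timesheet rule described above, implemented as a small rule-enforcement function; the record shape and function name are illustrative.

    # Sketch: rules-based (context-aware) check applied at transaction time.
    # A manager may approve her team's hours, but never her own.
    def may_approve_timesheet(approver: str, timesheet: dict) -> bool:
        if approver == timesheet["employee"]:
            # Self-approval is blocked dynamically, regardless of role.
            return False
        return approver == timesheet["manager"]

    timesheet = {"employee": "maria", "manager": "priya", "hours": 40}
    print(may_approve_timesheet("priya", timesheet))  # True  - manager approves
    print(may_approve_timesheet("maria", timesheet))  # False - self-approval blocked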
The future
So far, we have examined traditional approaches to access control. However,
with the coming of new technologies the lines between the various approaches
are disappearing. A good example of this is dynamic access control introduced in
Microsoft Windows Server 2012.

Also known as Microsoft DAC (not to be confused with discretionary access control… Microsoft just couldn’t resist making acronyms even more confusing), this integral part of Active Directory 8 extends discretionary and role-based access controls by adding data tagging. Similar to MAC, Microsoft DAC further refines access based on a user’s role, enforcing business policy according to context. The following description of DAC functionality is from Microsoft TechNet (2012):

 Identify data by using automatic and manual classification of files. For example,
you could tag data in file servers across the organization.
 Control access to files by applying safety-net policies that use central access
policies. For example, you could define who can access health information
within the organization.
 Audit access to files by using central audit policies for compliance reporting and
forensic analysis. For example, you could identify who accessed highly sensitive
information.
 Apply Rights Management Services (RMS) protection by using automatic RMS
encryption for sensitive Microsoft Office documents. For example, you could
configure RMS to encrypt all documents that contain Health Insurance
Portability and Accountability Act (HIPAA) information.
When used in conjunction with other Microsoft or compatible third-party
solutions, data is protected based on where it is accessed, who is accessing it,
and its overall classification with a combination of discretionary, role-based,
mandatory, and rules-based access management.

Automatic security: cloud computing security challenges


The following are six of the top risks that must be addressed when using cloud storage and
file sharing apps for business.

With the rising popularity of cloud storage, and its ever-increasing versatility, it’s no surprise that enterprises have jumped on the cloud bandwagon. This powerful tool not only meets storage and computing needs, but also helps save businesses thousands of dollars in IT investments. This high demand for storage has nurtured the growth of a thriving cloud service industry that offers affordable, easy-to-use and remotely-accessible cloud services.
But as with every kind of new technology, whether physical or virtual, IT experts have warned of the inherent security risks associated with using cloud storage and file sharing apps. In fact, security, or the lack thereof, has restricted universal adoption of cloud services. The main issue is that enterprises have to entrust the security of their sensitive business data to third parties, who may or may not be working in their best interest. There are several risks associated with the use of third-party cloud services; here are six of them to focus on:

NO CONTROL OVER DATA

With cloud services like Google Drive, Dropbox, and Microsoft Azure becoming a regular part of business processes, enterprises have to deal with newer security issues such as loss of control over sensitive data. The problem here is that when using third-party file sharing services, the data is typically taken outside of the company’s IT environment, and that means the data’s privacy settings are beyond the control of the enterprise. And because most cloud services are designed to encourage users to back up their data in real time, a lot of data that wasn’t meant to be shared can end up being viewed by unauthorized personnel as well. The best way to avoid such a risk is by ensuring that your provider encrypts your files during storage as well as in transit, using 128- to 256-bit encryption.
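A complementary safeguard is to encrypt files on the client before they ever reach the third-party service, so the provider only stores ciphertext. Below is a hedged sketch using the Python cryptography package's Fernet recipe (AES-based authenticated encryption); the file names are placeholders, and key storage is deliberately left out because it belongs in a key management service (see the key management risk below).

    # Sketch: encrypt a file locally before uploading it to cloud storage.
    # Requires the "cryptography" package. Key handling is out of scope here;
    # in practice the key would live in a key management service, never beside
    # the data.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # 32 bytes of url-safe key material
    fernet = Fernet(key)

    with open("quarterly-report.xlsx", "rb") as f:   # placeholder file name
        ciphertext = fernet.encrypt(f.read())

    with open("quarterly-report.xlsx.enc", "wb") as f:
        f.write(ciphertext)                          # upload this file instead

    # Later, after downloading the object back from the cloud:
    plaintext = fernet.decrypt(ciphertext)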

DATA LEAKAGE

Most of the businesses that have held back from adopting the cloud have done so in the fear
of having their data leaked. This feat stems from the fact that the cloud is a multi-user
environment, wherein all the resources are shared. It is also a third-party service, which
means that data is potentially at risk of being viewed or mishandled by the provider. It is only
human nature to doubt the capabilities of a third-party, which seems like an even bigger risk
when it comes to businesses and sensitive business data. There are also a number of external
threats that can lead to data leakage, including malicious hacks of cloud providers or
compromises of cloud user accounts. The best strategy is to depend on file encryption
and stronger passwords, instead of the cloud service provider themselves.
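One way to combine those two controls is to derive the file-encryption key from a strong passphrase with a key-derivation function such as PBKDF2, so the provider never holds the key. The sketch below is illustrative only; the Python cryptography package, the iteration count, and the passphrase are assumptions or placeholders.

# Sketch: derive the encryption key from a strong passphrase with PBKDF2 so the
# provider never holds the key. The salt is not secret but must be stored, and
# the iteration count shown is an assumption.
import base64
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode("utf-8")))

if __name__ == "__main__":
    salt = os.urandom(16)
    key = key_from_passphrase("correct horse battery staple", salt)
    # key is in the format expected by the Fernet example shown earlier.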
BYOD
Another emerging security risk of using cloud storage and file sharing services (FSS) is that they have given
employees the ability to work on a Bring Your Own Device (BYOD) basis. And this trend is
set to increase as more employees prefer to use their own devices at work, either because
they’re more used to their interfaces or have higher specs than company-provided devices.
Overall, BYOD has the potential to be a win-win situation for employees and employers,
saving employers the expense of having to buy IT equipment for employees while giving
employees more flexibility. However, BYOD also brings significant security risks if it’s not
properly managed. Stolen, lost or misused devices can mean that a business’ sensitive data is
now in the hands of a third-party who could breach the company’s network and steal valuable
information. Discovering a data breach on an external (BYOD) asset is also more difficult, as
it is nearly impossible to track and monitor employee devices without the proper tools in
place.
SNOOPING

Files in the cloud are among the most susceptible to being hacked if security measures are not
in place. The fact that they are stored and transmitted over the internet is also a major risk
factor. And even if the cloud service provides encryption for files, data can still be
intercepted en route to its destination. The best form of security against this threat is
to ensure that the data is encrypted and transmitted over a secure connection, as this will
prevent outsiders from accessing the cloud’s metadata as well.
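As a hedged illustration of transmitting only over a secure connection, the sketch below refuses any non-HTTPS endpoint and relies on certificate verification. It assumes the Python requests library, and the upload URL is a placeholder.

# Sketch: only ever send files over verified HTTPS so they cannot be read en route.
import requests

def secure_upload(path: str, url: str) -> int:
    if not url.lower().startswith("https://"):
        raise ValueError("refusing to upload over a non-TLS connection")
    with open(path, "rb") as fh:
        # verify=True (the default) checks the server's TLS certificate.
        resp = requests.post(url, files={"file": fh}, timeout=30, verify=True)
    resp.raise_for_status()
    return resp.status_code

if __name__ == "__main__":
    secure_upload("report.xlsx.enc", "https://storage.example.com/upload")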
KEY MANAGEMENT

The management of cryptographic keys has always been a security risk for enterprises, but its
effects are magnified in the cloud, which is why key management needs to be performed
effectively. This can only be done by securing the key management process from the start and
by keeping it inconspicuous, automated, and active. This is the only way to ensure that
sensitive data isn’t vulnerable as it moves to the cloud. Additionally, keys need to be jointly
secured, and the retrieval process should be deliberately difficult, so that data can never be
accessed without authorization.
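One common way to keep key management automated while making unauthorized retrieval difficult is envelope encryption, where each file's data key is wrapped by a master key held outside the cloud. The sketch below illustrates the idea under stated assumptions (the Python cryptography package; how the master key store itself is secured is out of scope here).

# Sketch of envelope encryption: each file gets its own data key, and that data
# key is wrapped (encrypted) with a master key that never leaves your key store.
from cryptography.fernet import Fernet

def wrap_new_data_key(master_key: bytes) -> tuple[bytes, bytes]:
    """Return (plaintext data key, wrapped data key)."""
    data_key = Fernet.generate_key()
    wrapped = Fernet(master_key).encrypt(data_key)
    return data_key, wrapped

def unwrap_data_key(master_key: bytes, wrapped: bytes) -> bytes:
    return Fernet(master_key).decrypt(wrapped)

if __name__ == "__main__":
    master_key = Fernet.generate_key()      # held in a key vault or HSM, not in the cloud
    data_key, wrapped = wrap_new_data_key(master_key)
    # Store wrapped next to the ciphertext; only the master key can recover it.
    assert unwrap_data_key(master_key, wrapped) == data_key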
CLOUD CREDENTIALS
The basic value proposition of the cloud is that it offers near-unlimited storage for everyone.
This means that even an enterprise’s data is usually stored along with other customers’ data,
leading to potential data breaches via third parties. This is mitigated - in theory - by the fact
that cloud access is restricted based on user credentials; however, those credentials are also
stored on the cloud and can vary significantly in security strength based on individual users'
password habits, meaning that even the credentials are subject to compromise. While a
credential compromise may not give attackers access to the data within your files, it could
allow them to perform other tasks such as making copies or deleting them. The only way to
overcome this security threat is by encrypting your sensitive data and securing your own
unique credentials, which might require you to invest in a secure password management
service.
While cloud storage and file sharing services can offer great value to enterprises through their
flexibility, scalability, and cost savings, it is critical that organizations address these security
concerns by implementing a comprehensive cloud security strategy before adopting or
transitioning to cloud services.
Virtualization security management
What Does Virtualization Security Mean?
Virtualization security is the collective measures, procedures and processes that ensure the
protection of a virtualization infrastructure / environment.
It addresses the security issues faced by the components of a virtualization environment and
the methods through which they can be mitigated or prevented.
Virtualization Security
Virtualization security is a broad concept that includes a number of different methods to
evaluate, implement, monitor and manage security within a virtualization infrastructure /
environment.
Typically, virtualization security may include processes such as:

 Implementation of security controls and procedures granularly at each virtual machine.
 Securing virtual machines, virtual networks, and other virtual appliances against attacks
and vulnerabilities that surface from the underlying physical device.
 Ensuring control and authority over each virtual machine.
 Creation and implementation of security policies across the infrastructure / environment.
Real threats in virtualized environments: Identifying and mitigating the risks
Virtualization technologies and cloud computing have made significant changes to the way
IT environments are managed and administered. But as many IT pros are learning,
virtualized environments are subject to different risks than traditional IT environments.

It is critical for organizations that are implementing virtualization technologies to
understand the kind of risks that such systems face, and put in mitigating measures. In this
guide, we tell you how to do that.

VM sprawl
VM sprawl is the terminology used to describe the situation where the
number of VMs on a network goes beyond the point where they can be
managed effectively. While setting up VMs can be easier than setting up
real physical machines, virtual machines have basically the same licensing,
security, and compliance requirements as real physical machines.
It is often seen that VMs are quickly set up but fall behind in terms of
having the most up-to-date patches and configuration. This can open up
security-related vulnerabilities.
There are several tools available for managing VM sprawl that provide a single user
interface from which all the VMs running on a network can be monitored and managed.
Applications such as V-Commander from Embotics host all the relevant information,
including physical machine mapping, storage locations, and software licenses.
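As a starting point for reining in sprawl, the sketch below builds a simple inventory of every defined VM on a libvirt/KVM host. It assumes the virsh command-line tool is available and is not a substitute for the management products named above.

# Sketch: inventory every defined VM, including powered-off ones, on a libvirt/KVM host.
import subprocess

def list_all_vms() -> list[dict]:
    """Return name/state pairs for every defined VM, including powered-off ones."""
    names = subprocess.run(["virsh", "list", "--all", "--name"],
                           capture_output=True, text=True, check=True).stdout
    vms = []
    for name in filter(None, names.splitlines()):
        state = subprocess.run(["virsh", "domstate", name],
                               capture_output=True, text=True, check=True).stdout.strip()
        vms.append({"name": name, "state": state})
    return vms

if __name__ == "__main__":
    for vm in list_all_vms():
        print(f"{vm['name']}: {vm['state']}")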
Complexity of monitoring
One of the risks with virtualized platforms is that the number of layers
through which VM infrastructure is implemented is enormous.
Troubleshooting events, activity logs, and crashes can be quite difficult. It is vital to set up
software tools properly to ensure that all information necessary for monitoring can be
captured accurately.
Data loss, theft, and hacking
Just like physical machines, virtual machines also contain a lot of critical,
sensitive data such as personal data, user profiles, passwords, license keys,
and history. While the risk of data loss is immense with both physical and
virtual machines, the risk is much greater with virtual machines as it is
much easier to move files and images from virtual machines than it is to
hack into physical machines via network links.
Many images and snapshots of virtual machines are captured in order to deploy new
machines or restore systems, and these files can be prone to data theft.
There are a couple of ways in which such risks can be mitigated. Using a
private key-based encryption solution is one way and it is also vital to have
comprehensive policies and controls around the storage of images and
snapshots.
Lack of visibility into virtual network
traffic
One of the biggest challenges with virtualization is the lack of visibility
into virtual networks used for communications between virtual machines.
This poses problems when enforcing security policies since traffic flowing
via virtual networks may not be visible to devices such as intrusion-
detection systems installed on a physical network.
This is due to the nature of virtualized systems. Traffic flowing between virtual machines
on the same host may never cross the physical network, and the hypervisor is generally not
able to monitor all communications happening between virtual machines.
Software tools such as Wireshark can monitor virtual network traffic, and it is essential
to use them. Also, you should
consider a hypervisor that can monitor each operating system instance
separately.
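As a minimal illustration of regaining visibility, the sketch below captures traffic on a virtual bridge. It assumes the pyshark package (a wrapper around tshark), a Linux bridge named virbr0, and sufficient privileges to capture on it.

# Sketch: watch traffic crossing a virtual bridge so inter-VM flows are not invisible.
import pyshark

def watch_bridge(interface: str = "virbr0", packet_limit: int = 50) -> None:
    capture = pyshark.LiveCapture(interface=interface)
    for count, pkt in enumerate(capture.sniff_continuously(), start=1):
        try:
            print(f"{pkt.ip.src} -> {pkt.ip.dst} ({pkt.highest_layer})")
        except AttributeError:
            pass  # non-IP frames (ARP, etc.) have no ip layer
        if count >= packet_limit:
            break

if __name__ == "__main__":
    watch_bridge()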
Offline and dormant VMs
One of the significant loopholes of virtualized systems is with offline or
dormant virtual machines. Virtual machines, by nature, can be provisioned
dynamically whenever needed. Similarly, virtual machines can also be
suspended (made dormant) or brought offline based on the resourcing
needs of the moment.
What happens with dormant or offline VMs is that security software
updates and deployment of critical code patches stop happening. This
makes them out of date for the period they are offline or dormant. So,
when they are again brought online and provisioned, a point of
vulnerability opens up until their patches and software updates are brought
up to date. This increases the risk of data theft from the relevant images.
To meet this challenge, you need to have specific policies set up to
manage offline and dormant VMs. It is also necessary to use software tools
that recognize the moment dormant or offline VMs are brought back
online and ensure that their configuration is brought back in sync
immediately.
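A minimal sketch of such a policy tool is shown below: it polls VM states on a libvirt/KVM host and triggers a patch run the moment a dormant VM returns. The virsh dependency and the trigger_patch_job hook are assumptions; the hook stands in for whatever patch-management system you actually use.

# Sketch: notice when a dormant VM comes back online and queue a patch run so it
# does not re-enter service with stale updates.
import subprocess
import time

def vm_state(name: str) -> str:
    return subprocess.run(["virsh", "domstate", name],
                          capture_output=True, text=True, check=True).stdout.strip()

def trigger_patch_job(name: str) -> None:
    # Hypothetical hook: replace with a call to your patch-management tool.
    print(f"[policy] {name} is back online - queueing OS and AV updates")

def watch(vms: list[str], interval: int = 60) -> None:
    last = {name: vm_state(name) for name in vms}
    while True:
        for name in vms:
            current = vm_state(name)
            if last[name] != "running" and current == "running":
                trigger_patch_job(name)   # the VM just woke up: bring it back in sync
            last[name] = current
        time.sleep(interval)

if __name__ == "__main__":
    watch(["web01", "db01"])   # placeholder VM names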
Hypervisor security
The hypervisor is a software layer between the underlying hardware
platform and the virtual machines. It provides one more possible attack
point for hackers to gain access to VMs.
This is a potentially serious vulnerability as the hypervisor is the program
that controls the operation of the VMs. There can even be entry points via
the VMs themselves whereby malware that has infected one particular VM
is able to penetrate the hypervisor and by doing so, also compromise other
VMs that the hypervisor controls.
Risk mitigation against hypervisor attacks takes several forms:
 Hardening the hypervisor configuration to disable high-risk features such as memory
sharing between VMs running on the same hypervisor, file sharing services, and clipboard
sharing (see the sketch below)
 Connecting only those physical devices that are actually being used
 Analyzing hypervisor logs on a regular basis
 Using hypervisor monitoring technologies such as Intel Trusted Execution Technology
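To illustrate the first bullet, the sketch below appends a few hardening settings to a VMware-style .vmx configuration file. The specific keys are assumptions drawn from common hardening guidance and should be checked against your hypervisor's documentation.

# Sketch: enforce a few hardening settings in a VMware-style .vmx file.
from pathlib import Path

HARDENING = {
    "isolation.tools.copy.disable": "TRUE",    # block clipboard copy out of the guest (assumed key)
    "isolation.tools.paste.disable": "TRUE",   # block clipboard paste into the guest (assumed key)
    "isolation.tools.dnd.disable": "TRUE",     # block drag-and-drop file transfer (assumed key)
    "sched.mem.pshare.enable": "FALSE",        # disable inter-VM memory page sharing (assumed key)
}

def harden_vmx(path: Path) -> None:
    lines = path.read_text().splitlines()
    present = {line.split("=", 1)[0].strip() for line in lines if "=" in line}
    for key, value in HARDENING.items():
        if key not in present:
            lines.append(f'{key} = "{value}"')
    path.write_text("\n".join(lines) + "\n")

if __name__ == "__main__":
    harden_vmx(Path("example-vm.vmx"))   # hypothetical VM configuration file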
Execution of VMs with different trust
levels
Typically, it is seen that VMs with varying trust levels are operated from
the same physical server. This can create potential attack points because
VMs with lower trust levels will typically have security controls applied
that are weaker than VMs with higher trust levels.
What this means is that there could be possible pathways to compromise
VMs with higher trust levels via VMs with lower trust levels.
Ideally, you should be able to run workloads of different trust levels on
different physical or logical networks and servers. Firewalls should be
used to isolate VM groups from other groups.
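As one hedged illustration of firewalling VM groups apart, the sketch below adds a host firewall rule that drops traffic forwarded from a low-trust bridge to a high-trust bridge. The bridge names are placeholders, and an iptables-based Linux host with root privileges is assumed.

# Sketch: block forwarding from a low-trust VM segment to a high-trust one.
import subprocess

def isolate(low_trust_bridge: str = "br-low", high_trust_bridge: str = "br-high") -> None:
    rule = ["iptables", "-A", "FORWARD",
            "-i", low_trust_bridge, "-o", high_trust_bridge, "-j", "DROP"]
    subprocess.run(rule, check=True)   # drop everything from low- to high-trust

if __name__ == "__main__":
    isolate()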
Pathways from public cloud in hybrid
cloud systems
Any hybrid cloud system is built with both public and private cloud
components. In such cases, weaknesses and risks can arise because even
though data is being exchanged between the public cloud and the private
cloud, the same authentication and encryption standards may not be
applied on the public cloud end.
To mitigate this risk, you should consider applying a common set of
enterprise level security and compliance standards to hybrid cloud
systems, be it on the public or the private cloud. Or you may wish to build
an identity-management service that provides one service to systems in
either cloud. The choice is yours.
VM Security Recommendations
We strongly recommend you treat each virtual machine as though it is a real machine for the purposes of security.

1. Install Anti-Virus Software
While MIT does its best to prevent virus attacks, no computer is immune to them. Anti-virus software needs to be
installed separately on the virtual machine, even if virus protection is already installed on the Macintosh operating
system itself. Sophos, the anti-virus software distributed and supported by IS&T at no cost, includes protection
against viruses, Trojans, worms and spyware, as well as adware.

2. While virus protection software offers some protection from spyware, we recommend using Windows Defender on
your Windows virtual machines for additional protection. Defender is included with Windows. To find it, click on
the Start button and type "Defender" in the search box.

3. Choose Strong Passwords
Weak passwords can be guessed, thus giving someone else access to your files and your system. Create passwords
that are at least eight characters long, containing numbers, upper and lower case letters, and symbols. More
information on creating strong passwords can be found at Strong Passwords. A short password-generation sketch
appears after these recommendations.

4. Keep your Operating Systems Updated
It is equally important to keep your host and virtual operating systems updated as compromises can occur in either
kind of system. Install operating system security updates to keep your system current and protected from known
vulnerabilities. We strongly recommend utilizing automatic updates, but note that virtual systems can only take
updates when they are running. If your virtual system has not been started in some time (or is rarely left running
long enough to take an update), we recommend you run a manual update as soon as you start your virtual system.
For more information, see: MIT Windows Automatic Update Service, Red Hat Network.

5. Maintain Like Risk Postures for All Machines (Virtual and Host)
Your system is only as secure as the least secure virtual or host machine. All guests on a host machine should have
a like risk posture - the same level of accessibility, data sensitivity, and level of protection. If any guest is more
vulnerable than the other guests or your host, it could be an entry point for compromising the rest of your system.

6. Limit Host Access
Access to the host should be limited (firewalled off).


7. Snapshots of Virtual Machines
When taking a snapshot of a virtual machine and then branching off, make sure to save the image at the instance
before the branch (the trunk) rather than at the branch level to ensure security patches are most up to date.
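As referenced in recommendation 3, the following minimal sketch generates passwords that meet the length and character-class guidance above, using Python's cryptographically secure secrets module; the 16-character default is an assumption, not an MIT requirement.

# Sketch: generate passwords with mixed character classes from a secure source.
import secrets
import string

def strong_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

if __name__ == "__main__":
    print(strong_password())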

Best Practices
 Don't register a virtual machine for DHCP on wireless.
 When copying or backing up a VM image:
1. Make sure the virtual machine is powered off.
2. Do not copy the lockfile directory (the only subdirectory that ends in ".lck").
 When restoring from backup, use move, not copy. This prevents issues with duplicate MAC addresses on the same
network.
 Treat each VM as a standalone computer for security purposes. Install virus scanning software. Take regular OS
updates.
 Enable "Time synchronization between the virtual machine and the host operating system" via the VMware Tools
installed on the virtual machine.
 Networking: use NAT Networking. This should be the default setting for your virtual machines. Advanced users,
particularly those running Linux guests, may discover they want or need to deal with the additional complexity of
setting up a Bridged network interface.
 Carefully plan your disk allocations. Do not over-allocate your disk. It is dangerous to tell VMware to make
images that, if they all grew to their full size, would take up more disk space than you have free. If this happens,
VMware may pop up an alert warning you when you're about to use up more space than you have. That would
give you a chance to free up disk space or exit cleanly. We don't recommend relying on the warning; there's no
guarantee it will appear before bad things (data loss or corruption) happen.

Backups
The importance of backing up your data cannot be stressed enough. Virtual machines are at just as much risk, if not more,
for data loss due to hardware failure, file corruption, system compromise, and other events. If data loss happens, a backup
can make a world of difference in recovering from such an event. How you use your virtual machine (VM) will determine
the best way to do backups for your VMs.

1. You have important software/data in the VM (research, data, etc.):
Install Code42 within your virtual machine and have it run regular backups of the data within your virtual
machine. This method does not preserve your virtual machine, just the data within it. For more information on
using Code42 for virtual machines, see: Code42 (Formerly Crashplan) Backup Accounts.

2. Your VM is an appliance:
We recommend that the system administrator manually make backups. This preserves both the virtual machine
and your data within it. Simply drag and copy the VM somewhere (e.g., an external drive). Exclude your VM files
from regular backups via Code42 (see the notes below for reasons). For more information, see: Q. I want to
make a backup/copy of my virtual machine. What is the best way to do so?

Things to note regarding virtual machine backups:
 A virtual machine image actually comprises several files. All of those have to be in sync or behavior is erratic.
 From outside the virtual machine (on the host machine), if a backup is made while the virtual machine is running,
the results are inconsistent. Back up your virtual machine files on the host machine when the virtual machine is
not running.
 To back up virtual machines using Mac OS X 10.5's Time Machine, users will need to be running Mac OS X 10.5.2
or later. When backed up using Time Machine, virtual machines are duplicated and may take up considerable
space on your backup drive.

VM specific security techniques:
Install and use virus scanning software. Take regular updates to your operating
system, preferably via an automatic update system. Make regular backups of
important data. Follow the recommended best practices for your guest operating
system. In most cases, simply treat your virtual workstation as you would any other
machine.

Security Risks Specific to Virtual Machines
While virtual machines are at risk of all the same things as any other machine, you should be aware of a few additional
issues.

1. If a host is compromised, scripts can be run on the host that interact with the guest at whatever privilege level
the guest is logged in as. This can result in malicious trojans being installed on both the host and guest machines.

2. A virtual machine that is not virus protected, becomes compromised, and is in a shared networking configuration
can be used by an attacker to scan both the private and public address spaces. The other virtual machines on the
host (if not patched) can also be exploited via the network, so a software firewall on each of the guests is
recommended.

3. (Enterprise version) When shared folders are turned on, they can be accessed through a compromised guest. Files
can then be placed on the host, and attackers can access other guests' file systems.

Secure execution environments and communication in cloud
Cloud Computing Vulnerabilities
When deciding to migrate to the cloud, we have to consider the following cloud
vulnerabilities:
 Session riding: session riding happens when an attacker steals a user’s cookie to use
the application in the name of the user. An attacker might also use CSRF attacks in
order to trick the user into sending authenticated requests to arbitrary web sites to
achieve various things.
 Virtual Machine Escape: in virtualized environments, the physical servers run
multiple virtual machines on top of hypervisors. An attacker can exploit a hypervisor
remotely by using a vulnerability present in the hypervisor itself – such vulnerabilities
are quite rare, but they do exist. Additionally, a virtual machine can escape from the
virtualized sandbox environment and gain access to the hypervisor and
consequentially all the virtual machines running on it.
 Reliability and Availability of Service: we expect our cloud services and
applications to always be available when we need them, which is one of the reasons
for moving to the cloud. But this isn’t always the case, especially during severe weather,
when power outages are common.
 Insecure Cryptography: cryptography algorithms usually require random number
generators, which use unpredictable sources of information to generate actual random
numbers, which is required to obtain a large entropy pool. If the random number
generators are providing only a small entropy pool, the numbers can be brute forced.
In client computers, the primary sources of randomness are user mouse movements and
key presses, but servers mostly run without user interaction, which means fewer sources
of randomness. Virtual machines must therefore rely on whatever sources they have
available, which could result in easily guessable numbers that don’t provide much entropy
for cryptographic algorithms (a short sketch of using an OS-backed CSPRNG follows this list).
 Internet Dependency: by using the cloud services, we’re dependent upon the Internet
connection, so if the Internet temporarily fails due to a lightning strike or ISP
maintenance, the clients won’t be able to connect to the cloud services. Therefore, the
business will slowly lose money, because the users won’t be able to use the service
that’s required for the business operation. Not to mention the services that need to be
available 24/7, like applications in a hospital, where human lives are at stake.
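As flagged under Insecure Cryptography above, the sketch below shows the safer pattern of drawing key material and tokens from the operating system's CSPRNG (via Python's secrets module) rather than from a user-space PRNG, which matters on servers and VMs with few entropy sources.

# Sketch: prefer the OS CSPRNG over a user-space PRNG for cryptographic material.
import secrets

session_token = secrets.token_urlsafe(32)   # 256 bits from the OS entropy pool
aes_key = secrets.token_bytes(32)           # raw 256-bit key material

# By contrast, the random module is a deterministic Mersenne Twister PRNG and
# must never be used for keys, nonces, or session tokens.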
Cloud Computing Threats
Before deciding to migrate to the cloud, we have to look at the cloud security vulnerabilities
and threats to determine whether the cloud service is worth the risk due to the many
advantages it provides. The following are the top security threats in a cloud environment [1,
2, 3]:
 Ease of Use: cloud services can easily be misused by malicious attackers, since the
registration process is very simple: all that is required is a valid credit card.
In some cases we can even pay for the cloud service by using PayPal, Western Union,
Payza, Bitcoin, or Litecoin, in which cases we can stay totally anonymous. The cloud
can be used maliciously for various purposes like spamming, malware distribution,
botnet C&C servers, DDoS, password/hash cracking, etc.
 Secure Data Transmission: when transferring the data from clients to the cloud, the
data needs to be transferred by using an encrypted secure communication channel like
SSL/TLS. This prevents different attacks like MITM attacks, where the data could be
stolen by an attacker intercepting our communication.
 Insecure APIs: various cloud services on the Internet are exposed by application
programming interfaces. Since the APIs are accessible from anywhere on the Internet,
malicious attackers can use them to compromise the confidentiality and integrity of
the enterprise customers. Therefore it’s imperative that cloud services provide a
secure API, rendering such attacks worthless.
 Malicious Insiders: employees working at a cloud service provider could have
complete access to the company’s resources. Therefore, cloud service providers must
have proper security measures in place to track employee actions such as viewing a
customer’s data. If cloud service providers don’t follow best security guidelines and
don’t implement a security policy, employees can gather confidential information from
arbitrary customers without being detected.
 Shared Technology Issues: the cloud service SaaS/PaaS/IaaS providers use scalable
infrastructure to support multiple tenants which share the underlying infrastructure.
Directly on the hardware layer, there are hypervisors running multiple virtual
machines, themselves running multiple applications. On the highest layer, there are
various attacks on the SaaS where an attacker is able to get access to the data of
another application running in the same virtual machine. All layers of shared
technology can be attacked to gain unauthorized access to data, like: CPU, RAM,
hypervisors, applications, etc.
 Data Loss: the data stored in the cloud could be lost due to hard drive failure, a CSP
could accidentally delete the data, an attacker might modify the data, and so on. The
best way to protect against data loss is to keep a proper data backup. Data loss can
have catastrophic consequences for the business, possibly even bankruptcy, which is
why keeping the data backed up is always the best option.
 Data Breach: when a virtual machine is able to access the data from another virtual
machine on the same physical host, a data breach occurs – the problem is much more
prevalent when the tenants of the two virtual machines are different customers. The
side-channel attacks are valid attack vectors and need to be addressed in everyday
situations. A side-channel attack occurs when a virtual machine can use a shared
component like processor’s cache to access the data of another virtual machine
running on the same physical host.
 Account/Service Hijacking: it’s often the case that only a password is required to
access our account in the cloud and manipulate the data, which is why the usage of
two-factor authentication is preferred. Nevertheless, an attacker gaining access to our
account can manipulate and change the data and therefore make the data
untrustworthy. An attacker having access to the cloud virtual machine hosting our
business website can include a malicious code into the web page to attack users
visiting our web page – this is known as the watering hole attack. An attacker can also
disrupt the service by turning off the web server serving our website, rendering it
inaccessible.
 Unknown Risk Profile: we have to take all security implications into account when
moving to the cloud, including constant software security updates, monitoring
networks with IDS/IPS systems, log monitoring, integrating SIEM into the network,
etc. There might be multiple attacks that haven’t even been discovered yet, but they
might prove to be highly threatening in the years to come.
 Denial of Service: an attacker can issue a denial of service attack against the cloud
service to render it inaccessible, therefore disrupting the service. There are a number
of ways an attacker can disrupt the service in a virtualized cloud environment: by
using all its CPU, RAM, disk space or network bandwidth.
 User Awareness: the users of the cloud services should be educated regarding
different attacks, because the weakest link is often the user itself. There are multiple
social engineering attack vectors that an attacker might use to lure the victim into
visiting a malicious web site, after which he can get access to the user’s computer.
From there, he can observe user actions and view the same data the user is viewing,
not to mention that he can steal user’s credentials to authenticate to the cloud service
itself.