CloudNative I

The document discusses cloud-native computing, emphasizing its architecture, benefits, and implementation through microservices and automation. It outlines the pillars of cloud-native systems, including containers, modern design, and backing services, while highlighting the advantages of agility, scalability, and rapid deployment. Companies like Netflix and Uber exemplify successful cloud-native applications, showcasing their ability to adapt quickly to market demands.

ELL 887 - CLOUD COMPUTING

Cloud Native –
Introduction; Pillars of Cloud Native; Cloud Native Applications

Outline
• Introduction
• Pillars of Cloud Native
• Cloud Native Applications
• Cloud-native Communication Patterns
• Cloud-native Data Patterns
• Cloud-native Resiliency
• Monitoring & Health
• Cloud-native Identity
• Cloud-native Security
• DevOps

Introduction
 Cloud-native architecture and technologies are an approach to designing, constructing, and operating
workloads that are built in the cloud and take full advantage of the cloud computing model.
 The Cloud Native Computing Foundation provides the official definition:
• Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic
environments such as public, private, and hybrid clouds.
• Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.
 These techniques enable loosely coupled systems that are resilient, manageable, and observable.
 Combined with robust automation, they allow engineers to make high-impact changes frequently and
predictably with minimal toil.
 Cloud native is about speed and agility.
• Business systems are evolving from enabling business capabilities to weapons of strategic transformation that
accelerate business velocity and growth.
• It's imperative to get new ideas to market immediately.
 At the same time, business systems have also become increasingly complex with users demanding
more.
• They expect rapid responsiveness, innovative features, and zero downtime.
• Performance problems, recurring errors, and the inability to move fast are no longer acceptable.
 Cloud-native systems are designed to embrace rapid change, large scale, and resilience.

Companies implementing Cloud Native


• Many companies, such as Netflix, Uber, and WeChat, operate cloud-native systems
that consist of many independent services.
• This architectural style enables them to rapidly respond to market conditions.
• They instantaneously update small areas of a live, complex application, without
a full redeployment.
• They individually scale services as needed.

Company   Experience
Netflix   600+ services in production. Deploys 100 times per day.
Uber      1,000+ services in production. Deploys several thousand times each week.
WeChat    3,000+ services in production. Deploys 1,000 times a day.

Cloud Native Computing Foundation


• https://www.cncf.io/
• The Cloud Native Computing Foundation (CNCF) is an open-source, vendor-neutral
foundation that helps organizations kick-start their cloud-native journey.
• It is a consortium of over 400 major corporations.
• Established in 2015, the CNCF supports the open-source community in developing
critical cloud-native components and works to make cloud native universal and
sustainable.
• Its charter is to make cloud-native computing ubiquitous across technology
and cloud stacks.
• As one of the most influential open-source groups, it hosts many of the
fastest-growing open-source projects on GitHub.
• These projects include Kubernetes, Prometheus, Helm, Envoy, and gRPC.

Pillars of Cloud Native

1. Cloud Infrastructure
2. Containers
3. Microservices
4. Modern Design
5. Backing Services
6. Automation

Microservices
 Cloud-native systems embrace microservices, a popular architectural style for
constructing modern applications.
 Built as a distributed set of small, independent services that interact through a
shared fabric, microservices share the following characteristics:
• Each implements a specific business capability within a larger domain context.
• Each is developed autonomously and can be deployed independently.
• Each is self-contained, encapsulating its own data storage technology, dependencies,
and programming platform.
• Each runs in its own process and communicates with others using standard
communication protocols such as HTTP/HTTPS, gRPC, WebSockets, etc.
• They compose together to form an application.
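These characteristics show up even in a tiny service. Below is a minimal sketch, using only Python's standard library, of a hypothetical "pricing" microservice that owns one business capability and exposes it over a standard protocol (plain HTTP). The service name, route, and payload are illustrative, not from the slides.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class PricingHandler(BaseHTTPRequestHandler):
    """A hypothetical 'pricing' microservice: one capability, one HTTP interface."""
    def do_GET(self):
        if self.path == "/price":
            body = json.dumps({"sku": "demo", "price": 9.99}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):  # silence request logging for the demo
        pass

server = HTTPServer(("127.0.0.1", 0), PricingHandler)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another service (or a client) talks to it over the standard protocol.
with urlopen(f"http://127.0.0.1:{server.server_port}/price") as resp:
    payload = json.loads(resp.read())
server.shutdown()
print(payload)  # {'sku': 'demo', 'price': 9.99}
```

A real microservice would add its own datastore and run in its own container, but the shape — a small process answering a well-known protocol — is the same.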

Monolithic vs Microservices

• A monolithic application is a single, unified software application that is
self-contained and independent of other applications.
• It is composed of a layered architecture, which executes in a single
process. It typically consumes a relational database.
• The microservice approach, however, segregates functionality into
independent services, each with its own logic, state, and data.
• Each microservice hosts its own datastore.

Why Microservices?
 Microservices provide agility.
 Each microservice has an autonomous lifecycle and can evolve
independently and deploy frequently.
• You don't have to wait for a quarterly release to deploy a new feature or
update.
• You can update a small area of a live application with less risk of disrupting the
entire system.
• The update can be made without a full redeployment of the application.
 Each microservice can scale independently.
• Instead of scaling the entire application as a single unit, you scale out only
those services that require more processing power to meet desired
performance levels and service-level agreements.
• Fine-grained scaling provides for greater control of your system and helps
reduce overall costs as you scale portions of your system, not everything.

Microservice Challenges
 Communication
• How will front-end client applications communicate with back-end core microservices?
− Will you allow direct communication?
− Or, might you abstract the back-end microservices with a gateway facade that provides flexibility, control, and security?
• How will back-end core microservices communicate with each other?
− Will you allow direct HTTP calls that can increase coupling and impact performance and agility?
− Or might you consider decoupled messaging with queue and topic technologies?
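The trade-off between direct calls and decoupled messaging can be sketched with an in-process stand-in for a broker. This is a minimal illustration, assuming Python's `queue.Queue` in place of a real queue or topic technology:

```python
import queue
import threading

# Instead of a direct, blocking call, Service A publishes a message and
# Service B consumes it when ready - the two services stay decoupled.
orders = queue.Queue()   # stand-in for a broker queue/topic
processed = []

def service_b():
    while True:
        msg = orders.get()
        if msg is None:          # sentinel: stop consuming
            break
        processed.append(f"handled:{msg}")
        orders.task_done()

worker = threading.Thread(target=service_b)
worker.start()

# Service A fires and forgets; it never waits on Service B directly.
for order_id in (1, 2, 3):
    orders.put(order_id)
orders.put(None)
worker.join()
print(processed)  # ['handled:1', 'handled:2', 'handled:3']
```

With a real broker the producer and consumer would also survive each other's downtime, which is exactly what the direct-HTTP approach cannot offer.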

 Resiliency
• A microservices architecture moves your system from in-process to out-of-process network
communication.
• In a distributed architecture, what happens when Service B isn't responding to a network call from Service
A?
• Or, what happens when Service C becomes temporarily unavailable and other services calling it become
blocked?
 Distributed Data
• By design, each microservice encapsulates its own data, exposing operations via its public interface.
• How, then, do you query data or implement a transaction across multiple services?
 Secrets
• How will your microservices securely store and manage secrets and sensitive configuration data?
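One common answer to the "Service B isn't responding" question is to retry transient failures with exponential backoff. The sketch below illustrates the idea; it is not any particular resiliency library, and the flaky service and delay values are contrived for the demo:

```python
import time

def call_with_retry(operation, attempts=3, base_delay=0.01):
    """Retry an operation on ConnectionError with exponential backoff."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise                              # out of retries: surface the failure
            time.sleep(base_delay * 2 ** attempt)  # back off: 0.01s, 0.02s, ...

# Simulate Service B failing twice, then recovering.
calls = {"n": 0}
def flaky_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("Service B unavailable")
    return "ok"

result = call_with_retry(flaky_service)
print(result, calls["n"])  # ok 3
```

Production systems typically pair retries with timeouts and circuit breakers so that a chronically failing service is not hammered forever.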

Modern Design – 12 Factor Applications



Additional Factors
• In the book, Beyond the Twelve-Factor App, author Kevin Hoffman details each
of the original 12 factors (written in 2011).
• Additionally, he discusses three extra factors that reflect today's modern cloud
application design.

Backing Services
Cloud-native systems depend upon many different ancillary resources, such as
data stores, message brokers, monitoring, and identity services. These services
are known as backing services.

Backing Services
 You could host your own backing services, but then you'd be responsible for
licensing, provisioning, and managing those resources.
 Cloud providers offer a rich assortment of managed backing services.
• Instead of owning the service, you simply consume it.
• The cloud provider operates the resource at scale and bears the responsibility for
performance, security, and maintenance.
• Monitoring, redundancy, and availability are built into the service.
• Providers guarantee service level performance and fully support their managed services
- open a ticket and they fix your issue.
• Cloud-native systems favor managed backing services from cloud vendors.
• The savings in time and labor can be significant.
• The operational risk of hosting your own and experiencing trouble can get expensive fast.

Backing Services
 A best practice is to treat a backing service as an attached resource, dynamically
bound to a microservice with configuration information (a URL and credentials)
stored in an external configuration.
• This guidance is spelled out in the Twelve-Factor Application:
− Factor #4 specifies that backing services "should be exposed via an addressable URL. Doing so
decouples the resource from the application, enabling it to be interchangeable."
− Factor #3 specifies that "Configuration information is moved out of the microservice and
externalized through a configuration management tool outside of the code."
• With this pattern, a backing service can be attached and detached without code
changes.
• You might promote a microservice from QA to a staging environment.
• You update the microservice configuration to point to the backing services in staging and
inject the settings into your container through an environment variable.
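That promotion flow can be sketched in a few lines: the service resolves the backing-service address from an environment variable that the platform injects, so promoting from QA to staging changes configuration, not code. The variable name and URL below are hypothetical:

```python
import os

# In a container, the platform would inject this variable;
# we set it here only so the sketch is runnable.
os.environ["DATABASE_URL"] = "postgres://staging-db.example.com:5432/app"

def get_backing_service_url(name):
    """Resolve a backing service from external configuration (Factors #3 and #4)."""
    url = os.environ.get(name)
    if url is None:
        raise RuntimeError(f"missing required configuration: {name}")
    return url

db_url = get_backing_service_url("DATABASE_URL")
print(db_url)  # postgres://staging-db.example.com:5432/app
```

Because the URL is an addressable attached resource, the same image can be pointed at a different database simply by changing the injected value.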

Backing Services
 Cloud vendors provide APIs for you to communicate with their proprietary backing
services.
• These libraries encapsulate the proprietary plumbing and complexity.
• However, communicating directly with these APIs will tightly couple your code to that
specific backing service.
• It's a widely accepted practice to insulate the implementation details of the vendor API.
• Introduce an intermediation layer, or intermediate API, exposing generic operations to
your service code and wrap the vendor code inside it.
• This loose coupling enables you to swap out one backing service for another or move
your code to a different cloud environment without having to make changes to the
mainline service code.
• Backing services also promote the Statelessness principle from the Twelve-Factor
Application.
• Factor #6 specifies that, "Each microservice should execute in its own process, isolated
from other running services. Externalize required state to a backing service such as a
distributed cache or data store."
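A minimal sketch of such an intermediation layer follows, assuming an in-memory stand-in for the vendor adapter (a real adapter would wrap a cloud provider's SDK behind the same interface):

```python
class BlobStore:
    """Generic operations the mainline service code programs against."""
    def put(self, key, data):
        raise NotImplementedError
    def get(self, key):
        raise NotImplementedError

class InMemoryBlobStore(BlobStore):
    """Stand-in for a vendor adapter; swap this class to change providers."""
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

def archive_report(store: BlobStore, report: bytes) -> None:
    # Mainline code: no vendor types leak in, so moving to another cloud
    # means writing a new adapter, not rewriting this function.
    store.put("reports/latest", report)

store = InMemoryBlobStore()
archive_report(store, b"q3 numbers")
print(store.get("reports/latest"))  # b'q3 numbers'
```

The point of the pattern is that only the adapter knows the proprietary plumbing; everything above it stays portable.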

Automation
 Cloud-native systems embrace microservices, containers, and modern
system design to achieve speed and agility.
 But, that's only part of the story.
• How do you provision the cloud environments upon which these systems run?
• How do you rapidly deploy app features and updates?
• How do you round out the full picture?
 Enter the widely accepted practice of Infrastructure as Code, or IaC.
• With IaC, you automate platform provisioning and application deployment.
• You essentially apply software engineering practices such as testing and versioning
to your DevOps practices.
• Your infrastructure and deployments are automated, consistent, and repeatable.

Automating Infrastructure
• Tools like Azure Resource Manager, Azure Bicep, and Terraform from HashiCorp enable you to
declaratively script the cloud infrastructure you require.
• Resource names, locations, capacities, and secrets are parameterized and dynamic.
• The script is versioned and checked into source control as an artifact of your project.
• You invoke the script to provision a consistent and repeatable infrastructure across system
environments, such as QA, staging, and production.
• Under the hood, IaC is idempotent, meaning that you can run the same script over and over
without side effects.
• If the team needs to make a change, they edit and rerun the script.
• Only the updated resources are affected.
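Idempotency can be illustrated without committing to any particular IaC tool: a reconcile function compares desired state with current state and only touches what differs, so re-running the same "script" is a no-op. The resource names and specs below are made up:

```python
# Declarative desired state, analogous to an IaC template.
desired = {"web-vm": {"size": "small"}, "db-vm": {"size": "large"}}

def provision(current, desired):
    """Reconcile actual state toward desired state; safe to run repeatedly."""
    changes = []
    for name, spec in desired.items():
        if current.get(name) != spec:   # only touch resources that drifted
            current[name] = dict(spec)
            changes.append(name)
    return changes

environment = {}
first_run = provision(environment, desired)
second_run = provision(environment, desired)  # idempotent: no side effects
print(first_run, second_run)  # ['web-vm', 'db-vm'] []
```

Real IaC engines work the same way at a larger scale: they diff the declared state against the live environment and apply only the delta.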
• In the article, What is Infrastructure as Code, author Sam Guckenheimer describes how,
"Teams who implement IaC can deliver stable environments rapidly and at scale. They avoid
manual configuration of environments and enforce consistency by representing the desired
state of their environments via code. Infrastructure deployments with IaC are repeatable and
prevent runtime issues caused by configuration drift or missing dependencies. DevOps
teams can work together with a unified set of practices and tools to deliver applications and
their supporting infrastructure rapidly, reliably, and at scale."

Automating Deployments
 The Twelve-Factor Application calls for separate steps when transforming
completed code into a running application.
• Factor #5 specifies that "Each release must enforce a strict separation across the build,
release and run stages. Each should be tagged with a unique ID and support the ability
to roll back."
 Modern CI/CD systems help fulfill this principle. They provide separate build and
delivery steps that help ensure consistent and quality code that's readily available
to users.
 Applying these practices, organizations have radically evolved how they ship
software.
 Many have moved from quarterly releases to on-demand updates.
 The goal is to catch problems early in the development cycle when they're less
expensive to fix.
 The longer the duration between integrations, the more expensive problems
become to resolve.
 With consistency in the integration process, teams can commit code changes
more frequently, leading to better collaboration and software quality.

Deployment Steps in a CI/CD Pipeline

1. The developer constructs a feature in their development environment, iterating through what is called
the "inner loop" of code, run, and debug.
2. When complete, that code is pushed into a code repository, such as GitHub or Bitbucket.
3. The push triggers a build stage that transforms the code into a binary artifact. The work is implemented
with a Continuous Integration (CI) pipeline. It automatically builds, tests, and packages the application.
4. The release stage picks up the binary artifact, applies external application and environment
configuration information, and produces an immutable release. The release is deployed to a specified
environment. The work is implemented with a Continuous Delivery (CD) pipeline. Each release should
be identifiable. You can say, "This deployment is running Release 2.1.1 of the application."
5. Finally, the released feature is run in the target execution environment. Releases are immutable
meaning that any change must create a new release.
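The strict build/release/run separation in these steps can be sketched as three functions, where each release is a uniquely identified artifact that is treated as immutable. This is a toy illustration, not a real CI/CD system:

```python
import hashlib

def build(source: str) -> bytes:
    """CI: transform code into a binary artifact."""
    return source.encode()

def release(artifact: bytes, config: dict) -> dict:
    """CD: combine the artifact with environment config into a tagged release."""
    digest = hashlib.sha256(artifact + repr(sorted(config.items())).encode())
    return {"id": digest.hexdigest()[:8], "artifact": artifact, "config": dict(config)}

def run(rel: dict) -> str:
    """Run: execute a specific, identifiable release in a target environment."""
    return f"running release {rel['id']} against {rel['config']['env']}"

artifact = build("print('hello')")
rel = release(artifact, {"env": "staging"})
print(run(rel))
```

Because the release ID is derived from the artifact plus its configuration, any change produces a new release, which is what makes rollback to a previous ID meaningful.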

Cloud Native Application


 Cloud-native applications are software programs that consist of multiple
small, independent services called microservices.
• Traditionally, developers built monolithic applications with a single block
structure containing all the required functionalities.
• By using the cloud-native approach, software developers break the functionalities
into smaller microservices.
• This makes cloud-native applications more agile as these microservices work
independently and take minimal computing resources to run.

Cloud Native vs Traditional Enterprise Application


 Traditional enterprise applications were built using less flexible software
development methods.
• Developers typically worked on a large batch of software functionalities before
releasing them for testing.
• As such, traditional enterprise applications took longer to deploy and were not
scalable.
 On the other hand, cloud-native applications use a collaborative approach
and are highly scalable on different platforms.
• Developers use software tools to heavily automate building, testing, and
deploying procedures in cloud-native applications.
• You can set up, deploy, or duplicate microservices in an instant, an action that's
not possible with traditional applications.

Cloud Native Application Architecture


 The cloud-native architecture combines software components that development teams use
to build and run scalable cloud-native applications.
 The CNCF lists the following as the technological blocks of cloud-native architecture:
• Immutable infrastructure
− Immutable infrastructure means that the servers for hosting cloud-native applications remain
unchanged after deployment.
− If the application requires more computing resources, the old server is discarded, and the app is
moved to a new high-performance server.
− By avoiding manual upgrades, immutable infrastructure makes cloud-native deployment a predictable
process.
• Microservices
• API
− An Application Programming Interface (API) is a mechanism that two or more software programs use to exchange
information.
− Cloud-native systems use APIs to bring the loosely coupled microservices together.
− An API tells you what data the microservice wants and what results it can give you, instead of specifying the steps
to achieve the outcome.
• Service mesh
− A service mesh is a software layer in the cloud infrastructure that manages the communication between
multiple microservices.
− Developers use the service mesh to introduce additional functions without writing new code in the
application.
• Containers

Cloud Native Application Development - Benefits


 Faster development
• Developers use the cloud-native approach to reduce development time and achieve better
quality applications.
• Instead of relying on specific hardware infrastructure, developers build ready-to-deploy
containerized applications with DevOps practices.
• This allows developers to respond to changes quickly.
• For example, they can make several daily updates without shutting down the app.
 Platform independence
• By building and deploying applications in the cloud, developers are assured of the
consistency and reliability of the operating environment.
• They don't have to worry about hardware incompatibility because the cloud provider takes
care of it.
• Therefore, developers can focus on delivering value in the app instead of setting up the
underlying infrastructure.
 Cost-effective operations
• You only pay for the resources your application actually uses.
• For example, if your user traffic spikes only during certain times of the year, you pay
additional charges only for that time period.
• You do not have to provision extra resources that sit idle for most of the year.

Cloud Native Stack


 The cloud-native stack describes the layers of cloud-native technologies that
developers use to build, manage, and run cloud-native applications.
 Its layers are categorized as follows:
• Infrastructure layer:
− The infrastructure layer is the foundation of the cloud-native stack. It consists of operating
systems, storage, network, and other computing resources managed by third-party cloud
providers.
• Provisioning layer:
− The provisioning layer consists of cloud services that allocate and configure the cloud
environment.
• Runtime layer:
− The runtime layer provides cloud-native technologies for containers to function.
− This comprises cloud data storage, networking capability, and a container runtime.

Cloud Native Stack


• Orchestration and management layer:
− The orchestration and management layer is responsible for integrating the various cloud components
so that they function as a single unit.
− It is similar to how an operating system works in traditional computing.
− Developers use orchestration tools like Kubernetes to deploy, manage, and scale cloud
applications on different machines.
• Application definition and development layer
− This cloud-native stack layer consists of software technologies for building cloud-native
applications.
− For example, developers use cloud technologies like database, messaging, container images,
and continuous integration (CI) and continuous delivery (CD) tools to build cloud applications.
• Observability and analysis tools
− Observability and analysis tools monitor, evaluate, and improve the system health of cloud
applications.
− Developers use tools to monitor metrics like CPU usage, memory, and latency to ensure there is
no disruption to the app's service quality.
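A tiny sketch of that kind of metric collection: given sampled request latencies, derive the aggregate figures (average and 95th percentile) that a dashboard would alert on. The sample values and the simple percentile method are illustrative only:

```python
# Sampled request latencies in milliseconds; the 250 ms outlier is the
# kind of tail latency a p95 metric is designed to surface.
latencies_ms = [12, 15, 11, 14, 250, 13, 16, 12, 15, 14]

def p95(samples):
    """Nearest-rank 95th percentile of a list of samples."""
    ordered = sorted(samples)
    index = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[index]

avg = sum(latencies_ms) / len(latencies_ms)
print(f"avg={avg:.1f}ms p95={p95(latencies_ms)}ms")  # avg=37.2ms p95=250ms
```

Note how the average alone hides the outlier, which is why observability tooling favors percentile metrics for service-quality alerts.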

Candidate Apps for Cloud Native


 Applying cost/benefit analysis, there's a good chance that, for many existing
applications, the cost of becoming cloud native would far exceed the business
value of the application.
 What type of application might be a candidate for cloud native?
• Strategic enterprise systems that need to constantly evolve business capabilities/features
• An application that requires a high release velocity - with high confidence
• A system where individual features must release without a full redeployment of the entire
system
• An application developed by teams with expertise in different technology stacks
• An application with components that must scale independently
 Smaller, less impactful line-of-business applications might fare well with a simple
monolithic architecture hosted in a Cloud PaaS environment.
 Then there are legacy systems.
• While we'd all like to build new applications, we're often responsible for modernizing
legacy workloads that are critical to the business.

Modernizing Legacy Apps


There isn't a single, one-size-fits-all strategy for modernizing legacy applications.

Strategies for migrating legacy workloads



Modernizing Legacy Apps


• Monolithic apps that are non-critical might benefit from a quick lift-and-shift
migration.
• Here, the on-premises workload is rehosted to a cloud-based VM, without changes.
• This approach uses the IaaS (Infrastructure as a Service) model.
• While this strategy can yield some cost savings, such applications typically weren't designed to
unlock and leverage the benefits of cloud computing.
• Legacy apps that are critical to the business often benefit from an enhanced
Cloud Optimized migration.
• This approach includes deployment optimizations that enable key cloud services - without changing
the core architecture of the application.
• For example, you might containerize the application and deploy it to a container orchestrator
• Once in the cloud, the application can consume cloud backing services such as databases,
message queues, monitoring, and distributed caching.
• Finally, monolithic apps that provide strategic enterprise functions might best
benefit from a Cloud-Native approach.
• This approach provides agility and velocity.
• But, it comes at a cost of re-platforming, rearchitecting, and rewriting code.
• Over time, a legacy application could be decomposed into microservices, containerized, and
ultimately re-platformed into a cloud-native architecture.

Modernizing Legacy Apps


 If an organization believes a cloud-native approach is appropriate, it should
rationalize the decision before implementation:
• What exactly is the business problem that a cloud-native approach will solve?
• How would it align with business needs?
• Rapid releases of features with increased confidence?
• Fine-grained scalability - more efficient usage of resources?
• Improved system resiliency?
• Improved system performance?
• More visibility into operations?
• Blend development platforms and data stores to arrive at the best tool for the job?
• Future-proof application investment?
 The right migration strategy depends on organizational priorities and the
systems that are being targeted.
• For many, it may be more cost effective to cloud-optimize a monolithic application or
add coarse-grained services to an N-Tier app.
• In these cases, you can still make full use of cloud PaaS.

Readings
• Architecting Cloud Native .NET Applications for Azure.
https://dotnet.microsoft.com/en-us/download/e-book/cloud-native-azure/pdf
