Cloud-Native Application Development
• Microservices architecture: The app is broken into small, independent services that
can be deployed, scaled, and updated individually.
• Containerization: Each service is packaged in containers (e.g., Docker), ensuring
consistency across different cloud environments.
• Dynamic orchestration: Services are managed using orchestrators like Kubernetes,
automating the deployment, scaling, and management of the application.
• APIs: Communication between services often occurs through lightweight APIs.
• Stateless: Services are generally stateless, storing data in external databases, making
them more resilient to failures.
• DevOps and CI/CD: Cloud-native apps integrate continuous integration, delivery,
and deployment practices to ensure quick iterations and updates.
First, let’s strip all individual technologies from the landscape and look at the
categories. There are different “rows” reflecting architectural layers, each with its own set of
subcategories. In the first layer, you have tools to provision infrastructure; that’s your
foundation. Then you add the tooling needed to run and manage apps, such as the runtime
and orchestration layers. At the very top, you have tools to define and develop your application,
such as databases, image building, and CI/CD tools. The landscape starts with the infrastructure
and, with each layer, moves closer to the actual app. That’s what these layers represent (we’ll
address the two “columns” running across those layers later). Let’s explore the layers one at a
time, starting at the bottom.
1. The Provisioning Layer
Provisioning refers to the tools involved in creating and hardening the foundation on which
cloud native applications are built. It covers everything from automating the creation,
management, and configuration of infrastructure to scanning, signing and storing container
images. Provisioning even extends into the security space by providing tools that allow you to
set and enforce policies, build authentication and authorization into your apps and platforms,
and handle secrets distribution.
2. The Runtime Layer
In the CNCF cloud native landscape, the runtime layer is defined more narrowly, focusing on
the components that matter for containerized apps in particular: what they need to run,
remember, and communicate. These include the container runtime, cloud native storage, and
cloud native networking.
3. The Orchestration and Management Layer
Once you automate infrastructure provisioning following security and compliance standards
(provisioning layer) and set up the tools the app needs to run (runtime layer), engineers must
figure out how to orchestrate and manage their apps. The orchestration and management layer
deals with how all containerized services (app components) are managed as a group. They need
to identify other services, communicate with one another, and coordinate. Inherently scalable,
cloud native apps rely on the automation and resilience enabled by this layer.
4. The Application Definition and Development Layer
Now let’s move to the top layer. As the name suggests, the application definition and
development layer focuses on the tools that enable engineers to build apps and allow them to
function. Everything discussed above was about building a reliable, secure environment and
providing all needed app dependencies.
Going back to the category overview, we’ll explore the two columns running across all layers.
Observability and analysis are tools that monitor all layers. Platforms, on the other hand, bundle
multiple technologies within these layers into one solution, including observability and
analysis.
To limit service disruption and help drive down MTTR (mean time to resolution), you’ll need
to monitor and analyze every aspect of your application so any anomaly gets detected and
rectified right away. Failures are inevitable in complex environments, and these tools reduce
their impact by helping teams identify and resolve them as quickly as possible. Since this
category runs across and monitors all layers, it sits on the side rather than being embedded in a
specific layer.
Here you’ll find monitoring, logging, tracing, and chaos engineering tools.
Platforms
As we’ve seen, each of these modules solves a particular problem. Storage alone does not
provide all you need to manage your app. You’ll need an orchestration tool, container runtime,
service discovery, networking, API gateway, etc. Covering multiple layers, platforms bundle
different tools together to solve a larger problem.
Configuring and fine-tuning different modules so they are reliable and secure, and ensuring
that all the technologies a platform leverages stay updated and patched against vulnerabilities,
is no easy task. With platforms, users don’t have to worry about these details, which is a real
value add.
You’ll probably notice that the categories all revolve around Kubernetes. That’s because
Kubernetes is at the core of the cloud native stack. The CNCF, by the way, was created with
Kubernetes as its first seed project; all other projects followed later.
Cloud-native development involves building and deploying applications designed to take full
advantage of cloud computing models, such as scalability, automation, and elasticity. It uses
concepts like microservices, containers, and dynamic orchestration. There are a variety of
tools that can be used for cloud-native development, which can be categorized into different
areas:
1. Containerization
• Docker: The de facto standard for packaging an application and its dependencies into
portable container images.
2. Orchestration
• Kubernetes: The leading platform for automating deployment, scaling, and
management of containerized applications.
6. Service Mesh
• Istio: A service mesh providing traffic management, security, and observability for
microservices.
• Linkerd: A lightweight, Kubernetes-native service mesh.
7. API Gateways
• Kong: A popular API gateway that provides features like traffic control, load balancing,
and security.
• Envoy: A high-performance, cloud-native proxy and API gateway that integrates with
service meshes like Istio.
• Traefik: Another API gateway that integrates well with Kubernetes for routing traffic
in microservice architectures.
• NGINX: A high-performance web server that also serves as a reverse proxy, API
gateway, and load balancer.
• AWS API Gateway: Fully managed service that makes it easy to create, publish,
maintain, monitor, and secure APIs.
8. Serverless
• AWS Lambda: A serverless compute service that allows you to run code without
provisioning or managing servers.
• Google Cloud Functions: A serverless execution environment for building and
connecting cloud services.
• OpenFaaS: An open-source serverless platform for Kubernetes.
• Azure Functions: Microsoft's serverless platform for event-driven applications.
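To make the function-as-a-service model concrete, here is a minimal AWS Lambda handler sketch in Python. It assumes an API Gateway proxy integration supplying queryStringParameters; the greeting logic is purely illustrative.

import json

def lambda_handler(event, context):
    # API Gateway (proxy integration) passes query parameters in the event;
    # the field is None when no parameters are supplied.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

Because the platform provisions and scales the runtime, the function itself contains only business logic.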
9. Development Frameworks
• Spring Boot: A widely used Java framework for building stand-alone, production-ready
microservices.
• Quarkus: A Kubernetes-native Java framework optimized for containers and fast
startup.
10. Security
• Kubernetes RBAC (Role-Based Access Control): For managing access control in
Kubernetes clusters.
• Vault by HashiCorp: For securely managing secrets, tokens, passwords, and
certificates.
• Aqua Security: A comprehensive platform for securing containerized applications.
• Falco: A cloud-native runtime security tool for detecting anomalies in applications and
microservices.
11. Cloud Platforms
• AWS: Amazon's cloud platform offering compute, storage, and other services.
• Google Cloud Platform (GCP): Provides infrastructure, machine learning, and other
cloud services.
• Microsoft Azure: A cloud platform offering a wide range of services including IoT,
AI, and machine learning.
These tools together enable cloud-native development, focusing on scalability, resilience, and
automation, essential for building modern distributed systems.
Creating applications that are scalable, maintainable, and adaptable is essential for
success. The emergence of cloud computing and microservices architectures has given rise to
a set of best practices known as the Twelve-Factor App methodology. These twelve principles
provide a comprehensive guide for building applications that are designed to excel in modern
cloud-native environments. Let’s delve into each factor and understand how they collectively
contribute to building scalable, maintainable, and adaptable applications.
1. Codebase:
Each application should have a single, version-controlled codebase. This ensures that all
instances of the app are based on the same code, minimizing inconsistencies and reducing the
risk of errors stemming from different code versions.
source: https://12factor.net/
The “Codebase” principle entails:
1. One Codebase, One App: There should be a single code repository for your
application. This ensures that there’s no ambiguity about where the code for your
app resides. All development, testing, and deployment activities should stem from
this single codebase.
2. Version Control: The codebase should be stored in a version control system (e.g.,
Git) to track changes over time. This enables collaboration, rollback, and
synchronization among developers and development environments.
3. Build and Release Artifacts: The codebase should be used to produce build and
release artifacts that can be deployed to various environments (staging, production,
etc.). The build process should be separate from the runtime environment, and
environment-specific settings should be supplied through configuration such as
environment variables.
The “codebase” principle in the 12-factor app methodology emphasizes a clear separation
between code, configuration, and environment. By adhering to this principle, developers can
achieve better consistency, easier collaboration, and more efficient deployment of their
applications.
2. Dependencies
All dependencies, whether libraries or system tools, should be explicitly declared. This
guarantees that each instance of the app has access to the correct dependencies, regardless of
the environment it’s deployed in. Here’s how the “Dependencies” principle is addressed in the
12-factor app methodology:
1. Explicit Declaration: All dependencies are declared in a manifest file (for example,
requirements.txt for Python or package.json for Node.js) that lives in the codebase.
2. Automated Installation: Installing dependencies should be automated and
consistent. Your application’s build and deployment process should include the
steps to fetch and install the specified dependencies based on the manifest file.
This prevents manual intervention and reduces the chances of discrepancies.
3. Dependency Locking: In addition to listing dependencies, it’s often a good
practice to lock down the exact versions of dependencies that your application
uses, typically via a lock file, so every environment installs identical versions.
4. Dependency Isolation: The application bundles everything it needs within its own
environment (for example, a virtual environment or container image). This
isolation makes your application more self-contained and easier to manage, and it
lets you easily switch between different service instances for different
environments.
By managing dependencies according to the 12-factor app methodology, you can achieve
greater consistency, portability, and reliability for your application. The principle emphasizes
explicit declaration, isolation, and repeatable installation of everything your application needs.
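As a minimal sketch of what explicit declaration and version locking can look like in practice, the following Python snippet verifies pinned versions at startup. The package names and versions are illustrative; in a real project the pins would live in a manifest such as requirements.txt (e.g., flask==3.0.0) that the build step installs automatically.

from importlib.metadata import version, PackageNotFoundError

# Hypothetical pins, mirroring what the dependency manifest declares.
PINNED = {"flask": "3.0.0", "redis": "5.0.1"}

for package, expected in PINNED.items():
    try:
        installed = version(package)
    except PackageNotFoundError:
        raise SystemExit(f"missing declared dependency: {package}")
    if installed != expected:
        raise SystemExit(f"{package}: locked {expected}, found {installed}")
print("all declared dependencies satisfied")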
3. Configuration
Configuration should be stored in the environment and not hardcoded into the application. This
separation of configuration from
code enhances portability and security, as sensitive information remains separate from the
codebase. Properly managing configuration helps ensure that your application can be deployed
consistently and reliably across various environments, from development to production. Here’s
how the “Configuration” principle is addressed in the 12-factor app methodology:
1. Separation of Configuration from Code: Your application’s configuration,
including settings like API keys, database connection strings, feature flags, and
more, should be kept separate from the application’s source code. This allows you
to change settings without touching the code and prevents accidental exposure of
sensitive information.
2. Environment Variables: Configuration values are supplied through environment
variables, which are external to the codebase and can be set differently for each
environment. This approach enables flexibility and security, as configuration
values are not hard-coded into the application.
3. Immutable Configuration Changes: Instead of making changes to running
instances, create new instances with updated configurations. This reduces the
chances of configuration drift and helps maintain consistent behavior across
instances.
Example: Imagine you’re developing an e-commerce platform that connects buyers and
sellers. Here’s how the “Configuration” principle might be applied to various aspects of your
application:
1. Sensitive Values in the Environment: You avoid placing database connection
strings, API keys, and other sensitive information in the application’s codebase;
these values are supplied as environment variables.
2. Environment-Agnostic Behavior: Reading settings from the environment keeps
the application consistent across different environments and avoids making
assumptions about the runtime context.
3. No Configuration in Code: Your codebase does not contain direct references to
configuration values. Instead, it relies on environment variables or external
configuration services.
4. Immutable Updates: To change settings, you create a new instance of the
application with the updated settings. This promotes consistency and prevents
configuration drift.
5. Portability: Because configuration lives in the environment, you can replicate
your application’s behavior across different environments. You don’t need to
modify the codebase to adjust configurations when deploying to different
environments.
For instance, if your e-commerce platform interacts with payment gateways, shipping APIs,
and various services, you could store API keys and access tokens as environment variables.
This allows you to secure sensitive information and easily manage different credentials for
each environment.
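A minimal sketch of environment-based configuration for such a platform might look like this in Python; all variable names and defaults are illustrative.

import os

# Required secrets: fail fast at startup if they are missing.
DATABASE_URL = os.environ["DATABASE_URL"]
PAYMENT_API_KEY = os.environ["PAYMENT_API_KEY"]

# Optional settings with safe development defaults.
SHIPPING_API_URL = os.environ.get(
    "SHIPPING_API_URL", "https://sandbox.shipping.example.com"
)
FEATURE_RECOMMENDATIONS = os.environ.get("FEATURE_RECOMMENDATIONS", "false") == "true"

The same build can then run in development, staging, or production simply by changing the environment it is launched into.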
4. Backend Services
External services, like databases, caching systems, and message queues, should be treated as
attached resources that the application can access. This decoupling simplifies swapping
services, facilitates scaling, and enables easier testing. The concept of managing backend
services is addressed as one of the key principles in the Twelve-Factor App methodology,
which provides guidelines for building modern, cloud-native applications. Properly managing
backend services helps ensure that your application can seamlessly integrate with and utilize
external services.
Here’s how the “Backend Services” principle is addressed in the 12-factor app methodology:
1. Attached Resources: Backing services such as databases, caches, and message
queues are treated as attached resources, accessed via URLs or other locator and
credential mechanisms.
2. Credentials in Configuration: Connection details for each service are stored in
the environment as connection strings, API keys, and other sensitive information.
This approach keeps the configuration out of the codebase.
3. Loose Coupling: Treating services as attached resources keeps the application
modular and scalable. The 12-factor app principle encourages isolating service
logic from your application logic, enabling each service to be developed, deployed,
and managed separately.
4. Clear Contracts: Integration happens by consuming APIs based on clear contracts
and specifications, which reduces the risk of integration errors.
5. Resilience: Communication with backing services should be able to operate over
a network, assuming possible latency and failures. Your application should handle
service unavailability gracefully and possibly implement retry mechanisms.
Example: Imagine you’re building a social networking application that allows users to share
photos and connect with friends. Here’s how the “Backend Services” principle might be applied
to various aspects of your application:
1. Attached Resources: Your application relies on several backing services, such as
a database for storing user data and a cloud storage service for hosting images.
These services are treated as separate resources that your application can interact
with.
2. Separation of Concerns: Instead of embedding direct API calls or connection
logic within your application’s codebase, you keep service-related code and
configuration separate. This ensures that your application remains modular and can
swap out a service without touching application logic.
3. Credentials via Environment: Service credentials, such as connection strings and
API keys, are stored as environment variables.
4. API-First Approach: When integrating with external APIs, such as social media
sharing APIs or payment gateways, you follow an API-first approach. You design
and consume APIs based on clear contracts and specifications, which supports
reliable communication.
For instance, when a user uploads a photo to your social networking app, the photo is stored in
a cloud storage service like Amazon S3. Your application uses environment variables to access
the storage service’s API key and endpoint. Similarly, when a user logs in, the app connects to
the database to fetch their profile data using the provided database connection string.
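The following Python sketch illustrates the attached-resources idea: the application only knows each service’s locator, supplied through the environment, so a backing service can be swapped without code changes. The variable names and URLs are illustrative.

import os
from urllib.parse import urlparse

# Locators come from the environment; defaults are for local development only.
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///local.db")
STORAGE_ENDPOINT = os.environ.get("STORAGE_ENDPOINT", "https://s3.amazonaws.com")

def describe_attachment(url: str) -> str:
    # The app sees only a locator, not how or where the service is hosted.
    parsed = urlparse(url)
    return f"{parsed.scheme} service at {parsed.netloc or parsed.path}"

print(describe_attachment(DATABASE_URL))
print(describe_attachment(STORAGE_ENDPOINT))

Swapping SQLite for a managed PostgreSQL instance is then a one-line change to DATABASE_URL, not a code change.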
5. Build, Release, Run
Separating the build, release, and run stages of an application lifecycle promotes consistency.
The build stage compiles the code, the release stage packages it with its dependencies, and the
run stage executes the application using these packaged resources.
source: https://12factor.net/
This principle emphasizes the separation of concerns and processes related to building,
packaging, and deploying an application. Here’s a breakdown of each phase:
Build: The “Build” phase involves compiling, assembling, and preparing the application’s
code and dependencies for deployment. This phase focuses on transforming the source code
into executable artifacts. During the build process, the following activities take place:
• Dependency Resolution: All dependencies declared in the manifest file are fetched and
installed.
• Artifact Creation: The result of the build process is a packaged artifact that can be
deployed to any environment.
Release: The “Release” phase involves taking the build artifact produced in the previous step
and combining it with the configuration settings necessary for a specific environment. In this
phase:
• The artifact and the environment’s configuration together form a release, ready for
immediate execution.
• Each release gets a unique identifier (such as a version number or timestamp) and is
immutable; any change produces a new release, which also makes rollbacks
straightforward.
Run: The “Run” phase involves launching and managing the application in a runtime
environment. In this phase:
• Isolation: The runtime environment is isolated from the build and release
processes. This isolation helps prevent conflicts and ensures that the application
executes a fixed, known release.
• Logging and Monitoring: Proper logging and monitoring are established to gain
insight into the application’s behavior, performance, and potential issues in the
runtime environment.
By following the “Build, Release, Run” principle, the Twelve-Factor App methodology aims
to simplify the deployment process and make it more predictable. This separation of concerns
helps in maintaining consistent behavior across different environments and provides a clear
structure for managing the lifecycle of an application, from code compilation to production
deployment.
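As a toy illustration of the three stages, the following self-contained Python script “builds” an artifact, pairs it with environment-specific config to form a release, and then runs it. Every name here is hypothetical; a real pipeline would use proper build and deployment tooling.

import os
import pathlib
import subprocess
import sys

def build() -> pathlib.Path:
    # Build: turn source into an immutable artifact containing no config.
    artifact = pathlib.Path("app_v1.py")
    artifact.write_text('import os\nprint("running in:", os.environ["APP_ENV"])\n')
    return artifact

def release(artifact: pathlib.Path, env_name: str) -> dict:
    # Release: pair the artifact with configuration for one environment.
    return {"artifact": artifact, "config": {"APP_ENV": env_name}}

def run(rel: dict) -> None:
    # Run: execute the released artifact with its config, and nothing else.
    env = {**os.environ, **rel["config"]}
    subprocess.run([sys.executable, str(rel["artifact"])], env=env, check=True)

run(release(build(), "production"))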
6. Processes
Applications should be stateless and share-nothing, which means they don’t store any state
locally. Instead, data is stored in databases or other external services. Stateless applications are
easier to scale and more resilient to failures.
Properly managing processes ensures that your application can effectively scale, recover from
failures, and adapt to varying demands. Here’s how the “Processes” principle is addressed in
the 12-factor app methodology:
1. Stateless Processes: The application executes as one or more processes, each
stateless, meaning that it doesn’t rely on the local filesystem or internal memory
to store data. Instead, data is stored in external services (like databases or caches)
or passed between processes explicitly.
2. Horizontal Scaling: The process model favors horizontal scaling, which means running
multiple instances of the same process to handle increased load. This approach
allows you to adapt to varying traffic levels and distribute the workload across
instances.
3. Process Isolation: Processes in a 12-factor app are isolated from each other. This
isolation ensures that a failure or issue in one process doesn’t affect the behavior
of other processes. Each process runs as an independent unit.
4. Port-Based Concurrency: Port binding allows multiple instances of the same
process to run concurrently, each on its own port.
5. Platform Process Management: When deployed to a cloud platform like Heroku,
the platform handles process management, scaling, and restarts.
6. Quick Startup and Graceful Shutdown: Processes in a 12-factor app should start
up quickly and be able to shut down gracefully. This is important for efficient
scaling and reliable deployments.
7. Environment-Based Configuration: Deployment-specific information (such as
service URLs, API keys, and connection strings) is provided to each process
through environment variables.
Example: Imagine you’re building a social media platform. Here’s how the “Processes”
principle might be applied to various aspects of your application:
1. Stateless Processes: Each user session and request is treated as stateless. User-
specific data, such as their profile information and posts, is stored in a separate
database or cache. This allows any instance of the application to handle a user’s
request, regardless of which instance previously served the user.
2. Horizontal Scaling: The platform must absorb surges in user traffic. To achieve
this, you can run multiple instances of the application, each handling a portion of
incoming requests. This horizontal scaling ensures that the application can handle
increased user activity without overloading a single instance.
3. Port Binding: Each instance of the application binds to a specific port. For
example, instance A might listen on port 3000, instance B on port 3001, and so on.
4. Platform Process Management: When deployed to a cloud platform like Heroku,
the platform’s process manager handles tasks such as starting, stopping, and
scaling instances. You don’t need to implement your own process management
logic.
5. Disposability: Instances start up quickly to accommodate scaling needs. When
updates or changes are required, new instances with the updated code are spun up,
while the old instances are gradually phased out to ensure a smooth transition.
6. Environment-Based Configuration: Database credentials, API keys, and other
configuration values are provided to each instance through environment variables.
This allows you to configure the application differently for each environment
without changing code.
By applying the “Processes” principle to your social media platform, you ensure that your
application can handle varying levels of user activity, maintain stability in the face of failures,
and scale efficiently as demand grows. The principle emphasizes modularity, isolation, and
horizontal scaling, which are essential for building robust and scalable cloud-native
applications.
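A minimal sketch of a stateless request handler might look like this. It assumes a Redis backing store reachable through a REDIS_URL environment variable (and the redis client package); the key naming is illustrative.

import os
import redis  # external backing store client

store = redis.Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379"))

def handle_profile_request(user_id: str) -> bytes:
    # Any instance can serve this request: the profile data lives in the
    # backing service, never in this process's memory or filesystem.
    profile = store.get(f"profile:{user_id}")
    return profile or b"{}"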
7. Port Binding
Applications should be self-contained and bind to a port provided by the environment. This
allows multiple instances of the app to run on the same machine without conflicts.
Port binding ensures that each instance of the application can listen on a specific port to handle
incoming network requests. Here’s how the “Port Binding” principle is addressed in the 12-
factor app methodology:
1. Port Assignment: Each process within the application is assigned a specific port
to bind to. This port is used for receiving incoming requests from clients, whether
those clients are end users, other services, or a load balancer.
2. Self-Contained Services: 12-factor apps are self-contained and export their
functionality by binding to a port and listening for requests over a network. This
means they can communicate with clients and external services without relying on
a separately managed web server.
3. Port Ranges: In some cases, applications might use port ranges to allow dynamic
port assignment. For example, a load balancer might allocate a range of ports for
instances to bind to, allowing for flexible scaling.
For example, if you’re building a web application using the 12-factor app methodology, each
instance of your application might run a web server that binds to a specific port (e.g., 3000 for
one instance, 3001 for another). When users send requests to your application, a load balancer
distributes the requests to the appropriate instance based on the port assignment.
By following the “Port Binding” principle, you ensure that your application can effectively
scale, handle incoming requests, and run independently across multiple instances. This
approach allows your application to adapt to varying levels of traffic while maintaining
consistency and reliability.
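Here is a minimal, self-contained Python sketch of port binding using only the standard library; the PORT variable name and response body are illustrative.

import os
from http.server import HTTPServer, BaseHTTPRequestHandler

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from a self-contained service\n")

if __name__ == "__main__":
    # The environment, not the code, decides which port this instance owns.
    port = int(os.environ.get("PORT", "3000"))
    HTTPServer(("0.0.0.0", port), Hello).serve_forever()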
8. Concurrency
Modern applications should be designed to scale horizontally. This means that they can handle
increased load by adding more instances, rather than vertically by making a single instance
more powerful. Effective concurrency management ensures optimal resource utilization.
source: https://12factor.net/
Properly managing concurrency helps ensure that your application can effectively utilize
resources and respond to user interactions without becoming sluggish or unresponsive. Here’s
how the “Concurrency” principle is addressed in the 12-factor app methodology:
1. Process-Based Concurrency: Concurrency is achieved by running multiple
processes rather than threads sharing state within one large process. This
approach simplifies management and avoids certain issues associated with shared
memory.
2. Scaling Out: By adding more instances of a process, the app can handle more
simultaneous requests or tasks. Each instance works independently and statelessly.
3. Load Balancing: Incoming requests are distributed across multiple instances of
the application using load balancing. Load balancers ensure that requests are
distributed evenly to avoid overloading any single instance.
For instance, consider an e-commerce website built using the 12-factor app methodology.
When a sale event generates a sudden surge in traffic, the platform can spin up new instances
to absorb the increased load while the load balancer spreads requests among them. Each
instance processes requests independently and statelessly, ensuring that the application remains
responsive and reliable.
By adhering to the “Concurrency” principle, your application can effectively handle varying
levels of user activity, scale dynamically to meet demand, and maintain high performance and
responsiveness. The principle emphasizes horizontal scaling, statelessness, and efficient
resource utilization to achieve greater reliability and scalability in cloud-native applications.
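The following Python sketch illustrates the process-based concurrency model by running several identical, stateless server processes, one per port. The BASE_PORT variable and worker count are illustrative; a real deployment would put a load balancer in front of the workers.

import os
from http.server import HTTPServer, BaseHTTPRequestHandler
from multiprocessing import Process

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        # Each worker is interchangeable; only its pid and port differ.
        self.wfile.write(f"served by pid {os.getpid()}\n".encode())

def serve(port: int) -> None:
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()

if __name__ == "__main__":
    base_port = int(os.environ.get("BASE_PORT", "3000"))
    # Scale out by adding identical processes, one per port.
    workers = [Process(target=serve, args=(base_port + i,)) for i in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()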
9. Disposability
Applications should be able to start up quickly and shut down gracefully. This promotes
resilience, as instances can be easily replaced, and it improves the deployment and scaling
processes. Properly managing disposability ensures that your application can adapt to changes,
recover from failures, and maintain availability in dynamic and often unpredictable cloud
environments. Here’s how the “Disposability” principle is addressed in the 12-factor app
methodology:
1. Fast Startup: A 12-factor app aims to start up quickly. Fast startup is crucial for
rapid scaling, deployment, and recovery from failures.
2. Graceful Shutdown: When a process receives a termination signal (such as
SIGTERM), it should complete any ongoing tasks and close connections gracefully. This ensures
that no data is lost and that the app can be taken offline without causing disruptions.
For example, consider a real-time chat application built using the 12-factor app methodology.
When a new version of the application needs to be deployed, the existing instances can be
gradually taken offline as new instances are brought online. The fast startup and graceful
shutdown mechanisms ensure that users experience minimal interruptions, and ongoing
conversations are not disrupted.
By following the “Disposability” principle, your application can handle changes, failures, and
updates effectively. The principle emphasizes fast startup, graceful shutdown, and robustness
in failure to ensure that your application remains available and responsive in dynamic cloud
environments.
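A minimal sketch of graceful shutdown might look like the following Python worker, which finishes its current unit of work after receiving SIGTERM; the task loop is illustrative.

import signal
import sys
import time

shutting_down = False

def handle_sigterm(signum, frame):
    # Set a flag instead of exiting immediately so current work can finish.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

while not shutting_down:
    # ... fetch and process one unit of work (e.g., one queued message) ...
    time.sleep(0.1)

print("ongoing work complete; shutting down gracefully")
sys.exit(0)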
10. Dev/Prod Parity
Development, staging, and production environments should be kept as similar as possible,
minimizing the gaps between development and production. Properly managing dev/prod parity
helps ensure that the behavior of your application remains consistent across different
environments, reducing the chances of issues arising due to environmental differences. Here’s
how the “Dev/Prod Parity” principle is addressed in the 12-factor app methodology:
1. Environment Consistency: Keeping tools, services, and dependencies aligned
across environments ensures that what works and is tested in the development
environment is likely to work the same way in production.
2. Minimizing Surprise: By maintaining dev/prod parity, you reduce the risk of
unexpected behavior or bugs when your application is deployed to the production
environment. This minimizes surprises and decreases the need for last-minute
adjustments.
3. Avoiding Configuration Drift: Parity prevents settings and dependencies from
gradually diverging between environments. This drift can lead to issues that are
difficult to diagnose and resolve.
4. Environment-Agnostic Code: The codebase should be environment-agnostic.
This means that the codebase itself doesn’t rely on specific environment details or
hardcoded configuration values that might differ between environments.
5. Consistent Testing and Debugging: Similar environments ensure that testing and
debugging are more accurate. Developers can reproduce issues reported in
production more effectively in the development environment due to the
consistency.
For example, consider a 12-factor app that interacts with a third-party API. In the development
environment, the app uses a sandbox or test version of the API, while in production, it uses the
live version. By keeping the integration identical in structure and supplying the API endpoints,
authentication keys, and other settings through configuration, you reduce the chances of
integration issues when deploying to production.
By adhering to the “Dev/Prod Parity” principle, your application can be more reliable, easier
to troubleshoot, and less prone to unexpected issues when transitioning from development to
production. The principle emphasizes keeping tools, configuration, and dependencies
consistent across different environments for more efficient and reliable application
deployment.
11. Logs
Applications should generate logs as event streams, which can then be captured, aggregated,
and analyzed by specialized tools. Logging is crucial for troubleshooting, monitoring, and
debugging.
Properly managing logs is crucial for monitoring, troubleshooting, and maintaining the health
and performance of your application. Here’s how the “Logs” principle is addressed in the 12-
factor app methodology:
1. Logs as Event Streams: A 12-factor app treats logs as a stream of time-ordered
events and writes them, unbuffered, to the standard output streams (STDOUT and
STDERR) rather than managing log files itself.
2. Separation of Logs and App Behavior: Logs are kept separate from the
application’s behavior. This means that your application’s code doesn’t directly
manage the storage or transmission of log messages. Instead, it writes logs to the
standard streams as part of its natural operation.
3. Easily Accessible and Collectible: By using STDOUT and STDERR for logs,
your application can be easily configured to send its log messages to external
services or tools for aggregation, analysis, and monitoring.
4. Consistent Formatting: A standardized log format makes it easier to parse and
analyze logs across different instances and environments.
5. No Local Log Storage: A 12-factor application doesn’t permanently store logs
within its filesystem. Instead, it relies on external tools to manage log retention
and storage.
For example, if you’re developing a web application following the 12-factor app methodology,
your application might log messages related to user actions, errors, and system events. These
log messages are written to STDOUT or STDERR, and you can configure the application’s
environment to send these logs to a log aggregation service like Elasticsearch, Logstash, or a
cloud-native logging solution.
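A minimal Python sketch of treating logs as an event stream could look like this; the configuration shown is one common approach, not the only one.

import logging
import sys

logging.basicConfig(
    stream=sys.stdout,  # write to the standard stream, never to app-managed files
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
log = logging.getLogger("webapp")

log.info("user_signup user_id=42")         # routed by the environment, e.g. to
log.error("payment_failed order_id=1337")  # Logstash/Elasticsearch in production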
By adhering to the “Logs” principle, your application ensures that it provides valuable insights
into its behavior and performance. The principle emphasizes standardized log formatting,
external log aggregation, and separation of log management from application behavior. This
approach allows for efficient monitoring, troubleshooting, and maintenance of your cloud-
native application.
12. Admin Processes
Administrative tasks, such as database migrations and one-time scripts, should be treated as
one-off processes and run separately from the main application code. Admin processes help
developers and operators manage the application’s health, data, and configuration. Here’s how
admin processes can be approached in the context of the 12-factor app methodology:
1. Separation of Concerns: Admin processes are treated as separate from the
regular, long-running processes that serve the application.
2. Automation: By scripting and automating administrative tasks such as database
migrations, backups, and scaling adjustments, you reduce the risk of human error
and ensure consistency.
3. Admin Commands and Scripts: Admin processes are typically triggered using
specific commands or scripts that are distinct from the application’s normal
runtime commands.
4. Identical Environment: One-off admin processes should run against the same
codebase and configuration, and in the same environment, as the app’s regular
processes.
5. Logging and Monitoring: Just like regular application processes, admin processes
should generate logs and be monitored for successful execution and potential
issues. This helps ensure the reliability of administrative tasks.
For example, consider a 12-factor app that runs a web application. Admin processes for this
app might include:
• Database Migrations: Applying schema changes and data transformations.
• Data Backups: Exporting application data to a secure storage location.
• Scaling Adjustments: Dynamically adjusting the number of application instances
based on traffic.
These admin processes might be executed using specific commands or scripts, separate from
the regular application’s runtime commands. The ability to automate and manage these
administrative tasks efficiently is crucial for maintaining the health, reliability, and scalability
of the application.
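As a minimal sketch, a one-off admin process such as a migration script might look like the following; the file name, database, and table are hypothetical stand-ins, and it runs against the same codebase and configuration as the regular processes.

# migrate.py, run separately from the web processes: python migrate.py
import os
import sqlite3  # stand-in for a production database driver

DB_PATH = os.environ.get("DATABASE_PATH", "app.db")  # config from the environment

def migrate() -> None:
    conn = sqlite3.connect(DB_PATH)
    # Idempotent schema change: safe to re-run as a one-off process.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users ("
        "id INTEGER PRIMARY KEY, email TEXT NOT NULL UNIQUE)"
    )
    conn.commit()
    conn.close()
    print("migration complete")  # emitted to stdout, per the Logs factor

if __name__ == "__main__":
    migrate()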
Admin processes are, in fact, the twelfth factor in the methodology; their proper design,
automation, and separation from regular application logic align with the methodology’s
principles of isolation, disposability, and environment consistency.