Cloud-Native Application Development 1


1) What is a Cloud-Native Application?

A cloud-native application is software specifically designed and built to run in a cloud environment, leveraging the full potential of cloud computing's scalability, flexibility, and resilience. Unlike traditional applications, cloud-native apps are modular, scalable, distributed, and dynamically orchestrated.

Key characteristics of cloud-native applications include:

• Microservices architecture: The app is broken into small, independent services that
can be deployed, scaled, and updated individually.
• Containerization: Each service is packaged in containers (e.g., Docker), ensuring
consistency across different cloud environments.
• Dynamic orchestration: Services are managed using orchestrators like Kubernetes,
automating the deployment, scaling, and management of the application.
• APIs: Communication between services often occurs through lightweight APIs.
• Stateless: Services are generally stateless, storing data in external databases, making
them more resilient to failures.
• DevOps and CI/CD: Cloud-native apps integrate continuous development, delivery,
and deployment practices to ensure quick iterations and updates.
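The statelessness point above can be sketched in a few lines. This is a minimal, hypothetical example (the `handle_request` function and store classes are illustrative, not from any framework): each request is handled without in-process state, and all data is pushed to an injected external store, so any replica can serve any request.

```python
# Minimal sketch of a stateless service handler (hypothetical names).
# All state lives in an injected external store, so any replica of the
# service can handle any request -- a key cloud-native property.

def handle_request(store, user_id, payload):
    """Process one request without touching any module-level state."""
    record = {"user": user_id, "data": payload}
    store.save(user_id, record)          # persist externally, not in memory
    return {"status": "ok", "user": user_id}

class InMemoryStore:
    """Stand-in for an external database during local testing."""
    def __init__(self):
        self._rows = {}

    def save(self, key, value):
        self._rows[key] = value

    def load(self, key):
        return self._rows[key]
```

Because the handler owns no state, an orchestrator can kill or add replicas freely; only the external store needs durability.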

2) Cloud Native Landscape


The goal of the cloud native landscape is to compile and organize all cloud native open
source projects and proprietary products into categories, providing an overview of the current
ecosystem.

First, let's strip all individual technologies from the landscape and look at the categories. There are different "rows" reflecting architectural layers, each with its own set of subcategories. In the first layer, you have tools to provision infrastructure; that's your foundation. Then you start adding the tooling needed to run and manage apps, such as the runtime and orchestration layers. At the very top, you have tools to define and develop your application, such as databases, image building, and CI/CD tools. The landscape starts with the infrastructure and, with each layer, moves closer to the actual app. That's what these layers represent (we'll address the two "columns" running across those layers later). Let's explore the layers one at a time, starting from the bottom.
1. The Provisioning Layer

Provisioning refers to the tools involved in creating and hardening the foundation on which
cloud native applications are built. It covers everything from automating the creation,
management, and configuration of infrastructure to scanning, signing and storing container
images. Provisioning even extends into the security space by providing tools that allow you to
set and enforce policies, build authentication and authorization into your apps and platforms,
and handle secrets distribution.

In the provisioning layer, you’ll find:

• Automation and configuration tooling to help engineers build computing environments without human intervention.
• Container registries to store the executable files of apps.
• Security and compliance frameworks to address different security areas.
• Key management solutions to help with encryption and ensure only authorized users have access to the application.

These tools allow engineers to codify all infrastructure specifics so that the system can spin new environments up and down as needed, ensuring they are consistent and secure.

2. The Runtime Layer


Next is the runtime layer. Runtime is one of those terms that can be confusing. Like many terms in IT, there is no strict definition and it can be used differently depending on the context. In a narrow sense, runtime is a sandbox on a specific machine prepared to run an app: the bare minimum an app needs. In the widest sense, runtime is any tool the app needs to run.

In the CNCF cloud native landscape, runtime is defined somewhere in between, focusing on the components that matter for containerized apps in particular: what they need to run, remember, and communicate. They include:

• Cloud native storage provides virtualized disks or persistence for containerized apps.
• Container runtime provides the constraints, resources, and security guarantees for containers and executes the files containing the codified app.
• Cloud native networking is the network over which the nodes (machines or processes) of a distributed system connect and communicate.

3. The Orchestration and Management Layer

Once you automate infrastructure provisioning following security and compliance standards (provisioning layer) and set up the tools the app needs to run (runtime layer), engineers must figure out how to orchestrate and manage their apps. The orchestration and management layer deals with how all containerized services (app components) are managed as a group: they need to identify other services, communicate with one another, and coordinate. Inherently scalable, cloud native apps rely on the automation and resilience enabled by this layer.

In this layer you’ll find:

• Orchestration and scheduling to deploy and manage container clusters, ensuring they are resilient, loosely coupled, and scalable. In fact, the orchestration tool, in most cases Kubernetes, is what makes a cluster by managing containers and the operating environment.
• Coordination and service discovery so services (app components) can locate and communicate with one another.
• Remote procedure call (RPC), a technique enabling a service on one node to communicate with a service on a different node connected through a network.
• Service proxy, an intermediary placed between services through which they communicate. The sole purpose of the proxy is to exert more control over service communication; it doesn't add anything to the communication itself. These proxies are crucial to the service meshes mentioned below.
• API gateway, an abstraction layer through which external applications can communicate.
• Service mesh, similar to the API gateway in the sense that it's a dedicated infrastructure layer through which apps communicate, but providing policy-driven internal service-to-service communication. Additionally, it may include everything from traffic encryption to service discovery to application observability.
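As a concrete illustration of the orchestration point above, a minimal Kubernetes Deployment manifest might look like the following sketch (the service name, labels, and image are placeholders, not from any real project):

```yaml
# Hypothetical Deployment: asks Kubernetes to keep 3 replicas of a
# containerized service running, rescheduling pods if nodes fail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service              # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: example.com/orders:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
```

Declaring the desired state (three replicas) rather than scripting individual steps is what lets the orchestrator heal and scale the cluster automatically.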

4. The Application Definition and Development Layer

Now let's move to the top layer. As the name suggests, the application definition and development layer focuses on the tools that enable engineers to build apps and allow them to function. Everything discussed above was related to building a reliable, secure environment and providing all needed app dependencies.

Under this category you’ll see:

• Databases enable apps to collect data in an organized manner.
• Streaming and messaging enable apps to send and receive messages (events and streams). This is not a networking layer, but rather tooling to queue and process messages.
• Application definition and image build are services that help configure, maintain, and run container images (the executable files of an app).
• Continuous integration and delivery (CI/CD) allows developers to automatically test that their code works with the codebase (the rest of the app) and, if their team is mature enough, even automate deployment into production.
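To make the CI/CD point concrete, here is a minimal, hypothetical GitLab CI pipeline (job names, the runtime image, and the deploy script are illustrative): every push runs the tests, and only the main branch is deployed.

```yaml
# Hypothetical .gitlab-ci.yml: test every commit, deploy only main.
stages:
  - test
  - deploy

run-tests:
  stage: test
  image: python:3.12          # placeholder runtime image
  script:
    - pip install -r requirements.txt
    - pytest

deploy-production:
  stage: deploy
  script:
    - ./deploy.sh             # placeholder deployment script
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
```

The test stage runs on every commit; the deploy job's rule restricts automated deployment to the mainline, matching the "mature enough to automate deployment" step described above.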
Tools Running Across All Layers

Going back to the category overview, we’ll explore the two columns running across all layers.
Observability and analysis are tools that monitor all layers. Platforms, on the other hand, bundle
multiple technologies within these layers into one solution, including observability and
analysis.

Observability and Analysis

To limit service disruption and help drive down MTTR (mean time to resolution), you'll need to monitor and analyze every aspect of your application so any anomaly gets detected and rectified right away. Failures will occur in complex environments, and these tools help make them less impactful by helping identify and resolve failures as quickly as possible. Since this category runs across and monitors all layers, it sits on the side and is not embedded in a specific layer.
Here you’ll find:

• Logging tools collect event logs (information about processes).
• Monitoring solutions collect metrics (numerical system parameters, such as RAM availability).
• Tracing goes a step further than monitoring and follows the propagation of user requests. This is especially relevant in the context of service meshes.
• Chaos engineering tools test software in production to identify weaknesses and fix them before they impact service delivery.
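As a small illustration of the logging point, the sketch below (standard library only; the service name and event fields are placeholders) emits structured, machine-parsable event logs of the kind these collectors ingest:

```python
import json
import logging

# A logger that emits one JSON object per event; log collectors such as
# Fluentd can parse and ship these lines without extra configuration.
logger = logging.getLogger("checkout-service")   # placeholder service name
logger.setLevel(logging.INFO)

def log_event(event, **fields):
    """Serialize an event name plus arbitrary context fields as JSON."""
    record = {"event": event, **fields}
    line = json.dumps(record, sort_keys=True)
    logger.info(line)
    return line          # returned so callers (and tests) can inspect it

# Example: record a completed order with its latency.
log_event("order_completed", order_id="A-1001", latency_ms=42)
```

Structured lines like these are far easier to search and aggregate than free-form text, which is exactly what the logging category of the landscape exists to do.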

Platforms

As we’ve seen, each of these modules solves a particular problem. Storage alone does not
provide all you need to manage your app. You’ll need an orchestration tool, container runtime,
service discovery, networking, API gateway, etc. Covering multiple layers, platforms bundle
different tools together solving a larger problem.

Configuring and fine-tuning different modules so they are reliable and secure, and ensuring all the technologies a platform leverages are updated and vulnerabilities patched, is no easy task. With platforms, users don't have to worry about these details, which is a real value add.

You'll probably notice that the categories all revolve around Kubernetes. That's because Kubernetes is at the core of the cloud native stack. The CNCF, by the way, was created with Kubernetes as its first seed project; all other projects followed later.

Platforms can be categorized in four groups:

• Kubernetes distributions take the unmodified open-source code (although some modify it) and add the additional features their market needs around it.
• Hosted Kubernetes (aka managed Kubernetes) is similar to a distribution, but it's managed by your provider, either on their infrastructure or on your own.
• Kubernetes installers are exactly that: they automate the installation and configuration process of Kubernetes.
• PaaS / container services are similar to hosted Kubernetes, but include a broader set of application deployment tools (generally a subset of the cloud native landscape).

3) Tools for Cloud Native Application Development

Cloud-native development involves building and deploying applications designed to take full advantage of cloud computing models, such as scalability, automation, and elasticity. It uses concepts like microservices, containers, and dynamic orchestration. A variety of tools can be used for cloud-native development; they can be categorized into different areas:

1. Containerization

• Docker: A platform for building, shipping, and running applications in containers. Containers package software and dependencies together, making apps more portable.
• Podman: An alternative to Docker, focusing on better security and compatibility with
Kubernetes.
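A minimal Dockerfile sketch shows what "packaging software and dependencies together" looks like in practice (the base image, file names, and entry point are placeholders for a hypothetical Python service):

```dockerfile
# Hypothetical Dockerfile for a small Python service.
FROM python:3.12-slim            # placeholder base image

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and declare how to start it.
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]         # placeholder entry point
```

Building this file produces an image that runs identically on a laptop, a CI runner, or a production cluster, which is the consistency property described above.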

2. Orchestration

• Kubernetes: An open-source platform for automating the deployment, scaling, and management of containerized applications.
• Docker Swarm: Another orchestration tool, though less popular than Kubernetes, it's
easier to set up for simpler use cases.
• Nomad: A simpler, flexible orchestrator supporting containers and other workloads.
• Helm: A package manager for Kubernetes that simplifies deployment by managing
Kubernetes charts.
• OpenShift: A Kubernetes distribution by Red Hat that adds developer and operational
tools.

3. Continuous Integration/Continuous Deployment (CI/CD)

• Jenkins X: Optimized for cloud-native development; integrates with Kubernetes and Docker to deliver CI/CD pipelines.
• GitLab CI/CD: Integrated into GitLab, it allows for the creation of pipelines to
automate testing, building, and deploying apps.
• CircleCI: A CI/CD platform that automates the software development process.
• Tekton: A Kubernetes-native CI/CD framework that provides building blocks for
complex pipelines.

4. Infrastructure as Code (IaC)

• Terraform: An open-source tool for defining and provisioning infrastructure using configuration files.
• Pulumi: A modern IaC platform that allows the use of general-purpose programming
languages like Python, Go, and JavaScript to define cloud infrastructure.
• AWS CloudFormation: A service that enables you to model and set up AWS resources
using templates.
• Ansible: An open-source software provisioning, configuration management, and application-deployment tool.
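To illustrate infrastructure as code, a minimal, hypothetical Terraform configuration (the region, AMI ID, instance type, and tags are placeholders) declares a single server that the tool will create and keep in the declared state:

```hcl
# Hypothetical Terraform config: one server, declared rather than scripted.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"                     # placeholder region
}

resource "aws_instance" "app_server" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "cloud-native-demo"             # placeholder tag
  }
}
```

Running `terraform apply` against such a file creates the instance; re-running it makes no changes unless the declaration and reality have drifted apart.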

5. Monitoring and Logging

• Prometheus: An open-source monitoring and alerting toolkit widely used in cloud-native environments.
• Grafana: A platform for visualizing metrics and logs collected from Prometheus and
other data sources.
• ELK Stack (Elasticsearch, Logstash, Kibana): A popular stack for searching,
analyzing, and visualizing log data.
• Fluentd: A data collector for unifying logging layers.
• Jaeger: Open-source, end-to-end distributed tracing.
• Datadog: A SaaS-based monitoring platform with a focus on cloud-scale applications.

6. Service Mesh

• Istio: A service mesh that allows developers to manage service-to-service communications in a microservices architecture.
• Linkerd: A lightweight service mesh focused on simplicity and performance.

7. API Gateway and Management

• Kong: A popular API gateway that provides features like traffic control, load balancing,
and security.
• Envoy: A high-performance, cloud-native proxy and API gateway that integrates with
service meshes like Istio.
• Traefik: Another API gateway that integrates well with Kubernetes for routing traffic
in microservice architectures.
• NGINX: A high-performance web server that also serves as a reverse proxy, API
gateway, and load balancer.
• AWS API Gateway: Fully managed service that makes it easy to create, publish,
maintain, monitor, and secure APIs.

8. Serverless

• AWS Lambda: A serverless compute service that allows you to run code without
provisioning or managing servers.
• Google Cloud Functions: A serverless execution environment for building and
connecting cloud services.
• OpenFaaS: An open-source serverless platform for Kubernetes.
• Azure Functions: Microsoft's serverless platform for event-driven applications.

9. Development Frameworks

• Spring Boot: A framework for building Java-based microservices and cloud-native apps.
• Quarkus: A Kubernetes-native Java framework optimized for containers.
• Node.js: A lightweight, event-driven JavaScript runtime often used for microservices
and serverless functions.

10. Security
• Kubernetes RBAC (Role-Based Access Control): For managing access control in
Kubernetes clusters.
• Vault by HashiCorp: For securely managing secrets, tokens, passwords, and
certificates.
• Aqua Security: A comprehensive platform for securing containerized applications.
• Falco: A cloud-native runtime security tool for detecting anomalies in applications and
microservices.

11. Cloud Providers and Platforms

• AWS: Amazon's cloud platform offering compute, storage, and other services.
• Google Cloud Platform (GCP): Provides infrastructure, machine learning, and other
cloud services.
• Microsoft Azure: A cloud platform offering a wide range of services including IoT,
AI, and machine learning.

12. Cloud-Native Databases

• Amazon RDS: A managed relational database service supporting various database engines.
• Amazon Aurora: A MySQL and PostgreSQL-compatible relational database built for
the cloud.
• Google Cloud Spanner: Horizontally scalable, strongly consistent, relational database
service.
• CockroachDB: A distributed SQL database designed to scale with cloud-native
applications.

13. Cloud IDEs

• GitHub Codespaces: Cloud-based development environments, tightly integrated with GitHub.
• Eclipse Che: A Kubernetes-native IDE for cloud development.

These tools together enable cloud-native development, focusing on scalability, resilience, and
automation, essential for building modern distributed systems.

4) The Twelve-Factor App: Modern Principles for Cloud-native Development

Creating applications that are scalable, maintainable, and adaptable is essential for success. The emergence of cloud computing and microservices architectures has given rise to a set of best practices known as the Twelve-Factor App methodology. These twelve principles provide a comprehensive guide for building applications that are designed to excel in modern cloud-native environments. Let's delve into each factor and understand how they collectively contribute to the development of robust and efficient applications.

1. Codebase:

Each application should have a single, version-controlled codebase. This ensures that all instances of the app are based on the same code, minimizing inconsistencies and reducing the risk of errors stemming from different code versions.

Source: https://12factor.net/

The "Codebase" principle entails:

1. One Codebase, One App: There should be a single code repository for your application. This ensures there's no ambiguity about where the code for your app resides. All development, testing, and deployment activities should stem from this single codebase.

2. Version Control: The codebase should be stored in a version control system (e.g., Git) to track changes over time. This enables collaboration, rollback, and synchronization among developers and development environments.

3. Isolation from Configuration: The codebase should not contain configuration settings or secrets directly. Configuration should be stored separately from the code, preferably in environment variables or external configuration systems.

4. Dependency Management: The codebase should include a file (e.g., a "package.json" in JavaScript projects) that lists the application's dependencies and their versions. These dependencies can be installed automatically based on this file, ensuring consistent behavior across environments.

5. Build and Release Artifacts: The codebase should be used to produce build and release artifacts that can be deployed to various environments (staging, production, etc.). The build process should be separate from the runtime environment.

6. Explicit Dependencies: Your application should explicitly declare its dependencies, avoiding assumptions about the system environment. This makes the app more self-contained and portable.

7. No Implicit Environment Changes: The codebase should not make assumptions about the underlying environment. Any environment-specific changes or configurations should be managed externally, typically through environment variables.
The "Codebase" principle in the 12-factor app methodology emphasizes a clear separation between code, configuration, and environment. By adhering to this principle, developers can achieve better consistency, easier collaboration, and more efficient deployment of their applications.

2. Dependencies

All dependencies, whether libraries or system tools, should be explicitly declared. This guarantees that each instance of the app has access to the correct dependencies, regardless of the environment it's deployed in. Here's how the "Dependencies" principle is addressed in the 12-factor app methodology:

1. Explicit Declaration: The 12-factor app principle encourages explicit declaration of all dependencies that your application requires. This involves maintaining a manifest or configuration file (such as "requirements.txt" for Python, "Gemfile" for Ruby, or "package.json" for JavaScript) that lists out all the required dependencies and their versions.

2. Isolation and Reproducibility: By explicitly declaring dependencies, you create a clear separation between your application's code and the external resources it relies on. This helps ensure that the exact same dependencies are used across all environments, from development to production, minimizing unexpected behavior due to discrepancies in the software stack.

3. Dependency Installation: The process of installing dependencies should be automated and consistent. Your application's build and deployment process should include the steps to fetch and install the specified dependencies based on the manifest file. This prevents manual intervention and reduces the chances of discrepancies.

4. Dependency Locking: In addition to listing dependencies, it's often good practice to lock down the exact versions of dependencies that your application should use. This prevents unintentional updates to dependencies that might introduce breaking changes or security vulnerabilities.

5. Isolation from System Dependencies: Your application should not rely on system-wide packages or libraries. Instead, it should include all the dependencies it needs within its own environment. This isolation makes your application more self-contained and easier to manage.

6. Service Dependencies: When your application relies on external services (such as databases, caching systems, APIs, etc.), these dependencies should also be explicitly declared and configurable through environment variables. This way, you can easily switch between different service instances for different environments.
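Points 1 and 4 above can be illustrated with a pinned Python manifest (the package names and versions below are purely illustrative):

```text
# requirements.txt -- every dependency pinned to an exact version,
# so development, CI, and production install identical packages.
flask==3.0.3
requests==2.32.3
psycopg2-binary==2.9.9
```

Pinning exact versions (rather than ranges like `flask>=3.0`) is what makes installs reproducible across environments.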

By managing dependencies according to the 12-factor app methodology, you can achieve greater consistency, portability, and reliability for your application. The principle emphasizes clear separation of concerns, controlled versioning, and the automation of dependency management processes, all of which contribute to smoother development, deployment, and maintenance of cloud-native applications.

3. Configuration

Configuration details, such as environment-specific variables and settings, should be stored in the environment and not hardcoded into the application. This separation of configuration from code enhances portability and security, as sensitive information remains separate from the codebase. Properly managing configuration helps ensure that your application can be deployed consistently and reliably across various environments, from development to production. Here's how the "Configuration" principle is addressed in the 12-factor app methodology:
1. Separation of Configuration from Code: Your application's configuration, including settings like API keys, database connection strings, feature flags, and more, should be kept separate from the application's source code. This allows you to change configuration without modifying the codebase, making it easier to manage different environments and reducing the risk of exposing sensitive information.

2. Environment Variables: The 12-factor app methodology recommends using environment variables to store configuration values. Environment variables are external to the codebase and can be set differently for each environment. This approach enables flexibility and security, as configuration values are not hard-coded into the application.

3. Explicit Declaration: Instead of assuming configuration values based on the runtime environment, your application should explicitly read configuration values from environment variables. This makes it clear which settings are required and prevents unexpected behavior due to environment differences.

4. External Configuration Systems: For more complex applications, it might be beneficial to use external configuration management systems. These systems allow you to centralize configuration settings, update them independently of the application code, and provide versioning and auditing capabilities.

5. No Configuration in Code: Avoid embedding configuration values directly into your application's code. This includes hard-coded values or inline configuration files. Such practices make it difficult to change configurations without modifying the codebase and can lead to inconsistencies.

6. Immutable Infrastructure: Treat your application's infrastructure as immutable. Instead of making changes to running instances, create new instances with updated configurations. This reduces the chances of configuration drift and helps maintain consistent behavior across instances.

7. Reproducibility: The goal is to make it possible to replicate your application's behavior across different environments by simply changing the configuration. This ensures that the same version of your application behaves consistently regardless of where it's deployed.
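Points 2 and 3 can be sketched in a few lines of Python. This is a minimal, hypothetical helper (the variable names are illustrative): required settings are read explicitly from the environment, and a missing one fails fast instead of silently defaulting.

```python
import os

def require_env(name, default=None):
    """Explicitly read one configuration value from the environment.

    Fails fast with a clear error when a required setting is missing,
    rather than letting the app start with an implicit default.
    """
    value = os.environ.get(name, default)
    if value is None:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value

def load_config():
    # Placeholder variable names for a hypothetical service.
    return {
        "database_url": require_env("DATABASE_URL"),
        "api_key": require_env("PAYMENT_API_KEY"),
        "debug": require_env("DEBUG", default="false") == "true",
    }
```

Because the code names every setting it needs, the full configuration surface of the app is visible in one place, and deploying to a new environment means setting variables, not editing code.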

Example: Imagine you’re developing an e-commerce platform that connects buyers and
sellers. Here’s how the “Configuration” principle might be applied to various aspects of your
application:

1. Separation of Configuration from Code: Instead of hard-coding configuration settings directly into your codebase, you store them separately. For example, you avoid placing database connection strings, API keys, and other sensitive information directly in your source code.

2. Environment Variables: Configuration values are set as environment variables specific to each environment (development, testing, staging, production). For example, you might set an environment variable named DATABASE_URL to store the connection string for your database.

3. Explicit Declaration: Your application explicitly reads configuration values from environment variables at runtime. This ensures that your application's behavior is consistent across different environments and avoids making assumptions about the runtime context.

4. External Configuration Systems: For more complex configurations, you might use an external configuration management system. This system centralizes configuration settings, allowing you to manage them independently of your application's codebase.

5. No Configuration in Code: Your codebase does not contain direct references to configuration values. Instead, it relies on environment variables or external configuration systems to obtain the required settings.

6. Immutable Infrastructure: Configuration changes are handled separately from your application's code updates. When making changes to configuration, you create a new instance of the application with the updated settings. This promotes consistency and avoids configuration drift.

7. Reproducibility: By relying on environment variables and external configuration, you can replicate your application's behavior across different environments. You don't need to modify the codebase to adjust configurations when deploying to different environments.

For instance, if your e-commerce platform interacts with payment gateways, shipping APIs, and various services, you could store API keys and access tokens as environment variables. This allows you to secure sensitive information and easily manage different credentials for testing and production environments.

4. Backend Services

External services, like databases, caching systems, and message queues, should be treated as attached resources that the application can access. This decoupling simplifies swapping services, facilitates scaling, and enables easier testing. The concept of managing backend services is addressed as one of the key principles in the Twelve-Factor App methodology, which provides guidelines for building modern, cloud-native applications. Properly managing backend services helps ensure that your application can seamlessly integrate with and utilize these services while maintaining modularity and portability.

Source: https://12factor.net/

Here's how the "Backend Services" principle is addressed in the 12-factor app methodology:

1. Treat Services as Resources: Backend services should be treated as resources that can be easily attached to your application. These services are typically accessed over the network using protocols like HTTP, TCP, or other communication mechanisms.

2. Separation of Concerns: Your application's code should not contain hard-coded references to specific backend services. Instead, the 12-factor app methodology emphasizes keeping service-related configuration separate from the codebase. This separation allows you to switch or update services without altering the application's source code.

3. Configuration via Environment Variables: The configuration information needed to connect to backend services should be provided via environment variables. This includes connection strings, API keys, authentication tokens, and other sensitive information. This approach keeps the configuration out of the codebase and allows for easy adjustment across different environments.

4. Dependency Injection: Rather than instantiating and managing service connections within your application code, the 12-factor app methodology encourages using dependency injection or service discovery mechanisms. This allows your application to flexibly and dynamically connect to the appropriate backend services.

5. Service Independence: Backend services should be independently deployable and scalable. The 12-factor app principle encourages isolating service logic from your application logic, enabling each service to be developed, deployed, and managed separately.

6. API-First Approach: When integrating with external APIs or services, your application should follow an "API-first" approach. This means designing and consuming APIs based on clear contracts and specifications, which reduces the impact of changes on both sides of the interaction.

7. Network Isolation: The 12-factor app methodology emphasizes that services should be able to operate over a network, assuming possible latency and failures. Your application should handle service unavailability gracefully and possibly implement retry mechanisms.
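The retry mechanism mentioned in point 7 can be sketched simply. This is an illustrative helper, not from any particular library: it retries a failing call a few times with exponential backoff before surfacing the error.

```python
import time

def call_with_retry(fn, attempts=3, base_delay=0.1):
    """Call fn(), retrying on connection failure with exponential backoff.

    A backend service that is briefly unavailable gets `attempts` chances
    before the error is surfaced to the caller.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise                      # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))   # 0.1s, 0.2s, 0.4s, ...

# Example: a flaky stand-in "service" that fails twice, then succeeds.
calls = {"n": 0}
def flaky_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service temporarily unavailable")
    return "ok"
```

With this wrapper, `call_with_retry(flaky_service)` absorbs the two transient failures and returns normally, instead of propagating a momentary outage to the user.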

Example: Imagine you’re building a social networking application that allows users to share
photos and connect with friends. Here’s how the “Backend Services” principle might be applied
to various aspects of your application:

1. Treat Services as Resources: Your application relies on several backend services, such as a database for storing user data and a cloud storage service for hosting images. These services are treated as separate resources that your application can interact with.

2. Separation of Concerns: Instead of embedding direct API calls or connection logic within your application's codebase, you keep service-related code and configuration separate. This ensures that your application remains modular and can be updated independently of the services it relies on.

3. Configuration via Environment Variables: Configuration details, like database connection strings and API keys, are stored as environment variables. For instance, you might store the database URL as an environment variable named DATABASE_URL.

4. Dependency Injection: Your application uses dependency injection to interact with backend services. Instead of creating service instances directly within your code, you pass them as dependencies to the relevant components.

5. Service Independence: Each backend service is designed to be independently deployable and scalable. For example, you can scale your database and cloud storage instances separately based on demand.

6. API-First Approach: When integrating with external APIs, such as social media sharing APIs or payment gateways, you follow an API-first approach. You design your interactions based on well-defined API contracts to ensure consistent and reliable communication.

7. Network Isolation: Your application is designed to handle network latency and potential service failures. It implements error handling and retry mechanisms to deal with temporary unavailability of backend services.

For instance, when a user uploads a photo to your social networking app, the photo is stored in

a cloud storage service like Amazon S3. Your application uses environment variables to access

the storage service’s API key and endpoint. Similarly, when a user logs in, the app connects to

the database to fetch their profile data using the provided database connection string.
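The configuration and dependency-injection points above can be sketched in Python. The service classes, the STORAGE_* variable names, and the URLs below are illustrative assumptions for the sketch; only DATABASE_URL comes from the text itself:

```python
import os

# Hypothetical stand-ins for real backing services.
class Database:
    def __init__(self, url: str):
        self.url = url  # connection string supplied by the environment

    def fetch_profile(self, user_id: str) -> dict:
        # A real implementation would query the database at self.url.
        return {"id": user_id, "name": "demo"}

class PhotoStore:
    def __init__(self, endpoint: str, api_key: str):
        self.endpoint = endpoint
        self.api_key = api_key

    def upload(self, filename: str, data: bytes) -> str:
        # A real implementation would PUT the bytes to the storage endpoint.
        return f"{self.endpoint}/{filename}"

class App:
    # Services are injected, not constructed inside application code.
    def __init__(self, db: Database, photos: PhotoStore):
        self.db = db
        self.photos = photos

    def share_photo(self, user_id: str, filename: str, data: bytes) -> str:
        profile = self.db.fetch_profile(user_id)
        return self.photos.upload(f"{profile['id']}/{filename}", data)

# Wiring happens at the edge, driven entirely by environment variables.
db = Database(os.environ.get("DATABASE_URL", "postgres://localhost/dev"))
photos = PhotoStore(
    os.environ.get("STORAGE_ENDPOINT", "https://storage.example.com"),
    os.environ.get("STORAGE_API_KEY", "dev-key"),
)
app = App(db, photos)
photo_url = app.share_photo("u42", "cat.jpg", b"\x89PNG")
```

Because the services arrive as constructor arguments, swapping the real database for a test double, or S3 for another store, changes only the wiring code, not the App class.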
5. Build, Release, Run

Separating the build, release, and run stages of an application lifecycle promotes consistency. The build stage compiles the code, the release stage packages it with its dependencies, and the run stage executes the application using these packaged resources.

source: https://12factor.net/

This principle emphasizes the separation of concerns and processes related to building,
packaging, and deploying an application. Here’s a breakdown of each phase:

Build: The “Build” phase involves compiling, assembling, and preparing the application’s
code and dependencies for deployment. This phase focuses on transforming the source code
into executable artifacts. During the build process, the following activities take place:

• Dependency Management: Dependencies listed in configuration files (e.g., “package.json” for JavaScript, “requirements.txt” for Python) are fetched and installed.
• Compilation and Transpilation: If necessary, the source code is compiled or transpiled into executable code.
• Building Assets: Assets like CSS, JavaScript files, and templates are generated or bundled.
• Artifact Creation: The result of the build process is a packaged artifact that includes the application code, dependencies, and any compiled assets.

Release: The “Release” phase involves taking the build artifact produced in the previous step and combining it with the configuration settings necessary for a specific environment. In this phase:

• Configuration Setup: Configuration values, such as database connection strings and API keys, are provided via environment variables or external configuration systems.
• Versioning: The build artifact is associated with a specific version and configuration set, creating a release that represents a complete, deployable instance of the application.
• Immutable Builds: The concept of immutability is emphasized here, meaning that once a release is created, it should remain unchanged. If updates are needed, a new release is created.

Run: The “Run” phase involves launching and managing the application in a runtime environment, whether it’s a development server, staging environment, or production server. This phase focuses on:

• Isolation: The runtime environment is isolated from the build and release processes. This isolation helps prevent conflicts and ensures that the application behaves consistently regardless of where it’s deployed.
• Scaling: The application can be scaled horizontally by creating multiple instances of the same release. Each instance can handle incoming requests independently.
• Logging and Monitoring: Proper logging and monitoring are established to gain insight into the application’s behavior, performance, and potential issues in the runtime environment.

By following the “Build, Release, Run” principle, the Twelve-Factor App methodology aims to simplify the deployment process and make it more predictable. This separation of concerns helps in maintaining consistent behavior across different environments and provides a clear structure for managing the lifecycle of an application, from code compilation to production deployment.
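As a rough sketch, the relationship between the three stages can be modeled as immutable values: a build is produced once, and each release pairs that build with a config snapshot. The class names and release-numbering scheme here are my own, not part of the 12-factor text:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Build:
    version: str   # e.g. a git commit SHA identifying the compiled code
    artifact: str  # the packaged artifact produced by the build stage

@dataclass(frozen=True)
class Release:
    build: Build
    config: tuple      # immutable snapshot of environment configuration
    release_id: str

def cut_release(build: Build, env: dict, number: int) -> Release:
    # A release = one build + one config snapshot; it is never edited later.
    return Release(build=build, config=tuple(sorted(env.items())),
                   release_id=f"v{number}")

build = Build(version="abc123", artifact="app-abc123.tar.gz")
r1 = cut_release(build, {"DATABASE_URL": "postgres://prod/db"}, 1)
# A config change does not mutate r1; it produces a new release instead.
r2 = cut_release(build, {"DATABASE_URL": "postgres://prod/db-replica"}, 2)
```

The frozen dataclasses enforce the "immutable builds" point mechanically: once cut, a release cannot be edited, only superseded by the next one.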

6. Processes

Applications should be stateless and share-nothing, which means they don’t store any state locally. Instead, data is stored in databases or other external services. Stateless applications are more scalable, resilient, and easier to manage.

Properly managing processes ensures that your application can effectively scale, recover from failures, and adapt to varying demands. Here’s how the “Processes” principle is addressed in the 12-factor app methodology:

1. Stateless and Share-Nothing: Each process in a 12-factor app is designed to be stateless, meaning that it doesn’t rely on the local filesystem or internal memory to store data. Instead, data is stored in external services (like databases or caches) or passed between processes explicitly.
2. Concurrency and Scaling: The 12-factor app methodology encourages horizontal scaling, which means running multiple instances of the same process to handle increased load. This approach allows you to adapt to varying traffic levels and distribute the workload across instances.
3. Process Isolation: Processes in a 12-factor app are isolated from each other. This isolation ensures that a failure or issue in one process doesn’t affect the behavior of other processes. Each process runs as an independent unit.
4. Port Binding: A 12-factor app is self-contained and self-hosted. Each process binds to a port and listens for incoming requests. This design allows multiple instances of the same process to run concurrently, each on its own port.
5. Concurrency via Processes: Instead of using threads within a single process, a 12-factor app achieves concurrency by running multiple processes. This approach simplifies management, avoids certain types of threading-related issues, and provides better utilization of modern hardware and cloud resources.
6. No Process Management: A 12-factor app doesn’t manage its own processes. It relies on a process manager provided by the hosting environment. For example, in a cloud platform like Heroku, the platform handles process management, scaling, and restarts.
7. Quick Startup and Graceful Shutdown: Processes in a 12-factor app should start up quickly and be able to shut down gracefully. This is important for efficient scaling and for minimizing the impact of deployments and updates.
8. Environment as the Source of Truth: Configuration and environment-specific information (such as service URLs, API keys, and connection strings) are provided to processes via environment variables. Processes read these variables to adapt to different environments.
Example: Imagine you’re building a social media platform where users can post messages,
follow other users, and interact with each other. Here’s how the “Processes” principle might be
applied to various aspects of your application:

1. Stateless Processes: Each user session and request is treated as stateless. User-specific data, such as their profile information and posts, is stored in a separate database or cache. This allows any instance of the application to handle a user’s request, regardless of which instance previously served the user.
2. Concurrency and Scaling: Your application is designed to handle high levels of traffic. To achieve this, you can run multiple instances of the application, each handling a portion of incoming requests. This horizontal scaling ensures that the application can handle increased user activity without overloading a single instance.
3. Process Isolation: Each instance of your application runs independently, without sharing memory or resources with other instances. If one instance encounters an issue or crashes, it doesn’t affect the stability of other instances.
4. Port Binding: Each instance of the application binds to a specific port. For example, instance A might listen on port 3000, instance B on port 3001, and so on. A load balancer or routing system directs incoming traffic to the appropriate instance based on the port.
5. No Process Management: If you’re deploying your application on a platform like Heroku, the platform’s process manager handles tasks such as starting, stopping, and scaling instances. You don’t need to implement your own process management logic.
6. Quick Startup and Graceful Shutdown: Instances of the application start quickly to accommodate scaling needs. When updates or changes are required, new instances with the updated code are spun up, while the old instances are gradually phased out to ensure a smooth transition.
7. Environment Variables for Configuration: Database connection strings, API keys, and other configuration values are provided to each instance through environment variables. This allows you to configure the application differently for development, staging, and production environments.

By applying the “Processes” principle to your social media platform, you ensure that your application can handle varying levels of user activity, maintain stability in the face of failures, and scale efficiently as demand grows. The principle emphasizes modularity, isolation, and horizontal scaling, which are essential for building robust and scalable cloud-native applications.
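A minimal sketch of a stateless request handler. The in-memory dict below merely stands in for an external store such as Redis or a database; in a real stateless process the store would live outside the process entirely:

```python
# Stand-in for an external backing store (Redis, Postgres, ...); the
# process itself keeps no user data between requests.
external_store: dict[str, list[str]] = {}

def handle_post(user_id: str, message: str) -> int:
    # Everything the handler needs is read from, and written back to,
    # the external store, so any instance can serve any request.
    posts = external_store.setdefault(user_id, [])
    posts.append(message)
    return len(posts)

handle_post("alice", "hello")
handle_post("alice", "my second post")
```

Because no request leaves residue inside the handler, a load balancer is free to route a user's next request to a completely different instance.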

7. Port Binding

Applications should be self-contained and bind to a port provided by the environment. This allows multiple instances of the app to run on the same machine without conflicts. Port binding ensures that each instance of the application can listen on a specific port to handle incoming network requests. Here’s how the “Port Binding” principle is addressed in the 12-factor app methodology:

1. Port Assignment: Each process within the application is assigned a specific port to bind to. This port is used for receiving incoming requests from clients, whether they are web browsers, other applications, or users.
2. Port Independence: Each instance of the application runs as an independent process and binds to its own port. This isolation ensures that different instances don’t conflict with each other when handling requests.
3. Concurrency and Scalability: By assigning unique ports to different instances of the application, you can scale horizontally by running multiple instances to handle increased traffic. A load balancer or routing system directs incoming requests to the appropriate instance based on the assigned port.
4. Network Accessibility: Processes within a 12-factor app are designed to operate over a network. This means they can communicate with clients and external services over the network, allowing the application to be distributed across multiple servers or cloud instances.
5. No Hard-Coded Ports: A 12-factor app avoids hard-coding port numbers within the application’s code. Instead, the port number is typically provided as an environment variable that can be configured based on the environment.
6. Port Ranges: In some cases, applications might use port ranges to allow dynamic port assignment. For example, a load balancer might allocate a range of ports for instances to bind to, allowing for flexible scaling.

For example, if you’re building a web application using the 12-factor app methodology, each instance of your application might run a web server that binds to a specific port (e.g., 3000 for one instance, 3001 for another). When users send requests to your application, a load balancer distributes the requests to the appropriate instance based on the port assignment.

By following the “Port Binding” principle, you ensure that your application can effectively scale, handle incoming requests, and run independently across multiple instances. This approach allows your application to adapt to varying levels of traffic while maintaining consistency and reliability.
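Environment-driven port binding can be sketched with the standard socket module. PORT is the conventional variable name on platforms such as Heroku; the rest of the function is illustrative:

```python
import os
import socket

def bind_app_socket() -> socket.socket:
    # The port is read from the environment, never hard-coded, so several
    # instances can run side by side (PORT=3000, PORT=3001, ...).
    port = int(os.environ.get("PORT", "3000"))
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("0.0.0.0", port))
    sock.listen()
    return sock
```

Launching two copies of the process with different PORT values yields two independent listeners on one machine, which is exactly what a local load balancer fans requests out to.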
8. Concurrency

Modern applications should be designed to scale horizontally. This means that they can handle increased load by adding more instances, rather than vertically by making a single instance more powerful. Effective concurrency management ensures optimal resource utilization.

source: https://12factor.net/

Properly managing concurrency helps ensure that your application can effectively utilize
resources and respond to user interactions without becoming sluggish or unresponsive. Here’s
how the “Concurrency” principle is addressed in the 12-factor app methodology:

1. Horizontal Scaling: A 12-factor app is designed to scale horizontally by running multiple instances of the same application process. This allows the application to handle increased workload by distributing requests across multiple instances.
2. Statelessness: Concurrency in a 12-factor app is achieved by having stateless processes. Each process is designed to be independent and not rely on shared memory or resources. State is stored externally, usually in databases, caches, or other stateful services.
3. Concurrency Through Processes: A 12-factor app achieves concurrency by running multiple processes rather than using threads within a single process. This approach simplifies management and avoids certain issues associated with shared memory in multithreaded environments.
4. Concurrency via Scalability: By running multiple instances of the application process, the app can handle more simultaneous requests or tasks. Each instance can work on a separate task, increasing overall throughput.
5. Load Balancing: In a 12-factor app, incoming requests are distributed among multiple instances of the application using load balancing. Load balancers ensure that requests are distributed evenly to avoid overloading any single instance.
6. Graceful Concurrency Scaling: During times of increased load, additional instances of the application can be spun up dynamically to handle the higher demand. This approach prevents slowdowns and ensures a responsive user experience.

For instance, consider an e-commerce website built using the 12-factor app methodology. When a sale event generates a sudden surge in traffic, the platform’s autoscaler can spin up new instances behind the load balancer to handle the increased load. Each instance processes requests independently and statelessly, ensuring that the application remains responsive and reliable.

By adhering to the “Concurrency” principle, your application can effectively handle varying levels of user activity, scale dynamically to meet demand, and maintain high performance and responsiveness. The principle emphasizes horizontal scaling, statelessness, and efficient resource utilization to achieve greater reliability and scalability in cloud-native applications.
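The process-based concurrency model can be sketched with Python's multiprocessing module: each worker is a separate OS process with its own memory, analogous to independent app instances behind a load balancer. The request-handling function is a made-up placeholder:

```python
from multiprocessing import Pool

def handle_request(request_id: int) -> str:
    # Runs in a separate worker process: no shared memory, no shared state.
    return f"request {request_id} handled"

if __name__ == "__main__":
    # Scale out by adding processes rather than making one process bigger;
    # the pool fans the eight requests across four independent workers.
    with Pool(processes=4) as pool:
        results = pool.map(handle_request, range(8))
```

Raising `processes=4` to 8 doubles capacity without touching `handle_request`, mirroring how horizontal scaling adds instances instead of enlarging one.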
9. Disposability

Applications should be able to start up quickly and shut down gracefully. This promotes resilience, as instances can be easily replaced, and it improves the deployment and scaling processes. Properly managing disposability ensures that your application can adapt to changes, recover from failures, and maintain availability in dynamic and often unpredictable cloud environments. Here’s how the “Disposability” principle is addressed in the 12-factor app methodology:

1. Fast Startup: A 12-factor app aims to start up quickly. Fast startup is crucial for scaling the application horizontally, allowing new instances to be spun up rapidly to handle increased demand.
2. Graceful Shutdown: When an instance of a 12-factor app is being shut down, it should complete any ongoing tasks and close connections gracefully. This ensures that no data is lost and that the app can be taken offline without causing disruptions.
3. Robustness in Failure: A 12-factor app is designed to handle failures effectively. When an instance crashes or becomes unresponsive, it can be terminated and replaced with a new one without affecting the overall functionality of the application.
4. Minimize Downtime: By ensuring that instances can be quickly replaced and brought back online, you minimize downtime during updates or deployments. This is particularly important in environments where users expect high availability.
5. Statelessness and External Storage: To enable easy disposability, a 12-factor app stores state externally (e.g., in databases, caches, or file storage). This way, when an instance is shut down or replaced, the application’s state remains intact.
6. Graceful Handling of Requests: During shutdown or failure, a 12-factor app should complete any in-progress requests before shutting down. This prevents data loss and ensures that clients receive appropriate responses.

For example, consider a real-time chat application built using the 12-factor app methodology. When a new version of the application needs to be deployed, the existing instances can be gradually taken offline as new instances are brought online. The fast startup and graceful shutdown mechanisms ensure that users experience minimal interruptions, and ongoing conversations are not disrupted.

By following the “Disposability” principle, your application can handle changes, failures, and updates effectively. The principle emphasizes fast startup, graceful shutdown, and robustness in failure to ensure that your application remains available and responsive in dynamic cloud environments.
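Graceful shutdown is commonly wired up with a SIGTERM handler, since SIGTERM is the signal most platforms send before stopping an instance. This sketch drains in-flight work instead of exiting immediately; the request loop is illustrative:

```python
import signal

shutting_down = False

def _on_sigterm(signum, frame):
    # Flip a flag instead of exiting: stop taking new work, finish the rest.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, _on_sigterm)

def serve(requests):
    completed = []
    for req in requests:
        if shutting_down:
            break  # refuse new requests once shutdown has been requested
        completed.append(f"done: {req}")  # finish the request in hand
    return completed
```

Combined with fast startup, this lets an orchestrator replace instances freely: it sends SIGTERM, the old instance drains, and a fresh instance is already accepting traffic.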

10. Dev/Prod Parity

Development, testing, and production environments should be as similar as possible. This minimizes the “it works on my machine” problem and reduces surprises when deploying to production. Properly managing dev/prod parity helps ensure that the behavior of your application remains consistent across different environments, reducing the chances of issues arising due to environmental differences. Here’s how the “Dev/Prod Parity” principle is addressed in the 12-factor app methodology:

1. Consistency Across Environments: A 12-factor app aims to keep development, staging, and production environments as similar as possible. This consistency ensures that what works and is tested in the development environment is likely to work the same way in production.
2. Minimizing Surprise: By maintaining dev/prod parity, you reduce the risk of unexpected behavior or bugs when your application is deployed to the production environment. This minimizes surprises and decreases the need for last-minute adjustments.
3. Avoiding Configuration Drift: Dev/prod parity helps prevent configuration drift, which can occur when configuration settings differ significantly between environments. This drift can lead to issues that are difficult to diagnose and resolve.
4. Environment-agnostic Code: A 12-factor app is designed to be environment-agnostic. This means that the codebase itself doesn’t rely on specific environment details or hardcoded configuration values that might be different in different environments.
5. Consistent Testing and Debugging: Similar environments ensure that testing and debugging are more accurate. Developers can reproduce issues reported in production more effectively in the development environment due to the consistency.

For example, consider a 12-factor app that interacts with a third-party API. In the development environment, the app uses a sandbox or test version of the API, while in production, it uses the live version. By ensuring that the API endpoints, authentication keys, and other settings are supplied through the same configuration mechanism in both environments, you reduce the chances of integration issues when deploying to production.

By adhering to the “Dev/Prod Parity” principle, your application can be more reliable, easier to troubleshoot, and less prone to unexpected issues when transitioning from development to production. The principle emphasizes maintaining consistency in configuration, behavior, and dependencies across different environments for more efficient and reliable application deployment.
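A small sketch of environment-agnostic configuration: the code path is identical everywhere, and only the exported variables differ per environment. The variable names and URLs are illustrative assumptions:

```python
import os

def load_api_config() -> dict:
    # One code path for every environment; no "if production" branches.
    return {
        "base_url": os.environ.get("API_BASE_URL",
                                   "https://sandbox.api.example.com"),
        "api_key": os.environ.get("API_KEY", "test-key"),
    }

# Development exports the sandbox URL, production exports the live URL;
# the function above never has to know which environment it is in.
config = load_api_config()
```

Because every environment is configured through the same two variables, there is nothing to drift: promoting code to production changes the variable values, never the code.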
11. Logs

Applications should generate logs as event streams, which can then be captured, aggregated, and analyzed by specialized tools. Logging is crucial for troubleshooting, monitoring, and debugging, and managing logs well is essential for maintaining the health and performance of your application. Here’s how the “Logs” principle is addressed in the 12-factor app methodology:

1. Stream Logs to STDOUT/STDERR: A 12-factor app writes its logs to the standard output (STDOUT) and standard error (STDERR) streams. This approach allows logs to be captured by the execution environment, which can then manage and route them appropriately.
2. Separation of Logs and App Behavior: Logs are kept separate from the application’s behavior. This means that your application’s code doesn’t directly manage the storage or transmission of log messages. Instead, it writes logs to the standard streams as part of its natural operation.
3. Easily Accessible and Collectible: By using STDOUT and STDERR for logs, your application can be easily configured to send its log messages to external services or tools for aggregation, analysis, and monitoring.
4. Consistent Formatting: A 12-factor app adheres to a consistent log format. This format makes it easier to parse and analyze logs across different instances and environments.
5. Aggregation and Monitoring: Logs generated by your application can be aggregated and monitored by external systems. Centralized logging services can collect logs from various instances, helping you identify patterns, diagnose issues, and ensure proper performance.
6. No Permanent Storage: In a 12-factor app, logs are considered disposable. The application doesn’t permanently store logs within its filesystem. Instead, it relies on external tools to manage log retention and storage.

For example, if you’re developing a web application following the 12-factor app methodology, your application might log messages related to user actions, errors, and system events. These log messages are written to STDOUT or STDERR, and you can configure the application’s environment to send these logs to an aggregation stack such as Elasticsearch and Logstash, or to a cloud-native logging solution.

By adhering to the “Logs” principle, your application ensures that it provides valuable insights into its behavior and performance. The principle emphasizes standardized log formatting, external log aggregation, and separation of log management from application behavior. This approach allows for efficient monitoring, troubleshooting, and maintenance of your cloud-native application.

12. Admin Processes

Administrative tasks, such as database migrations and one-time scripts, should be treated as one-off processes and run separately from the main application code. Admin processes help developers and operators manage the application’s health, data, and configuration. Here’s how admin processes can be approached in the context of the 12-factor app methodology:

1. Separation of Concerns: Admin processes are treated as separate from the regular application processes. They are designed to perform administrative tasks rather than user-facing functions.
2. Automate Administrative Tasks: Automation is key to the effective execution of admin processes. By automating tasks such as database migrations, data backups, and scaling adjustments, you reduce the risk of human error and ensure consistency.
3. Admin Commands and Scripts: Admin processes are typically triggered using specific commands or scripts that are distinct from the application’s normal runtime commands. These commands might be included as part of the application’s codebase or provided separately.
4. Environment Separation: Admin processes might require different environment configurations or access credentials than the regular application processes. Ensure that the environment for admin processes is properly isolated and secured.
5. Logging and Monitoring: Just like regular application processes, admin processes should generate logs and be monitored for successful execution and potential issues. This helps ensure the reliability of administrative tasks.
6. Security Considerations: Admin processes often involve access to sensitive resources or configuration changes. Implement appropriate security measures to control access and prevent unauthorized execution of admin tasks.

For example, consider a 12-factor app that runs a web application. Admin processes for this app might include tasks such as:

• Database Migrations: Applying database schema changes or data transformations.
• Data Backups: Automating the backup of critical data to a separate storage location.
• Scaling Adjustments: Dynamically adjusting the number of application instances based on traffic.
• Configuration Updates: Changing environment variables or configuration settings for the application.

These admin processes might be executed using specific commands or scripts, separate from the regular application’s runtime commands. The ability to automate and manage these administrative tasks efficiently is crucial for maintaining the health, reliability, and scalability of your cloud-native application.

Admin processes are, in fact, the twelfth factor of the methodology, and their proper design, automation, and separation from regular application logic reinforce the methodology’s broader principles of isolation, disposability, and environment consistency.
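A one-off migration command might look like the sketch below. The sqlite3 database, table definitions, and DATABASE_PATH variable are illustrative stand-ins for a real migration tool; the point is that the script runs separately from the web process while reading the same environment:

```python
import os
import sqlite3

# Schema changes applied by an admin process, not by the web process.
MIGRATIONS = [
    "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)",
    "CREATE TABLE IF NOT EXISTS posts"
    " (id INTEGER PRIMARY KEY, user_id INTEGER, body TEXT)",
]

def migrate(db_path: str) -> int:
    conn = sqlite3.connect(db_path)
    with conn:  # one transaction; IF NOT EXISTS keeps reruns harmless
        for statement in MIGRATIONS:
            conn.execute(statement)
    conn.close()
    return len(MIGRATIONS)

if __name__ == "__main__":
    # Same config mechanism as the app itself: the environment decides
    # which database the one-off process touches.
    migrate(os.environ.get("DATABASE_PATH", ":memory:"))
```

Run against staging or production by exporting a different DATABASE_PATH, exactly as the regular app processes are configured.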
