Slides on "Effective Terraform" from the SF Devops for Startups Meetup
https://www.meetup.com/SF-DevOps-for-Startups/events/237272658/
How Scylla Makes Adding and Removing Nodes Faster and Safer - ScyllaDB
When a new node is added or removed, Scylla has to transfer part of the existing data from some nodes to their neighbors. When a node fails, Scylla has to repopulate its data from the surviving replicas. Those operations are collectively referred to as "streaming" operations, since they simply stream data from one node to another without using the opportunity to also fix discrepancies in the data. This is in contrast with the repair operation, which looks into all existing replicas and reconciles their contents. Scylla is moving towards unifying those two operations. In this talk we will discuss why this is considered beneficial, and what other possibilities it opens to users.
This slide deck describes what KIND (Kubernetes IN Docker) is and how to set it up to get a simple, fast environment for Kubernetes testing. It also addresses a few issues that have to be fixed to make KIND work, such as the certificate issue and the DNS issue.
Grafana Mimir and VictoriaMetrics: Performance Tests - Roman Khavronenko
VictoriaMetrics and Grafana Mimir are time series databases with support for mostly the same protocols and APIs. However, they have different architectures and components, which makes the comparison more complicated. In the talk, we'll go through the details of the benchmark where I compared both solutions. We'll see how VictoriaMetrics and Mimir deal with identical workloads and how efficiently they use the allocated resources.
The talk will cover design and architectural details, weak and strong points, trade-offs, and maintenance complexity of both solutions.
Best Practices of Infrastructure as Code with Terraform - DevOps.com
When your organization is moving to the cloud, the infrastructure layer transitions from running dedicated servers at limited scale to a dynamic environment, where you can easily adjust to growing demand by spinning up thousands of servers and scaling them down when not in use.
The future of DevOps is infrastructure as code. Infrastructure as code supports the growth of infrastructure and provisioning requests. It treats infrastructure as software: code that can be re-used, tested, automated and version controlled. HashiCorp Terraform applies infrastructure as code throughout its tooling to prevent configuration drift, manage immutable infrastructure and much more!
Join this webinar to learn why infrastructure as code is the answer to managing large-scale, distributed systems and service-oriented architectures. We will cover key use cases, a demo of how to use infrastructure as code to provision your infrastructure, and more:
Agenda:
Intro to Infrastructure as Code: Challenges & Use cases
Writing Infrastructure as Code with Terraform
Collaborating with Teams on Infrastructure
Accelerating Envoy and Istio with Cilium and the Linux Kernel - Thomas Graf
The document discusses how Cilium can accelerate Envoy and Istio by using eBPF/XDP to provide transparent acceleration of network traffic between Kubernetes pods and sidecars without any changes required to applications or Envoy. Cilium also provides features like service mesh datapath, network security policies, load balancing, and visibility/tracing capabilities. BPF/XDP in Cilium allows for transparent TCP/IP acceleration during the data phase of communications between pods and sidecars.
This document discusses infrastructure as code and the HashiCorp ecosystem. Infrastructure as code allows users to define and provision infrastructure through code rather than manual configuration. It can be used to launch, create, change, and downscale infrastructure based on configuration files. Tools like Terraform allow showing what changes will occur before applying them through files like main.tf and variables.tf. Terraform is part of the broader HashiCorp ecosystem of tools.
Terraform is an open-source tool for building, changing, and versioning infrastructure safely and efficiently. It allows users to define and provision datacenter infrastructure using a high-level configuration language known as HashiCorp Configuration Language (HCL). Key features of Terraform include support for multiple cloud providers and services, a declarative and reproducible workflow, and maintaining infrastructure as code with immutable infrastructure. It works from configuration files, written in HCL, that specify what resources need to be created; Terraform uses these files to create and manage resources like VMs, networks, storage, and containers across multiple cloud platforms.
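As a rough illustration of that declarative style, here is a minimal HCL sketch with one provider and one resource; the region, AMI ID, and tags are placeholder values, not examples taken from the deck:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # placeholder region
}

# Declarative: we state the desired instance and Terraform works out
# the create/update/delete steps needed to reach that state.
resource "aws_instance" "web" {
  ami           = "ami-12345678" # hypothetical AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "example-web"
  }
}
```

Running terraform plan against a file like this previews the changes before terraform apply makes them.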
Scylla Summit 2022: Making Schema Changes Safe with Raft - ScyllaDB
ScyllaDB adopted Raft as a consensus protocol in order to dramatically improve our operational aspects as well as provide strong consistency to the end-user. This talk will explain how Raft behaves in Scylla Open Source 5.0 and introduce the first end-user visible major improvement: schema changes. Learn how cluster configuration resides in Raft, providing consistent cluster assembly and configuration management. This makes bootstrapping safer and provides reliable disaster recovery when you lose the majority of the cluster.
To watch all of the recordings hosted during Scylla Summit 2022 visit our website here: https://www.scylladb.com/summit.
This document provides an overview of GitOps and summarizes a training session on the topic. The session covered Kubernetes and Git basics, the motivation and model for GitOps, an example of GitOps in action using Flux on a training environment, progressive delivery techniques like Flagger, and challenges with GitOps adoption. The goals were to explain what GitOps is, understand benefits, gain hands-on experience, and decide if it's right for a team/project. GitOps aims to use Git as the single source of truth for infrastructure and automate deployments by reconciling production with the code repository.
This document discusses Terraform, an open-source infrastructure as code tool. It begins by explaining how infrastructure can be defined and managed as code through services that have APIs. It then provides an overview of Terraform, including its core concepts of providers, resources, and data sources. The document demonstrates Terraform's declarative configuration syntax and process of planning and applying changes. It also covers features like modules, state management, data sources, and developing custom plugins.
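To make those three core concepts concrete, here is a small sketch: a provider, a data source performing a read-only lookup, and a resource consuming it. The filter values and instance type are illustrative, not taken from the document:

```hcl
provider "aws" {
  region = "eu-west-1" # placeholder region
}

# Data source: read-only lookup of the newest Amazon Linux 2 AMI,
# instead of hard-coding an image ID.
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Resource: an object whose full lifecycle Terraform manages.
resource "aws_instance" "app" {
  ami           = data.aws_ami.amazon_linux.id
  instance_type = "t3.micro"
}
```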
An overview and introduction to Hashicorp's Terraform for the Chattanooga ChaDev Lunch.
https://www.youtube.com/watch?v=p2ESyuqPw1A
This document provides an overview of advanced Docker topics including Docker installation, Docker networking using bridges and volumes, and creating Dockerfiles. It discusses installing Docker on CentOS, the different types of Docker networks including bridge, host, overlay and macvlan. It also covers creating and managing Docker volumes, starting containers with volumes, and creating Dockerfiles with components like FROM, RUN, COPY and ENTRYPOINT.
Watch this talk here: https://www.confluent.io/online-talks/apache-kafka-architecture-and-fundamentals-explained-on-demand
This session explains Apache Kafka’s internal design and architecture. Companies like LinkedIn are now sending more than 1 trillion messages per day to Apache Kafka. Learn about the underlying design in Kafka that leads to such high throughput.
This talk provides a comprehensive overview of Kafka architecture and internal functions, including:
-Topics, partitions and segments
-The commit log and streams
-Brokers and broker replication
-Producer basics
-Consumers, consumer groups and offsets
This session is part 2 of 4 in our Fundamentals for Apache Kafka series.
Aggregated queries with Druid on terabytes and petabytes of data - Rostislav Pashuto
The document discusses Druid, an open-source distributed column-oriented data store designed for low-latency queries on large datasets. It outlines Druid's capabilities for real-time ingestion, sub-second aggregation queries, and storing petabytes of historical data. Examples are given of companies like Netflix and PayPal using Druid at large scale to analyze streaming data. The key components, data formats, and query types of Druid are described.
Migrating your clusters and workloads from Hadoop 2 to Hadoop 3 - DataWorks Summit
The Hadoop community announced Hadoop 3.0 GA in December 2017 and 3.1 around April 2018, loaded with a lot of features and improvements. One of the biggest challenges for any new major release of a software platform is its compatibility. The Apache Hadoop community has focused on ensuring wire and binary compatibility for Hadoop 2 clients and workloads.
There are many challenges to be addressed by admins while upgrading to a major release of Hadoop. Users running workloads on Hadoop 2 should be able to seamlessly run or migrate their workloads onto Hadoop 3. This session will dive deep into upgrade aspects and provide a detailed preview of migration strategies, with information on what works and what might not. The talk will cover the motivation for upgrading to Hadoop 3 and provide a cluster upgrade guide for admins and a workload migration guide for users of Hadoop.
Speaker
Suma Shivaprasad, Hortonworks, Staff Engineer
Rohith Sharma, Hortonworks, Senior Software Engineer
This document discusses using Azure DevOps and Snowflake to enable continuous integration and continuous deployment (CI/CD) of database changes. It covers setting up source control in a repository, implementing pull requests for code reviews, building deployment artifacts in a build pipeline, and deploying artifacts to development, test, and production environments through a release pipeline. The document also highlights key Snowflake features like zero-copy cloning that enable testing deployments before production.
These are the slides for a talk/workshop delivered to the Cloud Native Wales user group (@CloudNativeWal) on 2019-01-10.
In these slides, we go over some principles of gitops and a hands on session to apply these to manage a microservice.
You can find out more about GitOps online https://www.weave.works/technologies/gitops/
Chef vs Puppet vs Ansible vs Saltstack | Configuration Management Tools | Dev... - Simplilearn
This presentation "Chef vs Puppet vs Ansible vs Saltstack" will compare the DevOps configuration management tools Chef, Puppet, Ansible and Saltstack in terms of their capabilities, architecture, performance, ease of setup, language, scalability and pros and cons. Chef is a configuration management tool written in Ruby and Erlang. Puppet is an open-source software configuration management tool that runs on many Unix-like systems and also Windows. Ansible is yet another tool that automates software provisioning, configuration management, and application deployment. Saltstack is a Python-based open-source configuration management tool. Now, let us get started and get to know which is the best configuration management platform among Chef, Puppet, Ansible and Saltstack.
Below are the contents of our "Chef vs Puppet vs Ansible vs Saltstack" configuration management tools comparison slides:
1) Need for Configuration Management Tools
2) Chef - Infrastructure, Architecture, Pros and Cons
3) Puppet- Infrastructure, Architecture, Pros and Cons
4) Ansible - Infrastructure, Architecture, Pros and Cons
5) Saltstack - Infrastructure, Architecture, Pros and Cons
6) Comparison on the basis of architecture, ease of setup, language, scalability, management and interoperability.
Why learn DevOps?
Simplilearn's DevOps training course is designed to help you become a DevOps practitioner and apply the latest in DevOps methodology to automate your software development lifecycle right out of the class. You will master configuration management; continuous integration, deployment, delivery and monitoring using DevOps tools such as Git, Docker, Jenkins, Puppet and Nagios in a practical, hands-on and interactive approach. The DevOps training course focuses heavily on the use of Docker containers, a technology that is revolutionizing the way apps are deployed in the cloud today and is a critical skillset to master in the cloud age.
After completing the DevOps training course you will achieve hands-on expertise in various aspects of the DevOps delivery model. The practical learning outcomes of this DevOps training course are:
An understanding of DevOps and the modern DevOps toolsets
The ability to automate all aspects of a modern code delivery and deployment pipeline using:
1. Source code management tools
2. Build tools
3. Test automation tools
4. Containerization through Docker
5. Configuration management tools
6. Monitoring tools
Who should take this course?
DevOps career opportunities are thriving worldwide. DevOps was featured as one of the 11 best jobs in America for 2017, according to CBS News, and data from Payscale.com shows that DevOps Managers earn as much as $122,234 per year, with DevOps engineers making as much as $151,461.
Learn more at https://www.simplilearn.com/cloud-computing/devops-practitioner-certification-training
Getting Started: Intro to Telegraf - July 2021 - InfluxData
In this training webinar, Samantha Wang will walk you through the basics of Telegraf. Telegraf is the open source server agent which is used to collect metrics from your stacks, sensors and systems. It is InfluxDB’s native data collector that supports nearly 300 inputs and outputs. Learn how to send data from a variety of systems, apps, databases and services in the appropriate format to InfluxDB. Discover tips and tricks on how to write your own plugins. The know-how learned here can be applied to a multitude of use cases and sectors. This one-hour session will include the training and time for live Q&A.
Join this training as Samantha Wang dives into:
Types of Telegraf plugins (i.e. input, output, aggregator and processor)
Specific plugins including Execd input plugins and the Starlark processor plugin
How to install and start using Telegraf
Apache Kafka is becoming the message bus for transferring huge volumes of data from various sources into Hadoop.
It's also enabling many real-time system frameworks and use cases.
Managing and building clients around Apache Kafka can be challenging. In this talk, we will go through best practices for deploying Apache Kafka in production: how to secure a Kafka cluster, how to pick topic partitions, upgrading to newer versions, and migrating to the new Kafka producer and consumer APIs.
We will also talk about best practices for running producers and consumers.
In the Kafka 0.9 release, we've added SSL wire encryption, SASL/Kerberos for user authentication, and pluggable authorization. Kafka now allows authentication of users and access control on who can read and write to a Kafka topic. Apache Ranger also uses a pluggable authorization mechanism to centralize security for Kafka and other Hadoop ecosystem projects.
We will showcase an open-sourced Kafka REST API and an admin UI that help users create topics, reassign partitions, issue Kafka ACLs, and monitor consumer offsets.
My talk at FullStackFest, 4.9.2017. Become more familiar with managing infrastructure using Terraform, Packer and deployment pipeline. Code repository - https://github.com/antonbabenko/terraform-deployment-pipeline-talk
Modern cloud-native applications are incredibly complex systems. Keeping the systems healthy and meeting SLAs for our customers is crucial for long-term success. In this session, we will dive into the three pillars of observability (metrics, logs, tracing), the foundation of successful troubleshooting in distributed systems. You'll learn the gotchas and pitfalls of rolling out the OpenTelemetry stack on Kubernetes to effectively collect all your signals without worrying about vendor lock-in. Additionally, we will replace parts of the Prometheus stack, scraping metrics with the OpenTelemetry collector and operator instead.
Infrastructure-as-Code (IaC) Using Terraform (Advanced Edition) - Adin Ermie
In this new presentation, we will cover advanced Terraform topics (full-on DevOps). We will compare the deployment of Terraform using Azure DevOps, GitHub/GitHub Actions, and Terraform Cloud. We wrap everything up with some key takeaway learning resources in your Terraform learning adventure.
NOTE: A recording of this presentation is available here: https://www.youtube.com/watch?v=fJ8_ZbOIdto&t=5574s
This document provides an overview of Terraform, an open-source infrastructure as code tool. It discusses Terraform's goals of providing a unified view of infrastructure, composing multiple tiers of infrastructure from IaaS to PaaS to SaaS, and safely changing infrastructure over time with one workflow. Key features highlighted include being open source, using infrastructure as code, resource providers that interface with cloud APIs, and the plan and apply workflow. The document also covers topics like collaboration and version history in Terraform Enterprise, file examples, the plan and apply commands, resource providers, and new features in recent Terraform versions like destroy provisioners, remote backends, state locking, and state environments.
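As a sketch of the remote backend and state locking features mentioned above, here is a minimal backend block; the bucket and table names are placeholders, and both must exist before terraform init:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"             # placeholder bucket
    key            = "prod/network/terraform.tfstate" # state object path
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"                # placeholder lock table
    encrypt        = true
  }
}
```

The DynamoDB table provides the state lock, so two concurrent applies cannot corrupt the shared state.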
A comprehensive walkthrough of how to manage infrastructure-as-code using Terraform. This presentation includes an introduction to Terraform, a discussion of how to manage Terraform state, how to use Terraform modules, an overview of best practices (e.g. isolation, versioning, loops, if-statements), and a list of gotchas to look out for.
For a written and more in-depth version of this presentation, check out the "Comprehensive Guide to Terraform" blog post series: https://blog.gruntwork.io/a-comprehensive-guide-to-terraform-b3d32832baca
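One of the patterns the guide covers is emulating an if-statement with count. A minimal sketch, assuming a hypothetical enable_monitoring flag and illustrative alarm settings:

```hcl
variable "enable_monitoring" {
  type    = bool
  default = false
}

# "If-statement" via count: the alarm exists only when the flag is true.
resource "aws_cloudwatch_metric_alarm" "cpu_high" {
  count = var.enable_monitoring ? 1 : 0

  alarm_name          = "cpu-high" # placeholder name
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 80
  period              = 300
  evaluation_periods  = 2
}
```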
Infrastructure-as-Code (IaC) using Terraform - Adin Ermie
Learn the benefits of Infrastructure-as-Code (IaC), what Terraform is and why people love it, along with a breakdown of the basics (including live demo deployments). Then wrap up with a comparison of Azure Resource Manager (ARM) templates versus Terraform, consider some best practices, and walk away with some key resources in your Terraform learning adventure.
Learn everything you need to know about Terraform, infrastructure as code and cloud computing with Brainboard.
Learn more: https://www.brainboard.co/
This document provides an overview of Terraform and infrastructure as code. It discusses what Terraform is and how to get started: initializing a Terraform configuration, planning and applying changes, variables, modules, providers and resources. It also covers Terraform state and locking state for multi-user collaboration.
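A minimal sketch of the variables and outputs that overview mentions; the bucket naming scheme is hypothetical:

```hcl
variable "environment" {
  type        = string
  description = "Deployment environment name"
  default     = "dev"
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "myapp-artifacts-${var.environment}" # placeholder bucket name
}

# Outputs surface values after apply, e.g. for other tools or stacks.
output "artifact_bucket" {
  value = aws_s3_bucket.artifacts.bucket
}
```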
Container Days Boston - Kubernetes in production - Mike Splain
Kubernetes in Production, From the Ground Up discusses setting up a Kubernetes cluster from scratch using CoreOS, etcd, Docker, and Terraform. The document outlines setting up etcd for high availability using an autoscaling group, configuring the Kubernetes master nodes to run the API server, scheduler, and controller manager as pods, and deploying worker nodes that run the kubelet to join the cluster. The process involves using cloud-init scripts, Terraform, and container images to automate the installation and configuration of all cluster components in a scalable and resilient way.
This document contains biographical information about Boulos Dib, an independent consultant specializing in software development. It provides details about Dib's early experience with personal computers and programming languages. It also lists upcoming presentations by Dib on LightSwitch and Silverlight at the NYC Code Camp in October 2011. The document concludes with an overview of PowerShell scripting.
The document introduces Terraform as an infrastructure as code tool for defining and provisioning cloud infrastructure resources. It discusses some problems with manually configuring infrastructure through cloud provider consoles. It then provides an overview of Terraform concepts like providers, resources, modules, variables, outputs, and remote backends. Examples are given of defining AWS instances, security groups, and Route53 records with Terraform configuration files.
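In that spirit, here is a hedged sketch of an instance, security group, and Route53 record wired together; the AMI, zone ID, and domain name are placeholders, not values from the talk:

```hcl
resource "aws_security_group" "web" {
  name = "web-sg"

  # Allow HTTPS in from anywhere.
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "web" {
  ami                    = "ami-12345678" # hypothetical AMI ID
  instance_type          = "t3.micro"
  vpc_security_group_ids = [aws_security_group.web.id]
}

# DNS record pointing at the instance's public IP.
resource "aws_route53_record" "web" {
  zone_id = "Z0000000EXAMPLE" # placeholder hosted zone ID
  name    = "www.example.com"
  type    = "A"
  ttl     = 300
  records = [aws_instance.web.public_ip]
}
```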
This document provides an overview and tutorial on using Terraform for DevOps. It introduces Terraform as a tool for defining and managing infrastructure as code. It then covers installing Terraform, deploying AWS infrastructure like EC2 instances using Terraform configurations, managing variables and outputs, using provisioners, organizing code with modules and workspaces, and managing Terraform state. The document aims to help users get started with Terraform for infrastructure as code.
How to test infrastructure code: automated testing for Terraform, Kubernetes,... - Yevgeniy Brikman
This talk is a step-by-step, live-coding class on how to write automated tests for infrastructure code, including the code you write for use with tools such as Terraform, Kubernetes, Docker, and Packer. Topics covered include unit tests, integration tests, end-to-end tests, test parallelism, retries, error handling, static analysis, and more.
https://www.youtube.com/watch?v=IeweKUdHJc4
My presentation from HashiConf 2017, discussing our use of Terraform and the techniques we use to make it safe and accessible.
Presentation used at the first AWS meetup of the Valencia user group.
Infrastructure as code using Terraform. It shows the main features of this technology, which lets us be more agile and deploy our platforms on AWS faster.
This document discusses using Terraform to provision Datadog monitoring tools. Terraform allows for infrastructure as code to manage cloud services. Datadog provides dashboards and alerts to monitor infrastructure and applications. The document outlines installing Terraform, using Terraform providers like Datadog, creating template variables, and implementing basic Datadog resources like dashboards and monitors through Terraform.
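A minimal sketch of a Datadog monitor managed through Terraform, assuming the Datadog provider; the query, thresholds, and notification handle are illustrative, not taken from the document:

```hcl
variable "datadog_api_key" {
  type      = string
  sensitive = true
}

variable "datadog_app_key" {
  type      = string
  sensitive = true
}

provider "datadog" {
  api_key = var.datadog_api_key
  app_key = var.datadog_app_key
}

# A metric alert: fires when average CPU per host exceeds 80%.
resource "datadog_monitor" "cpu" {
  name    = "High CPU on {{host.name}}"
  type    = "metric alert"
  message = "CPU above threshold. Notify: @ops-team" # placeholder handle
  query   = "avg(last_5m):avg:system.cpu.user{*} by {host} > 80"

  monitor_thresholds {
    warning  = 70
    critical = 80
  }
}
```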
This document provides an overview of Terraform including its key features, installation process, and common usage patterns. Terraform allows infrastructure to be defined as code and treated similarly to other code. It generates execution plans to avoid surprises when provisioning resources. Complex changes can be automated while avoiding human errors. The document covers installing Terraform, deploying AWS EC2 instances, variables, outputs, modules, and workspaces. It demonstrates how Terraform can be used to provision and manage infrastructure in a safe, efficient manner.
The document discusses Terraform, an infrastructure as code tool. It covers installing Terraform, deploying infrastructure like EC2 instances using Terraform configuration files, destroying resources, and managing Terraform state. Key topics include authentication with AWS for Terraform, creating a basic EC2 instance, validating and applying configuration changes, and storing state locally versus remotely.
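Once state lives remotely, one stack can read another's outputs. A sketch with placeholder bucket, key, and output names (the private_subnet_id output is assumed to be defined by the other stack):

```hcl
data "terraform_remote_state" "network" {
  backend = "s3"

  config = {
    bucket = "my-terraform-state"             # placeholder bucket
    key    = "prod/network/terraform.tfstate" # other stack's state
    region = "us-east-1"
  }
}

resource "aws_instance" "app" {
  ami           = "ami-12345678" # hypothetical AMI ID
  instance_type = "t3.micro"
  # Consume an output exposed by the network stack.
  subnet_id = data.terraform_remote_state.network.outputs.private_subnet_id
}
```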
"Modern DevOps & Real Life Applications. 3.0.0-devops+20230318", Igor Fesenko Fwdays
In this presentation, I will recount the challenges my team and I faced over the past year and how we overcame obstacles and achieved our goals.
Even for greenfield projects, our journey had its share of surprises. Despite living in 2023, we still encountered common problems that many think are long-solved. I'll delve into topics such as defining DNS zones across multiple environments, dealing with ineffective build artifacts from mature development teams, GitHub Actions for CI/CD, versioning, cost optimization, and the potential pitfalls of adopting GitOps in combination with a "reuse as much as possible" mentality.
So, join me as we explore the trade-offs and lessons learned from these real-world scenarios. With this presentation, you'll gain valuable insights to help you navigate similar challenges quickly and confidently.
Introductory Overview to Managing AWS with Terraform - Michael Heyns
The document provides an overview of Terraform including:
- Terraform is an open source tool from HashiCorp that allows defining and provisioning infrastructure in a code-based declarative way across multiple cloud platforms and services.
- Key concepts include providers that define cloud resources, configuration files that declare the desired state, and a plan-apply workflow to provision and manage infrastructure resources.
- Common Terraform commands such as init, plan, apply, destroy and output are explained, along with their usage.
Manage any AWS resources with Terraform 0.12 - April 2020 - Anton Babenko
The document discusses managing AWS resources using Terraform. It introduces Terraform 0.12 and its new features. It also summarizes ways to manage non-natively supported AWS resources and GitHub resources using Terraform modules, Terragrunt, and other tools. The document promotes visualizing infrastructure using Cloudcraft and generating Terraform code.
This document summarizes Anton Babenko's presentation on Terraform 0.12 and Terragrunt. Some key points include:
- Terraform 0.12 includes improvements like HCL2 syntax, loops and dynamic blocks that make configurations easier to write and maintain.
- Terragrunt is useful for orchestrating Terraform modules and enforcing best practices and standards.
- Modules.tf is a tool that can generate Terraform configurations from visual diagrams created in Cloudcraft, potentially providing ready-to-use infrastructure code.
Terraform Best Practices - DevOps Unicorns 2019 - Anton Babenko
Terraform best practices include using modules to break infrastructure into reusable components, structuring configurations in a one-in-one approach with directories for each module, and avoiding workspaces in favor of additional modules. Terraform 0.12 benefits developers most through features like loops and conditionals that enable more flexible modules, while users appreciate minor syntax improvements. The presentation emphasizes reusability, separation of concerns, and standardization through open-source modules.
Terraform AWS modules and some best practices - September 2019 - Anton Babenko
Slides from my meetup talks at various AWS and DevOps meetups.
Follow me:
https://twitter.com/antonbabenko
https://github.com/antonbabenko
https://linkedin.com/in/antonbabenko
What you see is what you get for AWS infrastructure - Anton Babenko
Cloud architects and DevOps engineers want tools that allow for faster development and deployment. Infrastructure as code principles treat infrastructure like code, enabling validation and knowing what changes were made. Open-source tools like Terraform, cloudcraft.co and the Terraform AWS modules help architects and engineers visualize, code, and build AWS infrastructure in a standardized way. Modules.tf is a free, open-source tool that generates Terraform code from cloudcraft.co diagrams to help bootstrap infrastructure setup.
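Consuming one of those terraform-aws-modules looks roughly like this; the name, CIDRs, and availability zones are illustrative:

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "demo-vpc"       # placeholder name
  cidr = "10.0.0.0/16"

  azs             = ["eu-west-1a", "eu-west-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]
}
```

Pinning the module version keeps upgrades deliberate rather than accidental.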
Terraform AWS modules and some best-practices - May 2019 - Anton Babenko
Slides from my meetup talks during meetups in Germany. Follow me:
https://twitter.com/antonbabenko
https://github.com/antonbabenko
Terraform modules and some of best-practices - March 2019 - Anton Babenko
This document summarizes best practices for using Terraform modules. It discusses:
- Writing resource modules to version infrastructure instead of individual resources
- Using infrastructure modules to enforce tags, standards and preprocessors
- Calling modules in a 1-in-1 structure for smaller blast radii and dependencies
- Using Terragrunt for orchestration to call modules dynamically (see the sketch after this list)
- Working with Terraform code by using lists, JSONnet, and preparing for Terraform 0.12
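A minimal terragrunt.hcl sketch for the Terragrunt orchestration point above; the module source, version, and inputs are placeholders:

```hcl
# Pull shared settings (e.g. remote state config) from a parent terragrunt.hcl.
include {
  path = find_in_parent_folders()
}

# Which module to run in this directory, pinned to a version.
terraform {
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-vpc.git?ref=v5.0.0"
}

# Inputs passed through to the module as variables.
inputs = {
  name = "demo-vpc"     # placeholder name
  cidr = "10.0.0.0/16"  # placeholder CIDR
}
```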
What you see is what you get for AWS infrastructure - Anton Babenko
This document discusses tools for cloud architects to design and implement infrastructure as code. It recommends using cloudcraft.co to visually design infrastructure, terraform-aws-modules for reusable AWS components, and Terraform to define and deploy infrastructure as code. It also introduces modules.tf, an open-source tool that generates Terraform configurations from cloudcraft diagrams to help bootstrap infrastructure as code projects.
Gotchas using Terraform in a secure delivery pipeline - Anton Babenko
Terraform can be used in a secure CI/CD pipeline for infrastructure as code. Key aspects include using Terraform modules for reuse, configuring a CI/CD pipeline for automated testing and deployment, and ensuring proper access control and secrets management. Gotchas to watch out for involve remote state, dependencies, and granting least privilege access. Design patterns like resource modules, infrastructure modules, and composition can help structure the code.
1. The document discusses an upcoming meetup on Terraform 0.12. It provides an agenda that includes an overview of Terraform 0.12 features, examples of using Terraform 0.12, and a Q&A session.
2. The speaker, Anton Babenko, is introduced. He is described as a Terraform and AWS expert who contributes to open source Terraform projects.
3. New features in Terraform 0.12 discussed include first-class expressions, for expressions, dynamic blocks, generalized splat operators, conditional improvements, and references as first-class values. Backward compatibility and impacts to providers and modules are also covered. A sketch of some of these features follows below.
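A short sketch of those 0.12 features, combining a for expression with a dynamic block; the port list is illustrative:

```hcl
variable "ingress_ports" {
  type    = list(number)
  default = [80, 443]
}

resource "aws_security_group" "web" {
  name = "web-sg" # placeholder name

  # dynamic block: one ingress rule per port, no copy-paste.
  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}

# for expression with first-class expressions (no "${}" wrapping needed
# outside the string interpolation itself).
output "port_labels" {
  value = [for p in var.ingress_ports : "tcp-${p}"]
}
```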
Terraform modules and best-practices - September 2018 - Anton Babenko
Slides for my "Terraform modules and best-practices" talk on meetups during September 2018.
Some links from the slides:
https://www.terraform-best-practices.com/
https://cloudcraft.co/
https://github.com/terraform-aws-modules/
https://github.com/antonbabenko/modules.tf-lambda
Building infrastructure as code using Terraform - DevOps Krakow - Anton Babenko
This document provides an overview of a DevOps meetup on building infrastructure as code using Terraform. The agenda includes Terraform basics, frequent questions, and problems. The presenter then discusses Terraform modules, tools, and solutions. He addresses common questions like secrets handling and integration with other tools. Finally, he solicits questions from the audience on Terraform use cases and challenges.
Terraform Q&A - HashiCorp User Group OsloAnton Babenko
This document summarizes a meetup for the HashiCorp User Group in Oslo. The meetup agenda includes an introduction to the user group, a Terraform Q&A session, and opportunities for attendees to become speakers. The document also provides answers to some frequent Terraform questions, such as why to use Terraform over other infrastructure as code tools and how to handle secrets. Additional resources are referenced for learning more about Terraform best practices and tools.
The document discusses the role and skills of a DevOps engineer. It notes that a DevOps engineer combines software engineering skills like coding with operations tasks like deploying, running, maintaining, monitoring and logging infrastructure. The document traces the evolution of a software developer who gains these additional operational skills to become a DevOps engineer. It emphasizes that DevOps engineers work to solve problems through skills like infrastructure as code and progressive learning. The document promotes leaving one's comfort zone and focusing on identifying real problems to solve.
This document discusses continuous delivery in AWS. It defines continuous integration as regularly merging code changes into a central repository, after which automated builds and tests run. Continuous delivery is described as automatically building, testing, and preparing code changes for release to production. Benefits of continuous integration and continuous delivery include automating the software release process, improving developer productivity, and finding and addressing bugs earlier. The document provides links to additional resources on these topics.
This document discusses tool selection for development teams. It recommends that small teams start with a few free, open-source tools with a small learning curve. For large teams, it suggests evaluating where other tools may provide better solutions than over-engineering. It also advises considering automation and orchestration for small teams using many tools, and knowledge sharing across large teams using many tools. The document emphasizes trying existing tools before building custom solutions, and considering costs, community support, compatibility, and fit when selecting tools.
AWS CodeDeploy is a fully managed deployment service that allows deploying code and applications to EC2 instances and on-premise servers. It is technology agnostic and supports deploying from Amazon S3 buckets or GitHub repositories. The document provides an overview of CodeDeploy, including how to get started, the execution flow using appspec.yml files, deployment configurations and groups, and considerations for using CodeDeploy.
4. AGENDA
1. State of things
2. Basics of Terraform and Packer
Getting started demo
3. More advanced concepts in Terraform
Practice
4. Working as a team
CI/CD pipeline with Terraform and Packer
Practice
5. Resources
12. Year 2015

                                       CloudFormation   Terraform
    Configuration format               JSON             HCL/JSON
    State management                   No               Yes
    Execution control                  No               Yes!
    Logical comparisons                Yes              Limited
    Supports iterations                No               Yes
    Manage already created resources   No               Yes (hard)
    Providers supported                Only AWS         20+ (incl. AWS, GCE, Azure)
13. Year 2017

                                       CloudFormation   Terraform
    Configuration format               YAML/JSON        HCL/JSON
    State management                   Kind of          Yes
    Execution control                  Yes              Yes!
    Logical comparisons                Yes              Yes
    Supports iterations                Yes              Yes
    Manage already created resources   No               Yes!
    Providers supported                Only AWS         60+ (incl. AWS, GCE, Azure)
14.

                                       CloudFormation (2015)   Terraform 0.6.8 (2015)   Terraform 0.9.4 (2017)
    AWS resource types                 121                     103                      280
    Resource properties and
      operations completeness          90%                     Work in progress         Work in progress :)
    Handle failures                    Optional rollback       Fix it & retry           Exit faster. Fix it & retry
    Contribute?                        No                      Yes!                     Yes!
15. AWS SPECIFICS
16. TERRAFORM COMMANDS
$ terraform
Usage: terraform [--version] [--help] <command> [args]
Common commands:
apply Builds or changes infrastructure
console Interactive console for Terraform interpolations
destroy Destroy Terraform-managed infrastructure
env Environment management
fmt Rewrites config files to canonical format
get Download and install modules for the configuration
graph Create a visual graph of Terraform resources
import Import existing infrastructure into Terraform
init Initialize a new or existing Terraform configuration
output Read an output from a state file
plan Generate and show an execution plan
push Upload this Terraform module to Atlas to run
refresh Update local state file against real resources
show Inspect Terraform state or plan
taint Manually mark a resource for recreation
untaint Manually unmark a resource as tainted
validate Validates the Terraform files
version Prints the Terraform version
All other commands:
debug Debug output management (experimental)
force-unlock Manually unlock the terraform state
state Advanced state management
17. TERRAFORM INIT
Initialize a new or existing Terraform environment by creating initial files, loading any remote state, downloading modules, etc.
[Diagram: your *.tf files describe the infrastructure; the resulting terraform.tfstate can live locally or in a remote backend such as S3, Atlas, Consul, etcd, or HTTP.]
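A minimal sketch of pointing state at a remote backend (Terraform 0.9+ syntax; bucket and key names here are hypothetical, and older releases used the "terraform remote config" command instead):

terraform {
  backend "s3" {
    bucket = "my-tfstate-bucket"      # hypothetical bucket name
    key    = "prod/terraform.tfstate" # hypothetical state path
    region = "eu-west-1"
  }
}

Running "terraform init" after adding this block offers to copy the existing local state into the backend.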
24. TERRAFORM - MODULES
Modules in Terraform are self-contained packages of Terraform configurations that are managed as a group.
Links:
https://github.com/terraform-community-modules/
Lots of GitHub repositories (588 at the time of this talk)
module "network_security" {
source = "git::[email protected]:myself/tf_modules.git//modules/network/security?ref=v1.0.0"
vpc_cidr = "${var.vpc_cidr}"
}
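For context, the calling side above pairs with module code roughly like this; everything below is a hypothetical sketch of modules/network/security, not the actual repository contents:

# Input passed in by the caller
variable "vpc_cidr" {
  description = "VPC CIDR block this module secures"
}

# Security group allowing traffic originating inside the VPC
resource "aws_security_group" "internal" {
  name        = "internal"
  description = "Allow all traffic from inside the VPC"

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["${var.vpc_cidr}"]
  }
}

# Output consumed by the caller, e.g. "${module.network_security.security_group_id}"
output "security_group_id" {
  value = "${aws_security_group.internal.id}"
}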
25. TERRAFORM - VARIABLES
Terraform != programming language
Types: string, number, boolean, list, map
Interpolation functions: length, element, file …
Interpolation is not allowed everywhere
Links:
https://www.terraform.io/docs/configuration/syntax.html
variable "iam_users" {
description = "List of IAM users to create"
type = "list"
}
resource "aws_iam_user" "users" {
count = "${length(var.iam_users)}"
name = "${element(var.iam_users, count.index)}"
}
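To feed the list, a terraform.tfvars file (loaded automatically by plan and apply) could contain hypothetical user names such as:

iam_users = ["alice", "bob", "carol"]

With three elements, count becomes 3 and Terraform creates aws_iam_user.users.0 through aws_iam_user.users.2.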
34. TERRAFORM HOW?
● How to structure your configs?
Reduce the blast radius
Size matters a lot
Structure based on teams (infrastructure team members = network; developers = module owners)
Separate repositories for modules and infrastructure (see the layout sketch below)
Infrastructure can share the same repository as the application
● How to continuously test infrastructure using Terraform?
Validate, plan, env
Test modules independently; include working examples and a README
Test Kitchen, InSpec, Serverspec…
Full run with smaller (yet sane!) values
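One illustrative layout along those lines (directory and file names are hypothetical, not prescriptive):

modules-repo/
  network/
    security/
      main.tf
      variables.tf
      outputs.tf
      examples/basic/main.tf   # working example, doubles as a test fixture
      README.md
infrastructure-repo/
  staging/
    eu-west-1/
      main.tf
  prod/
    eu-west-1/
      main.tf

Tagging the modules repository (v1.0.0, …) lets the infrastructure repository pin versions through the ?ref= parameter shown in the module example earlier.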
35. TERRAFORM WORK FLOW
Init, plan, apply, apply, plan, apply…
Executors:
Single developer
Multiple developers
Requires remote backend configuration (locks for lengthy operations)
CI system
Notes:
MFA?
Module versioning is important
Group code by both region and environment (staging, prod)
36. TERRAFORM WORK FLOW
Init, plan, apply, apply, plan, apply…
Open a pull request:
Validation (terraform validate)
Optionally: create a new ephemeral (short-lived) Terraform environment (“terraform env new feature-branch”), run automated tests (kitchen-terraform, for example) and destroy it afterwards
Run plan and display the output for review (as a comment on the pull request)
Branch merged into master:
Terraform apply to staging
Optionally: terragrunt apply-all
Branch tagged (release):
Terraform apply to production
37. TERRAFORM - EXAMPLE 1 (pseudo)
● Developer commits application code
● CI system:
○ Run tests, build the artifact
○ Packer: bake an AMI
○ Terraform: plan and apply with the just-created AMI id to create the deployment
○ Run integration and performance tests
○ Deploy to staging
38. TERRAFORM - EXAMPLE 1 - feature
● Developer commits application code to a feature branch named feature-123
● CI system:
○ Run tests, build the artifact
○ Run Packer: bake an AMI and tag it with branch=feature-123
○ Run Terraform:
■ Plan the infrastructure for the test environment, looking up the AMI id with a data source filtered by tag branch=feature-123 (see the sketch after this list)
■ Optionally, save the plan to a file, prompt the git user in the UI, post a comment to the GitHub PR
■ Apply the plan
○ Run integration and performance tests
○ Deploy to staging
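A minimal sketch of that lookup (resource names and the instance size are hypothetical):

# Find the most recent AMI baked from this feature branch
data "aws_ami" "feature" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "tag:branch"
    values = ["feature-123"]
  }
}

# Test-environment instance built from that AMI
resource "aws_instance" "test" {
  ami           = "${data.aws_ami.feature.id}"
  instance_type = "t2.micro" # hypothetical size for the test environment
}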
39. TERRAFORM DEPLOYMENTS
Rolling deployments
Using provider’s mechanisms:
ECS (or other scheduler)
CloudFormation
Using custom mechanisms:
DIY scripts combined with ‘-target’ arguments
Blue-green deployments
No provider mechanisms for this
DIY (see the sketch below)
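One widely used DIY building block, sketched here with hypothetical names and sizes: create_before_destroy makes Terraform create the replacement launch configuration first, and naming the ASG after it forces the ASG to be replaced in the same create-first order:

resource "aws_launch_configuration" "app" {
  name_prefix   = "app-"
  image_id      = "ami-0abc1234" # hypothetical AMI id, e.g. from a data source
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "app" {
  # Interpolating the LC name means a new LC yields a new ASG name,
  # so Terraform must create a fresh ASG before destroying the old one
  name                 = "${aws_launch_configuration.app.name}"
  launch_configuration = "${aws_launch_configuration.app.name}"
  availability_zones   = ["eu-west-1a"]
  min_size             = 2
  max_size             = 4

  lifecycle {
    create_before_destroy = true
  }
}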
41. TERRAFORM RESOURCES
Books and blog posts:
Getting Started with Terraform by Kirill Shirinkin
Terraform: Up and Running: Writing Infrastructure as Code by Yevgeniy Brikman
Infrastructure as Code: Managing Servers in the Cloud by Kief Morris
Using Pipelines to Manage Environments with Infrastructure as Code by Kief Morris
Tools:
https://github.com/gruntwork-io/terragrunt
https://github.com/dtan4/terraforming
https://github.com/coinbase/terraform-landscape
https://github.com/newcontext-oss/kitchen-terraform
https://github.com/kvz/json2hcl
Other relevant repositories:
42. THANK YOU!
All code from this talk:
https://github.com/antonbabenko/cd-terraform-demo
Editor's Notes
#3: Organizer of the AWS User Group Norway
AWS Certified Solutions Architect and SysOps
Doing web development and devops for the last 10+ years.
Doing AWS for the last 5 years.
Open source, team leadership
Windsurfing, sailing, paragliding
#7: Who is using the AWS API directly, or using libraries (like Troposphere, written in Python)?
#13: State management - TF has a local tfstate file describing metadata of created resources
Execution control = well controlled. Plan => output file or limit by targets => apply with confidence. CF can only validate syntax.
Logical comparisons = more, less, equal value. In TF you can use a “count=0” or “count=1” resource parameter instead of boolean true/false to control resource creation (see the sketch below).
Managing already created resources like EIPs, S3 buckets, or VPCs is not possible in CF without deleting them first.
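A minimal sketch of that count trick (variable and resource names are hypothetical):

# Set to "0" to skip creating the resource, "1" to create it
variable "create_bastion" {
  default = "1"
}

resource "aws_instance" "bastion" {
  count         = "${var.create_bastion}"
  ami           = "ami-0abc1234" # hypothetical AMI id
  instance_type = "t2.micro"
}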
#14: (same notes as #13)
#15: Some resource properties (for example, the EC2 key pair) can be created using the AWS API but are not available in CloudFormation.
Terraform uses the AWS API, so you can get/update missing properties in many cases.
update_rollback_failed = contact customer service
---
Handle failures => Partial State and Error Handling
If an error happens at any stage in the lifecycle of a resource, Terraform stores a partial state of the resource. This behavior is critical for Terraform to ensure that you don't end up with any zombie resources: resources that were created by Terraform but no longer managed by Terraform due to a loss of state.
#18: Atlas, Consul, etcd, S3 or HTTP
Terraform will automatically update the remote state file whenever there are any changes in it.
There are also ways to pull and push the remote state file.
#19: Refresh state locally and generate execution plan based on tf configs
#20: Apply the changes required to reach the desired state of the configuration,
or apply the pre-determined set of actions from a saved terraform plan execution plan.
#22: (same notes as #18)
#33: (same notes as #18)