This document introduces infrastructure as code (IaC) using Terraform and provides examples of deploying infrastructure on AWS including:
- A single EC2 instance
- A single web server
- A cluster of web servers using an Auto Scaling Group
- Adding a load balancer using an Elastic Load Balancer
It also discusses Terraform concepts and syntax like variables, resources, outputs, and interpolation. The target audience is people who deploy infrastructure on AWS or other clouds.
This document provides an overview of Azure Kubernetes Service (AKS). It begins with introductions to containers and Kubernetes, then describes AKS's architecture and features. AKS allows users to quickly deploy and manage Kubernetes clusters on Azure without having to manage the master nodes. It reduces the operational complexity of running Kubernetes in production. The document outlines how to interact with AKS using the Azure portal, CLI, and ARM templates. It also lists AKS features like identity and access control, scaling, storage integration, and monitoring.
- What are Internal Developer Portal (IDP) and Platform Engineering?
- What is Backstage?
- How Backstage can help developers build a developer portal to make their jobs easier
Jirayut Nimsaeng
Founder & CEO
Opsta (Thailand) Co., Ltd.
Youtube Record: https://youtu.be/u_nLbgWDwsA?t=850
Dev Mountain Tech Festival @ Chiang Mai
November 12, 2022
A comprehensive walkthrough of how to manage infrastructure-as-code using Terraform. This presentation includes an introduction to Terraform, a discussion of how to manage Terraform state, how to use Terraform modules, an overview of best practices (e.g. isolation, versioning, loops, if-statements), and a list of gotchas to look out for.
For a written and more in-depth version of this presentation, check out the "Comprehensive Guide to Terraform" blog post series: https://blog.gruntwork.io/a-comprehensive-guide-to-terraform-b3d32832baca
Slides on "Effective Terraform" from the SF Devops for Startups Meetup
https://www.meetup.com/SF-DevOps-for-Startups/events/237272658/
An overview and introduction to Hashicorp's Terraform for the Chattanooga ChaDev Lunch.
https://www.youtube.com/watch?v=p2ESyuqPw1A
This document summarizes a meetup about infrastructure as code. It discusses the differences between treating infrastructure as "pets" versus "cattle", where pets are cared for individually and cattle are treated as disposable. When infrastructure is coded declaratively using tools like Terraform, the infrastructure can be version controlled, updated continuously, and rolled back like code. The meetup demonstrated setting up infrastructure on Azure using Terraform to define resources like virtual machines in code. Advanced techniques like storing state remotely and using modules were also discussed.
This document discusses infrastructure as code and the HashiCorp ecosystem. Infrastructure as code allows users to define and provision infrastructure through code rather than manual configuration. It can be used to launch, create, change, and downscale infrastructure based on configuration files. Tools like Terraform allow showing what changes will occur before applying them through files like main.tf and variables.tf. Terraform is part of the broader HashiCorp ecosystem of tools.
This document provides an overview of Terraform including its key features and how to install, configure, and use Terraform to deploy infrastructure on AWS. It covers topics such as creating EC2 instances and other AWS resources with Terraform, using variables, outputs, and provisioners, implementing modules and workspaces, and managing the Terraform state.
The document provides an overview of Terraform and discusses why it was chosen over other infrastructure as code tools. It outlines an agenda covering Terraform installation, configuration, and use of data sources and resources to build example infrastructure including a VCN, internet gateway, subnets, and how to taint and destroy resources. The live demo then walks through setting up Terraform and using it to provision example OCI resources.
This document discusses Terraform, an open source tool for building, changing, and versioning infrastructure safely and efficiently. It provides declarative configuration files to manage networks, virtual machines, containers, and other infrastructure resources. The document introduces Terraform and how it works, provides examples of Terraform code and its output, and offers best practices for using Terraform including separating infrastructure code from application code, using modules, and managing state. Terraform allows infrastructure to be treated as code, provides a faster development cycle than other tools like CloudFormation, and helps promote a devOps culture.
This document provides an overview and introduction to Terraform, including:
- Terraform is an open-source tool for building, changing, and versioning infrastructure safely and efficiently across multiple cloud providers and custom solutions.
- It discusses how Terraform compares to other tools like CloudFormation, Puppet, Chef, etc. and highlights some key Terraform facts like its versioning, community, and issue tracking on GitHub.
- The document provides instructions on getting started with Terraform by installing it and describes some common Terraform commands like apply, plan, and refresh.
- Finally, it briefly outlines some key Terraform features and example use cases like cloud app setup, multi
Building infrastructure as code using Terraform - DevOps Krakow (Anton Babenko)
This document provides an overview of a DevOps meetup on building infrastructure as code using Terraform. The agenda includes Terraform basics, frequent questions, and problems. The presenter then discusses Terraform modules, tools, and solutions. He addresses common questions like secrets handling and integration with other tools. Finally, he solicits questions from the audience on Terraform use cases and challenges.
This document provides an introduction to Terraform and its key concepts. It describes Terraform as a tool for building, changing, and versioning infrastructure safely and efficiently using declarative configuration files. The document outlines some of Terraform's main components like providers, data sources, resources, variables and outputs. It also discusses the benefits of structuring Terraform configurations using modules to improve reusability and manageability.
Terraform is an infrastructure automation tool. It works equally well for on-premises, public cloud, private cloud, hybrid-cloud, and multi-cloud infrastructure.
Visit us for more at www.zekeLabs.com
Infrastructure-as-Code (IaC) Using Terraform (Advanced Edition) (Adin Ermie)
In this new presentation, we will cover advanced Terraform topics (full-on DevOps). We will compare the deployment of Terraform using Azure DevOps, GitHub/GitHub Actions, and Terraform Cloud. We wrap everything up with some key takeaway learning resources in your Terraform learning adventure.
NOTE: A recording of this presentation is available here: https://www.youtube.com/watch?v=fJ8_ZbOIdto&t=5574s
This document discusses the infrastructure provisioning tool Terraform. It can be used to provision resources like EC2 instances, storage, and DNS entries across multiple cloud providers. Terraform uses configuration files to define what infrastructure should be created and maintains state files to track changes. It generates execution plans to determine what changes need to be made and allows applying those changes to create, update or destroy infrastructure.
My talk at FullStackFest, 4.9.2017. Become more familiar with managing infrastructure using Terraform, Packer and deployment pipeline. Code repository - https://github.com/antonbabenko/terraform-deployment-pipeline-talk
Infrastructure-as-Code (IaC) using Terraform (Adin Ermie)
Learn the benefits of Infrastructure-as-Code (IaC), what Terraform is and why people love it, along with a breakdown of the basics (including live demo deployments). Then wrap up with a comparison of Azure Resource Manager (ARM) templates versus Terraform, consider some best practices, and walk away with some key resources in your Terraform learning adventure.
As part of this presentation we covered the basics of Terraform, which is infrastructure as code. It will help DevOps teams get started with Terraform.
This document will be helpful for developers who want to understand infrastructure as code concepts and the usability of Terraform.
This document discusses Terraform, an open-source tool that allows users to define and provision infrastructure resources in a declarative configuration file. It summarizes that Terraform allows users to build, change, and destroy infrastructure components like compute instances, storage buckets, and networking through declarative configuration files, enabling an infrastructure-as-code approach that is easy to version, track changes for, and integrate with continuous delivery practices.
This document discusses Terraform, an open-source infrastructure as code tool. It begins by explaining how infrastructure can be defined and managed as code through services that have APIs. It then provides an overview of Terraform, including its core concepts of providers, resources, and data sources. The document demonstrates Terraform's declarative configuration syntax and process of planning and applying changes. It also covers features like modules, state management, data sources, and developing custom plugins.
Terraform modules and best-practices - September 2018 (Anton Babenko)
Slides for my "Terraform modules and best-practices" talk on meetups during September 2018.
Some links from the slides:
https://www.terraform-best-practices.com/
https://cloudcraft.co/
https://github.com/terraform-aws-modules/
https://github.com/antonbabenko/modules.tf-lambda
- The document provides biographical information about Sri Rajan, including that he is from India, has worked in IT for over 10 years including 6 years at Rackspace, and has expertise in Linux, OpenStack, and automation.
- It also provides an overview of Rackspace, including that they have over 5,000 employees serving customers in over 120 countries from 9 data centers worldwide.
- Sri Rajan's contact information is included at the end.
Terraform is a tool used by Atlassian for building, changing, and versioning infrastructure safely and efficiently. It manages both popular cloud services and in-house solutions through its infrastructure-as-code approach. Atlassian uses Terraform for its build pipelines via a Python wrapper and fork of Terraform, taking advantage of its modular and extendable design as well as its large, active community for support.
WinOps Conference London 2017 session
Public Cloud IaaS vs traditional on prem and how Hashicorp Terraform is a great tool to configure Azure. Recorded here: https://www.youtube.com/watch?v=LDZXRBBuXCU
This document contains notes from a talk on advanced Terraform techniques. It discusses using Terraform for infrastructure as code to deploy resources across multiple environments like development, staging, and production. It also mentions techniques like separating code into modules, using variables to parameterize configurations, and integrating Terraform with other DevOps tools like Ansible.
PuppetDB: Sneaking Clojure into Operations (grim_radical)
The document provides an overview of PuppetDB, which is a system for storing and querying data about infrastructure as code and system configurations. Some key points:
- PuppetDB stores immutable data about systems and allows querying of this data to enable higher-level infrastructure operations.
- It uses techniques like command query responsibility separation (CQRS) to separate write and read pipelines for better performance and reliability.
- The data is stored in a relational database for efficient querying, and queries are expressed in an abstract syntax tree (AST)-based language.
- The system is designed for speed, reliability, and ease of deployment in operations. It leverages techniques from Clojure and the JVM.
Learn everything you need to know about terraform, Infrastructure-as-Code and cloud computing with Brainboard.
Learn more: https://www.brainboard.co/
Infrastructure as Code in your CD pipelines - London Microsoft DevOps 0423 (Giulio Vian)
London Microsoft DevOps 23 April 2018 Meetup (https://www.meetup.com/London-Microsoft-DevOps/events/249114256/)
Infrastructure as Code in your CD pipelines
from VMs to Containers
He is going to cover the journey of agile transformation in a non-IT company, bringing in continuous delivery, traditional infrastructure, and modern cloud DevOps practices.
In this talk, you will hear about the DevOps journey in his company (Glass, Lewis & Co.), from the initial brown-field all-manual state to the current partially automated situation and the strategic destination of a fully automated and monitored process.
In an equilibrium between a high-level view and useful practical tips, he will touch on what informed their decisions, in terms of priorities and technologies, some lessons learned in setting up Infrastructure-as-Code using Terraform for Azure, and how the legacy constraints helped or hindered them on this journey.
https://www.youtube.com/watch?v=IeweKUdHJc4
My presentation from Hashiconf 2017, discussing our use of Terraform, and our techniques
to help make it safe and accessible.
Watch video: https://www.youtube.com/watch?v=SgmmoRCmIa4&list=PLIuWze7quVLDSxJKDj3pRSqvmHAzQ_9vd&index=6
Here is the summary of what you'll learn:
00:02:00 Welcome
00:03:32 Meet Chafik, CEO of Brainboard.co
00:05:00 Our goal at Brainboard
00:06:00 Terraform modules definition
00:20:00 Build your own modules
00:21:00 Azure
00:48:00 AWS
00:52:00 Best practices
00:56:00 Review some of the most used community modules
00:56:43 Lambda
01:00:30 AKS
01:04:00 Where to host your modules?
01:06:04 Challenges of maintaining modules within a team
01:09:00 Build your own modules’ catalog
It talks about native compilation technology: why it is required and what it is.
It also covers how this technology can be applied to compile tables and procedures to achieve a considerable performance gain with very minimal changes.
These are the slides used by Kumar Rajeev Rastogi of Huawei for his presentation at pgDay Asia 2016. He presented a great idea about native compilation to improve CPU efficiency.
Title
Hands-on Learning with KubeFlow + Keras/TensorFlow 2.0 + TF Extended (TFX) + Kubernetes + PyTorch + XGBoost + Airflow + MLflow + Spark + Jupyter + TPU
Video
https://youtu.be/vaB4IM6ySD0
Description
In this workshop, we build real-world machine learning pipelines using TensorFlow Extended (TFX), KubeFlow, and Airflow.
Described in the 2017 paper, TFX is used internally by thousands of Google data scientists and engineers across every major product line within Google.
KubeFlow is a modern, end-to-end pipeline orchestration framework that embraces the latest AI best practices including hyper-parameter tuning, distributed model training, and model tracking.
Airflow is the most-widely used pipeline orchestration framework in machine learning.
Pre-requisites
Modern browser - and that's it!
Every attendee will receive a cloud instance
Nothing will be installed on your local laptop
Everything can be downloaded at the end of the workshop
Location
Online Workshop
Agenda
1. Create a Kubernetes cluster
2. Install KubeFlow, Airflow, TFX, and Jupyter
3. Setup ML Training Pipelines with KubeFlow and Airflow
4. Transform Data with TFX Transform
5. Validate Training Data with TFX Data Validation
6. Train Models with Jupyter, Keras/TensorFlow 2.0, PyTorch, XGBoost, and KubeFlow
7. Run a Notebook Directly on Kubernetes Cluster with KubeFlow
8. Analyze Models using TFX Model Analysis and Jupyter
9. Perform Hyper-Parameter Tuning with KubeFlow
10. Select the Best Model using KubeFlow Experiment Tracking
11. Reproduce Model Training with TFX Metadata Store and Pachyderm
12. Deploy the Model to Production with TensorFlow Serving and Istio
13. Save and Download your Workspace
Key Takeaways
Attendees will gain experience training, analyzing, and serving real-world Keras/TensorFlow 2.0 models in production using model frameworks and open-source tools.
Related Links
1. PipelineAI Home: https://pipeline.ai
2. PipelineAI Community Edition: http://community.pipeline.ai
3. PipelineAI GitHub: https://github.com/PipelineAI/pipeline
4. Advanced Spark and TensorFlow Meetup (SF-based, Global Reach): https://www.meetup.com/Advanced-Spark-and-TensorFlow-Meetup
5. YouTube Videos: https://youtube.pipeline.ai
6. SlideShare Presentations: https://slideshare.pipeline.ai
7. Slack Support: https://joinslack.pipeline.ai
8. Web Support and Knowledge Base: https://support.pipeline.ai
9. Email Support: [email protected]
OroCRM Partner Technical Training: September 2015 (Oro Inc.)
OroCRM Partner Technical Training
September 2015
Schedule:
Day 1 - Monday 9/14
Define your Entities
--Environment and Project Setup
--Packages Management
--Entities and DB Schema Management
--Entity CRUD Implementation
Day 2 - Tuesday 9/15
Security and Productivity
--ACL
--Entity Activities
--System Configuration
Day 3 - Wednesday 9/16
User Interface
--Layouts and Templates
--CSS and JavaScript
--Widgets
--Navigation
--Localizations
Day 4 - Thursday 9/17
Integrate your Solution
--Job Queue
--Import and Export
--Integrations
--Automated Processes
--WEB API
Day 5 - Friday 9/18
Work with Data
--Workflow
--Reports
--Analytics and Marketing
--Tests
This document discusses various design patterns in Python and how they compare to their implementations in other languages like C++. It provides examples of how common patterns from the Gang of Four book like Singleton, Observer, Strategy, and Decorator are simplified or invisible in Python due to features like first-class functions and duck typing. The document aims to illustrate Pythonic ways to implement these patterns without unnecessary complexity.
Behavior driven development (BDD) is an agile software development process that encourages collaboration between developers, QA and non-technical or business participants in a software project. It helps align team goals to deliver value to business stakeholders. BDD has advantages like improving communication, early validation of requirements, and automated acceptance tests. However, it also requires extra effort for writing feature files and scenarios. BDD may not be suitable for all projects depending on their nature and requirements. Overall, when implemented effectively, BDD can help deliver working software that meets business needs.
Building and deploying LLM applications with Apache Airflow (Kaxil Naik)
Behind the growing interest in Generative AI and LLM-based enterprise applications lies an expanded set of requirements for data integrations and ML orchestration. Enterprises want to use proprietary data to power LLM-based applications that create new business value, but they face challenges in moving beyond experimentation. The pipelines that power these models need to run reliably at scale, bringing together data from many sources and reacting continuously to changing conditions.
This talk focuses on the design patterns for using Apache Airflow to support LLM applications created using private enterprise data. We’ll go through a real-world example of what this looks like, as well as a proposal to improve Airflow and to add additional Airflow Providers to make it easier to interact with LLMs such as the ones from OpenAI (such as GPT4) and the ones on HuggingFace, while working with both structured and unstructured data.
In short, this shows how these Airflow patterns enable reliable, traceable, and scalable LLM applications within the enterprise.
https://airflowsummit.org/sessions/2023/keynote-llm/
- Lithium is an upcoming PHP framework that is lightweight and flexible
- It uses MongoDB as its primary database and supports MySQL as well
- The presentation covered the core functionality of Lithium including installation, models, controllers, views and provided examples of using it to build a blog application
Web Template Mechanisms in SOC Verification - DVCon.pdf (SamHoney6)
The document discusses using web template mechanisms to generate verification environments for system-on-chip (SOC) designs. It proposes applying Jinja2 template language to generate consistent software views and hardware verification language views based on platform descriptions in JSON format. This separates the platform data from the views, allowing reuse of tests developed on virtual platforms at the SOC level while hiding differences between the platforms.
The document describes a Bucharest Big Data Meetup occurring on June 5th. The meetup will include two tech talks: one on productionizing machine learning from 7:00-7:40 PM, and another on a technology comparison of databases vs blockchains from 7:40-8:15 PM. The meetup will conclude from 8:15-8:45 PM with pizza and drinks sponsored by Netopia.
The goal was to create a reusable and efficient Hadoop Cluster Performance Profiler
Video (in Russian): https://www.youtube.com/watch?v=Yh9KxQ3fKy0
This document provides an overview and summary of the Struts framework for building web applications:
- The Struts framework is based on the Model-View-Controller (MVC) architecture which divides applications into the model, view, and controller layers.
- Struts implements the front controller pattern where requests are handled by a central controller servlet that dispatches requests to application components.
- The framework provides tags and utilities to build web forms and interfaces, internationalization support, and an extensible validation framework.
- Configuration is done via XML files which define mappings, form beans, validators, and other application settings.
- An example application demonstrates common design patterns used with Struts like actions that
The document provides an overview of the Play Framework using Scala. It discusses key features of Play including hot reloading, type safety, and predefined modules. It also covers installing Play, the MVC architecture, developing REST APIs, adding dependencies, routing, and configuration. Common commands like sbt run and sbt compile are listed. The document demonstrates creating a part-of-speech tagger using Play and Scala.
This presentation was held at APEX Connect in Berlin on the 28th of April 2016.
The presentation describes how to use a source control / versioning system in combination with database-oriented projects. You can see how to manage the folder structure and what types of files are versioned, including an Oracle Application Express application.
This is the keynote of the Into the Box conference, highlighting the release of the BoxLang JVM language, its key enhancements, and its vision for the future.
2. These slides are heavily influenced by the slides and talks of
Yevgeniy Brikman of Gruntwork (gruntwork.io)
It goes hand in hand with the following talk:
https://blog.gruntwork.io/5-lessons-learned-from-writing-over-300-000-lines-of-infrastructure-code-36ba7fadeac1
Credits
3. Ami Mahloof
Senior Cloud Architect at DoIT International.
LinkedIn Profile
Medium blog posts
Who Am I?
At DoiT International, we tackle complex problems of scale which are
sometimes unique to internet-scale customers while using our expertise
in resolving problems, coding, algorithms, complexity analysis, and
large-scale system design.
4. 1. Introduction to Terraform
2. Module Anatomy
3. Modules Structure
4. Testing
5. Terraform Modules Best Practices
6. Migrating Existing Infrastructure Into New Code Structure
Outline
5. An Introduction To Terraform
⌾ Terraform can manage existing and new infrastructure
⌾ Talk to multiple cloud/infrastructure providers
⌾ Ensure creation and consistency
⌾ Single DSL (Domain Specific Language) to express API agnostic calls
⌾ Preview changes, destroy when needed
⌾ Single source of truth infrastructure state
⌾ Even order a pizza from Domino’s
Terraform is a tool for building, changing, and versioning infrastructure
safely and efficiently
6. AnIntroductiontoTerraform
Just like in code, a function has inputs (arguments) and outputs
(attributes)
The following pseudocode creates an EC2 instance from the given args
function create_ec2(name, type) {
ec2 = aws.create_instance(name, type)
print ec2.instance_ip
}
HashiCorp Configuration Language (HCL) Syntax
7. AnIntroductiontoTerraform
Mapping A Code To HCL Syntax
function create_ec2(name, type) {
ec2 = aws.create_instance(name, type)
print ec2.instance_ip
}
Inputs (arguments)
outputs (attributes)
8. AnIntroductiontoTerraform
Mapping A Code To HCL Syntax
function create_ec2(name, type) {
ec2 = aws.create_instance(name, type)
print ec2.instance_ip
}
create_ec2("test", "t2.micro")
resource "aws_ec2_instance" "create_ec2" {
name = "test"
type = "t2.micro"
}
output "instance_ip" {
value = aws_ec2_instance.create_ec2.ipv4_address
}
name label
9. AnIntroductiontoTerraform
Mapping A Code To HCL Syntax
function create_ec2(name, type) {
ec2 = aws.create_instance(name, type)
print ec2.instance_ip
}
create_ec2("test", "t2.micro")
resource "aws_ec2_instance" "create_ec2" {
name = "test"
type = "t2.micro"
}
output "instance_ip" {
value = aws_ec2_instance.create_ec2.ipv4_address
}
provider resource API
10. AnIntroductiontoTerraform
Mapping A Code To HCL Syntax
function create_ec2(name, type) {
ec2 = aws.create_instance(name, type)
print ec2.instance_ip
}
create_ec2("test", "t2.micro")
resource "aws_ec2_instance" "create_ec2" {
name = "test"
type = "t2.micro"
}
output "instance_ip" {
value = aws_ec2_instance.create_ec2.ipv4_address
}
resource arguments
11. AnIntroductiontoTerraform
Mapping A Code To HCL Syntax
function create_ec2(name, type) {
ec2 = aws.create_instance(name, type)
print ec2.instance_ip
}
create_ec2("test", "t2.micro")
resource "aws_ec2_instance" "create_ec2" {
name = "test"
type = "t2.micro"
}
output "instance_ip" {
value = aws_ec2_instance.create_ec2.ipv4_address
}
output values
12. AnIntroductiontoTerraform
Mapping A Code To HCL Syntax
resource blocks are for create API calls:
resource "aws_ec2_instance" "create_ec2" {...
data blocks are for get API calls:
data "aws_ec2_instance" "instance_data" {
name = "test"
}
output "az" {
value = data.aws_ec2_instance.instance_data.availability_zone
}
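For reference, the slides above use simplified names (aws_ec2_instance) for the mapping; with the real AWS provider the resource type is aws_instance. A minimal, self-contained sketch of the same idea (the AMI ID is only an example; look up a current one for your region):

provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "create_ec2" {
  ami           = "ami-0c55b159cbfafe1f0" # example AMI ID only, replace with a current AMI for your region
  instance_type = "t2.micro"

  tags = {
    Name = "test"
  }
}

# Read back an instance that already exists outside this configuration,
# analogous to the data block on the slide above
data "aws_instance" "instance_data" {
  filter {
    name   = "tag:Name"
    values = ["existing-instance"]
  }
}

output "instance_ip" {
  value = aws_instance.create_ec2.public_ip
}

output "az" {
  value = data.aws_instance.instance_data.availability_zone
}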
13. ⌾ JSON representation of the known
infrastructure state provisioned by
Terraform
⌾ Stored in file or externally
⌾ Locking (useful for team working on
the same project or tasks)
⌾ Source of truth for infrastructure
AnIntroductiontoTerraform
Terraform State File
{
"version": 4,
"terraform_version": "0.12.6",
"serial": 18,
"lineage": "b3bfb3fc-8417-cc89-d87f-6ab0008e2056",
"outputs": {
"group-id": {
"value": "6521262259226325019",
"type": "string"
}
},
"resources": [
{
"mode": "data",
"type": "template_file",
"name": "filter_pattern",
"provider": "provider.template",
"instances": ...
terraform.tfstate
14. Monolithic Terraform
⌾ One/several huge files - fear of making changes, no reusability, a mistake
anywhere can break everything
⌾ Hard to find/debug - variables and sections are harder to find when one needs
to make a change
⌾ Guess work - going back and forth between variables and resources just to
understand what is required and what is the default
⌾ Slower development cycles - increased time and effort needed to start
working with it
Issues with monolithic Terraform:
16. 10,000 ft View Approach
Since Terraform will combine all the files into a plan, we can use that to
create smaller files with better visibility using the following simple module
anatomy.
Being able to quickly find what you’re looking for during development and
debugging an issue is crucial when working with Terraform.
17. Module Anatomy
The module anatomy is a scaffold that provides better visibility and guidelines
for developing, and working with Terraform modules.
Since Terraform compiles all resources in all files into an execution plan, we can
use that to create better visibility and readability.
There are no hard-coded values, as each hard-coded value becomes a default
variable, and every attribute is a variable.
18. Development of a module is done through the examples folder which holds a
main.tf file with:
⌾ Hard-coded values for all variables
⌾ Lock down a specific version for a Terraform provider
⌾ State location
⌾ Terraform version
This will serve as a usage example when you finish development on the module
ModuleAnatomy
19. terraform {
  backend "s3" {
    region = "eu-west-3"
    bucket = "some-s3-bucket"
    key    = "dev/eu-west-3/infrastructure"
  }
  required_version = ">= 0.12.6"
}

# This is where you setup the provider to use with the module
provider "aws" {
  version = "~> 2.0"
  region  = "us-east-1"
}

module "route53_record_name_cname_example" {
  source      = "../"
  domain_name = "tf.domain.com"
  value       = "1.2.3.4"
}
ModuleAnatomy examples/main.tf
20. ⌾ examples - a folder containing examples for usage
⌾ test - Go Terratest folder
⌾ data.tf - Terraform data sources
⌾ main.tf - resources to be created
if it’s over 30 lines long, break it into files named
after the resources they hold, e.g., autoscaling.tf, ec2.tf, etc.
⌾ outputs.tf - outputs for the module
⌾ README.md - clear inputs/outputs and description for the
module as well as usage
⌾ default-variables.tf - variables with default values
⌾ required-variables.tf - variables whose values are required
ModuleAnatomy
26. 3-TierModulesStructure
What are the major benefits of the 3-tier module structure:
⌾ Hide all lower level details to allow the end user to focus
on building the infrastructure
⌾ Each tier is tested providing a quicker
development/debugging cycle
⌾ Provides the confidence needed to make changes
The 3-Tier Module-Based Hierarchy Structure
27. 3-TierModulesStructure
The goal is to isolate each (live) environment (dev, staging, production),
then take each component in that environment and break it up into a
generic service module, and for each generic service module break it
into resource modules.
Restructuring Existing Infrastructure
28. 3-TierModulesStructure
Break your architecture code down by live environment
terraform-live-envs
L dev
L vpc
L mysql
L kubernetes
L staging
L vpc
L mysql
L kubernetes
L production
L vpc
L mysql
L kubernetes
29. 3-TierModulesStructure
Then by service (infrastructure type)
terraform-live-envs
L dev
L vpc
L mysql
L kubernetes
L staging
L vpc
L mysql
L kubernetes
L production
L vpc
L mysql
L kubernetes
terraform-services (generic modules)
L gke
L vpc
L sql
Implement infrastructure in modules
30. 3-TierModulesStructure
Build complex modules from smaller, simpler modules
terraform-live-envs
L dev
L vpc
L mysql
L kubernetes
L staging
L vpc
L mysql
L kubernetes
L production
L vpc
L mysql
L kubernetes
terraform-services
L gke
L vpc
L sql
terraform-resources
L vpc
L sql
L instance
L user
31. Tier-1 Terraform Resources Modules
This is the lowest tier
terraform-resources is a folder containing modules with a single resource to be
created
These resource modules create only one thing
These modules should have an outputs.tf file with output values providing
information on the resource created
This information can be used to create hard dependencies between modules
(required by the 2nd tier)
3-TierModulesStructure
32. Tier-2 Terraform Services Modules
This is the middle tier
terraform-services is a folder containing modules combining resources modules
together from the terraform-resources folder
Each service module is a generic service that can create multiple versions based on
the variables passed in
Example: an SQL instance module can create a PostgreSQL or MySQL instance
3-TierModulesStructure
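As a rough sketch of such a generic service module (not from the deck; for brevity it calls the Google provider's SQL resource directly rather than a tier-1 resource module, and the names and defaults are illustrative):

variable "name" {}

variable "database_version" {
  # e.g. "POSTGRES_11" or "MYSQL_5_7" -- the caller picks the engine
  default = "POSTGRES_11"
}

variable "region" {
  default = "europe-west3"
}

resource "google_sql_database_instance" "this" {
  name             = var.name
  database_version = var.database_version
  region           = var.region

  settings {
    tier = "db-f1-micro"
  }
}

output "connection_name" {
  value = google_sql_database_instance.this.connection_name
}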
33. Tier-3 Terraform-live-envs Modules
This is the top tier
terraform-live-envs is a folder containing modules that implement the
infrastructure that is actually deployed
These modules are usually built from the service modules but can also have
resource modules mixed in
Every module attribute is a hard-coded value representing the value that is deployed
3-TierModulesStructure
34. Tier-3 terraform-live-envs Modules
Each module should have one single file called main.tf that will contain:
⌾ Terraform state block
⌾ Modules with hard-coded values
⌾ Locals block (shared variables between modules in this file)
⌾ Outputs
This makes for a readable, easy-to-use, and maintainable deployment file
3-TierModulesStructure
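A sketch of what such a live-envs main.tf could look like (bucket names, module sources, input names, and values are illustrative, not taken from the deck):

terraform {
  backend "s3" {
    region = "eu-west-3"
    bucket = "my-terraform-bucket"
    key    = "dev/eu-west-3/infrastructure"
  }
  required_version = ">= 0.12.6"
}

locals {
  environment = "dev"
  region      = "eu-west-3"
}

module "vpc" {
  source      = "[email protected]:unicorn/terraform-services//vpc?ref=vpc-v1.0.0"
  environment = local.environment
  region      = local.region
  cidr_block  = "10.0.0.0/16"
}

module "mysql" {
  source      = "[email protected]:unicorn/terraform-services//sql?ref=sql-v1.0.0"
  environment = local.environment
  vpc_id      = module.vpc.vpc_id
}

output "vpc_id" {
  value = module.vpc.vpc_id
}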
35. Terraform Remote State
By default, Terraform stores state locally in a file named terraform.tfstate
This does not scale because there’s no locking or central location to work with
Terraform in a team.
With remote state, Terraform writes the state data to a remote data store, which
can then be shared between all members of a team. Terraform supports storing
state in Terraform Cloud, HashiCorp Consul, Amazon S3, Alibaba Cloud OSS, and
more.
36. Base Remote State
Often you need to create a base infrastructure for other deployments to use/read
Example:
You might create a VPC in one region only once, but you can deploy multiple
services on that VPC.
37. To do that, break your deployment into two steps:
⌾ step-1-infrastructure
Creates and outputs the VPC information (vpc_id, subnets, etc.)
⌾ step-2-some-service
Accepts the remote state location (defined in step 1) as an input that is
used to read the output information from step 1, and creates the service on that
VPC
BaseRemoteState
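A minimal sketch of what step-1-infrastructure might create and expose (resource names and CIDR ranges are illustrative; note the deck's later example reads a nested infra output object, while this sketch keeps the outputs flat):

provider "aws" {
  region = "eu-west-3"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "private" {
  count      = 2
  vpc_id     = aws_vpc.main.id
  cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
}

# These outputs are what step 2 reads through terraform_remote_state
output "vpc_id" {
  value = aws_vpc.main.id
}

output "subnets" {
  value = aws_subnet.private[*].id
}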
39. BaseRemoteState
terraform {
  backend "s3" {
    region = "eu-west-3"
    bucket = "my-terraform-bucket"
    # separate the state file location from the infrastructure state location
    key    = "dev/eu-west-3/deployment/code-pipeline/nodejs-app"
  }
}

module "code_pipeline" {
  source = "../codebuild-pipeline"

  # these are taken from step-1 terraform backend block
  terraform_state_store_region             = "eu-west-3"
  terraform_state_store_bucket             = "my-terraform-bucket"
  infrastructure_terraform_state_store_key = "dev/eu-west-3/infrastructure"
}
...
Step 2 - Services on Infrastructure
40. BaseRemoteState
deployment/step2-codepipeline/main.tf
module "code_pipeline" {
source = "../codebuild-pipeline"
...
}
Codebuild-pipeline module
# Read the information from the remote state file of step 1 infrastructure
data "terraform_remote_state" "infra" {
backend = "s3"
config = {
region = var.terraform_state_store_region
bucket = var.terraform_state_store_bucket
key = var.infrastructure_terraform_state_store_key
}
}
# Assign data to locals to read the data only once
locals {
vpc_id = data.terraform_remote_state.infra.outputs.infra.vpc.vpc_id
subnets = data.terraform_remote_state.infra.outputs.infra.vpc.subnets
}
# Use locals in the modules to get to the infrastructure data
module "pipeline" {
source = "../../modules/pipeline"
vpc_id = local.vpc_id
...
}
41. Refactoring existing Terraform code
⌾ Create a new bucket in which the new Terraform state will be stored.
⌾ Rewrite your code into the 3-tier module structure (as illustrated above and
detailed in the slides).
⌾ Import each of the resources into your live-envs Terraform code.
⌾ Terraform will then show you an execution plan comparing your code with what is deployed:
○ values that exist in the deployed version but not in your code
will be marked with a (-) minus sign for removal.
○ values that do not exist in the deployed version but do exist in your code
will be marked with a (+) plus sign for addition.
The goal is to get a no-change plan.
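A hypothetical illustration of a single import (the module path, resource name, and bucket name are made up; the CLI command is shown as a comment):

# terraform import module.assets.aws_s3_bucket.this prod-assets-bucket
#
# The matching code in the live-envs module; once its values match what is
# actually deployed, terraform plan should show no changes
resource "aws_s3_bucket" "this" {
  bucket = "prod-assets-bucket"
}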
43. Modules Git Repo
Create a separate git repository for each of the tiers, and an additional one to hold the
shared Go code for testing the modules:
⌾ terraform-resources
⌾ terraform-services
⌾ terraform-live-envs
⌾ terratest-common
Codeversioning
44. Managing Module Versions
For the development process, it is recommended to use a relative path when
working with the source attribute of a module
source = "../gcp/sql_instance"
You should change the source attribute value to a git repo when the module is ready
for release
When a module is released, it should be tagged and added to the source attribute
value using the ref argument
source = "[email protected]:unicorn/terraform-resources//gcp/sql_instance?ref=...v1.0.0"
Codeversioning
45. Modules in Subdirectories
Since we are using modules in a repo, the module itself is in a subdirectory relative
to the root of the repo.
A special double-forward-slash syntax is interpreted by Terraform to indicate that
the remaining path after that point is a subdirectory.
source = "[email protected]:unicorn/terraform-resources//gcp/sql_instance?ref=v1.0.0"
The ref argument can be either a tag or a branch name
Codeversioning
46. Modules Tagging Convention
Here is a recommended tagging convention for a module in the same repo:
<module-name>-v<semantic_versioning>
The module name should follow the directory structure you have in place.
Example: gcp-sql-instance-v1.0.0
Feel free to come up with your own tagging convention.
Codeversioning
48. Lock Down Terraform Version
Lock down the Terraform version that was used to create the module.
Place the following content in a file called versions.tf in the module:
terraform {
required_version = ">= 0.12"
}
TerraformModulesBestPractices
49. Using Provider in Module
TerraformModulesBestPractices
provider "aws" {
region = var.region
version = "~> 2.24"
}
module "this_module" {
source "../"
name = "unicorn"
}
Terraform provider is inherited in modules.
This means that a provider will be inherited by the
modules your main module is calling.
Use an inline provider block inside your examples folder.
Only use the examples folder to test/develop your
module.
50. Prefer Hard Dependencies Over depends_on
⌾ depends_on doesn’t work with modules (as of Terraform 0.12.6)
⌾ depends_on doesn’t work with data sources
⌾ There are some cases where depends_on will fail if the resource it
depends on is conditionally created
⌾ It’s better to be consistent across all the code that needs dependencies
Terraform Modules Best Practices
51. Instead of using depends_on, create a hard dependency in Terraform between
resources by referencing an attribute of one resource from the other, as sketched below:
Terraform Modules Best Practices
Prefer Hard Dependencies Over depends_on
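A minimal sketch of such a hard dependency (resource names are hypothetical): the bucket
policy references the bucket’s id, so Terraform infers the ordering without depends_on.
resource "aws_s3_bucket" "logs" {
  bucket = "unicorn-logs"
}
resource "aws_s3_bucket_policy" "logs" {
  # Referencing the bucket attribute creates the hard dependency
  bucket = aws_s3_bucket.logs.id
  policy = file("policy.json")
}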
53. No Hard-coded Values
Each module should have the following files:
⌾ required-variables.tf
⌾ default-variables.tf
All of the resource attributes should be variables. If an existing module has
hard-coded values, move them into the default-variables.tf file (see the sketch below).
You don’t have to use all the attributes documented in the Terraform docs;
you can add them as you go.
Terraform Modules Best Practices
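A minimal sketch of the two files for a hypothetical sql_instance module (variable names are
illustrative only):
# required-variables.tf - variables the caller must set
variable "name" {
  description = "Name of the SQL instance"
  type        = string
}
# default-variables.tf - previously hard-coded values, now overridable
variable "disk_size_gb" {
  description = "Boot disk size in GB"
  type        = number
  default     = 10
}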
54. No tfvars Files
tfvars files are key=value lines of variables passed into a module.
The main problem with this feature is that you can’t tell which variable
belongs to which module.
This makes the code harder to maintain and to understand quickly.
Terraform Modules Best Practices
55. Plugins Cache
Instead of having to download the same provider plugin to each module over and
over again, you should set your plugin cache folder via an environment variable
like so:
export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"
This will ensure that the provider plugin is linked to this folder and speed up
running Terraform init on new modules.
Terraform Modules Best Practices
56. Terraform State Management
⌾ Create a storage bucket (S3/GCS) per environment.
Do not use the same bucket for multiple envs.
⌾ Enable versioning on the bucket - this serves as a backup if the state is
corrupted and can be used to compare concurrent executions.
⌾ Use a prefix with the same folder structure you set up in the terraform-live-envs folder.
⌾ Use a separate prefix for infrastructure,
e.g., vpc-network should be put into infrastructure/us-west2/blog-network
(see the sketch below).
Terraform Modules Best Practices
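A minimal sketch of a matching backend configuration, assuming a GCS bucket per
environment (bucket and prefix names are hypothetical):
terraform {
  backend "gcs" {
    bucket = "unicorn-terraform-state-dev"
    prefix = "infrastructure/us-west2/blog-network"
  }
}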
57. Terraform Null Attribute
Use a null value for an attribute you want to
remove from the resource.
Example: aws_s3_bucket can be either a
standalone bucket or a website.
If you need a single resource definition for
both cases, make a default variable
with the value null, which effectively
removes the attribute from the resource
before it is created (see the sketch below).
Terraform Modules Best Practices
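A minimal sketch of the pattern (the chosen attribute and names are illustrative): when the
variable is left at null, Terraform behaves as if the attribute were not set at all.
variable "acceleration_status" {
  type    = string
  default = null  # null removes the attribute from the resource
}
resource "aws_s3_bucket" "this" {
  bucket              = "unicorn-assets"
  acceleration_status = var.acceleration_status
}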
58. Terraform and Lists
Terraform will create a membership
resource per user, but behind the
scenes the count is also saved as an
index in the state file.
If you remove someone from the
middle of the list, the remaining indexes
shift up, causing those resources to be
destroyed and recreated.
In GitHub this means deleting a user along
with their forks! (See the sketch below.)
Terraform Modules Best Practices
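A minimal sketch of the count-over-a-list pattern being described (the GitHub provider’s
github_membership resource; variable contents are illustrative):
resource "github_membership" "member" {
  count    = length(var.users)
  username = var.users[count.index]
  role     = "member"
}
# Removing a user from the middle of var.users shifts every later index,
# so Terraform destroys and recreates the memberships that follow it.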
59. Cascade Variables and Outputs
Always cascade (copy over) the default and required variables, along with the
outputs, to the next module tier, so a value applied at the top level flows through
all the modules (see the sketch below).
Terraform Modules Best Practices
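A minimal sketch of cascading through a services-tier module (paths and names are
hypothetical): the variable and the output are simply re-exposed at each tier.
variable "region" {
  type = string
}
module "sql_instance" {
  source = "../../terraform-resources/gcp/sql_instance"
  region = var.region
}
output "connection_name" {
  value = module.sql_instance.connection_name
}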
60. Terraform Testing Using Terratest
Terratest is a Go library by Gruntwork (gruntwork.io).
It automates the deployment of your IaC (infrastructure as code), and then tests that
the actual result is what you expect to get.
Once the tests are completed, Terratest tears down and cleans up the resources
it created using the terraform destroy command.
Tips:
⌾ You can use Terratest with Docker, Packer, and even Helm charts!
⌾ Use VSCode with its Go extension for quick coding in Go
⌾ Learn Go interactively: https://tour.golang.org
61. Typical Test Structure
// TestVPCCreatedWithDefaults - test VPC is created without overriding any of the default variables
func TestVPCCreatedWithDefaults(t *testing.T) {
terraformOptions := &terraform.Options{
// The path to where our Terraform code is located
TerraformDir: "../step1-infrastructure",
// Variables to pass to our Terraform code using -var options
Vars: map[string]interface{}{
"region": "us-east-1",
},
}
}
Terraform Testing with Terratest
terraformOptions is a Golang struct defining the location of the code, as well as
terraform variables for the execution.
62. Typical Test Structure
func TestVPCCreatedWithDefaults(t *testing.T) {
terraformOptions := &terraform.Options{
...
}
// At the end of the test, run `terraform destroy` to clean up any resources that were created
defer terraform.Destroy(t, terraformOptions)
...
}
Terraform Testing with Terratest
defer will run at the end of the test and call terraform destroy to clean up the
resources created by this test.
63. Typical Test Structure
func TestVPCCreatedWithDefaults(t *testing.T) {
terraformOptions := &terraform.Options{
...
}
// At the end of the test, run `terraform destroy` to clean up any resources that were created
defer terraform.Destroy(t, terraformOptions)
// Run `terraform init` and `terraform apply` and fail the test if there are any errors
terraform.InitAndApply(t, terraformOptions)
...
}
Terraform Testing with Terratest
Run terraform init followed by terraform apply
64. Typical Test Structure
func TestVPCCreatedWithDefaults(t *testing.T) {
terraformOptions := &terraform.Options{
...
}
…
vpcID := terraform.Output(t, terraformOptions, "vpc_id")
validateVPC(t, vpcID)
}
func validateVPC(t *testing.T, vpcID string) {...}
Terraform Testing with Terratest
Validate it works as expected
65. Terratest Built-in Functions
Terratest has many built-in functions to check your infrastructure - but it’s relatively easy to
extend and write your own.
Terratest
Terratest Built-in Functions
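For illustration, a sketch of a full test that leans on one of the built-in helpers (the module path,
output name, and expected body are assumptions, and helper signatures can vary slightly
between Terratest versions):
package test

import (
	"testing"
	"time"

	http_helper "github.com/gruntwork-io/terratest/modules/http-helper"
	"github.com/gruntwork-io/terratest/modules/terraform"
)

// TestWebServerResponds applies the module, then uses a built-in helper to poll the URL output.
func TestWebServerResponds(t *testing.T) {
	terraformOptions := &terraform.Options{TerraformDir: "../step2-services"}
	defer terraform.Destroy(t, terraformOptions)
	terraform.InitAndApply(t, terraformOptions)

	url := terraform.Output(t, terraformOptions, "url")
	// Built-in helper: retry the HTTP GET until it returns 200 with the expected body
	http_helper.HttpGetWithRetry(t, url, nil, 200, "Hello, World", 30, 5*time.Second)
}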
67. Run Go test.
You now have a unit test you can run after every commit!
Note:
Go tests time out after 10 minutes by default, so make sure you set a
longer timeout to allow the infrastructure to be created (see the example below).
Running the Test
Terratest
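For example, a minimal invocation with an explicit timeout (the 90m value and test name are
just illustrations):
go test -v -timeout 90m -run TestVPCCreatedWithDefaults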
68. Reproducible Test Results:
Running tests does not use a remote state configuration, so the local module will
end up with a local state file, which can lead to stale test results.
Don’t hard-code the path to the module in the test like so:
terraformOptions := &terraform.Options{
// The path to where our Terraform code is located
TerraformDir: "../step1-infrastructure",
…
Instead, use a built-in Terratest function that copies the module to a temporary folder
and returns the path to that folder (see the sketch below):
Terratest Testing Techniques
Terratest
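A minimal sketch, assuming test_structure is imported from
github.com/gruntwork-io/terratest/modules/test-structure (folder arguments are illustrative):
func TestVPCCreatedWithDefaults(t *testing.T) {
	// Copy the module to a unique temp folder so every run starts from a clean state
	tempFolder := test_structure.CopyTerraformFolderToTemp(t, "..", "step1-infrastructure")
	terraformOptions := &terraform.Options{
		// The path now points at the temporary copy instead of the source tree
		TerraformDir: tempFolder,
	}
	...
}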
70. In Go, a test reports PASS or FAIL only at the function-name level:
Multiple Tests Within a Test (subtests)
Terratest
Often you will want several validations within the same test function, each reflected in the
test output. This concept is called subtests.
71. Use t.Run to run multiple validations within the same test, as sketched below:
Multiple Tests Within a Test (subtests)
Terratest
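A minimal sketch of subtests (the validation bodies are placeholders): each t.Run call reports
its own PASS/FAIL under the parent test name.
func TestVPCCreatedWithDefaults(t *testing.T) {
	// ... terraform.InitAndApply and output reads go here ...
	t.Run("HasExpectedCIDR", func(t *testing.T) {
		// validate the VPC CIDR range
	})
	t.Run("HasPublicSubnets", func(t *testing.T) {
		// validate the public subnets were created
	})
}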
72. Just about every method foo in Terratest comes in two versions: foo and fooE
(e.g., terraform.Apply and terraform.ApplyE).
⌾ foo: The base method takes a t *testing.T as an argument. If the method
hits any errors, it calls t.Fatal to fail the test.
⌾ fooE: Methods that end with the capital letter E always return an error as the
last argument and never call t.Fatal themselves. This allows you to decide
how to handle errors.
Terratest Error Handling
Terratest
73. You will use the base method name most of the time, as it allows you to keep your
code more concise by avoiding if err != nil checks all over the place:
terraform.Init(t, terraformOptions)
terraform.Apply(t, terraformOptions)
url := terraform.Output(t, terraformOptions, "url")
In the code above, if Init, Apply, or Output hits an error, the method will call t.Fatal
and fail the test immediately, which is typically the behavior you want. However, if
you are expecting an error and don't want it to cause a test failure, use the method
name that ends with a capital E:
if _, err := terraform.InitE(t, terraformOptions); err != nil {
// Do something with err
}
Terratest Error Handling
Terratest
74. The Test Pyramid
As you go up the pyramid, tests get more expensive, brittle and slower
76. The Test Pyramid
Lots of unit tests (individual modules):
test individual sub-modules (keep them small!)
and static analysis (TFLint, terraform validate)
77. The Test Pyramid
Fewer integration tests (multiple modules):
test multiple sub-modules together
78. The Test Pyramid
A handful of high-value E2E tests (entire stack):
test entire environments (stage, prod)
79. The Test Pyramid
Note the test times!
This is another reason to use small modules:
⌾ 1-60 seconds: tflint / terraform validate
⌾ 1-20 minutes: terraform-resources modules
⌾ 5-60 minutes: terraform-services modules
⌾ 60-240 minutes: terraform-live-envs modules
80. To minimize the downsides of testing infrastructure as code on a real platform,
follow the guidelines below:
1. Unit tests, integration tests, end-to-end tests
2. Testing environment
3. Namespacing
4. Cleanup
5. Timeouts and logging
6. Debugging interleaved test output
7. Avoid test caching
8. Error handling
9. Iterating locally using Docker
10. Iterating locally using test stages
Terratest Best Practices
Terratest
81. ● Use VSCode for Terraform and Go integration
(you can opt to use JetBrains IntelliJ or Vim instead)
● VSCode extensions:
Once installed, open the command palette (View -> Command
Palette) and type “Terraform: Enable Language Server”, which will prompt you
to install the latest package for Terraform 0.12 support.
Pro Tips
Terratest