Introduction to Packer and Suitcase: A Packer-based OS Image Build System - HubSpot Product Team
Introduction to Packer, a tool for building OS images, and Suitcase, our framework for building Packer images. Presentation by Tom McLaughlin (@tmclaughbos) from HubSpot engineering.
Baking in the cloud with Packer and Puppet - Alan Parkinson
Provisioning machines using Puppet when scaling to meet customer demand isn't always practical. Baking machine images and deploying the image is a practical alternative, but how can we do this with Packer and Puppet?
Packer is a tool for creating machine and container images for multiple platforms from a single source configuration. It allows users to automate the creation of machine images by defining infrastructure in configuration files and running builds that put everything needed to reproduce the machine into an image. Packer templates define builders, provisioners, variables and other configuration to automate the creation of images. This simplifies deployment by allowing images to be built once and then easily deployed to different environments like development, testing and production.
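As an illustrative sketch (not taken from the slides themselves), a minimal Packer JSON template with a variable, a builder, and a provisioner might look like the following; the AMI ID, region, and script path are assumptions:

```json
{
  "variables": {
    "aws_region": "us-east-1"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "{{user `aws_region`}}",
      "source_ami": "ami-0123456789abcdef0",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "base-image-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "scripts/install-app.sh"
    }
  ]
}
```

Because everything needed to reproduce the machine lives in this one file, the same template can be rebuilt for development, testing, and production environments.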
The document discusses using Packer to build machine images on AWS. It provides instructions for installing Packer on Linux, validating a Packer template, and using Packer to build an image. It also mentions provisioning the image and checking the results on the AWS console.
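The install/validate/build workflow described there follows Packer's standard CLI; the version number and template filename below are placeholders, and AWS credentials are assumed to be available in the environment:

```shell
# Install Packer on Linux (Packer ships as a single binary in a zip archive)
wget https://releases.hashicorp.com/packer/1.1.3/packer_1.1.3_linux_amd64.zip
unzip packer_1.1.3_linux_amd64.zip -d /usr/local/bin/

# Check the template for syntax and configuration errors
packer validate template.json

# Build the image; the resulting AMI ID is printed at the end of the run
packer build template.json
```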
Service Delivery Assembly Line with Vagrant, Packer, and Ansible - Isaac Christoffersen
Leverage Packer, Vagrant, and Ansible as part of a service delivery pipeline. Streamline your continuous delivery process while also targeting multiple cloud providers.
Packer is a tool that allows users to create machine images for multiple platforms from a single source configuration. It supports cloud providers like AWS, Azure, GCP and OpenStack. The document discusses using Packer to create optimized OS images with tools like cloud-init for fast provisioning and deployment of applications during scale-out operations. It also describes integrating Packer with other tools for testing and deployment automation.
The document discusses using Vagrant and Packer to automate the creation of development environments and machine images. Vagrant allows developers to run identical virtual machine environments across different platforms using configuration files. Packer is introduced as a tool to automate the creation of machine images for platforms like AWS, Digital Ocean, OpenStack from a single configuration file, enabling images to be deployed everywhere consistently. The benefits of these tools for improving developer mobility and ensuring environments are identical are highlighted.
This document discusses using Packer to build Windows images. It provides an overview of the Packer build process and components. It then details the specific steps and configuration for building a Windows 2012 R2 image within VirtualBox, including defining the builder, provisioning the image, and post-processing to package it as a Vagrant box. It concludes with some tips and additional resources for building Windows images with Packer.
Packer is an open source tool for creating machine images for multiple platforms from a single source configuration. It uses templates that define builders, provisioners, and post-processors to automate the creation of machine images in parallel. Templates use JSON and allow variables, functions, and conditionals. Common builders include Amazon EC2, Docker, Azure, and more. Provisioners like shell, Ansible, Chef, and Puppet install and configure software. Post-processors perform tasks like uploading, compressing, or tagging the finished image.
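For example, the post-processor stage mentioned above could, as an illustrative sketch, repackage the build output as a Vagrant box (the output path and compression level are assumptions):

```json
{
  "post-processors": [
    [
      {
        "type": "vagrant",
        "output": "builds/{{.Provider}}.box",
        "compression_level": 9
      }
    ]
  ]
}
```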
Packer and Terraform are fundamental components of Infrastructure as Code. I recently gave a talk at a DevOps meetup, which gave me the opportunity to discuss the basics of these two tools and how DevOps teams should be using them.
Packer is software that allows you to create machine images for multiple platforms from a single template configuration. It uses builders to create images for platforms like AWS, VirtualBox, Docker etc. and provisioners can be used to configure the images. Packer provides benefits like consistency between development and production environments, and ease of sharing and reuse of images.
This document discusses assembling an open source tool chain for a hybrid cloud environment. It describes using Packer to build machine images for multiple platforms like AWS, VMware, and VirtualBox from a single blueprint. It also discusses using Vagrant and Ansible for automation, configuration management, and provisioning virtual machines across different cloud providers in a standardized way.
Build automated Machine Images using Packer - Marek Piątek
This document provides an overview of Packer and how it can be used to build automated machine images. The agenda includes an introduction to Packer, building Linux and Windows AMIs, and a golden image pipeline using native AWS tools. Packer is an open source tool that creates identical machine images for multiple platforms from a single configuration file. It has advantages like fast deployment, portability, stability and identicality. Popular use cases include golden images, continuous delivery, environment parity and auto-scaling acceleration. The document then covers installing Packer, using Packer commands, templates, builders, provisioners, and includes demos of building Linux and Windows AMIs and a golden image pipeline. It concludes by inviting questions.
EC2 AMI Factory with Chef, Berkshelf, and Packer - George Miranda
Presentation accompanying a Live Demo at the AWS Pop-Up Loft in San Francisco on using Chef + Berks + Packer to create an AWS EC2 AMI Factory.
Demo repo available here -- https://github.com/gmiranda23/chef-ami-factory
Packer is a tool for creating machine and container images (single static unit that contains a pre-configured operating system and installed software) for multiple platforms from a single source configuration.
This document discusses using Puppet and infrastructure as code to manage Apache CloudStack infrastructure. It introduces the cloudstack_resources Puppet module which allows defining CloudStack instances and entire application stacks in Puppet manifests. This enables treating infrastructure like code where Puppet can deploy and configure entire environments on CloudStack. Examples are given of classifying servers and deploying a Hadoop cluster with a single Puppet resource definition. Links are provided to resources for using Puppet with CloudStack and videos that further explain the concepts.
These are slides from an Ignite talk I did for our DevOps Guild. I chose to give an overview of Packer, a tool for creating base images for deploying to various targets.
Automating CloudStack with Puppet - David Nalley (Puppet)
This document discusses using Puppet to automate the deployment and configuration of virtual machines (VMs) in an Apache CloudStack infrastructure. It describes how Puppet can be used to deploy and configure CloudStack VMs according to their roles by parsing userdata passed to the VMs at launch. Custom Puppet facts can extract role information from the userdata to classify nodes and apply the appropriate configuration. The CloudStack and Puppet APIs can be combined to fully automate the provisioning and configuration of VMs from a clean state using Puppet manifests and resources.
Infrastructure as code with Puppet and Apache CloudStack - ke4qqq
Puppet can now be used to define not only the configuration of machines, but also the machines themselves and entire collections of machines when using CloudStack. New Puppet types and providers allow defining CloudStack instances, groups of instances, and entire application stacks that can then be deployed on CloudStack. This brings infrastructure as code to a new level by allowing Puppet to define and manage the entire CloudStack infrastructure.
This document provides an overview of Ansible, an open source tool for configuration management and application deployment. It discusses how Ansible aims to simplify infrastructure automation tasks through a model-driven approach without requiring developers to learn DevOps tools. Key points:
- Ansible uses YAML playbooks to declaratively define server configurations and deployments in an idempotent and scalable way.
- It provides ad-hoc command execution and setup facts gathering via SSH. Playbooks can target groups of servers to orchestrate complex multi-server tasks.
- Variables, templates, conditionals allow playbooks to customize configurations for different environments. Plugins support integration with cloud, monitoring, messaging tools.
- Ansible aims to reduce complexity compared to other configuration management tools.
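The points above can be sketched in a minimal playbook; the group name, package, and template path here are illustrative assumptions, not taken from the document:

```yaml
---
- hosts: webservers
  become: yes
  vars:
    http_port: 8080
  tasks:
    # Idempotent: apt only acts if nginx is not already installed
    - name: Install nginx
      apt:
        name: nginx
        state: present

    # A Jinja2 template lets the same play serve different environments
    - name: Deploy site configuration
      template:
        src: templates/site.conf.j2
        dest: /etc/nginx/conf.d/site.conf
      notify: restart nginx

  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted
```

Targeting the `webservers` group means the same playbook orchestrates one host or a hundred without modification.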
This document provides an introduction to Ansible, describing it as a simple and lightweight automation tool that can be used to execute one-time tasks, perform system administration tasks, and configure servers and routers. It discusses Ansible's key features including being written in Python, being open source, and being easy to install and use. It also provides information on installing and configuring Ansible on various operating systems as well as how to use ad-hoc commands and playbooks with Ansible.
DevOps in a Regulated World - aka 'Ansible, AWS, and Jenkins' - rmcleay
A look at why using tools like Ansible, AWS, and Jenkins make sense for a medical device startup (and everyone else).
Contains examples of how to deploy instances on AWS, and then configure them with an application, all from the same Ansible playbook.
This document summarizes an Ansible and AWS meetup. It discusses using Ansible to provision and configure AWS resources like EC2 instances, security groups, ELBs, and more through idempotent playbooks. Key points covered include Ansible's agentless architecture, dynamic AWS inventory plugin, core modules like ec2 and cloudformation, templates, roles for reuse, and examples of provisioning playbooks that launch instances and apply configurations. It also briefly mentions NetflixOSS projects that use Ansible like Aminator for AMIs and Asgard for provisioning.
This document summarizes a presentation about integrating the configuration management tool Puppet with the cloud computing platform CloudStack. The key points are:
1) Puppet is configured to provision virtual machines launched in CloudStack without requiring manual intervention or Puppet's auto-signing certificate feature, which poses a security risk.
2) User data passed to instances at launch is used to dynamically set Puppet facts like role and environment without needing separate node definitions.
3) Cleanup scripts remove nodes from Puppet's database and monitoring systems when their corresponding virtual machines in CloudStack are terminated to avoid alerting on missing hosts.
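The userdata-to-facts step (point 2) can be sketched as a Facter external fact: any executable in /etc/facter/facts.d/ that prints key=value pairs defines facts. The semicolon-delimited userdata format below is an assumption for illustration:

```shell
#!/bin/sh
# In production the value would come from the CloudStack metadata service;
# a sample value is used here so the parsing logic is visible.
USERDATA="${USERDATA:-role=web;environment=production}"

# Split semicolon-delimited pairs into one key=value line per fact
echo "$USERDATA" | tr ';' '\n'
```

Puppet can then classify the node on the resulting `role` and `environment` facts without separate node definitions.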
Local Dev on Virtual Machines - Vagrant, VirtualBox and Ansible - Jeff Geerling
Developing web applications and websites locally can be troublesome if you use pre-built server packages like WAMP or MAMP, or an install tool to get Java or Ruby on your computer. Develop using modern best practices by using Vagrant, VirtualBox and Ansible to manage your development environments!
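A minimal Vagrantfile along these lines might look as follows; the box name, IP address, and playbook path are illustrative assumptions:

```ruby
# Sketch: boot an Ubuntu VM under VirtualBox and provision it with Ansible.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.network "private_network", ip: "192.168.56.10"

  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024
  end

  # Runs the playbook against the VM on `vagrant up` / `vagrant provision`
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provision/playbook.yml"
  end
end
```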
Using Ansible for Deploying to Cloud Environments - ahamilton55
Andrew Hamilton presented on using Ansible for deploying to cloud environments. He discussed that Ansible was chosen to provide a simple and repeatable way to build services and deploy them to configured environments. It allows deploying to multiple languages and cloud platforms. Ansible uses a simple execution model and YAML files. Its key advantages include being agentless over SSH, supporting dynamic inventories, and having modules for common tasks. Hamilton provided tips for using Ansible effectively in cloud environments, such as using dynamic inventories, separating variables, and testing changes thoroughly.
Blue/Green deployments have been an important, if rarely implemented, technique in the Continuous Delivery playbook for years. Their aim is simple: provision, deploy, test — and optionally roll-back — your application before it's served to the public. Betterment's deployment architecture takes a similar, but more straightforward approach, accomplishing the important goals sought out by Blue/Green practitioners. Dubbed 'Cyan' (a mixture of Blue/Green), Betterment uses Ansible to provision new instances, push the latest artifacts to them, and ensure that they're healthy before marking them ready for production. All this ensures fast, stable, zero-downtime rollout with minimal human interaction. We'll discuss Betterment's philosophical approach to shipping new code and then dive into the nitty-gritty Ansible that powers the whole thing.
Chasing AMI - Building Amazon machine images with Puppet, Packer and Jenkins - Tomas Doran
Using Puppet to configure EC2 machines seems a natural fit. However, bringing up new machines from a community image with Puppet is not trivial and can be slow, which makes it unsuitable for auto-scaling.
The cloud also offers a solution to ongoing server maintenance, allowing you to launch fresh instances whenever you upgrade your applications (immutable or phoenix servers). However, to succeed predictably, you need to freeze the Puppet code alongside the application version for deployment.
The solution to these issues is generating custom machine images (AMIs) with your software inlined. This talk will cover Yelp's use of Packer, Jenkins and Puppet for generating AMIs, including how we deal with issues like bootstrapping, getting canonical information about a machine's environment and cluster state at launch time, and supporting immutable/phoenix servers alongside more traditional long-lived servers inside our hybrid cloud infrastructure.
Slides for my talk at the HashiCorp User Group - Amsterdam.
A look at some hurdles encountered and other significant points in building a base Vagrant box with Packer, through a personal use case.
Video: https://www.youtube.com/watch?v=J-s9dSjYEJw
GitHub repo: https://github.com/cristovaov/packer-vagrant-talk
Event: http://www.meetup.com/HUG-Amsterdam/events/230517085/
Death to the DevOps team - Agile Yorkshire 2014 - Matthew Skelton
Talk given on 14th October at Agile Yorkshire
An increasing number of organisations - including many that follow Agile practices - have begun to adopt DevOps as a set of guidelines to help improve the speed and quality of software delivery. However, many of these organisations have created a new 'DevOps team' in order to tackle unfamiliar challenges such as infrastructure automation and automated deployments.
Although a dedicated team for infrastructure-as-code can be a useful intermediate step towards greater Dev and Ops collaboration, a long-running 'DevOps team' risks becoming another silo, separating Dev and Ops on a potentially permanent basis.
I will share my experiences of working with a variety of large organisations in many different sectors, helping them to adopt a DevOps approach whilst avoiding another team silo.
We will see examples of activities, approaches, and ideas that have helped organisations to avoid a DevOps team silo, including:
- DevOps Topologies: "Venn diagrams for great benefit DevOps strategy"
- techniques for choosing tools (without fixating on features)
- new flow exercises based on the Ball Point game
- recruitment brainstorming
- Empathy Snap, a new retrospective exercise well suited to DevOps
This session will provide 'food for thought' when adopting and evolving DevOps within your own organisation.
Create your very own Development Environment with Vagrant and Packer - frastel
Vagrant, Packer, and Puppet can be used together to create a development environment. Packer is used to build custom base boxes that include only the operating system. Vagrant uses these base boxes to create isolated virtual machines. Puppet then provisions the virtual machines by installing additional software, configuring applications, and defining infrastructure as code. This allows for consistent, reproducible development environments that match production.
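The Puppet provisioning step described above might, as a sketch, declare the desired software and service state (the package choice is an illustrative assumption):

```puppet
# Illustrative manifest: install a web server and keep it running.
package { 'nginx':
  ensure => installed,
}

service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],
}
```

Because the manifest is declarative, re-running it against an already-configured VM changes nothing, which is what keeps the environments reproducible.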
Scaling Next-Generation Internet TV on AWS With Docker, Packer, and Chef - bridgetkromhout
This document discusses how DramaFever scaled their internet TV platform on AWS using Docker, Packer, and Chef. It describes how they built Docker images for consistent development and deployment, used Packer to build AMIs for consistent server provisioning, and implemented Chef recipes to define server configurations. The tools helped them achieve faster development cycles, consistent environments, and improved ability to automatically scale their infrastructure on AWS.
Anatomy of a Continuous Integration and Delivery (CICD) Pipeline - Robert McDermott
This presentation covers the anatomy of a production CICD pipeline that is used to develop and deploy the cancer research application Oncoscape (https://oncoscape.sttrcancer.org).
The document discusses Automic Software and its products. It begins with Kerry Lebel, Senior Director of Community Programs, introducing Todd DeLaughter, Chief Executive Officer of Automic Software. Todd then discusses how business, applications, infrastructure, and data have changed in today's hybrid cloud, mobile, social, and big data environment. Automic provides solutions for continuous service, delivery, and operations through a single platform to help businesses adapt to these changes. The document promotes Automic's products and capabilities through presentations and demonstrations.
The Road to Continuous Delivery at Perforce - Perforce
This document summarizes the journey of Perforce to implement continuous delivery over 5 years. It describes how they transitioned from nightly builds operated by an engineering services team to shared self-service release management infrastructure across their 30+ products. Some of the key aspects of their approach included trunk-based development, extensive automated testing, and automatic gated releases. They saw significant benefits like reducing release process time to 4 hours and increasing production releases from 8 in 2012 to 450 in 2014. The document also discusses lessons learned around people, processes, and technologies.
CoreOS fest 2016 Summary - DevOps BP 2016 June - Zsolt Molnar
CoreOS Fest 2016 provided updates on CoreOS projects including etcd v3, Kubernetes security tools DEX and DTC, and Prometheus. Key announcements included etcd improving performance and storage, DEX enabling external authentication for Kubernetes, and Prometheus becoming a CNCF project. Keynotes covered security in systemd, the Linux kernel status, and distributed system design tool Runway. CoreOS also announced a $28M funding round and partnerships with Calico and Intel.
Continuous delivery with Jenkins Enterprise and Deployit - XebiaLabs
The document provides an overview of using Jenkins Enterprise and Deployit for continuous delivery. It introduces Jenkins Enterprise and Deployit, describes challenges of enterprise delivery pipelines, and demonstrates how the tools can address issues like access control, job automation, and deployment validation. The presentation concludes with next steps for getting started with continuous delivery using the tools.
This document provides an introduction and overview of SaltStack, including:
- An agenda that covers introductions, why SaltStack is needed, remote execution basics, and SaltStack basics.
- A description of where SaltStack came from and its origins in systems management software.
- An explanation of why SaltStack is needed to address challenges with remote execution at scale across many hosts.
Real-time Cloud Management with SaltStack - SaltStack
Seth House, SaltStack senior engineer, presented at the first Rackspace Unlocked.io event in New York City the week of Cloud Expo. His presentation titled, "Real-time cloud management with SaltStack" is provided here.
Consul is a tool that provides service discovery, configuration, and orchestration. It allows services to register themselves and discover other services via DNS or HTTP. Consul also supports health checking, multi-datacenter capabilities, and key-value storage. The core component is the Consul agent, which can run on every node in client or server mode. Servers are responsible for consensus and storing state while clients forward requests.
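Registration, discovery, and key-value storage go through the local agent's standard HTTP API (port 8500) and DNS interface (port 8600); the service definition below is an illustrative assumption:

```shell
# Register a service with an HTTP health check on the local agent
curl -X PUT http://localhost:8500/v1/agent/service/register \
  -d '{"Name": "web", "Port": 8080, "Check": {"HTTP": "http://localhost:8080/health", "Interval": "10s"}}'

# Write and read a key in the KV store
curl -X PUT http://localhost:8500/v1/kv/config/max_conns -d '100'
curl http://localhost:8500/v1/kv/config/max_conns?raw

# Discover healthy instances of the service via DNS
dig @127.0.0.1 -p 8600 web.service.consul
```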
Build & test once, deploy anywhere - Vday.hu 2016 - Zsolt Molnar
This talk is about a packaging workflow for custom-made Linux applications that can help us get rid of heavy and error-prone installation guides. Why couldn't these applications become as easy to install and upgrade as any mobile app on a smartphone? After elaborating on the problem space, Zsolt shows how we can build application packages for any production platform on every git commit in a predictable and easy manner. The aim is to test your application/business with all dependencies only once and then wrap the verified code into multiple deployment formats that can be consumed directly without any major installation process. This is a demo-heavy presentation touching automation tools like vagrant, packer, saltstack, docker, jenkins, and cloudformation/terraform. The challenge is to build Docker packages, OVA/OVF bundles and AWS AMI images from a relatively simple application within 30 minutes.
This document discusses how Splunk can help organizations with DevOps practices:
Splunk allows organizations to increase application delivery velocity by providing continuous insights to devops teams from planning through monitoring. It also improves code quality by enabling code quality scans, security scans, and automated acceptance tests across the development lifecycle. Splunk further drives impact by providing visibility across infrastructure, applications, services, and tools to give insights across the entire IT environment.
Immutable AWS Deployments with Packer and JenkinsManish Pandit
This document discusses using Packer and Jenkins to create immutable AWS deployments. Packer is used to build machine images from the ground up with all necessary software and code pre-installed. Provisioners further configure and customize the images. Jenkins automates building the images with Packer whenever code is committed. The immutable images prevent drift and ensure consistency. The process allows fully automated deployments through launching instances from the pre-built images.
Assembling an Open Source Toolchain to Manage Public, Private and Hybrid Clou...POSSCON
This document discusses assembling an open source tool chain for hybrid cloud environments using tools like Packer, Vagrant, Ansible, and BoxCutter. It provides examples of using Packer to build machine images for multiple platforms from a single blueprint and using Vagrant and Ansible to provision virtual machines across different cloud providers in a standardized way. Overall, the document promotes the use of these open source automation tools to help manage infrastructure across hybrid cloud environments.
Immutable Deployments with AWS CloudFormation and AWS LambdaAOE
This document describes an immutable infrastructure approach using AWS Lambda and CloudFormation. Key points:
- Infrastructure is defined as code using CloudFormation templates for reproducibility and versioning.
- Lambda functions are used to provision resources, configure settings, run tests, and clean up resources to enforce immutability.
- A pipeline handles building AMIs, deploying stacks, testing, updating DNS, and deleting old stacks in an automated and repeatable way.
Presentation at March 2019 Dutch Postgres User Group Meetup on lessons learnt while migrating from Oracle to Postgres, demo'ed via vagrant test environments and using generic pgbench datasets.
Running your dockerized application(s) on AWS Elastic Container ServiceMarco Pas
This document discusses running Dockerized applications on AWS EC2 Container Service (ECS). It covers building Docker images from Spring Boot applications, pushing images to ECR, deploying containers to ECS using Terraform, autoscaling containers based on CPU usage, service discovery using DNS, and monitoring containers using Prometheus. The key aspects covered include creating Docker images, using ECS for container orchestration, infrastructure as code with Terraform, autoscaling, service discovery, logging and monitoring containers.
"Puppet and Apache CloudStack" by David Nalley, Citrix, at Puppet Camp San Francisco 2013. Find a Puppet Camp near you: puppetlabs.com/community/puppet-camp/
A 60-minute tour of AWS Compute (November 2016)Julien SIMON
This document summarizes a 60-minute tour of AWS compute services, including Amazon EC2, Elastic Beanstalk, EC2 Container Service, and AWS Lambda. It provides an overview of each service, including its core capabilities and use cases. Examples and demos are shown for Elastic Beanstalk, EC2 Container Service, and AWS Lambda. Additional resources are referenced for going deeper with ECS and Lambda.
Antons Kranga Building Agile InfrastructuresAntons Kranga
This document provides an overview of a presentation on building agile infrastructures. It introduces the presenter, Antons Kranga, and his background. It then outlines the goals of DevOps in bringing developers and operations teams together through practices like Agile and ITIL. The presentation will discuss strategies for adopting a DevOps model, including provisioning continuous integration, automating infrastructure testing, and provisioning QA and production environments using tools like Chef, Vagrant, Jenkins, Nexus, and Test Kitchen. It will also cover techniques for automating infrastructure like configuration management with Chef recipes and testing infrastructure with tools like Chaos Monkey.
In this presentation, I am going to briefly talk about what cloud is and highlight the various types of cloud (IaaS, PaaS, SaaS). The bulk of the talk will be about using the fog gem with IaaS. I will discuss fog concepts (collections, models, requests, services, providers), supported with actual examples using fog.
In this talk I will show you how to build a CI/CD pipeline in AWS with, static code analysis in Sonar, tests and continuous deployment of a dockerized service through several environments by using pure AWS services like CodeStar, CodeCommit, CodeBuild, CodeDeploy and CodePipline. I will do a demo of such CI/CD to reveal all guts of tools and services integration and implementation. So you will see how a commit will be going through all those steps and tools to get production environment.
Introduction to Amazon EC2 Container Service and setting up build pipeline with ECS and Jenkins. Presented by our DevOps engineer at a meetup conducted in our WhiteHedge office premises.
This document discusses using Puppet to manage infrastructure as code with Apache CloudStack. It describes how Puppet types and providers were developed to allow defining CloudStack instances and entire application stacks in Puppet manifests. This enables automated deployment and configuration of infrastructure along with software configuration. Examples are given of using Puppet to define CloudStack instances, groups of instances that make up an application stack, and setting defaults for attributes. Resources mentioned include the CloudStack and Puppet GitHub pages.
This document provides an overview of Couchbase Server and how to use it with Ruby. Couchbase Server is a NoSQL database that supports automatic key sharding and replication. It is used by companies like Heroku and Zynga. The document outlines how to install the Couchbase Ruby gem, perform basic CRUD operations, use optimistic locking, expiration, map/reduce, and integrate Couchbase with Rails and other Ruby frameworks.
Introduction to Amazon EC2 Container Service and setting up build pipeline wi...Swapnil Dahiphale
This document introduces Amazon EC2 Container Service (ECS) and describes how to set up a build pipeline with ECS and Jenkins. It defines containers, orchestration, and ECS components like tasks, clusters, and container instances. It outlines a typical user workflow of running a service on ECS, including creating a task definition, service, and updating the service. It concludes with an overview of how to integrate continuous delivery with Jenkins by building Docker images, pushing to a registry, and updating ECS services.
Continuous Delivery with Maven, Puppet and Tomcat - ApacheCon NA 2013Carlos Sanchez
Continuous Integration, with Apache Continuum or Jenkins, can be extended to fully manage deployments and production environments, running in Tomcat for instance, in a full Continuous Delivery cycle using infrastructure-as-code tools like Puppet, allowing you to manage multiple servers and their configurations.
Puppet is an infrastructure-as-code tool that allows easy and automated provisioning of servers, defining the packages, configuration, services,... in code. Enabling DevOps culture, tools like Puppet help drive Agile development all the way to operations and systems administration, and along with continuous integration tools like Apache Continuum or Jenkins, it is a key piece to accomplish repeatability and continuous delivery, automating the operations side during development, QA or production, and enabling testing of systems configuration.
Traditionally a field for system administrators, Puppet can empower developers, allowing both to collaborate coding the infrastructure needed for their developments, whether it runs in hardware, virtual machines or cloud. Developers and sysadmins can define what JDK version must be installed, application server, version, configuration files, war and jar files,... and easily make changes that propagate across all nodes.
Using Vagrant, a command line automation layer for VirtualBox, they can also spin off virtual machines in their local box, easily from scratch with the same configuration as production servers, do development or testing and tear them down afterwards.
We will show how to install and manage Puppet nodes with JDK, multiple Tomcat instances with installed web applications, database, configuration files and all the supporting services. Including getting up and running with Vagrant and VirtualBox for quickstart and Puppet experiments, as well as setting up automated testing of the Puppet code.
This document provides an overview of Amazon EC2 Container Service (ECS), which allows users to easily run and manage Docker containers on a cluster of Amazon EC2 instances. It discusses key concepts like clusters, tasks, services, container definitions and scheduling. It also provides examples of common usage patterns like running batch jobs or microservices, and how to update services deployed on ECS.
Docker and AWS have been working together to improve the Docker experience you already know and love. Deploying from Docker straight to AWS with your existing workflow has never been easier. Developers can use Docker Compose and Docker Desktop to deploy applications on Amazon ECS on AWS Fargate. This new functionality streamlines the process of deploying and managing containers in AWS from a local development environment running Docker. Join us for a hands-on walk through of how you can get started today.
Using Kubernetes for Continuous Integration and Continuous DeliveryCarlos Sanchez
This document summarizes how to use Kubernetes for continuous integration and continuous delivery. It discusses using the Jenkins Kubernetes plugin to run Jenkins agents as Kubernetes pods for infinite scalability. It provides examples of defining pods with multiple containers for multi-language pipelines. It also covers using persistent volumes, resource limits, and deploying applications to Kubernetes from Jenkins pipelines.
7. Forever Stack Tools
Jenkins, New Relic, Ganglia, Nagios, Cacti, Gradle, Ant, Solano, Chef, Ansible, Puppet, SaltStack, Logstash, Splunk, Papertrail, NoSQL, Balsamiq, IaaS, PaaS, Docker, Selenium
Every piece of software runs on an operating system.
12. The variables section
{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY`}}",
    "aws_secret_key": "{{env `AWS_SECRET_KEY`}}"
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-9eaa1cf6",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-example {{timestamp}}"
  }]
}
User variables: the `user` function fetches the value of a user variable; the `env` function fetches a value from the environment. The `env` function is only valid within the variables section.
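The two-stage lookup described above (`env` resolved when the variables section is read, `user` resolved where variables are used) can be sketched outside Packer. This is an illustrative re-implementation, not Packer's actual template engine; the function names are hypothetical.

```python
import os
import re

# Sketch of Packer's two-stage variable interpolation:
# stage 1 resolves {{env `NAME`}} inside the "variables" section,
# stage 2 resolves {{user `name`}} everywhere else in the template.

ENV_RE = re.compile(r"\{\{env `([^`]+)`\}\}")
USER_RE = re.compile(r"\{\{user `([^`]+)`\}\}")

def resolve_variables(variables, environ=os.environ):
    """Stage 1: fill each user variable from the environment."""
    return {
        name: ENV_RE.sub(lambda m: environ.get(m.group(1), ""), template)
        for name, template in variables.items()
    }

def interpolate(text, user_vars):
    """Stage 2: substitute {{user `name`}} in a builder field."""
    return USER_RE.sub(lambda m: user_vars.get(m.group(1), ""), text)

if __name__ == "__main__":
    env = {"AWS_ACCESS_KEY": "AKIAEXAMPLE", "AWS_SECRET_KEY": "s3cr3t"}
    user_vars = resolve_variables(
        {"aws_access_key": "{{env `AWS_ACCESS_KEY`}}"}, environ=env)
    print(interpolate("{{user `aws_access_key`}}", user_vars))  # AKIAEXAMPLE
```

This mirrors why `env` only makes sense in the variables section: by the time builders are interpolated, only the already-resolved user variables are in scope.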
13. The builders section
{
  "variables": {
    "aws_access_key": "{{env `AWS_ACCESS_KEY`}}",
    "aws_secret_key": "{{env `AWS_SECRET_KEY`}}"
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "us-east-1",
    "source_ami": "ami-9eaa1cf6",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-example {{timestamp}}"
  }]
}
The amazon-ebs builder creates an EBS-backed AMI by launching the source AMI (source_ami), provisioning it, and re-packaging it into a new AMI (ami_name). The timestamp function makes the resulting AMI name unique.
14. $ packer build \
      -var 'aws_access_key=YOUR ACCESS KEY' \
      -var 'aws_secret_key=YOUR SECRET KEY' \
      packer.json
==> amazon-ebs: amazon-ebs output will be in this color.
==> amazon-ebs: Creating temporary keypair for this instance...
==> amazon-ebs: Creating temporary security group for this instance...
==> amazon-ebs: Authorizing SSH access on the temporary security group...
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Waiting for instance to become ready...
==> amazon-ebs: Connecting to the instance via SSH...
==> amazon-ebs: Stopping the source instance...
==> amazon-ebs: Waiting for the instance to stop...
==> amazon-ebs: Creating the AMI: packer-example 1371856345
==> amazon-ebs: AMI: ami-19601070
==> amazon-ebs: Waiting for AMI to become ready...
==> amazon-ebs: Terminating the source AMI instance...
==> amazon-ebs: Deleting temporary security group...
==> amazon-ebs: Deleting temporary keypair...
==> amazon-ebs: Build finished.
==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
us-east-1: ami-19601070
15. Builders
Builders are responsible for creating machines and generating images from them for various platforms.
• Amazon EC2 (AMI)
• DigitalOcean
• Docker
• Google Compute Engine (GCE)
• OpenStack
• Parallels
• QEMU
• VirtualBox
• VMware
17. Customize with provisioners
{
  "variables": {…},
  "builders": […],
  "provisioners": [{
    "type": "shell",
    "script": "./scripts/install-puppet.sh"
  }, {
    "type": "puppet-masterless",
    "manifest_file": "puppet/manifest/site.pp",
    "module_paths": [ "puppet/modules" ],
    "hiera_config_path": "puppet/hiera.yaml"
  }]
}
Provisioners are executed one by one, in order: (1) the shell provisioner, then (2) puppet-masterless.
18. Install puppet agent
{
  "variables": {…},
  "builders": […],
  "provisioners": [{
    "type": "shell",
    "script": "./scripts/install-puppet.sh"
  }, {
    "type": "puppet-masterless",
    "manifest_file": "puppet/manifest/site.pp",
    "module_paths": [ "puppet/modules" ],
    "hiera_config_path": "puppet/hiera.yaml"
  }]
}
The shell provisioner provisions machines using shell scripts. Usually we will reuse these scripts across different kinds of machines.
19. Provision with puppet scripts
{
  "variables": {…},
  "builders": […],
  "provisioners": [{
    "type": "shell",
    "script": "./scripts/install-puppet.sh"
  }, {
    "type": "puppet-masterless",
    "manifest_file": "puppet/manifest/site.pp",
    "module_paths": [ "puppet/modules" ],
    "hiera_config_path": "puppet/hiera.yaml"
  }]
}
puppet-masterless needs no Puppet server: manifests, modules, and hiera data can all be stored in git.
20. Provisioners
Provisioners install and configure software within running machines prior to turning them into machine images.
• Remote Shell
• Local Shell
• File Uploads
• PowerShell
• Windows Shell
• Ansible
• Chef Client/Solo
• Puppet Masterless/Server
• Salt
• Windows Restart
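Several of the provisioner types above can be combined in one template. As a hedged sketch (the archive name, paths, and commands are hypothetical, not from the deck), a file provisioner can upload an artifact that a shell provisioner then unpacks:

```json
{
  "provisioners": [{
    "type": "file",
    "source": "build/app.tar.gz",
    "destination": "/tmp/app.tar.gz"
  }, {
    "type": "shell",
    "inline": [
      "sudo mkdir -p /opt/app",
      "sudo tar -xzf /tmp/app.tar.gz -C /opt/app"
    ]
  }]
}
```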
22. Local Repository
Packaging and Publishing
After the machine is built, we would like to:
• Package it as a zip-ball for local use
• Package it as a Vagrant box and publish it on Atlas
• Preserve the Vagrant box locally
(Diagram: Machine Built → Compress → Foo.zip; Machine Built → Package → Foo.box → Publish → Atlas)
23. Post-Processor Chains
{ …
  "post-processors": [{
    "type": "compress",
    "output": "{{.BuildName}}-{{isotime \"20060102\"}}.zip"
  }, [{
    "type": "vagrant",
    "output": "{{.BuildName}}-{{isotime \"20060102\"}}.box"
  }, {
    "type": "atlas",
    "token": "{{user `atlas_token`}}",
    "artifact": "trendmicro/centos62",
    "artifact_type": "virtualbox",
    "keep_input_artifact": true
  }]]
}
The top-level compress entry packages the build as a zip-ball for local use; the nested [vagrant, atlas] chain packages it as a Vagrant box and publishes it on Atlas.
24. Compress into Single Archive
{ …
  "post-processors": [{
    "type": "compress",
    "output": "{{.BuildName}}-{{isotime \"20060102\"}}.zip"
  }, [{
    "type": "vagrant",
    "output": "{{.BuildName}}-{{isotime \"20060102\"}}.box"
  }, {
    "type": "atlas",
    "token": "{{user `atlas_token`}}",
    "artifact": "trendmicro/centos62",
    "artifact_type": "virtualbox",
    "keep_input_artifact": true
  }]]
}
"20060102" is a Go-style date format. The compression format is auto-inferred from the output extension.
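Go-style layouts are written against Go's reference time "Mon Jan 2 15:04:05 MST 2006", so "20060102" means year-month-day. A rough sketch of the idea, translating a handful of common tokens to strftime directives (not a complete or official mapping):

```python
from datetime import datetime, timezone

# Illustrative mapping of a few Go time-layout tokens (as used by
# Packer's isotime function) to strftime directives. In a Go layout,
# "2006" stands for the year, "01" the month, "02" the day, and so on,
# because that is the reference time's value for each field.
GO_TO_STRFTIME = {
    "2006": "%Y",  # 4-digit year
    "01": "%m",    # 2-digit month
    "02": "%d",    # 2-digit day
    "15": "%H",    # 24-hour clock hour
    "04": "%M",    # minute
    "05": "%S",    # second
}

def go_layout_to_strftime(layout):
    """Translate a simple Go layout like '20060102' to '%Y%m%d'."""
    for go_token, directive in GO_TO_STRFTIME.items():
        layout = layout.replace(go_token, directive)
    return layout

if __name__ == "__main__":
    fmt = go_layout_to_strftime("20060102")
    print(fmt)  # %Y%m%d
    print(datetime(2015, 6, 21, tzinfo=timezone.utc).strftime(fmt))  # 20150621
```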
26. Post-Processors
The post-processors section configures any post-processing applied to the images built by the builders.
• compress
• vSphere
• Vagrant
• Vagrant Cloud
• Atlas
• docker-import
• docker-push
• docker-save
• docker-tag
27. What Else Do You Need?
• Kickstart
  – Use a kickstart file to install Linux from an ISO
• chef/bento
  – Vagrant box Packer definitions by Chef
  – Published on Atlas: https://atlas.hashicorp.com/chef
• Windows
  – Windows Automated Installation Kit (AIK)
  – Unattended Windows Setup
31. What is Your Flow?
• You need to define your DevOps flow
• No need to build Rome in one day
• Consider company culture
• Tool adoption
32. Summary
• DevOps Fast Iteration
• Packer as the starting point
• Builders → Provisioners → Post-Processors
• Pets or Cattle?
• Define Your DevOps Workflow
34. Alternative Format?
But we need comments to add annotations and disable entire experimental blocks...
"It is one of the primary reasons we chose JSON as the configuration format: it is highly convenient to write a script to generate the configuration." (@mitchellh)
.SUFFIXES: .json .yml
.yml.json:
	ruby -ryaml -rjson \
	  -e 'puts JSON.pretty_generate(YAML.load(ARGF))' \
	  < $< > $@
https://github.com/mitchellh/packer/issues/887
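The quote's point is that a generator script can carry the comments and toggles that plain JSON cannot. A minimal sketch of that idea (in Python rather than the Ruby one-liner above; the experiments_enabled flag and file paths are hypothetical, while the builder values are taken from the earlier slides):

```python
import json

# Sketch of generating a Packer template programmatically instead of
# hand-writing JSON. Comments and experimental blocks live in the
# generator script; the emitted JSON stays clean and comment-free.

def make_template(region="us-east-1", experiments_enabled=False):
    builder = {
        "type": "amazon-ebs",
        "region": region,
        "source_ami": "ami-9eaa1cf6",
        "instance_type": "t2.micro",
        "ssh_username": "ubuntu",
        "ami_name": "packer-example {{timestamp}}",
    }
    provisioners = [
        {"type": "shell", "script": "./scripts/install-puppet.sh"},
    ]
    if experiments_enabled:
        # An "experimental block" that can be disabled entirely by a flag.
        provisioners.append({"type": "shell", "inline": ["echo experiment"]})
    return {"builders": [builder], "provisioners": provisioners}

if __name__ == "__main__":
    print(json.dumps(make_template(), indent=2))
```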