Ansible Documentation
Release 1.5
1 About Ansible
1.1 Introduction
1.2 Quickstart Video
1.3 Playbooks
1.4 Playbooks: Special Topics
1.5 About Modules
1.6 Module Index
1.7 Detailed Guides
1.8 Developer Information
1.9 Ansible Tower
1.10 Community Information
1.11 Ansible Galaxy
1.12 Frequently Asked Questions
1.13 Glossary
1.14 YAML Syntax
1.15 Ansible Guru
1 About Ansible
1.1 Introduction
Before we dive into the really fun parts (playbooks, configuration management, deployment, and orchestration), we'll
learn how to get Ansible installed and cover some basic concepts. We'll go over how to execute ad-hoc commands in parallel
across your nodes using /usr/bin/ansible. We’ll also see what sort of modules are available in Ansible’s core (though
you can also write your own, which we’ll also show later).
1.1.1 Installation
Topics
• Installation
– Getting Ansible
– Basics / What Will Be Installed
– What Version To Pick?
– Control Machine Requirements
– Managed Node Requirements
– Installing the Control Machine
* Running From Source
* Latest Release Via Yum
* Latest Releases Via Apt (Ubuntu)
* Latest Releases Via pkg (FreeBSD)
* Latest Releases Via Pip
* Tarballs of Tagged Releases
Getting Ansible
You may also wish to follow the GitHub project if you have a GitHub account. This is also where we keep the issue
tracker for sharing bugs and feature ideas.
Because it runs so easily from source and does not require any installation of software on remote machines, many
users will actually track the development version.
Ansible’s release cycles are usually about two months long. Due to this short release cycle, minor bugs will generally
be fixed in the next release versus maintaining backports on the stable branch. Major bugs will still have maintenance
releases when needed, though these are infrequent.
If you wish to run the latest released version of Ansible and you are running Red Hat Enterprise Linux (TM),
CentOS, Fedora, Debian, or Ubuntu, we recommend using the OS package manager.
For other installation options, we recommend installing via “pip”, which is the Python package manager, though other
options are also available.
If you wish to track the development release to use and test the latest features, we will share information about running
from source. It’s not necessary to install the program to run from source.
Currently Ansible can be run from any machine with Python 2.6 installed (Windows isn’t supported for the control
machine).
This includes Red Hat, Debian, CentOS, OS X, any of the BSDs, and so on.
On the managed nodes, you only need Python 2.4 or later, but if you are running less than Python 2.5 on the remotes,
you will also need:
• python-simplejson
Note: Ansible’s “raw” module (for executing commands in a quick and dirty way) and the script module don’t even
need that. So technically, you can use Ansible to install python-simplejson using the raw module, which then allows
you to use everything else. (That’s jumping ahead though.)
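For example (a sketch; assuming yum-based remotes and that your SSH credentials already work), the bootstrap could look like:

$ ansible all -m raw -a "yum install -y python-simplejson"

After that, the rest of Ansible's modules will function normally on those hosts.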
Note: If you have SELinux enabled on remote nodes, you will also want to install libselinux-python on them before
using any copy/file/template related functions in Ansible. You can of course still use the yum module in Ansible to
install this package on remote systems that do not have it.
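For example, on yum-based remote systems, something like this should do it:

$ ansible all -m yum -a "name=libselinux-python state=installed"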
Note: Python 3 is a slightly different language than Python 2 and most Python programs (including Ansible) are not
switching over yet. However, some Linux distributions (Gentoo, Arch) may not have a Python 2.X interpreter installed
by default. On those systems, you should install one, and set the ‘ansible_python_interpreter’ variable in inventory
(see Inventory) to point at your 2.X Python. Distributions like Red Hat Enterprise Linux, CentOS, Fedora, and Ubuntu
all have a 2.X interpreter installed by default and this does not apply to those distributions. This is also true of nearly
all Unix systems. If you need to bootstrap these remote systems by installing Python 2.X, using the ‘raw’ module will
be able to do it remotely.
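For example, an inventory entry for such a system might look like this (the hostname and interpreter path are illustrative):

arch1.example.com ansible_python_interpreter=/usr/bin/python2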
Running From Source

Ansible is trivially easy to run from a checkout; root permissions are not required to use it and there is no software
to actually install for Ansible itself. No daemons or database setup are required. Because of this, many users in our
community use the development version of Ansible all of the time, so they can take advantage of new features when
they are implemented, and also easily contribute to the project. Because there is nothing to install, following the
development version is significantly easier than most open source projects.
To install from source:
$ git clone git://github.com/ansible/ansible.git
$ cd ./ansible
$ source ./hacking/env-setup
If you don’t have pip installed in your version of Python, install pip:
$ sudo easy_install pip
Ansible also uses the following Python modules that need to be installed:
$ sudo pip install paramiko PyYAML jinja2 httplib2
Once you run the env-setup script you'll be running from the checkout and the default inventory file will be
/etc/ansible/hosts. You can optionally specify an inventory file (see Inventory) other than /etc/ansible/hosts:
$ echo "127.0.0.1" > ~/ansible_hosts
$ export ANSIBLE_HOSTS=~/ansible_hosts
You can read more about the inventory file in later parts of the manual.
Latest Release Via Yum

RPMs are available from yum for EPEL 6 and currently supported Fedora distributions.
Ansible itself can manage earlier operating systems that contain Python 2.4 or higher (so also EL5).
Fedora users can install Ansible directly. If you are using RHEL or CentOS and have not already done so,
configure EPEL first:
# install the epel-release RPM if needed on CentOS, RHEL, or Scientific Linux
$ sudo yum install ansible
You can also build an RPM yourself. From the root of a checkout or tarball, use the make rpm command to build an
RPM you can distribute and install. Make sure you have rpm-build, make, and python2-devel installed.
$ git clone git://github.com/ansible/ansible.git
$ cd ./ansible
$ make rpm
$ sudo rpm -Uvh ~/rpmbuild/ansible-*.noarch.rpm
Latest Releases Via Apt (Ubuntu)

Debian/Ubuntu packages can also be built from the source checkout; run:
$ make deb
You may also wish to run from source to get the latest, which is covered above.
Latest Releases Via Pip

Ansible can be installed via "pip", the Python package manager. If 'pip' isn't already available in your version of
Python, you can get pip by:

$ sudo easy_install pip

Then install Ansible with:

$ sudo pip install ansible

Readers that use virtualenv can also install Ansible under virtualenv, though we'd recommend not worrying about it
and just installing Ansible globally. Do not use easy_install to install Ansible directly.
Tarballs of Tagged Releases

Packaging Ansible or wanting to build a local package yourself, but don't want to do a git checkout? Tarballs of
releases are available on the Ansible downloads page.
These releases are also tagged in the git repository with the release version.
See also:
Introduction To Ad-Hoc Commands Examples of basic commands
Playbooks Learning ansible’s configuration management language
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net #ansible IRC chat channel
1.1.2 Getting Started

Topics
• Getting Started
– Foreword
– Remote Connection Information
– Your first commands
– Host Key Checking
Foreword
Now that you’ve read Installation and installed Ansible, it’s time to dig in and get started with some commands.
What we are showing first is not Ansible's powerful configuration/deployment/orchestration feature set, called playbooks.
Playbooks are covered in a separate section.
This section is about how to get going initially. Once you have these concepts down, read Introduction To Ad-Hoc
Commands for some more detail, and then you’ll be ready to dive into playbooks and explore the most interesting
parts!
Remote Connection Information

Before we get started, it's important to understand how Ansible communicates with remote machines over SSH.
By default, Ansible 1.3 and later will try to use native OpenSSH for remote communication when possible. This
enables ControlPersist (a performance feature), Kerberos, and options in ~/.ssh/config such as Jump Host setup.
When using Enterprise Linux 6 operating systems as the control machine (Red Hat Enterprise Linux and derivatives
such as CentOS), however, the version of OpenSSH may be too old to support ControlPersist. On these operating
systems, Ansible will fall back to using a high-quality Python implementation of OpenSSH called 'paramiko'. If
you wish to use features like Kerberized SSH and more, consider using Fedora, OS X, or Ubuntu as your control
machine until a newer version of OpenSSH is available for your platform – or engage ‘accelerated mode’ in Ansible.
See Accelerated Mode.
In Ansible 1.2 and before, the default was strictly paramiko and native SSH had to be explicitly selected with -c ssh or
set in the configuration file.
Occasionally you’ll encounter a device that doesn’t do SFTP. This is rare, but if talking with some remote devices that
don’t support SFTP, you can switch to SCP mode in The Ansible Configuration File.
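For example, a minimal ansible.cfg snippet that forces scp (this uses the scp_if_ssh setting described later in this document):

[ssh_connection]
scp_if_ssh = True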
When speaking with remote machines, Ansible will by default assume you are using SSH keys – which we encourage
– but passwords are fine too. To enable password auth, supply the option --ask-pass where needed. If using sudo
features and when sudo requires a password, also supply --ask-sudo-pass as appropriate.
While it may be common sense, it is worth sharing: Any management system benefits from being run near the
machines being managed. If running in a cloud, consider running Ansible from a machine inside that cloud. It will work
better than on the open internet in most cases.
As an advanced topic, Ansible doesn’t just have to connect remotely over SSH. The transports are pluggable, and there
are options for managing things locally, as well as managing chroot, lxc, and jail containers. A mode called
'ansible-pull' can also invert the system and have systems 'phone home' via scheduled git checkouts to pull configuration
directives from a central repository.
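As a sketch only (the repository URL and playbook name here are hypothetical), a managed node might run ansible-pull from cron every 15 minutes:

*/15 * * * * ansible-pull -U git://example.com/ansible-config.git local.yml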
Your first commands

Now that you've installed Ansible, it's time to get started with some basics.
Edit (or create) /etc/ansible/hosts and put one or more remote systems in it, for which you have your SSH key in
authorized_keys:
192.168.1.50
aserver.example.org
bserver.example.org
This is an inventory file, which is also explained in greater depth here: Inventory.
We’ll assume you are using SSH keys for authentication. To set up SSH agent to avoid retyping passwords, you can
do:
$ ssh-agent bash
$ ssh-add ~/.ssh/id_rsa
(Depending on your setup, you may wish to use Ansible’s --private-key option to specify a pem file instead)
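For example, with a hypothetical key file:

$ ansible all -m ping --private-key=~/.ssh/mykey.pem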
Now ping all your nodes:
$ ansible all -m ping
Ansible will attempt to connect to the machines remotely using your current user name, just like SSH would. To override
the remote user name, just use the ‘-u’ parameter.
If you would like to access sudo mode, there are also flags to do that:
# as bruce
$ ansible all -m ping -u bruce
# as bruce, sudoing to root
$ ansible all -m ping -u bruce --sudo
# as bruce, sudoing to batman
$ ansible all -m ping -u bruce --sudo --sudo-user batman
(The sudo implementation is changeable in Ansible's configuration file if you happen to want to use a sudo
replacement. Flags passed to sudo (like -H) can also be set there.)
Now run a live command on all of your nodes:
$ ansible all -a "/bin/echo hello"
Congratulations. You’ve just contacted your nodes with Ansible. It’s soon going to be time to read some of the
more real-world Introduction To Ad-Hoc Commands, and explore what you can do with different modules, as well
as the Ansible Playbooks language. Ansible is not just about running commands, it also has powerful configuration
management and deployment features. There’s more to explore, but you already have a fully working infrastructure!
Host Key Checking

Ansible 1.2.1 and later have host key checking enabled by default.
If a host is reinstalled and has a different key in 'known_hosts', this will result in an error message until corrected. If
a host is not initially in 'known_hosts' this will result in prompting for confirmation of the key, which makes for an
interactive experience when using Ansible from, say, cron. You might not want this.
If you wish to disable this behavior and understand the implications, you can do so by editing /etc/ansible/ansible.cfg
or ~/.ansible.cfg:
[defaults]
host_key_checking = False
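Alternatively, to disable it for a single shell session rather than permanently, setting an environment variable should also work:

$ export ANSIBLE_HOST_KEY_CHECKING=False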
Also note that host key checking in paramiko mode is reasonably slow; switching to 'ssh' is therefore also recommended
when using this feature.

Ansible will log some information about module arguments on the remote system in the remote syslog. To enable basic
logging on the control machine see The Ansible Configuration File document and set the 'log_path' configuration file
setting. Enterprise users may also be interested in Ansible Tower. Tower provides a very robust database logging feature
where it is possible to drill down and see history based on hosts, projects, and particular inventories over time –
explorable both graphically and through a REST API.
See also:
Inventory More information about inventory
Introduction To Ad-Hoc Commands Examples of basic commands
Playbooks Learning Ansible’s configuration management language
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net #ansible IRC chat channel
1.1.3 Inventory
Topics
• Inventory
– Hosts and Groups
– Host Variables
– Group Variables
– Groups of Groups, and Group Variables
– Splitting Out Host and Group Specific Data
– List of Behavioral Inventory Parameters
Ansible works against multiple systems in your infrastructure at the same time. It does this by selecting portions of
systems listed in Ansible’s inventory file, which defaults to being saved in the location /etc/ansible/hosts.
Not only is this inventory configurable, but you can also use multiple inventory files at the same time (explained below)
and also pull inventory from dynamic or cloud sources, as described in Dynamic Inventory.
The format for /etc/ansible/hosts is an INI format and looks like this:
mail.example.com
[webservers]
foo.example.com
bar.example.com
[dbservers]
one.example.com
two.example.com
three.example.com
The things in brackets are group names, which are used in classifying systems and deciding what systems you are
controlling at what times and for what purpose.
It is ok to put systems in more than one group, for instance a server could be both a webserver and a dbserver. If you
do, note that variables will come from all of the groups they are a member of, and variable precedence is detailed in a
later chapter.
If you have hosts that run on non-standard SSH ports you can put the port number after the hostname with a colon.
Ports listed in your SSH config file won’t be used, so it is important that you set them if things are not running on the
default port:
badwolf.example.com:5309
Suppose you have just static IPs and want to set up some aliases that don’t live in your host file, or you are connecting
through tunnels. You can do things like this:
jumper ansible_ssh_port=5555 ansible_ssh_host=192.168.1.50
In the above example, trying to ansible against the host alias “jumper” (which may not even be a real hostname)
will contact 192.168.1.50 on port 5555. Note that this is using a feature of the inventory file to define some special
variables. Generally speaking this is not the best way to define variables that describe your system policy, but we’ll
share suggestions on doing this later. We’re just getting started.
Adding a lot of hosts? If you have a lot of hosts following similar patterns you can do this rather than listing each
hostname:
[webservers]
www[01:50].example.com
For numeric patterns, leading zeros can be included or removed, as desired. Ranges are inclusive. You can also define
alphabetic ranges:
[databases]
db-[a:f].example.com
You can also select the connection type and user on a per host basis:
[targets]
localhost ansible_connection=local
other1.example.com ansible_connection=ssh ansible_ssh_user=mpdehaan
other2.example.com ansible_connection=ssh ansible_ssh_user=mdehaan
As mentioned above, setting these in the inventory file is only a shorthand, and we’ll discuss how to store them in
individual files in the ‘host_vars’ directory a bit later on.
Host Variables
As alluded to above, it is easy to assign variables to hosts that will be used later in playbooks:
[atlanta]
host1 http_port=80 maxRequestsPerChild=808
host2 http_port=303 maxRequestsPerChild=909
Group Variables

Variables can also be applied to an entire group at once:
[atlanta:vars]
ntp_server=ntp.atlanta.example.com
proxy=proxy.atlanta.example.com
Groups of Groups, and Group Variables

It is also possible to make groups of groups and assign variables to groups. These variables can be used by
/usr/bin/ansible-playbook, but not /usr/bin/ansible:
[atlanta]
host1
host2
[raleigh]
host2
host3
[southeast:children]
atlanta
raleigh
[southeast:vars]
some_server=foo.southeast.example.com
halon_system_timeout=30
self_destruct_countdown=60
escape_pods=2
[usa:children]
southeast
northeast
southwest
northwest
If you need to store lists or hash data, or prefer to keep host and group specific variables separate from the inventory
file, see the next section.
Splitting Out Host and Group Specific Data

The preferred practice in Ansible is actually not to store variables in the main inventory file.

In addition to storing variables directly in the INI file, host and group variables can be stored in individual files
relative to the inventory file.
These variable files are in YAML format. See YAML Syntax if you are new to YAML.
Assuming the inventory file path is:
/etc/ansible/hosts
If the host is named ‘foosball’, and in groups ‘raleigh’ and ‘webservers’, variables in YAML files at the following
locations will be made available to the host:
/etc/ansible/group_vars/raleigh
/etc/ansible/group_vars/webservers
/etc/ansible/host_vars/foosball
For instance, suppose you have hosts grouped by datacenter, and each datacenter uses some different servers. The data
in the groupfile ‘/etc/ansible/group_vars/raleigh’ for the ‘raleigh’ group might look like:
---
ntp_server: acme.example.org
database_server: storage.example.org
List of Behavioral Inventory Parameters

As alluded to above, setting the following variables controls how ansible interacts with remote hosts. Some we have
already mentioned:
ansible_ssh_host
The name of the host to connect to, if different from the alias you wish to give to it.
ansible_ssh_port
The ssh port number, if not 22
ansible_ssh_user
The default ssh user name to use.
ansible_ssh_pass
The ssh password to use (this is insecure, we strongly recommend using --ask-pass or SSH keys)
ansible_sudo_pass
The sudo password to use (this is insecure, we strongly recommend using --ask-sudo-pass)
ansible_connection
The connection type of the host. Candidates are local, ssh or paramiko. The default is paramiko before Ansible 1.2,
and 'smart' afterwards, which detects whether usage of 'ssh' would be feasible based on whether ControlPersist is supported.
ansible_ssh_private_key_file
Private key file used by ssh. Useful if using multiple keys and you don’t want to use SSH agent.
ansible_python_interpreter
The target host python path. This is useful for systems with more than one Python, or where Python is not located
at "/usr/bin/python", such as *BSD, or where /usr/bin/python is not a 2.X series Python. We do not use the
"/usr/bin/env" mechanism as that requires the remote user's path to be set right and also assumes the python
executable is named python, whereas the executable might be named something like "python26".
ansible_*_interpreter
Works for anything such as ruby or perl and works just like ansible_python_interpreter.
This replaces the shebang of modules which will run on that host.
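For example, a hypothetical inventory line pointing at a non-standard ruby location:

webserver1.example.com ansible_ruby_interpreter=/usr/local/bin/ruby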
See also:
Dynamic Inventory Pulling inventory from dynamic sources, such as cloud providers
Introduction To Ad-Hoc Commands Examples of basic commands
Playbooks Learning ansible’s configuration management language
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net #ansible IRC chat channel
1.1.4 Dynamic Inventory

Topics
• Dynamic Inventory
– Example: The Cobbler External Inventory Script
– Example: AWS EC2 External Inventory Script
– Other inventory scripts
– Using Multiple Inventory Sources
Often a user of a configuration management system will want to keep inventory in a different software system. Ansible
provides a basic text-based system as described in Inventory but what if you want to use something else?
Frequent examples include pulling inventory from a cloud provider, LDAP, Cobbler, or a piece of expensive enterprisey
CMDB software.
Ansible easily supports all of these options via an external inventory system. The plugins directory contains some of
these already – including options for EC2/Eucalyptus, Rackspace Cloud, and OpenStack, examples of some of which
will be detailed below.
Ansible Tower also provides a database to store inventory results that is both web and REST accessible. Tower syncs with
all Ansible dynamic inventory sources you might be using, and also includes a graphical inventory editor. By having
a database record of all of your hosts, it’s easy to correlate past event history and see which ones have had failures on
their last playbook runs.
For information about writing your own dynamic inventory source, see Developing Dynamic Inventory Sources.
Example: The Cobbler External Inventory Script

It is expected that many Ansible users with a reasonable amount of physical hardware may also be Cobbler users.
(Note: Cobbler was originally written by Michael DeHaan and is now led by James Cammarata, who also works for
Ansible, Inc.)
While primarily used to kick off OS installations and manage DHCP and DNS, Cobbler has a generic layer that allows
it to represent data for multiple configuration management systems (even at the same time), and has been referred
to as a ‘lightweight CMDB’ by some admins. This particular script will communicate with Cobbler using Cobbler’s
XMLRPC API.
To tie Ansible’s inventory to Cobbler (optional), copy this script to /etc/ansible and chmod +x the file. cobblerd will
now need to be running when you are using Ansible and you’ll need to use Ansible’s -i command line option (e.g.
-i /etc/ansible/cobbler.py).
First test the script by running /etc/ansible/cobbler.py directly. You should see some JSON data output,
but it may not have anything in it just yet.
Let’s explore what this does. In cobbler, assume a scenario somewhat like the following:
cobbler profile add --name=webserver --distro=CentOS6-x86_64
cobbler profile edit --name=webserver --mgmt-classes="webserver" --ksmeta="a=2 b=3"
cobbler system edit --name=foo --dns-name="foo.example.com" --mgmt-classes="atlanta" --ksmeta="c=4"
cobbler system edit --name=bar --dns-name="bar.example.com" --mgmt-classes="atlanta" --ksmeta="c=5"
In the example above, the system ‘foo.example.com’ will be addressable by ansible directly, but will also be address-
able when using the group names ‘webserver’ or ‘atlanta’. Since Ansible uses SSH, we’ll try to contact system foo
over 'foo.example.com' only, never just 'foo'. Similarly, if you try "ansible foo" it wouldn't find the system... but
"ansible 'foo*'" would, because the system DNS name starts with 'foo'.
The script doesn’t just provide host and group info. In addition, as a bonus, when the ‘setup’ module is run (which
happens automatically when using playbooks), the variables ‘a’, ‘b’, and ‘c’ will all be auto-populated in the templates:
# file: /srv/motd.j2
Welcome, I am templated with a value of a={{ a }}, b={{ b }}, and c={{ c }}
Note: The name ‘webserver’ came from cobbler, as did the variables for the config file. You can still pass in your
own variables like normal in Ansible, but variables from the external inventory script will override any that have the
same name.
So, with the template above (motd.j2), this would result in the following data being written to /etc/motd for system
'foo':

Welcome, I am templated with a value of a=2, b=3, and c=4
And technically, though there is no major good reason to do it, this also works:
ansible webserver -m shell -a "echo {{ a }}"
Example: AWS EC2 External Inventory Script

If you use Amazon Web Services EC2, maintaining an inventory file might not be the best approach, because hosts
may come and go over time, be managed by external applications, or you might even be using AWS autoscaling. For
this reason, you can use the EC2 external inventory script.
You can use this script in one of two ways. The easiest is to use Ansible’s -i command line option and specify the
path to the script after marking it executable:
ansible -i ec2.py -u ubuntu us-east-1d -m ping
The second option is to copy the script to /etc/ansible/hosts and chmod +x it. You will also need to copy the ec2.ini
file to /etc/ansible/ec2.ini. Then you can run ansible as you would normally.
To successfully make an API call to AWS, you will need to configure Boto (the Python interface to AWS). There are
a variety of methods available, but the simplest is just to export two environment variables:
export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
You can test the script by itself to make sure your config is correct:
cd plugins/inventory
./ec2.py --list
After a few moments, you should see your entire EC2 inventory across all regions in JSON.
Since each region requires its own API call, if you are only using a small set of regions, feel free to edit ec2.ini
and list only the regions you are interested in. There are other config options in ec2.ini including cache control,
and destination variables.
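For example, an ec2.ini excerpt limited to two regions might look like this (a sketch; see the comments in ec2.ini itself for the authoritative option names):

# ec2.ini (excerpt)
regions = us-east-1,us-west-2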
At their heart, inventory files are simply a mapping from some name to a destination address. The default ec2.ini
settings are configured for running Ansible from outside EC2 (from your laptop for example) – and this is not the most
efficient way to manage EC2.
If you are running Ansible from within EC2, internal DNS names and IP addresses may make more sense than public
DNS names. In this case, you can modify the destination_variable in ec2.ini to be the private DNS name
of an instance. This is particularly important when running Ansible within a private subnet inside a VPC, where the
only way to access an instance is via its private IP address. For VPC instances, vpc_destination_variable in ec2.ini
provides a means of using whichever boto.ec2.instance variable makes the most sense for your use case.
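For example, one common choice when running inside a VPC (a sketch):

# ec2.ini (excerpt)
vpc_destination_variable = private_ip_address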
The EC2 external inventory provides mappings to instances from several groups:
Instance ID These are groups of one since instance IDs are unique. e.g. i-00112233 i-a1b1c1d1
Region A group of all instances in an AWS region. e.g. us-east-1 us-west-2
Availability Zone A group of all instances in an availability zone. e.g. us-east-1a us-east-1b
Security Group Instances belong to one or more security groups. A group is created for each security group, with
all characters except alphanumerics and dashes (-) converted to underscores (_). Each group is prefixed by
security_group_ e.g. security_group_default security_group_webservers
security_group_Pete_s_Fancy_Group
Tags Each instance can have a variety of key/value pairs associated with it called Tags. The most common
tag key is ‘Name’, though anything is possible. Each key/value pair is its own group of instances, again
with special characters converted to underscores, in the format tag_KEY_VALUE e.g. tag_Name_Web
tag_Name_redis-master-001 tag_aws_cloudformation_logical-id_WebServerGroup
When Ansible is interacting with a specific server, the EC2 inventory script is called again with the --host
HOST option. This looks up the HOST in the index cache to get the instance ID, and then makes an API call to AWS
to get information about that specific instance. It then makes information about that instance available as variables to
your playbooks. Each variable is prefixed by ec2_. Here are some of the variables available:
• ec2_architecture
• ec2_description
• ec2_dns_name
• ec2_id
• ec2_image_id
• ec2_instance_type
• ec2_ip_address
• ec2_kernel
• ec2_key_name
• ec2_launch_time
• ec2_monitored
• ec2_ownerId
• ec2_placement
• ec2_platform
• ec2_previous_state
• ec2_private_dns_name
• ec2_private_ip_address
• ec2_public_dns_name
• ec2_ramdisk
• ec2_region
• ec2_root_device_name
• ec2_root_device_type
• ec2_security_group_ids
• ec2_security_group_names
• ec2_spot_instance_request_id
• ec2_state
• ec2_state_code
• ec2_state_reason
• ec2_status
• ec2_subnet_id
• ec2_tag_Name
• ec2_tenancy
• ec2_virtualization_type
• ec2_vpc_id
Both ec2_security_group_ids and ec2_security_group_names are comma-separated lists of all security
groups. Each EC2 tag is a variable in the format ec2_tag_KEY.
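Since these are ordinary inventory groups, they can be used directly as patterns. For example, with a hypothetical Name tag of 'Web':

$ ansible -i ec2.py tag_Name_Web -m ping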
To see the complete list of variables available for an instance, run the script by itself:
cd plugins/inventory
./ec2.py --host ec2-12-12-12-12.compute-1.amazonaws.com
Note that the AWS inventory script will cache results to avoid repeated API calls, and this cache setting is configurable
in ec2.ini. To explicitly clear the cache, you can run the ec2.py script with the --refresh-cache parameter.
Other inventory scripts

In addition to Cobbler and EC2, inventory scripts are also available for:
• BSD Jails
• Digital Ocean
• Linode
• OpenShift
• OpenStack Nova
• Red Hat's SpaceWalk
• Vagrant (not to be confused with the provisioner in vagrant, which is preferred)
• Zabbix
Sections on how to use these in more detail will be added over time, but by looking at the “plugins/” directory of the
Ansible checkout it should be very obvious how to use them. The process for the AWS inventory script is the same.
If you develop an interesting inventory script that might be general purpose, please submit a pull request – we’d likely
be glad to include it in the project.
Using Multiple Inventory Sources

If the location given to -i in Ansible is a directory (or as so configured in ansible.cfg), Ansible can use multiple
inventory sources at the same time. When doing so, it is possible to mix both dynamic and statically managed inventory
sources in the same ansible run. Instant hybrid cloud!
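For example (paths are illustrative), a directory mixing a static inventory file with the EC2 script:

inventory/
hosts # static INI-format inventory
ec2.py # dynamic inventory script (must be executable)

$ ansible -i inventory/ all -m ping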
See also:
Inventory All about static inventory files
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net #ansible IRC chat channel
1.1.5 Patterns
Topics
• Patterns
Patterns in Ansible are how we decide which hosts to manage. This can mean what hosts to communicate with, but in
terms of Playbooks it actually means what hosts to apply a particular configuration or IT process to.
We'll go over how to use the command line in the Introduction To Ad-Hoc Commands section; basically, it looks
like this:
ansible <pattern_goes_here> -m <module_name> -a <arguments>
Such as:
ansible webservers -m service -a "name=httpd state=restarted"
A pattern usually refers to a set of groups (which are sets of hosts) – in the above case, machines in the “webservers”
group.
Anyway, to use Ansible, you’ll first need to know how to tell Ansible which hosts in your inventory to talk to. This is
done by designating particular host names or groups of hosts.
The following patterns are equivalent and target all hosts in the inventory:
all
*
The following patterns address one or more groups. Groups separated by a colon indicate an “OR” configuration. This
means the host may be in either one group or the other:
webservers
webservers:dbservers
You can exclude groups as well, for instance, all machines must be in the group webservers but not in the group
phoenix:
webservers:!phoenix
You can also specify the intersection of two groups. This would mean the hosts must be in the group webservers and
the host must also be in the group staging:
webservers:&staging
You can do combination patterns as well:

webservers:dbservers:&staging:!phoenix

The above configuration means "all machines in the groups 'webservers' and 'dbservers' are to be managed if they are
in the group 'staging' also, but the machines are not to be managed if they are in the group 'phoenix'" ... whew!
You can also use variables if you want to pass some group specifiers via the “-e” argument to ansible-playbook, but
this is uncommonly used:
webservers:!{{excluded}}:&{{required}}
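For example, supplying those variables on the command line (the group names are illustrative):

$ ansible-playbook site.yml -e "excluded=phoenix required=staging"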
You also don’t have to manage by strictly defined groups. Individual host names, IPs and groups, can also be referenced
using wildcards:
*.example.com
*.com
It’s also ok to mix wildcard patterns and groups at the same time:
one*.com:dbservers
Most people don’t specify patterns as regular expressions, but you can. Just start the pattern with a ‘~’:
~(web|db).*\.example\.com
While we're jumping a bit ahead, additionally, you can add an exclusion criterion just by supplying the --limit flag
to /usr/bin/ansible or /usr/bin/ansible-playbook:
ansible-playbook site.yml --limit datacenter2
Easy enough. See Introduction To Ad-Hoc Commands and then Playbooks for how to apply this knowledge.
See also:
Introduction To Ad-Hoc Commands Examples of basic commands
Playbooks Learning ansible’s configuration management language
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net #ansible IRC chat channel
1.1.6 Introduction To Ad-Hoc Commands

Topics
• Introduction To Ad-Hoc Commands
– Parallelism and Shell Commands
– File Transfer
– Managing Packages
– Users and Groups
– Deploying From Source Control
– Managing Services
– Time Limited Background Operations
– Gathering Facts
The following examples show how to use /usr/bin/ansible for running ad hoc tasks.
What’s an ad-hoc command?
An ad-hoc command is something that you might type in to do something really quick, but don’t want to save for later.
This is a good place to start to understand the basics of what Ansible can do prior to learning the playbooks language
– ad-hoc commands can also be used to do quick things that you might not necessarily want to write a full playbook
for.
Generally speaking, the true power of Ansible lies in playbooks. Why would you use ad-hoc tasks versus playbooks?
For instance, if you wanted to power off all of your lab for Christmas vacation, you could execute a quick one-liner in
Ansible without writing a playbook.
For configuration management and deployments, though, you’ll want to pick up on using ‘/usr/bin/ansible-playbook’
– the concepts you will learn here will port over directly to the playbook language.
(See Playbooks for more information about those)
If you haven’t read Inventory already, please look that over a bit first and then we’ll get going.
Parallelism and Shell Commands

An arbitrary example:
Let’s use Ansible’s command line tool to reboot all web servers in Atlanta, 10 at a time. First, let’s set up SSH-agent
so it can remember our credentials:
$ ssh-agent bash
$ ssh-add ~/.ssh/id_rsa
If you don't want to use ssh-agent and want to SSH with a password instead of keys, you can with
--ask-pass (-k), but it's much better to just use ssh-agent.
Now to run the command on all servers in a group, in this case, atlanta, in 10 parallel forks:
$ ansible atlanta -a "/sbin/reboot" -f 10
/usr/bin/ansible will default to running from your user account. If you do not like this behavior, pass in “-u username”.
If you want to run commands as a different user, it looks like this:
$ ansible atlanta -a "/usr/bin/foo" -u username
Often you’ll not want to just do things from your user account. If you want to run commands through sudo:
$ ansible atlanta -a "/usr/bin/foo" -u username --sudo [--ask-sudo-pass]
Use --ask-sudo-pass (-K) if you are not using passwordless sudo. This will interactively prompt you for the
password to use. Use of passwordless sudo makes things easier to automate, but it’s not required.
It is also possible to sudo to a user other than root using --sudo-user (-U):
$ ansible atlanta -a "/usr/bin/foo" -u username -U otheruser [--ask-sudo-pass]
Note: Rarely, some users have security rules where they constrain their sudo environment to running specific com-
mand paths only. This does not work with ansible’s no-bootstrapping philosophy and hundreds of different modules. If
doing this, use Ansible from a special account that does not have this constraint. One way of doing this without sharing
access to unauthorized users would be gating Ansible with Ansible Tower, which can hold on to an SSH credential and
let members of certain organizations use it on their behalf without having direct access.
Ok, so those are basics. If you didn’t read about patterns and groups yet, go back and read Patterns.
The -f 10 in the above specifies the use of 10 simultaneous processes. You can also set this in The Ansible
Configuration File to avoid setting it again. The default is actually 5, which is really small and conservative. You are
probably going to want to talk to a lot more simultaneous hosts so feel free to crank this up. If you have more hosts
than the value set for the fork count, Ansible will talk to them, but it will take a little longer. Feel free to push this
value as high as your system can handle!
You can also select what Ansible "module" you want to run. Normally commands also take a -m for module name,
but the default module name is 'command', so we didn't need to specify that all of the time. We'll use -m in later
examples to run some other modules (see About Modules).
Note: The command module does not support shell variables and things like piping. If we want to execute a module
using a shell, use the 'shell' module instead. Read more about the differences on the About Modules page.

Using the shell module looks like this:
$ ansible raleigh -m shell -a 'echo $TERM'
When running any command with the Ansible ad hoc CLI (as opposed to Playbooks), pay particular attention to shell
quoting rules, so the local shell doesn’t eat a variable before it gets passed to Ansible. For example, using double vs
single quotes in the above example would evaluate the variable on the box you were on.
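To make the distinction concrete:

# single quotes: $TERM survives the local shell and is expanded on the remote host
$ ansible raleigh -m shell -a 'echo $TERM'
# double quotes: the local shell expands $TERM before Ansible ever sees it
$ ansible raleigh -m shell -a "echo $TERM"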
So far we’ve been demoing simple command execution, but most Ansible modules usually do not work like simple
scripts. They make the remote system look like you state, and run the commands necessary to get it there. This is
commonly referred to as ‘idempotence’, and is a core design goal of Ansible. However, we also recognize that running
arbitrary commands is equally important, so Ansible easily supports both.
File Transfer
Here’s another use case for the /usr/bin/ansible command line. Ansible can SCP lots of files to multiple machines in
parallel.
To transfer a file directly to many different servers:
$ ansible atlanta -m copy -a "src=/etc/hosts dest=/tmp/hosts"
If you use playbooks, you can also take advantage of the template module, which takes this another step further.
(See module and playbook documentation).
The file module allows changing ownership and permissions on files. These same options can be passed directly to
the copy module as well:
$ ansible webservers -m file -a "dest=/srv/foo/a.txt mode=600"
$ ansible webservers -m file -a "dest=/srv/foo/b.txt mode=600 owner=mdehaan group=mdehaan"
The file module can also create directories, similar to mkdir -p:
$ ansible webservers -m file -a "dest=/path/to/c mode=755 owner=mdehaan group=mdehaan state=directory"
Managing Packages
There are modules available for yum and apt. Here are some examples with yum.
Ensure a package is installed, but don’t update it:
$ ansible webservers -m yum -a "name=acme state=installed"
Ansible has modules for managing packages under many platforms. If your package manager does not have a module
available for it, you can install packages using the command module or (better!) contribute a module for other
package managers. Stop by the mailing list for info/details.
Users and Groups

The 'user' module allows easy creation and manipulation of existing user accounts, as well as removal of user accounts
that may exist:

$ ansible all -m user -a "name=foo password=<crypted password here>"
$ ansible all -m user -a "name=foo state=absent"
See the About Modules section for details on all of the available options, including how to manipulate groups and
group membership.
Deploying From Source Control

Deploy your webapp straight from git:

$ ansible webservers -m git -a "repo=git://foo.example.org/repo.git dest=/srv/myapp version=HEAD"

Since Ansible modules can notify change handlers it is possible to tell Ansible to run specific tasks when the code is
updated, such as deploying Perl/Python/PHP/Ruby directly from git and then restarting apache.

Managing Services

Ensure a service is started on all webservers:

$ ansible webservers -m service -a "name=httpd state=started"

Time Limited Background Operations

Long running operations can be backgrounded, and their status can be checked on later. The same job ID is given to
the same task on all hosts, so you won't lose track. If you kick hosts and don't want to poll, it looks like this:

$ ansible all -B 3600 -a "/usr/bin/long_running_operation --do-stuff"

If you do decide you want to check on the job status later, you can:

$ ansible all -m async_status -a "jid=123456789"

Polling is built-in and looks like this:

$ ansible all -B 1800 -P 60 -a "/usr/bin/long_running_operation --do-stuff"

The above example says "run for 30 minutes max (-B: 30*60=1800), poll for status (-P) every 60 seconds".
Poll mode is smart so all jobs will be started before polling will begin on any machine. Be sure to use a high enough
--forks value if you want to get all of your jobs started very quickly. After the time limit (in seconds) runs out (-B),
the process on the remote nodes will be terminated.
Typically you'll only be backgrounding long-running shell commands or software upgrades. Backgrounding
the copy module does not do a background file transfer. Playbooks also support polling, and have a simplified syntax
for this.
Gathering Facts
Facts are described in the playbooks section and represent discovered variables about a system. These can be used to
implement conditional execution of tasks but also just to get ad-hoc information about your system. You can see all
facts via:
$ ansible all -m setup
It's also possible to filter this output to just export certain facts; see the "setup" module documentation for details.
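For example, something like the following (the pattern is illustrative) uses the setup module's filter parameter to narrow the output:

$ ansible all -m setup -a "filter=ansible_distribution*"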
Read more about facts at Variables once you’re ready to read up on Playbooks.
See also:
The Ansible Configuration File All about the Ansible config file
About Modules A list of available modules
Playbooks Using Ansible for configuration management & deployment
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net #ansible IRC chat channel
1.1.7 The Ansible Configuration File

Topics
• The Ansible Configuration File
– Getting the latest configuration
– Environmental configuration
– Explanation of values by section
* General defaults
· action_plugins
· ansible_managed
· ask_pass
· ask_sudo_pass
· callback_plugins
· connection_plugins
· deprecation_warnings
· display_skipped_hosts
· error_on_undefined_vars
· executable
· filter_plugins
· forks
· hash_behaviour
· hostfile
· host_key_checking
· jinja2_extensions
· legacy_playbook_variables
· library
· log_path
· lookup_plugins
· module_name
· nocolor
· nocows
· pattern
· poll_interval
· private_key_file
· remote_port
· remote_tmp
· remote_user
· roles_path
· sudo_exe
· sudo_flags
· sudo_user
· timeout
· transport
· vars_plugins
* Paramiko Specific Settings
· record_host_keys
* OpenSSH Specific Settings
· ssh_args
· control_path
· scp_if_ssh
· pipelining
* Accelerate Mode Settings
· accelerate_port
· accelerate_timeout
· accelerate_connect_timeout
Certain settings in Ansible are adjustable via a configuration file. The stock configuration should be sufficient for most
users, but there may be reasons you would want to change them.
Changes can be made and used in a configuration file which will be processed in the following order:
* ANSIBLE_CONFIG (an environment variable)
* ansible.cfg (in the current directory)
* .ansible.cfg (in the home directory)
* /etc/ansible/ansible.cfg
Ansible will process the above list and use the first file found. Settings in files are not merged together.
Getting the latest configuration

If installing ansible from a package manager, the latest ansible.cfg should be present in /etc/ansible, possibly as a
".rpmnew" file (or other) as appropriate in the case of updates.
If you have installed from pip or from source, however, you may want to create this file in order to override default
settings in Ansible.
You may wish to consult the ansible.cfg in source control for all of the possible latest values.
Environmental configuration
Ansible also allows configuration of settings via environment variables. If these environment variables are set, they
will override any setting loaded from the configuration file. For brevity, these variables are not defined here; look
in 'constants.py' in the source tree if you want to use them. They are mostly considered to be a legacy system as
compared to the config file, but are equally valid.
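For example, assuming the ANSIBLE_FORKS variable as defined in constants.py, raising the default fork count for one shell session might look like:

$ export ANSIBLE_FORKS=25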
Explanation of values by section

The configuration file is broken up into sections. Most options are in the "general" section but some sections of the
file are specific to certain connection types.
General defaults
action_plugins Actions are pieces of code in ansible that enable things like module execution, templating, and so
forth.
This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from different loca-
tions:
action_plugins = /usr/share/ansible_plugins/action_plugins
Most users will not need to use this feature. See Developing Plugins for more details.
1.1. Introduction 23
Ansible Documentation, Release 1.5
ansible_managed Ansible-managed is a string that can be inserted into files written by Ansible’s config templating
system, if you use a string like:
{{ ansible_managed }}
This is useful to tell users that a file has been placed by Ansible and manual changes are likely to be overwritten.
Note that if using this feature, and there is a date in the string, the template will be reported changed each time as the
date is updated.
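As a sketch, a string without a datestamp avoids that problem:

ansible_managed = Ansible managed: {file} on {host}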
ask_pass This controls whether an Ansible playbook should prompt for a password by default. The default behavior
is no:
#ask_pass=True
If using SSH keys for authentication, it’s probably not needed to change this setting.
ask_sudo_pass Similar to ask_pass, this controls whether an Ansible playbook should prompt for a sudo password
by default when sudoing. The default behavior is also no:
#ask_sudo_pass=True
Users on platforms where sudo passwords are enabled should consider changing this setting.
callback_plugins This is a developer-centric feature that allows low-level extensions around Ansible to be loaded
from different locations:
callback_plugins = /usr/share/ansible_plugins/callback_plugins
Most users will not need to use this feature. See Developing Plugins for more details
connection_plugins This is a developer-centric feature that allows low-level extensions around Ansible to be loaded
from different locations:
connection_plugins = /usr/share/ansible_plugins/connection_plugins
Most users will not need to use this feature. See Developing Plugins for more details
deprecation_warnings Allows disabling of deprecation warnings in ansible-playbook output:

deprecation_warnings = True

Deprecation warnings indicate usage of legacy features that are slated for removal in a future release of Ansible.
display_skipped_hosts If set to False, ansible will not display any status for a task that is skipped. The default
behavior is to display skipped tasks:
#display_skipped_hosts=True
Note that Ansible will always show the task header for any task, regardless of whether or not the task is skipped.
error_on_undefined_vars On by default since Ansible 1.3, this causes ansible to fail steps that reference variable
names that are likely typoed:
#error_on_undefined_vars=True
If set to False, any ‘{{ template_expression }}’ that contains undefined variables will be rendered in a template or
ansible action line exactly as written.
executable This indicates the command to use to spawn a shell under a sudo environment. Users may need to change
this to /bin/bash in rare instances when sudo is constrained, but in most cases it may be left as is:
#executable = /bin/bash
filter_plugins This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from
different locations:
filter_plugins = /usr/share/ansible_plugins/filter_plugins
Most users will not need to use this feature. See Developing Plugins for more details
forks This is the default number of parallel processes to spawn when communicating with remote hosts. Since
Ansible 1.3, the fork number is automatically limited to the number of possible hosts, so this is really a limit of how
much network and CPU load you think you can handle. Many users may set this to 50, some set it to 500 or more.
If you have a large number of hosts, higher values will make actions across all of those hosts complete faster. The
default is very very conservative:
forks=5
hash_behaviour Ansible by default will override variables in specific precedence orders, as described in Variables.
When a variable of higher precedence wins, it will replace the other value.
Some users prefer that variables that are hashes (aka ‘dictionaries’ in Python terms) are merged together. This setting
is called ‘merge’. This is not the default behavior and it does not affect variables whose values are scalars (integers,
strings) or arrays. We generally recommend not using this setting unless you think you have an absolute need for it,
and playbooks in the official examples repos do not use this setting:
#hash_behaviour=replace
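To illustrate the difference, suppose a group variable and a host variable both define the hash 'app' (a hypothetical example):

# group_vars: app: {port: 80}
# host_vars: app: {name: myapp}
#
# hash_behaviour=replace (the default): app == {name: myapp}
# hash_behaviour=merge: app == {port: 80, name: myapp}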
hostfile This is the default location of the inventory file, script, or directory that Ansible will use to determine what
hosts it has available to talk to:
hostfile = /etc/ansible/hosts
host_key_checking As described in Getting Started, host key checking is on by default in Ansible 1.3 and later. If
you understand the implications and wish to disable it, you may do so here by setting the value to False:
host_key_checking=False
jinja2_extensions This is a developer-specific feature that allows enabling additional Jinja2 extensions:
jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n
If you do not know what these do, you probably don’t need to change this setting :)
legacy_playbook_variables Ansible prefers to use Jinja2 syntax ‘{{ like_this }}’ to indicate a variable should be
substituted in a particular string. However, older versions of playbooks used a more Perl-style syntax. This syntax was
undesirable as it frequently conflicted with bash and was hard to explain to new users when referencing complicated
variable hierarchies, so we have standardized on the ‘{{ jinja2 }}’ way.
To ensure a string like '$foo' is not inadvertently replaced in a Perl or Bash script template, the old form of templating
(which is still enabled as of Ansible 1.4) can be disabled like so:

legacy_playbook_variables = no
library This is the default location Ansible looks to find modules:

library = /usr/share/ansible

Ansible knows how to look in multiple locations if you feed it a colon separated path, and it also will look for modules
in the "./library" directory alongside a playbook.
log_path If present and configured in ansible.cfg, Ansible will log information about executions at the designated
location. Be sure the user running Ansible has permissions on the logfile:
log_path=/var/log/ansible.log
This behavior is not on by default. Note that ansible will, without this setting, record module arguments in the
syslog of managed machines. Password arguments are excluded.
For Enterprise users seeking more detailed logging history, you may be interested in Ansible Tower.
lookup_plugins This is a developer-centric feature that allows low-level extensions around Ansible to be loaded
from different locations:
lookup_plugins = /usr/share/ansible_plugins/lookup_plugins
Most users will not need to use this feature. See Developing Plugins for more details
module_name This is the default module name (-m) value for /usr/bin/ansible. The default is the ‘command’ mod-
ule. Remember the command module doesn’t support shell variables, pipes, or quotes, so you might wish to change it
to ‘shell’:
module_name = command
nocolor By default ansible will try to colorize output to give a better indication of failure and status information. If
you dislike this behavior you can turn it off by setting ‘nocolor’ to 1:
nocolor=0
nocows By default ansible will take advantage of cowsay if installed to make /usr/bin/ansible-playbook runs more
exciting. Why? We believe systems management should be a happy experience. If you do not like the cows, you can
disable them by setting ‘nocows’ to 1:
nocows=0
pattern This is the default group of hosts to talk to in a playbook if no “hosts:” stanza is supplied. The default is to
talk to all hosts. You may wish to change this to protect yourself from surprises:
pattern = *
Note that /usr/bin/ansible always requires a host pattern and does not use this setting, only /usr/bin/ansible-playbook.
poll_interval For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling), this is how often
to check back on the status of those tasks when an explicit poll interval is not supplied. The default is a reasonably
moderate 15 seconds which is a tradeoff between checking in frequently and providing a quick turnaround when
something may have completed:
poll_interval=15
private_key_file If you are using a pem file to authenticate with machines rather than SSH agent or passwords, you
can set the default value here to avoid re-specifying --private-key with every invocation:
private_key_file=/path/to/file.pem
remote_port This sets the default SSH port on all of your systems, for systems that didn’t specify an alternative
value in inventory. The default is the standard 22:
remote_port = 22
remote_tmp Ansible works by transferring modules to your remote machines, running them, and then cleaning up
after itself. In some cases, you may not wish to use the default location and would like to change the path. You can do
so by altering this setting:
remote_tmp = $HOME/.ansible/tmp
The default is to use a subdirectory of the user’s home directory. Ansible will then choose a random directory name
inside this location.
remote_user This is the default username ansible will connect as for /usr/bin/ansible-playbook. Note that
/usr/bin/ansible will always default to the current user:
remote_user = root
roles_path The roles path indicates additional directories beyond the 'roles/' subdirectory of a playbook project to
search to find Ansible roles. For instance, if there was a source control repository of common roles and a different
repository of playbooks, you might choose to establish a convention to checkout roles in /opt/mysite/roles like so:
roles_path = /opt/mysite/roles
Roles will be first searched for in the playbook directory. Should a role not be found, it will indicate all the possible
paths that were searched.
sudo_exe If using an alternative sudo implementation on remote machines, the path to sudo can be replaced here,
provided the sudo implementation matches CLI flags with the standard sudo:
sudo_exe=sudo
sudo_flags Additional flags to pass to sudo when engaging sudo support. The default is '-H' which preserves the
environment of the original user. In some situations you may wish to add or remove flags, but in general most users
will not need to change this setting:
sudo_flags=-H
sudo_user This is the default user to sudo to if --sudo-user is not specified or ‘sudo_user’ is not specified in an
Ansible playbook. The default is the most logical: ‘root’:
sudo_user=root
transport This is the default transport to use if “-c <transport_name>” is not specified to /usr/bin/ansible or
/usr/bin/ansible-playbook. The default is ‘smart’, which will use ‘ssh’ (OpenSSH based) if the local operating system
is new enough to support ControlPersist technology, and then will otherwise use ‘paramiko’. Other transport options
include ‘local’, ‘chroot’, ‘jail’, and so on.
Users should usually leave this setting as ‘smart’ and let their playbooks choose an alternate setting when needed with
the ‘connection:’ play parameter.
vars_plugins This is a developer-centric feature that allows low-level extensions around Ansible to be loaded from
different locations:
vars_plugins = /usr/share/ansible_plugins/vars_plugins
Most users will not need to use this feature. See Developing Plugins for more details.
Paramiko Specific Settings
Paramiko is the default SSH connection implementation on Enterprise Linux 6 or earlier, and is not used by default on
other platforms. Settings live under the [paramiko] header.
record_host_keys The default setting of yes will record newly discovered and approved (if host key checking is
enabled) hosts in the user's known hosts file. This setting may be inefficient for large numbers of hosts, and in those
situations, using the ssh transport is definitely recommended instead. Setting it to False will improve performance and is
recommended when host key checking is disabled:
record_host_keys=True
OpenSSH Specific Settings
Under the [ssh_connection] header, the following settings are tunable for SSH connections. OpenSSH is the default
connection type for Ansible on OSes that are new enough to support ControlPersist (this means basically all operating
systems except Enterprise Linux 6 or earlier).
ssh_args If set, this will pass a specific set of options to OpenSSH rather than Ansible's usual defaults:
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
In particular, users may wish to raise the ControlPersist time to improve performance. A value of 30 minutes may
be appropriate.
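For instance, assuming your OpenSSH build accepts time-unit suffixes for ControlPersist, a 30-minute setting might look like this sketch:
ssh_args = -o ControlMaster=auto -o ControlPersist=30m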
control_path This is the location to save ControlPath sockets. This defaults to:
control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r
On some systems with very long hostnames or very long path names (caused by long user names or deeply nested
home directories) this can exceed the character limit on file socket names (108 characters for most platforms). In that
case, you may wish to shorten the string to something like the below:
control_path = %(directory)s/%%h-%%r
Ansible 1.4 and later will instruct users to run with -vvvv when it hits this problem, which makes it easy to tell
that the ControlPath filename has become too long. This may be frequently encountered on EC2.
scp_if_ssh Occasionally users may be managing a remote system that doesn't have SFTP enabled. If set to True, scp
will be used to transfer remote files instead:
scp_if_ssh=False
There's really no reason to change this unless problems are encountered, and then there's also no real drawback to
flipping the switch. Most environments support SFTP by default and this doesn't usually need to be changed.
pipelining Enabling pipelining reduces the number of SSH operations required to execute a module on the remote
server by executing many Ansible modules without an actual file transfer. This can result in a very significant performance
improvement when enabled; however, when using "sudo:" operations you must first disable 'requiretty' in /etc/sudoers
on all managed hosts.
By default, this option is disabled to preserve compatibility with sudoers configurations that have requiretty (the default
on many distros), but enabling it is highly recommended if you can, as it eliminates the need for Accelerated Mode:
pipelining=False
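If your environment allows it, this is a two-part change, sketched below: one line in ansible.cfg on the control machine, and relaxing requiretty in sudoers on the managed hosts (edit sudoers with visudo; the syntax assumes a standard sudo):
# ansible.cfg on the control machine
[ssh_connection]
pipelining = True

# /etc/sudoers on each managed host
Defaults !requiretty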
Under the [accelerate] header, the following settings are tunable for Accelerated Mode. Acceleration is a useful
performance feature to use if you cannot enable pipelining in your environment, but is probably not needed if you can.
accelerate_connect_timeout This setting controls the timeout (in seconds) for the socket connect call used by the
accelerate daemon:
accelerate_connect_timeout = 1.0
Note, this value can be set to less than one second, however it is probably not a good idea to do so unless you're on
a very fast and reliable LAN. If you're connecting to systems over the internet, it may be necessary to increase this
timeout.
1.2 Quickstart Video
We've recorded a short video that shows how to get started with Ansible that you may like to use alongside the
documentation.
The quickstart video is about 20 minutes long and will show you some of the basics about your first steps with Ansible.
Enjoy, and be sure to visit the rest of the documentation to learn more.
1.3 Playbooks
Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want
your remote systems to enforce, or a set of steps in a general IT process.
If Ansible modules are the tools in your workshop, playbooks are your design plans.
At a basic level, playbooks can be used to manage configurations of and deployments to remote machines. At a more
advanced level, they can sequence multi-tier rollouts involving rolling updates, and can delegate actions to other hosts,
interacting with monitoring servers and load balancers along the way.
While there’s a lot of information here, there’s no need to learn everything at once. You can start small and pick up
more features over time as you need them.
Playbooks are designed to be human-readable and are developed in a basic text language. There are multiple ways to
organize playbooks and the files they include, and we’ll offer up some suggestions on that and making the most out of
Ansible.
It is recommended to look at Example Playbooks while reading along with the playbook documentation. These
illustrate best practices as well as how to put many of the various concepts together.
1.3.1 Intro to Playbooks
About Playbooks
Playbooks are a completely different way to use Ansible than in ad-hoc task execution mode, and are particularly
powerful.
Simply put, playbooks are the basis for a really simple configuration management and multi-machine deployment
system, unlike any that already exist, and one that is very well suited to deploying complex applications.
Playbooks can declare configurations, but they can also orchestrate steps of any manually ordered process, even if
different steps must bounce back and forth between sets of machines in particular orders. They can launch tasks
synchronously or asynchronously.
While you might run the main /usr/bin/ansible program for ad-hoc tasks, playbooks are more likely to be kept in source
control and used to push out your configuration or assure the configurations of your remote systems are in spec.
There are also some full sets of playbooks illustrating a lot of these techniques in the ansible-examples repository.
We’d recommend looking at these in another tab as you go along.
There are also many jumping off points after you learn playbooks, so hop back to the documentation index after you’re
done with this section.
Playbooks are expressed in YAML format (see YAML Syntax) and have a minimum of syntax, which intentionally tries
to not be a programming language or script, but rather a model of a configuration or a process.
Each playbook is composed of one or more ‘plays’ in a list.
The goal of a play is to map a group of hosts to some well defined roles, represented by things ansible calls tasks. At
a basic level, a task is nothing more than a call to an ansible module, which you should have learned about in earlier
chapters.
By composing a playbook of multiple ‘plays’, it is possible to orchestrate multi-machine deployments, running certain
steps on all machines in the webservers group, then certain steps on the database server group, then more commands
back on the webservers group, etc.
"Plays" are more or less a sports analogy. You can have quite a lot of plays that affect your systems to do different
things. It's not as if you were defining just one particular state or model, and you can run different plays at different
times.
For starters, here’s a playbook that contains just one play:
---
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
    - name: ensure apache is at the latest version
      yum: pkg=httpd state=latest
    - name: write the apache config file
      template: src=/https/www.scribd.com/srv/httpd.j2 dest=/etc/httpd.conf
      notify:
        - restart apache
    - name: ensure apache is running
      service: name=httpd state=started
  handlers:
    - name: restart apache
      service: name=httpd state=restarted
Below, we’ll break down what the various features of the playbook language are.
Basics
Hosts and Users
For each play in a playbook, you get to choose which machines in your infrastructure to target and what remote user
to complete the steps (called tasks) as.
The hosts line is a list of one or more groups or host patterns, separated by colons, as described in the Patterns
documentation. The remote_user is just the name of the user account:
---
- hosts: webservers
  remote_user: root
Note: The remote_user parameter was formerly called just user. It was renamed in Ansible 1.4 to make it more
distinguishable from the user module (used to create users on remote systems).
You can also use sudo on a particular task instead of the whole play:
---
- hosts: webservers
  remote_user: yourname
  tasks:
    - service: name=nginx state=started
      sudo: yes
You can also log in as yourself, and then sudo to users other than root:
---
- hosts: webservers
  remote_user: yourname
  sudo: yes
  sudo_user: postgres
If you need to specify a password to sudo, run ansible-playbook with --ask-sudo-pass (-K). If you run a sudo
playbook and the playbook seems to hang, it’s probably stuck at the sudo prompt. Just Control-C to kill it and run it
again with -K.
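For example, assuming the play above needs a sudo password, the invocation would be:
ansible-playbook playbook.yml --ask-sudo-pass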
Important: When using sudo_user to a user other than root, the module arguments are briefly written into a random
tempfile in /tmp. These are deleted immediately after the command is executed. This only occurs when sudoing from
a user like 'bob' to 'timmy', not when going from 'bob' to 'root', or logging in directly as 'bob' or 'root'. If it
concerns you that this data is briefly readable (not writable), avoid transferring unencrypted passwords with sudo_user
set. In other cases, '/tmp' is not used and this does not come into play. Ansible also takes care to not log password
parameters.
Tasks list
Each play contains a list of tasks. Tasks are executed in order, one at a time, against all machines matched by the host
pattern, before moving on to the next task. It is important to understand that, within a play, all hosts are going to get
the same task directives. It is the purpose of a play to map a selection of hosts to tasks.
When running the playbook, which runs top to bottom, hosts with failed tasks are taken out of the rotation for the
entire playbook. If things fail, simply correct the playbook file and rerun.
The goal of each task is to execute a module, with very specific arguments. Variables, as mentioned above, can be
used in arguments to modules.
Modules are ‘idempotent’, meaning if you run them again, they will make only the changes they must in order to bring
the system to the desired state. This makes it very safe to rerun the same playbook multiple times. They won’t change
things unless they have to change things.
The command and shell modules will typically rerun the same command again, which is totally ok if the command is
something like 'chmod' or 'setsebool', etc., though there is a 'creates' flag available which can be used to make these
modules also idempotent, as sketched below.
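For instance, given a hypothetical setup script, passing creates= tells the command module to skip the task once the named file exists (the paths here are illustrative):
tasks:
  - name: initialize the database only once
    command: /usr/local/bin/make_database.sh creates=/opt/mydb/initialized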
Every task should have a name, which is included in the output from running the playbook. This is output for humans,
so it is nice to have reasonably good descriptions of each task step. If the name is not provided though, the string fed
to ‘action’ will be used for output.
Tasks can be declared using the legacy “action: module options” format, but it is recommended that you use the more
conventional “module: options” format. This recommended format is used throughout the documentation, but you
may encounter the older format in some playbooks.
Here is what a basic task looks like; as with most modules, the service module takes key=value arguments:
tasks:
  - name: make sure apache is running
    service: name=httpd state=running
The command and shell modules are the only modules that just take a list of arguments and don’t use the key=value
form. This makes them work as simply as you would expect:
tasks:
  - name: disable selinux
    command: /sbin/setenforce 0
The command and shell modules care about return codes, so if you have a command whose successful exit code is not
zero, you may wish to do this:
tasks:
  - name: run this command and ignore the result
    shell: /usr/bin/somecommand || /bin/true
Or this:
tasks:
  - name: run this command and ignore the result
    shell: /usr/bin/somecommand
    ignore_errors: True
If the action line is getting too long for comfort you can break it on a space and indent any continuation lines:
tasks:
  - name: Copy ansible inventory file to client
    copy: src=/https/www.scribd.com/etc/ansible/hosts dest=/etc/ansible/hosts
          owner=root group=root mode=0644
Variables can be used in action lines. Suppose you defined a variable called 'vhost' in the 'vars' section; you could do
this:
tasks:
  - name: create a virtual host file for {{ vhost }}
    template: src=somefile.j2 dest=/etc/httpd/conf.d/{{ vhost }}
Those same variables are usable in templates, which we’ll get to later.
Now in a very basic playbook all the tasks will be listed directly in that play, though it will usually make more sense
to break up tasks using the ‘include:’ directive. We’ll show that a bit later.
Action Shorthand
Ansible prefers listing modules like this:
template: src=templates/foo.j2 dest=/etc/foo.conf
You will notice that in earlier versions, this was only available as:
action: template src=templates/foo.j2 dest=/etc/foo.conf
The old form continues to work in newer versions without any plan of deprecation.
Handlers: Running Operations On Change
As we've mentioned, modules are written to be 'idempotent' and can relay when they have made a change on the
remote system. Playbooks recognize this and have a basic event system that can be used to respond to change.
These ‘notify’ actions are triggered at the end of each block of tasks in a playbook, and will only be triggered once
even if notified by multiple different tasks.
For instance, multiple resources may indicate that apache needs to be restarted because they have changed a config
file, but apache will only be bounced once to avoid unnecessary restarts.
Here’s an example of restarting two services when the contents of a file change, but only if the file changes:
- name: template configuration file
  template: src=template.j2 dest=/etc/foo.conf
  notify:
    - restart memcached
    - restart apache
The things listed in the ‘notify’ section of a task are called handlers.
Handlers are lists of tasks, not really any different from regular tasks, that are referenced by name. Handlers are what
notifiers notify. If nothing notifies a handler, it will not run. Regardless of how many things notify a handler, it will
run only once, after all of the tasks complete in a particular play.
Here’s an example handlers section:
handlers:
  - name: restart memcached
    service: name=memcached state=restarted
  - name: restart apache
    service: name=apache state=restarted
Handlers are best used to restart services and trigger reboots. You probably won’t need them for much else.
Roles are described later on. It’s worthwhile to point out that handlers are automatically processed between ‘pre_tasks’,
‘roles’, ‘tasks’, and ‘post_tasks’ sections. If you ever want to flush all the handler commands immediately though, in
1.2 and later, you can:
tasks:
  - shell: some tasks go here
  - meta: flush_handlers
  - shell: some other tasks
In the above example any queued up handlers would be processed early when the ‘meta’ statement was reached. This
is a bit of a niche case but can come in handy from time to time.
Executing A Playbook
Now that you’ve learned playbook syntax, how do you run a playbook? It’s simple. Let’s run a playbook using a
parallelism level of 10:
ansible-playbook playbook.yml -f 10
Ansible-Pull
Should you want to invert the architecture of Ansible, so that nodes check in to a central location, instead of pushing
configuration out to them, you can.
Ansible-pull is a small script that will check out a repo of configuration instructions from git, and then run
ansible-playbook against that content.
Assuming you load balance your checkout location, ansible-pull scales essentially infinitely.
Run ansible-pull --help for details.
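A typical invocation from cron might look something like this sketch (the repository URL is hypothetical; -U names the repo to check out and -d the local checkout directory):
ansible-pull -U git://example.com/ansible-config.git -d /var/lib/ansible/local local.yml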
There's also a clever playbook available that uses Ansible in push mode to configure ansible-pull via a crontab!
Tips and Tricks
Look at the bottom of the playbook execution for a summary of the nodes that were targeted and how they performed.
General failures and fatal “unreachable” communication attempts are kept separate in the counts.
If you ever want to see detailed output from successful modules as well as unsuccessful ones, use the --verbose
flag. This is available in Ansible 0.5 and later.
Ansible playbook output is vastly upgraded if the cowsay package is installed. Try it!
To see what hosts would be affected by a playbook before you run it, you can do this:
ansible-playbook playbook.yml --list-hosts
See also:
YAML Syntax Learn about YAML syntax
Best Practices Various tips about managing playbooks in the real world
Ansible Documentation Hop back to the documentation index for a lot of special topics about playbooks
About Modules Learn about available modules
Developing Modules Learn how to extend Ansible by writing your own modules
Patterns Learn about how to select hosts
Github examples directory Complete end-to-end playbook examples
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
1.3.2 Playbook Roles and Include Statements
Topics
• Playbook Roles and Include Statements
– Introduction
– Task Include Files And Encouraging Reuse
– Roles
– Role Default Variables
– Role Dependencies
– Ansible Galaxy
Introduction
While it is possible to write a playbook in one very large file (and you might start out learning playbooks this way),
eventually you’ll want to reuse files and start to organize things.
At a basic level, including task files allows you to break up bits of configuration policy into smaller files. Task includes
pull in tasks from other files. Since handlers are tasks too, you can also include handler files from the ‘handlers:’
section.
See Playbooks if you need a review of these concepts.
Playbooks can also include plays from other playbook files. When that is done, the plays will be inserted into the
playbook to form a longer list of plays.
When you start to think about it – tasks, handlers, variables, and so on – begin to form larger concepts. You start
to think about modeling what something is, rather than how to make something look like something. It’s no longer
“apply this handful of THINGS to these hosts”, you say “these hosts are dbservers” or “these hosts are webservers”. In
programming, we might call that “encapsulating” how things work. For instance, you can drive a car without knowing
how the engine works.
Roles in Ansible build on the idea of include files and combine them to form clean, reusable abstractions – they allow
you to focus more on the big picture and only dive down into the details when needed.
We’ll start with understanding includes so roles make more sense, but our ultimate goal should be understanding roles
– roles are great and you should use them every time you write playbooks.
See the ansible-examples repository on GitHub for lots of examples of all of this put together. You may wish to have
this open in a separate tab as you dive in.
Task Include Files And Encouraging Reuse
Suppose you want to reuse lists of tasks between plays or playbooks. You can use include files to do this. Use of
included task lists is a great way to define a role that a system is going to fulfill. Remember, the goal of a play in a
playbook is to map a group of systems into multiple roles. Let's see what this looks like...
A task include file simply contains a flat list of tasks, like so:
---
# possibly saved as tasks/foo.yml
- name: placeholder foo
  command: /bin/foo
- name: placeholder bar
  command: /bin/bar
Include directives look like this, and can be mixed in with regular tasks in a playbook:
tasks:
  - include: tasks/foo.yml
You can also pass variables into includes. We call this a ‘parameterized include’.
For instance, if deploying multiple wordpress instances, I could contain all of my wordpress tasks in a single
wordpress.yml file, and use it like so:
tasks:
  - include: wordpress.yml user=timmy
  - include: wordpress.yml user=alice
  - include: wordpress.yml user=bob
If you are running Ansible 1.4 and later, include syntax is streamlined to match roles, and also allows passing list and
dictionary parameters:
tasks:
  - { include: wordpress.yml, user: timmy, ssh_keys: [ 'keys/one.txt', 'keys/two.txt' ] }
Using either syntax, variables passed in can then be used in the included files. We’ve already covered them a bit in
Variables. You can reference them like this:
{{ user }}
(In addition to the explicitly passed-in parameters, all variables from the vars section are also available for use here as
well.)
Starting in 1.0, variables can also be passed to include files using an alternative syntax, which also supports structured
variables:
tasks:
  - include: wordpress.yml
    vars:
      remote_user: timmy
      some_list_variable:
        - alpha
        - beta
        - gamma
Playbooks can include other playbooks too, but that’s mentioned in a later section.
Note: As of 1.0, task include statements can be used at arbitrary depth. They were previously limited to a single
level, so task includes could not include other files containing task includes.
Includes can also be used in the 'handlers' section. For instance, if you want to define how to restart apache, you only
have to do that once for all of your playbooks. You might make a handlers.yml that looks like:
---
# this might be in a file like handlers/handlers.yml
- name: restart apache
  service: name=apache state=restarted
And in your main playbook file, just include it like so, at the bottom of a play:
handlers:
  - include: handlers/handlers.yml
You can mix in includes along with your regular non-included tasks and handlers.
Includes can also be used to import one playbook file into another. This allows you to define a top-level playbook that
is composed of other playbooks.
For example:
- name: this is a play at the top level of a file
  hosts: all
  remote_user: root
  tasks:
    - name: say hi
      tags: foo
      shell: echo "hi..."

- include: load_balancers.yml
- include: webservers.yml
- include: dbservers.yml
Note that you cannot do variable substitution when including one playbook inside another.
Note: You cannot conditionally pass the location to an include file, like you can with 'vars_files'. If you find yourself
needing to do this, consider how you can restructure your playbook to be more class/role oriented. This is to say you
cannot use a 'fact' to decide what include file to use. All hosts contained within the play are going to get the same
tasks. ('when' provides some ability for hosts to conditionally skip tasks).
Roles
Roles are built on the idea of a standardized directory structure: if roles/x/tasks/main.yml exists, the tasks listed
therein are added to the play, and likewise for roles/x/handlers/main.yml (handlers), roles/x/vars/main.yml (variables),
and roles/x/meta/main.yml (role dependencies).
If any files are not present, they are just ignored. So it's ok to not have a 'vars/' subdirectory for the role, for instance.
Note, you are still allowed to list tasks, vars_files, and handlers "loose" in playbooks without using roles, but roles
are a good organizational feature and are highly recommended. If there are loose things in the playbook, the roles are
evaluated first.
Also, should you wish to parameterize roles by adding variables, you can do so like this:
---
- hosts: webservers
  roles:
    - common
    - { role: foo_app_instance, dir: '/opt/a', port: 5000 }
    - { role: foo_app_instance, dir: '/opt/b', port: 5001 }
While it’s probably not something you should do often, you can also conditionally apply roles like so:
---
- hosts: webservers
  roles:
    - { role: some_role, when: "ansible_os_family == 'RedHat'" }
This works by applying the conditional to every task in the role. Conditionals are covered later on in the documentation.
Finally, you may wish to assign tags to the roles you specify. You can do so inline:
---
- hosts: webservers
  roles:
    - { role: foo, tags: ["bar", "baz"] }
If the play still has a ‘tasks’ section, those tasks are executed after roles are applied.
If you want to define certain tasks to happen before AND after roles are applied, you can do this:
---
- hosts: webservers
  pre_tasks:
    - shell: echo 'hello'
  roles:
    - { role: some_role }
  tasks:
    - shell: echo 'still busy'
  post_tasks:
    - shell: echo 'goodbye'
Note: If using tags with tasks (described later as a means of only running part of a playbook), be sure to also tag
your pre_tasks and post_tasks and pass those along as well, especially if the pre and post tasks are used for monitoring
outage window control or load balancing.
Role Dependencies
Role dependencies allow you to automatically pull in other roles when using a role. Role dependencies are stored in
the meta/main.yml file contained within the role directory, as a list of roles and parameters to insert before the
specified role.
Role dependencies can also be specified as a full path, just like top level roles:
---
dependencies:
  - { role: '/path/to/common/roles/foo', x: 1 }
Role dependencies are always executed before the role that includes them, and are recursive. By default, roles can
also only be added as a dependency once - if another role also lists it as a dependency it will not be run again. This
behavior can be overridden by adding allow_duplicates: yes to the meta/main.yml file. For example, a role named
'car' could add a role named 'wheel' to its dependencies as follows:
---
dependencies:
  - { role: wheel, n: 1 }
  - { role: wheel, n: 2 }
  - { role: wheel, n: 3 }
  - { role: wheel, n: 4 }
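For the wheel role to actually run four times, it must opt in to duplication in its own meta file. A sketch of what roles/wheel/meta/main.yml might contain (the tire and brake dependencies are purely illustrative):
---
allow_duplicates: yes
dependencies:
  - { role: tire }
  - { role: brake }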
Ansible Galaxy
Ansible Galaxy is a free site for finding, downloading, rating, and reviewing all kinds of community-developed
Ansible roles and can be a great way to get a jumpstart on your automation projects.
You can sign up with social auth, and the download client ‘ansible-galaxy’ is included in Ansible 1.4.2 and later.
Read the “About” page on the Galaxy site for more information.
See also:
YAML Syntax Learn about YAML syntax
Playbooks Review the basic Playbook language features
Best Practices Various tips about managing playbooks in the real world
Variables All about variables in playbooks
Conditionals Conditionals in playbooks
Loops Loops in playbooks
About Modules Learn about available modules
Developing Modules Learn how to extend Ansible by writing your own modules
GitHub Ansible examples Complete playbook files from the GitHub project source
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
1.3.3 Variables
Topics
• Variables
– What Makes A Valid Variable Name
– Variables Defined in Inventory
– Variables Defined in a Playbook
– Variables defined from included files and roles
– Using Variables: About Jinja2
– Jinja2 Filters
* Filters For Formatting Data
* Filters Often Used With Conditionals
* Forcing Variables To Be Defined
* Defaulting Undefined Variables
* Set Theory Filters
* Other Useful Filters
– Hey Wait, A YAML Gotcha
– Information discovered from systems: Facts
– Turning Off Facts
– Local Facts (Facts.d)
– Registered Variables
– Accessing Complex Variable Data
– Magic Variables, and How To Access Information About Other Hosts
– Variable File Separation
– Passing Variables On The Command Line
– Conditional Imports
– Variable Precedence: Where Should I Put A Variable?
While automation exists to make things more repeatable, all of your systems are likely not exactly alike. On some
systems you may want to set some behavior or configuration that is slightly different from others.
Also, some of the observed behavior or state of remote systems might need to influence how you configure those
systems. (Such as you might need to find out the IP address of a system and even use it as a configuration value on
another system).
You might have some templates for configuration files that are mostly the same, but slightly different based on those
variables.
Variables in Ansible are how we deal with differences between systems.
Once you understand variables you'll also want to dig into Conditionals and Loops. Useful things like the "group_by"
module and the "when" conditional can also be used with variables, and to help manage differences between systems.
It’s highly recommended that you consult the ansible-examples github repository to see a lot of examples of variables
put to use.
What Makes A Valid Variable Name
Before we start using variables it's important to know what valid variable names look like.
Variable names should be letters, numbers, and underscores. Variables should always start with a letter.
“foo_port” is a great variable. “foo5” is fine too.
“foo-port”, “foo port”, “foo.port” and “12” are not valid variable names.
Easy enough, let’s move on.
Variables Defined in Inventory
We've actually already covered a lot about variables in another section, so this shouldn't be terribly new, but more of
a refresher.
Often you'll want to set variables based on what groups a machine is in. For instance, maybe machines in Boston want
to use 'boston.ntp.example.com' as an NTP server.
See the Inventory document for multiple ways on how to define variables in inventory.
Variables Defined in a Playbook
In a playbook, it's possible to define variables directly inline like so:
- hosts: webservers
  vars:
    http_port: 80
This can be nice as it's right there when you are reading the playbook.
Variables defined from included files and roles
It turns out we've already talked about variables in another place too.
As described in Playbook Roles and Include Statements, variables can also be included in the playbook via include
files, which may or may not be part of an "Ansible Role". Usage of roles is preferred as it provides a nice organizational
system.
Using Variables: About Jinja2
It's nice enough to know about how to define variables, but how do you use them?
Ansible allows you to reference variables in your playbooks using the Jinja2 templating system. While you can do a
lot of complex things in Jinja, only the basics are things you really need to learn at first.
For instance, in a simple template, you can do something like:
My amp goes to {{ max_amp_value }}
And that will provide the most basic form of variable substitution.
This is also valid directly in playbooks, and you’ll occasionally want to do things like:
template: src=foo.cfg.j2 dest={{ remote_install_path }}/foo.cfg
In the above example, we used a variable to help decide where to place a file.
Inside a template you automatically have access to all of the variables that are in scope for a host. Actually it’s more
than that – you can also read variables about other hosts. We’ll show how to do that in a bit.
Note: Ansible allows Jinja2 loops and conditionals in templates, but in playbooks, we do not use them. Ansible
playbooks are pure machine-parseable YAML. This is a rather important feature as it means it is possible to code-
generate pieces of files, or to have other ecosystem tools read Ansible files. Not everyone will need this but it can
unlock possibilities.
Jinja2 Filters
Note: These are infrequently utilized features. Use them if they fit a use case you have, but this is optional knowledge.
Filters in Jinja2 are a way of transforming template expressions from one kind of data into another. Jinja2 ships with
many of these. See builtin filters in the official Jinja2 template documentation.
In addition to those, Ansible supplies many more.
Filters For Formatting Data
The following filters will take a data structure in a template and render it in a slightly different format. These are
occasionally useful for debugging:
{{ some_variable | to_nice_json }}
{{ some_variable | to_nice_yaml }}
Filters Often Used With Conditionals
The following tasks are illustrative of how filters can be used with conditionals:
tasks:
  - shell: /usr/bin/foo
    register: result
    ignore_errors: True

  # in most cases you'll want a handler, but if you want to do something right now, this is nice
  - debug: msg="it changed"
    when: result|changed
Forcing Variables To Be Defined
The default behavior from ansible and ansible.cfg is to fail if variables are undefined, but you can turn this off.
This allows an explicit check with this feature off:
{{ variable | mandatory }}
The variable value will be used as is, but the template evaluation will raise an error if it is undefined.
Defaulting Undefined Variables
Jinja2 provides a useful 'default' filter that is often a better approach to failing if a variable is not defined:
{{ some_variable | default(5) }}
In the above example, if the variable ‘some_variable’ is not defined, the value used will be 5, rather than an error being
raised.
Other Useful Filters
To get the last name of a file path, like 'foo.txt' out of '/etc/asdf/foo.txt':
{{ path | basename }}
To cast values as certain types, such as when you input a string as “True” from a vars_prompt and the system doesn’t
know it is a boolean value:
- debug: msg=test
  when: some_string_value | bool
A few useful filters are typically added with each new Ansible release. The development documentation shows how to
extend Ansible filters by writing your own as plugins, though in general, we encourage new ones to be added to core
so everyone can make use of them.
Hey Wait, A YAML Gotcha
YAML syntax requires that if you start a value with {{ foo }} you quote the whole line, since it wants to be sure you
aren't trying to start a YAML dictionary. This is covered on the YAML Syntax page.
This won't work:
- hosts: app_servers
  vars:
    app_path: {{ base_path }}/22
Do it like this and you'll be fine:
- hosts: app_servers
  vars:
    app_path: "{{ base_path }}/22"
Information discovered from systems: Facts
There are other places where variables can come from, but these are a type of variable that are discovered, not set by
the user.
Facts are information derived from speaking with your remote systems.
An example of this might be the IP address of the remote host, or what the operating system is.
To see what information is available, try the following:
ansible hostname -m setup
This will return a ginormous amount of variable data, which may look like this, as taken from Ansible 1.4 on an Ubuntu
12.04 system:
"ansible_all_ipv4_addresses": [
"REDACTED IP ADDRESS"
],
"ansible_all_ipv6_addresses": [
"REDACTED IPV6 ADDRESS"
],
"ansible_architecture": "x86_64",
"ansible_bios_date": "09/20/2012",
"ansible_bios_version": "6.00",
"ansible_cmdline": {
"BOOT_IMAGE": "/boot/vmlinuz-3.5.0-23-generic",
"quiet": true,
"ro": true,
"root": "UUID=4195bff4-e157-4e41-8701-e93f0aec9e22",
"splash": true
},
"ansible_date_time": {
"date": "2013-10-02",
"day": "02",
"epoch": "1380756810",
"hour": "19",
"iso8601": "2013-10-02T23:33:30Z",
"iso8601_micro": "2013-10-02T23:33:30.036070Z",
"minute": "33",
"month": "10",
"second": "30",
"time": "19:33:30",
"tz": "EDT",
"year": "2013"
},
"ansible_default_ipv4": {
"address": "REDACTED",
"alias": "eth0",
"gateway": "REDACTED",
"interface": "eth0",
"macaddress": "REDACTED",
"mtu": 1500,
"netmask": "255.255.255.0",
"network": "REDACTED",
"type": "ether"
},
"ansible_default_ipv6": {},
"ansible_devices": {
"fd0": {
"holders": [],
"host": "",
"model": null,
"partitions": {},
"removable": "1",
"rotational": "1",
"scheduler_mode": "deadline",
"sectors": "0",
"sectorsize": "512",
"size": "0.00 Bytes",
"support_discard": "0",
"vendor": null
},
"sda": {
"holders": [],
"host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ult
"model": "VMware Virtual S",
"partitions": {
"sda1": {
"sectors": "39843840",
"sectorsize": 512,
"size": "19.00 GB",
"start": "2048"
},
"sda2": {
"sectors": "2",
"sectorsize": 512,
"size": "1.00 KB",
"start": "39847934"
},
"sda5": {
"sectors": "2093056",
"sectorsize": 512,
"size": "1022.00 MB",
"start": "39847936"
}
},
"removable": "0",
"rotational": "1",
"scheduler_mode": "deadline",
"sectors": "41943040",
"sectorsize": "512",
"size": "20.00 GB",
"support_discard": "0",
"vendor": "VMware,"
},
"sr0": {
"holders": [],
"host": "IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)",
"model": "VMware IDE CDR10",
"partitions": {},
"removable": "1",
"rotational": "1",
"scheduler_mode": "deadline",
"sectors": "2097151",
"sectorsize": "512",
"size": "1024.00 MB",
"support_discard": "0",
"vendor": "NECVMWar"
}
},
"ansible_distribution": "Ubuntu",
"ansible_distribution_release": "precise",
"ansible_distribution_version": "12.04",
"ansible_domain": "",
"ansible_env": {
"COLORTERM": "gnome-terminal",
"DISPLAY": ":0",
"HOME": "/home/mdehaan",
"LANG": "C",
"LESSCLOSE": "/usr/bin/lesspipe %s %s",
"LESSOPEN": "| /usr/bin/lesspipe %s",
"LOGNAME": "root",
"LS_COLORS": "rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=
"MAIL": "/var/mail/root",
"OLDPWD": "/root/ansible/docsite",
"PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"PWD": "/root/ansible",
"SHELL": "/bin/bash",
"SHLVL": "1",
"SUDO_COMMAND": "/bin/bash",
"SUDO_GID": "1000",
"SUDO_UID": "1000",
"SUDO_USER": "mdehaan",
"TERM": "xterm",
"USER": "root",
"USERNAME": "root",
"XAUTHORITY": "/home/mdehaan/.Xauthority",
"_": "/usr/local/bin/ansible"
},
"ansible_eth0": {
"active": true,
"device": "eth0",
"ipv4": {
"address": "REDACTED",
"netmask": "255.255.255.0",
"network": "REDACTED"
},
"ipv6": [
{
"address": "REDACTED",
"prefix": "64",
"scope": "link"
}
],
"macaddress": "REDACTED",
"module": "e1000",
"mtu": 1500,
"type": "ether"
},
"ansible_form_factor": "Other",
"ansible_fqdn": "ubuntu2",
"ansible_hostname": "ubuntu2",
"ansible_interfaces": [
"lo",
"eth0"
],
"ansible_kernel": "3.5.0-23-generic",
"ansible_lo": {
"active": true,
"device": "lo",
"ipv4": {
"address": "127.0.0.1",
"netmask": "255.0.0.0",
"network": "127.0.0.0"
},
"ipv6": [
{
"address": "::1",
"prefix": "128",
"scope": "host"
}
],
"mtu": 16436,
"type": "loopback"
},
"ansible_lsb": {
"codename": "precise",
"description": "Ubuntu 12.04.2 LTS",
"id": "Ubuntu",
"major_release": "12",
"release": "12.04"
},
"ansible_machine": "x86_64",
"ansible_memfree_mb": 74,
"ansible_memtotal_mb": 991,
"ansible_mounts": [
{
"device": "/dev/sda1",
"fstype": "ext4",
"mount": "/",
"options": "rw,errors=remount-ro",
"size_available": 15032406016,
"size_total": 20079898624
}
],
"ansible_os_family": "Debian",
"ansible_pkg_mgr": "apt",
"ansible_processor": [
"Intel(R) Core(TM) i7 CPU 860 @ 2.80GHz"
],
"ansible_processor_cores": 1,
"ansible_processor_count": 1,
"ansible_processor_threads_per_core": 1,
"ansible_processor_vcpus": 1,
"ansible_product_name": "VMware Virtual Platform",
"ansible_product_serial": "REDACTED",
"ansible_product_uuid": "REDACTED",
"ansible_product_version": "None",
"ansible_python_version": "2.7.3",
"ansible_selinux": false,
"ansible_ssh_host_key_dsa_public": "REDACTED KEY VALUE"
"ansible_ssh_host_key_ecdsa_public": "REDACTED KEY VALUE"
"ansible_ssh_host_key_rsa_public": "REDACTED KEY VALUE"
"ansible_swapfree_mb": 665,
"ansible_swaptotal_mb": 1021,
"ansible_system": "Linux",
"ansible_system_vendor": "VMware, Inc.",
"ansible_user_id": "root",
"ansible_userspace_architecture": "x86_64",
"ansible_userspace_bits": "64",
"ansible_virtualization_role": "guest",
"ansible_virtualization_type": "VMware"
In the above, the model of the first hard drive may be referenced in a template or playbook as:
{{ ansible_devices.sda.model }}
Facts are frequently used in conditionals (see Conditionals) and also in templates.
Facts can also be used to create dynamic groups of hosts that match particular criteria; see the About Modules
documentation on 'group_by' for details, as well as generalized conditional statements as discussed in the Conditionals
chapter.
Turning Off Facts
If you know you don't need any fact data about your hosts, and know everything about your systems centrally, you
can turn off fact gathering. This has advantages mainly in scaling Ansible in push mode with very large numbers of
systems, or if you are using Ansible on experimental platforms. In any play, just do this:
- hosts: whatever
  gather_facts: no
Local Facts (Facts.d)
Note: Perhaps "local facts" is a bit of a misnomer; it means "locally supplied user values" as opposed to "centrally
supplied user values", or what facts are – "locally dynamically determined values".
If a remotely managed system has an "/etc/ansible/facts.d" directory, any files in this directory ending in ".fact" can
be JSON, INI, or executable files returning JSON, and these can supply local facts in Ansible.
For instance, assume a /etc/ansible/facts.d/preferences.fact:
[general]
asdf=1
bar=2
This will produce a hash variable fact named “general” with ‘asdf’ and ‘bar’ as members. To validate this, run the
following:
ansible <hostname> -m setup -a "filter=ansible_local"
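Once gathered, members of a facts.d file live under the ansible_local namespace, so a template or playbook can reference the values above with the usual dotted notation, for instance:
{{ ansible_local.preferences.general.asdf }}
Here 'preferences' comes from the filename preferences.fact and 'general' from the INI section above.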
The local namespace prevents any user supplied fact from overriding system facts or variables defined elsewhere in
the playbook.
Registered Variables
Another major use of variables is running a command and saving its result into a variable. Results will vary from
module to module. Use of -v when executing playbooks will show possible values for the results.
The value of a task being executed in ansible can be saved in a variable and used later. See some examples of this in
the Conditionals chapter.
While it’s mentioned elsewhere in that document too, here’s a quick syntax example:
- hosts: web_servers
  tasks:
    - shell: /usr/bin/foo
      register: foo_result
      ignore_errors: True
    - shell: /usr/bin/bar
      when: foo_result.rc == 5
Registered variables are valid on the host for the remainder of the playbook run, which is the same as the lifetime of
"facts" in Ansible. Effectively registered variables are just like facts.
Accessing Complex Variable Data
Some provided facts, like networking information, are made available as nested data structures. To access them, a
simple {{ foo }} is not sufficient, but it is still easy to do. Here's how we get an IP address:
{{ ansible_eth0["ipv4"]["address"] }}
OR alternatively:
{{ ansible_eth0.ipv4.address }}
Magic Variables, and How To Access Information About Other Hosts
Even if you didn't define them yourself, Ansible provides a few variables for you automatically. The most important of
these are 'hostvars', 'group_names', and 'groups'. Users should not use these names themselves as they are reserved.
'environment' is also reserved.
Hostvars lets you ask about the variables of another host, including facts that have been gathered about that host. If,
at this point, you haven’t talked to that host yet in any play in the playbook or set of playbooks, you can get at the
variables, but you will not be able to see the facts.
If your database server wants to use the value of a ‘fact’ from another node, or an inventory variable assigned to
another node, it’s easy to do so within a template or even an action line:
{{ hostvars['test.example.com']['ansible_distribution'] }}
Additionally, group_names is a list (array) of all the groups the current host is in. This can be used in templates using
Jinja2 syntax to make template source files that vary based on the group membership (or role) of the host:
{% if ’webserver’ in group_names %}
# some part of a configuration file that only applies to webservers
{% endif %}
groups is a list of all the groups (and hosts) in the inventory. This can be used to enumerate all hosts within a group.
For example:
A frequently used idiom is walking a group to find all IP addresses in that group:
{% for host in groups['app_servers'] %}
   {{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}
{% endfor %}
An example of this could include pointing a frontend proxy server to all of the app servers, setting up the correct
firewall rules between servers, etc.
Additionally, inventory_hostname is the name of the host as configured in Ansible's inventory host file. This
can be useful when you don't want to rely on the discovered hostname ansible_hostname, or for other mysterious
reasons. If you have a long FQDN, inventory_hostname_short also contains the part up to the first period, without the
rest of the domain.
play_hosts is available as a list of hostnames that are in scope for the current play. This may be useful for filling out
templates with multiple hostnames or for injecting the list into the rules for a load balancer, as in the sketch below.
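A minimal template sketch along those lines, assuming a hypothetical load balancer config that takes one backend per line:
# backends.conf.j2
{% for host in play_hosts %}
server {{ host }}
{% endfor %}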
Don’t worry about any of this unless you think you need it. You’ll know when you do.
Also available, inventory_dir is the pathname of the directory holding Ansible's inventory host file, and inventory_file
is the pathname and filename pointing to Ansible's inventory host file.
Variable File Separation
It's a great idea to keep your playbooks under source control, but you may wish to make the playbook source public
while keeping certain important variables private. Similarly, sometimes you may just want to keep certain information
in different files, away from the main playbook.
You can do this by using an external variables file, or files, just like this:
---
- hosts: all
  remote_user: root
  vars:
    favcolor: blue
  vars_files:
    - /vars/external_vars.yml
  tasks:
    - name: this is just a placeholder
      command: /bin/echo foo
This removes the risk of sharing sensitive data with others when sharing your playbook source with them.
The contents of each variables file is a simple YAML dictionary, like this:
---
# in the above example, this would be vars/external_vars.yml
somevar: somevalue
password: magic
Note: It’s also possible to keep per-host and per-group variables in very similar files, this is covered in Patterns.
Passing Variables On The Command Line
In addition to vars_prompt and vars_files, it is possible to send variables over the Ansible command line. This is par-
ticularly useful when writing a generic release playbook where you may want to pass in the version of the application
to deploy:
ansible-playbook release.yml --extra-vars "version=1.23.45 other_variable=foo"
This is useful, for, among other things, setting the hosts group or the user for the playbook.
Example:
---
- hosts: '{{ hosts }}'
  remote_user: '{{ user }}'
  tasks:
    - ...

ansible-playbook release.yml --extra-vars "hosts=vipers user=starbuck"
As of Ansible 1.2, you can also pass in extra vars as quoted JSON, like so:
--extra-vars '{"pacman":"mrs","ghosts":["inky","pinky","clyde","sue"]}'
The key=value form is obviously simpler, but it’s there if you need it!
As of Ansible 1.3, extra vars can be loaded from a JSON file with the “@” syntax:
--extra-vars "@some_file.json"
Also as of Ansible 1.3, extra vars can be formatted as YAML, either on the command line or in a file as above.
Conditional Imports
Note: This behavior is infrequently used in Ansible. You may wish to skip this section. The ‘group_by’ module as
described in the module documentation is a better way to achieve this behavior in most cases.
Sometimes you will want to do certain things differently in a playbook based on certain criteria. Having one playbook
that works on multiple platforms and OS versions is a good example.
As an example, the name of the Apache package may be different between CentOS and Debian, but it is easily handled
with a minimum of syntax in an Ansible Playbook:
---
- hosts: all
  remote_user: root
  vars_files:
    - "vars/common.yml"
    - [ "vars/{{ ansible_os_family }}.yml", "vars/os_defaults.yml" ]
  tasks:
    - name: make sure apache is running
      service: name={{ apache }} state=running
Note: The variable ‘ansible_os_family’ is being interpolated into the list of filenames being defined for vars_files.
As a reminder, the various YAML files contain just keys and values:
---
# for vars/CentOS.yml
apache: httpd
somethingelse: 42
How does this work? If the operating system was 'CentOS', the first file Ansible would try to import would be
'vars/CentOS.yml', followed by 'vars/os_defaults.yml' if that file did not exist. If no files in the list were found, an
error would be raised. On Debian, it would instead first look towards 'vars/Debian.yml' instead of 'vars/CentOS.yml',
before falling back on 'vars/os_defaults.yml'. Pretty simple.
To use this conditional import feature, you’ll need facter or ohai installed prior to running the playbook, but you can
of course push this out with Ansible if you like:
# for facter
ansible all -m yum -a "pkg=facter state=installed"
ansible all -m yum -a "pkg=ruby-json state=installed"
# for ohai
ansible all -m yum -a "pkg=ohai state=installed"
Ansible's approach to configuration – separating variables from tasks – keeps your playbooks from turning into
arbitrary code with ugly nested ifs, conditionals, and so on, and results in more streamlined and auditable configuration
rules, especially because there are a minimum of decision points to track.
Variable Precedence: Where Should I Put A Variable?
A lot of folks may ask about how variables override one another. Ultimately it's Ansible's philosophy that it's better
you know where to put a variable, and then you have to think about it a lot less.
Avoid defining the variable "x" in 47 places and then asking the question "which x gets used". Why? Because that's
not Ansible's Zen philosophy of doing things.
There is only one Empire State Building. One Mona Lisa, etc. Figure out where to define a variable, and don't make
it complicated.
However, let’s go ahead and get precedence out of the way! It exists. It’s a real thing, and you might have a use for it.
If multiple variables of the same name are defined in different places, they win in a certain order, which is:
* -e variables always win
* then comes "most everything else"
* then comes variables defined in inventory
* then "role defaults", which are the most "defaulty" and lose in priority to everything.
That seems a little theoretical. Let’s show some examples and where you would choose to put what based on the kind
of control you might want over values.
First off, group variables are super powerful.
Site wide defaults should be defined as a ‘group_vars/all’ setting. Group variables are generally placed alongside your
inventory file. They can also be returned by a dynamic inventory script (see Dynamic Inventory) or defined in things
like Ansible Tower from the UI or API:
---
# file: /etc/ansible/group_vars/all
# this is the site wide default
ntp_server: default-time.example.com
Regional information might be defined in a ‘group_vars/region’ variable. If this group is a child of the ‘all’ group
(which it is, because all groups are), it will override the group that is higher up and more general:
---
# file: /etc/ansible/group_vars/boston
ntp_server: boston-time.example.com
If for some crazy reason we wanted to tell just a specific host to use a specific NTP server, it would then override the
group variable!:
---
# file: /etc/ansible/host_vars/xyz.boston.example.com
ntp_server: override.example.com
So that covers inventory and what you would normally set there. It’s a great place for things that deal with geography
or behavior. Since groups are frequently the entity that maps roles onto hosts, it is sometimes a shortcut to set variables
on the group instead of defining them on a role. You could go either way.
Remember: Child groups override parent groups, and hosts always override their groups.
Next up: learning about role variable precedence.
We’ll pretty much assume you are using roles at this point. You should be using roles for sure. Roles are great. You
are using roles aren’t you? Hint hint.
Ok, so if you are writing a redistributable role with reasonable defaults, put those in the ‘roles/x/defaults/main.yml’
file. This means the role will bring along a default value but ANYTHING in Ansible will override it. It’s just a default.
That’s why it says “defaults” :) See Playbook Roles and Include Statements for more info about this:
---
# file: roles/x/defaults/main.yml
# if not overridden in inventory or as a parameter, this is the value that will be used
http_port: 80
If you are writing a role and want to ensure the value in the role is absolutely used in that role, and is not going to be
overridden by inventory, you should put it in roles/x/vars/main.yml like so, and inventory values cannot override it. -e
however, still will:
---
# file: roles/x/vars/main.yml
# this will absolutely be used in this role
http_port: 80
So the above is a great way to plug in constants about the role that are always true. If you are not sharing your role
with others, app-specific behaviors like ports are fine to put in here. But if you are sharing roles with others, putting
variables in here might be bad: nobody will be able to override them with inventory, although they can still do so by
passing a parameter to the role.
Parameterized roles are useful.
If you are using a role and want to override a default, pass it as a parameter to the role like so:
roles:
  - { role: apache, http_port: 8080 }
This makes it clear to the playbook reader that you’ve made a conscious choice to override some default in the role, or
pass in some configuration that the role can’t assume by itself. It also allows you to pass something site-specific that
isn’t really part of the role you are sharing with others.
This can often be used for things that might apply to some hosts multiple times, like so:
roles:
  - { role: app_user, name: Ian }
  - { role: app_user, name: Terry }
  - { role: app_user, name: Graham }
  - { role: app_user, name: John }
That's a bit arbitrary, but you can see how the same role was invoked multiple times. In that example it's quite likely
there was no default for 'name' supplied at all. Ansible can yell at you when variables aren't defined – it's the default
behavior in fact.
So that’s a bit about roles.
There are a few bonus things that go on with roles.
Generally speaking, variables set in one role are available to others. This means if you have a
“roles/common/vars/main.yml” you can set variables in there and make use of them in other roles and elsewhere
in your playbook:
roles:
  - { role: common_settings }
  - { role: something, foo: 12 }
  - { role: something_else }
Note: There are some protections in place to avoid the need to namespace variables. In the above, variables defined
in common_settings are most definitely available to the 'something' and 'something_else' tasks, but 'something' is
guaranteed to have foo set at 12, even if somewhere deep in common_settings it was set to 20.
So, that’s precedence, explained in a more direct way. Don’t worry about precedence, just think about if your role is
defining a variable that is a default, or a “live” variable you definitely want to use. Inventory lies in precedence right
in the middle, and if you want to forcibly override something, use -e.
If you found that a little hard to understand, take a look at the ansible-examples repo on our github for a bit more about
how all of these things can work together.
See also:
Playbooks An introduction to playbooks
Conditionals Conditional statements in playbooks
Loops Looping in playbooks
Playbook Roles and Include Statements Playbook organization by roles
Best Practices Best practices in playbooks
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.3.4 Conditionals
Topics
• Conditionals
– The When Statement
– Loading in Custom Facts
– Applying ‘when’ to roles and includes
– Conditional Imports
– Selecting Files And Templates Based On Variables
– Register Variables
Often the result of a play may depend on the value of a variable, fact (something learned about the remote system), or
previous task result. In some cases, the values of variables may depend on other variables. Further, additional groups
can be created to manage hosts based on whether the hosts match other criteria. There are many options to control
execution flow in Ansible.
Let’s dig into what they are.
The When Statement
Sometimes you will want to skip a particular step on a particular host. This could be something as simple as not
installing a certain package if the operating system is a particular version, or it could be something like performing
some cleanup steps if a filesystem is getting full.
This is easy to do in Ansible, with the when clause, which contains a Jinja2 expression (see Variables). It’s actually
pretty simple:
tasks:
  - name: "shutdown Debian flavored systems"
    command: /sbin/shutdown -t now
    when: ansible_os_family == "Debian"
A number of Jinja2 “filters” can also be used in when statements, some of which are unique and provided by Ansible.
Suppose we want to ignore the error of one statement and then decide to do something conditionally based on success
or failure:
tasks:
  - command: /bin/false
    register: result
    ignore_errors: True
  - command: /bin/something
    when: result|failed
  - command: /bin/something_else
    when: result|success
  - command: /bin/still/something_else
    when: result|skipped
Note that was a little bit of foreshadowing on the ‘register’ statement. We’ll get to it a bit later in this chapter.
As a reminder, to see what facts are available on a particular system, you can do:
ansible hostname.example.com -m setup
Tip: Sometimes you’ll get back a variable that’s a string and you’ll want to do a math operation comparison on it. You
can do this like so:
tasks:
  - shell: echo "only on Red Hat 6, derivatives, and later"
    when: ansible_os_family == "RedHat" and ansible_lsb.major_release|int >= 6
Note: the above example requires the lsb_release package on the target host in order to return the
ansible_lsb.major_release fact.
Variables defined in the playbooks or inventory can also be used. An example may be the execution of a task based on
a variable’s boolean value:
vars:
  epic: true

A conditional execution might look like:
tasks:
  - shell: echo "This certainly is epic!"
    when: epic

or:
tasks:
  - shell: echo "This certainly isn't epic!"
    when: not epic
If a required variable has not been set, you can skip or fail using Jinja2’s defined test. For example:
tasks:
  - shell: echo "I've got '{{ foo }}' and am not afraid to use it!"
    when: foo is defined
This is especially useful in combination with the conditional import of vars files (see below).
Note that when combining when with with_items (see Loops), be aware that the when statement is processed separately
for each item. This is by design:
tasks:
  - command: echo {{ item }}
    with_items: [ 0, 2, 4, 6, 8, 10 ]
    when: item > 5
Loading in Custom Facts
It's also easy to provide your own facts if you want, which is covered in Developing Modules. To run them, just make
a call to your own custom fact gathering module at the top of your list of tasks, and variables returned there will be
accessible to future tasks:
tasks:
- name: gather site specific fact data
action: site_facts
- command: /usr/bin/thingy
when: my_custom_fact_just_retrieved_from_the_remote_system == ’1234’
Applying 'when' to roles and includes

If you have several tasks that all share the same conditional statement, you can affix the conditional to a task include statement as below. Note this does not work with playbook includes, just task includes. All the tasks get evaluated, but the conditional is applied to each and every task:

- include: tasks/sometasks.yml
  when: "'reticulating splines' in output"
Or with a role:
- hosts: webservers
  roles:
    - { role: debian_stock_config, when: ansible_os_family == 'Debian' }
You will note a lot of ‘skipped’ output by default in Ansible when using this approach on systems that don’t match the
criteria. Read up on the ‘group_by’ module in the About Modules docs for a more streamlined way to accomplish the
same thing.
Conditional Imports
Note: This is an advanced topic that is infrequently used. You can probably skip this section.
Sometimes you will want to do certain things differently in a playbook based on certain criteria. Having one playbook
that works on multiple platforms and OS versions is a good example.
As an example, the name of the Apache package may be different between CentOS and Debian, but it is easily handled
with a minimum of syntax in an Ansible Playbook:
---
- hosts: all
  remote_user: root
  vars_files:
    - "vars/common.yml"
    - [ "vars/{{ ansible_os_family }}.yml", "vars/os_defaults.yml" ]
  tasks:
    - name: make sure apache is running
      service: name={{ apache }} state=running
Note: The variable ‘ansible_os_family’ is being interpolated into the list of filenames being defined for vars_files.
As a reminder, the various YAML files contain just keys and values:
---
# for vars/CentOS.yml
apache: httpd
somethingelse: 42
How does this work? If the operating system is 'CentOS', the first file Ansible would try to import is 'vars/CentOS.yml', followed by 'vars/os_defaults.yml' if that file does not exist. If no files in the list are found, an error is raised. On Debian, Ansible would first look for 'vars/Debian.yml' instead of 'vars/CentOS.yml', before falling back on 'vars/os_defaults.yml'. Pretty simple.
To use this conditional import feature, you’ll need facter or ohai installed prior to running the playbook, but you can
of course push this out with Ansible if you like:
# for facter
ansible all -m yum -a "name=facter state=installed"
ansible all -m yum -a "name=ruby-json state=installed"

# for ohai
ansible all -m yum -a "name=ohai state=installed"
Ansible's approach to configuration, separating variables from tasks, keeps your playbooks from turning into arbitrary code with ugly nested ifs and conditionals. It results in more streamlined and auditable configuration rules, especially because there are a minimum of decision points to track.
Selecting Files And Templates Based On Variables

Note: This is an advanced topic that is infrequently used. You can probably skip this section.
Sometimes a configuration file you want to copy, or a template you will use may depend on a variable. The following
construct selects the first available file appropriate for the variables of a given host, which is often much cleaner than
putting a lot of if conditionals in a template.
The following example shows how to template out a configuration file that was very different between, say, CentOS
and Debian:
- name: template a file
  template: src={{ item }} dest=/etc/myapp/foo.conf
  with_first_found:
    - files:
        - "{{ ansible_distribution }}.conf"
        - default.conf
      paths:
        - search_location_one/somedir/
        - /opt/other_location/somedir/
Register Variables
Often in a playbook it may be useful to store the result of a given command in a variable and access it later. Used this way, the command module can often eliminate the need to write site specific facts; for instance, you could test for the existence of a particular program.
The ‘register’ keyword decides what variable to save a result in. The resulting variables can be used in templates,
action lines, or when statements. It looks like this (in an obviously trivial example):
tasks:
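  # a minimal sketch: capture a command's output, then act on it in a later
  # task (the file and search string here are illustrative)
  - shell: cat /etc/motd
    register: motd_contents

  - shell: echo "motd contains the word hi"
    when: motd_contents.stdout.find('hi') != -1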
As shown previously, the registered variable's string contents are accessible with the 'stdout' value. The registered result can be used in the "with_items" of a task if it is converted into a list (or already is a list) as shown below. "stdout_lines" is already available on the object, though you could also call "home_dirs.stdout.split()" if you wanted, and could split by other fields:
- name: registered variable usage as a with_items list
  hosts: all
  tasks:
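    # a sketch consistent with the text above: gather home directories,
    # then loop over the registered stdout_lines
    - name: retrieve the list of home directories
      command: ls /home
      register: home_dirs

    - name: add home dirs to the backup spooler
      file: path=/mnt/bkspool/{{ item }} src=/home/{{ item }} state=link
      with_items: home_dirs.stdout_lines
      # same as with_items: home_dirs.stdout.split()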
See also:
Playbooks An introduction to playbooks
Playbook Roles and Include Statements Playbook organization by roles
Best Practices Best practices in playbooks
Conditionals Conditional statements in playbooks
Variables All about variables
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.3.5 Loops
Often you’ll want to do many things in one task, such as create a lot of users, install a lot of packages, or repeat a
polling step until a certain result is reached.
This chapter is all about how to use loops in playbooks.
Topics
• Loops
– Standard Loops
– Nested Loops
– Looping over Hashes
– Looping over Fileglobs
– Looping over Parallel Sets of Data
– Looping over Subelements
– Looping over Integer Sequences
– Random Choices
– Do-Until Loops
– Finding First Matched Files
– Iterating Over The Results of a Program Execution
– Looping Over A List With An Index
– Flattening A List
– Using register with a loop
– Writing Your Own Iterators
Standard Loops
To save some typing, repeated tasks can be written in short-hand like so:
- name: add several users
  user: name={{ item }} state=present groups=wheel
  with_items:
    - testuser1
    - testuser2
If you have defined a YAML list in a variables file, or the ‘vars’ section, you can also do:
with_items: somelist
The yum and apt modules use with_items to execute fewer package manager transactions.
Note that the types of items you iterate over with ‘with_items’ do not have to be simple lists of strings. If you have a
list of hashes, you can reference subkeys using things like:
- name: add several users
  user: name={{ item.name }} state=present groups={{ item.groups }}
  with_items:
    - { name: 'testuser1', groups: 'wheel' }
    - { name: 'testuser2', groups: 'root' }
Nested Loops
As with the case of ‘with_items’ above, you can use previously defined variables. Just specify the variable’s name
without templating it with ‘{{ }}’:
- name: here, 'users' contains the above list of employees
  mysql_user: name={{ item[0] }} priv={{ item[1] }}.*:ALL append_privs=yes password=foo
  with_nested:
    - users
    - [ 'clientdb', 'employeedb', 'providerdb' ]
Looping over Hashes

Suppose you have a hash of users, each with a name and telephone number (a sketch of the data follows the task below), and you want to print every user's name and phone number. You can loop through the elements of a hash using with_dict like this:
tasks:
  - name: Print phone records
    debug: msg="User {{ item.key }} is {{ item.value.name }} ({{ item.value.telephone }})"
    with_dict: users
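For reference, the users variable assumed above might look like the following (the names and numbers are illustrative):

---
users:
  alice:
    name: Alice Appleworth
    telephone: 123-456-7890
  bob:
    name: Bob Bananarama
    telephone: 987-654-3210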
Looping over Fileglobs

with_fileglob matches all files in a single directory, non-recursively, that match a pattern. It can be used like this:
---
- hosts: all
  tasks:
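    # a sketch: make sure a directory exists, then copy every file matching
    # a glob into it (the paths are illustrative)
    - file: dest=/etc/fooapp state=directory

    - copy: src={{ item }} dest=/etc/fooapp/ owner=root mode=600
      with_fileglob:
        - /playbooks/files/fooapp/*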
Looping over Parallel Sets of Data

Note: This is an uncommon thing to want to do, but we're documenting it for completeness. You probably won't be reaching for this one often.

Suppose you have the following variable data loaded in from somewhere:
---
alpha: [ ’a’, ’b’, ’c’, ’d’ ]
numbers: [ 1, 2, 3, 4 ]
And you want the set of ‘(a, 1)’ and ‘(b, 2)’ and so on. Use ‘with_together’ to get this:
tasks:
  - debug: msg="{{ item.0 }} and {{ item.1 }}"
    with_together:
      - alpha
      - numbers
Looping over Subelements

Suppose you want to do something like loop over a list of users, creating them, and allowing them to login by a certain set of SSH keys.
How might that be accomplished? Let’s assume you had the following defined and loaded in via “vars_files” or maybe
a “group_vars/all” file:
---
users:
  - name: alice
    authorized:
      - /tmp/alice/onekey.pub
      - /tmp/alice/twokey.pub
  - name: bob
    authorized:
      - /tmp/bob/id_rsa.pub
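Given that data, creating the users and their keys might look like the following sketch (with_subelements pairs each user with each entry in that user's 'authorized' list):

tasks:
  - user: name={{ item.name }} state=present generate_ssh_key=yes
    with_items: users

  - authorized_key: "user={{ item.0.name }} key='{{ lookup('file', item.1) }}'"
    with_subelements:
      - users
      - authorized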
Subelements walks a list of hashes (aka dictionaries) and then traverses a list with a given key inside of those records.
The authorized_key pattern is exactly where it comes up most.
Looping over Integer Sequences

with_sequence generates a sequence of items in ascending numerical order. You can specify a start, end, and an optional step value.
Arguments should be specified in key=value pairs. If supplied, the ‘format’ is a printf style string.
Numerical values can be specified in decimal, hexadecimal (0x3f8) or octal (0600). Negative numbers are not supported. This works as follows:
---
- hosts: all
  tasks:
    # create groups
    - group: name=evens state=present
    - group: name=odds state=present
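    # a sketch of the sequence itself: create some test users, formatting
    # each item with a printf-style string (the names are illustrative)
    - user: name={{ item }} state=present groups=evens
      with_sequence: start=0 end=32 format=testuser%02x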
Random Choices
The 'random_choice' feature can be used to pick something at random. While it's not a load balancer (there are modules for those), it can somewhat be used as a poor man's load balancer in a MacGyver-like situation:
- debug: msg={{ item }}
  with_random_choice:
    - "go through the door"
    - "drink from the goblet"
    - "press the red button"
    - "do nothing"
Do-Until Loops
Sometimes you would want to retry a task until a certain condition is met. Here’s an example:
- action: shell /usr/bin/foo
  register: result
  until: result.stdout.find("all systems go") != -1
  retries: 5
  delay: 10
The above example runs the shell module repeatedly until the module's result contains "all systems go" in its stdout, or until the task has been retried 5 times with a delay of 10 seconds between attempts. The default value for "retries" is 3 and for "delay" is 5.

The task returns the results of the final attempt. The results of individual retries can be viewed with the -vv option. The registered variable will also have a new key, "attempts", holding the number of retries for the task.
Finding First Matched Files

Note: This is an uncommon thing to want to do, but we're documenting it for completeness. You probably won't be reaching for this one often.
This isn’t exactly a loop, but it’s close. What if you want to use a reference to a file based on the first file found that
matches a given criteria, and some of the filenames are determined by variable names? Yes, you can do that as follows:
- name: INTERFACES | Create Ansible header for /etc/network/interfaces
  template: src={{ item }} dest=/etc/foo.conf
  with_first_found:
    - "{{ ansible_virtualization_type }}_foo.conf"
    - "default_foo.conf"
This tool also has a long form version that allows for configurable search paths. Here’s an example:
This tool also has a long form version that allows for configurable search paths. Here's an example:

- name: some configuration template
  template: src={{ item }} dest=/etc/file.cfg mode=0444 owner=root group=root
  with_first_found:
    - files:
        - "{{ inventory_hostname }}/etc/file.cfg"
      paths:
        - ../../../templates.overwrites
        - ../../../templates
    - files:
        - etc/file.cfg
      paths:
        - templates
Iterating Over The Results of a Program Execution

Note: This is an uncommon thing to want to do, but we're documenting it for completeness. You probably won't be reaching for this one often.

Sometimes you might want to execute a program and, based on that program's output, loop over the results line by line. Ansible provides a neat way to do that, though you should remember this is always executed on the control machine, not the remote machine:
- name: Example of looping over a command result
  shell: /usr/bin/frobnicate {{ item }}
  with_lines: /usr/bin/frobnications_per_host --param {{ inventory_hostname }}
Ok, that was a bit arbitrary. In fact, if you’re doing something that is inventory related you might just want to write
a dynamic inventory source instead (see Dynamic Inventory), but this can be occasionally useful in quick-and-dirty
implementations.
Should you ever need to execute a command remotely, you would not use the above method. Instead do this:
- name: Example of looping over a REMOTE command result
  shell: /usr/bin/something
  register: command_result
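# then loop over the registered lines in a later task; a sketch
- name: Do something with each result
  shell: /usr/bin/something_else --param {{ item }}
  with_items: command_result.stdout_lines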
Looping Over A List With An Index

Note: This is an uncommon thing to want to do, but we're documenting it for completeness. You probably won't be reaching for this one often.

If you want to loop over an array and also get the numeric index of where you are in the array as you go, you can also do that. It's uncommonly used:
- name: indexed loop demo
  debug: msg="at array position {{ item.0 }} there is a value {{ item.1 }}"
  with_indexed_items: some_list
Flattening A List
Note: This is an uncommon thing to want to do, but we’re documenting it for completeness. You probably won’t be
reaching for this one often.
In rare instances you might have several lists of lists, and you just want to iterate over every item in all of those lists.
Assume a really crazy hypothetical datastructure:
---
# file: roles/foo/vars/main.yml
packages_base:
  - [ 'foo-package', 'bar-package' ]
packages_apps:
  - [ ['one-package', 'two-package'] ]
  - [ ['red-package'], ['blue-package'] ]
As you can see, the formatting of packages in these lists is all over the place. How can we install all of the packages in both lists?
- name: flattened loop demo
  yum: name={{ item }} state=installed
  with_flattened:
    - packages_base
    - packages_apps
That’s how!
Using register with a loop

When using register with a loop, the data structure placed in the variable will contain a results attribute that is a list of all responses from the module.
Here is an example of using register with with_items:
- shell: echo "{{ item }}"
  with_items:
    - one
    - two
  register: echo
The resulting data structure differs from the one returned when using register without a loop:
{
    "changed": true,
    "msg": "All items completed",
    "results": [
        {
            "changed": true,
            "cmd": "echo \"one\" ",
            "delta": "0:00:00.003110",
            "end": "2013-12-19 12:00:05.187153",
            "invocation": {
                "module_args": "echo \"one\"",
                "module_name": "shell"
            },
            "item": "one",
            "rc": 0,
            "start": "2013-12-19 12:00:05.184043",
            "stderr": "",
            "stdout": "one"
        },
        {
            "changed": true,
            "cmd": "echo \"two\" ",
            "delta": "0:00:00.002920",
            "end": "2013-12-19 12:00:05.245502",
            "invocation": {
                "module_args": "echo \"two\"",
                "module_name": "shell"
            },
            "item": "two",
            "rc": 0,
            "start": "2013-12-19 12:00:05.242582",
            "stderr": "",
            "stdout": "two"
        }
    ]
}
Subsequent loops over the registered variable to inspect the results may look like:
- name: Fail if return code is not 0
  fail:
    msg: "The command ({{ item.cmd }}) did not have a 0 return code"
  when: item.rc != 0
  with_items: echo.results
Writing Your Own Iterators

While you ordinarily shouldn't have to, should you wish to write your own ways to loop over arbitrary data structures, you can read Developing Plugins for some starter information. Each of the above features is implemented as a plugin in Ansible, so there are many implementations to reference.
See also:
Playbooks An introduction to playbooks
Playbook Roles and Include Statements Playbook organization by roles
Best Practices Best practices in playbooks
1.3.6 Best Practices

Here are some tips for making the most of Ansible playbooks.

You can find some example playbooks illustrating these best practices in our ansible-examples repository. (NOTE: These may not use all of the features in the latest release, but are still an excellent reference!)
Topics
• Best Practices
– Content Organization
* Directory Layout
* How to Arrange Inventory, Stage vs Production
* Group And Host Variables
* Top Level Playbooks Are Separated By Role
* Task And Handler Organization For A Role
* What This Organization Enables (Examples)
* Deployment vs Configuration Organization
– Stage vs Production
– Rolling Updates
– Always Mention The State
– Group By Roles
– Operating System and Distribution Variance
– Bundling Ansible Modules With Playbooks
– Whitespace and Comments
– Always Name Tasks
– Keep It Simple
– Version Control
Content Organization
The following section shows one of many possible ways to organize playbook content. Your usage of Ansible should fit your needs, not ours, so feel free to modify this approach and organize as you see fit.
(One thing you will definitely want to do though, is use the “roles” organization feature, which is documented as part
of the main playbooks page. See Playbook Roles and Include Statements).
Directory Layout
The top level of the directory would contain files and directories like so:
production                 # inventory file for production servers
stage                      # inventory file for stage environment

group_vars/
   group1                  # here we assign variables to particular groups
   group2                  # ""
host_vars/
   hostname1               # if systems need specific variables, put them here
   hostname2               # ""

roles/
    common/                # this hierarchy represents a "role"
        tasks/             #
            main.yml       #  <-- tasks file can include smaller files if warranted
        handlers/          #
            main.yml       #  <-- handlers file
        templates/         #  <-- files for use with the template resource
            ntp.conf.j2    #  <------- templates end in .j2
        files/             #
            bar.txt        #  <-- files for use with the copy resource
            foo.sh         #  <-- script files for use with the script resource
        vars/              #
            main.yml       #  <-- variables associated with this role

    webtier/               # same kind of structure as "common" was above, done for the webtier role
    monitoring/            # ""
    fooapp/                # ""
How to Arrange Inventory, Stage vs Production

In the example below, the production file contains the inventory of all of your production hosts. Of course you can pull inventory from an external data source as well, but this is just a basic example.

It is suggested that you define groups based on purpose of the host (roles) and also geography or datacenter location (if applicable):
# file: production
[atlanta-webservers]
www-atl-1.example.com
www-atl-2.example.com
[boston-webservers]
www-bos-1.example.com
www-bos-2.example.com
[atlanta-dbservers]
db-atl-1.example.com
db-atl-2.example.com
[boston-dbservers]
db-bos-1.example.com
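If you also want umbrella groups that span geographies, children groups can provide them. A sketch:

# webservers in all geos
[webservers:children]
atlanta-webservers
boston-webservers

# dbservers in all geos
[dbservers:children]
atlanta-dbservers
boston-dbservers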
Group And Host Variables

Now, groups are nice for organization, but that's not all groups are good for. You can also assign variables to them! For instance, atlanta has its own NTP servers, so when setting up ntp.conf, we should use them. Let's set those now:
---
# file: group_vars/atlanta
ntp: ntp-atlanta.example.com
backup: backup-atlanta.example.com
Variables aren’t just for geographic information either! Maybe the webservers have some configuration that doesn’t
make sense for the database servers:
---
# file: group_vars/webservers
apacheMaxRequestsPerChild: 3000
apacheMaxClients: 900
If we had any default values, or values that were universally true, we would put them in a file called group_vars/all:
---
# file: group_vars/all
ntp: ntp-boston.example.com
backup: backup-boston.example.com
We can define specific hardware variance in systems in a host_vars file, but avoid doing this unless you need to:
---
# file: host_vars/db-bos-1.example.com
foo_agent_port: 86
bar_agent_port: 99
Top Level Playbooks Are Separated By Role

In site.yml, we include a playbook that defines our entire infrastructure. Note this is SUPER short, because it's just including some other playbooks. Remember, playbooks are nothing more than lists of plays:
---
# file: site.yml
- include: webservers.yml
- include: dbservers.yml
In a file like webservers.yml (also at the top level), we simply map the configuration of the webservers group to the
roles performed by the webservers group. Also notice this is incredibly short. For example:
---
# file: webservers.yml
- hosts: webservers
  roles:
    - common
    - webtier
Task And Handler Organization For A Role

Below is an example tasks file that explains how a role works. Our common role here just sets up NTP, but it could do more if we wanted:
---
# file: roles/common/tasks/main.yml
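# a sketch of typical contents, consistent with the 'restart ntpd' handler below
- name: be sure ntp is installed
  yum: name=ntp state=installed
  tags: ntp

- name: be sure ntp is configured
  template: src=ntp.conf.j2 dest=/etc/ntp.conf
  notify:
    - restart ntpd

- name: be sure ntpd is running and enabled
  service: name=ntpd state=running enabled=yes
  tags: ntp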
Here is an example handlers file. As a review, handlers are only fired when certain tasks report changes, and are run at
the end of each play:
---
# file: roles/common/handlers/main.yml
- name: restart ntpd
  service: name=ntpd state=restarted
What This Organization Enables (Examples)

With this setup you can reconfigure everything with "ansible-playbook -i production site.yml", or target a subset, for instance just the Boston webservers with "--limit boston". What about just the first 10, and then the next 10?
ansible-playbook -i production webservers.yml --limit boston[0-10]
ansible-playbook -i production webservers.yml --limit boston[10-20]
And there are some useful commands to know (at least in 1.1 and higher):
# confirm what task names would be run if I ran this command and said "just ntp tasks"
ansible-playbook -i production webservers.yml --tags ntp --list-tasks
Deployment vs Configuration Organization

The above setup models a typical configuration topology. When doing multi-tier deployments, there are going to be
some additional playbooks that hop between tiers to roll out an application. In this case, ‘site.yml’ may be augmented
by playbooks like ‘deploy_exampledotcom.yml’ but the general concepts can still apply.
Consider “playbooks” as a sports metaphor – you don’t have to just have one set of plays to use against your infras-
tructure all the time – you can have situational plays that you use at different times and for different purposes.
Ansible allows you to deploy and configure using the same tool, so you would likely reuse groups and just keep the
OS configuration in separate playbooks from the app deployment.
Stage vs Production
As also mentioned above, a good way to keep your stage (or testing) and production environments separate is to use a
separate inventory file for stage and production. This way you pick with -i what you are targeting. Keeping them all
in one file can lead to surprises!
Testing things in a stage environment before trying in production is always a great idea. Your environments need not
be the same size and you can use group variables to control the differences between those environments.
Rolling Updates
Understand the 'serial' keyword. When updating a webserver farm, you really want to use it to control how many machines you are updating at once in each batch.
See Delegation, Rolling Updates, and Local Actions.
Always Mention The State

The 'state' parameter is optional to a lot of modules. Whether 'state=present' or 'state=absent', it's always best to leave that parameter in your playbooks to make it clear, especially as some modules support additional states.
Group By Roles
A system can be in multiple groups. See Inventory and Patterns. Having groups named after things like webservers
and dbservers is repeated in the examples because it’s a very powerful concept.
This allows playbooks to target machines based on role, as well as to assign role specific variables using the group
variable system.
See Playbook Roles and Include Statements.
Operating System and Distribution Variance

When dealing with a parameter that is different between two different operating systems, the best way to handle this is by using the group_by module.
This makes a dynamic group of hosts matching certain criteria, even if that group is not defined in the inventory file:
---
- hosts: all
  tasks:
    - group_by: key={{ ansible_distribution }}

- hosts: CentOS
  gather_facts: False
  tasks:
    - # tasks that only happen on CentOS go here
If group-specific settings are needed, this can also be done. For example:
---
# file: group_vars/all
asdf: 10
---
# file: group_vars/CentOS
asdf: 42
In the above example, CentOS machines get the value of ‘42’ for asdf, but other machines get ‘10’.
Whitespace and Comments

Generous use of whitespace to break things up, and use of comments (which start with '#'), is encouraged.
Always Name Tasks

It is possible to leave off the 'name' for a given task, though it is recommended to provide a description of why something is being done instead. This name is shown when the playbook is run.
Keep It Simple
When you can do something simply, do something simply. Do not reach to use every feature of Ansible together, all
at once. Use what works for you. For example, you will probably not need vars, vars_files, vars_prompt
and --extra-vars all at once, while also using an external inventory file.
Version Control
Use version control. Keep your playbooks and inventory file in git (or another version control system), and commit
when you make changes to them. This way you have an audit trail describing when and why you changed the rules
that are automating your infrastructure.
See also:
YAML Syntax Learn about YAML syntax
Playbooks Review the basic playbook features
About Modules Learn about available modules
Developing Modules Learn how to extend Ansible by writing your own modules
Patterns Learn about how to select hosts
Github examples directory Complete playbook files from the github project source
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
1.4 Playbooks: Special Topics

Here are some playbook features that not everyone may need to learn, but can be quite useful for particular applications. Browsing these topics is recommended as you may find some useful tips here, but feel free to learn the basics of Ansible first and adopt these only if they seem relevant or useful to your environment.
1.4.1 Accelerated Mode

Are you running Ansible 1.5 or later? If so, you may not need accelerate mode due to a new feature called "SSH pipelining" and should read the pipelining section of the documentation.

For users on 1.5 and later, accelerate mode only makes sense if you are (A) managing from an Enterprise Linux 6 or earlier control machine, which is still on paramiko, or (B) can't enable TTYs with sudo as described in the pipelining docs.
If you can use pipelining, Ansible will reduce the number of files transferred over the wire, making everything much more efficient, and performance will be on par with accelerate mode in nearly all cases, possibly excluding very large file transfers. Because fewer moving parts are involved, pipelining is better than accelerate mode for nearly all use cases.
Accelerate mode remains around in support of EL6 control machines and other constrained environments.
While OpenSSH using the ControlPersist feature is quite fast and scalable, there is a certain small amount of overhead
involved in using SSH connections. While many people will not encounter a need, if you are running on a platform
that doesn’t have ControlPersist support (such as an EL6 control machine), you’ll probably be even more interested in
tuning options.
Accelerate mode is there to help connections work faster, but still uses SSH for initial secure key exchange. There is
no additional public key infrastructure to manage, and this does not require things like NTP or even DNS.
Accelerated mode can be anywhere from 2-6x faster than SSH with ControlPersist enabled, and 10x faster than
paramiko.
Accelerated mode works by launching a temporary daemon over SSH. Once the daemon is running, Ansible will
connect directly to it via a socket connection. Ansible secures this communication by using a temporary AES key that
is exchanged during the SSH connection (this key is different for every host, and is also regenerated periodically).
By default, Ansible will use port 5099 for the accelerated connection, though this is configurable. Once running, the
daemon will accept connections for 30 minutes, after which time it will terminate itself and need to be restarted over
SSH.
Accelerated mode offers several improvements over the (deprecated) original fireball mode from which it was based:
• No bootstrapping is required, only a single line needs to be added to each play you wish to run in accelerated
mode.
• Support for sudo commands (see below for more details and caveats) is available.
• There are fewer requirements. ZeroMQ is no longer required, nor are there any special packages beyond python-keyczar.
• python 2.5 or higher is required.
In order to use accelerated mode, simply add accelerate: true to your play:
---
- hosts: all
  accelerate: true
  tasks:
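    # any ordinary tasks follow; this trivial one is illustrative
    - command: echo {{ item }}
      with_items:
        - foo
        - bar
        - baz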
If you wish to change the port Ansible will use for the accelerated connection, just add the accelerate_port option:
---
- hosts: all
  accelerate: true
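  # any free port will do; 10000 is illustrative
  accelerate_port: 10000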
The accelerate_port option can also be specified in the environment variable ACCELERATE_PORT, or in your ansi-
ble.cfg configuration:
[accelerate]
accelerate_port = 5099
As noted above, accelerated mode also supports running tasks via sudo, however there are two important caveats:
• You must remove requiretty from your sudoers options.
• Prompting for the sudo password is not yet supported, so the NOPASSWD option is required for sudo’ed
commands.
1.4.2 Asynchronous Actions and Polling

By default tasks in playbooks block, meaning the connections stay open until the task is done on each node. This may not always be desirable, or you may be running operations that take longer than the SSH timeout.
The easiest way to do this is to kick them off all at once and then poll until they are done.
You will also want to use asynchronous mode on very long running operations that might be subject to timeout.
To launch a task asynchronously, specify its maximum runtime and how frequently you would like to poll for status.
The default poll value is 10 seconds if you do not specify a value for poll:
---
- hosts: all
  remote_user: root
  tasks:
    - name: simulate long running op (15 sec), wait for up to 45, poll every 5
      command: /bin/sleep 15
      async: 45
      poll: 5
Note: There is no default for the async time limit. If you leave off the ‘async’ keyword, the task runs synchronously,
which is Ansible’s default.
Alternatively, if you do not need to wait on the task to complete, you may “fire and forget” by specifying a poll value
of 0:
---
- hosts: all
  remote_user: root
  tasks:
    - name: simulate long running op, allow to run for 45, fire and forget
      command: /bin/sleep 15
      async: 45
      poll: 0
Note: You shouldn’t “fire and forget” with operations that require exclusive locks, such as yum transactions, if you
expect to run other commands later in the playbook against those same resources.
Note: Using a higher value for --forks will result in kicking off asynchronous tasks even faster. This also increases
the efficiency of polling.
See also:
Playbooks An introduction to playbooks
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.4.3 Check Mode ("Dry Run")

Topics
• Check Mode (“Dry Run”)
– Running a task in check mode
– Showing Differences with --diff
When ansible-playbook is executed with --check it will not make any changes on remote systems. Instead, any module instrumented to support 'check mode' (which includes most of the primary core modules, though it is not required that all modules do this) will report what changes they would have made rather than making them. Modules that do not support check mode will take no action and will not report what changes they might have made.
Check mode is just a simulation, and if you have steps that use conditionals that depend on the results of prior
commands, it may be less useful for you. However it is great for one-node-at-time basic configuration management
use cases.
Example:
ansible-playbook foo.yml --check
As a reminder, a task with a when clause evaluated to false will still be skipped, even if it has an always_run clause evaluated to true.
1.4.4 Delegation, Rolling Updates, and Local Actions

Topics
• Delegation, Rolling Updates, and Local Actions
– Rolling Update Batch Size
– Maximum Failure Percentage
– Delegation
– Local Playbooks
Being designed for multi-tier deployments since the beginning, Ansible is great at doing things on one host on behalf
of another, or doing local steps with reference to some remote hosts.
This in particular is very applicable when setting up continuous deployment infrastructure or zero downtime rolling updates, where you might be talking with load balancers or monitoring systems.
Additional features allow for tuning the orders in which things complete, and assigning a batch window size for how
many machines to process at once during a rolling update.
This section covers all of these features. For examples of these items in use, please see the ansible-examples repository.
There are quite a few examples of zero-downtime update procedures for different kinds of applications.
You should also consult the About Modules section; various modules like 'ec2_elb', 'nagios', 'bigip_pool', and 'netscaler' dovetail neatly with the concepts mentioned here.

You'll also want to read up on Playbook Roles and Include Statements, as the 'pre_tasks' and 'post_tasks' concepts are the places where you would typically call these modules.
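Rolling Update Batch Size

By default, Ansible will try to manage all of the machines referenced in a play in parallel. For a rolling update use case, you can define how many hosts Ansible should manage at a single time with the 'serial' keyword; a sketch:

- name: test play
  hosts: webservers
  serial: 3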
In the above example, if we had 100 hosts, 3 hosts in the group ‘webservers’ would complete the play completely
before moving on to the next 3 hosts.
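Maximum Failure Percentage

In some situations, such as the rolling update above, it may be desirable to abort the play when a certain threshold of failures has been reached. As of version 1.3 you can set a maximum failure percentage on a play; a sketch consistent with the numbers below:

- hosts: webservers
  max_fail_percentage: 30
  serial: 10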
In the above example, if more than 3 of the 10 servers in the group were to fail, the rest of the play would be aborted.
Note: The percentage set must be exceeded, not equaled. For example, if serial were set to 4 and you wanted the task
to abort when 2 of the systems failed, the percentage should be set at 49 rather than 50.
Delegation
If you want to perform a task on one host with reference to other hosts, use the 'delegate_to' keyword on a task. This is ideal for placing nodes in a load balanced pool, or removing them:

---
- hosts: webservers
  serial: 5
  tasks:
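    # a sketch: pull each host out of a pool, update it, put it back
    # (the pool scripts are illustrative)
    - name: take out of load balancer pool
      command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
      delegate_to: 127.0.0.1

    - name: actual steps would go here
      yum: name=acme-web-stack state=latest

    - name: add back to load balancer pool
      command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
      delegate_to: 127.0.0.1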
These commands will run on 127.0.0.1, which is the machine running Ansible. There is also a shorthand syntax that
you can use on a per-task basis: ‘local_action’. Here is the same playbook as above, but using the shorthand syntax
for delegating to 127.0.0.1:
---
# ...
tasks:
# ...
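# a sketch mirroring the delegate_to example above
- name: take out of load balancer pool
  local_action: command /usr/bin/take_out_of_pool {{ inventory_hostname }}
# ...
- name: add back to load balancer pool
  local_action: command /usr/bin/add_back_to_pool {{ inventory_hostname }}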
A common pattern is to use a local action to call ‘rsync’ to recursively copy files to the managed servers. Here is an
example:
---
# ...
tasks:
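# a sketch; assumes rsync is installed and SSH auth is non-interactive
- name: recursively copy files from management server to target
  local_action: command rsync -a /path/to/files {{ inventory_hostname }}:/path/to/target/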
Note that you must have passphrase-less SSH keys or an ssh-agent configured for this to work, otherwise rsync will
need to ask for a passphrase.
Local Playbooks
It may be useful to use a playbook locally, rather than by connecting over SSH. This can be useful for assuring the
configuration of a system by putting a playbook on a crontab. This may also be used to run a playbook inside a OS
installer, such as an Anaconda kickstart.
To run an entire playbook locally, just set the "hosts:" line to "hosts: 127.0.0.1" and then run the playbook like so:
ansible-playbook playbook.yml --connection=local
Alternatively, a local connection can be used in a single playbook play, even if other plays in the playbook use the
default remote connection type:
- hosts: 127.0.0.1
  connection: local
See also:
Playbooks An introduction to playbooks
Ansible Examples on GitHub Many examples of full-stack deployments
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.4.5 Setting the Environment (and Working With Proxies)

It is quite possible that you may need to get package updates through a proxy, or even get some package updates through a proxy and access other packages directly. Ansible makes it easy to configure the environment of a task with the 'environment' keyword. Here is an example:

- hosts: all
  remote_user: root
  tasks:
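    # a sketch: install a package through a proxy (the URL is illustrative)
    - apt: name=cobbler state=installed
      environment:
        http_proxy: http://proxy.example.com:8080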
The environment can also be stored in a variable, and accessed like so:
- hosts: all
  remote_user: root
  tasks:
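    # assumes a 'proxy_env' hash like the group_vars example below
    - apt: name=cobbler state=installed
      environment: proxy_env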
While just proxy settings were shown above, any number of settings can be supplied. The most logical place to define
an environment hash might be a group_vars file, like so:
---
# file: group_vars/boston
ntp_server: ntp.bos.example.com
backup: bak.bos.example.com
proxy_env:
  http_proxy: http://proxy.bos.example.com:8080
  https_proxy: http://proxy.bos.example.com:8080
See also:
Playbooks An introduction to playbooks
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.4.6 Error Handling In Playbooks

Topics
• Error Handling In Playbooks
– Ignoring Failed Commands
– Controlling What Defines Failure
– Overriding The Changed Result
Ansible normally has defaults that make sure to check the return codes of commands and modules, and it fails fast, forcing an error to be dealt with unless you decide otherwise.

Sometimes a command that returns a non-zero exit code isn't an error. Sometimes a command might not always need to report that it 'changed' the remote system. This section describes how to change the default behavior of Ansible for certain tasks so output and error handling behavior is as desired.
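Ignoring Failed Commands

Generally playbooks will stop executing any more steps on a host that has a task fail. Sometimes, though, you want to continue on. To do so, mark the task with ignore_errors; a sketch (the command is illustrative):

- name: this will not be counted as a failure
  command: /bin/false
  ignore_errors: yes

Controlling What Defines Failure

Starting with version 1.4, you can also tell Ansible what "failure" means for a task with the failed_when keyword. For example, you can treat a command as failed only when a particular word appears in its error output:

- name: this command prints FAILED when it fails
  command: /usr/bin/example-command -x --verbose
  register: command_result
  failed_when: "'FAILED' in command_result.stderr"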
Note that the above system only governs the failure of the particular task, so if you have an undefined variable used, it
will still raise an error that users will need to address.
The same effect can also be achieved by combining ignore_errors with an explicit fail task:

- name: this command prints FAILED when it fails
  command: /usr/bin/example-command -x --verbose
  register: command_result
  ignore_errors: True

- name: fail the play if the previous command did not succeed
  fail: msg="the command failed"
  when: "'FAILED' in command_result.stderr"
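Overriding The Changed Result

Starting with version 1.3, you can also tell Ansible when a task should or should not report 'changed' status with the changed_when keyword; a sketch (the commands are illustrative):

tasks:
  - shell: /usr/bin/billybass --mode="take me to the river"
    register: bass_result
    changed_when: "bass_result.rc != 2"

  # this will never report 'changed' status
  - shell: wall 'beep'
    changed_when: False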
See also:
Playbooks An introduction to playbooks
Best Practices Best practices in playbooks
Conditionals Conditional statements in playbooks
Variables All about variables
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.4.7 Using Lookups

Lookup plugins allow access to data in Ansible from outside sources. This can include the filesystem but also external datastores. These values are then made available using the standard templating system in Ansible, and are typically used to load variables or templates with information from those systems.
Note: This is considered an advanced feature, and many users will probably not rely on these features.
Topics
• Using Lookups
– Intro to Lookups: Getting File Contents
– The Password Lookup
– More Lookups
Intro to Lookups: Getting File Contents

The contents of a file on the control machine can be pulled into a variable with the 'file' lookup plugin. For example:
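---
- hosts: all
  vars:
    # a minimal sketch; /etc/foo.txt is an illustrative path
    contents: "{{ lookup('file', '/etc/foo.txt') }}"
  tasks:
    - debug: msg="the value of foo.txt is {{ contents }}"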
The Password Lookup

Note: A great alternative to the password lookup plugin, if you don't need to generate random passwords on a per-host basis, would be to use Vault. Read the documentation there and consider using it first; it will be more desirable for most applications.
password generates a random plaintext password and stores it in a file at a given filepath.
(Docs about crypted save modes are pending)
If the file exists previously, it will retrieve its contents, behaving just like with_file. Usage of variables like "{{ inventory_hostname }}" in the filepath can be used to set up random passwords per host (which simplifies password management in 'host_vars' variables).
Generated passwords contain a random mix of upper and lowercase ASCII letters, the numbers 0-9 and punctuation
(”. , : - _”). The default length of a generated password is 20 characters. This length can be changed by passing an
extra parameter:
---
- hosts: all
  tasks:
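    # a sketch showing the length parameter (names and path are illustrative)
    - mysql_user: name={{ client }}
                  password="{{ lookup('password', '/tmp/passwordfile length=15') }}"
                  priv={{ client }}_{{ tier }}_{{ role }}.*:ALL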
(...)
Note: If the file already exists, no data will be written to it. If the file has contents, those contents will be read in as the password. Empty files cause the password to return as an empty string.
Starting in version 1.4, password accepts a "chars" parameter to allow defining a custom character set in the generated passwords. It accepts a comma-separated list of names that are either string module attributes (ascii_letters, digits, etc.) or are used literally:

---
- hosts: all
  tasks:
    # create a mysql user with a random password using only ascii letters:
    - mysql_user: name={{ client }}
                  password="{{ lookup('password', '/tmp/passwordfile chars=ascii') }}"
                  priv={{ client }}_{{ tier }}_{{ role }}.*:ALL

    # create a mysql user with a random password using many different char sets:
    - mysql_user: name={{ client }}
                  password="{{ lookup('password', '/tmp/passwordfile chars=ascii,numbers,digits,hexdigits') }}"
                  priv={{ client }}_{{ tier }}_{{ role }}.*:ALL

(...)
To enter a comma use two commas ',,' somewhere, preferably at the end. Quotes and double quotes are not supported.
More Lookups
Note: This feature is very infrequently used in Ansible. You may wish to skip this section.
Various lookup plugins allow additional ways to iterate over data. In Loops you will learn how to use them to walk
over collections of numerous types. However, they can also be used to pull in data from remote sources, such as shell
commands or even key value stores. This section will cover lookup plugins in this capacity.
Here are some examples:
---
- hosts: all
  tasks:
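    # a few illustrative lookups; 'env' and 'pipe' are standard plugins
    - debug: msg="{{ lookup('env', 'HOME') }} is an environment variable"

    - debug: msg="{{ lookup('pipe', 'date') }} is the raw result of running this command"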
As an alternative you can also assign lookup plugins to variables or use them elsewhere. These macros are evaluated each time they are used in a task (or template):

vars:
  motd_value: "{{ lookup('file', '/etc/motd') }}"
tasks:
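  # the lookup runs each time motd_value is referenced
  - debug: msg="motd value is {{ motd_value }}"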
See also:
Playbooks An introduction to playbooks
Conditionals Conditional statements in playbooks
Variables All about variables
Loops Looping in playbooks
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.4.8 Prompts
When running a playbook, you may wish to prompt the user for certain input, and can do so with the ‘vars_prompt’
section.
A common use for this might be for asking for sensitive data that you do not want to record.
This has uses beyond security, for instance, you may use the same playbook for all software releases and would prompt
for a particular release version in a push-script.
Here is a basic example:

---
- hosts: all
  remote_user: root
  vars:
    from: "camelot"
  vars_prompt:
    name: "what is your name?"
    quest: "what is your quest?"
    favcolor: "what is your favorite color?"
If you have a variable that changes infrequently, it might make sense to provide a default value that can be overridden.
This can be accomplished using the default argument:
vars_prompt:
  - name: "release_version"
    prompt: "Product release version"
    default: "1.0"
An alternative form of vars_prompt allows for hiding input from the user, and may later support some other options,
but otherwise works equivalently:
vars_prompt:
  - name: "some_password"
    prompt: "Enter password"
    private: yes

  - name: "release_version"
    prompt: "Product release version"
    private: no
If Passlib is installed, vars_prompt can also crypt the entered value so you can use it, for instance, with the user module
to define a password:
vars_prompt:
  - name: "my_password2"
    prompt: "Enter password2"
    private: yes
    encrypt: "md5_crypt"
    confirm: yes
    salt_size: 7
1.4.9 Tags
If you have a large playbook it may become useful to be able to run a specific part of the configuration without running
the whole playbook.
Both plays and tasks support a “tags:” attribute for this reason.
Example:
tasks:
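    # a sketch: tag related tasks so parts of the playbook can be selected
    - yum: name={{ item }} state=installed
      with_items:
        - httpd
        - memcached
      tags:
        - packages

    - template: src=templates/src.j2 dest=/etc/foo.conf
      tags:
        - configuration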
If you wanted to just run the “configuration” and “packages” part of a very long playbook, you could do this:
ansible-playbook example.yml --tags "configuration,packages"
On the other hand, if you want to run a playbook without certain tasks, you could do this:
ansible-playbook example.yml --skip-tags "notification"
You may also apply tags to roles:

roles:
  - { role: webserver, port: 5000, tags: [ 'web', 'foo' ] }
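And to include statements (the file name is illustrative):

- include: foo.yml tags=web,foo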
Both of these have the function of tagging every single task inside the include statement.
See also:
Playbooks An introduction to playbooks
Playbook Roles and Include Statements Playbook organization by roles
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.4.10 Vault
Topics
• Vault
– What Can Be Encrypted With Vault
– Creating Encrypted Files
– Editing Encrypted Files
– Rekeying Encrypted Files
– Encrypting Unencrypted Files
– Decrypting Encrypted Files
– Running a Playbook With Vault
New in Ansible 1.5, "Vault" is a feature of Ansible that allows keeping encrypted data in source control.

To enable this feature, a command line tool, ansible-vault, is used to edit files, and a command line flag --ask-vault-pass or --vault-password-file is used.
What Can Be Encrypted With Vault

The vault feature can encrypt any structured data file used by Ansible. This can include "group_vars/" or "host_vars/" inventory variables, variables loaded by "include_vars" or "vars_files", or variable files passed on the ansible-playbook command line with "-e @file.yml" or "-e @file.json". Role variables and defaults are also included!
Because Ansible tasks, handlers, and so on are also data, these too can be encrypted with vault. If you'd like to not betray even what variables you are using, you can go as far as to keep an individual task file entirely encrypted. However, that might be a little much and could annoy your coworkers :)
Creating Encrypted Files

To create a new encrypted data file, run the following command:

ansible-vault create foo.yml

First you will be prompted for a password. The password used with vault currently must be the same for all files you wish to use together at the same time.
After providing a password, the tool will launch whatever editor you have defined with $EDITOR, and defaults to vim.
Once you are done with the editor session, the file will be saved as encrypted data.
The default cipher is AES (which is shared-secret based).
Editing Encrypted Files

To edit an encrypted file in place, use the ansible-vault edit command. This command will decrypt the file to a temporary file and allow you to edit the file, saving it back when done and removing the temporary file:
ansible-vault edit foo.yml
Rekeying Encrypted Files

Should you wish to change your password on a vault-encrypted file or files, you can do so with the rekey command:
ansible-vault rekey foo.yml bar.yml baz.yml
This command can rekey multiple data files at once and will ask for the original password and also the new password.
Encrypting Unencrypted Files

If you have existing files that you wish to encrypt, use the ansible-vault encrypt command. This command can operate on multiple files at once:
ansible-vault encrypt foo.yml bar.yml baz.yml
Decrypting Encrypted Files

If you have existing files that you no longer want to keep encrypted, you can permanently decrypt them by running the ansible-vault decrypt command. This command will save them unencrypted to the disk, so be sure you do not want ansible-vault edit instead:
ansible-vault decrypt foo.yml bar.yml baz.yml
Running a Playbook With Vault

To run a playbook that contains vault-encrypted data files, you must pass one of two flags. To specify the vault password interactively:
ansible-playbook site.yml --ask-vault-pass
This prompt will then be used to decrypt (in memory only) any vault encrypted files that are accessed. Currently this requires that all files be encrypted with the same password.
Alternatively, passwords can be specified with a file. If this is done, be careful to ensure permissions on the file are
such that no one else can access your key, and do not add your key to source control:
ansible-playbook site.yml --vault-password-file ~/.vault_pass.txt
1.5 About Modules

1.5.1 Introduction
Ansible ships with a number of modules (called the ‘module library’) that can be executed directly on remote hosts or
through Playbooks.
Users can also write their own modules. These modules can control system resources, like services, packages, or files
(anything really), or handle executing system commands.
Let’s review how we execute three different modules from the command line:
ansible webservers -m service -a "name=httpd state=running"
ansible webservers -m ping
ansible webservers -m command -a "/sbin/reboot -t now"
Each module supports taking arguments. Nearly all modules take key=value arguments, space delimited. Some
modules take no arguments, and the command/shell modules simply take the string of the command you want to run.
From playbooks, Ansible modules are executed in a very similar way:
- name: reboot the servers
action: command /sbin/reboot -t now
All modules technically return JSON format data, though if you are using the command line or playbooks, you don’t
really need to know much about that. If you’re writing your own module, you care, and this means you do not have to
write modules in any particular language – you get to choose.
Modules are idempotent, meaning they will seek to avoid changes to the system unless a change needs to be made.
When using Ansible playbooks, these modules can trigger ‘change events’ in the form of notifying ‘handlers’ to run
additional tasks.
Documentation for each module can be accessed from the command line with the ansible-doc tool:
ansible-doc yum
See also:
Introduction To Ad-Hoc Commands Examples of using modules in /usr/bin/ansible
Playbooks Examples of using modules with /usr/bin/ansible-playbook
Developing Modules How to write your own modules
Python API Examples of using modules with the Python API
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net #ansible IRC chat channel
accelerate

• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# To use accelerate mode, simply add "accelerate: true" to your play. The initial
# key exchange and starting up of the daemon will occur over SSH, but all commands and
# subsequent actions will be conducted over the raw socket connection using AES encryption

- hosts: devservers
  accelerate: true
  tasks:
    - command: /usr/bin/anything
Note: See the advanced playbooks chapter for more about using accelerated mode.
acl

• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The “acl” module requires that acls are enabled on the target filesystem and that the setfacl and getfacl binaries
are installed.
add_host - add a host (and alternatively a group) to the ansible-playbook in-memory inventory
• Synopsis
• Options
• Examples
Synopsis
Use variables to create new hosts and groups in inventory for use in later plays of the same playbook. Takes variables
so you can define the new hosts more fully.
Options
Examples
airbrake_deployment

• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- airbrake_deployment: token=AAAAAA
                       environment='staging'
                       user='ansible'
                       revision=4.2
apt

• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Update the repository cache and update package "nginx" to latest version using default release squeeze-backports
- apt: pkg=nginx state=latest default_release=squeeze-backports update_cache=yes
# Only run "update_cache=yes" if the last one is more than 3600 seconds ago
- apt: update_cache=yes cache_valid_time=3600
Note: Three of the upgrade modes (full, safe and its alias yes) require aptitude, otherwise apt-get
suffices.
apt_key

• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: As a sanity check, the downloaded key id must match the one specified.
apt_repository

• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# On Ubuntu target: add nginx stable repository from PPA and install its signing key.
# On Debian target: adding PPA is not available, so it will fail immediately.
apt_repository: repo='ppa:nginx/stable'
Note: This module works on Debian and Ubuntu and requires python-apt and python-pycurl packages.
Note: This module supports Debian Squeeze (version 6) as well as its successors.
Note: This module treats Debian and Ubuntu distributions separately. So PPAs can be installed only on Ubuntu machines.
arista_interface

• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
  - name: enable interface Ethernet 1
    action: arista_interface interface_id=Ethernet1 admin=up speed=10g duplex=full logging=true
Note: The Netdev extension for EOS must be installed and active in the available extensions (show extensions from
the EOS CLI)
arista_l2interface

• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
  - name: create switchport ethernet1 access port
    action: arista_l2interface interface_id=Ethernet1 logging=true
Note: The Netdev extension for EOS must be installed and active in the available extensions (show extensions from
the EOS CLI)
arista_lag

• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
  - name: create lag interface
    action: arista_lag interface_id=Port-Channel1 links=Ethernet1,Ethernet2 logging=true
Note: The Netdev extension for EOS must be installed and active in the available extensions (show extensions from
the EOS CLI)
arista_vlan

• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
  - name: create vlan 999
    action: arista_vlan vlan_id=999 logging=true
Note: The Netdev extension for EOS must be installed and active in the available extensions (show extensions from
the EOS CLI)
assemble

• Synopsis
• Options
• Examples
Synopsis
Assembles a configuration file from fragments. Often a particular program will take a single configuration file and will not support a conf.d style structure where it is easy to build up the configuration from multiple sources. assemble will take a directory of files that can be local or have already been transferred to the system, and concatenate them together to produce a destination file. Files are assembled in string sorting order. Puppet calls this idea "fragments".
Options
Examples
at

• Synopsis
• Options
• Examples
Synopsis
Use this module to schedule a command or script to run once in the future. All jobs are executed in the 'a' queue.
Options
Note: Requires at
Examples
authorized_key

• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Example using key data from a local file on the management machine
- authorized_key: user=charlie key="{{ lookup('file', '/home/charlie/.ssh/id_rsa.pub') }}"

# Using with_file
- name: Set up authorized_keys for the deploy user
  authorized_key: user=deploy
                  key="{{ item }}"
  with_file:
    - public_keys/doe-jane
    - public_keys/doe-john

# Using key_options:
- authorized_key: user=charlie
                  key="{{ lookup('file', '/home/charlie/.ssh/id_rsa.pub') }}"
                  key_options='no-port-forwarding,host="10.0.1.1"'
bigip_node

• Synopsis
• Options
• Examples
Synopsis
Options
Examples
---
# file bigip-test.yml
# ...
- hosts: bigip-test
  tasks:
    - name: Add node
      local_action: >
        bigip_node
        server=lb.mydomain.com
        user=admin
        password=mysecret
        state=present
        partition=matthite
        host="{{ ansible_default_ipv4["address"] }}"
        name="{{ ansible_default_ipv4["address"] }}"
# Note that the BIG-IP automatically names the node using the
# IP address specified in previous play’s host parameter.
# Future plays referencing this node no longer use the host
# parameter but instead use the name parameter.
# Alternatively, you could have specified a name with the
# name parameter when state=present.
- hosts: bigip-test
  tasks:
    - name: Modify node description
      local_action: >
        bigip_node
        server=lb.mydomain.com
        user=admin
        password=mysecret
        state=present
        partition=matthite
        name="{{ ansible_default_ipv4["address"] }}"
        description="Our best server yet"
bigip_pool

• Synopsis
• Options
• Examples
Synopsis
Options
Examples
---
# file bigip-test.yml
# ...
- hosts: localhost
  tasks:
    - name: Create pool
      local_action: >
        bigip_pool
        server=lb.mydomain.com
        user=admin
        password=mysecret
        state=present
        name=matthite-pool
        partition=matthite
        lb_method=least_connection_member
        slow_ramp_time=120

- hosts: bigip-test
  tasks:
    - name: Add pool member
      local_action: >
        bigip_pool
        server=lb.mydomain.com
        user=admin
        password=mysecret
        state=present
        name=matthite-pool
        partition=matthite
        host="{{ ansible_default_ipv4["address"] }}"
        port=80

- hosts: localhost
  tasks:
    - name: Delete pool
      local_action: >
        bigip_pool
        server=lb.mydomain.com
        user=admin
        password=mysecret
        state=absent
        name=matthite-pool
        partition=matthite
bigip_pool_member

• Synopsis
• Options
• Examples
Synopsis
Options
Examples
---
# file bigip-test.yml
# ...
- hosts: bigip-test
  tasks:
    - name: Add pool member
      local_action: >
        bigip_pool_member
        server=lb.mydomain.com
        user=admin
        password=mysecret
        state=present
        pool=matthite-pool
        partition=matthite
        host="{{ ansible_default_ipv4["address"] }}"
        port=80
        description="web server"
        connection_limit=100
        rate_limit=50
        ratio=2
Author [email protected]
boundary_meter

• Synopsis
• Options
• Examples
Synopsis
Options
Note: bprobe is required to send data, but not to register a meter.
Examples
command

• Synopsis
• Options
• Examples
Synopsis
The command module takes the command name followed by a list of space-delimited arguments. The given command
will be executed on all selected nodes. It will not be processed through the shell, so variables like $HOME and
operations like "<", ">", "|", and "&" will not work (use the shell module if you need these features).
Options
Examples
Note: If you want to run a command through the shell (say you are using <, >, |, etc), you actually want the shell
module instead. The command module is much more secure as it’s not affected by the user’s environment.
Note: creates, removes, and chdir can be specified after the command. For instance, if you only want to run
a command if a certain file does not exist, use this.
copy

• Synopsis
• Options
• Examples
Synopsis
The copy module copies a file on the local box to remote locations.
Options
Examples
# Copy a new "ntp.conf" file into place, backing up the original if it differs from the copied version
- copy: src=/https/www.scribd.com/mine/ntp.conf dest=/etc/ntp.conf owner=root group=root mode=644 backup=yes

# Copy a new "sudoers" file into place, after passing validation with visudo
- copy: src=/https/www.scribd.com/mine/sudoers dest=/etc/sudoers validate='visudo -cf %s'
Note: The "copy" module's recursive copy facility does not scale to lots (>hundreds) of files. As an alternative, see the synchronize module, which is a wrapper around rsync.
cron

• Synopsis
• Options
• Examples
Synopsis
Use this module to manage crontab entries. This module allows you to create named crontab entries, update, or
delete them. The module includes one line with the description of the crontab entry "#Ansible: <name>"
corresponding to the “name” passed to the module, which is used by future ansible/module calls to find/check the
state.
Options
Examples
# Ensure an old job is no longer present. Removes any job that is prefixed
# by "#Ansible: an old job" from the crontab
- cron: name="an old job" state=absent
datadog_event

• Synopsis
• Options
• Examples
Synopsis
Options
Examples
api_key="6873258723457823548234234234"
tags=aa,bb,cc
debug

• Synopsis
• Options
• Examples
Synopsis
This module prints statements during execution and can be useful for debugging variables or expressions without
necessarily halting the playbook. Useful for debugging together with the ‘when:’ directive.
Options
Examples
# Example that prints the loopback address and gateway for each host
- debug: msg="System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}"
- shell: /usr/bin/uptime
  register: result

- debug: var=result
digital_ocean

• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- digital_ocean: >
    state=present
    command=ssh
    name=my_ssh_key
    ssh_pub_key='ssh-rsa AAAA...'
    client_id=XXX
    api_key=XXX

- digital_ocean: >
    state=present
    command=droplet
    name=mydroplet
    client_id=XXX
    api_key=XXX
    size_id=1
    region_id=2
    image_id=3
    wait_timeout=500
  register: my_droplet

- debug: msg="ID is {{ my_droplet.droplet.id }}"
- debug: msg="IP is {{ my_droplet.droplet.ip_address }}"

- digital_ocean: >
    state=present
    command=droplet
    id=123
    name=mydroplet
    client_id=XXX
    api_key=XXX
    size_id=1
    region_id=2
    image_id=3
    wait_timeout=500

- digital_ocean: >
    state=present
    ssh_key_ids=id1,id2
    name=mydroplet
    client_id=XXX
    api_key=XXX
    size_id=1
    region_id=2
    image_id=3
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
virtualenv={{ virtualenv_dir }}
#Run the SmokeTest test case from the main app. Useful for testing deploys.
- django_manage: command=test app_path=django_dir apps=main.SmokeTest
Note: virtualenv (http://www.virtualenv.org) must be installed on the remote host if the virtualenv parameter is
specified.
Note: This module will create a virtualenv if the virtualenv parameter is specified and a virtualenv does not already
exist at the given location.
Note: This module assumes English error messages for the ‘createcachetable’ command to detect table existence,
unfortunately.
Note: To be able to use the migrate command, you must have south installed and added as an app in your settings
Note: To be able to use the collectstatic command, you must have enabled staticfiles in your settings
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The DNS Made Easy service requires that machines interacting with the API have the proper time and timezone
set. Be sure you are within a few seconds of actual time by using NTP.
Note: This module returns record(s) in the "result" element when 'state' is set to 'present'. This value can be
registered and used in your playbooks.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Start one docker container running tomcat in each host of the web group and bind tomcat's listening port to 8080
on the host:
- hosts: web
sudo: yes
tasks:
- name: run tomcat servers
docker: image=centos command="service tomcat6 start" ports=8080
The tomcat server's port is NAT'ed to a dynamic port on the host, but you can determine which port the server was
mapped to using docker_containers:
- hosts: web
sudo: yes
tasks:
- name: run tomcat servers
docker: image=centos command="service tomcat6 start" ports=8080 count=5
- name: Display IP address and port mappings for containers
debug: msg={{inventory_hostname}}:{{item['HostConfig']['PortBindings']['8080/tcp'][0]['HostPort']}}
with_items: docker_containers
Just as in the previous example, but iterates over the list of docker containers with a sequence:
- hosts: web
sudo: yes
vars:
start_containers_count: 5
tasks:
- name: run tomcat servers
docker: image=centos command="service tomcat6 start" ports=8080 count={{start_containers_count}}
- name: Display IP address and port mappings for containers
debug: msg="{{inventory_hostname}}:{{docker_containers[{{item}}][’HostConfig’][’PortBindings’][’8
with_sequence: start=0 end={{start_containers_count - 1}}
Stop and remove all of the running tomcat containers, and list the exit code from the stopped containers:
- hosts: web
sudo: yes
tasks:
- name: stop tomcat servers
docker: image=centos command="service tomcat6 start" state=absent
- name: Display return codes from stopped containers
debug: msg="Returned {{inventory_hostname}}:{{item}}"
with_items: docker_containers
- hosts: web
sudo: yes
tasks:
- name: run tomcat server
docker: image=centos name=tomcat command="service tomcat6 start" ports=8080
- hosts: web
sudo: yes
tasks:
- name: run tomcat servers
docker: image=centos name={{item}} command="service tomcat6 start" ports=8080
with_items:
- crookshank
- snowbell
- heathcliff
- felix
- sylvester
- hosts: web
sudo: yes
tasks:
- name: run tomcat servers
docker: image=centos name={{item}} command="service tomcat6 start" ports=8080
with_sequence: start=1 end=5 format=tomcat_%d.example.com
- hosts: web
sudo: yes
tasks:
- name: ensure redis container is running
docker: image=crosbymichael/redis name=redis
- hosts: web
sudo: yes
tasks:
- docker:
image: namespace/image_name
links:
- postgresql:db
- redis:redis
Create containers with options specified as strings and lists as comma-separated strings:
- hosts: web
sudo: yes
tasks:
- docker: image=namespace/image_name links=postgresql:db,redis:redis
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Build the docker image if required. The path should contain a Dockerfile to build the image:
- hosts: web
sudo: yes
tasks:
- name: check or build image
docker_image: path="/path/to/build/dir" name="my/app" state=present
- hosts: web
sudo: yes
tasks:
- name: check or build image
docker_image: path="/path/to/build/dir" name="my/app" state=build
- hosts: web
sudo: yes
tasks:
- name: run tomcat servers
docker_image: name="my/app" state=absent
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: Please note that the easy_install module can only install Python libraries. Thus this module is not
able to remove libraries. It is generally recommended to use the pip module, which you can first install using
easy_install.
Note: Also note that virtualenv must be installed on the remote host if the virtualenv parameter is specified.
• Synopsis
• Options
• Examples
Synopsis
Creates or terminates ec2 instances. When created, optionally waits for the instance to be 'running'. This module
has a dependency on python-boto >= 2.5.
Options
Examples
instance_type: c1.medium
image: emi-40603AD1
wait: yes
group: webserver
count: 3
wait: yes
wait_timeout: 500
count: 5
volumes:
- device_name: /dev/sdb
snapshot: snap-abcdef12
volume_size: 10
monitoring: yes
# VPC example
- local_action:
module: ec2
key_name: mykey
group_id: sg-1dc53f72
instance_type: m1.small
image: ami-6e649707
wait: yes
vpc_subnet_id: subnet-29e63245
assign_public_ip: yes
module: ec2
state: ’absent’
instance_ids: ’{{ ec2.instance_ids }}’
#
# Enforce that 5 instances with a tag "foo" are running
#
- local_action:
module: ec2
key_name: mykey
instance_type: c1.medium
image: emi-40603AD1
wait: yes
group: webserver
instance_tags:
foo: bar
exact_count: 5
count_tag: foo
#
# Enforce that 5 running instances named "database" with a "dbtype" of "postgres" exist
#
- local_action:
module: ec2
key_name: mykey
instance_type: c1.medium
image: emi-40603AD1
wait: yes
group: webserver
instance_tags:
Name: database
dbtype: postgres
exact_count: 5
count_tag:
Name: database
dbtype: postgres
#
# count_tag complex argument examples
#
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Deregister/Delete AMI
- local_action:
module: ec2_ami
aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
region: xxxxxx
image_id: ${instance.image_id}
delete_snapshot: True
state: absent
# Deregister AMI
- local_action:
module: ec2_ami
aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
region: xxxxxx
image_id: ${instance.image_id}
delete_snapshot: False
state: absent
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: This module will return public_ip on success, which will contain the public IP address associated with the
instance.
Note: There may be a delay between the time the Elastic IP is assigned and when the cloud instance is reachable
via the new address. Use wait_for and pause to delay further playbook execution until the instance is reachable, if
necessary.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
ec2_elb_lb - Creates or destroys an Amazon ELB. Returns information about the load balancer. Will
be marked changed when called only if state is changed.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
module: ec2_elb_lb
name: "test-please-delete"
state: present
zones:
- us-east-1d
listeners:
- protocol: http
load_balancer_port: 80
instance_port: 80
health_check:
ping_protocol: http # options are http, https, ssl, tcp
ping_port: 80
ping_path: "/index.html" # not required for tcp or ssl
response_timeout: 5 # seconds
interval: 30 # seconds
unhealthy_threshold: 2
healthy_threshold: 10
# Normally, this module will purge any listeners that exist on the ELB
# but aren’t specified in the listeners parameter. If purge_listeners is
# false it leaves them alone
- local_action:
module: ec2_elb_lb
name: "test-please-delete"
state: present
zones:
- us-east-1a
- us-east-1d
listeners:
- protocol: http
load_balancer_port: 80
instance_port: 80
purge_listeners: no
# Normally, this module will leave availability zones that are enabled
# on the ELB alone. If purge_zones is true, then any extraneous zones
# will be removed
- local_action:
module: ec2_elb_lb
name: "test-please-delete"
state: present
zones:
- us-east-1a
- us-east-1d
listeners:
- protocol: http
load_balancer_port: 80
instance_port: 80
purge_zones: yes
• Synopsis
• Examples
Synopsis
Examples
# Conditional example
- name: Gather facts
action: ec2_facts
- name: Conditional
action: debug msg="This instance is a t1.micro"
when: ansible_ec2_instance_type == "t1.micro"
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Creates a new ec2 key pair named ‘example‘ if not present, returns generated
# private key
- name: example ec2 key
local_action:
module: ec2_key
name: example
# Creates a new ec2 key pair named ‘example‘ if not present using provided key
# material
- name: example2 ec2 key
local_action:
module: ec2_key
name: example2
key_material: ’ssh-rsa AAAAxyz...== [email protected]’
state: present
# Creates a new ec2 key pair named ‘example‘ if not present using provided key
# material
- name: example3 ec2 key
local_action:
module: ec2_key
name: example3
key_material: "{{ item }}"
with_file: /path/to/public_key.id_rsa.pub
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Name: ubervol
env: prod
ec2_vol - create and attach a volume, return volume id and device map
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
instance: XXXXXX
volume_size: 5
iops: 200
device_name: sdd
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
local_action:
module: ec2_vpc
state: present
cidr_block: 172.23.0.0/16
region: us-west-2
# Full creation example with subnets and optional availability zones.
# The absence or presence of subnets deletes or creates them respectively.
local_action:
module: ec2_vpc
state: present
cidr_block: 172.22.0.0/16
subnets:
- cidr: 172.22.1.0/24
az: us-west-2c
- cidr: 172.22.2.0/24
az: us-west-2b
- cidr: 172.22.3.0/24
az: us-west-2a
internet_gateway: True
route_tables:
- subnets:
- 172.22.2.0/24
- 172.22.3.0/24
routes:
- dest: 0.0.0.0/0
gw: igw
- subnets:
- 172.22.1.0/24
routes:
- dest: 0.0.0.0/0
gw: igw
region: us-west-2
register: vpc
# Removal of a VPC by id
local_action:
module: ec2_vpc
state: absent
vpc_id: vpc-aaaaaaa
region: us-west-2
If you have added elements not managed by this module, e.g. instances, NATs, etc., then
the delete will fail until those dependencies are removed.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Example playbook entries using the ejabberd_user module to manage user state.
tasks:
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Basic example
- local_action:
module: elasticache
name: "test-please-delete"
state: present
engine: memcached
cache_engine_version: 1.4.14
node_type: cache.m1.small
num_nodes: 1
cache_port: 11211
cache_security_groups:
- default
zone: us-east-1d
• Synopsis
• Examples
Synopsis
Runs the facter discovery program (https://github.com/puppetlabs/facter) on the remote system, returning JSON data
that can be useful for inventory purposes.
Examples
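A minimal ad-hoc sketch, assuming facter is installed on the target nodes and 'webservers' is an illustrative group:
ansible webservers -m facter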
• Synopsis
• Options
• Examples
Synopsis
This module fails the progress with a custom message. It can be useful for bailing out when a certain condition is met
using when.
Options
Examples
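A minimal sketch; cmdb_status is a hypothetical variable assumed to be set elsewhere in the play:
# Fail the host when the (hypothetical) cmdb_status variable is not the expected value
- fail: msg="The system may not be provisioned according to the CMDB status."
  when: cmdb_status != "to-be-staged"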
• Synopsis
• Options
• Examples
Synopsis
This module works like copy, but in reverse. It is used for fetching files from remote machines and storing them
locally in a file tree, organized by hostname. Note that this module is written to transfer log files that might not be
present, so a missing remote file won’t be an error unless fail_on_missing is set to ‘yes’.
Options
Examples
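A minimal sketch; the paths are placeholders:
# Stores the file into /tmp/fetched/<hostname>/tmp/somefile
- fetch: src=/tmp/somefile dest=/tmp/fetched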
• Synopsis
• Options
• Examples
Synopsis
Sets attributes of files, symlinks, and directories, or removes files/symlinks/directories. Many other modules support
the same options as the file module - including copy, template, and assemble.
Options
Examples
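Two minimal sketches (owners, modes, and paths are illustrative):
# Set ownership and permissions on a file
- file: path=/etc/foo.conf owner=foo group=foo mode=0644
# Create a symlink
- file: src=/file/to/link/to dest=/path/to/symlink owner=foo group=foo state=link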
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
This module launches an ephemeral fireball ZeroMQ message bus daemon on the remote node which Ansible can
use to communicate with nodes at high speed. The daemon listens on a configurable port for a configurable amount of
time. Starting a new fireball as a given user terminates any existing user fireballs. Fireball mode is AES encrypted.
Options
Examples
# This example playbook has two plays: the first launches ’fireball’ mode on all hosts via SSH, and
# the second actually starts using it for subsequent management over the fireball connection
- hosts: devservers
gather_facts: false
connection: ssh
sudo: yes
tasks:
- action: fireball
- hosts: devservers
connection: fireball
tasks:
- command: /usr/bin/anything
Note: See the advanced playbooks chapter for more about using fireball mode.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- flowdock: type=inbox
token=AAAAAA
[email protected]
source=’my cool app’
msg=’test from ansible’
subject=’test subject’
- flowdock: type=chat
token=AAAAAA
external_user_name=testuser
msg=’test from ansible’
tags=tag1,tag2,tag3
Author [email protected] Note. Most of the code has been taken from the S3 module.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Example using defaults and with metadata to create a single ’foo’ instance
- local_action:
module: gce
name: foo
metadata: ’{"db":"postgres", "group":"qa", "id":500}’
# Launch instances from a control node, runs some tasks on the new instances,
# and then terminate them
- name: Create a sandbox instance
hosts: localhost
vars:
names: foo,bar
machine_type: n1-standard-1
image: debian-6
zone: us-central1-a
tasks:
- name: Launch instances
local_action: gce instance_names={{names}} machine_type={{machine_type}}
image={{image}} zone={{zone}}
register: gce
- name: Wait for SSH to come up
local_action: wait_for host={{item.public_ip}} port=22 delay=10
timeout=60 state=started
with_items: {{gce.instance_data}}
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Simple example of creating a new LB, adding members, and a health check
- local_action:
module: gce_lb
name: testlb
region: us-central1
members: ["us-central1-a/www-a", "us-central1-b/www-b"]
httphealthcheck_name: hc
httphealthcheck_port: 80
httphealthcheck_path: "/up"
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Downloads files from HTTP, HTTPS, or FTP to the remote server. The remote server must have direct access to the
remote resource. By default, if an environment variable <protocol>_proxy is set on the target host, requests
will be sent through that proxy. This behaviour can be overridden by setting a variable for this task (see setting the
environment), or by using the use_proxy option.
Options
Examples
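A minimal sketch; the URL and destination are placeholders:
- get_url: url=http://example.com/path/file.conf dest=/etc/foo.conf mode=0440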
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: If the task seems to be hanging, first verify that the remote host is in known_hosts. SSH will prompt the user
to authorize the first contact with a remote host. To avoid this prompt, one solution is to add the remote host public
key in /etc/ssh/ssh_known_hosts before calling the git module, with the following command: ssh-keyscan
remote_host.com >> /etc/ssh/ssh_known_hosts.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Cleaning all hooks for this repo that had an error on the last update
- local_action: github_hooks action=cleanall user={{ gituser }} oauthkey={{ oauthkey }} repo={{ repo }}
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Use facts to create ad-hoc groups that can be used later in a playbook.
Options
Examples
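A minimal sketch that groups hosts by an architecture fact:
# Creates groups such as machine_i686 and machine_x86_64 from the ansible_machine fact
- group_by: key=machine_{{ ansible_machine }}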
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- grove: >
channel_token=6Ph62VBBJOccmtTPZbubiPzdrhipZXtg
service=my-app
message=deployed {{ target }}
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Ensure the working copy is checked out at the stable branch, deleting untracked files if any.
- hg: repo=https://bitbucket.org/user/repo1 dest=/home/user/repo1 revision=stable purge=yes
Note: If the task seems to be hanging, first verify that the remote host is in known_hosts. SSH will prompt the user
to authorize the first contact with a remote host. To avoid this prompt, one solution is to add the remote host public
key in /etc/ssh/ssh_known_hosts before calling the hg module, with the following command: ssh-keyscan
remote_host.com >> /etc/ssh/ssh_known_hosts.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- hostname: name=web01
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: This module depends on the passlib Python library, which needs to be installed on all target systems.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Manage (add, remove, change) individual settings in an INI-style file without having to manage the file as a whole
with, say, template or assemble. Adds missing sections if they don’t exist. Comments are discarded when the
source file is read, and therefore will not show up in the destination file.
Options
Examples
- ini_file: dest=/etc/anotherconf
section=drinks
option=temperature
value=cold
backup=yes
Note: While it is possible to add an option without specifying a value, this makes no sense.
Note: A section named default cannot be added by the module, but if it exists, individual options within the
section can be updated. (This is a limitation of Python’s ConfigParser.) Either use template to create a base INI
file with a [default] section, or use lineinfile to add the missing line.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: Ensure that no identically named application is deployed through the JBoss CLI.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a tenant
- keystone_user: tenant=demo tenant_description="Default Tenant"
# Create a user
- keystone_user: user=john tenant=demo password=secrete
# Apply the admin role to the john user in the demo tenant
- keystone_user: role=admin user=john tenant=demo
lineinfile - Ensure a particular line is in a file, or replace an existing line using a back-referenced
regular expression.
• Synopsis
• Options
• Examples
Synopsis
This module will search a file for a line, and ensure that it is present or absent. This is primarily useful when you want
to change a single line in a file only. For other cases, see the copy or template modules.
Options
Examples
# Fully quoted because of the ’: ’ on the line. See the Gotchas in the YAML docs.
- lineinfile: "dest=/etc/sudoers state=present regexp=’^%wheel’ line=’%wheel ALL=(ALL) NOPASSWD: ALL’
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a server
- local_action:
module: linode
api_key: ’longStringFromLinodeApi’
name: linode-test1
plan: 1
datacenter: 2
distribution: 99
password: ’superSecureRootPassword’
ssh_pub_key: ’ssh-rsa qwerty’
swap: 768
wait: yes
wait_timeout: 600
state: present
# Ensure a running server (create if missing)
- local_action:
    module: linode
api_key: ’longStringFromLinodeApi’
name: linode-test1
linode_id: 12345678
plan: 1
datacenter: 2
distribution: 99
password: ’superSecureRootPassword’
ssh_pub_key: ’ssh-rsa qwerty’
swap: 768
wait: yes
wait_timeout: 600
state: present
# Delete a server
- local_action:
module: linode
api_key: ’longStringFromLinodeApi’
name: linode-test1
linode_id: 12345678
state: absent
# Stop a server
- local_action:
module: linode
api_key: ’longStringFromLinodeApi’
name: linode-test1
linode_id: 12345678
state: stopped
# Reboot a server
- local_action:
module: linode
api_key: ’longStringFromLinodeApi’
name: linode-test1
linode_id: 12345678
state: restarted
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a volume group on top of /dev/sda1 with physical extent size = 32MB.
- lvg: vg=vg.services pvs=/dev/sda1 pesize=32
Note: The module does not modify the PE size of an already present volume group.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a logical volume the size of all remaining space in the volume group
- lvol: vg=firefly lv=test size=100%FREE
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
This module is useful for sending emails from playbooks. One may wonder why automate sending emails? In complex
environments there are from time to time processes that cannot be automated, either because you lack the authority
to make it so, or because not everyone agrees to a common approach. If you cannot automate a specific step, but the
step is non-blocking, sending out an email to the responsible party to make him perform his part of the bargain is an
elegant way to put the responsibility in someone else’s lap. Of course sending out a mail can be equally useful as a
way to notify one or more people in a team that a specific action has been (successfully) taken.
Options
Examples
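A minimal sketch; the subject is illustrative, and by default the mail goes to root on the machine running the task:
- local_action: mail subject='System {{ ansible_hostname }} has been successfully provisioned.'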
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create ’burgers’ database user with name ’bob’ and password ’12345’.
- mongodb_user: database=burgers name=bob password=12345 state=present
# Define more users with various specific roles (if not defined, no roles are assigned)
- mongodb_user: database=burgers name=ben password=12345 roles=’read’ state=present
- mongodb_user: database=burgers name=jim password=12345 roles=’readWrite,dbAdmin,userAdmin’ state=pr
- mongodb_user: database=burgers name=joe password=12345 roles=’readWriteAnyDatabase’ state=present
Note: Requires the pymongo Python package on the remote host, version 2.4.2+. This can be installed using pip or
the OS package manager. See http://api.mongodb.org/python/current/installation.html
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- local_action: mqtt
topic=service/ansible/{{ ansible_hostname }}
payload="Hello at {{ ansible_date_time.iso8601 }}"
qos=0
retain=false
client_id=ans001
Note: This module requires a connection to an MQTT broker such as Mosquitto http://mosquitto.org and the
mosquitto Python module (http://mosquitto.org/python).
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Copy database dump file to remote host and restore it to database ’my_db’
- copy: src=dump.sql.bz2 dest=/tmp
- mysql_db: name=my_db state=import target=/tmp/dump.sql.bz2
Note: Requires the MySQLdb Python package on the remote host. For Ubuntu, this is as easy as apt-get install
python-mysqldb. (See apt.)
Note: Both login_password and login_user are required when you are passing credentials. If none are present, the
module will attempt to read the credentials from ~/.my.cnf, and finally fall back to using the MySQL default login
of root with no password.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Change master to master server 192.168.1.1 and use binary log 'mysql-bin.000009' with position 4578
- mysql_replication: mode=changemaster master_host=192.168.1.1 master_log_file=mysql-bin.000009 master_log_pos=4578
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create database user with name ’bob’ and password ’12345’ with all database privileges
- mysql_user: name=bob password=12345 priv=*.*:ALL state=present
# Creates database user 'bob' and password '12345' with all database privileges and 'WITH GRANT OPTION'
- mysql_user: name=bob password=12345 priv=*.*:ALL,GRANT state=present
# Ensure no user named ’sally’ exists, also passing in the auth credentials.
- mysql_user: login_user=root login_password=123456 name=sally state=absent
[client]
user=root
password=n<_665{vS43y
Note: Requires the MySQLdb Python package on the remote host. For Ubuntu, this is as easy as apt-get install
python-mysqldb.
Note: Both login_password and login_user are required when you are passing credentials. If none are
present, the module will attempt to read the credentials from ~/.my.cnf, and finally fall back to using the MySQL
default login of 'root' with no password.
Note: MySQL server installs with default login_user of ‘root’ and no password. To secure this user as part of an
idempotent playbook, you must create at least two tasks: the first must change the root user’s password, without
providing any login_user/login_password details. The second must drop a ~/.my.cnf file containing the new root
credentials. Subsequent runs of the playbook will then succeed by reading the new credentials from the file.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
The nagios module has two basic functions: scheduling downtime and toggling alerts for services or hosts. All
actions require the host parameter to be given explicitly. In playbooks you can use the {{inventory_hostname}}
variable to refer to the host the playbook is currently running on. You can specify multiple services at once by
separating them with commas, e.g., services=httpd,nfs,puppet. When specifying what service to handle
there is a special service value, host, which will handle alerts/downtime for the host itself, e.g., service=host.
This keyword may not be given with other services at the same time. Setting alerts/downtime for a host does not affect
alerts/downtime for any of the services running on it. To schedule downtime for all services on a particular host, use
the keyword "all", e.g., service=all. When using the nagios module you will need to specify your Nagios server
using the delegate_to parameter.
Options
Examples
# SHUT UP NAGIOS
- nagios: action=silence_nagios
# ANNOY ME NAGIOS
- nagios: action=unsilence_nagios
# command something
- nagios: action=command command='DISABLE_FAILURE_PREDICTION'
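Scheduling downtime follows the same pattern; a minimal sketch in which the Nagios server hostname is a placeholder:
# set 30 minutes of apache downtime
- nagios: action=downtime minutes=30 service=httpd host={{ inventory_hostname }}
  delegate_to: nagios.example.com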
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- newrelic_deployment: token=AAAAAA
app_name=myapp
user=’ansible deployment’
revision=1.0
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Creates a new VM and attaches to a network and passes metadata to the instance
- nova_compute:
state: present
login_username: admin
login_password: admin
login_tenant_name: admin
name: vm1
image_id: 4f905f38-e52a-43d2-b6ec-754a13ffb529
key_name: ansible_key
wait_for: 200
flavor_id: 4
nics:
- net-id: 34605f38-e52a-25d2-b6ec-754a13ffb723
meta:
hostname: test1
group: uge_master
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Creates a new key pair; the private key is returned after the run.
- nova_keypair: state=present login_username=admin login_password=admin
login_tenant_name=admin name=ansible_key
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Install packages based on package.json using the npm installed with nvm v0.10.1
- npm: path=/app/location executable=/opt/nvm/v0.10.1/bin/npm state=present
• Synopsis
• Examples
Synopsis
Similar to the facter module, this runs the Ohai discovery program (http://wiki.opscode.com/display/chef/Ohai) on
the remote host and returns JSON inventory data. Ohai data is a bit more verbose and nested than facter.
Examples
# Retrieve (ohai) data from all Web servers and store in one-file per host
ansible webservers -m ohai --tree=/tmp/ohaidata
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: If you like this module, you may also be interested in the osx_say callback in the plugins/ directory of the
source checkout.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# stopping an instance
action: ovirt >
instance_name=testansible
state=stopped
user=admin@internal
password=secret
url=https://ovirt.example.com
# starting an instance
action: ovirt >
instance_name=testansible
state=started
user=admin@internal
password=secret
url=https://ovirt.example.com
Author Afterburn
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Update the package database (pacman -Syy) and install bar (bar will be updated if a newer version is available)
- pacman: name=bar state=installed update_cache=yes
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a 4 hour maintenance window for service FOO123 with the description "deployment".
- pagerduty: name=companyabc
[email protected]
passwd=password123
state=running
service=FOO123
hours=4
desc=deployment
Note: This module does not yet have support to end maintenance windows.
• Synopsis
• Options
• Examples
Synopsis
Pauses playbook execution for a set amount of time, or until a prompt is acknowledged. All parameters are optional.
The default behavior is to pause with a prompt. You can use ctrl+c if you wish to advance a pause earlier than it is
set to expire or if you need to abort a playbook run entirely. To continue early: press ctrl+c and then c. To abort
a playbook: press ctrl+c and then a. The pause module integrates into async/parallelized playbooks without any
special considerations (see also: Rolling Updates). When using pauses with the serial playbook parameter (as in
rolling updates) you are only prompted once for the current group of hosts.
Options
Examples
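Two minimal sketches (the durations and prompt text are illustrative):
# Pause for 5 minutes to build app cache
- pause: minutes=5
# Pause until manually acknowledged
- pause: prompt="Make sure org.foo.FooOverload exception is not present"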
• Synopsis
• Examples
Synopsis
A trivial test module, this module always returns pong on successful contact. It does not make sense in playbooks,
but it is useful from /usr/bin/ansible.
Examples
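A minimal ad-hoc sketch ('webservers' is an illustrative group):
ansible webservers -m ping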
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: This module does not yet have support to add/remove checks.
• Synopsis
• Options
• Examples
Synopsis
Manage Python library dependencies. To use this module, one of the following keys is required: name or
requirements.
Options
Examples
# Install (MyApp) using one of the remote protocols (bzr+,hg+,git+,svn+). You do not have to supply '-e' option in extra_args
- pip: name='svn+http://myrepo/svn/MyApp#egg=MyApp'
# Install (Bottle) into the specified (virtualenv), inheriting none of the globally installed modules
- pip: name=bottle virtualenv=/my_app/venv
# Install (Bottle) into the specified (virtualenv), inheriting globally installed modules
- pip: name=bottle virtualenv=/my_app/venv virtualenv_site_packages=yes
Note: Please note that virtualenv (http://www.virtualenv.org/) must be installed on the remote host if the virtualenv
parameter is specified.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Author bleader
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: When using pkgsite, be careful: packages already in the cache won't be downloaded again.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Install a package
pkgutil: name=CSWcommon state=present
Author berenddeboer
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a new database with name "acme" and specific encoding and locale
# settings. If a template different from "template0" is specified, encoding
# and locale settings must match those of the template.
- postgresql_db: name=acme
                 encoding='UTF-8'
                 lc_collate='de_DE.UTF-8'
                 lc_ctype='de_DE.UTF-8'
                 template='template0'
Note: The default authentication assumes that you are either logging in as or sudo’ing to the postgres account on
the host.
Note: This module uses psycopg2, a Python PostgreSQL database adapter. You must ensure that psycopg2 is
installed on the host before using this module. If the remote host is the PostgreSQL server (which is the default case),
then PostgreSQL must also be installed on the remote host. For Ubuntu-based systems, install the postgresql,
libpq-dev, and python-psycopg2 packages on the remote host before using this module.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# On database "library":
# GRANT SELECT, INSERT, UPDATE ON TABLE public.books, public.authors
# TO librarian, reader WITH GRANT OPTION
- postgresql_privs: >
database=library
state=present
privs=SELECT,INSERT,UPDATE
type=table
objs=books,authors
schema=public
roles=librarian,reader
grant_option=yes
Note: Default authentication assumes that postgresql_privs is run by the postgres user on the remote host
(Ansible's remote user or sudo user).
Note: This module requires Python package psycopg2 to be installed on the remote host. In the default case of
the remote host also being the PostgreSQL server, PostgreSQL has to be installed there as well, obviously. For
Debian/Ubuntu-based systems, install packages postgresql and python-psycopg2.
Note: Parameters that accept comma separated lists (privs, objs, roles) have singular alias names (priv, obj, role).
Note: To revoke only GRANT OPTION for a specific object, set state to present and grant_option to no (see
examples).
Note: Note that when revoking privileges from a role R, this role may still have access via privileges granted to any
role R is a member of including PUBLIC.
Note: Note that when revoking privileges from a role R, you do so as the user specified via login. If R has been
granted the same privileges by another user also, R can still access database objects via these privileges.
• Synopsis
• Options
• Examples
Synopsis
Add or remove PostgreSQL users (roles) from a remote host and, optionally, grant the users access to an existing
database or tables. The fundamental function of the module is to create, or delete, roles from a PostgreSQL cluster.
Privilege assignment, or removal, is an optional step, which works on one database at a time. This allows the
module to be called several times in the same playbook to modify the permissions on different databases, or to grant
permissions to already existing users. A user cannot be removed until all the privileges have been stripped from the
user. In such situation, if the module tries to remove the user it will fail. To avoid this from happening the fail_on_user
option signals the module to try to remove the user, but if not possible keep going; the module will report if changes
happened and separately if the user was removed or not.
Options
Examples
# Create django user and grant access to database and products table
- postgresql_user: db=acme name=django password=ceec4eif7ya priv=CONNECT/products:ALL
# Create rails user, grant privilege to create other databases and demote rails from super user status
- postgresql_user: name=rails password=secret role_attr_flags=CREATEDB,NOSUPERUSER
Note: The default authentication assumes that you are either logging in as or sudo’ing to the postgres account on the
host.
Note: This module uses psycopg2, a Python PostgreSQL database adapter. You must ensure that psycopg2 is installed
on the host before using this module. If the remote host is the PostgreSQL server (which is the default case), then
PostgreSQL must also be installed on the remote host. For Ubuntu-based systems, install the postgresql, libpq-dev,
and python-psycopg2 packages on the remote host before using this module.
Note: If you specify PUBLIC as the user, then the privilege changes will apply to all users. You may not specify
password or role_attr_flags when the PUBLIC user is specified.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
quantum_router_gateway - set/unset a gateway interface for the router with the specified external
network
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- name: ensure the default vhost contains the HA policy via a dict
rabbitmq_policy: name=HA pattern=’.*’
args:
tags:
"ha-mode": all
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Executes a low-down and dirty SSH command, not going through the module subsystem. This is useful and should
only be done in two cases. The first case is installing python-simplejson on older (Python 2.4 and before)
hosts that need it as a dependency to run modules, since nearly all core modules require it. Another is speaking to any
devices such as routers that do not have any Python installed. In any other case, using the shell or command module
is much more appropriate. Arguments given to raw are run directly through the configured remote shell. Standard
output, error output and return code are returned when available. There is no change handler support for this module.
This module does not require python on the remote system, much like the script module.
Options
Examples
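A minimal sketch for the bootstrap case described above:
# Bootstrap a legacy Python 2.4 host so that core modules can run
- raw: yum -y install python-simplejson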
Note: If you want to execute a command securely and predictably, it may be better to use the command module
instead. Best practices when writing playbooks will follow the trend of using command unless shell is explicitly
required. When running ad-hoc commands, use your best judgement.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
files:
/root/.ssh/authorized_keys: /home/localuser/.ssh/id_rsa.pub
/root/test.txt: /home/localuser/test.txt
wait: yes
state: present
networks:
- private
- public
register: rax
Note: The following environment variables can be used, RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used, RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
rax_clb_nodes - add, modify and remove nodes from a Rackspace Cloud Load Balancer
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDENTIALS
and RAX_REGION.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used, RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used, RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used, RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
rax_files:
container: mycontainer2
type: meta
meta:
uploaded_by: [email protected]
Note: The following environment variables can be used, RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used, RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used, RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
Note: Keypairs cannot be manipulated, only created and deleted. To "update" a keypair you must first delete it and
then recreate it.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
- name: Network create request
local_action:
module: rax_network
credentials: ~/.raxpub
label: my-net
cidr: 192.168.3.0/24
state: present
Note: The following environment variables can be used, RAX_USERNAME, RAX_API_KEY, RAX_CREDS,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS point to a credentials file appropriate for pyrax
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used, RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
redhat_subscription - Manage Red Hat Network registration and subscriptions using the
subscription-manager command
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Register as user (joe_user) with password (somepass) and auto-subscribe to available content.
- redhat_subscription: action=register username=joe_user password=somepass autosubscribe=true
Note: In order to register a system, subscription-manager requires either a username and password, or an activation
key.
• Synopsis
• Options
• Examples
Synopsis
Unified utility to interact with redis instances. The 'slave' command sets a redis instance in slave or master mode;
the 'flush' command flushes the entire instance or a specified db.
Options
Examples
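A minimal sketch of the slave command (the master host and port are placeholders):
# Set the local redis instance to be a slave of melee.island on port 6377
- redis: command=slave master_host=melee.island master_port=6377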
Note: Requires the redis-py Python package on the remote host. You can install it with pip (pip install redis) or with
a package manager. https://github.com/andymccurdy/redis-py
Note: If the redis master instance that we are making a slave of is password protected, this password needs to be set
in redis.conf in the masterauth variable.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
rhn_register - Manage Red Hat Network registration using the rhnreg_ks command
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Register as user (joe_user) with password (somepass) and auto-subscribe to available content.
- rhn_register: state=present username=joe_user password=somepass
Note: In order to register a system, rhnreg_ks requires either a username and password, or an activationkey.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Delete new.foo.com A record using the results from the get command
- route53: >
command=delete
zone=foo.com
record={{ rec.set.record }}
type={{ rec.set.type }}
value={{ rec.set.value }}
# Add an AAAA record. Note that because there are colons in the value
# that the entire parameter list must be quoted:
- route53: >
command=create
zone=foo.com
record=localhost.foo.com
type=AAAA
ttl=7200
value="::1"
# Add a TXT record. Note that TXT and SPF records must be surrounded
# by quotes when sent to Route 53:
- route53: >
command=create
zone=foo.com
record=localhost.foo.com
type=TXT
ttl=7200
value='"bar"'
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
The script module takes the script name followed by a list of space-delimited arguments. The local script at
path will be transferred to the remote node and then executed. The given script will be processed through the shell
environment on the remote node. This module does not require python on the remote system, much like the raw
module.
Options
Examples
# Run a script that creates a file, but only if the file is not yet created
- script: /some/local/create_file.sh --some-arguments 1234 creates=/the/created/file.txt
# Run a script that removes a file, but only if the file is not yet removed
- script: /some/local/remove_file.sh --some-arguments 1234 removes=/the/removed/file.txt
Note: It is usually preferable to write Ansible modules rather than pushing scripts. Convert your script to an Ansible
module for bonus points!
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Configures the SELinux mode and policy. A reboot may be required after usage. Ansible will not issue this reboot but
will let you know when it is required.
Options
Examples
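Two minimal sketches:
# Enable the targeted policy in enforcing mode
- selinux: policy=targeted state=enforcing
# Disable SELinux entirely
- selinux: state=disabled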
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Example action to enable service httpd, and not touch the running state
- service: name=httpd enabled=yes
• Synopsis
• Options
• Examples
Synopsis
This module allows setting new variables. Variables are set on a host-by-host basis just like facts discovered by the
setup module. These variables will survive between plays.
Options
Examples
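A minimal sketch; local_var is a hypothetical variable assumed to be defined elsewhere:
# Sets two new host facts from key=value pairs
- set_fact: one_fact="something" other_fact="{{ local_var * 2 }}"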
• Synopsis
• Options
• Examples
Synopsis
This module is automatically called by playbooks to gather useful variables about remote hosts that can be used in
playbooks. It can also be executed directly by /usr/bin/ansible to check what variables are available to a host.
Ansible provides many facts about the system, automatically.
Options
Examples
# Display facts from all hosts and store them indexed by hostname at /tmp/facts.
ansible all -m setup --tree /tmp/facts
# Display only facts regarding memory found by ansible on all hosts and output them.
ansible all -m setup -a 'filter=ansible_*_mb'
Note: More ansible facts will be added with successive releases. If facter or ohai are installed, variables from
these programs will also be snapshotted into the JSON file for usage in templating. These variables are prefixed with
facter_ and ohai_ so it's easy to tell their source. All variables are bubbled up to the caller. Using the ansible
facts and choosing to not install facter and ohai means you can avoid Ruby dependencies on your remote systems.
(See also facter and ohai.)
Note: The filter option filters only the first level subkey below ansible_facts.
• Synopsis
• Options
• Examples
Synopsis
The shell module takes the command name followed by a list of space-delimited arguments. It is almost exactly
like the command module but runs the command through a shell (/bin/sh) on the remote node.
Options
Examples
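A minimal sketch; the script, log file, and directory are placeholders:
# Execute the script in a shell on the remote; chdir sets the working directory first
- shell: somescript.sh >> somelog.txt chdir=somedir/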
Note: If you want to execute a command securely and predictably, it may be better to use the command module
instead. Best practices when writing playbooks will follow the trend of using command unless shell is explicitly
required. When running ad-hoc commands, use your best judgement.
• Synopsis
• Options
• Examples
Synopsis
This module works like fetch. It is used for fetching a base64-encoded blob containing the data in a remote file.
Options
Examples
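A minimal sketch; decoding the registered result uses the b64decode filter:
- slurp: src=/etc/mtab
  register: mtab_contents
- debug: msg="{{ mtab_contents.content | b64decode }}"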
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Obtain the stats of /etc/foo.conf, and check that the file still belongs
# to ’root’. Fail otherwise.
- stat: path=/etc/foo.conf
  register: st
- fail: msg="Whoops! file ownership has changed"
  when: st.stat.pw_name != 'root'
• Synopsis
• Options
• Examples
Synopsis
Deploy given repository URL / revision to dest. If dest exists, update to the specified revision, otherwise perform a
checkout.
Options
Examples
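A minimal sketch (the repository URL and destination are placeholders):
# Checkout subversion repository to specified folder
- subversion: repo=svn+ssh://an.example.org/path/to/repo dest=/src/checkout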
supervisorctl - Manage the state of a program or group of programs running via Supervisord
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
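A minimal sketch; my_app is a placeholder program name:
# Ensure the program is in the 'started' state under supervisord
- supervisorctl: name=my_app state=started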
• Synopsis
• Options
• Examples
Synopsis
Manages SVR4 packages on Solaris 10 and 11. These were the native packages on Solaris <= 10 and are available
as a legacy feature in Solaris 11. Note that this is a very basic packaging system. It will not enforce dependencies on
install or remove.
Options
Examples
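A minimal sketch, assuming the package file has already been copied to the host:
# Install a package from an already copied file
- svr4pkg: name=CSWcommon src=/tmp/cswpkgs.pkg state=present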
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
synchronize - Uses rsync to make synchronizing file paths in your playbooks quick and easy.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
delegate_to: delegate.host
# Synchronize and delete files in dest on the remote host that are not found in src of localhost.
synchronize: src=some/relative/path dest=/some/absolute/path delete=yes
Note: Inspect the verbose output to validate the destination user/host/path are what was expected.
Note: The remote user for the dest path will always be the remote_user, not the sudo_user.
Note: To exclude files and directories from being synchronized, you may add .rsync-filter files to the source
directory.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Set ip forwarding on in /proc and in the sysctl file and reload if necessary
- sysctl: name="net.ipv4.ip_forward" value=1 sysctl_set=yes state=present reload=yes
• Synopsis
• Options
• Examples
Synopsis
Templates are processed by the Jinja2 templating language (http://jinja.pocoo.org/docs/) - documentation on the tem-
plate formatting can be found in the Template Designer Documentation (http://jinja.pocoo.org/docs/templates/). Six
additional variables can be used in templates: ansible_managed (configurable via the defaults section of
ansible.cfg) contains a string which can be used to describe the template name, host, modification time of the tem-
plate file and the owner uid, template_host contains the node name of the template’s machine, template_uid
the owner, template_path the absolute path of the template, template_fullpath is the absolute path of the
template, and template_run_date is the date that the template was rendered. Note that including a string that
uses a date in the template will result in the template being marked 'changed' each time.
Options
Examples
# Copy a new "sudoers" file into place, after passing validation with visudo
- action: template src=/https/www.scribd.com/mine/sudoers dest=/etc/sudoers validate='visudo -cf %s'
Note: Since Ansible version 0.9, templates are loaded with trim_blocks=True.
Note: Also, you can override jinja2 settings by adding a special header to the template file, e.g.
#jinja2:variable_start_string:'[%', variable_end_string:'%]', which changes the vari-
able interpolation markers to [% var %] instead of {{ var }}. This is the best way to prevent evaluation of things
that look like, but should not be, Jinja2. raw/endraw in Jinja2 will not work as you expect because templates in
Ansible are recursively evaluated.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
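A minimal sketch; src is a local archive that is copied to the remote host and unpacked at dest:
- unarchive: src=foo.tgz dest=/var/lib/foo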
Note: can handle gzip, bzip2 and xz compressed as well as uncompressed tar files
Note: uses tar’s --diff arg to calculate if changed or not. If this arg is not supported, it will always unpack the
archive
Note: does not detect if a .zip file is different from destination - always unzips
Note: existing files/directories in the destination which are not in the archive are not touched. This is the same
behavior as a normal archive extraction
Note: existing files/directories in the destination which are not in the archive are ignored for purposes of deciding if
the archive should be unpacked or not
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Check that you can connect (GET) to a page and it returns a status 200
- uri: url=http://www.example.com
# Check that a page returns a status 200 and fail if the word AWESOME is not in the page contents.
- action: uri url=http://www.example.com return_content=yes
register: webpage
- action: fail
when: 'AWESOME' not in "{{ webpage.content }}"
- action: >
uri url=https://your.form.based.auth.example.com/index.php
method=POST body="name=your_username&password=your_password&enter=Sign%20in"
status_code=302 HEADER_Content-Type="application/x-www-form-urlencoded"
register: login
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Add the user ’johnd’ with a specific uid and a primary group of ’admin’
- user: name=johnd comment="John Doe" uid=1040
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# /usr/bin/ansible invocations
ansible host -m virt -a "name=alpha command=status"
ansible host -m virt -a "name=alpha command=get_xml"
ansible host -m virt -a "name=alpha command=create uri=lxc:///"
• Synopsis
• Options
• Examples
Synopsis
Waiting for a port to become available is useful for when services are not immediately available after their init scripts
return, which is true of certain Java application servers. It is also useful when starting guests with the virt module
and needing to pause until they are ready. This module can also be used to wait for a file to be available on the
filesystem, or for a string (matched with a regex) to be present in a file.
Options
Examples
# wait 300 seconds for port 8000 to become open on the host, don’t start checking for 10 seconds
- wait_for: port=8000 delay=10
# wait until the string "completed" is in the file /tmp/foo before continuing
- wait_for: path=/tmp/foo search_regex=completed
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Installs, upgrades, removes, and lists packages and groups with the yum package manager.
Options
Examples
- name: install the latest version of Apache from the testing repo
yum: name=httpd enablerepo=testing state=installed
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Install "nmap"
- zypper: name=nmap state=present
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- digital_ocean: >
state=present
command=ssh
name=my_ssh_key
ssh_pub_key=’ssh-rsa AAAA...’
client_id=XXX
api_key=XXX
- digital_ocean: >
state=present
command=droplet
name=mydroplet
client_id=XXX
api_key=XXX
size_id=1
region_id=2
image_id=3
wait_timeout=500
register: my_droplet
- debug: msg="ID is {{ my_droplet.droplet.id }}"
- debug: msg="IP is {{ my_droplet.droplet.ip_address }}"
- digital_ocean: >
state=present
command=droplet
id=123
name=mydroplet
client_id=XXX
api_key=XXX
size_id=1
region_id=2
image_id=3
wait_timeout=500
- digital_ocean: >
state=present
ssh_key_ids=id1,id2
name=mydroplet
client_id=XXX
api_key=XXX
size_id=1
region_id=2
image_id=3
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Start one docker container running tomcat in each host of the web group and bind tomcat's listening port to 8080
on the host:
- hosts: web
sudo: yes
tasks:
- name: run tomcat servers
docker: image=centos command="service tomcat6 start" ports=8080
The tomcat server's port is NAT'ed to a dynamic port on the host, but you can determine which port the server was
mapped to using docker_containers:
- hosts: web
sudo: yes
tasks:
- name: run tomcat servers
docker: image=centos command="service tomcat6 start" ports=8080 count=5
- name: Display IP address and port mappings for containers
debug: msg={{inventory_hostname}}:{{item['HostConfig']['PortBindings']['8080/tcp'][0]['HostPort']}}
with_items: docker_containers
Just as in the previous example, but iterates over the list of docker containers with a sequence:
- hosts: web
sudo: yes
vars:
start_containers_count: 5
tasks:
- name: run tomcat servers
docker: image=centos command="service tomcat6 start" ports=8080 count={{start_containers_count}}
- name: Display IP address and port mappings for containers
debug: msg="{{inventory_hostname}}:{{docker_containers[{{item}}][’HostConfig’][’PortBindings’][’8
with_sequence: start=0 end={{start_containers_count - 1}}
Stop, remove all of the running tomcat containers and list the exit code from the stopped containers:
- hosts: web
sudo: yes
tasks:
- name: stop tomcat servers
docker: image=centos command="service tomcat6 start" state=absent
- name: Display return codes from stopped containers
debug: msg="Returned {{inventory_hostname}}:{{item}}"
with_items: docker_containers
- hosts: web
sudo: yes
tasks:
- name: run tomcat server
docker: image=centos name=tomcat command="service tomcat6 start" ports=8080
- hosts: web
sudo: yes
tasks:
- name: run tomcat servers
docker: image=centos name={{item}} command="service tomcat6 start" ports=8080
with_items:
- crookshank
- snowbell
- heathcliff
- felix
- sylvester
- hosts: web
sudo: yes
tasks:
- name: run tomcat servers
docker: image=centos name={{item}} command="service tomcat6 start" ports=8080
with_sequence: start=1 end=5 format=tomcat_%d.example.com
- hosts: web
sudo: yes
tasks:
- name: ensure redis container is running
docker: image=crosbymichael/redis name=redis
- hosts: web
sudo: yes
tasks:
- docker:
image: namespace/image_name
links:
- postgresql:db
- redis:redis
Create containers with options specified as strings and lists as comma-separated strings:
- hosts: web
sudo: yes
tasks:
- docker: image=namespace/image_name links=postgresql:db,redis:redis
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Build the docker image if required. The path should contain a Dockerfile to build the image:
- hosts: web
sudo: yes
tasks:
- name: check or build image
docker_image: path="/path/to/build/dir" name="my/app" state=present
- hosts: web
sudo: yes
tasks:
- name: check or build image
docker_image: path="/path/to/build/dir" name="my/app" state=build
- hosts: web
sudo: yes
tasks:
- name: run tomcat servers
docker_image: name="my/app" state=absent
• Synopsis
• Options
• Examples
Synopsis
Creates or terminates ec2 instances. When created, optionally waits for the instance to be 'running'. This module has a
dependency on python-boto >= 2.5
Options
Examples
monitoring: yes
# VPC example
- local_action:
module: ec2
key_name: mykey
group_id: sg-1dc53f72
instance_type: m1.small
image: ami-6e649707
wait: yes
vpc_subnet_id: subnet-29e63245
assign_public_ip: yes
- my_awesome_role
- my_awesome_test
- local_action:
module: ec2
key_name: mykey
instance_type: c1.medium
image: emi-40603AD1
wait: yes
group: webserver
instance_tags:
foo: bar
exact_count: 5
count_tag: foo
#
# Enforce that 5 running instances named "database" with a "dbtype" of "postgres"
#
- local_action:
module: ec2
key_name: mykey
instance_type: c1.medium
image: emi-40603AD1
wait: yes
group: webserver
instance_tags:
Name: database
dbtype: postgres
exact_count: 5
count_tag:
Name: database
dbtype: postgres
#
# count_tag complex argument examples
#
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Deregister/Delete AMI
- local_action:
module: ec2_ami
aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
region: xxxxxx
image_id: ${instance.image_id}
delete_snapshot: True
state: absent
# Deregister AMI
- local_action:
module: ec2_ami
aws_access_key: xxxxxxxxxxxxxxxxxxxxxxx
aws_secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
region: xxxxxx
image_id: ${instance.image_id}
delete_snapshot: False
state: absent
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
with_items: ec2.instance_ids
Note: This module will return public_ip on success, which will contain the public IP address associated with the
instance.
Note: There may be a delay between the time the Elastic IP is assigned and when the cloud instance is reachable
via the new address. Use wait_for and pause to delay further playbook execution until the instance is reachable, if
necessary.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- myrole
post_tasks:
- name: Instance Register
local_action: ec2_elb
args:
instance_id: "{{ ansible_ec2_instance_id }}"
ec2_elbs: "{{ item }}"
state: 'present'
with_items: ec2_elbs
ec2_elb_lb - Creates or destroys Amazon ELB. Returns information about the load balancer. Will
be marked changed when called only if the state is changed.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Normally, this module will purge any listeners that exist on the ELB
# but aren’t specified in the listeners parameter. If purge_listeners is
# false it leaves them alone
- local_action:
module: ec2_elb_lb
name: "test-please-delete"
state: present
zones:
- us-east-1a
- us-east-1d
listeners:
- protocol: http
load_balancer_port: 80
instance_port: 80
purge_listeners: no
# Normally, this module will leave availability zones that are enabled
# on the ELB alone. If purge_zones is true, then any extraneous zones
# will be removed
- local_action:
module: ec2_elb_lb
name: "test-please-delete"
state: present
zones:
- us-east-1a
- us-east-1d
listeners:
- protocol: http
load_balancer_port: 80
instance_port: 80
purge_zones: yes
• Synopsis
• Examples
Synopsis
Examples
# Conditional example
- name: Gather facts
action: ec2_facts
- name: Conditional
action: debug msg="This instance is a t1.micro"
when: ansible_ec2_instance_type == "t1.micro"
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Creates a new ec2 key pair named 'example' if not present, returns generated
# private key
- name: example ec2 key
local_action:
module: ec2_key
name: example
# Creates a new ec2 key pair named 'example' if not present using provided key
# material
- name: example2 ec2 key
local_action:
module: ec2_key
name: example2
key_material: 'ssh-rsa AAAAxyz...== [email protected]'
state: present
# Creates a new ec2 key pair named 'example' if not present using provided key
# material
- name: example3 ec2 key
local_action:
module: ec2_key
name: example3
key_material: "{{ item }}"
with_file: /path/to/public_key.id_rsa.pub
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
ec2_vol - create and attach a volume, return volume id and device map
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Removal of a VPC by id
local_action:
module: ec2_vpc
state: absent
vpc_id: vpc-aaaaaaa
region: us-west-2
If you have added elements not managed by this module, e.g. instances, NATs, etc., then
the delete will fail until those dependencies are removed.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Basic example
- local_action:
module: elasticache
name: "test-please-delete"
state: present
engine: memcached
cache_engine_version: 1.4.14
node_type: cache.m1.small
num_nodes: 1
cache_port: 11211
cache_security_groups:
- default
zone: us-east-1d
Author [email protected] Note. Most of the code has been taken from the S3 module.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Example using defaults and with metadata to create a single ’foo’ instance
- local_action:
module: gce
name: foo
metadata: ’{"db":"postgres", "group":"qa", "id":500}’
# Launch instances from a control node, runs some tasks on the new instances,
# and then terminate them
- name: Create a sandbox instance
hosts: localhost
vars:
names: foo,bar
machine_type: n1-standard-1
image: debian-6
zone: us-central1-a
tasks:
- name: Launch instances
local_action: gce instance_names={{names}} machine_type={{machine_type}}
image={{image}} zone={{zone}}
register: gce
- name: Wait for SSH to come up
local_action: wait_for host={{item.public_ip}} port=22 delay=10
timeout=60 state=started
with_items: gce.instance_data
- my_awesome_tasks
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Simple example of creating a new LB, adding members, and a health check
- local_action:
module: gce_lb
name: testlb
region: us-central1
members: ["us-central1-a/www-a", "us-central1-b/www-b"]
httphealthcheck_name: hc
httphealthcheck_port: 80
httphealthcheck_path: "/up"
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a tenant
- keystone_user: tenant=demo tenant_description="Default Tenant"
# Create a user
- keystone_user: user=john tenant=demo password=secrete
# Apply the admin role to the john user in the demo tenant
- keystone_user: role=admin user=john tenant=demo
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a server
- local_action:
module: linode
api_key: 'longStringFromLinodeApi'
name: linode-test1
plan: 1
datacenter: 2
distribution: 99
password: 'superSecureRootPassword'
ssh_pub_key: 'ssh-rsa qwerty'
swap: 768
wait: yes
wait_timeout: 600
state: present
# Delete a server
- local_action:
module: linode
api_key: 'longStringFromLinodeApi'
name: linode-test1
linode_id: 12345678
state: absent
# Stop a server
- local_action:
module: linode
api_key: 'longStringFromLinodeApi'
name: linode-test1
linode_id: 12345678
state: stopped
# Reboot a server
- local_action:
module: linode
api_key: 'longStringFromLinodeApi'
name: linode-test1
linode_id: 12345678
state: restarted
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Creates a new VM and attaches to a network and passes metadata to the instance
- nova_compute:
state: present
login_username: admin
login_password: admin
login_tenant_name: admin
name: vm1
image_id: 4f905f38-e52a-43d2-b6ec-754a13ffb529
key_name: ansible_key
wait_for: 200
flavor_id: 4
nics:
- net-id: 34605f38-e52a-25d2-b6ec-754a13ffb723
meta:
hostname: test1
group: uge_master
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Creates a new key pair; the private key is returned after the run.
- nova_keypair: state=present login_username=admin login_password=admin
login_tenant_name=admin name=ansible_key
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# stopping an instance
action: ovirt >
instance_name=testansible
state=stopped
user=admin@internal
password=secret
url=https://ovirt.example.com
# starting an instance
action: ovirt >
instance_name=testansible
state=started
user=admin@internal
password=secret
url=https://ovirt.example.com
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
quantum_router_gateway - set/unset a gateway interface for the router with the specified external
network
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
group: test
wait: yes
register: rax
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
type: SERVICENET
timeout: 30
region: DFW
wait: yes
state: present
meta:
app: my-cool-app
register: my_lb
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
rax_clb_nodes - add, modify and remove nodes from a Rackspace Cloud Load Balancer
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
type: primary
wait: yes
credentials: /path/to/credentials
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDENTIALS
and RAX_REGION.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
domain: example.org
name: www.example.org
data: 127.0.0.1
type: A
register: rax_dns_record
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
container: mycontainer2
meta:
key: value
file_for: [email protected]
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
src: ~/Downloads/testcont/file2
method: put
meta:
testkey: testdata
who_uploaded_this: [email protected]
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
Note: Keypairs cannot be manipulated, only created and deleted. To “update” a keypair you must first delete and then
recreate.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
- name: Network create request
local_action:
module: rax_network
credentials: ~/.raxpub
label: my-net
cidr: 192.168.3.0/24
state: present
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS point to a credentials file appropriate for pyrax
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The following environment variables can be used: RAX_USERNAME, RAX_API_KEY, RAX_CREDS_FILE,
RAX_CREDENTIALS, RAX_REGION.
Note: RAX_CREDENTIALS and RAX_CREDS_FILE point to a credentials file appropriate for pyrax. See
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
Note: RAX_REGION defines a Rackspace Public Cloud region (DFW, ORD, LON, ...)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- route53: >
command=get
zone=foo.com
record=new.foo.com
type=A
register: rec
# Delete new.foo.com A record using the results from the get command
- route53: >
command=delete
zone=foo.com
record={{ rec.set.record }}
type={{ rec.set.type }}
value={{ rec.set.value }}
# Add an AAAA record. Note that because there are colons in the value
# that the entire parameter list must be quoted:
- route53: >
command=create
zone=foo.com
record=localhost.foo.com
type=AAAA
ttl=7200
value="::1"
# Add a TXT record. Note that TXT and SPF records must be surrounded
# by quotes when sent to Route 53:
- route53: >
command=create
zone=foo.com
record=localhost.foo.com
type=TXT
ttl=7200
value=""bar""
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# /usr/bin/ansible invocations
ansible host -m virt -a "name=alpha command=status"
• Synopsis
• Options
• Examples
Synopsis
The command module takes the command name followed by a list of space-delimited arguments. The given command
will be executed on all selected nodes. It will not be processed through the shell, so variables like $HOME and
operations like "<", ">", "|", and "&" will not work (use the shell module if you need these features).
Options
Examples
Note: If you want to run a command through the shell (say you are using <, >, |, etc), you actually want the shell
module instead. The command module is much more secure as it’s not affected by the user’s environment.
Note: creates, removes, and chdir can be specified after the command. For instance, if you only want to run
a command if a certain file does not exist, use this.
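For instance (script path and flag file illustrative):
- command: /usr/bin/make_database.sh arg1 arg2 creates=/path/to/database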
• Synopsis
• Options
• Examples
Synopsis
Executes a low-down and dirty SSH command, not going through the module subsystem. This is useful and should
only be done in two cases. The first case is installing python-simplejson on older (Python 2.4 and before)
hosts that need it as a dependency to run modules, since nearly all core modules require it. Another is speaking to any
devices such as routers that do not have any Python installed. In any other case, using the shell or command module
is much more appropriate. Arguments given to raw are run directly through the configured remote shell. Standard
output, error output and return code are returned when available. There is no change handler support for this module.
This module does not require python on the remote system, much like the script module.
Options
Examples
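A minimal sketch of the bootstrap use case from the synopsis (package name assumed to be available from the host's repositories):
# install the python-simplejson dependency so regular modules can run
- raw: yum -y install python-simplejson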
Note: If you want to execute a command securely and predictably, it may be better to use the command module
instead. Best practices when writing playbooks will follow the trend of using command unless shell is explicitly
required. When running ad-hoc commands, use your best judgement.
• Synopsis
• Options
• Examples
Synopsis
The script module takes the script name followed by a list of space-delimited arguments. The local script at
path will be transfered to the remote node and then executed. The given script will be processed through the shell
environment on the remote node. This module does not require python on the remote system, much like the raw
module.
Options
Examples
# Run a script that creates a file, but only if the file is not yet created
- script: /some/local/create_file.sh --some-arguments 1234 creates=/the/created/file.txt
# Run a script that removes a file, but only if the file is not yet removed
- script: /some/local/remove_file.sh --some-arguments 1234 removes=/the/removed/file.txt
Note: It is usually preferable to write Ansible modules than pushing scripts. Convert your script to an Ansible module
for bonus points!
• Synopsis
• Options
• Examples
Synopsis
The shell module takes the command name followed by a list of space-delimited arguments. It is almost exactly
like the command module but runs the command through a shell (/bin/sh) on the remote node.
Options
Examples
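For example (script and log file names illustrative):
# Execute the script in the remote shell; stdout goes to the specified file on the remote
- shell: somescript.sh >> somelog.txt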
Note: If you want to execute a command securely and predictably, it may be better to use the command module
instead. Best practices when writing playbooks will follow the trend of using command unless shell is explicitly
required. When running ad-hoc commands, use your best judgement.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create 'burgers' database user with name 'bob' and password '12345'.
- mongodb_user: database=burgers name=bob password=12345 state=present
# Define more users with various specific roles (if not defined, no roles are assigned)
- mongodb_user: database=burgers name=ben password=12345 roles='read' state=present
- mongodb_user: database=burgers name=jim password=12345 roles='readWrite,dbAdmin,userAdmin' state=present
- mongodb_user: database=burgers name=joe password=12345 roles='readWriteAnyDatabase' state=present
Note: Requires the pymongo Python package on the remote host, version 2.4.2+. This can be installed using pip or
the OS package manager. @see http://api.mongodb.org/python/current/installation.html
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Copy database dump file to remote host and restore it to database ’my_db’
- copy: src=dump.sql.bz2 dest=/tmp
- mysql_db: name=my_db state=import target=/tmp/dump.sql.bz2
Note: Requires the MySQLdb Python package on the remote host. For Ubuntu, this is as easy as apt-get install
python-mysqldb. (See apt.)
Note: Both login_password and login_user are required when you are passing credentials. If none are present, the
module will attempt to read the credentials from ~/.my.cnf, and finally fall back to using the MySQL default login
of root with no password.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Change master to master server 192.168.1.1 and use binary log 'mysql-bin.000009' with position 4578
- mysql_replication: mode=changemaster master_host=192.168.1.1 master_log_file=mysql-bin.000009 master_log_pos=4578
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create database user with name 'bob' and password '12345' with all database privileges
- mysql_user: name=bob password=12345 priv=*.*:ALL state=present
# Creates database user 'bob' and password '12345' with all database privileges and 'WITH GRANT OPTION'
- mysql_user: name=bob password=12345 priv=*.*:ALL,GRANT state=present
# Ensure no user named 'sally' exists, also passing in the auth credentials.
- mysql_user: login_user=root login_password=123456 name=sally state=absent
[client]
user=root
password=n<_665{vS43y
Note: Requires the MySQLdb Python package on the remote host. For Ubuntu, this is as easy as apt-get install
python-mysqldb.
Note: Both login_password and login_username are required when you are passing credentials. If none are
present, the module will attempt to read the credentials from ~/.my.cnf, and finally fall back to using the MySQL
default login of ‘root’ with no password.
Note: MySQL server installs with default login_user of ‘root’ and no password. To secure this user as part of an
idempotent playbook, you must create at least two tasks: the first must change the root user’s password, without
providing any login_user/login_password details. The second must drop a ~/.my.cnf file containing the new root
credentials. Subsequent runs of the playbook will then succeed by reading the new credentials from the file.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a new database with name "acme" and specific encoding and locale
# settings. If a template different from "template0" is specified, encoding
# and locale settings must match those of the template.
- postgresql_db: name=acme
encoding='UTF-8'
lc_collate='de_DE.UTF-8'
lc_ctype='de_DE.UTF-8'
template='template0'
Note: The default authentication assumes that you are either logging in as or sudo’ing to the postgres account on
the host.
Note: This module uses psycopg2, a Python PostgreSQL database adapter. You must ensure that psycopg2 is
installed on the host before using this module. If the remote host is the PostgreSQL server (which is the default case),
then PostgreSQL must also be installed on the remote host. For Ubuntu-based systems, install the postgresql,
libpq-dev, and python-psycopg2 packages on the remote host before using this module.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# On database "library":
# GRANT SELECT, INSERT, UPDATE ON TABLE public.books, public.authors
# TO librarian, reader WITH GRANT OPTION
- postgresql_privs: >
database=library
state=present
privs=SELECT,INSERT,UPDATE
type=table
objs=books,authors
schema=public
roles=librarian,reader
grant_option=yes
Note: Default authentication assumes that postgresql_privs is run by the postgres user on the remote host. (Ansi-
ble’s user or sudo-user).
Note: This module requires Python package psycopg2 to be installed on the remote host. In the default case of
the remote host also being the PostgreSQL server, PostgreSQL has to be installed there as well, obviously. For
Debian/Ubuntu-based systems, install packages postgresql and python-psycopg2.
Note: Parameters that accept comma separated lists (privs, objs, roles) have singular alias names (priv, obj, role).
Note: To revoke only GRANT OPTION for a specific object, set state to present and grant_option to no (see
examples).
Note: When revoking privileges from a role R, this role may still have access via privileges granted to any
role R is a member of, including PUBLIC.
Note: When revoking privileges from a role R, you do so as the user specified via login. If R has been
granted the same privileges by another user also, R can still access database objects via these privileges.
• Synopsis
• Options
• Examples
Synopsis
Add or remove PostgreSQL users (roles) from a remote host and, optionally, grant the users access to an existing
database or tables. The fundamental function of the module is to create, or delete, roles from a PostgreSQL cluster.
Privilege assignment, or removal, is an optional step, which works on one database at a time. This allows for the
module to be called several times in the same module to modify the permissions on different databases, or to grant
permissions to already existing users. A user cannot be removed until all the privileges have been stripped from the
user. In such situation, if the module tries to remove the user it will fail. To avoid this from happening the fail_on_user
option signals the module to try to remove the user, but if not possible keep going; the module will report if changes
happened and separately if the user was removed or not.
Options
Examples
# Create django user and grant access to database and products table
- postgresql_user: db=acme name=django password=ceec4eif7ya priv=CONNECT/products:ALL
# Create rails user, grant privilege to create other databases and demote rails from super user status
- postgresql_user: name=rails password=secret role_attr_flags=CREATEDB,NOSUPERUSER
Note: The default authentication assumes that you are either logging in as or sudo’ing to the postgres account on the
host.
Note: This module uses psycopg2, a Python PostgreSQL database adapter. You must ensure that psycopg2 is installed
on the host before using this module. If the remote host is the PostgreSQL server (which is the default case), then
PostgreSQL must also be installed on the remote host. For Ubuntu-based systems, install the postgresql, libpq-dev,
and python-psycopg2 packages on the remote host before using this module.
Note: If you specify PUBLIC as the user, then the privilege changes will apply to all users. You may not specify password or role_attr_flags when the PUBLIC user is specified.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: Requires the redis-py Python package on the remote host. You can install it with pip (pip install redis) or with
a package manager. https://github.com/andymccurdy/redis-py
Note: If the redis master instance we are making slave of is password protected this needs to be in the redis.conf in
the masterauth variable
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
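A minimal sketch (path and user illustrative):
# Grant user joe read access to a file
- acl: name=/etc/foo.conf entity=joe etype=user permissions=r state=present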
Note: The “acl” module requires that acls are enabled on the target filesystem and that the setfacl and getfacl binaries
are installed.
• Synopsis
• Options
• Examples
Synopsis
Assembles a configuration file from fragments. Often a particular program takes a single configuration file and does
not support a conf.d style structure where it is easy to build up the configuration from multiple sources. assemble
will take a directory of files that can be local or have already been transferred to the system, and concatenate them
together to produce a destination file. Files are assembled in string sorting order. Puppet calls this idea fragments.
Options
Examples
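For example (paths illustrative):
# Concatenate the fragments in src, in string sorting order, into one destination file
- assemble: src=/etc/someapp/fragments dest=/etc/someapp/someapp.conf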
• Synopsis
• Options
• Examples
Synopsis
The copy module copies a file on the local box to remote locations.
Options
Examples
# Copy a new "ntp.conf file into place, backing up the original if it differs from the copied version
- copy: src=/https/www.scribd.com/mine/ntp.conf dest=/etc/ntp.conf owner=root group=root mode=644 backup=yes
# Copy a new "sudoers" file into place, after passing validation with visudo
- copy: src=/https/www.scribd.com/mine/sudoers dest=/etc/sudoers validate=’visudo -cf %s’
Note: The copy module's recursive copy facility does not scale to lots (>hundreds) of files. For an alternative, see the
synchronize module, which is a wrapper around rsync.
• Synopsis
• Options
• Examples
Synopsis
This module works like copy, but in reverse. It is used for fetching files from remote machines and storing them
locally in a file tree, organized by hostname. Note that this module is written to transfer log files that might not be
present, so a missing remote file won’t be an error unless fail_on_missing is set to ‘yes’.
Options
Examples
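For example (paths illustrative); the file is stored locally under a hostname-keyed tree:
# Saved locally as /tmp/fetched/{{ inventory_hostname }}/tmp/somefile
- fetch: src=/tmp/somefile dest=/tmp/fetched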
• Synopsis
• Options
• Examples
Synopsis
Sets attributes of files, symlinks, and directories, or removes files/symlinks/directories. Many other modules support
the same options as the file module - including copy, template, and assemble.
Options
Examples
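For example (path and ownership illustrative):
# Ensure ownership and permissions on a file
- file: path=/etc/foo.conf owner=foo group=foo mode=0644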
• Synopsis
• Options
• Examples
Synopsis
Manage (add, remove, change) individual settings in an INI-style file without having to manage the file as a whole
with, say, template or assemble. Adds missing sections if they don’t exist. Comments are discarded when the
source file is read, and therefore will not show up in the destination file.
Options
Examples
- ini_file: dest=/etc/anotherconf
section=drinks
option=temperature
value=cold
backup=yes
Note: While it is possible to add an option without specifying a value, this makes no sense.
Note: A section named default cannot be added by the module, but if it exists, individual options within the
section can be updated. (This is a limitation of Python’s ConfigParser.) Either use template to create a base INI
file with a [default] section, or use lineinfile to add the missing line.
lineinfile - Ensure a particular line is in a file, or replace an existing line using a back-referenced
regular expression.
• Synopsis
• Options
• Examples
Synopsis
This module will search a file for a line, and ensure that it is present or absent. This is primarily useful when you want
to change a single line in a file only. For other cases, see the copy or template modules.
Options
Examples
# Fully quoted because of the ': ' on the line. See the Gotchas in the YAML docs.
- lineinfile: "dest=/etc/sudoers state=present regexp='^%wheel' line='%wheel ALL=(ALL) NOPASSWD: ALL'"
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Obtain the stats of /etc/foo.conf, and check that the file still belongs
# to ’root’. Fail otherwise.
- stat: path=/etc/foo.conf
register: st
- fail: msg="Whoops! file ownership has changed"
when: st.stat.pw_name != 'root'
synchronize - Uses rsync to make synchronizing file paths in your playbooks quick and easy.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Synchronize and delete files in dest on the remote host that are not found in src of localhost.
synchronize: src=some/relative/path dest=/some/absolute/path delete=yes
- /var # exclude any path starting with ’var’ starting at the source directory
+ /var/conf # include /var/conf even though it was previously excluded
Note: Inspect the verbose output to validate the destination user/host/path are what was expected.
Note: The remote user for the dest path will always be the remote_user, not the sudo_user.
Note: To exclude files and directories from being synchronized, you may add .rsync-filter files to the source
directory.
• Synopsis
• Options
• Examples
Synopsis
Templates are processed by the Jinja2 templating language (http://jinja.pocoo.org/docs/) - documentation on the tem-
plate formatting can be found in the Template Designer Documentation (http://jinja.pocoo.org/docs/templates/). Six
additional variables can be used in templates: ansible_managed (configurable via the defaults section of
ansible.cfg) contains a string which can be used to describe the template name, host, modification time of the tem-
plate file and the owner uid, template_host contains the node name of the template’s machine, template_uid
the owner, template_path the absolute path of the template, template_fullpath is the absolute path of the
template, and template_run_date is the date that the template was rendered. Note that including a string that
uses a date in the template will result in the template being marked ‘changed’ each time.
Options
Examples
# Copy a new "sudoers file into place, after passing validation with visudo
- action: template src=/https/www.scribd.com/mine/sudoers dest=/etc/sudoers validate=’visudo -cf %s’
Note: Since Ansible version 0.9, templates are loaded with trim_blocks=True.
Note: Also, you can override jinja2 settings by adding a special header to template file. i.e.
#jinja2:variable_start_string:’[%’ , variable_end_string:’%]’ which changes the vari-
able interpolation markers to [% var %] instead of {{ var }}. This is the best way to prevent evaluation of things
that look like, but should not be Jinja2. raw/endraw in Jinja2 will not work as you expect because templates in Ansible
are recursively evaluated.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: can handle gzip, bzip2 and xz compressed as well as uncompressed tar files
Note: uses tar’s --diff arg to calculate if changed or not. If this arg is not supported, it will always unpack the
archive
Note: does not detect if a .zip file is different from destination - always unzips
Note: existing files/directories in the destination which are not in the archive are not touched. This is the same
behavior as a normal archive extraction
Note: existing files/directories in the destination which are not in the archive are ignored for purposes of deciding if
the archive should be unpacked or not
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
Synopsis
Options
add_host - add a host (and alternatively a group) to the ansible-playbook in-memory inventory
• Synopsis
• Options
• Examples
Synopsis
Use variables to create new hosts and groups in inventory for use in later plays of the same playbook. Takes variables
so you can define the new hosts more fully.
Options
Examples
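A minimal sketch (variable and group names illustrative):
# Add a freshly provisioned machine to an in-memory group for use in later plays
- add_host: name={{ new_instance_ip }} groups=just_created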
• Synopsis
• Options
• Examples
Synopsis
Use facts to create ad-hoc groups that can be used later in a playbook.
Options
Examples
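For example (key prefix illustrative):
# Create ad-hoc groups such as machine_x86_64 based on a fact
- group_by: key=machine_{{ ansible_machine }}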
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- name: ensure the default vhost contains the HA policy via a dict
rabbitmq_policy: name=HA pattern='.*'
args:
tags:
"ha-mode": all
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- airbrake_deployment: token=AAAAAA
environment='staging'
user='ansible'
revision=4.2
Author [email protected]
• Synopsis
• Options
• Examples
Synopsis
Options
Note: bprobe is required to send data, but not to register a meter
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
The nagios module has two basic functions: scheduling downtime and toggling alerts for services or hosts. All
actions require the host parameter to be given explicitly. In playbooks you can use the {{inventory_hostname}}
variable to refer to the host the playbook is currently running on. You can specify multiple services at once by
separating them with commas, e.g., services=httpd,nfs,puppet. When specifying what service to handle
there is a special service value, host, which will handle alerts/downtime for the host itself, e.g., service=host.
This keyword may not be given with other services at the same time. Setting alerts/downtime for a host does not affect
alerts/downtime for any of the services running on it. To schedule downtime for all services on particular host use
keyword “all”, e.g., service=all. When using the nagios module you will need to specify your Nagios server
using the delegate_to parameter.
Options
Examples
# SHUT UP NAGIOS
- nagios: action=silence_nagios
# ANNOY ME NAGIOS
- nagios: action=unsilence_nagios
# command something
- nagios: action=command command='DISABLE_FAILURE_PREDICTION'
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- newrelic_deployment: token=AAAAAA
app_name=myapp
user='ansible deployment'
revision=1.0
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a 4 hour maintenance window for service FOO123 with the description "deployment".
- pagerduty: name=companyabc
[email protected]
passwd=password123
state=running
service=FOO123
hours=4
desc=deployment
Note: This module does not yet have support to end maintenance windows.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: This module does not yet have support to add/remove checks.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
- name: enable interface Ethernet 1
action: arista_interface interface_id=Ethernet1 admin=up speed=10g duplex=full logging=true
Note: The Netdev extension for EOS must be installed and active in the available extensions (show extensions from
the EOS CLI)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
- name: create switchport ethernet1 access port
action: arista_l2interface interface_id=Ethernet1 logging=true
Note: The Netdev extension for EOS must be installed and active in the available extensions (show extensions from
the EOS CLI)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
- name: create lag interface
action: arista_lag interface_id=Port-Channel1 links=Ethernet1,Ethernet2 logging=true
Note: The Netdev extension for EOS must be installed and active in the available extensions (show extensions from
the EOS CLI)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
tasks:
- name: create vlan 999
action: arista_vlan vlan_id=999 logging=true
Note: The Netdev extension for EOS must be installed and active in the available extensions (show extensions from
the EOS CLI)
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
module: bigip_monitor_tcp
state: absent
server: "{{ f5server }}"
user: "{{ f5user }}"
password: "{{ f5password }}"
name: "{{ monitorname }}"
with_flattened:
- f5monitors-tcp
- f5monitors-halftcp
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
---
# file bigip-test.yml
# ...
- hosts: bigip-test
tasks:
- name: Add node
local_action: >
bigip_node
server=lb.mydomain.com
user=admin
password=mysecret
state=present
partition=matthite
host="{{ ansible_default_ipv4["address"] }}"
name="{{ ansible_default_ipv4["address"] }}"
# Note that the BIG-IP automatically names the node using the
# IP address specified in previous play’s host parameter.
# Future plays referencing this node no longer use the host
# parameter but instead use the name parameter.
# Alternatively, you could have specified a name with the
# name parameter when state=present.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
---
# file bigip-test.yml
# ...
- hosts: localhost
tasks:
- name: Create pool
local_action: >
bigip_pool
server=lb.mydomain.com
user=admin
password=mysecret
state=present
name=matthite-pool
partition=matthite
lb_method=least_connection_member
slow_ramp_time=120
- hosts: bigip-test
tasks:
- name: Add pool member
local_action: >
bigip_pool
server=lb.mydomain.com
user=admin
password=mysecret
state=present
name=matthite-pool
partition=matthite
host="{{ ansible_default_ipv4["address"] }}"
port=80
- hosts: localhost
tasks:
- name: Delete pool
local_action: >
bigip_pool
server=lb.mydomain.com
user=admin
password=mysecret
state=absent
name=matthite-pool
partition=matthite
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
---
# file bigip-test.yml
# ...
- hosts: bigip-test
tasks:
- name: Add pool member
local_action: >
bigip_pool_member
server=lb.mydomain.com
user=admin
password=mysecret
state=present
pool=matthite-pool
partition=matthite
host="{{ ansible_default_ipv4["address"] }}"
port=80
description="web server"
connection_limit=100
rate_limit=50
ratio=2
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: The DNS Made Easy service requires that machines interacting with the API have the proper time and timezone
set. Be sure you are within a few seconds of actual time by using NTP.
Note: This module returns record(s) in the “result” element when ‘state’ is set to ‘present’. This value can be
registered and used in your playbooks.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Downloads files from HTTP, HTTPS, or FTP to the remote server. The remote server must have direct access to the
remote resource. By default, if an environment variable <protocol>_proxy is set on the target host, requests
will be sent through that proxy. This behaviour can be overridden by setting a variable for this task (see setting the
environment), or by using the use_proxy option.
Options
Examples
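For example (URL and destination illustrative):
# Download a file to the remote host and set its mode
- get_url: url=http://example.com/path/file.conf dest=/etc/foo.conf mode=0440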
• Synopsis
• Options
• Examples
Synopsis
This module works like fetch. It is used for fetching a base64-encoded blob containing the data in a remote file.
Options
Examples
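A minimal sketch (path illustrative); the content field of the registered result holds the base64-encoded data:
- slurp: src=/etc/mtab
  register: mtab_contents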
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Check that you can connect (GET) to a page and it returns a status 200
- uri: url=http://www.example.com
# Check that a page returns a status 200 and fail if the word AWESOME is not in the page contents.
- action: uri url=http://www.example.com return_content=yes
register: webpage
- action: fail
when: 'AWESOME' not in "{{ webpage.content }}"
- action: >
uri url=https://your.jira.example.com/rest/api/2/issue/
method=POST user=your_username password=your_pass
body="{{ lookup('file','issue.json') }}" force_basic_auth=yes
status_code=201 HEADER_Content-Type="application/json"
- action: >
uri url=https://your.form.based.auth.example.com/index.php
method=POST body="name=your_username&password=your_password&enter=Sign%20in"
status_code=302 HEADER_Content-Type="application/x-www-form-urlencoded"
register: login
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- flowdock: type=inbox
token=AAAAAA
[email protected]
source='my cool app'
msg='test from ansible'
subject='test subject'
- flowdock: type=chat
token=AAAAAA
external_user_name=testuser
msg='test from ansible'
tags=tag1,tag2,tag3
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- grove: >
channel_token=6Ph62VBBJOccmtTPZbubiPzdrhipZXtg
service=my-app
message=deployed {{ target }}
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
This module is useful for sending emails from playbooks. One may wonder why automate sending emails? In complex
environments there are from time to time processes that cannot be automated, either because you lack the authority
to make it so, or because not everyone agrees to a common approach. If you cannot automate a specific step, but the
step is non-blocking, sending out an email to the responsible party to make him perform his part of the bargain is an
elegant way to put the responsibility in someone else’s lap. Of course sending out a mail can be equally useful as a
way to notify one or more people in a team that a specific action has been (successfully) taken.
Options
Examples
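For example (subject illustrative):
# Send a notification from the control machine after provisioning
- local_action: mail subject='System {{ ansible_hostname }} has been successfully provisioned.'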
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- local_action: mqtt
topic=service/ansible/{{ ansible_hostname }}
payload="Hello at {{ ansible_date_time.iso8601 }}"
qos=0
retain=false
client_id=ans001
Note: This module requires a connection to an MQTT broker such as Mosquitto http://mosquitto.org and the
mosquitto Python module (http://mosquitto.org/python).
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: If you like this module, you may also be interested in the osx_say callback in the plugins/ directory of the
source checkout.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Update the repository cache and update package "nginx" to latest version using default release squeeze-backports
- apt: pkg=nginx state=latest default_release=squeeze-backports update_cache=yes
# Only run "update_cache=yes" if the last one is more than more than 3600 seconds ago
- apt: update_cache=yes cache_valid_time=3600
Note: Three of the upgrade modes (full, safe and its alias yes) require aptitude, otherwise apt-get
suffices.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: As a sanity check, the downloaded key id must match the one specified
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# On Ubuntu target: add nginx stable repository from PPA and install its signing key.
# On Debian target: adding PPA is not available, so it will fail immediately.
apt_repository: repo='ppa:nginx/stable'
Note: This module works on Debian and Ubuntu and requires python-apt and python-pycurl packages.
Note: This module supports Debian Squeeze (version 6) as well as its successors.
Note: This module treats Debian and Ubuntu distributions separately. So PPA could be installed only on Ubuntu
machines.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: Please note that the easy_install module can only install Python libraries. Thus this module is not
able to remove libraries. It is generally recommended to use the pip module which you can first install using
easy_install.
Note: Also note that virtualenv must be installed on the remote host if the virtualenv parameter is specified.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Install packages based on package.json using the npm installed with nvm v0.10.1
- npm: path=/app/location executable=/opt/nvm/v0.10.1/bin/npm state=present
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Author: Afterburn
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Update the package database (pacman -Syy) and install bar (bar will be updated if a newer version is available)
- pacman: name=bar state=installed update_cache=yes
• Synopsis
• Options
• Examples
Synopsis
Manage Python library dependencies. To use this module, one of the following keys is required: name or
requirements.
Options
Examples
# Install (MyApp) using one of the remote protocols (bzr+,hg+,git+,svn+). You do not have to supply '-e' option in extra_args
- pip: name='svn+http://myrepo/svn/MyApp#egg=MyApp'
# Install (Bottle) into the specified (virtualenv), inheriting none of the globally installed modules
- pip: name=bottle virtualenv=/my_app/venv
# Install (Bottle) into the specified (virtualenv), inheriting globally installed modules
- pip: name=bottle virtualenv=/my_app/venv virtualenv_site_packages=yes
Note: Please note that virtualenv (http://www.virtualenv.org/) must be installed on the remote host if the virtualenv
parameter is specified.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Author: bleader
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: When using pkgsite, be aware that packages already in the cache won't be downloaded again.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Install a package
pkgutil: name=CSWcommon state=present
Author: berenddeboer
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
redhat_subscription - Manage Red Hat Network registration and subscriptions using the
subscription-manager command
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Register as user (joe_user) with password (somepass) and auto-subscribe to available content.
- redhat_subscription: action=register username=joe_user password=somepass autosubscribe=true
Note: In order to register a system, subscription-manager requires either a username and password, or an activation-
key.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
rhn_register - Manage Red Hat Network registration using the rhnreg_ks command
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Register as user (joe_user) with password (somepass) and auto-subscribe to available content.
- rhn_register: state=present username=joe_user password=somepass
Note: In order to register a system, rhnreg_ks requires either a username and password, or an activationkey.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Manages SVR4 packages on Solaris 10 and 11. These were the native packages on Solaris <= 10 and are available
as a legacy feature in Solaris 11. Note that this is a very basic packaging system. It will not enforce dependencies on
install or remove.
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Installs, upgrades, removes, and lists packages and groups with the yum package manager.
Options
Examples
- name: install the latest version of Apache from the testing repo
yum: name=httpd enablerepo=testing state=installed
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Install "nmap"
- zypper: name=nmap state=present
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
Note: If the task seems to be hanging, first verify remote host is in known_hosts. SSH will prompt user to
authorize the first contact with a remote host. To avoid this prompt, one solution is to add the remote host public
key in /etc/ssh/ssh_known_hosts before calling the git module, with the following command: ssh-keyscan
remote_host.com >> /etc/ssh/ssh_known_hosts.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Cleaning all hooks for this repo that had an error on the last update. Since this works for all hooks in the repo, it is probably best called from a handler.
- local_action: github_hooks action=cleanall user={{ gituser }} oauthkey={{ oauthkey }} repo={{ repo }}
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Ensure the current working copy is inside the stable branch and deletes untracked files if any.
- hg: repo=https://bitbucket.org/user/repo1 dest=/home/user/repo1 revision=stable purge=yes
Note: If the task seems to be hanging, first verify remote host is in known_hosts. SSH will prompt user to
authorize the first contact with a remote host. To avoid this prompt, one solution is to add the remote host public
key in /etc/ssh/ssh_known_hosts before calling the hg module, with the following command: ssh-keyscan
remote_host.com >> /etc/ssh/ssh_known_hosts.
• Synopsis
• Options
• Examples
Synopsis
Deploy given repository URL / revision to dest. If dest exists, update to the specified revision, otherwise perform a
checkout.
Options
Examples
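For example (repository URL and destination illustrative):
# Check out the repository, or update an existing working copy to the latest revision
- subversion: repo=svn+ssh://an.example.org/path/to/repo dest=/src/checkout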
• Synopsis
• Options
• Examples
Synopsis
Use this module to schedule a command or script to run once in the future. All jobs are executed in the ‘a’ queue.
Options
Note: Requires at
Examples
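A minimal sketch (command and delay illustrative):
# Schedule a command to run in 20 minutes
- at: command="ls -d / > /dev/null" count=20 units="minutes"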
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Example using key data from a local file on the management machine
- authorized_key: user=charlie key="{{ lookup('file', '/home/charlie/.ssh/id_rsa.pub') }}"
# Using with_file
- name: Set up authorized_keys for the deploy user
authorized_key: user=deploy
key="{{ item }}"
with_file:
- public_keys/doe-jane
- public_keys/doe-john
# Using key_options:
- authorized_key: user=charlie
key="{{ lookup('file', '/home/charlie/.ssh/id_rsa.pub') }}"
key_options='no-port-forwarding,host="10.0.1.1"'
• Synopsis
• Options
• Examples
Synopsis
Use this module to manage crontab entries. This module allows you to create named crontab entries, update, or
delete them. The module includes one line with the description of the crontab entry "#Ansible: <name>"
corresponding to the “name” passed to the module, which is used by future ansible/module calls to find/check the
state.
Options
Examples
# Ensure an old job is no longer present. Removes any job that is prefixed
# by "#Ansible: an old job" from the crontab
- cron: name="an old job" state=absent
• Synopsis
• Examples
Synopsis
Runs the facter discovery program (https://github.com/puppetlabs/facter) on the remote system, returning JSON data
that can be useful for inventory purposes.
Examples
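For example, an ad-hoc invocation:
# Run facter on a host and return its facts as JSON
ansible www.example.net -m facter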
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
- hostname: name=web01
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a volume group on top of /dev/sda1 with physical extent size = 32MB.
- lvg: vg=vg.services pvs=/dev/sda1 pesize=32
Note: The module does not modify the PE size of an already present volume group.
• Synopsis
• Options
• Examples
Synopsis
Options
Examples
# Create a logical volume the size of all remaining space in the volume group
- lvol: vg=firefly lv=test size=100%FREE
Synopsis
Similar to the facter module, this runs the Ohai discovery program (http://wiki.opscode.com/display/chef/Ohai) on
the remote host and returns JSON inventory data. Ohai data is a bit more verbose and nested than facter.
Examples
# Retrieve (ohai) data from all Web servers and store in one-file per host
ansible webservers -m ohai --tree=/tmp/ohaidata
Synopsis
A trivial test module, this module always returns pong on successful contact. It does not make sense in playbooks,
but it is useful from /usr/bin/ansible.
Examples
Synopsis
Configures the SELinux mode and policy. A reboot may be required after usage. Ansible will not issue this reboot but
will let you know when it is required.
Options
Examples
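A typical invocation might look like the following sketch (the policy name is illustrative):
# Enable SELinux enforcing mode with the targeted policy
- selinux: policy=targeted state=enforcing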
Synopsis
Options
Examples
# Example action to enable service httpd, and not touch the running state
- service: name=httpd enabled=yes
Synopsis
This module is automatically called by playbooks to gather useful variables about remote hosts that can be used in
playbooks. It can also be executed directly by /usr/bin/ansible to check what variables are available to a host.
Ansible provides many facts about the system, automatically.
Options
Examples
# Display facts from all hosts and store them indexed by hostname at /tmp/facts.
ansible all -m setup --tree /tmp/facts
# Display only facts regarding memory found by ansible on all hosts and output them.
ansible all -m setup -a ’filter=ansible_*_mb’
Note: More ansible facts will be added with successive releases. If facter or ohai are installed, variables from
these programs will also be snapshotted into the JSON file for usage in templating. These variables are prefixed with
facter_ and ohai_ so it’s easy to tell their source. All variables are bubbled up to the caller. Using the ansible
facts and choosing to not install facter and ohai means you can avoid Ruby dependencies on your remote systems.
(See also facter and ohai.)
Note: The filter option filters only the first level subkey below ansible_facts.
Synopsis
Options
Examples
# Set ip forwarding on in /proc and in the sysctl file and reload if necessary
- sysctl: name="net.ipv4.ip_forward" value=1 sysctl_set=yes state=present reload=yes
Synopsis
Options
Examples
# Add the user ’johnd’ with a specific uid and a primary group of ’admin’
- user: name=johnd comment="John Doe" uid=1040 group=admin
Synopsis
Options
Examples
# To use accelerate mode, simply add "accelerate: true" to your play. The initial
# key exchange and starting up of the daemon will occur over SSH, but all commands and
# subsequent actions will be conducted over the raw socket connection using AES encryption
- hosts: devservers
accelerate: true
tasks:
- command: /usr/bin/anything
Note: See the advanced playbooks chapter for more about using accelerated mode.
Synopsis
This module prints statements during execution and can be useful for debugging variables or expressions without
necessarily halting the playbook. Useful for debugging together with the ‘when:’ directive.
Options
Examples
# Example that prints the system uuid and the default gateway for each host
- debug: msg="System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}"
- debug: msg="System {{ inventory_hostname }} has gateway {{ ansible_default_ipv4.gateway }}"
  when: ansible_default_ipv4.gateway is defined
- shell: /usr/bin/uptime
register: result
- debug: var=result
Synopsis
This module fails the play with a custom message. It can be useful for bailing out when a certain condition is met
using when.
Options
Examples
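A sketch of a conditional bail-out (the variable and its expected value are illustrative):
# Abort the play for hosts whose CMDB status is unexpected
- fail: msg="The system may not be provisioned according to the CMDB status."
  when: cmdb_status != "to-be-staged"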
Synopsis
This module launches an ephemeral fireball ZeroMQ message bus daemon on the remote node which Ansible can
use to communicate with nodes at high speed. The daemon listens on a configurable port for a configurable amount of
time. Starting a new fireball as a given user terminates any existing user fireballs. Fireball mode is AES encrypted.
Options
Examples
# This example playbook has two plays: the first launches ’fireball’ mode on all hosts via SSH, and
# the second actually starts using it for subsequent management over the fireball connection
- hosts: devservers
gather_facts: false
connection: ssh
sudo: yes
tasks:
- action: fireball
- hosts: devservers
connection: fireball
tasks:
- command: /usr/bin/anything
Note: See the advanced playbooks chapter for more about using fireball mode.
Synopsis
Pauses playbook execution for a set amount of time, or until a prompt is acknowledged. All parameters are optional.
The default behavior is to pause with a prompt. You can use ctrl+c if you wish to advance a pause earlier than it is
set to expire or if you need to abort a playbook run entirely. To continue early: press ctrl+c and then c. To abort
a playbook: press ctrl+c and then a. The pause module integrates into async/parallelized playbooks without any
special considerations (see also: Rolling Updates). When using pauses with the serial playbook parameter (as in
rolling updates) you are only prompted once for the current group of hosts.
Options
Examples
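For illustration, a timed pause and a prompted pause might look like these sketches (durations and prompt text are illustrative):
# Pause for 5 minutes to let a cache rebuild
- pause: minutes=5
# Pause until a human confirms
- pause: prompt="Verify the application updates were successful, then press enter"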
Synopsis
Waiting for a port to become available is useful for when services are not immediately available after their init scripts
return - which is true of certain Java application servers. It is also useful when starting guests with the virt module
and needing to pause until they are ready. This module can also be used to wait for a file to be available on the
filesystem, or for a string (matched with a search regex) to be present in a file.
Options
Examples
# wait 300 seconds for port 8000 to become open on the host, don’t start checking for 10 seconds
- wait_for: port=8000 delay=10
# wait until the string "completed" is in the file /tmp/foo before continuing
- wait_for: path=/tmp/foo search_regex=completed
Synopsis
Options
Examples
app_path={{ django_dir }}
settings={{ settings_app_name }}
pythonpath={{ settings_dir }}
virtualenv={{ virtualenv_dir }}
# Run the SmokeTest test case from the main app. Useful for testing deploys.
- django_manage: command=test app_path={{ django_dir }} apps=main.SmokeTest
Note: virtualenv (http://www.virtualenv.org) must be installed on the remote host if the virtualenv parameter is
specified.
Note: This module will create a virtualenv if the virtualenv parameter is specified and a virtualenv does not already
exist at the given location.
Note: This module assumes English error messages for the ‘createcachetable’ command to detect table existence,
unfortunately.
Note: To be able to use the migrate command, you must have south installed and added as an app in your settings.
Note: To be able to use the collectstatic command, you must have enabled staticfiles in your settings.
Synopsis
Options
Examples
Example playbook entries using the ejabberd_user module to manage user state:
tasks:
  - name: create a user if it does not exist
    ejabberd_user: username=test host=server password=password
  - name: delete a user if it exists
    ejabberd_user: username=test host=server state=absent
Synopsis
Options
Examples
Note: This module depends on the passlib Python library, which needs to be installed on all target systems.
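This note belongs to the htpasswd module; assuming that module, an entry might look like the following sketch (path, credentials, and permissions are illustrative):
# Add a user to a password file and set file permissions
- htpasswd: path=/etc/nginx/passwdfile name=janedoe password='9s36?;fyNp' owner=root group=www-data mode=0640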
Synopsis
Options
Examples
Note: Ensure no identically named application is deployed through the JBoss CLI.
supervisorctl - Manage the state of a program or group of programs running via Supervisord
Synopsis
Options
Examples
This section is new and evolving. The idea here is to explore particular use cases in greater depth and provide a more
“top down” explanation of some basic features.
Introduction
Note: This section of the documentation is under construction. We are in the process of adding more examples about
all of the EC2 modules and how they work together. There’s also an ec2 example in the language_features directory of
the ansible-examples github repository that you may wish to consult. Once complete, there will also be new examples
of ec2 in ansible-examples.
Ansible contains a number of core modules for interacting with Amazon Web Services (AWS). These also work
with Eucalyptus, which is an AWS compatible private cloud solution. There are other supported cloud types, but this
documentation chapter is about AWS API clouds. The purpose of this section is to explain how to put Ansible modules
together (and use inventory scripts) to use Ansible in an AWS context.
Requirements for the AWS modules are minimal. All of the modules require and are tested against boto 2.5 or higher.
You’ll need this Python module installed on the execution host. If you are using Red Hat Enterprise Linux or CentOS,
install boto from EPEL:
$ yum install python-boto
In your playbooks, provisioning steps will typically use the following pattern:
- hosts: localhost
connection: local
gather_facts: False
Provisioning
The ec2 module provides the ability to provision instances within EC2. Typically the provisioning task will be
performed against your Ansible master server in a play that operates on localhost using the local connection type. If
you are doing an EC2 operation mid-stream inside a regular play operating on remote hosts, you may want to use the
local_action keyword for that particular task. Read Delegation, Rolling Updates, and Local Actions for more
about local actions.
Note: Authentication with the AWS-related modules is handled by either specifying your access and secret key as
ENV variables or passing them as module arguments.
Note: To talk to specific endpoints, the environmental variable EC2_URL can be set. This is useful if using a private
cloud like Eucalyptus, exporting the variable as EC2_URL=https://myhost:8773/services/Eucalyptus. This can be set
using the ‘environment’ keyword in Ansible if you like.
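As a sketch of that approach, the endpoint could be supplied through the play-level environment keyword (the endpoint URL and task parameters are illustrative):
- hosts: localhost
  connection: local
  environment:
    EC2_URL: https://myhost:8773/services/Eucalyptus
  tasks:
    - name: provision against the Eucalyptus endpoint
      ec2: keypair={{ mykeypair }} image={{ image }} instance_type=m1.small wait=yes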
In a play, a provisioning task might look like this (assuming the parameters are held as vars):
tasks:
- name: Provision a set of instances
ec2: >
keypair={{mykeypair}}
group={{security_group}}
instance_type={{instance_type}}
image={{image}}
wait=true
count={{number}}
register: ec2
By registering the return value, it’s then possible to dynamically create a host group consisting of these new instances.
This facilitates performing configuration actions on the hosts immediately in a subsequent task:
- name: Add all instance public IPs to host group
add_host: hostname={{ item.public_ip }} groupname=ec2hosts
with_items: ec2.instances
With the host group now created, a second play in your provision playbook might now have some configuration steps:
- name: Configuration play
hosts: ec2hosts
user: ec2-user
gather_facts: true
tasks:
- name: Check NTP service
service: name=ntpd state=started
Rather than include configuration inline, you may also choose to just do it as a task include or a role.
The method above ties the configuration of a host with the provisioning step. This isn’t always ideal and leads us onto
the next section.
Advanced Usage
Host Inventory
Once your nodes are spun up, you’ll probably want to talk to them again. The best way to handle this is to use the ec2
inventory plugin.
Even for larger environments, you might have nodes spun up from Cloud Formations or other tooling. You don’t have
to use Ansible to spin up guests. Once these are created and you wish to configure them, the EC2 API can be used
to return system grouping with the help of the EC2 inventory script. This script can be used to group resources by
their security group or tags. Tagging is highly recommended in EC2 and can provide an easy way to sort between host
groups and roles. The inventory script is documented in the API section.
You may wish to schedule a regular refresh of the inventory cache to accommodate frequent changes in resources:
# ./ec2.py --refresh-cache
Put this into a crontab as appropriate to make calls from your Ansible master server to the EC2 API endpoints and
gather host information. The aim is to keep the view of hosts as up-to-date as possible, so schedule accordingly.
Playbook calls could then also be scheduled to act on the refreshed hosts inventory after each refresh. This approach
means that machine images can remain “raw”, containing no payload and OS-only. Configuration of the workload is
handled entirely by Ansible.
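As an illustrative sketch, such a crontab entry might look like the following (the interval and script path are assumptions):
# Refresh the EC2 inventory cache every 15 minutes
*/15 * * * * /path/to/inventory/ec2.py --refresh-cache > /dev/null 2>&1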
Tags
There’s a feature in the ec2 inventory script where hosts tagged with certain keys and values automatically appear in
certain groups.
For instance, if a host is given the “class” tag with the value of “webserver”, it will be automatically discoverable via
a dynamic group like so:
- hosts: tag_class_webserver
tasks:
- ping
Using this philosophy can be a great way to manage groups dynamically, without having to maintain separate inventory.
Pull Configuration
For some the delay between refreshing host information and acting on that host information (i.e. running Ansible
tasks against the hosts) may be too long. This may be the case in such scenarios where EC2 AutoScaling is being
used to scale the number of instances as a result of a particular event. Such an event may require that hosts come
online and are configured as soon as possible (even a 1 minute delay may be undesirable). It’s possible to pre-bake
machine images which contain the necessary ansible-pull script and components to pull and run a playbook via git.
The machine images could be configured to run ansible-pull upon boot as part of the bootstrapping procedure.
Read Ansible-Pull for more information on pull-mode playbooks.
(Various developments around Ansible are also going to make this easier in the near future. Stay tuned!)
Ansible Tower also contains a very nice feature for auto-scaling use cases. In this mode, a simple curl script can call a
defined URL and the server will “dial out” to the requester and configure an instance that is spinning up. This can be
a great way to reconfigure ephemeral nodes. See the Tower documentation (linked in the sidebar) for more details.
A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded and less
information has to be shared with remote hosts.
Use Cases
This section covers some usage examples built around a specific use case.
Example 1
Example 1: I’m using CloudFormation to deploy a specific infrastructure stack. I’d like to manage con-
figuration of the instances with Ansible.
Provision instances with your tool of choice and consider using the inventory plugin to group hosts based on particular
tags or security group. Consider tagging instances you wish to manage with Ansible with a suitably unique key=value
tag.
Note: Ansible also has a cloudformation module you may wish to explore.
Example 2
Example 2: I’m using AutoScaling to dynamically scale up and scale down the number of instances.
This means the number of hosts is constantly fluctuating but I’m letting EC2 automatically handle the
provisioning of these instances. I don’t want to fully bake a machine image, I’d like to use Ansible to
configure the hosts.
There are several approaches to this use case. The first is to use the inventory plugin to regularly refresh host informa-
tion and then target hosts based on the latest inventory data. The second is to use ansible-pull triggered by a user-data
script (specified in the launch configuration) which would then mean that each instance would fetch Ansible and the
latest playbook from a git repository and run locally to configure itself. You could also use the Tower callback feature.
Example 3
Example 3: I don’t want to use Ansible to manage my instances but I’d like to consider using Ansible to
build my fully-baked machine images.
There’s nothing to stop you from doing this. If you like working with Ansible’s playbook format, write a playbook
to create an image: create an image file with dd, give it a filesystem, install packages, and finally chroot into
it for further configuration. Ansible has a ‘chroot’ plugin for this purpose; just add the following to your inventory
file:
/chroot/path ansible_connection=chroot
Example 4
How would I create a new ec2 instance, provision it and then destroy it all in the same play?
# Use the ec2 module to create a new host and then add
# it to a special "ec2hosts" group.
- hosts: localhost
connection: local
gather_facts: False
vars:
ec2_access_key: "--REMOVED--"
ec2_secret_key: "--REMOVED--"
keypair: "mykeyname"
instance_type: "t1.micro"
image: "ami-d03ea1e0"
group: "mysecuritygroup"
region: "us-west-2"
zone: "us-west-2c"
tasks:
- name: make one instance
ec2: image={{ image }}
instance_type={{ instance_type }}
aws_access_key={{ ec2_access_key }}
aws_secret_key={{ ec2_secret_key }}
keypair={{ keypair }}
instance_tags=’{"foo":"bar"}’
region={{ region }}
group={{ group }}
wait=true
register: ec2_info
- debug: var=ec2_info
- debug: var=item
with_items: ec2_info.instance_ids
- hosts: ec2hosts
gather_facts: True
user: ec2-user
sudo: True
tasks:
- hosts: ec2hosts
gather_facts: True
connection: local
vars:
ec2_access_key: "--REMOVED--"
ec2_secret_key: "--REMOVED--"
region: "us-west-2"
tasks:
- name: destroy all instances
ec2: state=’absent’
aws_access_key={{ ec2_access_key }}
aws_secret_key={{ ec2_secret_key }}
region={{ region }}
instance_ids={{ item }}
wait=true
with_items: hostvars[inventory_hostname][’ansible_ec2_instance-id’]
Note: more examples of this are pending. You may also be interested in the ec2_ami module for taking AMIs of
running instances.
Pending Information
Introduction
Note: This section of the documentation is under construction. We are in the process of adding more examples about
the Rackspace modules and how they work together. Once complete, there will also be examples for Rackspace Cloud
in ansible-examples.
Ansible contains a number of core modules for interacting with Rackspace Cloud.
The purpose of this section is to explain how to put Ansible modules together (and use inventory scripts) to use Ansible
in Rackspace Cloud context.
Prerequisites for using the rax modules are minimal. In addition to ansible itself, all of the modules require and are
tested against pyrax 1.5 or higher. You’ll need this Python module installed on the execution host.
pyrax is not currently available in many operating system package repositories, so you will likely need to install it via
pip:
$ pip install pyrax
The following steps will often execute from the control machine against the Rackspace Cloud API, so it makes sense
to add localhost to the inventory file. (Ansible may not require this manual step in the future):
[localhost]
localhost ansible_connection=local
Credentials File
The rax.py inventory script and all rax modules support a standard pyrax credentials file that looks like:
[rackspace_cloud]
username = myraxusername
api_key = d41d8cd98f00b204e9800998ecf8427e
Setting the environment variable RAX_CREDS_FILE to the path of this file tells Ansible where to load this
information from.
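For example, you might export it in your shell before invoking ansible (the path is illustrative):
$ export RAX_CREDS_FILE=~/.raxpub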
More information about this credentials file can be found at https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#auth
Special considerations need to be taken if pyrax is not installed globally but is instead installed in a Python virtualenv
(it’s fine if you install it globally).
Ansible assumes, unless otherwise instructed, that the python binary will live at /usr/bin/python. This is done via the
interpreter line in the modules; however, when instructed using ansible_python_interpreter, Ansible will use this
specified path instead to find Python.
If using virtualenv, you may wish to modify your localhost inventory definition to find this location as follows:
[localhost]
localhost ansible_connection=local ansible_python_interpreter=/path/to/ansible_venv/bin/python
Provisioning
Note: Authentication with the Rackspace-related modules is handled by either specifying your username and API
key as environment variables or passing them as module arguments.
Here’s what it would look like in a playbook, assuming the parameters were defined in variables:
tasks:
- name: Provision a set of instances
local_action:
module: rax
name: "{{ rax_name }}"
flavor: "{{ rax_flavor }}"
image: "{{ rax_image }}"
count: "{{ rax_count }}"
group: "{{ group }}"
wait: yes
register: rax
By registering the return value of the step, it is then possible to dynamically add the resulting hosts to inventory
(temporarily, in memory). This facilitates performing configuration actions on the hosts immediately in a subsequent
task:
- name: Add the instances we created (by public IP) to the group ’raxhosts’
local_action:
module: add_host
hostname: "{{ item.name }}"
ansible_ssh_host: "{{ item.rax_accessipv4 }}"
ansible_ssh_pass: "{{ item.rax_adminpass }}"
groupname: raxhosts
with_items: rax.success
when: rax.action == ’create’
With the host group now created, a second play in your provision playbook could now configure them, for example:
- name: Configuration play
hosts: raxhosts
user: root
roles:
- ntp
- webserver
The method above ties the configuration of a host with the provisioning step. This isn’t always what you want, and
leads us to the next section.
Host Inventory
Once your nodes are spun up, you’ll probably want to talk to them again.
The best way to handle this is to use the rax inventory plugin, which dynamically queries Rackspace Cloud and tells
Ansible what nodes you have to manage.
You might want to use this even if you are spinning up instances via other tools, including the Rackspace Cloud user
interface.
The inventory plugin can be used to group resources by their meta data. Utilizing meta data is highly recommended
in rax and can provide an easy way to sort between host groups and roles.
If you don’t want to use the rax.py dynamic inventory script, you could also still choose to manually manage your
INI inventory file, though this is less recommended.
In Ansible it is quite possible to use multiple dynamic inventory plugins along with INI file data. Just put them in a
common directory and be sure the scripts are chmod +x, and the INI-based ones are not.
rax.py
To use the rackspace dynamic inventory script, copy rax.py from plugins/inventory into your inventory
directory and make it executable. You can specify credentials for rax.py utilizing the RAX_CREDS_FILE environment variable:
$ RAX_CREDS_FILE=~/.raxpub ansible all -i rax.py -m setup
Note: Users of Ansible Tower will note that dynamic inventory is natively supported by Tower, and all you have to
do is associate a group with your Rackspace Cloud credentials, and it will easily synchronize without going through
these steps.
rax.py also accepts a RAX_REGION environment variable, which can contain an individual region, or a comma
separated list of regions.
When using rax.py, you will not have a ‘localhost’ defined in the inventory.
As mentioned previously, you will often be running most of these modules outside of the host loop, and will need
‘localhost’ defined. The recommended way to do this, would be to create an inventory directory, and place both
the rax.py script and a file containing localhost in it.
Executing ansible or ansible-playbook and specifying the inventory directory instead of an individual
file, will cause ansible to evaluate each file in that directory for inventory.
Let’s test our inventory script to see if it can talk to Rackspace Cloud.
$ RAX_CREDS_FILE=~/.raxpub ansible all -i inventory/ -m setup
Assuming things are properly configured, the rax.py inventory script will output information similar to the
following, which will be utilized for inventory and variables.
{
"ORD": [
"test"
],
"_meta": {
"hostvars": {
"test": {
"ansible_ssh_host": "1.1.1.1",
"rax_accessipv4": "1.1.1.1",
"rax_accessipv6": "2607:f0d0:1002:51::4",
"rax_addresses": {
"private": [
{
"addr": "2.2.2.2",
"version": 4
}
],
"public": [
{
"addr": "1.1.1.1",
"version": 4
},
{
"addr": "2607:f0d0:1002:51::4",
"version": 6
}
]
},
"rax_config_drive": "",
"rax_created": "2013-11-14T20:48:22Z",
"rax_flavor": {
"id": "performance1-1",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/perfor
"rel": "bookmark"
}
]
},
"rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
"rax_human_id": "test",
"rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
"rax_image": {
"id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7b
"rel": "bookmark"
}
]
},
"rax_key_name": null,
"rax_links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447
"rel": "self"
},
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a
"rel": "bookmark"
}
],
"rax_metadata": {
"foo": "bar"
},
"rax_name": "test",
"rax_name_attr": "name",
"rax_networks": {
"private": [
"2.2.2.2"
],
"public": [
"1.1.1.1",
"2607:f0d0:1002:51::4"
]
},
"rax_os-dcf_diskconfig": "AUTO",
"rax_os-ext-sts_power_state": 1,
"rax_os-ext-sts_task_state": null,
"rax_os-ext-sts_vm_state": "active",
"rax_progress": 100,
"rax_status": "ACTIVE",
"rax_tenant_id": "111111",
"rax_updated": "2013-11-14T20:49:27Z",
"rax_user_id": "22222"
}
}
}
}
Standard Inventory
When utilizing a standard INI-formatted inventory file (as opposed to the inventory plugin), it may still be advantageous
to retrieve discoverable hostvar information from the Rackspace API.
This can be achieved with the rax_facts module and an inventory file similar to the following:
[test_servers]
hostname1 rax_region=ORD
hostname2 rax_region=ORD
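A play to gather those facts and map one into a connection variable might look like the following sketch (the credentials path is an assumption):
- name: Gather instance facts
  hosts: test_servers
  gather_facts: False
  tasks:
    - name: Get facts about servers
      local_action:
        module: rax_facts
        credentials: ~/.raxpub
        name: "{{ inventory_hostname }}"
        region: "{{ rax_region }}"
    - name: Map some facts
      set_fact:
        ansible_ssh_host: "{{ rax_accessipv4 }}"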
While you don’t need to know how it works, it may be interesting to know what kind of variables are returned.
The rax_facts module provides facts as follows, which match the rax.py inventory script:
{
"ansible_facts": {
"rax_accessipv4": "1.1.1.1",
"rax_accessipv6": "2607:f0d0:1002:51::4",
"rax_addresses": {
"private": [
{
"addr": "2.2.2.2",
"version": 4
}
],
"public": [
{
"addr": "1.1.1.1",
"version": 4
},
{
"addr": "2607:f0d0:1002:51::4",
"version": 6
}
]
},
"rax_config_drive": "",
"rax_created": "2013-11-14T20:48:22Z",
"rax_flavor": {
"id": "performance1-1",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1
"rel": "bookmark"
}
]
},
"rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
"rax_human_id": "test",
"rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
"rax_image": {
"id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4
"rel": "bookmark"
}
]
},
"rax_key_name": null,
"rax_links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-4
"rel": "self"
},
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f
"rel": "bookmark"
}
],
"rax_metadata": {
"foo": "bar"
},
"rax_name": "test",
"rax_name_attr": "name",
"rax_networks": {
"private": [
"2.2.2.2"
],
"public": [
"1.1.1.1",
"2607:f0d0:1002:51::4"
]
},
"rax_os-dcf_diskconfig": "AUTO",
"rax_os-ext-sts_power_state": 1,
"rax_os-ext-sts_task_state": null,
"rax_os-ext-sts_vm_state": "active",
"rax_progress": 100,
"rax_status": "ACTIVE",
"rax_tenant_id": "111111",
"rax_updated": "2013-11-14T20:49:27Z",
"rax_user_id": "22222"
},
"changed": false
}
Use Cases
This section covers some additional usage examples built around a specific use case.
Example 1
Example 2
Build a complete webserver environment with servers, custom networks and load balancers, install nginx and create a
custom index.html
---
- name: Build environment
hosts: localhost
connection: local
gather_facts: False
tasks:
- name: Load Balancer create request
local_action:
module: rax_clb
credentials: ~/.raxpub
name: my-lb
port: 80
protocol: HTTP
algorithm: ROUND_ROBIN
type: PUBLIC
timeout: 30
region: IAD
wait: yes
state: present
meta:
app: my-cool-app
register: clb
- name: Add servers to web server load balancer
local_action:
module: rax_clb_nodes
credentials: ~/.raxpub
load_balancer_id: "{{ clb.balancer.id }}"
address: "{{ item.rax_networks.private|first }}"
port: 80
condition: enabled
type: primary
wait: yes
region: IAD
with_items: rax.success
when: rax.action == ’create’
tasks:
- name: Install nginx
apt: pkg=nginx state=latest update_cache=yes cache_valid_time=86400
notify:
- restart nginx
Advanced Usage
Ansible Tower also contains a very nice feature for auto-scaling use cases. In this mode, a simple curl script can call a
defined URL and the server will “dial out” to the requester and configure an instance that is spinning up. This can be
a great way to reconfigure ephemeral nodes. See the Tower documentation for more details.
A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded and less informa-
tion has to be shared with remote hosts.
Pending Information
More to come!
Introduction
Vagrant is a tool to manage virtual machine environments, and allows you to configure and use reproducible work
environments on top of various virtualization and cloud platforms. It also has integration with Ansible as a provisioner
for these virtual machines, and the two tools work together well.
This guide will describe how to use Vagrant and Ansible together.
If you’re not familiar with Vagrant, you should visit the documentation.
This guide assumes that you already have Ansible installed and working. Running from a Git checkout is fine. Follow
the Installation guide for more information.
Vagrant Setup
The first step once you’ve installed Vagrant is to create a Vagrantfile and customize it to suit your needs. This is
covered in detail in the Vagrant documentation, but here is a quick example:
$ mkdir vagrant-test
$ cd vagrant-test
$ vagrant init precise32 http://files.vagrantup.com/precise32.box
This will create a file called Vagrantfile that you can edit to suit your needs. The default Vagrantfile has a lot of
comments. Here is a simplified example that includes a section to use the Ansible provisioner:
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "precise32"
  config.vm.box_url = "http://files.vagrantup.com/precise32.box"
  config.vm.network :public_network

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end
The Vagrantfile has a lot of options, but these are the most important ones. Notice the config.vm.provision
section that refers to an Ansible playbook called playbook.yml in the same directory as the Vagrantfile. Vagrant
runs the provisioner once the virtual machine has booted and is ready for SSH access:
$ vagrant up
Sometimes you may want to run Ansible manually against the machines. This is pretty easy to do.
Vagrant automatically creates an inventory file for each Vagrant machine in the same directory called
vagrant_ansible_inventory_machinename. It configures the inventory file according to the SSH tun-
nel that Vagrant automatically creates, and executes ansible-playbook with the correct username and SSH key
options to allow access. A typical automatically-created inventory file may look something like this:
# Generated by Vagrant
machine ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
If you want to run Ansible manually, you will want to make sure to pass ansible or ansible-playbook
commands the correct arguments for the username (usually vagrant) and the SSH key (usually
~/.vagrant.d/insecure_private_key), and the autogenerated inventory file.
Here is an example:
$ ansible-playbook -i vagrant_ansible_inventory_machinename --private-key=~/.vagrant.d/insecure_private_key playbook.yml
See also:
Vagrant Home The Vagrant homepage with downloads
Vagrant Documentation Vagrant Documentation
Ansible Provisioner The Vagrant documentation for the Ansible provisioner
Playbooks An introduction to playbooks
Introduction
Continuous Delivery is the concept of frequently delivering updates to your software application.
The idea is that by updating more often, you do not have to wait for a specific timed period, and your organization gets
better at the process of responding to change.
Some Ansible users are deploying updates to their end users on an hourly or even more frequent basis – sometimes
every time there is an approved code change. To achieve this, you need tools to be able to quickly apply those updates
in a zero-downtime way.
This document describes in detail how to achieve this goal, using one of Ansible’s most complete example playbooks
as a template: lamp_haproxy. This example uses a lot of Ansible features: roles, templates, and group variables, and
it also comes with an orchestration playbook that can do zero-downtime rolling upgrades of the web application stack.
Note: Click here for the latest playbooks for this example.
The playbooks deploy Apache, PHP, MySQL, Nagios, and HAProxy to a CentOS-based set of servers.
We’re not going to cover how to run these playbooks here. Read the included README in the github project along
with the example for that information. Instead, we’re going to take a close look at every part of the playbook and
describe what it does.
Site Deployment
Let’s start with site.yml. This is our site-wide deployment playbook. It can be used to initially deploy the site, as
well as push updates to all of the servers:
---
# This playbook deploys the whole application stack in this site.

# Apply common configuration to all hosts
- hosts: all
  roles:
    - common

# Configure and deploy database servers.
- hosts: dbservers
  roles:
    - db

# Configure and deploy the web servers. Note that we include two roles
# here, the 'base-apache' role which simply sets up Apache, and 'web'
# which includes our example web application.
- hosts: webservers
  roles:
    - base-apache
    - web

# Configure and deploy the load balancer(s).
- hosts: lbservers
  roles:
    - haproxy

# Configure and deploy the Nagios monitoring node(s).
- hosts: monitoring
  roles:
    - base-apache
    - nagios
Note: If you’re not familiar with terms like playbooks and plays, you should review Playbooks.
In this playbook we have 5 plays. The first one targets all hosts and applies the common role to all of the hosts. This
is for site-wide things like yum repository configuration, firewall configuration, and anything else that needs to apply
to all of the servers.
The next four plays run against specific host groups and apply specific roles to those servers. Along with the roles for
Nagios monitoring, the database, and the web application, we’ve implemented a base-apache role that installs and
configures a basic Apache setup. This is used by both the sample web application and the Nagios hosts.
By now you should have a bit of understanding about roles and how they work in Ansible. Roles are a way to organize
content: tasks, handlers, templates, and files, into reusable components.
This example has six roles: common, base-apache, db, haproxy, nagios, and web. How you organize your
roles is up to you and your application, but most sites will have one or more common roles that are applied to all
systems, and then a series of application-specific roles that install and configure particular parts of the site.
Roles can have variables and dependencies, and you can pass in parameters to roles to modify their behavior. You can
read more about roles in the Playbook Roles and Include Statements section.
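For instance, a parameterized role might be applied like this sketch (the role name and variable are illustrative):
- hosts: webservers
  roles:
    - common
    - { role: web, http_port: 8080 }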
Group variables are variables that are applied to groups of servers. They can be used in templates and in playbooks
to customize behavior and to provide easily-changed settings and parameters. They are stored in a directory called
group_vars in the same location as your inventory. Here is lamp_haproxy’s group_vars/all file. As you
might expect, these variables are applied to all of the machines in your inventory:
---
httpd_port: 80
ntpserver: 192.168.1.2
This is a YAML file, and you can create lists and dictionaries for more complex variable structures. In this case, we
are just setting two variables, one for the port for the web server, and one for the NTP server that our machines should
use for time synchronization.
Here’s another group variables file. This is group_vars/dbservers which applies to the hosts in the
dbservers group:
---
mysqlservice: mysqld
mysql_port: 3306
dbuser: root
dbname: foodb
upassword: usersecret
If you look in the example, there are group variables for the webservers group and the lbservers group, simi-
larly.
These variables are used in a variety of places. You can use them in playbooks, like this, in
roles/db/tasks/main.yml:
- name: Create Application Database
mysql_db: name={{ dbname }} state=present
You can also use these variables in templates, like this, in roles/common/templates/ntp.conf.j2:
driftfile /var/lib/ntp/drift
restrict 127.0.0.1
restrict -6 ::1
server {{ ntpserver }}
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
You can see that the variable substitution syntax of {{ and }} is the same for both templates and variables. The
syntax inside the curly braces is Jinja2, and you can do all sorts of operations and apply different filters to the data
inside. In templates, you can also use for loops and if statements to handle more complex situations, like this, in
roles/common/templates/iptables.j2:
{% if inventory_hostname in groups[’dbservers’] %}
-A INPUT -p tcp --dport 3306 -j ACCEPT
{% endif %}
This is testing to see if the inventory name of the machine we’re currently operating on (inventory_hostname)
exists in the inventory group dbservers. If so, that machine will get an iptables ACCEPT line for port 3306.
Here’s another example, from the same template:
{% for host in groups[’monitoring’] %}
-A INPUT -p tcp -s {{ hostvars[host].ansible_default_ipv4.address }} --dport 5666 -j ACCEPT
{% endfor %}
This loops over all of the hosts in the group called monitoring, and adds an ACCEPT line for each monitoring
host’s default IPv4 address to the current machine’s iptables configuration, so that Nagios can monitor those hosts.
You can learn a lot more about Jinja2 and its capabilities here, and you can read more about Ansible variables in
general in the Variables section.
Now you have a fully-deployed site with web servers, a load balancer, and monitoring. How do you update it? This is
where Ansible’s orchestration features come into play. While some applications use the term ‘orchestration’ to mean
basic ordering or command-blasting, Ansible refers to orchestration as ‘conducting machines like an orchestra’, and
has a pretty sophisticated engine for it.
Ansible has the capability to do operations on multi-tier applications in a coordinated way, making it easy to orchestrate
a sophisticated zero-downtime rolling upgrade of our web application. This is implemented in a separate playbook,
called rolling_upgrade.yml.
Looking at the playbook, you can see it is made up of two plays. The first play is very simple and looks like this:
- hosts: monitoring
tasks: []
What’s going on here, and why are there no tasks? You might know that Ansible gathers “facts” from the servers
before operating upon them. These facts are useful for all sorts of things: networking information, OS/distribution
versions, etc. In our case, we need to know something about all of the monitoring servers in our environment before
we perform the update, so this simple play forces a fact-gathering step on our monitoring servers. You will see this
pattern sometimes, and it’s a useful trick to know.
The next part is the update play. The first part looks like this:
- hosts: webservers
user: root
serial: 1
This is just a normal play definition, operating on the webservers group. The serial keyword tells Ansible how
many servers to operate on at once. If it’s not specified, Ansible will parallelize these operations up to the default
“forks” limit specified in the configuration file. But for a zero-downtime rolling upgrade, you may not want to operate
on that many hosts at once. If you had just a handful of webservers, you may want to set serial to 1, for one host at
a time. If you have 100, maybe you could set serial to 10, for ten at a time.
Here is the next part of the update play:
pre_tasks:
- name: disable nagios alerts for this host webserver service
  nagios: action=disable_alerts host={{ ansible_hostname }} services=webserver
  delegate_to: "{{ item }}"
  with_items: groups.monitoring

- name: disable the server in haproxy
  shell: echo "disable server myapplb/{{ ansible_hostname }}" | socat stdio /var/lib/haproxy/stats
  delegate_to: "{{ item }}"
  with_items: groups.lbservers
The pre_tasks keyword just lets you list tasks to run before the roles are called. This will make more sense in a
minute. If you look at the names of these tasks, you can see that we are disabling Nagios alerts and then removing the
webserver that we are currently updating from the HAProxy load balancing pool.
The delegate_to and with_items arguments, used together, cause Ansible to loop over each monitoring server
and load balancer, and perform that operation (delegate that operation) on the monitoring or load balancing server, “on
behalf” of the webserver. In programming terms, the outer loop is the list of web servers, and the inner loop is the list
of monitoring servers.
Note that the HAProxy step looks a little complicated. We’re using HAProxy in this example because it’s freely
available, though if you have (for instance) an F5 or Netscaler in your infrastructure (or maybe you have an AWS
Elastic IP setup?), you can use modules included in core Ansible to communicate with them instead. You might also
wish to use other monitoring modules instead of nagios, but this just shows the main goal of the ‘pre tasks’ section –
take the server out of monitoring, and take it out of rotation.
The next step simply re-applies the proper roles to the web servers. This will cause any configuration management
declarations in web and base-apache roles to be applied to the web servers, including an update of the web
application code itself. We don’t have to do it this way–we could instead just purely update the web application, but
this is a good example of how roles can be used to reuse tasks:
roles:
- common
- base-apache
- web
Finally, in the post_tasks section, we reverse the changes to the Nagios configuration and put the web server back
in the load balancing pool:
post_tasks:
- name: Enable the server in haproxy
shell: echo "enable server myapplb/{{ ansible_hostname }}" | socat stdio /var/lib/haproxy/stats
delegate_to: "{{ item }}"
with_items: groups.lbservers
Again, if you were using a Netscaler or F5 or Elastic Load Balancer, you would just substitute in the appropriate
modules instead.
In this example, we use the simple HAProxy load balancer to front-end the web servers. It’s easy to configure and
easy to manage. As we have mentioned, Ansible has built-in support for a variety of other load balancers like Citrix
NetScaler, F5 BigIP, Amazon Elastic Load Balancers, and more. See the About Modules documentation for more
information.
For other load balancers, you may need to send shell commands to them (like we do for HAProxy above), or call an
API, if your load balancer exposes one. For the load balancers for which Ansible has modules, you may want to run
them as a local_action if they contact an API. You can read more about local actions in the Delegation, Rolling
Updates, and Local Actions section. Should you develop anything interesting for some hardware where there is not a
core module, it might make for a good module for core inclusion!
Now that you have an automated way to deploy updates to your application, how do you tie it all together? A lot of
organizations use a continuous integration tool like Jenkins or Atlassian Bamboo to tie the development, test, release,
and deploy steps together. You may also want to use a tool like Gerrit to add a code review step to commits to either
the application code itself, or to your Ansible playbooks, or both.
Depending on your environment, you might be deploying continuously to a test environment, running an integration
test battery against that environment, and then deploying automatically into production. Or you could keep it simple
and just use the rolling-update for on-demand deployment into test or production specifically. This is all up to you.
For integration with Continuous Integration systems, you can easily trigger playbook runs using the
ansible-playbook command line tool, or, if you’re using Ansible Tower, the tower-cli or the built-in REST
API. (The tower-cli command ‘joblaunch’ will spawn a remote job over the REST API and is pretty slick).
This should give you a good idea of how to structure a multi-tier application with Ansible, and orchestrate operations
upon that app, with the eventual goal of continuous delivery to your customers. You could extend the idea of the
rolling upgrade to lots of different parts of the app; maybe add front-end web servers along with application servers,
for instance, or replace the SQL database with something like MongoDB or Riak. Ansible gives you the capability to
easily manage complicated environments and automate common operations.
See also:
lamp_haproxy example The lamp_haproxy example discussed here.
Playbooks An introduction to playbooks
Playbook Roles and Include Statements An introduction to playbook roles
Variables An introduction to Ansible variables
Ansible.com: Continuous Delivery An introduction to Continuous Delivery with Ansible
Pending topics may include: Docker, Jenkins, Google Compute Engine, Linode/Digital Ocean, Continuous Deployment, and more.
Learn how to build modules of your own in any language, and also how to extend Ansible through several kinds of
plugins. Explore Ansible’s Python API and write Python plugins to integrate with other solutions in your environment.
Topics
• Python API
– Python API
* Detailed API Example
There are several interesting ways to use Ansible from an API perspective. You can use the Ansible python API to
control nodes, you can extend Ansible to respond to various python events, you can write various plugins, and you
can plug in inventory data from external data sources. This document covers the Runner and Playbook API at a basic
level.
If you are looking to use Ansible programmatically from something other than Python, trigger events asynchronously,
or have access control and logging demands, take a look at Ansible Tower as it has a very nice REST API that provides
all of these things at a higher level.
Ansible itself is implemented on top of this API, so you have a considerable amount of power across the board. This
chapter discusses the Python API.
Python API
The Python API is very powerful, and is how the ansible CLI and ansible-playbook are implemented.
It’s pretty simple:
import ansible.runner
runner = ansible.runner.Runner(
module_name=’ping’,
module_args=’’,
pattern=’web*’,
forks=10
)
datastructure = runner.run()
The run method returns results per host, grouped by whether they could be contacted or not. Return types are module
specific, as expressed in the About Modules documentation:
{
"dark" : {
"web1.example.com" : "failure message"
},
"contacted" : {
"web2.example.com" : 1
}
}
A module can return any type of JSON data it wants, so Ansible can be used as a framework to rapidly build powerful
applications and scripts.
The following script prints out the uptime information for all hosts:
#!/usr/bin/python
import ansible.runner
import sys

# run the uptime command on all hosts matching the pattern
results = ansible.runner.Runner(
    module_name='command', module_args='/usr/bin/uptime', pattern='*',
).run()

if results is None:
    print "No hosts found"
    sys.exit(1)
for (hostname, result) in results['contacted'].items():
    if not 'failed' in result:
        print "%s >>> %s" % (hostname, result['stdout'])
Advanced programmers may also wish to read the source to ansible itself, for it uses the Runner() API (with all
available options) to implement the command line tools ansible and ansible-playbook.
See also:
Developing Dynamic Inventory Sources Developing dynamic inventory integrations
Developing Modules How to develop modules
Developing Plugins How to develop plugins
Development Mailing List Mailing list for development topics
irc.freenode.net #ansible IRC chat channel
Topics
• Script Conventions
• Tuning the External Inventory Script
As described in Dynamic Inventory, ansible can pull inventory information from dynamic sources, including cloud
sources.
How do we write a new one?
Simple! We just create a script or program that can return JSON in the right format when fed the proper arguments.
You can do this in any language.
Script Conventions
When the external node script is called with the single argument --list, the script must return a JSON
hash/dictionary of all the groups to be managed. Each group’s value should be either a hash/dictionary containing
a list of each host/IP, potential child groups, and potential group variables, or simply a list of host/IP addresses, like
so:
{
"databases" : {
"hosts" : [ "host1.example.com", "host2.example.com" ],
"vars" : {
"a" : true
}
},
"webservers" : [ "host2.example.com", "host3.example.com" ],
"atlanta" : {
"hosts" : [ "host1.example.com", "host4.example.com", "host5.example.com" ],
"vars" : {
"b" : false
},
"children": [ "marietta", "5points" ]
},
"marietta" : [ "host6.example.com" ],
"5points" : [ "host7.example.com" ]
}
"_meta" : {
"hostvars" : {
"moocow.example.com" : { "asdf" : 1234 },
"llama.example.com" : { "asdf" : 5678 },
}
}
See also:
Python API Python API to Playbooks and Ad Hoc Task Execution
Topics
• Developing Modules
– Tutorial
– Testing Modules
– Reading Input
– Module Provided ‘Facts’
– Common Module Boilerplate
– Check Mode
– Common Pitfalls
– Conventions/Recommendations
– Shorthand Vs JSON
– Documenting Your Module
* Example
* Building & Testing
– Getting Your Module Into Core
Ansible modules are reusable units of magic that can be used by the Ansible API, or by the ansible or ansible-playbook
programs.
See About Modules for a list of various ones developed in core.
Modules can be written in any language and are found in the path specified by ANSIBLE_LIBRARY or the
--module-path command line option.
Should you develop an interesting Ansible module, consider sending a pull request to the github project to see about
getting your module included in the core project.
Tutorial
Let’s build a very-basic module to get and set the system time. For starters, let’s build a module that just outputs the
current time.
We are going to use Python here but any language is possible. Only File I/O and outputting to standard out are required.
So, bash, C++, clojure, Python, Ruby, whatever you want is fine.
Now Python Ansible modules contain some extremely powerful shortcuts (that all the core modules use) but first we
are going to build a module the very hard way. The reason we do this is because modules written in any language
OTHER than Python are going to have to do exactly this. We’ll show the easy way later.
So, here’s an example. You would never really need to build a module to set the system time, the ‘command’ module
could already be used to do this. Though we’re going to make one.
Reading the modules that come with ansible (linked above) is a great way to learn how to write modules. Keep in
mind, though, that some modules in ansible’s source tree are internalisms, so look at service or yum, and don’t stare
too close into things like async_wrapper or you’ll turn to stone. Nobody ever executes async_wrapper directly.
Ok, let’s get going with an example. We’ll use Python. For starters, save this as a file named time:
#!/usr/bin/python
import datetime
import json
date = str(datetime.datetime.now())
print json.dumps({
"time" : date
})
Testing Modules
There’s a useful test script in the hacking directory of the ansible source checkout; let’s run the module you just
wrote with it:
ansible/hacking/test-module -m ./time
You should see output that looks something like this:
{"time": "2012-03-14 22:13:48.539183"}
If you did not, you might have a typo in your module, so recheck it and try again.
Reading Input
Let’s modify the module to allow setting the current time. We’ll do this by seeing if a key value pair in the form
time=<string> is passed in to the module.
Ansible internally saves arguments to an arguments file. So we must read the file and parse it. The arguments file is
just a string, so any form of arguments are legal. Here we’ll do some basic parsing to treat the input as key=value.
The example usage we are trying to achieve to set the time is:
time time="March 14 22:10"
If no time parameter is set, we’ll just leave the time as is and return the current time.
Note: This is obviously an unrealistic idea for a module. You’d most likely just use the shell module. However, it
probably makes a decent tutorial.
Let’s look at the code. Read the comments as we’ll explain as we go. Note that this is highly verbose because it’s
intended as an educational example. You can write modules a lot shorter than this:
#!/usr/bin/python

# import some python modules that we'll use. These are all
# available in Python's core
import datetime
import sys
import json
import os
import shlex

# read the arguments file that Ansible writes for the module
args_file = sys.argv[1]
args_data = file(args_file).read()

# for this module we treat the input as key=value pairs
arguments = shlex.split(args_data)
for arg in arguments:
    # ignore any arguments without an equals in them
    if "=" in arg:
        (key, value) = arg.split("=")
        # if setting the time, the key 'time' will contain
        # the value we want to set the time to
        if key == "time":
            rc = os.system("date -s \"%s\"" % value)
            # when returning a failure, include 'failed' in the
            # return data and explain the failure in 'msg'
            if rc != 0:
                print json.dumps({
                    "failed" : True,
                    "msg" : "failed setting the time"
                })
                sys.exit(1)
            # on success, report the new time and note that we
            # changed the state of the system
            date = str(datetime.datetime.now())
            print json.dumps({
                "time" : date,
                "changed" : True
            })
            sys.exit(0)

# if no time parameter is set, just report the current time
date = str(datetime.datetime.now())
print json.dumps({
    "time" : date
})
Module Provided ‘Facts’
The ‘setup’ module that ships with Ansible provides many variables about a system that can be used in playbooks and
templates. However, it’s possible to also add your own facts without modifying the system module. To do this, just
have the module return an ansible_facts key, like so, along with other return data:
{
"changed" : True,
"rc" : 5,
"ansible_facts" : {
"leptons" : 5000
"colors" : {
"red" : "FF0000",
"white" : "FFFFFF"
}
}
}
These ‘facts’ will be available to all statements called after that module (but not before) in the playbook. A good idea
might be to make a module called ‘site_facts’ and always call it at the top of each playbook, though we’re always open
to improving the selection of core facts in Ansible as well.
Common Module Boilerplate
As mentioned, if you are writing a module in Python, there are some very powerful shortcuts you can use. Modules
are still transferred as one file, but an arguments file is no longer needed, so these are not only shorter in terms of code,
they are actually FASTER in terms of execution time.
Rather than mention these here, the best way to learn is to read some of the source of the modules that come with
Ansible.
The ‘group’ and ‘user’ modules are reasonably non-trivial and showcase what this looks like.
Key parts include always ending the module file with:
from ansible.module_utils.basic import *
main()
To instantiate the module class, do something like this:
module = AnsibleModule(
    argument_spec = dict(
        state   = dict(default='present', choices=['present', 'absent']),
        name    = dict(required=True),
        enabled = dict(required=True, choices=BOOLEANS)
    )
)
The AnsibleModule provides lots of common code for handling returns, parses your arguments for you, and allows
you to check inputs.
Successful returns are made like this:
module.exit_json(changed=True, something_else=12345)
And failures are just as simple (where ‘msg’ is a required parameter to explain the error):
module.fail_json(msg="Something fatal happened")
There are also other useful functions in the module class, such as module.md5(path). See
lib/ansible/module_common.py in the source checkout for implementation details.
Again, modules developed this way are best tested with the hacking/test-module script in the git source checkout.
Because of the magic involved, this is really the only way the scripts can function outside of Ansible.
If submitting a module to ansible’s core code, which we encourage, use of the AnsibleModule class is required.
Check Mode
Modules may optionally support check mode. If the user runs Ansible in check mode, the module should try to predict
whether any changes would occur, but not actually make them. For your module to support check mode, pass
supports_check_mode=True when instantiating the AnsibleModule object; the module.check_mode attribute will then
evaluate to True when check mode is enabled:
module = AnsibleModule(
    argument_spec = dict(...),
    supports_check_mode = True
)
if module.check_mode:
    # check if any changes would be made but don't actually make those changes
    module.exit_json(changed=check_if_system_state_would_be_changed())
Remember that, as module developer, you are responsible for ensuring that no system state is altered when the user
enables check mode.
If your module does not support check mode, when the user runs Ansible in check mode, your module will simply be
skipped.
Common Pitfalls
You should never do something like this in a module:
print "some status message"
Because the output is supposed to be valid JSON. Except that’s not quite true, but we’ll get to that later.
Modules must not output anything on standard error, because the system will merge standard out with standard error
and prevent the JSON from parsing. Capturing standard error and returning it as a variable in the JSON on standard
out is fine, and is, in fact, how the command module is implemented.
If a module returns stderr or otherwise fails to produce valid JSON, the actual output will still be shown in Ansible,
but the command will not succeed.
Always use the hacking/test-module script when developing modules and it will warn you about these kinds of things.
Conventions/Recommendations
As a reminder from the example code above, here are some basic conventions and guidelines:
• If the module is addressing an object, the parameter for that object should be called ‘name’ whenever possible,
or accept ‘name’ as an alias.
• If you have a company module that returns facts specific to your installations, a good name for this module is
site_facts.
• Modules accepting boolean status should generally accept ‘yes’, ‘no’, ‘true’, ‘false’, or anything else a user
may likely throw at them. The AnsibleModule common code supports this with “choices=BOOLEANS” and a
module.boolean(value) casting function.
• Include a minimum of dependencies if possible. If there are dependencies, document them at the top of the
module file, and have the module raise JSON error messages when the import fails.
• Modules must be self contained in one file to be auto-transferred by ansible.
• If packaging modules in an RPM, they only need to be installed on the control machine and should be dropped
into /usr/share/ansible. This is entirely optional and up to you.
• Modules should return JSON or key=value results all on one line. JSON is best if you can do JSON. All return
types must be hashes (dictionaries) although they can be nested. Lists or simple scalar values are not supported,
though they can be trivially contained inside a dictionary.
• In the event of failure, a key of ‘failed’ should be included, along with a string explanation in ‘msg’. Mod-
ules that raise tracebacks (stacktraces) are generally considered ‘poor’ modules, though Ansible can deal with
these returns and will automatically convert anything unparseable into a failed result. If you are using the An-
sibleModule common Python code, the ‘failed’ element will be included for you automatically when you call
‘fail_json’.
• Return codes from modules are not actually significant, but continue to use 0=success and non-zero=failure
for reasons of future proofing.
• As results from many hosts will be aggregated at once, modules should return only relevant output. Returning
the entire contents of a log file is generally bad form.
Shorthand Vs JSON
To make it easier to write modules in bash, and for cases where a JSON library might not be available, it is acceptable
for a module to return key=value output all on one line, like this; the Ansible parser will know what to do:
somekey=1 somevalue=2 rc=3 favcolor=red
If you’re writing a module in Python or Ruby or whatever, though, returning JSON is probably the simplest way to go.
Documenting Your Module
All modules included in the CORE distribution must have a DOCUMENTATION string. This string MUST be a
valid YAML document which conforms to the schema defined below. You may find it easier to start writing your
DOCUMENTATION string in an editor with YAML syntax highlighting before you include it in your Python file.
Example
DOCUMENTATION = '''
---
module: modulename
short_description: This is a sentence describing the module
# ... snip ...
'''
The description and notes fields support formatting with some special macros.
These formatting functions are U(), M(), I(), and C() for URL, module, italic, and constant-width respectively. It
is suggested to use C() for file and option names, and I() when referencing parameters; module names should be
specified as M(module).
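For instance, a description line using these macros might read (illustrative):
description:
    - Ensures that C(/etc/motd) contains the text given in the I(content) parameter. See also M(template).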
Examples (which typically contain colons, quotes, etc.) are difficult to format with YAML, so these must be written
in plain text in an EXAMPLES string within the module like this:
EXAMPLES = '''
- action: modulename opt1=arg1 opt2=arg2
'''
The EXAMPLES section, just like the documentation section, is required in all module pull requests for new modules.
Put your completed module file into the ‘library’ directory and then run the command: make webdocs. The new
‘modules.html’ file will be built and appear in the ‘docsite/’ directory.
Tip: If you’re having a problem with the syntax of your YAML you can validate it on the YAML Lint website.
Tip: You can use ANSIBLE_KEEP_REMOTE_FILES=1 to prevent ansible from deleting the remote files so you
can debug your module.
High-quality modules with minimal dependencies can be included in the core, but core modules (just due to the pro-
gramming preferences of the developers) will need to be implemented in Python and use the AnsibleModule common
code, and should generally use consistent arguments with the rest of the program. Stop by the mailing list to inquire
about requirements if you like, and submit a github pull request to the main project.
See also:
About Modules Learn about available modules
Developing Plugins Learn about developing plugins
Python API Learn about the Python API for playbook and task execution
Github modules directory Browse source of core modules
Mailing List Development mailing list
irc.freenode.net #ansible IRC chat channel
Topics
• Developing Plugins
– Connection Type Plugins
– Lookup Plugins
– Vars Plugins
– Filter Plugins
– Callbacks
* Examples
* Configuring
* Development
– Distributing Plugins
Ansible is pluggable in a lot of ways beyond inventory scripts and callbacks. Many of these features exist to cover
fringe use cases and are infrequently needed; others are pluggable simply because they implement core features of
Ansible and were most convenient to build that way.
This section explores these features, though they are generally not the sorts of things people look to extend very
often.
Connection Type Plugins
By default, Ansible ships with a ‘paramiko’ SSH connection type, a native ssh type (just called ‘ssh’), and a ‘local’
connection type; there are also some minor players like ‘chroot’ and ‘jail’. All of these can be used in playbooks and
with /usr/bin/ansible to decide how you want to talk to remote machines. The basics of these connection types are
covered in the Getting Started section. Should you want to extend Ansible to support other transports (SNMP?
Message bus? Carrier pigeon?), it’s as simple as copying the format of one of the existing plugins and dropping it
into the connection plugins directory. In Ansible 1.2.1 and later, the value of ‘smart’ for a connection allows selection
between paramiko and OpenSSH based on system capabilities, choosing ‘ssh’ if OpenSSH supports ControlPersist.
Previous versions did not support ‘smart’.
More documentation on writing connection plugins is pending, though you can jump into
lib/ansible/runner/connection_plugins and figure things out pretty easily.
Lookup Plugins
Language constructs like “with_fileglob” and “with_items” are implemented via lookup plugins. Just like other plugin
types, you can write your own.
More documentation on writing lookup plugins is pending, though you can jump into
lib/ansible/runner/lookup_plugins and figure things out pretty easily.
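While the exact plugin API is best confirmed by reading the bundled plugins, the general shape is a class named LookupModule with a run() method that returns a list. A rough, hypothetical sketch:
# hypothetical lookup_plugins/upper.py: would enable a 'with_upper' construct
class LookupModule(object):

    def __init__(self, basedir=None, **kwargs):
        self.basedir = basedir

    def run(self, terms, inject=None, **kwargs):
        # terms is the data handed to the lookup; always return a list
        return [term.upper() for term in terms]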
Vars Plugins
Playbook constructs like ‘host_vars’ and ‘group_vars’ work via ‘vars’ plugins. They inject additional variable data
into ansible runs that did not come from an inventory, playbook, or command line. Note that variables can also be
returned from inventory, so in most cases, you won’t need to write or understand vars_plugins.
More documentation on writing vars plugins is pending, though you can jump into
lib/ansible/inventory/vars_plugins and figure things out pretty easily.
If you find yourself wanting to write a vars_plugin, it’s more likely you should write an inventory script instead.
Filter Plugins
If you want more Jinja2 filters available in a Jinja2 template (filters like to_yaml and to_json are provided by default),
they can be added by writing a filter plugin. Most of the time, when someone comes up with an idea for a new filter
they would like to make available in a playbook, we’ll just include it in ‘core.py’ instead.
Jump into lib/ansible/runner/filter_plugins/ for details.
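A filter plugin is a class named FilterModule whose filters() method returns a dictionary mapping filter names to plain Python functions. A minimal sketch (the filter name here is made up):
# hypothetical filter_plugins/custom_filters.py
def reverse_string(value):
    return value[::-1]

class FilterModule(object):

    def filters(self):
        # map template-usable names to Python callables
        return {'reverse_string': reverse_string}
A template could then use {{ inventory_hostname | reverse_string }}.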
Callbacks
Callbacks are one of the more interesting plugin types. Adding additional callback plugins to Ansible allows for
adding new behaviors when responding to events.
Examples
Configuring
Development
More information will come later, though see the source of any of the existing callbacks and you should be able to get
started quickly. They should be reasonably self explanatory.
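As a rough sketch of the general shape (method names should be checked against the bundled callbacks), a callback plugin is a class named CallbackModule defining methods for the events it cares about:
# hypothetical callback_plugins/log_results.py
class CallbackModule(object):

    def runner_on_ok(self, host, res):
        print("OK: %s" % host)

    def runner_on_failed(self, host, res, ignore_errors=False):
        print("FAILED: %s" % host)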
Distributing Plugins
Plugins are loaded from both Python’s site_packages (those that ship with ansible) and a configured plugins directory,
which defaults to /usr/share/ansible/plugins, in a subfolder for each plugin type:
• action_plugins
• lookup_plugins
• callback_plugins
• connection_plugins
• filter_plugins
• vars_plugins
1.9 Ansible Tower
Ansible Tower (formerly ‘AWX’) is a web-based solution that makes Ansible even easier to use for IT teams of
all kinds. It’s designed to be the hub for all of your automation tasks.
Tower allows you to control who can access what, even allowing sharing of SSH credentials without someone
being able to transfer those credentials. Inventory can be graphically managed or synced with a wide variety of cloud
sources. It logs all of your jobs, integrates well with LDAP, and has an amazing browsable REST API. Command
line tools are available for easy integration with Jenkins as well. Provisioning callbacks provide great support for
autoscaling topologies.
Find out more about Tower features and how to download it on the Ansible Tower webpage. Tower is free for usage
for up to 10 nodes, and comes bundled with amazing support from Ansible, Inc. As you would expect, Ansible is
installed using Ansible playbooks!
1.10 Community Information
Ansible is an open source project designed to bring together developers and administrators of all kinds to collaborate
on building IT automation solutions that work well for them. Should you wish to get more involved – whether in
terms of just asking a question, helping other users, introducing new people to Ansible, or helping with the software
or documentation, we welcome your contributions to the project.
Ways to interact:
• User Mailing List – have a question? Stop by the Google group!
• irc.freenode.net – #ansible IRC chat channel
1.11 Ansible Galaxy
Ansible Galaxy is a free site for finding, downloading, rating, and reviewing all kinds of community-developed
Ansible roles and can be a great way to get a jumpstart on your automation projects.
You can sign up with social auth, and the download client ‘ansible-galaxy’ is included in Ansible 1.4.2 and later.
Read the “About” page on the Galaxy site for more information.
1.12 Frequently Asked Questions
1.12.1 How do I handle different machines needing different user accounts or ports to log in with?
Setting inventory variables in the inventory file is the easiest way. For instance, suppose some hosts have different
usernames and ports:
[webservers]
asdf.example.com ansible_ssh_port=5000 ansible_ssh_user=alice
jkl.example.com ansible_ssh_port=5001 ansible_ssh_user=bob
You can also dictate the connection type to be used, if you want:
[testcluster]
localhost ansible_connection=local
/path/to/chroot1 ansible_connection=chroot
foo.example.com
bar.example.com
You may also wish to keep these in group variables instead, or file them in a group_vars/<groupname> file. See the
rest of the documentation for more information about how to organize variables.
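For example, a group_vars file for the ‘testcluster’ group above might look like this (illustrative values):
# group_vars/testcluster
---
ansible_ssh_user: alice
ansible_ssh_port: 5000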
1.12.2 How do I get ansible to reuse connections, enable Kerberized SSH, or have
Ansible pay attention to my local SSH config file?
Switch your default connection type in the configuration file to ‘ssh’, or use ‘-c ssh’ to use Native OpenSSH for
connections instead of the python paramiko library. In Ansible 1.2.1 and later, ‘ssh’ will be used by default if OpenSSH
is new enough to support ControlPersist as an option.
Paramiko is great for starting out, but the OpenSSH type offers many advanced options. You will want to run Ansible
from a machine new enough to support ControlPersist, if you are using this connection type. You can still manage
older clients. If you are using RHEL 6, CentOS 6, SLES 10 or SLES 11 the version of OpenSSH is still a bit old,
so consider managing from a Fedora or openSUSE client even though you are managing older nodes, or just use
paramiko.
We keep paramiko as the default because, if you are first installing Ansible on an EL box, it offers a better experience
for new users.
1.12.3 How do I speed up management inside EC2?
Don’t try to manage a fleet of EC2 machines from your laptop. Connect to a management node inside EC2 first and
run Ansible from there.
1.12.4 How do I handle python pathing not having a Python 2.X in /usr/bin/python
on a remote machine?
While you can write ansible modules in any language, most ansible modules are written in Python, and some of these
are important core ones.
By default Ansible assumes it can find a /usr/bin/python on your remote system that is a 2.X version of Python,
specifically 2.4 or higher.
Setting of an inventory variable ‘ansible_python_interpreter’ on any host will allow Ansible to auto-replace the in-
terpreter used when executing python modules. Thus, you can point to any python you want on the system if
/usr/bin/python on your system does not point to a Python 2.X interpreter.
Some Linux operating systems, such as Arch, may only have Python 3 installed by default. This is not sufficient and
you will get syntax errors trying to run modules with Python 3. Python 3 is essentially not the same language as Python
2. Ansible modules currently need to support older Pythons for users that still have Enterprise Linux 5 deployed, so
they are not yet ported to run under Python 3.0. This is not a problem though, as you can simply install Python 2 on
the managed host as well.
Python 3.0 support will likely be addressed at a later point in time when usage becomes more mainstream.
Do not replace the shebang lines of your python modules. Ansible will do this for you automatically at deploy time.
1.12.5 What is the best way to make content reusable/redistributable?
If you have not done so already, read all about “Roles” in the playbooks documentation. This helps you make playbook
content self-contained, and works well with things like git submodules for sharing content with others.
If some of these plugin types look strange to you, see the API documentation for more details about ways Ansible can
be extended.
1.12.6 Where does the configuration file live and what can I configure in it?
Ansible reads its settings from ANSIBLE_CONFIG (an environment variable), ansible.cfg (in the current directory),
.ansible.cfg (in the home directory), or /etc/ansible/ansible.cfg, whichever it finds first. See the configuration
documentation for the full list of available settings.
1.12.7 How do I disable cowsay?
If cowsay is installed, Ansible takes it upon itself to make your day happier when running playbooks. If you decide that
you would like to work in a professional cow-free environment, you can either uninstall cowsay, or set an environment
variable:
export ANSIBLE_NOCOWS=1
1.12.8 How do I see a list of all of the ansible_ variables?
Ansible by default gathers “facts” about the machines under management, and these facts can be accessed in Playbooks
and in templates. To see a list of all of the facts that are available about a machine, you can run the “setup” module as
an ad-hoc action:
ansible -m setup hostname
This will print out a dictionary of all of the facts that are available for that particular host.
1.12.9 How do I loop over a list of hosts in a group, inside of a template?
A pretty common pattern is to iterate over a list of hosts inside of a host group, perhaps to populate a template
configuration file with a list of servers. To do this, you can just access the “$groups” dictionary in your template, like
this:
{% for host in groups['db_servers'] %}
{{ host }}
{% endfor %}
If you need to access facts about these hosts, for instance, the IP address of each hostname, you need to make sure that
the facts have been populated. For example, make sure you have a play that talks to db_servers:
- hosts: db_servers
  tasks:
    - # doesn't matter what you do, just that they were talked to previously.
Then you can use the facts inside your template, like this:
{% for host in groups['db_servers'] %}
{{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}
{% endfor %}
1.12.10 How do I access a variable name programmatically?
An example may come up where we need to get the ipv4 address of an arbitrary interface, where the interface to be
used may be supplied via a role parameter or other input. Variable names can be built by adding strings together, like
so:
{{ hostvars[inventory_hostname]['ansible_' + which_interface]['ipv4']['address'] }}
The trick of going through hostvars is necessary because it’s a dictionary of the entire namespace of variables.
‘inventory_hostname’ is a magic variable that indicates the current host you are looping over in the host loop.
1.12.11 How do I access a variable of the first host in a group?
What happens if we want the ip address of the first webserver in the webservers group? Well, we can do that too. Note
that if we are using dynamic inventory, which host is the ‘first’ may not be consistent, so you wouldn’t want to do this
unless your inventory was static and predictable. (If you are using Ansible Tower, it will use database order, so this
isn’t a problem even if you are using cloud based inventory scripts).
Anyway, here’s the trick:
{{ hostvars[groups['webservers'][0]]['ansible_eth0']['ipv4']['address'] }}
Notice how we’re pulling out the hostname of the first machine of the webservers group. If you are doing this in a
template, you could use the Jinja2 ‘{% set %}’ directive to simplify this, or in a playbook, you could also use set_fact:
- set_fact: headnode={{ groups['webservers'][0] }}
- debug: msg={{ hostvars[headnode].ansible_eth0.ipv4.address }}
Notice how we interchanged the bracket syntax for dots – that can be done anywhere.
1.12.12 How do I copy files recursively onto a target host?
The “copy” module doesn’t handle recursive copies of directories. A common solution is to use a local
action to call ‘rsync’ to recursively copy files to the managed servers.
Here is an example:
---
# ...
tasks:
  - name: recursively copy files from management server to target
    local_action: command rsync -a /path/to/files $inventory_hostname:/path/to/target/
Note that you’ll need passphrase-less SSH or ssh-agent set up to let rsync copy without prompting for a passphrase or
password.
1.12.13 How do I access shell environment variables?
If you just need to access existing variables, use the ‘env’ lookup plugin. For example, to access the value of the
HOME environment variable on the management machine:
---
# ...
vars:
local_home: "{{ lookup('env','HOME') }}"
If you need to set environment variables, see the Advanced Playbooks section about environments.
Ansible 1.4 will also make remote environment variables available via facts in the ‘ansible_env’ variable:
{{ ansible_env.SOME_VARIABLE }}
1.12.14 How do I generate crypted passwords for the user module?
The mkpasswd utility that is available on most Linux systems is a great option:
mkpasswd --method=SHA-512
If this utility is not installed on your system (e.g. you are using OS X) then you can still easily generate these passwords
using Python. First, ensure that the Passlib password hashing library is installed.
pip install passlib
Once the library is ready, SHA512 password values can then be generated as follows:
python -c "from passlib.hash import sha512_crypt; print sha512_crypt.encrypt('<password>')"
1.12.15 Can I get training on Ansible or find commercial support?
Yes! See our Guru offering (http://www.ansible.com/ansible-guru) for online support, and support is also included
with Ansible Tower. You can also read our service page and email [email protected] for further details.
1.12.16 Is there a web interface / REST API / etc?
Yes! Ansible, Inc makes a great product that makes Ansible even more powerful and easy to use. See Ansible Tower.
1.12.17 How do I submit a change to the documentation?
Great question! Documentation for Ansible is kept in the main project git repository, and complete instructions for
contributing can be found in the docs README viewable on GitHub. Thanks!
1.12.18 How do I keep secret data in my playbook?
If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source
control, see Vault.
1.12.19 I don’t see my question here
Please see the section below for a link to IRC and the Google Group, where you can ask your question.
See also:
Ansible Documentation The documentation index
Playbooks An introduction to playbooks
Best Practices Best practices advice
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.13 Glossary
The following is a list (and re-explanation) of term definitions used elsewhere in the Ansible documentation.
Consult the documentation home page for the full documentation and to see the terms in context, but this should be a
good resource to check your knowledge of Ansible’s components and understand how they fit together. It’s something
you might wish to read for review or when a term comes up on the mailing list.
1.13.1 Action
An action is a part of a task that specifies which of the modules to run and the arguments to pass to that module. Each
task can have only one action, but it may also have other parameters.
1.13.2 Ad Hoc
Refers to running Ansible to perform some quick command, using /usr/bin/ansible, rather than the orchestration lan-
guage, which is /usr/bin/ansible-playbook. An example of an ad-hoc command might be rebooting 50 machines in
your infrastructure. Anything you can do ad-hoc can be accomplished by writing a playbook, and playbooks can also
glue lots of other operations together.
1.13.3 Async
Refers to a task that is configured to run in the background rather than waiting for completion. If you have a long
process that would run longer than the SSH timeout, it would make sense to launch that task in async mode. Async
modes can poll for completion every so many seconds, or can be configured to “fire and forget” in which case Ansible
will not even check on the task again, it will just kick it off and proceed to future steps. Async modes work with both
/usr/bin/ansible and /usr/bin/ansible-playbook.
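For example, the following task (the command is illustrative) would kick off a long-running operation and poll for completion every 10 seconds, for up to an hour:
- name: run a long operation in the background
  command: /usr/bin/long_running_operation --do-stuff
  async: 3600
  poll: 10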
1.13.4 Callbacks
Refers to some user-written code that can intercept results from Ansible and do something with them. Some supplied
examples in the GitHub project perform custom logging, send email, or even play sound effects.
1.13.5 Check Mode
Refers to running Ansible with the --check option, which does not make any changes on the remote systems,
but only outputs the changes that might occur if the command ran without this flag. This is analogous to so-called
“dry run” modes in other systems, though the user should be warned that this does not take into account unexpected
command failures or cascade effects (which is true of similar modes in other systems). Use this to get an idea of what
might happen, but it is not a substitute for a good staging environment.
1.13.6 Connection Type, Connection Plugin
By default, Ansible talks to remote machines through pluggable libraries. Ansible supports native OpenSSH (‘ssh’),
or a Python implementation called ‘paramiko’. OpenSSH is preferred if you are using a recent version, and also
enables some features like Kerberos and jump hosts. This is covered in the getting started section. There are also other
connection types like ‘accelerate’ mode, which must be bootstrapped over one of the SSH-based connection types but
is very fast, and local mode, which acts on the local system. Users can also write their own connection plugins.
1.13.7 Conditionals
A conditional is an expression that evaluates to true or false that decides whether a given task will be executed on a
given machine or not. Ansible’s conditionals are powered by the ‘when’ statement, and are discussed in the playbook
documentation.
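For example, a sketch using a standard fact:
- name: install ntp, but only on Debian-family systems
  apt: pkg=ntp state=present
  when: ansible_os_family == "Debian"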
1.13.8 Diff Mode
A --diff flag can be passed to Ansible to show how template files change when they are overwritten, or how they
might change when used with --check mode. These diffs come out in unified diff format.
1.13.9 Facts
Facts are simply things that are discovered about remote nodes. While they can be used in playbooks and templates
just like variables, facts are things that are inferred, rather than set. Facts are automatically discovered by Ansible
when running plays by executing the internal ‘setup’ module on the remote nodes. You never have to call the setup
module explicitly, it just runs, but it can be disabled to save time if it is not needed. For the convenience of users who
are switching from other configuration management systems, the fact module will also pull in facts from the ‘ohai’
and ‘facter’ tools if they are installed, which are fact libraries from Chef and Puppet, respectively.
1.13.10 Filter Plugin
A filter plugin is something that most users will never need to understand. These allow for the creation of new Jinja2
filters, which are more or less only of use to people who know what Jinja2 filters are. If you need them, you can learn
how to write them in the API docs section.
1.13.11 Forks
Ansible talks to remote nodes in parallel and the level of parallelism can be set either by passing --forks, or editing
the default in a configuration file. The default is a very conservative 5 forks, though if you have a lot of RAM, you can
easily set this to a value like 50 for increased parallelism.
1.13.12 Gather Facts (Boolean)
Facts are mentioned above. Sometimes when running a multi-play playbook, it is desirable to have some plays that
don’t bother with fact computation if they aren’t going to need to utilize any of these values. Setting gather_facts:
False on a playbook allows this implicit fact gathering to be skipped.
1.13.13 Globbing
Globbing is a way to select lots of hosts based on wildcards, rather than the name of the host specifically, or the name
of the group they are in. For instance, it is possible to select “www*” to match all hosts starting with “www”. This
concept is pulled directly from Func, one of Michael’s earlier projects. In addition to basic globbing, various set
operations are also possible, such as ‘hosts in this group and not in another group’, and so on.
1.13.14 Group
A group consists of several hosts assigned to a pool that can be conveniently targeted together, and also given variables
that they share in common.
1.13.15 Group Vars
The “group_vars/” files are files that live in a directory alongside an inventory file, with an optional filename named
after each group. This is a convenient place to put variables that will be provided to a given group, especially complex
data structures, so that these variables do not have to be embedded in the inventory file or playbook.
1.13.16 Handlers
Handlers are just like regular tasks in an Ansible playbook (see Tasks), but are only run if the Task contains a “notify”
directive and also indicates that it changed something. For example, if a config file is changed then the task referencing
the config file templating operation may notify a service restart handler. This means services can be bounced only if
they need to be restarted. Handlers can be used for things other than service restarts, but service restarts are the most
common usage.
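For example, a minimal sketch (the file names and service name are illustrative):
tasks:
  - name: template the foo config file
    template: src=foo.conf.j2 dest=/etc/foo.conf
    notify:
      - restart foo
handlers:
  - name: restart foo
    service: name=foo state=restarted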
1.13.17 Host
A host is simply a remote machine that Ansible manages. They can have individual variables assigned to them, and
can also be organized in groups. All hosts have a name they can be reached at (which is either an IP address or a
domain name) and optionally a port number if they are not to be accessed on the default SSH port.
1.13.18 Host Specifier
Each Play in Ansible maps a series of tasks (which define the role, purpose, or orders of a system) to a set of systems.
This “hosts:” directive in each play is often called the hosts specifier.
It may select one system, many systems, one or more groups, or even some hosts that are in one group and explicitly
not in another.
1.13.19 Host Vars
Just like “Group Vars”, a directory alongside the inventory file named “host_vars/” can contain a file named after
each hostname in the inventory file, in YAML format. This provides a convenient place to assign variables to the
host without having to embed them in the inventory file. The Host Vars file can also be used to define complex data
structures that can’t be represented in the inventory file.
1.13.20 Lazy Evaluation
In general, Ansible evaluates any variables in playbook content at the last possible second, which means that if you
define a data structure that data structure itself can define variable values within it, and everything “just works” as you
would expect. This also means variable strings can include other variables inside of those strings.
1.13.21 Lookup Plugin
A lookup plugin is a way to get data into Ansible from the outside world. These are how such things as “with_items”,
a basic looping plugin, are implemented, but there are also lookup plugins like “with_file” which loads data from a
file, and even ones for querying environment variables, DNS text records, or key value stores. Lookup plugins can
also be accessed in templates, e.g., {{ lookup('file','/path/to/file') }}.
1.13.22 Multi-Tier
The concept that IT systems are not managed one system at a time, but by interactions between multiple systems, and
groups of systems, in well defined orders. For instance, a web server may need to be updated before a database server,
and pieces on the web server may need to be updated after THAT database server, and various load balancers and
monitoring servers may need to be contacted. Ansible models entire IT topologies and workflows rather than looking
at configuration from a “one system at a time” perspective.
1.13.23 Idempotency
The concept that change commands should only be applied when they need to be applied, and that it is better to
describe the desired state of a system than the process of how to get to that state. As an analogy, the path from North
Carolina in the United States to California involves driving a very long way West, but if I were instead in Anchorage,
Alaska, driving a long way west is no longer the right way to get to California. Ansible’s resources let you say
“put me in California” and then decide how to get there. If you were already in California, nothing needs to happen,
and Ansible will let you know it didn’t need to change anything.
1.13.24 Includes
The idea that playbook files (which are nothing more than lists of plays) can include other lists of plays, and task lists
can externalize lists of tasks in other files, and similarly with handlers. Includes can be parameterized, which means
that variables can be passed to the included file. For instance, an included play for setting up a WordPress blog may take a
parameter called “user” and that play could be included more than once to create a blog for both “alice” and “bob”.
1.13.25 Inventory
A file (by default, Ansible uses a simple INI format) that describes Hosts and Groups in Ansible. Inventory can also
be provided via an “Inventory Script” (sometimes called an “External Inventory Script”).
1.13.26 Inventory Script
A very simple program (or a complicated one) that looks up hosts, group membership for hosts, and variable infor-
mation from an external resource – whether that be a SQL database, a CMDB solution, or something like LDAP. This
concept was adapted from Puppet (where it is called an “External Nodes Classifier”) and works more or less exactly
the same way.
1.13.27 Jinja2
Jinja2 is the preferred templating language of Ansible’s template module. It is a very simple Python template language
that is generally readable and easy to write.
1.13.28 JSON
Ansible uses JSON for return data from remote modules. This allows modules to be written in any language, not just
Python.
1.13.29 Library
A collection of modules made available to /usr/bin/ansible or an Ansible playbook.
1.13.30 Limit Groups
By passing --limit somegroup to ansible or ansible-playbook, the commands can be limited to a subset of hosts.
For instance, this can be used to run a playbook that normally targets an entire set of servers to one particular server.
1.13.31 Local Connection
By using “connection: local” in a playbook, or passing “-c local” to /usr/bin/ansible, this indicates that we are manag-
ing the local host and not a remote machine.
1.13.32 Local Action
A local_action directive in a playbook targeting remote machines means that the given step will actually occur on the
local machine, but that the variable ‘{{ ansible_hostname }}’ can be passed in to reference the remote hostname being
referred to in that step. This can be used to trigger, for example, an rsync operation.
1.13.33 Loops
Generally, Ansible is not a programming language. It prefers to be more declarative, though various constructs like
“with_items” allow a particular task to be repeated for multiple items in a list. Certain modules, like yum and apt, are
actually optimized for this, and can install all packages given in those lists within a single transaction, dramatically
speeding up total time to configuration.
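For example, a sketch in which yum can install the whole list in one transaction:
- name: install common packages
  yum: name={{ item }} state=installed
  with_items:
    - httpd
    - memcached
    - ntp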
1.13.34 Modules
Modules are the units of work that Ansible ships out to remote machines. Modules are kicked off by either
/usr/bin/ansible or /usr/bin/ansible-playbook (where multiple tasks use lots of different modules in conjunction). Mod-
ules can be implemented in any language, including Perl, Bash, or Ruby – but can leverage some useful communal
library code if written in Python. Modules just have to return JSON or simple key=value pairs. Once modules are
executed on remote machines, they are removed, so no long running daemons are used. Ansible refers to the collection
of available modules as a ‘library’.
1.13.35 Notify
The act of a task registering a change event and informing a handler task that another action needs to be run at the end
of the play. If a handler is notified by multiple tasks, it will still be run only once. Handlers are run in the order they
are listed, not in the order that they are notified.
1.13.36 Orchestration
Many software automation systems use this word to mean different things. Ansible uses it as a conductor would
conduct an orchestra. A datacenter or cloud architecture is full of many systems, playing many parts – web servers,
database servers, maybe load balancers, monitoring systems, continuous integration systems, etc. In performing any
process, it is necessary to touch systems in particular orders, often to simulate rolling updates or to deploy software
correctly. Some system may perform some steps, then others, then previous systems already processed may need to
perform more steps. Along the way, emails may need to be sent or web services contacted. Ansible orchestration is
all about modeling that kind of process.
1.13.37 paramiko
By default, Ansible manages machines over SSH. The library that Ansible uses by default to do this is a Python-
powered library called paramiko. The paramiko library is generally fast and easy to manage, though users desiring
Kerberos or Jump Host support may wish to switch to a native SSH binary such as OpenSSH by specifying the
connection type in their playbook, or using the “-c ssh” flag.
1.13.38 Playbooks
Playbooks are the language by which Ansible orchestrates, configures, administers, or deploys systems. They are
called playbooks partially because it’s a sports analogy, and it’s supposed to be fun using them. They aren’t workbooks
:)
1.13.39 Plays
A playbook is a list of plays. A play is minimally a mapping between a set of hosts selected by a host specifier (usually
chosen by groups, but sometimes by hostname globs) and the tasks which run on those hosts to define the role that
those systems will perform. There can be one or many plays in a playbook.
1.13.40 Pull Mode
By default, Ansible runs in push mode, which allows it very fine-grained control over when it talks to each system.
Pull mode is provided for when you would rather have nodes check in every N minutes on a particular schedule. It
uses a program called ansible-pull and can also be set up (or reconfigured) using a push-mode playbook. Most Ansible
users use push mode, but pull mode is included for variety and the sake of having choices.
ansible-pull works by checking configuration orders out of git on a crontab and then managing the machine locally,
using the local connection plugin.
1.13.41 Push Mode
Push mode is the default mode of Ansible. In fact, it’s not really a mode at all – it’s just how Ansible works when you
aren’t thinking about it. Push mode allows Ansible to be fine-grained and conduct nodes through complex orchestration
processes without waiting for them to check in.
1.13.42 Register Variable
The result of running any task in Ansible can be stored in a variable for use in a template or a conditional statement.
The keyword used to define the variable is called ‘register’, taking its name from the idea of registers in assembly
programming (though Ansible will never feel like assembly programming). There are an infinite number of variable
names you can use for registration.
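For example, a sketch:
- name: capture the contents of motd
  command: cat /etc/motd
  register: motd_contents
- name: use the registered result in a condition
  debug: msg="motd mentions ansible"
  when: motd_contents.stdout.find('ansible') != -1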
1.13.43 Resource Model
Ansible modules work in terms of resources. For instance, the file module will select a particular file and ensure
that the attributes of that resource match a particular model. As an example, we might wish to change the owner of
/etc/motd to ‘root’ if it is not already set to root, or set its mode to ‘0644’ if it is not already set to ‘0644’. The resource
models are ‘idempotent’ meaning change commands are not run unless needed, and Ansible will bring the system
back to a desired state regardless of the actual state – rather than you having to tell it how to get to the state.
1.13.44 Roles
Roles are units of organization in Ansible. Assigning a role to a group of hosts (or a set of groups, or host patterns,
etc.) implies that they should implement a specific behavior. A role may include applying certain variable values,
certain tasks, and certain handlers – or just one or more of these things. Because of the file structure associated with a
role, roles become redistributable units that allow you to share behavior among playbooks – or even with other users.
1.13.45 Rolling Update
The act of addressing a number of nodes in a group N at a time to avoid updating them all at once and bringing the
system offline. For instance, in a web topology of 500 nodes handling very large volume, it may be reasonable to
update 10 or 20 machines at a time, moving on to the next 10 or 20 when done. The “serial:” keyword in an Ansible
playbook controls the size of the rolling update pool. The default is to address the batch size all at once, so this is
something that you must opt-in to. OS configuration (such as making sure config files are correct) does not typically
have to use the rolling update model, but can do so if desired.
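For example, this play sketch (the package name is illustrative) would update 10 hosts at a time:
- hosts: webservers
  serial: 10
  tasks:
    - name: update the application
      yum: name=acme-app state=latest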
1.13.46 Runner
A core software component of Ansible that is the power behind /usr/bin/ansible directly – and corresponds to the
invocation of each task in a playbook. The Runner is something Ansible developers may talk about, but it’s not really
user land vocabulary.
1.13.47 Serial
See “Rolling Update”.
1.13.48 Sudo
Ansible does not require root logins, and since it’s daemonless, definitely does not require root level daemons (which
can be a security concern in sensitive environments). Ansible can log in and perform many operations wrapped in a
sudo command, and can work with both password-less and password-based sudo. Some operations that don’t normally
work with sudo (like scp file transfer) can be achieved with Ansible’s copy, template, and fetch modules while running
in sudo mode.
1.13.49 SSH (Native)
Native OpenSSH as an Ansible transport is specified with “-c ssh” (or a config file, or a directive in the playbook) and
can be useful if wanting to login via Kerberized SSH or using SSH jump hosts, etc. In 1.2.1, ‘ssh’ will be used by
default if the OpenSSH binary on the control machine is sufficiently new. Previously, Ansible selected ‘paramiko’ as
a default. Using a client that supports ControlMaster and ControlPersist is recommended for maximum performance
– if you don’t have that and don’t need Kerberos, jump hosts, or other features, paramiko is a good choice. Ansible
will warn you if it doesn’t detect ControlMaster/ControlPersist capability.
1.13.50 Tags
Ansible allows tagging resources in a playbook with arbitrary keywords, and then running only the parts of the play-
book that correspond to those keywords. For instance, it is possible to have an entire OS configuration, and have
certain steps labeled “ntp”, and then run just the “ntp” steps to reconfigure the time server information on a remote
host.
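For example, a sketch:
tasks:
  - name: configure the time server
    template: src=ntp.conf.j2 dest=/etc/ntp.conf
    tags:
      - ntp
Running ansible-playbook site.yml --tags ntp would then run only the steps tagged ‘ntp’.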
1.13.51 Tasks
Playbooks exist to run tasks. Tasks combine an action (a module and its arguments) with a name and optionally some
other keywords (like looping directives). Handlers are also tasks, but they are a special kind of task that do not run
unless they are notified by name when a task reports an underlying change on a remote system.
1.13.52 Templates
Ansible can easily transfer files to remote systems, but often it is desirable to substitute variables in other files. Vari-
ables may come from the inventory file, Host Vars, Group Vars, or Facts. Templates use the Jinja2 template engine
and can also include logical constructs like loops and if statements.
1.13.53 Transport
Ansible uses “Connection Plugins” to define types of available transports. These are simply how Ansible will reach
out to managed systems. Transports included are paramiko, SSH (using OpenSSH), and local.
1.13.54 When
An optional conditional statement attached to a task that is used to determine if the task should run or not. If the
expression following the “when:” keyword evaluates to false, the task will be ignored.
1.13.55 Van Halen
For no particular reason, other than the fact that Michael really likes them, all Ansible releases are codenamed after
Van Halen songs. There is no preference given to David Lee Roth vs. Sammy Hagar-era songs, and instrumentals
are also allowed. It is unlikely that there will ever be a Jump release, but a Van Halen III codename release is possible.
You never know.
1.13.56 Vars (Variables)
As opposed to Facts, variables are names of values (they can be simple scalar values – integers, booleans, strings) or
complex ones (dictionaries/hashes, lists) that can be used in templates and playbooks. They are declared things, not
things that are inferred from the remote system’s current state or nature (which is what Facts are).
1.13.57 YAML
Ansible does not want to force people to write programming language code to automate infrastructure, so Ansible uses
YAML to define playbook configuration languages and also variable files. YAML is nice because it has a minimum
of syntax and is very clean and easy for people to skim. It is a good data format for configuration files and humans,
but also machine readable. Ansible’s usage of YAML stemmed from Michael’s first use of it inside of Cobbler
around 2006. YAML is fairly popular in the dynamic language community and the format has libraries available for
serialization in many different languages (Python, Perl, Ruby, etc.).
See also:
Frequently Asked Questions Frequently asked questions
Playbooks An introduction to playbooks
Best Practices Best practices advice
User Mailing List Have a question? Stop by the google group!
irc.freenode.net #ansible IRC chat channel
1.14 YAML Syntax
This page provides a basic overview of correct YAML syntax, which is how Ansible playbooks (our configuration
management language) are expressed.
We use YAML because it is easier for humans to read and write than other common data formats like XML or JSON.
Further, there are libraries available in most programming languages for working with YAML.
You may also wish to read Playbooks at the same time to see how this is used in practice.
1.14.1 YAML Basics
For Ansible, nearly every YAML file starts with a list. Each item in the list is a list of key/value pairs, commonly
called a “hash” or a “dictionary”. So, we need to know how to write lists and dictionaries in YAML.
There’s another small quirk to YAML. All YAML files (regardless of their association with Ansible or not) should
begin with ---. This is part of the YAML format and indicates the start of a document.
All members of a list are lines beginning at the same indentation level starting with a - (dash) character:
---
# A list of tasty fruits
- Apple
- Orange
- Strawberry
- Mango
A dictionary is represented in a simple key: value form (the colon must be followed by a space):
---
# An employee record
name: Example Developer
job: Developer
skill: Elite
Dictionaries can also be represented in an abbreviated form if you really want to:
---
# An employee record
{name: Example Developer, job: Developer, skill: Elite}
Ansible doesn’t really use these too much, but you can also specify a boolean value (true/false) in several forms:
---
create_key: yes
needs_agent: no
knows_oop: True
likes_emacs: TRUE
uses_cvs: false
Let’s combine what we learned so far in an arbitrary YAML example. This really has nothing to do with Ansible, but
will give you a feel for the format:
---
# An employee record
name: Example Developer
job: Developer
skill: Elite
employed: True
foods:
  - Apple
  - Orange
  - Strawberry
  - Mango
languages:
  ruby: Elite
  python: Elite
  dotnet: Lame
That’s all you really need to know about YAML to start writing Ansible playbooks.
1.14.2 Gotchas
While YAML is generally friendly, the following is going to result in a YAML syntax error:
foo: somebody said I should put a colon here: so I did
You will want to quote any hash values using colons, like so:
foo: "somebody said I should put a colon here: so I did"
And then the colon will be preserved.
Further, Ansible uses “{{ var }}” for variables. If a value after a colon starts with a “{”, YAML will think it is a
dictionary, so you must quote it, like so:
foo: "{{ variable }}"
See also:
Playbooks Learn what playbooks can do and how to write/run them.
YAMLLint YAML Lint (online) helps you debug YAML syntax if you are having problems
Github examples directory Complete playbook files from the github project source
Mailing List Questions? Help? Ideas? Stop by the list on Google Groups
irc.freenode.net #ansible IRC chat channel
1.15 Ansible Guru
While many users should be able to get on fine with the documentation, mailing list, and IRC, sometimes you want a
bit more.
Ansible Guru is an offering from Ansible, Inc that helps users who would like more dedicated help with Ansible,
including building playbooks, best practices, architecture suggestions, and more – all from our awesome support and
services team. It also includes some useful discounts and some free T-shirts, though you shouldn’t get it just for
the free shirts! It’s a great way to train up to becoming an Ansible expert.
For those interested, click through the link above. You can sign up in minutes!
For users looking for more hands-on help, we also have some more information on our Services page, and support is
also included with Ansible Tower.