
Sensible SSH with Ansible: An Ansible Primer

This is the third in a series of several posts on how to manage ssh via Ansible. It was inspired by a warning from Venafi that gained traction in the blogosphere (read: my Google feed for two weeks). I don't know many people that observe good ssh security, so my goal is to make it more accessible and (somewhat) streamlined.

This post serves as an Ansible primer. It assumes shell knowledge but nothing else. The post looks at each component of an Ansible playbook with plenty of examples. It doesn't explain any of the Ansible modules in detail, but does occasionally investigate how Ansible core works. If you're already familiar with Ansible, you can probably skip this. I removed anything involving the overarching project to simplify things.

The Series so Far

  1. Overview
  2. Creating the Status Quo
  3. An Ansible Primer

(This section will be updated as the series progresses.)

Code

You can view the code related to this post under the post-03-ansible-primer tag.

Note

The first post has a quick style section that might be useful.

If you're using Vagrant on Windows with Hyper-V, there's a good chance you'll need to append --provider=hyperv to any vagrant up commands. If you're not sure, don't worry. Windows will be very helpful and crash with a BSOD (technically green now) if you use the wrong provider. The first post has more information on this useful feature.

I'm still fairly new to Ansible, so if something is horribly wrong, please let me know and I'll fix it ASAP. I've tried to follow the best practices. I still don't know what I don't know about Ansible, so the code might change drastically from post to post as I discover new things.

Ansible

Ansible is great. Using basic markup, you can script most of the things you can think of doing via a good command line (so not PowerShell). It even got me to begrudgingly learn Python. Rather than waste time gushing about how easy it is to use and how much it can change your life, I'll jump right in.

Configuration

If you're a masochist and enjoy manually specifying every option and every flag on every Ansible command directly, skip this section. If that doesn't sound fun, you can instead use a configuration file to DRY your scripting.

Out of the box, Ansible loads its (possibly empty) global configuration file, /etc/ansible/ansible.cfg. If you're working in a shared environment, or previously set up Ansible, Ansible might load an environment or userspace config file instead. Luckily, Ansible conveniently provides its discovered config with the --version flag:

$ ansible --version
ansible 2.4.1.0
config file = None
...
$ touch ~/.ansible.cfg
$ ansible --version
ansible 2.4.1.0
config file = /home/user/.ansible.cfg
...

Ansible only loads the first file it finds. It won't merge, say, a local directory config and your global $HOME config. Ansible starts with its base configuration and updates only the values you've specified. If you're not paying attention, this can bite you. For example, the default inventory, /etc/ansible/hosts, probably doesn't contain the hosts you're about to set up. You'll either have to specify a local inventory on every run via the --inventory flag or add inventory = /path/to/inventory to the project's main config file once. I prefer the latter option.
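To make that concrete, here's a minimal project-level config; the ./inventory path is an assumption for illustration, not the repo's actual layout:

```ini
# ansible.cfg in the project root
[defaults]
# Use the project inventory instead of /etc/ansible/hosts
inventory = ./inventory
```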

ansible-config

If you're using Ansible >=2.4, you can quickly verify config via ansible-config. If you're not using Ansible >=2.4 and don't have a serious architecture reason to resist change, pause, go update Ansible, and come back.

The --only-changed flag with dump is mind-blowingly useful when trying to figure out what's not stock:

$ ansible-config dump --only-changed

You can also view the entire configuration, which is just as insanely useful for debugging as the --only-changed refinement.

Inventory

Ansible's inventory provides all the information necessary to manage the desired machines, local or remote. You'll need to add things like addresses and usernames, so be careful with its contents. I personally wouldn't store that information, even encrypted, in a public repo, but YMMV.

(Quick aside: You can also use dynamic inventories, generated from local code or API returns. I really want to try this, and might hit it later.)

While inventories can be one of many supported filetypes, I'll be using YAML files. I find it easier to keep track of all the Ansible configuration when I don't have to swap between syntaxes (as similar as they are).

The first component of an inventory entry (in YAML, at least) is the owning group. all is a magic group that can be used when you don't want to explicitly name the group of hosts; even if you don't use it, all will get all the hosts in the inventory.

inventory
---
my_first_group:
  # ...
all:
  # ...
  # also magically contains my_first_group

Below the group are its defining characteristics, such as its child hosts, child groups, and group-scoped vars.

inventory
---
all:
  hosts:
    specific_host:
      # ...
    other_host:
      # ...
  children:
    child_group:
      hosts:
        turtles_all_the_way_down:
          # ...
  vars:
    group_scoped_variable: 'can be overridden per host'

Each of the hosts may redefine connection behavior and can also define host-specific variables (unrelated to Ansible itself):

inventory
---
all:
  hosts:
    specific_host:
      ansible_user: cool_user
      host_scoped_variable: 'not accessible to its group'

To make managing all of this information easier, you can split out group and host vars. Ansible searches for group_vars/groupname.yml and host_vars/hostname.yml in the inventory path. If found, Ansible merges those vars in with the variables defined in the inventory file.

$ tree example-inventory
example-inventory
├── group_vars
│   ├── all.yml
│   └── named_group.yml
├── host_vars
│   └── specific_hostname.yml
├── named_group.yml
└── ungrouped.yml

2 directories, 5 files

The precedence might be surprising: facts from an inventory file are replaced by facts from (group|host)_vars files. Using the above example, these values represent the final value of facts defined in multiple locations (assuming they're set only in the locations listed):

---
# Set in:
# - under named_group vars in named_group.yml
group_inventory_final_say: 'named_group.yml'

# Set in:
# - under named_group vars in named_group.yml
# - group_vars/all.yml
all_group_vars_final_say: 'group_vars/all.yml'

# Set in:
# - under named_group vars in named_group.yml
# - group_vars/all.yml
# - group_vars/named_group.yml
group_vars_final_say: 'group_vars/named_group.yml'

# Set in:
# - under named_group vars in named_group.yml
# - group_vars/all.yml
# - group_vars/named_group.yml
# - under specific_hostname (directly) in named_group.yml
host_inventory_final_say: 'named_group.yml'

# Set in:
# - under named_group vars in named_group.yml
# - group_vars/all.yml
# - group_vars/named_group.yml
# - under specific_hostname (directly) in named_group.yml
# - host_vars/specific_hostname.yml
host_vars_final_say: 'host_vars/specific_hostname.yml'

Ad-Hoc Commands

Ansible exposes its API for quick access via ad-hoc commands. Ad-hoc commands aren't run as part of a playbook, so they're very useful for debugging or one-off calls. Similar to tasks inside a playbook (explained later), you must specify the host(s), the module, and its arguments.

$ ansible <host or group> -m <module name> -a "<arguments to pass to the module>"

A common "hello world" command uses the ping module:

$ ansible --connection=local localhost -m ping
localhost | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}

The debug module provides a fast way to view variables. For example, let's check a few Ansible variables against localhost:

$ ansible --connection=local localhost -m debug -a 'msg="Host is {{ ansible_host }} as {{ inventory_hostname }} defined {{ \"locally\" if inventory_file is not defined else (\"in \" + inventory_file) }}"'
localhost | SUCCESS => {
    "msg": "Host is 127.0.0.1 as localhost defined locally"
}

There are no ad-hoc commands in the actual codebase, as the calls are all in playbooks or roles. However, I might occasionally use an ad-hoc command to illustrate a task, and I highly recommend running tasks here as commands to understand how they work.

Playbooks

Ansible proper runs blocks of actions on hosts, defined in YAML files called playbooks. Playbooks are lists of plays, which contain targets, variables, and actions to execute. The ad-hoc commands from the previous section can be rewritten as follows:

playbook.yml
---
- hosts: localhost
  connection: local

  tasks:
    - name: Ping the host
      ping:
    - name: Print hostname metadata
      debug:
        msg: "Host is {{ ansible_host }} as {{ inventory_hostname }} defined {{ 'locally' if inventory_file is not defined else ('in ' + inventory_file) }}"

When run, it looks something like this:

$ ansible-playbook scratch.yml
PLAY [localhost] **********************************************************

TASK [Gathering Facts] ****************************************************
ok: [localhost]

TASK [Ping the host] *****************************************************
ok: [localhost]

TASK [Print hostname metadata] ********************************************
ok: [localhost] => {
    "msg": "Host is 127.0.0.1 as localhost defined locally"
}

PLAY RECAP ****************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0

Jinja2

Ansible templates playbooks (and related files) via Jinja2. The docs include wonderfully handy details, like useful transformation filters (which links Jinja2's built-in filters, an equally handy read). I'm just going to explain Jinja2 basics in Ansible here, as covering a rich templating engine is beyond the scope of this post and project.

Jinja2 searches each template for {{ <expression> }} (actual templates might include other delimiters, e.g. when using the template module). For the most part, these are variables to replace, possibly after applying a filter, but Jinja2 expressions can also include valid code so long as it returns a value (I think; I don't know enough about Python yet to really explore potential counter-examples).

All of Ansible's playbook YAML files are rendered with Jinja2 before being sent to the target (I believe that logic is here; those classes showed up elsewhere while investigating playbook execution). Recent versions of Ansible have begun to include some template style feedback (e.g. no templates in conditionals), but, for the most part, you're on your own.

Personally, I wrap anything templated in double quotes, e.g. "{{ variable_name }}", which means I can quickly distinguish between strings that are templated and those that are not, i.e. "is {{ 'templated' }}" vs 'is not templated'. Ansible's interpretation of the YAML spec is fairly loose (as is the spec); the docs highlight a few important gotchas.

---
not_parsed: this entire string
easier_to_skim_not_parsed: 'this entire string'

user_config_directory: "/home/{{ ansible_user }}/.config"
# /home/me/.config

important_value: null
important_setting: "{{ important_value|default('not that important i guess', true) }}"
# not that important i guess

number_of_seconds_in_a_day_usually: "{{ 60 * 60 * 24 }}"
# 86400

a_dict:
  property_one: yes
  property_two: no
templated_dict: "{{ a_dict }}"
# { 'property_one': true, 'property_two': false }

a_list:
  - one
  - two
templated_list: "{{ a_list }}"
# [ 'one', 'two' ]

Play Meta

The first (logically, at least) components of a play are its metadata. A play first lists its targets, defines local variables (including overriding inherited values), and gathers pertinent host facts.

Plays begin with a hosts variable, which can be a specific host, a group, or a group pattern. As of 2.4, you can additionally specify the order a group will follow. By default, each play will attempt to gather information about all the targeted hosts. If you don't want Ansible to do this, e.g. the play doesn't need any host information, you can disable it with gather_facts: no.

playbook.yml
---
- hosts: specific-host
  ...
- hosts: some-group
  ...
- hosts: all:!except-for-this-host
  ...
- hosts: all
  gather_facts: no
  ...

Plays can (re)define a variety of Ansible options, which come from its superclasses Base (source), Taggable (source), and Become (source). Plays inherit the options defined in the inventory. Anything specified in a play will override the inventory value, e.g. a play's remote_user will replace a host's ansible_user.

playbook.yml
...
  tags:
    - 'is_tagged'
  remote_user: differentuser
  connection: docker
...

(Full disclosure: I couldn't actually find a full list of play options in the docs when I started this project. I did find host options, so I just used those. I just now, while writing this post, discovered all the cool things available by delving in the source code. I suppose I should have done that sooner.)

Like hosts, plays can define vars or include external vars. As usual, these will override host values.

playbook.yml
...
  vars:
    play_scoped_value: 'accessible to its child elements'
    host_scoped_value: 'replaces the host value'
  vars_files:
    - /path/to/a/vars/file.yml
...

Tasks

Plays execute a collection of actions, called tasks, against their hosts. For convenience, Ansible provides three task blocks, pre_tasks, tasks, and post_tasks, executed in that order. Each block is a list of module calls. You can get a list of installed modules via ansible-doc -l, browse a module's documentation via ansible-doc <module name>, and test its syntax via ad-hoc usage. The module list in the online docs may or may not be current, and won't include any extensions you've installed locally.
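A quick sketch of the three blocks in a single play to show the ordering (the messages are placeholders):

```yaml
---
- hosts: localhost
  connection: local
  gather_facts: no

  pre_tasks:
    - debug:
        msg: 'runs first'

  tasks:
    - debug:
        msg: 'runs second'

  post_tasks:
    - debug:
        msg: 'runs last'
```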

Task attributes are defined locally and in its superclasses Base (source), Conditional (source), Taggable (source), and Become (source). The simplest task form is just a module call:

tasks.yml
...
tasks:
  - debug:
      msg: 'barebones'
...

In practice, it's usually a good idea to at least provide a name for logging:

tasks.yml
...
tasks:
  - name: Log a simple message
    debug:
      msg: 'barebones with name'
...
$ ansible-playbook scratch.yml
PLAY **********************************************************************

TASK [setup] **************************************************************
ok: [localhost]

TASK [debug] **************************************************************
ok: [localhost] => {
    "msg": "barebones"
}

TASK [Log a simple message] ***********************************************
ok: [localhost] => {
    "msg": "barebones with name"
}

PLAY RECAP ****************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0

It's often useful to pass information from one task to another. Each module returns the result (if any) of its action (check its format via ansible-doc or the online docs) as well as common values. Usually, you're getting the result of AnsibleModule.run_command after the module processes its results. To access this return elsewhere, include register: name_to_register_as, which creates a new fact scoped to the play, i.e. accessible to tasks within the play but not elsewhere.

(Quick aside: The scope works because, as the variable_manager is passed around, it is serialized via pickle and, when deserialized, the nonpersistent cache is initialized to an empty dict. If that explanation is wrong, I apologize; I don't fully grok the process and am making a few logical jumps based off the code I was able to figure out and trace.)

tasks.yml
...
tasks:
  - name: Illustrate registering a task's output
    stat:
      path: /tmp/provisioning
    register: demo_register

  # This puts the item in the logs
  # Alternatively, you could just run the playbook verbosely
  - name: Output previous result
    debug:
      var: demo_register
...
$ ansible-playbook scratch.yml
PLAY **********************************************************************

TASK [setup] **************************************************************
ok: [localhost]

TASK [Illustrate registering a task's output] *****************************
ok: [localhost]

TASK [Output previous result] *********************************************
ok: [localhost] => {
    "demo_register": {
        "changed": false,
        "stat": {
            "exists": false
        }
    }
}

PLAY RECAP ****************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0

Tasks can be run conditionally via when. There are plenty of good reasons for conditional tasks, like performing OS-specific actions, running state-dependent processes, or including/excluding items based on local facts. Tasks whose execution is dependent on the status of other tasks are better handled (pun intended) via Handlers.

tasks.yml
...
  vars:
    max_cache_age: "{{ 60 * 60 * 24 * 7 }}"
    cachefile_path: /tmp/cachefile
    true_in_ancestor: false

  tasks:
    - name: Badger Windows users
      debug:
        msg: You should consider using a more pleasant, less proprietary operating system.
      # The regex_search filter returns matched contents if found and None otherwise
      when: (ansible_distribution|regex_search('([mM]icrosoft|[wW]indows)')) or (ansible_bios_version|regex_search('([hH]yper-[vV])'))

    - name: Check cache age
      stat:
        path: "{{ cachefile_path }}"
      register: cache_age

    - name: Nuke stale cache
      copy:
        content: ''
        dest: "{{ cachefile_path }}"
        force: yes
        owner: "{{ ansible_user }}"
        group: "{{ ansible_user }}"
        mode: 'ugo=rw'
      when: cache_age.stat.exists == false or cache_age.stat.mtime|int < (ansible_date_time.epoch|int - max_cache_age|int)

    - name: Run when local fact is truthy
      expect:
        command: passwd AzureDiamond
        responses:
          (?i)password: 'hunter2'
      when: true_in_ancestor
...
$ ansible-playbook scratch.yml
PLAY [localhost] **********************************************************

TASK [Gathering Facts] ****************************************************
ok: [localhost]

TASK [Badger Windows users] ***********************************************
skipping: [localhost]

TASK [Check cache age] ****************************************************
ok: [localhost]

TASK [Nuke stale cache] ***************************************************
skipping: [localhost]

TASK [Run when local fact is truthy] **************************************
skipping: [localhost]

PLAY RECAP ****************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0

Tasks can also be looped via with_items. This makes duplicating tasks much easier, and also allows each task to focus solely on a single action. The task iterates the contents of with_items, (coerced to) a list, using item as a placeholder. (The loop docs cover other very useful possibilities, like with_filetree and renaming loop_var; RTFM.) For example, the suggested way to install packages (on targets whose shell can install packages by default, so not Windows) looks like this:

loop.yml
...
tasks:
  - name: Ensure dependencies are installed
    package:
      name: "{{ item }}"
      state: present
    with_items:
      - git
      - bash
    become: yes
...
$ ansible-playbook provisioning/scratch.yml --ask-become-pass
SUDO password:

PLAY [localhost] **********************************************************

TASK [Gathering Facts] ****************************************************
ok: [localhost]

TASK [Ensure dependencies are installed] **********************************
ok: [localhost] => (item=git)
ok: [localhost] => (item=bash)

PLAY RECAP ****************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0

Handlers

Handlers are a specific subclass of tasks whose purpose is to execute task-state-dependent tasks. That's a lot to unpack, so let's look at the most common example:

tasks.yml
...
tasks:
  - name: Ensure templated config is in place
    template:
      src: etc/some/service.conf.j2
      dest: /etc/some/service.conf
    register: some_service_config

  - name: Reload some-service on config change
    service:
      name: some-service
      state: restarted
    when: some_service_config|changed
...

This play templates the config for some-service, and, if the file changed, restarts some-service. Ansible will always attempt to run the second task, skipping it when nothing changed, as you can see below:

$ ansible-playbook scratch.yml
PLAY [localhost] **********************************************************

TASK [Gathering Facts] ****************************************************
ok: [localhost]

TASK [Ensure templated config is in place] ********************************
ok: [localhost]

TASK [Reload some-service on config change] *******************************
ok: [localhost]

PLAY RECAP ****************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0

$ ansible-playbook scratch.yml
PLAY [localhost] **********************************************************

TASK [Gathering Facts] ****************************************************
ok: [localhost]

TASK [Ensure templated config is in place] ********************************
ok: [localhost]

TASK [Reload some-service on config change] *******************************
skipping: [localhost]

PLAY RECAP ****************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0

Handlers provide a convenience wrapper for that logic. Rather than registering its output, a task can notify a handler. Handlers are defined in the handlers block of a play. Since handlers aren't executed in the linear manner tasks are, you can reuse the same handler across an entire tasks block. By default, handlers are queued and run at the end of each tasks block, without duplication. You can flush the handler queue early by including a meta: flush_handlers task (do note the queue will still be flushed again at the end of the block). Like tasks, handlers are executed linearly in the order they are defined, not the order they are notified. This provides some structure for handler dependencies and makes notifying multiple handlers easier: declare the handlers in the order they must run, and you can notify them in any order.

Refactoring the leading example gives something like this:

handler.yml
...
tasks:
  - name: Ensure templated config is in place
    template:
      src: etc/some/service.conf.j2
      dest: /etc/some/service.conf
    notify: restart some-service

handlers:
  - name: restart some-service
    service:
      name: some-service
      state: restarted
...
$ ansible-playbook scratch.yml
PLAY [localhost] **********************************************************

TASK [Gathering Facts] ****************************************************
ok: [localhost]

TASK [Ensure templated config is in place] ********************************
ok: [localhost]

RUNNING HANDLER [restart some-service] ************************************
ok: [localhost]

PLAY RECAP ****************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0

$ ansible-playbook scratch.yml
PLAY [localhost] **********************************************************

TASK [Gathering Facts] ****************************************************
ok: [localhost]

TASK [Ensure templated config is in place] ********************************
ok: [localhost]

PLAY RECAP ****************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0

It's also possible to trigger multiple handlers with a single notify. Include listen: 'some string' in the handler body to add additional notify topics. listen is defined as a list, so you can add multiple triggers if desired.

task-and-handlers.yml
...
tasks:
  - name: Trigger immediate handler
    command: echo 'By default, command|changed is always true'
    notify: immediate handler

  - name: Immediately flush handlers queue
    meta: flush_handlers

  - name: Trigger named handler
    command: /bin/true
    notify: named handler

  - name: Trigger listen topic
    command: /bin/true
    notify: 'listen topic'

handlers:
  - name: unnamed handler
    debug:
      msg: 'unnamed handler executed'
    listen:
      - 'listen topic'
      - 'never called topic'

  - name: immediate handler
    debug:
      msg: 'immediate handler executed'

  - name: named handler
    debug:
      msg: 'named handler executed'
    listen: 'listen topic'

  # Removing the name prevents accidental name notification
  # It will only execute on 'listen topic' after the others are finished
  - debug:
      msg: 'order dependent handler executed'
    listen: 'listen topic'
...
$ ansible-playbook scratch.yml
PLAY [localhost] **********************************************************

TASK [Gathering Facts] ****************************************************
ok: [localhost]

TASK [Trigger immediate handler] ******************************************
changed: [localhost]

RUNNING HANDLER [immediate handler] ***************************************
ok: [localhost] => {
    "msg": "immediate handler executed"
}

TASK [Trigger named handler] **********************************************
changed: [localhost]

TASK [Trigger listen topic] ***********************************************
changed: [localhost]

RUNNING HANDLER [unnamed handler] *****************************************
ok: [localhost] => {
    "msg": "unnamed handler executed"
}

RUNNING HANDLER [named handler] *******************************************
ok: [localhost] => {
    "msg": "named handler executed"
}

RUNNING HANDLER [debug] ***************************************************
ok: [localhost] => {
    "msg": "order dependent handler executed"
}

PLAY RECAP ****************************************************************
localhost : ok=8 changed=3 unreachable=0 failed=0

Roles

Roles provide a way to reuse Ansible code across plays and playbooks. You can think of a role as an isolated play that can be inserted anywhere (don't go around the internet quoting me verbatim; while not technically true, it's a good analogy). Roles usually live beside the playbook in the ./roles directory (you can specify fancier setups via roles_path) and have a well-defined directory structure. Instead of being declared in a single file like playbooks, roles are constructed from the contents of their respective directory, <role path>/<role name>. Any missing components are simply ignored, although at least one has to exist.

Examples make that wall of text more palatable. Let's recode one of the earlier tasks as a role. A great starting point is the package task. A descriptive name like installs_common_dependencies makes it easy to reference. To simply duplicate the task example, this is all that's necessary:

$ tree roles
roles
└── installs_common_dependencies
    └── tasks
        └── main.yml

2 directories, 1 file
roles/installs_common_dependencies/tasks/main.yml
# roles/installs_common_dependencies/tasks/main.yml
---
- name: Ensure dependencies are installed
  package:
    name: "{{ item }}"
    state: present
  with_items:
    - git
    - bash
  become: yes

The role can now easily be included in a play as a top-level attribute. The roles block is compiled to a list of tasks and run exactly like a task block. roles are run after pre_tasks but before tasks.

playbook.yml
---
- hosts: localhost
  connection: local

  roles:
    - role: installs_common_dependencies

  pre_tasks:
    - debug:
        msg: pre_tasks

  tasks:
    - debug:
        msg: tasks
$ ansible-playbook scratch.yml --ask-become-pass
SUDO password:

PLAY [localhost] **********************************************************

TASK [Gathering Facts] ****************************************************
ok: [localhost]

TASK [debug] **************************************************************
ok: [localhost] => {
    "msg": "pre_tasks"
}

TASK [installs_common_dependencies : Ensure dependencies are installed] ***
ok: [localhost] => (item=git)
ok: [localhost] => (item=bash)

TASK [debug] **************************************************************
ok: [localhost] => {
    "msg": "tasks"
}

PLAY RECAP ****************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0

By default, Ansible searches each block component directory for a main.yml file, e.g. Ansible needs tasks/main.yml but doesn't care about files/main.yml (more on that later). You can include other files in those directories without issue. Ansible will completely ignore them (i.e. anything not main.yml) until you explicitly include them.

If we try to run installs_common_dependencies on a Windows target, we're going to run into issues. package doesn't work on operating systems whose default package manager is Bing via Internet Explorer. Let's expand the tasks to handle different OS families:

$ tree roles
roles
└── installs_common_dependencies
    └── tasks
        ├── main.yml
        ├── not_windows.yml
        └── windows.yml

2 directories, 3 files
roles/installs_common_dependencies/tasks/main.yml
# roles/installs_common_dependencies/tasks/main.yml
---
- include_tasks: windows.yml
  when: ansible_distribution|regex_search('([mM]icrosoft|[wW]indows)')

- include_tasks: not_windows.yml
  when: not ansible_distribution|regex_search('([mM]icrosoft|[wW]indows)')
roles/installs_common_dependencies/tasks/not_windows.yml
# roles/installs_common_dependencies/tasks/not_windows.yml
---
- name: Ensure dependencies are installed
  package:
    name: "{{ item }}"
    state: present
  with_items:
    - git
    - bash
  become: yes

WARNING: I haven't actually tested this (or any of following improvements) on a Windows machine because setting it up requires more time than I feel like spending in PowerShell this weekend. Use at your own risk.

roles/installs_common_dependencies/tasks/windows.yml
# roles/installs_common_dependencies/tasks/windows.yml
---
- name: Badger the user
  debug:
    msg: 'There is hope available--Google "microsoft windows replacement"'

- name: Ensure dependencies are installed via chocolatey
  win_chocolatey:
    name: "{{ item }}"
    state: present
  with_items:
    - poshgit

- name: Ensure necessary features are installed
  win_feature:
    name: "{{ item }}"
    state: present
    include_management_tools: yes
    include_sub_features: yes
  register: features_update
  with_items:
    - Windows Subsystem for Linux

- name: Reboot if necessary (usually is)
  win_reboot:
  # I honestly have no idea if this works
  # I also honestly have no idea how to build a context to test it
  when: True in features_update.results|map(attribute='reboot_required')|list|unique

Splitting out the OS tasks has created a maintenance annoyance: we've now got two files to update when we want to modify the role. Luckily, Ansible has a solid solution for that.

$ tree roles
roles
└── installs_common_dependencies
    ├── defaults
    │   └── main.yml
    └── tasks
        ├── main.yml
        ├── not_windows.yml
        └── windows.yml

3 directories, 4 files
roles/installs_common_dependencies/defaults/main.yml
# roles/installs_common_dependencies/defaults/main.yml
---
common_dependencies:
  easy:
    - git
    - bash
  hard:
    choco:
      - poshgit
    features:
      - Windows Subsystem for Linux
roles/installs_common_dependencies/tasks/not_windows.yml
# roles/installs_common_dependencies/tasks/not_windows.yml
---
- name: Ensure dependencies are installed
  package:
    name: "{{ item }}"
    state: present
  with_items: "{{ common_dependencies['easy'] }}"
  become: yes
roles/installs_common_dependencies/tasks/windows.yml
# roles/installs_common_dependencies/tasks/windows.yml
---
- name: Badger the user
  debug:
    msg: 'There is hope available--Google "microsoft windows replacement"'

- name: Ensure dependencies are installed via chocolatey
  win_chocolatey:
    name: "{{ item }}"
    state: present
  with_items: "{{ common_dependencies['hard']['choco'] }}"

- name: Ensure necessary features are installed
  win_feature:
    name: "{{ item }}"
    state: present
    include_management_tools: yes
    include_sub_features: yes
  register: features_update
  with_items: "{{ common_dependencies['hard']['features'] }}"

- name: Reboot if necessary (usually is)
  win_reboot:
  # I honestly have no idea if this works
  # I also honestly have no idea how to build a context to test it
  when: True in features_update.results|map(attribute='reboot_required')|list|unique
$ ansible-playbook scratch.yml --ask-become-pass
SUDO password:

PLAY [localhost] **********************************************************

TASK [Gathering Facts] ****************************************************
ok: [localhost]

TASK [installs_common_dependencies : include_tasks] ***********************
skipping: [localhost]

TASK [installs_common_dependencies : include_tasks] ***********************
included: <truncated>/roles/installs_common_dependencies/tasks/not_windows.yml for localhost

TASK [installs_common_dependencies : Ensure dependencies are installed] ***
ok: [localhost] => (item=git)
ok: [localhost] => (item=bash)

PLAY RECAP ****************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0
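The play output shows one `include_tasks` skipping and the other including, which means the role's `tasks/main.yml` dispatches on the gathered OS facts. As a reminder, it looks something like this (a sketch consistent with the output; the `ansible_os_family` condition is one reasonable way to split Windows from everything else):

```yaml
# roles/installs_common_dependencies/tasks/main.yml (sketch)
---
# Windows hosts get the chocolatey/feature tasks
- include_tasks: windows.yml
  when: ansible_os_family == 'Windows'

# Everything else uses the generic package module
- include_tasks: not_windows.yml
  when: ansible_os_family != 'Windows'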

Roles also provide a local directory for includable files and templates. Any items in <role name>/files or <role name>/templates can be referenced relatively, rather than trying to piece together an absolute path. If these directories contain a main.yml, it won't do anything unless referenced as the target of a module.

We can quickly expand the current example to copy a common .gitconfig to the user's home directory. (Note: I'm going to abandon the pretense of Windows support because I have more interesting things to write about. Sorry not sorry.) I like to treat files and templates as if they were the filesystem root (/), which makes managing the imports and templates much easier at the cost of lots of directories.

$ tree roles
roles
└── installs_common_dependencies
    ├── defaults
    │   └── main.yml
    ├── files
    │   └── home
    │       └── user
    │           └── gitconfig
    └── tasks
        ├── main.yml
        ├── not_windows.yml
        └── windows.yml

6 directories, 5 files
roles/installs_common_dependencies/files/home/user/gitconfig

# roles/installs_common_dependencies/files/home/user/gitconfig
[help]
    autocorrect = 1
[core]
    autocrlf = input
[push]
    default = matching
roles/installs_common_dependencies/tasks/not_windows.yml

# roles/installs_common_dependencies/tasks/not_windows.yml
---
- name: Ensure dependencies are installed
  package:
    name: "{{ item }}"
    state: present
  with_items: "{{ common_dependencies['easy'] }}"
  become: yes

- name: Ensure user gitconfig exists
  copy:
    src: home/user/gitconfig
    dest: "/home/{{ ansible_user }}/.gitconfig"
    force: no
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: 'ug=rw,o=r'
$ ansible-playbook scratch.yml --ask-become-pass
SUDO password:

PLAY [localhost] **********************************************************

TASK [Gathering Facts] ****************************************************
ok: [localhost]

TASK [installs_common_dependencies : include_tasks] ***********************
skipping: [localhost]

TASK [installs_common_dependencies : include_tasks] ***********************
included: <truncated>/roles/installs_common_dependencies/tasks/not_windows.yml for localhost

TASK [installs_common_dependencies : Ensure dependencies are installed] ***
ok: [localhost] => (item=git)
ok: [localhost] => (item=bash)

TASK [installs_common_dependencies : Ensure user gitconfig exists] ********
ok: [localhost]

PLAY RECAP ****************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0

Roles can also include metadata via <role name>/meta. At the moment, there are only three meta attributes:

  • allow_duplicates: This allows a role to be duplicated without unique options. By default, a role is only executed once per play no matter how many times it's referenced.
  • dependencies: This list allows you to prepend any role dependencies before executing the current role. The process loads recursively, so you don't have to worry about including dependency dependencies. If the order of inclusion matters, consider setting allow_duplicates on the dependencies (but first try to refactor that behavior out).
  • galaxy_info: This contains metadata for Ansible Galaxy. Ansible Galaxy is a fantastic resource for both great roles and Ansible usage, as it contains roles written by solid developers consumed by users all over (I can say they're written by solid developers because I haven't published any roles yet).
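Putting the three attributes together, a role's meta file might look something like this (an illustrative sketch; this file isn't part of the example role, and the galaxy_info values are placeholders):

```yaml
# roles/installs_common_dependencies/meta/main.yml (sketch)
---
# Default behavior: the role runs at most once per play
allow_duplicates: no

# Roles listed here run (recursively) before this role
dependencies: []

# Only needed if you plan to publish to Ansible Galaxy
galaxy_info:
  author: your_name_here
  description: Installs common dependencies
  license: MIT
```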

Recap

Ansible is amazing. By now you should be able to set its configuration, quickly test tasks, construct playbooks, and create reusable content. The best part of this whole post is that I've barely scratched the surface. Google, StackExchange, and the official docs have so many good ideas to try out. There's so much more that I'd love to write about but I really need to publish this and move on to the actual project: automating and securing SSH configuration.

Before you go, check out popular roles on Ansible Galaxy. It's useful to see some of this in action. Those repos are chock full of little tools and styles that get overlooked in a post like this.

CJ Harries

I did a thing once. Change "blog." to "cj@" and you've got my email. All these opinions are mine and might not be shared by clients or employers.
