Learn Ansible

Playbooks

In the previous section, running the ansible command allowed us to call a single module. In this section, we are going to look at calling several modules. The following playbook is called playbook01.yml. It calls the setup module we called in the previous section and then uses the debug module to print a message to the screen:

---

- hosts: boxes
  gather_facts: true
  become: yes
  become_method: sudo

  tasks:
    - debug:
        msg: "I am connecting to {{ ansible_nodename }} which is running {{ ansible_distribution }} {{ ansible_distribution_version }}"

Before we start to break the configuration down, let's take a look at the results of running the playbook. To do this, use the following command:

$ ansible-playbook -i hosts playbook01.yml

This will connect to our Vagrant box, gather information on the system, and then return just the information we want in a message:

The first thing you will notice about the playbook is that it is written in YAML, which is a recursive acronym that stands for YAML Ain't Markup Language. YAML was designed to be a human-readable data serialization standard that can be used by all programming languages. It is commonly used to help define configurations.
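As a quick, hypothetical illustration (this snippet is not one of the chapter's playbooks), YAML expresses structure using key-value pairs, lists, and indentation:

# a made-up example, not part of the example playbooks
web_server:
  package: "httpd"
  ports:
    - 80
    - 443

Here, package and ports belong to web_server purely because of how far they are indented.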

The indentation is very important in YAML as it is used to nest and define areas of the file. Let's look at our playbook in more detail:

---

While this line might not seem like much, it is used as a document separator; Ansible compiles all of the YAML files into a single file (more on that later), and it is important for Ansible to know where one document ends and another begins.

Next up, we have the configuration for the playbook. As you can see, this is where the indentation starts to come into play:

- hosts: boxes
  gather_facts: true
  become: yes
  become_method: sudo
  tasks:

The - tells Ansible that this is the start of a section. From there, key-value pairs are used. These are:

  • hosts: This tells Ansible the host or host group to target in the playbook. This must be defined in a host inventory like the ones we covered in the previous section (a minimal example inventory is sketched just after this list).
  • gather_facts: This tells Ansible to run the setup module when it first connects to the host. This information is then available to the playbook during the remainder of the run.
  • become: This is present because we are connecting to our host as a basic user, in this case the vagrant user. Ansible may not have enough access privileges to execute some of the commands we are telling it to, so this instructs Ansible to execute all of its commands as the root user.
  • become_method: This tells Ansible how to become the root user; in our case, we have a passwordless sudo configured by Vagrant, so we are using sudo.
  • tasks: These are the tasks we can tell Ansible to run when connected to the target host.
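For reference, a minimal sketch of what such an inventory could look like is shown below; the host name, IP address, user, and key path are assumptions based on a typical Vagrant setup and should be adjusted to match whatever you created in the previous section:

# hosts - a hypothetical inventory; adjust the values to match your own Vagrant box
box ansible_host=192.168.50.4 ansible_user=vagrant ansible_private_key_file=.vagrant/machines/default/virtualbox/private_key

[boxes]
box

With an inventory like this in place, the hosts: boxes line in the playbook targets every host listed in the [boxes] group.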

You will notice that from here, we move the indentation across again. This defines another section of the configuration. This time it is for the tasks:

    - debug:
        msg: "I am connecting to {{ ansible_nodename }} which is running {{ ansible_distribution }} {{ ansible_distribution_version }}"

As we have already seen, the only task we are running is the debug module. This module allows us to display output in the Ansible playbook run stream you saw when we ran the playbook.

You may have already noticed that the information between the curly brackets consists of keys from the setup module. Here, we are telling Ansible to substitute the value of each key wherever we use the key; we will be using this a lot in our playbooks. We will also be defining our own key-value pairs to use as part of our playbook runs.
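If you want to see which of these keys are available for a host, you can call the setup module directly with a filter. For example, a command along the following lines (using the same inventory) should list just the distribution-related facts for our box:

$ ansible boxes -i hosts -m setup -a "filter=ansible_distribution*"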

Let's extend our playbook by adding another task. The following can be found as playbook02.yml:

---

- hosts: boxes
  gather_facts: true
  become: yes
  become_method: sudo

  tasks:
    - debug:
        msg: "I am connecting to {{ ansible_nodename }} which is running {{ ansible_distribution }} {{ ansible_distribution_version }}"
    - yum:
        name: "*"
        state: "latest"

As you can see, we have added a second task, which calls the yum module. This module is designed to help us interact with yum, the package manager used by CentOS and other Red Hat-based operating systems. We are setting two key values here:

  • name: Here we are using a wildcard (*). This tells Ansible to target all of the installed packages rather than just a single named package; for example, we could have used httpd here to target just Apache (see the short example after this list).
  • state: Here, we are telling Ansible to ensure that the package defined in the name key is at its latest version. As we have targeted all of the installed packages, this will update everything we have installed.
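To illustrate targeting a single package rather than everything that is installed, the task could instead look something like this (this snippet is not part of the example playbooks):

    - yum:
        name: "httpd"
        state: "latest"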

Run the playbook using:

$ ansible-playbook -i hosts playbook02.yml

This will give us the following results:

The yum task has been marked as changed on the host box. This means that packages were updated. Running the same command again shows the following:

As you can see, the yum task is now showing as ok on our host. This is because there are no longer any packages that require updating.
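If you want to double-check this from inside the box itself, you could run something like the following; yum check-update should report that no updates are available:

$ vagrant ssh
$ sudo yum check-update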

Before we finish this quick look at playbooks, let's do something more interesting. The following playbook, called playbook03.yml, adds installing, configuring, and starting the NTP service to our Vagrant box. It also adds a few new sections to our playbook as well as using a template:

---

- hosts: boxes
  gather_facts: true
  become: yes
  become_method: sudo

  vars:
    ntp_servers:
      - "0.centos.pool.ntp.org"
      - "1.centos.pool.ntp.org"
      - "2.centos.pool.ntp.org"
      - "3.centos.pool.ntp.org"

  handlers:
    - name: "restart ntp"
      service:
        name: "ntpd"
        state: "restarted"

  tasks:
    - debug:
        msg: "I am connecting to {{ ansible_nodename }} which is running {{ ansible_distribution }} {{ ansible_distribution_version }}"
    - yum:
        name: "*"
        state: "latest"
    - yum:
        name: "{{ item }}"
        state: "installed"
      with_items:
        - "ntp"
        - "ntpdate"
    - template:
        src: "./ntp.conf.j2"
        dest: "/etc/ntp.conf"
      notify: "restart ntp"

Before we work through the additions to our playbook, let's run it to get an idea of the feedback you get from Ansible:

$ ansible-playbook -i hosts playbook03.yml

The following screenshot shows the output for the preceding command:

This time, we have three changed tasks. Running the playbook again shows the following:

As expected, because we haven't changed the playbook or anything on the Vagrant box, there are no changes and Ansible is reporting everything as ok.

Let's go back to our playbook and discuss the additions. You will notice that we have added two new sections, vars and handlers, as well as two new tasks: a second task which uses the yum module, and a final task which utilizes the template module.

The vars section allows us to configure our own key-value pairs. In this case, we are providing a list of NTP servers, which we will be using later in the playbook:

  vars:
    ntp_servers:
      - "0.centos.pool.ntp.org"
      - "1.centos.pool.ntp.org"
      - "2.centos.pool.ntp.org"
      - "3.centos.pool.ntp.org"

As you can see, we are actually providing four different values for the same key. These will be used in the template task. We could have also written this as follows:

  vars:
    ntp_servers: [ "0.centos.pool.ntp.org", "1.centos.pool.ntp.org", "2.centos.pool.ntp.org", "3.centos.pool.ntp.org" ]

However, this is a little more difficult to read. The next new section is handlers. A handler is a task that is assigned a name and called at the end of a playbook run, but only if it has been notified by a task that has changed something:

  handlers:
    - name: "restart ntp"
      service:
        name: "ntpd"
        state: "restarted"

In our case, the restart ntp handler uses the service module to restart ntpd. Next up, we have the two new tasks, starting with one which installs the NTP service and also the ntpdate package using yum:

    - yum:
        name: "{{ item }}"
        state: "installed"
      with_items:
        - "ntp"
        - "ntpdate"

As we are installing two packages, we need a way to provide two different package names to the yum module so that we don't need a separate task for each package installation. To achieve this, we are using the with_items command as part of the task section. Note that this is in addition to the yum module and is not part of the module; you can tell this by the indentation.

The with_items command allows you to provide a variable or list of items to the task. Wherever {{ item }} is used, it will be replaced with the content of the with_items value.
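To illustrate what with_items is saving us, the same result could have been achieved with two separate tasks, along the lines of the following (again, this snippet is not part of the example playbooks):

    - yum:
        name: "ntp"
        state: "installed"
    - yum:
        name: "ntpdate"
        state: "installed"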

The final addition to the playbook is the following task:

    - template:
        src: "./ntp.conf.j2"
        dest: "/etc/ntp.conf"
      notify: "restart ntp"

This task uses the template module to read a template file from our Ansible controller, process it, and upload the processed template to the host machine. Once the file has been uploaded, we are telling Ansible to notify the restart ntp handler if there have been any changes to the configuration file we are uploading.

In this case, the template file is the ntp.conf.j2 file in the same folder as the playbooks, as defined in the src option. This file looks like this:

# {{ ansible_managed }}
driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1
{% for item in ntp_servers %}
server {{ item }} iburst
{% endfor %}
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor

The bulk of the file is the standard NTP configuration file, with the addition of a few Ansible parts. The first addition is the very first line:

# {{ ansible_managed }}

If this line wasn't there, then every time we ran Ansible the file would be uploaded, which would count as a change, and the restart ntp handler would be called, meaning that NTP would be restarted even when there were no changes.

The next part loops through the ntp_servers values we defined in the vars section of the playbook:

{% for item in ntp_servers %}
server {{ item }} iburst
{% endfor %}

For each of the values, the loop adds a line containing server, then the value itself, and then iburst. You can see the result of this by SSHing into the Vagrant machine and opening /etc/ntp.conf:

$ vagrant ssh
$ cat /etc/ntp.conf

The following screenshot shows the output for the preceding command:

As you can see from the preceding screenshot of the fully rendered file, we have the comment on the first line noting that the file is managed by Ansible and also the four lines containing the NTP servers to use.
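Based on the template above and the ntp_servers list from the vars section, the fully rendered file should look something like this (the first line assumes the default ansible_managed string has not been customized):

# Ansible managed
driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor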

Finally, you can check that NTP is running as expected by running the following commands:

$ vagrant ssh
$ sudo systemctl status ntpd

The following screenshot shows the output for the preceding command:

As you can see from the preceding output, NTP is loaded and running as expected. Let's remove our Vagrant box and launch a fresh one by running the following command:

$ vagrant destroy

Then launch the box again by running one of the following two commands:

$ vagrant up
$ vagrant up --provider=vmware_fusion

Once the box is up and running, we can run the final playbook with:

$ ansible-playbook -i hosts playbook03.yml

After a minute or two, you should receive the results of the playbook run. You should see five changed and six ok:

Running for the second time will just show five ok:

The reason why we got six ok on the first run and five ok on the second run is that nothing has changed since the first run; therefore, the handler to restart NTP is never notified, so the task to restart the service never executes.

Once you have finished with the example playbooks, you can terminate the running box using:

$ vagrant destroy

We will be using the box again in the following chapter.