Set Up an Automation Host with Ansible
Introduction
In today's dynamic network environments, keeping all Linux hosts in an accurate and up-to-date state is crucial.
This document describes the setup and use of Ansible, combined with the flexibility of Python and the scheduling power of cron jobs.
We set up an automation pipeline that accesses your Linux hosts via SSH and keeps them on the latest packages and configuration.
This guide will cover:
- Setting up Ansible and Python: fetching device configurations and updating Linux hosts
- Automating with cron jobs: scheduling regular runs to keep the hosts in sync.
- A practical example: updating a Linux host with the newest packages and kernel updates, then rebooting the host if required.
File Structure
We use the following file structure on the automation host:

/var/automate/
└── netadmin/
    └── ansible-playbooks/
        └── maintenance/
            ├── master.yml
            ├── adminservers.yml
            ├── inventory/
            │   └── production
            └── roles/
                └── common/

In this example we have a host group called adminservers, which for a start contains only the automation host itself; let's call it automate.lan.domain.com.
Installation
Install the required tools
- Log in as the netadmin user. This user needs to be the default user on the Linux host automate.lan.domain.com.
- Become root with sudo -i, then run:
sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible
mkdir -p /var/automate/netadmin/ansible-playbooks/maintenance
cd /var/automate/netadmin/ansible-playbooks/maintenance
mkdir inventory roles
Change access rights to /var/automate
chgrp -R netadmin /var/automate
chmod -R 774 /var/automate
ls -ld /var/automate
drwxrwxr-- 4 root netadmin 4096 Mar 13 07:56 /var/automate
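The 774 mode gives the owner (root) and the netadmin group full access, while everyone else gets read-only access. A small self-contained sketch that reproduces the layout and mode in a throwaway temp directory (so no root is needed) and inspects the result:

```shell
#!/bin/sh
# Sketch: rebuild the directory layout and permissions in a throwaway
# temp directory, then inspect the resulting mode (GNU stat, as on Ubuntu).
base=$(mktemp -d)
mkdir -p "$base/automate/netadmin/ansible-playbooks/maintenance/inventory" \
         "$base/automate/netadmin/ansible-playbooks/maintenance/roles"
chmod -R 774 "$base/automate"
# rwxrwxr--: full access for owner and group, read-only for others
stat -c '%a %n' "$base/automate"
rm -rf "$base"
```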
Configuration
Create the Ansible role common
The following creates the default path structure for a role. Ready-made roles can also be installed from Ansible Galaxy:
cd /var/automate/netadmin/ansible-playbooks/maintenance/roles
ansible-galaxy role init common
Create hosts file in inventory folder
This hosts file lists all the hosts we want to manage with Ansible. The hosts are grouped; at the moment we only have the adminservers group with the host automate.lan.domain.com. This will be extended in the future.
vi /var/automate/netadmin/ansible-playbooks/maintenance/inventory/production
[adminservers]
automate.lan.domain.com
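Before wiring anything into cron, it can help to confirm that Ansible parses the inventory and can reach the host over SSH. A quick sketch, assuming the maintenance directory is the current working directory and SSH key access for netadmin is already in place:

```shell
cd /var/automate/netadmin/ansible-playbooks/maintenance
# Show how Ansible sees the inventory groups and hosts
ansible-inventory -i inventory/production --list
# Ad-hoc connectivity test against every host in the adminservers group
ansible -i inventory/production adminservers -m ping
```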
Create the master.yml file for our "maintenance" ansible-playbook
This is the master YAML file for running the ansible-playbook.
vi /var/automate/netadmin/ansible-playbooks/maintenance/master.yml
---
# file: master.yml
- import_playbook: adminservers.yml
Create the adminservers.yml config file
This is the YAML config file that describes which roles should be used. At the moment we only use the "common" role; a secondary "monitoring" role is commented out for later use.
The "common" role runs common tasks on all hosts in the adminservers group.
The "monitoring" role could be used, for example, to configure syslog on the host.
vi /var/automate/netadmin/ansible-playbooks/maintenance/adminservers.yml
---
- hosts: adminservers
  roles:
    - common
    # - monitoring
What is the "common" role doing?
The goal is to run common tasks in the following order:
- Update all packages to their latest version
- Check if a reboot is required
- Reboot if required
Create main.yml file:
vi /var/automate/netadmin/ansible-playbooks/maintenance/roles/common/tasks/main.yml
---
- include_tasks: debian.yml
  when: ansible_os_family == "Debian"
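The OS-family dispatch above makes the role easy to extend. As an illustration only (not part of the original setup), a hypothetical redhat.yml could mirror debian.yml using the dnf module and the needs-restarting utility, included from main.yml with when: ansible_os_family == "RedHat":

```yaml
---
# Hypothetical roles/common/tasks/redhat.yml -- illustration only
- name: Update all packages to their latest version
  ansible.builtin.dnf:
    name: "*"
    state: latest

- name: Check if a reboot is required
  # needs-restarting -r (from dnf-utils) exits 1 when a reboot is needed
  ansible.builtin.command: needs-restarting -r
  register: reboot_check
  changed_when: false
  failed_when: reboot_check.rc not in [0, 1]

- name: Reboot if required
  ansible.builtin.reboot:
  when: reboot_check.rc == 1
```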
Create debian.yml:
vi /var/automate/netadmin/ansible-playbooks/maintenance/roles/common/tasks/debian.yml
- name: Update all packages to their latest version
  ansible.builtin.apt:
    update_cache: true
    autoremove: true
    cache_valid_time: 3600
    name: "*"
    state: latest

- name: Check if a reboot is required
  ansible.builtin.stat:
    path: /var/run/reboot-required
  register: reboot_required_file

- name: Reboot if required
  ansible.builtin.reboot:
    msg: "Reboot initiated by Ansible due to kernel updates"
    connect_timeout: 5
    reboot_timeout: 300
    pre_reboot_delay: 0
    post_reboot_delay: 30
    test_command: uptime
  when: reboot_required_file.stat.exists
Create the ansible.cfg (See comments for parameter description):
sudo vi /etc/ansible/ansible.cfg
[defaults]
# Ansible enables host key checking by default.
# Checking host keys guards against server spoofing and man-in-the-middle
# attacks, but it does require some maintenance.
# If this is set to False, host keys are automatically accepted and added.
# Note that automatically accepting host keys is a security risk.
host_key_checking = False
# Python emits warnings in the log while an ansible-playbook runs.
# The following silences these warnings so they do not flood the output.
interpreter_python = auto_silent
# Profile how long each task takes
callbacks_enabled = profile_tasks
# parallel processing
forks = 10
[privilege_escalation]
# Use sudo on the destination host to run tasks as root.
become = True
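Since ansible.cfg can be read from several locations, it is worth verifying which settings actually took effect. ansible-config can dump only the values that differ from the built-in defaults:

```shell
# Show only settings that differ from Ansible's defaults,
# along with the config source they came from
ansible-config dump --only-changed
```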
When this playbook runs, it follows this flow:
- Gather facts about the hosts in the group "adminservers". This returns a lot of information about each host, such as the OS family ("Debian", "RedHat", etc.).
- Update all packages to their latest version
- Check if a reboot is required
- Reboot if required
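Before scheduling the playbook, it is worth running it by hand. A sketch, assuming the paths from this guide:

```shell
cd /var/automate/netadmin/ansible-playbooks/maintenance
# Validate the playbook syntax without contacting any host
ansible-playbook -i inventory/production master.yml --syntax-check
# Dry run: report what would change without changing anything
ansible-playbook -i inventory/production master.yml --check
# Real run
ansible-playbook -i inventory/production master.yml
```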
Add the cron job
At the moment we only use the user and group netadmin. Later, more users can be added as members of the netadmin group, each with their personal Ansible playbooks under the path /var/automate/<username>/ansible-playbooks.
Each user can then edit their cron jobs with the command
crontab -e
An example cron job. This entry runs every day at 02:00 in the morning:
0 2 * * * /usr/bin/ansible-playbook -i /var/automate/netadmin/ansible-playbooks/maintenance/inventory/production /var/automate/netadmin/ansible-playbooks/maintenance/master.yml
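Depending on the system, cron either mails job output or drops it entirely. A variant of the entry above that appends stdout and stderr to a log file instead (the log path is just an example):

```
0 2 * * * /usr/bin/ansible-playbook -i /var/automate/netadmin/ansible-playbooks/maintenance/inventory/production /var/automate/netadmin/ansible-playbooks/maintenance/master.yml >> /var/automate/netadmin/ansible-playbooks/maintenance/cron.log 2>&1
```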
Summary
By following these steps, you have a system where Ansible periodically updates all managed hosts.
The cron job ensures that this process runs every day at 02:00 in the morning.
If you add all your Linux hosts to the Ansible inventory file (inventory/production), every host will be included in this process.