
Building a Custom Ansible Execution Environment

Recently I’ve been working on an Ansible upgrade project that included building out an Ansible Automation Platform installation and upgrading legacy Ansible code to modern standards. The code we were working with had mostly been written to target Enterprise Linux versions 6 and 7, and used pre-2.9 Ansible coding standards.

The newer versions of Ansible and Ansible Automation Platform utilise Execution Environments to run the Ansible engine against a host. An Execution Environment is a container image with Ansible, its Python dependencies and Ansible Collections baked in.

On top of the legacy Ansible code I was working with, the codebase does a lot of “magic” configuration to set things up across the environment, so I had to make sure everything kept working as it did previously. I tested a few of the off-the-shelf execution environments, but none of them worked for what we needed.

In this post I wanted to detail a quick tutorial on building a custom execution environment for running your Ansible code.

I’m using Fedora Linux 39 to set up a development environment, but most Linux distributions should follow similar steps.

From the command line, install the required dependencies. As execution environments are containers, we need a container runtime and for that we’ll use Podman. We also need some build tools.

$ sudo dnf install podman python3-pip

Now to install the Ansible dependencies.

$ python3 -m pip install ansible-navigator ansible-builder

Ansible Navigator is the new interface for running Ansible and is great for testing out different execution environments and your Ansible code as you develop. I briefly demonstrated using Ansible Navigator in my article about using Ansible to configure Linux servers. Ansible Builder provides the tools to create the container images.
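
You can quickly confirm both tools installed correctly by checking their versions (the exact output depends on the versions pip resolved):

$ ansible-navigator --version
$ ansible-builder --version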

If you’ve ever built Docker containers before, the steps for EEs are very similar, just with the ansible-builder wrapper on top. Create a folder to store your files.

$ mkdir custom-ee && cd custom-ee

The main file we need to create is the execution-environment.yml file, which Ansible builder uses to build the image.

---
version: 3

images:
  base_image:
    name: quay.io/centos/centos:stream9

dependencies:
  python_interpreter:
    package_system: python3.11
    python_path: /usr/bin/python3.11
  ansible_core:
    package_pip: ansible-core>=2.15
  ansible_runner:
    package_pip: ansible-runner

  galaxy: requirements.yml
  system: bindep.txt
  python: |
    netaddr
    receptorctl

additional_build_steps:
  append_base:
    - RUN $PYCMD -m pip install -U pip
  append_final:
    - COPY --from=quay.io/ansible/receptor:devel /usr/bin/receptor /usr/bin/receptor
    - RUN mkdir -p /var/run/receptor
    - RUN git lfs install --system
    - RUN alternatives --install /usr/bin/python python /usr/bin/python3.11 311

The main parts of the file are fairly self-explanatory, but from the top:

  • We’re using version 3 of the Ansible Builder spec.
  • The base container image we’re building from is CentOS Stream 9, pulled from Quay.io.
  • We want to use Python 3.11 inside the container.
  • We want ansible-core version 2.15 or newer.

In the dependencies section, we can specify the additional software our image requires. The galaxy entry lists Ansible collections to install from the Galaxy repository, system lists packages installed with DNF on a Linux system, and python lists the Python libraries we need, since Ansible is written in Python and different modules need different libraries available depending on your requirements.

The Galaxy collections are defined in an external file called requirements.yml, which sits in the working directory alongside the execution-environment.yml file. It’s simply a YAML file with the following entries:

---
collections:
  - name: ansible.posix
  - name: ansible.utils
  - name: ansible.netcommon
  - name: community.general

My project requires the ansible.posix, ansible.utils, ansible.netcommon and community.general collections. Previously, all of these would have been part of the Ansible codebase and installed along with Ansible itself; however, the Ansible project has since split them out into collections, making Ansible core smaller and more modular. You might not need these exact collections, or you might require different ones depending on your environment, so check out the Ansible documentation.
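
If you’re not sure where to start, you can list the collections installed on your existing Ansible control node as a rough guide to what your playbooks already rely on:

$ ansible-galaxy collection list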

Next is the bindep.txt file for the system binary dependencies. These are installed into our CentOS-based image using DNF.

epel-release [platform:rpm]
python3.11-devel [platform:rpm]
python3-libselinux [platform:rpm]
python3-libsemanage [platform:rpm]
python3-policycoreutils [platform:rpm]
sshpass [platform:rpm]
rsync [platform:rpm]
git-core [platform:rpm]
git-lfs [platform:rpm]

Again, you might require different dependencies, so check the documentation for the Ansible modules you’re using.

Under the python section, I’ve defined the Python dependencies directly rather than using a separate file. If you’d rather use a separate file, it’s called requirements.txt.

    netaddr
    receptorctl

Netaddr is a Python library for working with IP addresses, which the Ansible codebase I was working with needed, and receptorctl is a Python library for working with Receptor, a network service mesh implementation that Ansible uses to distribute work across execution nodes.

With all of that defined, we can build the image.

$ ansible-builder build --tag=custom-ee:1.1

The custom-ee tag is the name of the image that we’ll use to reference it from Ansible. The ansible-builder command runs Podman to build the container image, and the build should take a few minutes. If everything went according to plan, you should see a success message.

Because the images are just standard Podman images, you can run the podman images command to see them. The output should list ‘localhost/custom-ee’, or whatever you tagged your image with.

$ podman images

If the build was successful and the image is available, you can test the image with Ansible Navigator. I’m going to test against a minimal RHEL 9 installation that I have running. On the ansible-navigator command line you can pass the --eei flag to change the EE from the default, or you can add a directive in an ansible-navigator.yml file in your Ansible project, such as the following:

ansible-navigator:
  execution-environment:
    image: localhost/custom-ee:1.1
    pull:
      policy: missing
  playbook-artifact:
    enable: false
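
With that file in the root of your project, ansible-navigator picks up the custom EE automatically, so you can run a playbook without any extra flags (web.yml here is just the sample playbook I’m testing with):

$ ansible-navigator run web.yml -m stdout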

If you’re using Ansible Automation Platform you can pull the EE from a container registry or Private Automation Hub and specify which EE to use in your Templates.

To specify the EE explicitly on the ansible-navigator command line instead, use the --eei flag:

$ ansible-navigator run web.yml -m stdout --eei localhost/custom-ee:1.1

You can also inspect the image with podman inspect, using the image ID from the podman images output.

$ podman inspect 8e53f19f86e4

Once you’ve got the EE working the way you need, you can push it to a public or private container registry for use across your environment.
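
As a rough sketch, publishing the image with Podman looks something like the following, where registry.example.com and the repository path are placeholders for your own registry:

$ podman login registry.example.com
$ podman tag localhost/custom-ee:1.1 registry.example.com/ansible/custom-ee:1.1
$ podman push registry.example.com/ansible/custom-ee:1.1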


Setting Up Oracle Linux Automation Manager

Previously I wrote about using Ansible to manage the configuration of Linux servers. I love using Ansible and use it almost every day; however, in a large enterprise environment with multiple users and a lot of Ansible roles and playbooks, Ansible on its own can become difficult to maintain.

In this post I’m going to run through configuring Oracle Linux Automation Manager. Oracle’s Automation Manager is essentially a rebranded fork of Ansible Automation Platform and provides a web user interface to easily manage your Ansible deployments and inventory.

I’m demonstrating OLAM instead of Red Hat’s Ansible Automation Platform or upstream AWX because I’ve had recent experience deploying Oracle Linux Automation Manager in an enterprise environment. The most recent version of OLAM as of this writing is version 2, which is based on AWX version 19. The newer versions of AAP that Red Hat provides and the community AWX releases are both installed on Kubernetes or OpenShift, which I don’t want to worry about for the purposes of this article. OLAMv2 is installable from RPM packages with DNF, however it still uses the newer Ansible Automation Platform architecture. I really want to dig into the underlying components such as Receptor and the Execution Environments, and I feel like this is the least complex path for my purposes.

This will also give you a good platform to get familiar with AAP without the complexity of setting up Kubernetes or managing containers. As much as I love Kubernetes, Containers and OpenShift, I think it’s important to remember that underneath container platforms is still Linux, and knowing how to work with Linux is an important skill.

It’s also a great platform to get familiar with: OLAM, and AWX in general, give you a lot of flexibility to expand your Ansible deployments.

Oracle provides access to Automation Manager directly in their Yum repositories for Oracle Linux 8, which makes installation really simple, particularly if you already run Oracle Linux or have a non-RHEL environment.

In this post I’ll install OL Automation Manager onto an Oracle Linux 8 virtual machine running in Proxmox. I won’t detail getting Oracle Linux installed as I’ve already done a post about RHEL and CentOS, and the installation steps are the same. I’ll install OLAM onto a single virtual machine rather than a cluster as it’s just for my own testing environment; in a production environment you should use multiple machines.

Once Oracle Linux has been set up, you can start the installation of Oracle Linux Automation Manager. First we have to enable the Automation Manager 2 repository.

$ sudo dnf install oraclelinux-automation-manager-release-el8

Next we need to set up the PostgreSQL database. I’m going to use PostgreSQL 13.

$ sudo dnf module reset postgresql
$ sudo dnf module enable postgresql:13
$ sudo dnf install postgresql-server
$ sudo postgresql-setup --initdb
$ sudo sed -i "s/#password_encryption.*/password_encryption = scram-sha-256/"  /var/lib/pgsql/data/postgresql.conf
$ sudo systemctl enable --now postgresql

Next, set up the awx user in PostgreSQL.

$ sudo su - postgres -c "createuser -S -P awx"

Enter a password when prompted, then create the awx database.

$ sudo su - postgres -c "createdb -O awx awx"

Open the file /var/lib/pgsql/data/pg_hba.conf and add the following line:

host  all  all 0.0.0.0/0 scram-sha-256

In the file /var/lib/pgsql/data/postgresql.conf, uncomment the "listen_addresses = 'localhost'" line.
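
PostgreSQL was already started earlier, so restart it to pick up the changes to pg_hba.conf and postgresql.conf:

$ sudo systemctl restart postgresql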

Now that the database is ready, we can install Automation Manager using DNF.

$ sudo dnf install ol-automation-manager

That should only take a moment. Next you’ll need to edit the file /etc/redis.conf and add the following two lines at the bottom of the file.

unixsocket /var/run/redis/redis.sock 
unixsocketperm 775

Next, edit the file /etc/tower/settings.py. If you’re installing in a cluster configuration you’ll need to make a couple of extra changes, but for this single-host installation the only change we need to make is to the database configuration settings. Add the password you created earlier for the awx PostgreSQL user and set the host to ‘localhost’.
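
For illustration only, the relevant part of the DATABASES setting ends up looking roughly like this; keep whatever other keys the shipped file already contains and just update the credentials and host:

DATABASES = {
    'default': {
        # ... keep the existing ENGINE and other keys as shipped ...
        'NAME': 'awx',
        'USER': 'awx',
        'PASSWORD': 'your-awx-db-password',  # password set when creating the awx user
        'HOST': 'localhost',
        'PORT': '5432',
    }
}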

Now we’ll change users to the awx user to run the next part of the installation.

$ sudo su -l awx -s /bin/bash
$ podman system migrate
$ podman pull container-registry.oracle.com/oracle_linux_automation_manager/olam-ee:latest
$ awx-manage migrate
$ awx-manage createsuperuser --username admin --email admin@example.com

After running the createsuperuser command you’ll be asked to create a password. This is the username and password you’ll use to log in to the web UI, so don’t forget it.

Next generate an SSL certificate so you can access Automation Manager over HTTPS.

$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/tower/tower.key -out /etc/tower/tower.crt

And replace the default /etc/nginx/nginx.conf configuration file with this one.

Next we can start to provision the installation. Log back in as the awx user.

$ sudo su -l awx -s /bin/bash
$ awx-manage provision_instance --hostname=awx.local --node_type=hybrid
$ awx-manage register_default_execution_environments
$ awx-manage register_queue --queuename=default --hostnames=awx.local
$ awx-manage register_queue --queuename=controlplane --hostnames=awx.local

Change the hostname(s) to whatever suits your environment. I used awx.local for the purposes of this demonstration. You can now type exit to leave the awx user session and go back to the rest of the setup as your normal user.

Replace the /etc/receptor/receptor.conf file with this one.
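
For reference, a single-node hybrid receptor.conf is typically along these lines. Treat this as an illustrative sketch only and use the file from the Oracle documentation, as the exact entries and the ansible-runner path depend on the OLAM release:

---
- node:
    id: awx.local

- log-level: info

- control-service:
    service: control
    filename: /var/run/receptor/receptor.sock

- work-command:
    worktype: local
    # assumed path to the ansible-runner bundled with OLAM
    command: /var/lib/ol-automation-manager/venv/awx/bin/ansible-runner
    params: worker
    allowruntimeparams: true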

You can now start OL Automation Manager.

$ sudo systemctl enable --now ol-automation-manager.service

Now we can preload some data.

$ sudo su -l awx -s /bin/bash
$ awx-manage create_preload_data

Finally, we’ll open up the firewall to allow access.

$ sudo firewall-cmd --add-service=https --permanent
$ sudo firewall-cmd --add-service=http --permanent
$ sudo firewall-cmd --reload

You should now be able to point your browser at the server over HTTPS and access the web UI.

Login using the admin credentials you created during the setup process.