Categories
System Administration

Setting Up Oracle Linux Automation Manager

Previously I wrote about using Ansible to manage the configuration of Linux servers. I love using Ansible and use it almost every day; however, in a large Enterprise environment with multiple users and a lot of Ansible roles and playbooks, Ansible on its own can become difficult to maintain.

In this post I’m going to run through configuring Oracle Linux Automation Manager. Oracle’s Automation Manager is essentially a rebranded fork of Ansible Automation Platform and provides a web user interface to easily manage your Ansible deployments and inventory.

I’m demonstrating the use of OLAM instead of Red Hat’s Ansible Automation Platform or upstream AWX because I’ve had recent experience deploying Oracle Linux Automation Manager in an Enterprise environment. The most recent version of OLAM as of this writing is version 2, which is based on AWX version 19. The newer versions of AAP that Red Hat provides, and the community AWX releases, are both installed on Kubernetes or OpenShift, which I don’t want to worry about for the purposes of this article. OLAMv2 is installable from RPM packages with DNF, yet it still uses the newer Ansible Automation Platform architecture. I really want to dig into the underlying components such as Receptor and the Execution Environments, and I feel this is the least complex path for my purposes.

This will also give you a good platform to get familiar with AAP without the complexity of setting up Kubernetes or managing containers. As much as I love Kubernetes, Containers and OpenShift, I think it’s important to remember that underneath container platforms is still Linux, and knowing how to work with Linux is an important skill.

This is a really great platform to get familiar with, and OLAM, or AWX in general, gives you a lot of flexibility to expand your Ansible deployments.

Oracle provides access to Automation Manager directly in its Yum repositories for Oracle Linux 8, which makes installation really simple, particularly if you already run Oracle Linux or have a non-RHEL environment.

In this post I’ll install OL Automation Manager onto an Oracle Linux 8 virtual machine running in Proxmox. I won’t detail getting Oracle Linux installed as I’ve already done a post about RHEL and CentOS, and the installation steps are the same. I’ll install OLAM onto a single virtual machine rather than a cluster as it’s just for my own testing environment, however in a production environment you should use multiple machines.

Once Oracle Linux has been set up you can start the installation of Oracle Linux Automation Manager. First we have to enable the Automation Manager 2 repository.

$ sudo dnf install oraclelinux-automation-manager-release-el8

Next we need to set up the PostgreSQL database. I’m going to use PostgreSQL 13.

$ sudo dnf module reset postgresql
$ sudo dnf module enable postgresql:13
$ sudo dnf install postgresql-server
$ sudo postgresql-setup --initdb
$ sudo sed -i "s/#password_encryption.*/password_encryption = scram-sha-256/"  /var/lib/pgsql/data/postgresql.conf
$ sudo systemctl enable --now postgresql

Next, set up the AWX user in postgresql.

$ sudo su - postgres -c "createuser -S -P awx"

Enter the password when prompted, then create the awx database.

$ sudo su - postgres -c "createdb -O awx awx"

Open the file /var/lib/pgsql/data/pg_hba.conf and add the following line:

host  all  all 0.0.0.0/0 scram-sha-256

In the file /var/lib/pgsql/data/postgresql.conf, uncomment the listen_addresses = 'localhost' line.

Now that the database is ready, we can install Automation Manager using DNF.

$ sudo dnf install ol-automation-manager

That should only take a moment. Next you’ll need to edit the file /etc/redis.conf and add the following two lines at the bottom of the file.

unixsocket /var/run/redis/redis.sock 
unixsocketperm 775

Next, edit the file /etc/tower/settings.py. If you’re installing in a cluster configuration you’ll need to make a couple of extra changes, but for this single-host installation the only change we need to make is the database configuration settings. Add the password you chose earlier when creating the awx user in PostgreSQL and set the host to ‘localhost’.
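As a rough guide, the database section of /etc/tower/settings.py ends up looking something like the sketch below. The password value is a placeholder for whatever you set for the awx user, and you should keep the ENGINE entry from the shipped file as-is rather than copying this verbatim:

```python
# Database settings in /etc/tower/settings.py (sketch; compare with the
# shipped file rather than replacing it wholesale)
DATABASES = {
    'default': {
        # keep the ENGINE entry from the shipped file unchanged
        'NAME': 'awx',                       # database created with createdb
        'USER': 'awx',                       # role created with createuser
        'PASSWORD': 'your-awx-db-password',  # placeholder: the password you chose
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
```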

Now we’ll change users to the awx user to run the next part of the installation.

$ sudo su -l awx -s /bin/bash
$ podman system migrate
$ podman pull container-registry.oracle.com/oracle_linux_automation_manager/olam-ee:latest
$ awx-manage migrate
$ awx-manage createsuperuser --username admin --email [email protected]

After running the createsuperuser command you’ll be asked to create a password. This is the username and password for logging in to the web UI, so don’t forget it.

Next generate an SSL certificate so you can access Automation Manager over HTTPS.

$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/tower/tower.key -out /etc/tower/tower.crt

And replace the default /etc/nginx/nginx.conf configuration with this one.

Next we can start to provision the installation. Log back in as the awx user.

$ sudo su -l awx -s /bin/bash
$ awx-manage provision_instance --hostname=awx.local --node_type=hybrid
$ awx-manage register_default_execution_environments
$ awx-manage register_queue --queuename=default --hostnames=awx.local
$ awx-manage register_queue --queuename=controlplane --hostnames=awx.local

Change the hostname(s) to whatever suits your environment. I used awx.local for the purposes of this demonstration. You can now type exit to leave the awx user session and go back to the rest of the setup as your normal user.

Replace the /etc/receptor/receptor.conf file with this one.

You can now start OL Automation Manager.

$ sudo systemctl enable --now ol-automation-manager.service

Now we can preload some data.

$ sudo su -l awx -s /bin/bash
$ awx-manage create_preload_data

Finally, we’ll open up the firewall to allow access.

$ sudo firewall-cmd --add-service=https --permanent
$ sudo firewall-cmd --add-service=http --permanent
$ sudo firewall-cmd --reload

You should now be able to open a browser, navigate to the server’s hostname over HTTPS and access the web UI.

Login using the admin credentials you created during the setup process.


Managing Linux servers with Ansible

Ansible is an open source configuration management and automation tool sponsored by Red Hat. Ansible lets you define the state your servers should be in using YAML, then proceeds to create that state over SSH. For example, the state might be that the Apache web server should be present and enabled.

The great thing about Ansible is that if the server is already in the state you’ve defined then nothing happens. Ansible won’t try to install or change anything.

In this post I’m going to show how to set up Ansible to manage Linux servers.

I have Fedora Linux installed on the control node, and I’ll be configuring Enterprise Linux machines to run Ansible against. Ensure you have at least two Linux machines set up and reachable from each other over the network.

I also recommend generating SSH keys for passwordless remote access. This isn’t strictly necessary for using Ansible, but it improves the security of your environment and makes automation smoother. On your control node, generate an SSH key with ssh-keygen.

$ ssh-keygen -t ecdsa -b 521 -f ~/.ssh/id_local

You can hit enter for each of the prompts when generating the key. For additional security in a production environment I’d recommend entering a Passphrase.

Once the key is ready, copy it to the remote host. In my case my remote host is called rhel.local, which I’ve configured in /etc/hosts.

$ ssh-copy-id [email protected]

Substitute the username for your own username on the remote server, unless your name is also Dave, in which case you can leave it.

The user account on the remote servers needs to have sudo privileges to be able to perform tasks with Ansible. On my remote host the user account that I’m using has been added to the wheel group and I’ve allowed the wheel group to run sudo commands without a password by editing the sudoers file with visudo.

%wheel  ALL=(ALL)       NOPASSWD: ALL

If you don’t want to enable passwordless sudo you’ll need to instruct Ansible to pass your sudo password to the host when running playbooks.
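If you take that route, both ansible and ansible-playbook accept the -K flag (short for --ask-become-pass), which prompts for the sudo password at run time. A quick sketch, assuming a hypothetical playbook called site.yml:

```shell
# Prompt for the privilege-escalation (sudo) password at run time
# instead of relying on passwordless sudo on the remote hosts
ansible all -m ping -K
ansible-playbook site.yml -K   # site.yml is a hypothetical playbook name
```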

The Linux installation that I’m using for the control node doesn’t have Ansible installed by default, so we’ll get that set up first. Note, we don’t need to install anything on the remote hosts which is one of the benefits of Ansible.

On your control node, which in my case is my Fedora Linux desktop machine that I use for my development work, install the following packages.

$ sudo dnf install python3-pip
$ python3 -m pip install ansible-core ansible-navigator ansible-builder

I’m installing Ansible with Python’s pip package manager to get access to the latest version of Ansible as well as the ansible-navigator and ansible-builder tools.

Once Ansible is installed, we can test that it’s working correctly by trying to communicate with the remote host. Ansible has various methods of interacting with hosts: one way is through ad-hoc commands from the command line, another is by running playbooks. Ad-hoc commands are fine for quick tasks or for testing something, however playbooks are usually the recommended approach for most cases.

For the sake of confirming that Ansible is working and we can reach the remote host, let’s run an ad-hoc command using the ping module.

First, create a file to store your host inventory. An inventory file is basically just a list of hosts that Ansible can work with. Hosts can be listed using either IP addresses or hostnames and can also be grouped together for different purposes. Ignoring that for a second, we’ll start with a simple hosts file with the one host that we’re configuring.

In your user’s home directory, create an ansible project directory and an empty file called hosts.

$ mkdir ansible
$ cd ansible
$ touch hosts

Open the hosts file and insert the host you wish to configure. In my case it’s rhel.local. Again, you can use hostnames if your /etc/hosts file is set up for name resolution.

[www]
rhel.local

I’ll also create an ansible.cfg file.

[defaults]
remote_user = dave
inventory = $HOME/ansible/hosts
interpreter_python = auto_silent

[privilege_escalation]
become=true
become_method=sudo

Now we can run our first ad-hoc command.

$ ansible -m ping all

The -m ping option loads the ping module. Despite the name, this isn’t an ICMP ping: the module verifies that Ansible can connect to the remote host over SSH and find a usable Python interpreter, and prints the result to the screen.

Now that I can communicate with the server using Ansible, I’m going to deploy the Apache web server.

As mentioned, the previous ping command is known as an ad-hoc Ansible command, and is essentially for running one-off commands across your fleet. Normally, though, you’d define your tasks in a playbook and instruct Ansible to run those, which ensures the state is always how you expect it to be and also lets you store your tasks in version control to track changes.

A playbook is a text file with the .yml or .yaml extension, structured in a specific way. I won’t go into what YAML is here because it’s pretty easy to understand just by looking at the code.

To install the Apache web server, create a new file in your Ansible project directory called web-server.yml and add the following:

---
- name: Ensure Apache is installed
  ansible.builtin.dnf: 
    name: httpd
    state: present

This task instructs Ansible to use the dnf module, the package manager module for Red Hat based Linux systems, to install the httpd package. Setting state to present ensures that if the server doesn’t have Apache installed then dnf will install it.

Once it’s installed we want to make sure it’s running. So we define an additional task beneath the previous one in the same yaml file.

- name: Ensure Apache is enabled and running
  ansible.builtin.service:
    name: httpd
    state: started
    enabled: true

This task tells systemd to ensure the httpd service is started and enabled at boot.

How you structure your Ansible project will depend on the scope of tasks you need completed. For small tasks like this I can use a simple playbook, but for larger installations with many tasks you might consider using ansible roles and collections which are a bit beyond what I want to discuss here, so I’ll stick with a simple playbook. The complete playbook might look like this:

---
- name: Install and configure Web server
  hosts: www
  become: true

  tasks:
    - name: Ensure Apache is installed
      ansible.builtin.dnf:
        name: httpd
        state: present

    - name: Ensure Apache is enabled and running
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true

YAML files start with three dashes, and on the next line we define which hosts the following tasks should run on. In this example the hosts group is www, which we created in the inventory file. Setting become: true tells Ansible to elevate privileges, like typing sudo before a command.

Then run the playbook with ansible-navigator using the following command:

$ ansible-navigator run web-server.yml -m stdout 

We use the ansible-navigator command instead of the ansible command we ran previously, and instead of passing -m to load a module we pass the name of the playbook, in this case web-server.yml. We can use ansible-navigator when building our playbooks and automation code. Note that here -m stdout is short for --mode stdout, which displays the output in the terminal rather than opening the navigator’s interactive user interface.

Ansible will read the playbook and reach out to each host in the inventory then proceed to do whatever tasks are necessary to reach the desired state, in our case that the Apache web server is installed and running.

The yellow “changed” text in the output tells you that before running the playbook the state didn’t match what you defined, and that Ansible “changed” the state of your host. This is good; it means it worked. If you run the same playbook again you should see the output change to a green “ok”, indicating that the server is already in the state described and nothing was changed.

To make httpd accessible remotely, we need to tell the firewall on the remote host to allow traffic to port 80. Add an extra task below the previous two.

---
- name: Install and configure Web server
  hosts: www
  become: true

  tasks:
    - name: Ensure Apache is installed
      ansible.builtin.dnf:
        name: httpd
        state: present

    - name: Ensure Apache is enabled and running
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true

    - name: Allow traffic to port 80
      ansible.builtin.firewalld:
        name: http
        permanent: true
        immediate: true
        state: enabled

Re-run the playbook and see what changed.

We should now be able to open Firefox and navigate to the server’s IP address to see the default Apache web page.
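If you’d rather check from the command line, something like this should confirm httpd is answering (assuming the rhel.local hostname from my inventory; the stock welcome page on Enterprise Linux typically answers the root URL with a 403, which still proves the server is up):

```shell
# Fetch just the response headers from the new web server
curl -I http://rhel.local/
```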


Joining Enterprise Linux to Active Directory

In this post I’ll outline the steps to join an Enterprise Linux host to Microsoft Active Directory for user account management.

Why would you want to do this?

In an Enterprise environment it’s common to have a mix of Windows and Unix/Linux machines that offer different services across the organisation. To resolve the issues of user account management across a network of systems you’ll typically find a centralised directory service such as Microsoft Active Directory. Active Directory manages the creation and administration of user accounts for any system joined to the domain, so an administrator can create a single user account once and deploy it anywhere in the Enterprise.

If all the systems across the network are Windows machines it’s simple to join them to the domain. However, when you introduce Linux hosts into the mix things can get a little complicated.

Fortunately there are a number of high-quality tools that make this process relatively simple.

For this post I’m not going to go into the details of setting up a Windows domain controller. I’ll be using an already configured DC running on Windows Server 2016 in VirtualBox. I’ve also set up a user account to test.

I’ve also got an Oracle Linux 7 host set up on the same host-only network. The steps here should work for any RHEL based Linux distribution, such as CentOS or Red Hat Enterprise Linux. The process for Debian based distributions should be similar however the package names might be slightly different.

Install the required packages.

$ sudo yum install adcli sssd authconfig realmd krb5-workstation oddjob oddjob-mkhomedir samba-common-tools

Once those are installed we can use realm to join the Windows domain. For simplicity’s sake I’ve used my domain name davidroddick.com for the domain controller. This isn’t a publicly accessible system, and I’ve updated the /etc/hosts file to point the Linux host to the Windows machine.

$ sudo realm discover davidroddick.com

realm discover will print the domain configuration for the davidroddick.com domain.

Next we can use realm join to join the Linux host to the domain.

$ sudo realm join --verbose --user=Administrator davidroddick.com

If the domain controller is setup correctly and everything worked you should see the message at the bottom of the screen “Successfully enrolled machine in realm”.

Verify the Linux client is connected to the Windows domain.

$ realm list

Now we can configure NSS to authenticate users.

$ authconfig --enablesssd --enablesssdauth --enablemkhomedir --update

I had to manually edit the /etc/samba/smb.conf, /etc/krb5.conf and /etc/sssd/sssd.conf files to make sure all the settings were correct. I’d also recommend updating your nameserver settings in /etc/resolv.conf to point to the Windows Server.
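For comparison, a working /etc/sssd/sssd.conf on an AD-joined host looks roughly like the sketch below. realm join generates most of this for you, so treat it as a reference for checking your own file rather than something to copy verbatim (the domain names here match my test setup):

```ini
[sssd]
domains = davidroddick.com
config_file_version = 2
services = nss, pam

[domain/davidroddick.com]
ad_domain = davidroddick.com
krb5_realm = DAVIDRODDICK.COM
realmd_tags = manages-system joined-with-adcli
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = True
fallback_homedir = /home/%u@%d
access_provider = ad
```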

If everything went according to plan (which mine didn’t at first) you should be able to query Active Directory for a user and then login as that user.

$ id DAVIDRODDICK\\droddick
$ su - DAVIDRODDICK\\droddick

If you can now successfully log into the user account that was created in Active Directory from the Linux host everything should be set up correctly.


Installing Enterprise Linux

In this post I’m going to demonstrate the installation of Enterprise Linux in a Virtual lab environment.

I’ll be installing both Red Hat Enterprise Linux 9, because it’s the leading Enterprise Linux distribution, and CentOS Stream 9, because it’s the upstream community release of RHEL. The steps outlined here should be the same for all variants of Red Hat Enterprise Linux, including Oracle Linux, Rocky Linux and AlmaLinux.

I’ve chosen Enterprise Linux as it’s commonly deployed in datacentres and server infrastructure for large companies across the world. Being familiar with Red Hat technologies is super important if you want to work with Linux professionally. Even as a professional, having a home lab or test environment lets you play and experiment without messing with your production environment.

Red Hat Linux was the first Linux distribution I ever used back in 2002 with Red Hat Linux 7.3. I’ve primarily used Red Hat based Linux distributions for both home and work ever since.

First of all, download the CentOS installation ISO from the official website. I’ll be installing version 9 for x86_64. You can select any of the iso images; the DVD iso contains the entire distribution at around 10GB, but I’ll be selecting the boot iso as it’s a smaller initial download and I don’t need every package that comes on the full DVD.

We’ll also need the RHEL installation ISO from Red Hat. To get access to the Red Hat Enterprise Linux software you’ll need to register for a free developer account at https://developers.redhat.com/. The developer account allows you to run up to 16 RHEL machines for free, which is perfect for a home lab or test environment. Once you’ve registered for the developer account, head over to https://developers.redhat.com/products/rhel/download and download the ISO.

Unless you’re installing Linux on your physical machine, which is great (I personally run Fedora Linux on my main personal machine), I’m assuming you’re using a lab environment, so you should also have VirtualBox or similar virtual machine software installed. I’ll be using Proxmox, but you could just as easily use VMware, QEMU/KVM or VirtualBox. The installation steps covered below will also work if you’re installing Linux on a bare-metal PC.

I already have a Proxmox homelab set up running on two old machines, so I won’t go into setting up Proxmox here. Whatever virtualisation software you’re using, create a VM and configure it appropriately. I’m using the defaults of 2GB RAM, 32GiB storage and 1 CPU.

Once the virtual machine has been created you can click the “Start” button to boot the virtual machine and start up the installation process.

When the machine boots up you will be presented with the “Welcome” screen. Select your language if you need to change the default and click Continue.

On the Installation Summary screen you’ll likely see a bunch of red warning messages. This is fine and completely normal; it’s just the installer letting you know what still needs to be configured before continuing with the installation. The installation steps for both RHEL and CentOS are the same, except for one additional step when installing RHEL: registering the installation with your Red Hat subscription.

Putting aside the licensing requirements, registering your machines with Red Hat as a developer or home lab user gives you access to the Red Hat software repositories for installing additional applications. It also gives you access to things like Red Hat Insights, letting you explore and use the same tools and software Red Hat offers to enterprise customers, completely for free. This is awesome because if you’re working in the industry as a Linux systems administrator, having experience with Red Hat’s products beyond RHEL is very helpful.

Let’s quickly register our Red Hat system so we can continue with the installation. I registered with my developer account information but I unselected the system purpose and the insights checkboxes, we can change these options later once we start experimenting with our systems.

Next, select the Network & Host Name option. If your Host PC is connected to the Internet, which I’m assuming it is as you’re reading this, you should just be able to toggle the Ethernet button to On and the network should auto-connect.

If you want to give your machine a Hostname instead of localhost.localdomain, this is where you can do that. I’m not going to do that here as it’s easy to change later. Click Done.

Next, if Installation Source is still red you’ll have to configure it. This decides where the packages will be installed from. If you downloaded the full DVD iso you can likely just install from the local source, but if, like me, you downloaded the boot iso you may need to select a remote location to download the rest of the packages from. Both RHEL and CentOS should pick a mirror automatically once the network is connected, but if not you’ll need to search online for your closest mirror URL.

Next select Software Selection. This screen is to actually pick which packages or package groups to install. I’m going to pick “Minimal Install” for both machines, though feel free to pick an environment that suits your purposes.

Next you’ll need to configure the Installation Destination. This is to prepare the virtual hard drive (or the physical hard drive if you’re installing this on bare metal) for the file system. By default, the installation will configure Logical Volume Manager (LVM) and use the entire disk, which is normally fine unless you know you need to configure manual disk partitions. In a production environment you might want to create separate partitions for /usr, /var, /tmp and /opt depending on the purposes of the server to allow you to manage the storage appropriately.

Ensure ‘Automatic’ is selected under Storage Configuration and select Done and then Accept Changes.

Finally we’ll configure the user accounts. I’m going to leave the Root account disabled and create a non-root user with Administrative privileges.

When creating a standard user account, make sure to select “Make this user administrator”, which adds your user account to the wheel group, allowing you to perform administrative tasks without logging in directly as root. Click Done.

That should be the final configuration step you need to perform.

If there is no more red text on the Summary screen you can click “Begin Installation” to continue. Depending on whether you’re installing from the DVD iso or over the network and considering the software packages you chose to install and your Internet speed, the actual installation process might take a while.

Once the installation is complete you can reboot the machines.

Login as your non-root user.

The final thing I will do is make a note of each machine’s IP address so that I can connect to them via SSH instead of interacting with the machines in the Proxmox or virtual machine console. Use the following command from both machine consoles.

$ ip a

I then add them both to the ~/.ssh/config file on my desktop machine. The config file would look similar to this:

Host rhel
	Hostname 192.168.1.2
	User dave

Host centos
	Hostname 192.168.1.3
	User dave

Congratulations and enjoy Enterprise Linux.

Categories
Programming

Compiling the Linux Kernel

This is a short post describing how to build and install the Linux kernel from source. There’s plenty of documentation elsewhere online that goes into more detail than I will here, so this is mainly as a reference for myself.

I’m using Fedora 34, but you can follow similar steps for other distributions.

First install the necessary packages needed for building the kernel.

$ sudo dnf group install "C Development Tools and Libraries"
$ sudo dnf install gcc make git ctags ncurses-devel kernel-devel

It’s likely that the group install for the C development tools covers most of what you need, but various places around the web suggest the other packages specifically, so we might as well include them just to be sure.

During compilation my build failed a couple of times, requiring me to install openssl as well as dwarves.

$ sudo dnf install dwarves openssl openssl-devel

Download the kernel source. You can download it to wherever you want, I usually just put it in my home folder.

$ git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
$ cd linux

Once the kernel source has finished downloading and you’ve changed into the linux directory, copy your existing config file into the source tree and then make the config.

$ cp /boot/config-`uname -r`* .config
$ make oldconfig

make oldconfig will prompt you to answer questions about any configuration options that are new since the config you copied was generated. Selecting the default for most of them should be fine unless you have a reason to do otherwise.
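If you’d rather not answer each prompt individually, the kernel build system also provides the olddefconfig target, which takes your copied .config and silently sets every new option to its default:

```shell
# Accept the default for every config option that's new since the
# copied config was generated, without prompting
make olddefconfig
```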

Next we build the kernel.

$ make bzImage

Depending on how fast your computer is this might take a while.
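Build time can be reduced considerably by letting make run parallel jobs, one per CPU core:

```shell
# Run one compile job per available CPU core
make -j"$(nproc)" bzImage
```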

Once the kernel has finished compiling we need to build the modules that have been configured in the earlier steps.

$ make modules

On my machine compiling took a couple of hours, but I walked away for a bit while it was running and came back to find it finished, so it might have taken less.

Finally, if everything went according to plan you should now be able to install your fresh kernel.

$ sudo make modules_install install

As you can see by the following screenshot, I’m currently running on the 5.14.12-200 kernel build that was installed by Fedora.

Give your computer a reboot and you should boot into the new kernel.

A couple of times when I did this I tried booting into the new kernel and all I got was a black screen. I couldn’t work out what I was doing wrong, and it took a few attempts to get right. It turns out I wasn’t configuring the kernel correctly in the first few steps, so while make and make install showed no errors and the new kernel appeared in the grub boot menu, it wasn’t actually built properly.

Now that it’s working and I can boot into the new kernel, I just want a quick test to make sure things are working ok. Particularly VirtualBox, and USB.

Turning on a VM works as expected, and plugging in a USB device works as well. Wifi also works. One last test: I’ll close the laptop lid so the machine goes to sleep, then open it back up to ensure suspend and resume are OK, which they are.

If you get stuck, the documentation is hugely useful. I recommend Building a custom kernel from the Fedora project and Kernel Build on the Kernel Newbies site.