Categories
System Administration

Converting from RHEL to AlmaLinux

The best thing about open source, and one of the reasons why I love the entire Linux ecosystem, is choice. With open source software you have the ability to choose what OS or software you run, how you run it, and what you can do with it. If you don’t like the decisions that have been made, or you want to do things in a different way, you have total freedom to do something about it or find an alternative.

And I think vendor lock-in can be really bad for innovation and for freedom. One of my long-time goals has always been to build my own Linux distribution, not to compete with anyone or because I think I can do things any better, but really just for choice. To say I can, to gain the skills to be able to do it, and so that if anything should ever happen to the software I use, I can do something about it.

AlmaLinux is an entirely free, community-driven Enterprise Linux distribution, binary-compatible with Red Hat Enterprise Linux, that started life to fill a gap when CentOS Linux was discontinued. I’m personally a big fan of Red Hat Linux and the range of Red Hat-compatible products. Red Hat Linux 7.3 was the first Linux distribution I ever used and I’ve mostly run Fedora or CentOS on my personal machines ever since.

I really like the path that AlmaLinux chose to take with its distribution. I think the Linux world needs a community Enterprise OS, I think the changes to CentOS, CentOS Stream and Red Hat Enterprise Linux do make sense, and I think it’s important to try to work together cooperatively. Linux started life as a cooperative project, and open source developers and companies like Red Hat have both made Linux into what it is today.

In this article I wanted to demonstrate converting from Red Hat Enterprise Linux to AlmaLinux. In a previous post I wrote about converting from an alternative Enterprise Linux (Oracle Linux) to Red Hat, so in the spirit of sharing and freedom of choice, I thought I’d go the opposite direction this time.

Just to be clear, this is not a comment either way about Red Hat or any of the alternative Enterprise Linux distributions. I’m not trying to say one is better than the other, or that you should pick one instead of another. I actually really value having the ability to choose. Red Hat make a fantastic product; upstream Fedora Linux, built by the open source community, is a fantastic distribution and the one I run on my personal machine; and in my opinion all of the CentOS/RHEL binary-compatible forks and downstream distributions are excellent as well.

So I’m starting with a fresh Red Hat Enterprise Linux 9 server running in Proxmox. I installed this machine from an ISO I downloaded from Red Hat and it’s registered with my developer subscription, which I did during the install.

For the sake of it, I’ve also got an Apache web server running.

AlmaLinux have provided a migration script to convert from one distribution to another, as most of the Enterprise Linux distributions have, similar to the Convert2RHEL tool from Red Hat.

Download the script from GitHub, and run it.

$ curl -O https://raw.githubusercontent.com/AlmaLinux/almalinux-deploy/master/almalinux-deploy.sh

$ sudo bash almalinux-deploy.sh

Running the script shows that the migration is supported, and it starts doing its thing.

The migration took about the length of time it took me to make a coffee and some toast, though admittedly this is a small server with no real data or applications running, so your mileage may vary. Still it was quite quick and painless.

Rebooting the server shows everything seemed to work correctly.
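To double-check the result for yourself, a quick sanity check after the reboot is to read the distribution name out of /etc/os-release. This is just a sketch; the little helper function is only for illustration:

```shell
# Pull the human-readable distro name out of an os-release file.
# After the migration this should report AlmaLinux rather than Red Hat.
os_name() {
  sed -n 's/^PRETTY_NAME="\(.*\)"/\1/p' "$1"
}

os_name /etc/os-release
```

You can also confirm with rpm -q almalinux-release, which should now resolve to an AlmaLinux package.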

Categories
System Administration

Converting from Oracle Linux/Rocky/CentOS to RHEL

In this post I wanted to demonstrate converting from one of the RHEL compatible Enterprise Linux distributions, like CentOS, Rocky Linux or Oracle Linux, to Red Hat Enterprise Linux. I’ll be demonstrating from an Oracle Linux server running in Proxmox, however these steps will work regardless of the RHEL compatible distribution you are starting from as long as it’s one of the supported systems.

You must be running one of the RHEL compatible distributions already. The conversion does not work from other Linux distributions, such as Debian or Ubuntu, and it also does not work if you’re starting from CentOS Stream.

There are really two main prerequisites that you’ll need before attempting this conversion.

  1. You are already running a RHEL compatible distribution, version 7 or later. If you have an older distribution you’ll need to upgrade first.
  2. You have a valid RHEL subscription. I’ll be using the free developer subscription that entitles you to run up to 16 Red Hat Enterprise Linux installations, as well as to access other Red Hat products.

If you need to upgrade from an older version of RHEL or CentOS, you should check out the leapp upgrade tool.

DISCLAIMER: Now before we start, I am running this in my homelab environment, not a production environment. I’m doing this as a trial run before I do an actual Oracle Linux to RHEL conversion in production for myself. This is for my own experience and practice, and I advise you to practice the conversion in test environments before running it on production servers too. I’m not responsible for any damage done to your own servers. If you run into any issues and you have an active RHEL subscription you should contact Red Hat for support.

I have a running Oracle Linux 8 system that I’ll be using to convert to RHEL 8.

I’ve also set this server up to host a simple WordPress website with Apache, PHP 8, and MariaDB.

The first step in converting to RHEL is to download the convert2rhel tool. I’ve downloaded the tool from GitHub and installed it on the Oracle Linux machine. If you’re converting from CentOS and can connect your system to Red Hat Satellite or console.redhat.com, you will be able to enable the Convert2RHEL repos and manage the conversion from there.

$ wget https://github.com/oamg/convert2rhel/releases/download/v2.1.0/convert2rhel-2.1.0-1.el8.noarch.rpm

$ dnf localinstall ./convert2rhel-2.1.0-1.el8.noarch.rpm

Once it’s installed, run the convert2rhel analyze command to determine if the system can be converted.

$ convert2rhel analyze

After a minute or so, the analyze tool spat out a whole bunch of red error messages, but this is the point of analyzing first. My issues were the running firewalld service and the default Oracle Linux kernel. Oracle Linux generally installs the Unbreakable Enterprise Kernel (UEK), which isn’t supported by the conversion, so I’ll need to fix both of those before continuing.

I’ll fix the kernel issue first.

$ grubby --default-kernel

/boot/vmlinuz-5.15.0-209.161.7.2.el8uek.x86_64

$ grubby --info=ALL | grep ^kernel

kernel="/boot/vmlinuz-5.15.0-209.161.7.2.el8uek.x86_64"
kernel="/boot/vmlinuz-4.18.0-553.16.1.el8_10.x86_64"
kernel="/boot/vmlinuz-0-rescue-83229607f01f471dbd78c219e5e4fc07"

The default kernel on the system is the UEK kernel, but there’s a Red Hat kernel already installed, so I’ll set that one as default and reboot.

$ grubby --set-default /boot/vmlinuz-4.18.0-553.16.1.el8_10.x86_64

$ grubby --default-kernel

/boot/vmlinuz-4.18.0-553.16.1.el8_10.x86_64

Once the machine has rebooted, I’ll re-run the analyze command.

The kernel error message has been resolved, so now it’s just the firewalld error. There’s also an error message about the system not being connected to a Red Hat subscription, which is fine; we’ll fix that shortly.

I’m just going to run the suggested commands to resolve the firewalld error.

$ sed -i -- 's/^CleanupModulesOnExit.*/CleanupModulesOnExit=no/g' /etc/firewalld/firewalld.conf

$ firewall-cmd --reload

In the convert2rhel man page there are a few options for authenticating to the subscription manager, including passing your username and password or activation key at the command line. But the option I’m going to use is the -c option for specifying a config file. The convert2rhel tool has installed a config file at /etc/convert2rhel.ini which you can override.

$ cp /etc/convert2rhel.ini ~

I’ve copied the ini file to my root user’s home directory and updated it with my RHSM subscription details:

[subscription_manager]
username       = <insert_username>
password       = <insert_password>
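Since the ini file now holds my RHSM password in plain text, it’s worth tightening the permissions before running the tool (I believe convert2rhel also warns if the file is too permissive):

```shell
# Restrict the config file holding RHSM credentials to the owner only.
# touch is only here so the snippet works standalone; skip it if you've
# already copied /etc/convert2rhel.ini into place.
touch ~/convert2rhel.ini
chmod 600 ~/convert2rhel.ini
stat -c '%a' ~/convert2rhel.ini
```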

I’ll re-run the analyze tool, hopefully one last time, to check the subscription and then we should be good to go.

$ convert2rhel analyze -c /root/convert2rhel.ini

Everything looked good. There was a warning about third-party packages because it detected the convert2rhel tool I installed, which didn’t come from a supported repository; I’m going to ignore that one. Otherwise, let’s do this.

$ convert2rhel -c /root/convert2rhel.ini

The conversion took a few minutes, and by the end it had completed successfully, so I’ll reboot the machine.

The system rebooted into Red Hat Enterprise Linux and everything looks great.

I checked the system was registered correctly and changed the hostname because I previously had it set to oracle-linux.localnet.com.

$ subscription-manager status
+-------------------------------------------+
   System Status Details
+-------------------------------------------+
Overall Status: Disabled
Content Access Mode is set to Simple Content Access. This host has access to content, regardless of subscription status.

System Purpose Status: Disabled

This is a test machine, so the subscription status is fine. I then installed the insights-client and registered the system.

$ dnf install insights-client
$ insights-client --register

Lastly, I’ll check that the WordPress installation is still working.

I noticed the hostname of the httpd configuration was still the previous oracle-linux hostname, so I’ll change that.
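The change itself is a one-line substitution. The config path and the replacement hostname below are placeholders for my setup, and I’m demonstrating against a scratch file; on a real server, point the sed at /etc/httpd/conf/httpd.conf (or your vhost file) instead:

```shell
# Demonstrate the ServerName substitution on a scratch copy first.
conf=$(mktemp)
echo 'ServerName oracle-linux.localnet.com' > "$conf"
sed -i 's/^ServerName .*/ServerName rhel-server.localnet.com/' "$conf"
cat "$conf"
```

On the real box that’s followed by an apachectl configtest and a systemctl reload httpd.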

Everything looks great. The conversion from Oracle Linux to Red Hat Enterprise Linux was successful.

Categories
System Administration

Exploring OpenShift

OpenShift is Red Hat’s container orchestration platform built on top of Kubernetes. I love working with containers and Kubernetes, and as I’m also a big fan of Red Hat technologies I wanted to become more familiar with working with OpenShift.

I’ve also been studying Red Hat’s OpenShift training courses, including OpenShift Developer and OpenShift Administrator, so it makes sense to have an environment to work in and deploy some applications.

I’m going to install OpenShift Local on my Fedora Linux development machine.

First you’ll need to download the crc tool from Red Hat. You’ll need a developer account and to log in to https://console.redhat.com/openshift/create/local to access the software.

Click the “Download OpenShift Local” button.

$ cd ~/Downloads
$ tar xvf crc-linux-amd64.tar.xz

Once it’s downloaded and you’ve extracted the tar archive, copy the crc tool to your user’s bin directory.

$ mkdir -p ~/bin
$ cp ~/Downloads/crc-linux-*-amd64/crc ~/bin

If your user’s bin directory isn’t in your $PATH, you’ll need to add it.

$ export PATH=$PATH:$HOME/bin
$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc

Next, initialise OpenShift with the crc setup command.

$ crc setup

This will take a while as it needs to download about 4GB for the OpenShift environment.

Once it’s downloaded, start the local cluster.

$ crc start

The crc tool will ask for the pull secret as it’s required to pull the OpenShift images from Red Hat. You can get the secret from console.redhat.com.

Copy the pull secret and paste it into your terminal when asked.

If everything worked correctly you should now have a running OpenShift instance on your machine.

The crc tool will give you a URL and login information once it’s ready. Open up your browser and log in to the dashboard.

You can change between the Administrator and the Developer perspective by clicking the dropdown under “Administrator”. This doesn’t change your permissions, but it changes the view of the dashboard to make it more focused on either administrative tasks or developer tasks.

Containers and Kubernetes are an important technology for modern application deployment and OpenShift is a really powerful addition to Kubernetes. I’m really enjoying getting more familiar with OpenShift.

Categories
System Administration

Building a Custom Ansible Execution Environment

Recently I’ve been working on an Ansible upgrade project that included building out an Ansible Automation Platform installation and upgrading legacy Ansible code to modern standards. The Ansible code that we were working with had been written mostly targeting Enterprise Linux versions 6 and 7 and was using pre-2.9 Ansible coding standards.

The newer versions of Ansible and Ansible Automation Platform utilise Execution Environments to run the Ansible engine against a host. An Execution Environment is a container image with the Ansible dependencies, Python libraries and Ansible Collections baked in.

On top of the legacy Ansible code that I was working with, the codebase does a lot of “magic” configuration for setting things up across the environment, so I had to make sure that everything worked like it did previously. I tested a few of the off-the-shelf execution environments, none of which worked for what we needed.

In this post I wanted to detail a quick tutorial on building a custom execution environment for running your Ansible code.

I’m using Fedora Linux 39 to set up a development environment, but most Linux distributions should follow similar steps.

From the command line, install the required dependencies. As execution environments are containers, we need a container runtime and for that we’ll use Podman. We also need some build tools.

$ sudo dnf install podman python3-pip

Now to install the Ansible dependencies.

$ python3 -m pip install ansible-navigator ansible-builder

Ansible Navigator is the new interface for running Ansible and is great for testing out different execution environments and your Ansible code as you’re developing. I briefly demonstrated using Ansible Navigator in my article about using Ansible to configure Linux servers. Ansible Builder provides the tools to create the container images.

If you’ve ever built Docker containers before, the steps for EEs are very similar, just with the Ansible Builder wrapper. Create a folder to store your files.

$ mkdir custom-ee && cd custom-ee

The main file we need to create is the execution-environment.yml file, which Ansible builder uses to build the image.

---
version: 3

images:
  base_image:
    name: quay.io/centos/centos:stream9

dependencies:
  python_interpreter:
    package_system: python3.11
    python_path: /usr/bin/python3.11
  ansible_core:
    package_pip: ansible-core>=2.15
  ansible_runner:
    package_pip: ansible-runner

  galaxy: requirements.yml
  system: bindep.txt
  python: |
    netaddr
    receptorctl

additional_build_steps:
  append_base:
    - RUN $PYCMD -m pip install -U pip
  append_final:
    - COPY --from=quay.io/ansible/receptor:devel /usr/bin/receptor /usr/bin/receptor
    - RUN mkdir -p /var/run/receptor
    - RUN git lfs install --system
    - RUN alternatives --install /usr/bin/python python /usr/bin/python3.11 311

The main parts of the file are fairly self-explanatory, but from the top:

  • We’re using version 3 of the ansible builder spec.
  • The base container image we’re building from is CentOS stream 9 pulled from Quay.io.
  • We want to use Python 3.11 inside the container.
  • We want an Ansible core version higher than 2.15.

In the dependencies section, we can specify additional software our image requires. The galaxy entry lists Ansible collections from the Galaxy repository, system lists the software installed with DNF on a Linux system, and python lists the Python dependencies we need, since Ansible is written in Python and requires certain libraries to be available depending on your requirements.

The Galaxy collections are being defined in an external file called requirements.yml which is in the working directory with the execution-environment.yml file. It’s simply a YAML file with the following entries:

---
collections:
  - name: ansible.posix
  - name: ansible.utils
  - name: ansible.netcommon
  - name: community.general

My project requires the ansible.posix, ansible.utils and ansible.netcommon collections, and the community.general collection. Previously, all of these collections were part of the Ansible codebase and were installed along with Ansible; however, the Ansible project has since split them out into collections, making Ansible core smaller and more modular. You might not need these exact collections, or you might require different ones depending on your environment, so check out the Ansible documentation.

Next is the bindep.txt file for the system binary dependencies. These are installed in our image, which is CentOS, using DNF.

epel-release [platform:rpm]
python3.11-devel [platform:rpm]
python3-libselinux [platform:rpm]
python3-libsemanage [platform:rpm]
python3-policycoreutils [platform:rpm]
sshpass [platform:rpm]
rsync [platform:rpm]
git-core [platform:rpm]
git-lfs [platform:rpm]

Again, you might require different dependencies, so check the documentation for the Ansible modules you’re using.

Under the python section, I’ve defined the Python dependencies directly rather than using a separate file. If you need a separate file, it’s called requirements.txt.

    netaddr
    receptorctl

Netaddr is a Python library for working with IP addresses, which the Ansible codebase I was working with needed, and receptorctl is a Python library for working with Receptor, a network service mesh implementation that Ansible uses to distribute work across execution nodes.

With all of that defined, we can build the image.

$ ansible-builder build --tag=custom-ee:1.1

The custom-ee tag is the name we’ll use to reference the image from Ansible. The ansible-builder command runs Podman to build the container image. The build should take a few minutes, and if everything went according to plan, you should see a success message.

Because the images are just standard Podman images, you can run the podman images command to see it. You should see the output display ‘localhost/custom-ee’ or whatever you tagged your image with.

$ podman images

If the build was successful and the image is available, you can test the image with Ansible Navigator. I’m going to test with a minimal RHEL 9 installation that I have running. In the ansible-navigator command, you can specify the --eei flag to change the EE from the default, or you can add a directive in an ansible-navigator.yml file in your Ansible project, such as the following:

ansible-navigator:
  execution-environment:
    image: localhost/custom-ee:1.1
    pull:
      policy: missing
  playbook-artifact:
    enable: false

If you’re using Ansible Automation Platform you can pull the EE from a container registry or Private Automation Hub and specify which EE to use in your Templates.

$ ansible-navigator run web.yml -m stdout --eei localhost/custom-ee:1.1

You can also inspect the image with podman inspect, using the image hash from the podman images command.

$ podman inspect 8e53f19f86e4

Once you’ve got the EE working how you need it, you can push it to either a public or private container registry for use in your environment.

Categories
Cloud Computing

Automating Server Deployments in AWS with Terraform

Previously I discussed deploying Enterprise Linux in AWS, which I demonstrated using the AWS console. This is a common way to deploy servers to the cloud; however, doing server deployments manually can leave you stuck with static images that are difficult to replicate when your infrastructure grows.

One of the benefits of Cloud Computing is that the infrastructure is programmable, meaning we can write code that can automate tasks for us. You can spend the time defining all of the settings and configurations once and then anytime you need to deploy your server again, whether it’s adding more servers to a cluster for high-availability, or re-deploying a server that’s died, you don’t have to reinvent the wheel and risk making costly mistakes. This also makes managing large-scale infrastructure much easier when provisioning can be scripted and stored in version control.

Terraform is a tool that’s used to automate infrastructure provisioning and can be used with AWS to make deploying servers fast, easy and reproducible.

Terraform lets you use a configuration language to describe your infrastructure and then goes out to your cloud platform and builds your environment. Similar to Ansible, Terraform allows you to describe the state you want your infrastructure to be in, and then makes sure that state is achieved.

You’ll need a few prerequisites to get the most out of this tutorial. Specifically, I’ll be using Fedora Linux running on my desktop, where I’ll write the Terraform configurations, and I’ll be using the AWS account that was set up previously.

Depending on your environment, you might need to follow different instructions to get Terraform installed. Most versions of Linux will be similar. In Fedora Linux, run the following commands:

$ sudo dnf install -y dnf-plugins-core
$ sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
$ sudo dnf -y install terraform

To be able to speak to the AWS infrastructure APIs, you’ll need to install the awscli tool. Download the awscli tool and install it with the following commands:

$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscli.zip"
$ unzip awscli.zip
$ sudo ./aws/install

Next you’ll need to create some AWS credentials to allow Terraform to authenticate to AWS; these need to be separate credentials from the ones you log in with. When you create an AWS account, the “main” account is known as the root account. This is the all-powerful administrator account and can do and access anything in your AWS console. When you set up your infrastructure, particularly when using automation tools like Terraform, you should create a non-root user account so you don’t inadvertently mess anything up. Head over to AWS IAM and create a new user.

AWS IAM and permissions settings are far beyond the scope of this post; however, for the purposes of this demonstration, ensure your new user has a policy that allows access to EC2, and set up the access keys that the awscli tool will use to authenticate.

From your local machine type:

$ aws configure

And enter the access key and secret when prompted. You’ll also be asked to set your default region. In AWS, the region is the geographical location you want your services to run, as I’m in Sydney I’ll set my region appropriately.

We should be ready to start using Terraform now. I’ll create a folder in my local user’s home directory called ‘terraform’.
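Nothing fancy needed here:

```shell
# A dedicated directory keeps the state files Terraform generates contained.
mkdir -p ~/terraform
cd ~/terraform
```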

Inside the terraform folder, create a file called ‘main.tf’ and enter the following:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" {
  region  = "ap-southeast-2"
}

resource "aws_instance" "rhel_server" {
  ami           = "ami-086918d8178bfe266"
  instance_type = "t2.micro"

  tags = {
    Name = "RHEL Test Server"
  }
}

Terraform uses a configuration language to define the desired infrastructure configuration. The important parts to take note of are the “provider” where you’re telling Terraform to use the AWS plugin, and the “resource” section where you define the instance you’re creating.

Here I’ve set the region to ap-southeast-2, which is the AWS region for Sydney, and in the resource section I’ve specified a t2.micro instance type, the same as we used previously. I’ve named the Terraform resource “rhel_server”, tagged the instance with the name “RHEL Test Server”, and provided an AMI.

The AMI is the unique identifier that AWS uses to determine the OS image you want. Each image in the AMI marketplace has a different AMI code, which you can find from the console in the same location where you select the OS you want to use.

Once you’ve created the main.tf file you’ll need to initialise the terraform directory by running ‘terraform init’. This command reads the configuration file and downloads the required provider plugins.

Next you should run ‘terraform fmt’ and ‘terraform validate’ to properly format and validate your configuration.

Type ‘terraform apply’ to run your configuration. Terraform will evaluate the current state of your AWS infrastructure and determine what needs to be done to match your configuration. After a few moments Terraform will show you its plan and ask you to confirm. Type ‘yes’ if the output looks good.

If you see the green output at the bottom of the screen saying ‘Apply complete’, you’ve successfully deployed an image to AWS using Terraform. Typing ‘terraform show’ will show you the current image configuration in AWS in JSON format.

And you can also check the AWS console to confirm.

Once you’re finished with the instance, type ‘terraform destroy’ to clean up and terminate the running instance. Terminating instances when you’re finished with them is the best way to keep your AWS costs low so you don’t get billed thousands of dollars at the end of the month.

Using Terraform to deploy infrastructure to the cloud is really powerful and flexible, allowing you to define exactly what you want running without having to manually deploy resources.

Categories
Cloud Computing

Deploying Enterprise Linux in AWS

In a previous post I discussed installing Enterprise Linux in a virtual machine, this time I wanted to write about deploying a server to the cloud.

Cloud Computing platforms like Amazon’s AWS allow you to build and run all kinds of infrastructure and services on-demand without having to purchase and maintain expensive physical computing hardware. You can deploy a server in minutes and have the capability to scale your workload as much as you need. I’ve been running production servers in AWS for a few years now and of all the cloud platforms, it’s the one I’m most familiar with.

I assume you’ve registered for an AWS account already, if not, head over to https://aws.amazon.com and set one up.

AWS is huge. There are services and infrastructure available for pretty much anything you can imagine, from basic Linux and Windows servers to Machine Learning and AI and everything in between. It can be quite overwhelming the first time you log into the AWS console; however, for this post we can safely ignore pretty much all of that.

Amazon’s EC2 is a service that lets us deploy server images, whether those are images that AWS has supplied, community-contributed images, or something that we’ve built ourselves. An image is essentially a preconfigured Operating System, much like what we built previously when installing Linux in a virtual machine, that we can deploy from the AWS console.

From the Services menu, or the list of All Services above, select EC2.

You should see a dashboard that looks like the following.

Again, for the most part you can ignore everything there for now.

Click on the big orange button that says “Launch Instance”.

Here you get to select the Operating System you want to deploy. Depending on what you want, you can feel free to select any of the Quick Start OS images; most of them are self-explanatory, such as Windows, macOS or Ubuntu. Amazon Linux is a variant of Linux based closely on Red Hat / CentOS that has been tuned to work well with AWS.

I’m going to select Red Hat Enterprise Linux 9.

Give your instance a name. For this example I’ll just call mine “RHEL Server”, but you should give your server a name that matches its purpose or another easily identifiable name.

Next you’ll want to select the instance type. This is basically the size of the server you want. AWS has a huge range of different instance types you can choose from depending on the purpose of the server you’re building. However, you need to be careful, because the instance type dictates much of the price you’ll pay each month, so don’t spin up massive servers unless you know you can pay for them.

In this example I’m going to deploy a t2.micro instance which is Free Tier eligible, which is perfect because it means I can deploy and play around for a bit and then shut it down when I’m ready without paying for it.

Below the instance type you want to select the key pair you’ll use to connect to the instance. The key pair allows you to securely connect to the instance using SSH without having to use passwords and is required by AWS. If you haven’t already set one up do so now by clicking the link next to the dropdown. Make sure you download the key you create and store it somewhere safe.

Further down in the Network settings, I’m going to select “Allow SSH traffic from” and select “My IP” from the dropdown. This just restricts access to the server from the Internet to only the IP address you’re connecting to the console from.

If you’re setting this up from home, you likely have an IP address assigned dynamically by your ISP. For the most part this is great for home Internet access, but it can cause issues when setting up AWS server access: if your ISP changes your home IP address, you can be locked out of your server. For this example, this is fine.

I’ve deselected the “Allow HTTP traffic from the Internet” checkbox as I won’t be setting up a web server at the moment.

That should be it for the basic configuration. If everything looks OK, you can click the orange “Launch Instance” button.

After a few seconds you should see your running instance appear.

Your server is now ready for use. The base images only have the bare minimum software installed to give you a useful OS. From here you’ll need to configure the applications and services that you’ll be running.

To connect to the server, open a terminal and establish an SSH connection. Remember, we’ve restricted access to only our own IP address and we need to use the Key that was configured earlier. The default username that’s created with the image is ec2-user.

$ ssh ec2-user@[IP-Address] -i ~/key.pem
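If you’ll be connecting to this server often, an entry in your ~/.ssh/config saves passing the key each time. The host alias and key path here are just examples; substitute your own:

```
Host aws-rhel
    HostName [IP-Address]
    User ec2-user
    IdentityFile ~/key.pem
```

After that, ssh aws-rhel is all you need.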

You now have a Red Hat Enterprise Linux server running in the AWS Cloud. From here you can configure your server to do anything you want, you could even try using Ansible for configuration management.

Once you’ve finished playing with your server, make sure you remember to terminate the instance so that you keep your AWS bill under control.

There’s much more to setting up and running production servers in AWS than just what I covered in this post, however this is a pretty good starting point for getting a server up and running quickly.

Categories
System Administration

Setting Up Oracle Linux Automation Manager

Previously I wrote about using Ansible to manage the configuration of Linux servers. I love using Ansible and use it almost every day, however in a large Enterprise environment with multiple users and a lot of Ansible roles and playbooks, sometimes using Ansible on its own becomes difficult to maintain.

In this post I’m going to run through configuring Oracle Linux Automation Manager. Oracle’s Automation Manager is essentially a rebranded fork of AWX, the upstream project behind Ansible Automation Platform, and provides a web user interface to easily manage your Ansible deployments and inventory.

I’m demonstrating the use of OLAM instead of Red Hat’s Ansible Automation Platform or upstream AWX because I’ve had recent experience deploying Oracle Linux Automation Manager in an Enterprise environment. The most recent version of OLAM as of this writing is version 2, which is based on AWX version 19. The newer versions of AAP that Red Hat provides and the community AWX version are both installed on Kubernetes or OpenShift, which I don’t want to worry about for the purposes of this article. OLAMv2 is installable via RPM packages with DNF, yet it still uses the newer Ansible Automation Platform architecture. I really want to dig into the underlying components such as Receptor and the Execution Environments, and I feel like this is the least complex path for my purposes.

This will also give you a good platform to get familiar with AAP without the complexity of setting up Kubernetes or managing containers. As much as I love Kubernetes, Containers and OpenShift, I think it’s important to remember that underneath container platforms is still Linux, and knowing how to work with Linux is an important skill.

OLAM, or AWX in general, also gives you a lot of flexibility to expand your Ansible deployments as you grow.

Oracle provides access to Automation Manager directly in their Yum repositories for Oracle Linux 8, which makes installation really simple, particularly if you already run Oracle Linux or have a non-RHEL environment.

In this post I’ll install OL Automation Manager onto an Oracle Linux 8 virtual machine running in Proxmox. I won’t detail getting Oracle Linux installed as I’ve already done a post about RHEL and CentOS, and the installation steps are the same. I’ll install OLAM onto a single virtual machine rather than a cluster as this is just my own testing environment; in a production environment you should use multiple machines.

Once Oracle Linux has been set up you can start the installation of Oracle Linux Automation Manager. First we have to enable the Automation Manager 2 repository.

$ sudo dnf install oraclelinux-automation-manager-release-el8

Next we need to set up the PostgreSQL database. I’m going to use PostgreSQL 13.

$ sudo dnf module reset postgresql
$ sudo dnf module enable postgresql:13
$ sudo dnf install postgresql-server
$ sudo postgresql-setup --initdb
$ sudo sed -i "s/#password_encryption.*/password_encryption = scram-sha-256/"  /var/lib/pgsql/data/postgresql.conf
$ sudo systemctl enable --now postgresql

Next, set up the AWX user in postgresql.

$ sudo su - postgres -c "createuser -S -P awx"

Enter the password when prompted then create the awx database.

$ sudo su - postgres -c "createdb -O awx awx"

Open the file /var/lib/pgsql/data/pg_hba.conf and add the following line:

host  all  all 0.0.0.0/0 scram-sha-256

In the file /var/lib/pgsql/data/postgresql.conf, uncomment the listen_addresses = 'localhost' line.
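The two postgresql.conf edits (the password_encryption sed above and uncommenting listen_addresses) can be sketched safely against a sample file. The /tmp path below is just for illustration; on a real host you’d operate on /var/lib/pgsql/data/postgresql.conf.

```shell
# Work on a sample fragment (illustrative path) rather than the live config
cat > /tmp/postgresql.conf <<'EOF'
#listen_addresses = 'localhost'
#password_encryption = md5
EOF

# Switch password hashing to scram-sha-256 and uncomment listen_addresses
sed -i "s/#password_encryption.*/password_encryption = scram-sha-256/" /tmp/postgresql.conf
sed -i "s/#listen_addresses/listen_addresses/" /tmp/postgresql.conf

cat /tmp/postgresql.conf
```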

Now that the database is ready, we can install Automation Manager using DNF.

$ sudo dnf install ol-automation-manager

That should only take a moment. Next you’ll need to edit the file /etc/redis.conf and add the following two lines at the bottom of the file.

unixsocket /var/run/redis/redis.sock 
unixsocketperm 775

Next edit the file /etc/tower/settings.py. If you’re installing in a cluster configuration you’ll need to make a couple of extra changes, but for this single-host installation the only change we need to make is to the database configuration settings. Add the password you created earlier for the awx user in PostgreSQL and set the host to ‘localhost’.
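For reference, the database section of settings.py ends up looking something like the sketch below. The password is a placeholder and the ENGINE value is what I believe ships in the default OLAM settings file, so check your own file rather than copying this verbatim.

```python
# Database settings fragment from /etc/tower/settings.py (values are
# illustrative; 'CHANGEME' stands in for the awx DB password you created)
DATABASES = {
    'default': {
        'ATOMIC_REQUESTS': True,
        'ENGINE': 'awx.main.db.profiled_pg',
        'NAME': 'awx',
        'USER': 'awx',
        'PASSWORD': 'CHANGEME',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
```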

Now we’ll change users to the awx user to run the next part of the installation.

$ sudo su -l awx -s /bin/bash
$ podman system migrate
$ podman pull container-registry.oracle.com/oracle_linux_automation_manager/olam-ee:latest
$ awx-manage migrate
$ awx-manage createsuperuser --username admin --email admin@example.com

After running the createsuperuser command you’ll be asked to create a password. These are the credentials for logging in to the web UI, so don’t forget them.

Next generate an SSL certificate so you can access Automation Manager over HTTPS.

$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/tower/tower.key -out /etc/tower/tower.crt
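If you want to sanity-check the certificate, openssl can print its subject and validity dates back. The sketch below runs the same request against /tmp paths and adds a -subj so it doesn’t prompt interactively; on the real host you’d inspect /etc/tower/tower.crt instead.

```shell
# Generate a throwaway self-signed cert in /tmp (illustrative paths;
# -subj skips the interactive questions the command above asks)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=awx.local" \
  -keyout /tmp/tower.key -out /tmp/tower.crt

# Print the subject and validity window to confirm it was created correctly
openssl x509 -in /tmp/tower.crt -noout -subject -dates
```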

And replace the default /etc/nginx/nginx.conf configuration with this one.

Next we can start to provision the installation. Log back in as the awx user.

$ sudo su -l awx -s /bin/bash
$ awx-manage provision_instance --hostname=awx.local --node_type=hybrid
$ awx-manage register_default_execution_environments
$ awx-manage register_queue --queuename=default --hostnames=awx.local
$ awx-manage register_queue --queuename=controlplane --hostnames=awx.local

Change the hostname(s) to whatever suits your environment; I used awx.local for this demonstration. You can now type exit to leave the awx user session and continue the rest of the setup as your normal user.

Replace the /etc/receptor/receptor.conf file with this one.

You can now start OL Automation Manager.

$ sudo systemctl enable --now ol-automation-manager.service

Now we can preload some data.

$ sudo su -l awx -s /bin/bash
$ awx-manage create_preload_data

Finally, we’ll open up the firewall to allow access.

$ sudo firewall-cmd --add-service=https --permanent
$ sudo firewall-cmd --add-service=http --permanent
$ sudo firewall-cmd --reload

You should be able to load up the browser and access the Web UI.

Login using the admin credentials you created during the setup process.


Joining Enterprise Linux to Active Directory

In this post I’ll outline the steps to join an Enterprise Linux host to Microsoft Active Directory for user account management.

Why would you want to do this?

In an Enterprise environment it’s common to have a mix of Windows and Unix/Linux machines that offer different services across the organisation. To resolve the issues of user account management across a network of systems you’ll typically find a centralised directory service such as Microsoft Active Directory. Active Directory manages the creation and administration of user accounts for any system joined to the domain, so an administrator can create a single user account once and deploy it anywhere in the Enterprise.

If all the systems across the network are Windows machines it’s simple to join them to the domain. However, when you introduce Linux hosts into the mix things can get a little complicated.

Fortunately there are a lot of high-quality tools that help make this process relatively simple.

For this post I’m not going to go into the details of setting up a Windows domain controller. I’ll be using an already-configured DC running on Windows Server 2016 in VirtualBox. I’ve also set up a user account to test with.

I’ve also got an Oracle Linux 7 host set up on the same host-only network. The steps here should work for any RHEL based Linux distribution, such as CentOS or Red Hat Enterprise Linux. The process for Debian based distributions should be similar however the package names might be slightly different.

Install the required packages.

$ sudo yum install adcli sssd authconfig realmd krb5-workstation oddjob oddjob-mkhomedir samba-common-tools

Once those are installed we can use realm to join the Windows domain. For simplicity’s sake I’ve used my domain name, davidroddick.com, for the domain controller. This isn’t a publicly accessible system, and I’ve updated the /etc/hosts file to point the Linux host at the Windows machine.
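As an example, the /etc/hosts entry looks something like this (the IP address here is made up for illustration; use your DC’s actual address):

# /etc/hosts — point the AD domain at the Windows DC (illustrative IP)
192.168.56.10   davidroddick.com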

$ sudo realm discover davidroddick.com

realm discover will print the domain configuration for the davidroddick.com domain.

Next we can use realm join to join the Linux host to the domain.

$ sudo realm join --verbose --user=Administrator davidroddick.com

If the domain controller is set up correctly and everything worked, you should see the message “Successfully enrolled machine in realm” at the bottom of the screen.

Verify the Linux client is connected to the Windows domain.

$ realm list

Now we can configure NSS to authenticate users.

$ authconfig --enablesssd --enablesssdauth --enablemkhomedir --update

I had to manually edit the /etc/samba/smb.conf, /etc/krb5.conf and /etc/sssd/sssd.conf files to make sure all the settings were correct. I’d also recommend updating your nameserver settings in /etc/resolv.conf to point to the Windows Server.
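For reference, a realm join typically generates an /etc/sssd/sssd.conf along these lines. Treat this as a sketch of what to check rather than something to copy verbatim, since the exact values depend on your domain:

[sssd]
domains = davidroddick.com
config_file_version = 2
services = nss, pam

[domain/davidroddick.com]
ad_domain = davidroddick.com
krb5_realm = DAVIDRODDICK.COM
realmd_tags = manages-system joined-with-adcli
cache_credentials = True
id_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = True
use_fully_qualified_names = True
fallback_homedir = /home/%u@%d
access_provider = ad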

If everything went according to plan (which mine didn’t at first) you should be able to query Active Directory for a user and then login as that user.

$ id DAVIDRODDICK\\droddick
$ su - DAVIDRODDICK\\droddick

If you can now successfully log into the user account that was created in Active Directory from the Linux host everything should be set up correctly.


Installing Enterprise Linux

In this post I’m going to demonstrate the installation of Enterprise Linux in a Virtual lab environment.

I’ll be installing both Red Hat Enterprise Linux 9, because it’s the leading Enterprise Linux distribution, and CentOS Stream 9, because it’s the upstream community release of RHEL. The steps outlined here should be the same for all variants of Red Hat Enterprise Linux, including Oracle Linux, Rocky Linux and AlmaLinux.

I’ve chosen Enterprise Linux as it’s commonly deployed in datacentres and server infrastructure for large companies across the world. Being familiar with Red Hat technologies is super important if you want to work with Linux professionally. Even as a professional, having a home lab or test environment lets you play and experiment without messing with your production environment.

Red Hat Linux was the first Linux distribution I ever used back in 2002 with Red Hat Linux 7.3. I’ve primarily used Red Hat based Linux distributions for both home and work ever since.

First of all, download the CentOS installation ISO from the official website. I’ll be installing version 9 for x86_64. You can select any of the ISO images; the DVD ISO contains the entire distribution at around 10GB, but I’ll be selecting the boot ISO as it’s a smaller initial download and I don’t need every package that comes on the full DVD.

We’ll also need the RHEL installation ISO from Red Hat. To get access to the Red Hat Enterprise Linux software you’ll need to register for a free developer account at https://developers.redhat.com/. The developer account allows you to run up to 16 RHEL machines for free, which is perfect for a home lab or test environment. Once you’ve registered for the developer account, head over to https://developers.redhat.com/products/rhel/download and download the ISO.

Unless you’re installing Linux on your physical machine, which is great (I personally run Fedora Linux on my main personal machine), I’m assuming you are using a lab environment, so you should also have either VirtualBox or similar Virtual Machine software installed. I’ll be using Proxmox, but you could also just as easily use VMware, Qemu/KVM or VirtualBox. The installation steps covered below will also work if you’re installing Linux on a bare metal PC.

I already have a Proxmox homelab set up running on 2 old machines so I won’t go into setting up Proxmox here. Whatever virtualisation software you’re using, create a VM and configure it appropriately. I’m using the defaults of 2GB RAM, 32GiB storage and 1 CPU.

Once the virtual machine has been created you can click the “Start” button to boot the virtual machine and start up the installation process.

When the machine boots up you will be presented with the “Welcome” screen. Select your language if you need to change the default and click Continue.

On the Installation Summary screen you’ll likely see a bunch of red warning messages. This is completely normal: it’s just the installer letting you know what you still need to configure before continuing with the installation.

The installation steps for both RHEL and CentOS are the same, except for one additional step when installing RHEL: registering the installation with your Red Hat subscription. Putting aside the licensing requirements, registering your machines with Red Hat as a developer or home lab user gives you access to the Red Hat software repositories for installing additional applications, and has the additional benefit of giving you access to things like Red Hat Insights, allowing you to explore and use the same tools and software that Red Hat offers to enterprise customers completely for free. This is awesome because if you’re working in the industry as a Linux systems administrator, having experience with Red Hat’s products beyond RHEL is very helpful.

Let’s quickly register our Red Hat system so we can continue with the installation. I registered with my developer account information but unselected the System Purpose and Insights checkboxes; we can change these options later once we start experimenting with our systems.

Next, select the Network & Host Name option. If your Host PC is connected to the Internet, which I’m assuming it is as you’re reading this, you should just be able to toggle the Ethernet button to On and the network should auto-connect.

If you also want to give your machine a hostname instead of localhost.localdomain, this is where you can do that. I’m not going to do that here as it’s easy to change later. Click Done.

Next, if Installation Source is still red you’ll have to configure it. This decides where the packages will be installed from. If you downloaded the full DVD ISO you can likely ignore this and install from the local source, but if, like me, you downloaded the boot ISO you might need to select the remote location to download the rest of the packages from. Both RHEL and CentOS should pick a mirror automatically once the network is connected, but if not you’ll need to search online for your closest mirror URL.

Next select Software Selection. This screen is where you actually pick which packages or package groups to install. I’m going to pick “Minimal Install” for both machines, though feel free to pick an environment that suits your purposes.

Next you’ll need to configure the Installation Destination. This is to prepare the virtual hard drive (or the physical hard drive if you’re installing this on bare metal) for the file system. By default, the installation will configure Logical Volume Manager (LVM) and use the entire disk, which is normally fine unless you know you need to configure manual disk partitions. In a production environment you might want to create separate partitions for /usr, /var, /tmp and /opt depending on the purposes of the server to allow you to manage the storage appropriately.

Ensure ‘Automatic’ is selected under Storage Configuration and select Done and then Accept Changes.

Finally we’ll configure the user accounts. I’m going to leave the Root account disabled and create a non-root user with administrative privileges.

When creating a standard user account, make sure to select “Make this user administrator”, which adds your user account to the “wheel” group, allowing you to perform administrative tasks without logging in directly as root. Click Done.

That should be the final configuration step you need to perform.

If there is no more red text on the Summary screen you can click “Begin Installation” to continue. Depending on whether you’re installing from the DVD ISO or over the network, which software packages you chose, and your Internet speed, the actual installation process might take a while.

Once the installation is complete you can reboot the machines.

Login as your non-root user.

The final thing I’ll do is make note of each machine’s IP address so that I can connect to them via SSH instead of interacting with the machines through the Proxmox or virtual machine console. Run the following command on both machine consoles.

$ ip a

I then add them both to the ~/.ssh/config file on my desktop machine. The config file would look similar to this:

Host rhel
	Hostname 192.168.1.2
	User dave

Host centos
	Hostname 192.168.1.3
	User dave
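You can verify the config resolves as expected with ssh -G, which prints the options ssh would use for a host without actually connecting. The sketch below writes a sample config to /tmp so it’s self-contained; normally you’d just run ssh -G rhel against your ~/.ssh/config.

```shell
# Write a sample client config to a temp file (illustrative IP and user)
cat > /tmp/ssh_config <<'EOF'
Host rhel
    Hostname 192.168.1.2
    User dave
EOF

# ssh -G prints the fully-resolved options for a host without connecting
ssh -G -F /tmp/ssh_config rhel | grep -E '^(hostname|user) '
```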

Congratulations and enjoy Enterprise Linux.