Categories
System Administration

Converting from RHEL to AlmaLinux

The best thing about open source, and one of the reasons why I love the entire Linux ecosystem, is choice. With open source software you have the ability to choose what OS or software you run, how you run it, and what you can do with it. If you don’t like the decisions that have been made, or you want to do things in a different way, you have total freedom to do something about it or find an alternative.

I also think vendor lock-in can be really bad for innovation and for freedom. One of my long-time goals has always been to build my own Linux distribution, not to compete with anyone or because I think I can do things any better, but really just for choice. To say I can, to gain the skills to be able to do it, and if anything should ever happen to the software I use, I can do something about it.

AlmaLinux is an entirely free, community-driven Enterprise Linux distribution, binary-compatible with Red Hat Enterprise Linux, that started life to fill the gap when CentOS Linux was discontinued. I’m personally a big fan of Red Hat Linux and the whole range of Red Hat-alike products. Red Hat Linux 7.3 was the first Linux distribution I ever used and I’ve mostly run Fedora or CentOS on my personal machines ever since.

I really like the path that AlmaLinux chose to take with its distribution. I think the Linux world needs a community Enterprise OS, I think the changes to CentOS, CentOS Stream and Red Hat Enterprise Linux do make sense, and I think it’s important to try to work together cooperatively. Linux started life as a cooperative project; open source developers and companies like Red Hat have both made Linux into what it is today.

In this article I wanted to demonstrate converting from Red Hat Enterprise Linux to AlmaLinux. In a previous post I wrote about converting from an alternative Enterprise Linux (Oracle Linux) to Red Hat, so in the spirit of sharing and freedom of choice, I thought I’d go the opposite direction this time.

Just to be clear, this is not a comment either way about Red Hat or any of the alternative Enterprise Linux distributions. I’m not trying to say one is better than the other, or that you should pick one over another. I actually really value having the ability to choose. Red Hat make a fantastic product; upstream Fedora Linux, built by the open source community, is a fantastic distribution and the one I run on my personal machine; and all of the CentOS/RHEL binary-compatible forks and downstream distributions are excellent as well in my opinion.

So I’m starting with a fresh Red Hat Enterprise Linux 9 server running in Proxmox. I installed this machine from an ISO I downloaded from Red Hat and it’s registered with my developer subscription, which I did during the install.

For the sake of it, I’ve also got an Apache web server running.

AlmaLinux have provided a migration script to convert from one distribution to another, as most of the Enterprise Linux distributions have, similar to the Convert2RHEL tool from Red Hat.

Download the script from GitHub, and run it.

$ curl -O https://raw.githubusercontent.com/AlmaLinux/almalinux-deploy/master/almalinux-deploy.sh

$ sudo bash almalinux-deploy.sh

Running the script shows that the migration is supported, and it starts doing its thing.

The migration took about the length of time it took me to make a coffee and some toast, though admittedly this is a small server with no real data or applications running, so your mileage may vary. Still, it was quite quick and painless.

After rebooting the server, everything appears to be working correctly.
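To double-check the result from the command line, a couple of quick checks (assuming a systemd-based system with /etc/os-release) are:

```shell
# Confirm which distribution the machine now reports itself as
grep '^PRETTY_NAME=' /etc/os-release

# Check that the Apache web server survived the migration
systemctl is-active httpd 2>/dev/null || echo "httpd is not running on this machine"
```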

Categories
System Administration

Converting from Oracle Linux/Rocky/CentOS to RHEL

In this post I wanted to demonstrate converting from one of the RHEL compatible Enterprise Linux distributions, like CentOS, Rocky Linux or Oracle Linux, to Red Hat Enterprise Linux. I’ll be demonstrating from an Oracle Linux server running in Proxmox, however these steps will work regardless of the RHEL compatible distribution you are starting from as long as it’s one of the supported systems.

You must be running one of the RHEL compatible distributions already. The conversion does not work from other Linux distributions, such as Debian or Ubuntu, and it also does not work if you’re starting from CentOS Stream.

There are really two main prerequisites that you’ll need before attempting this conversion.

  1. You are already running a RHEL compatible distribution of at least version 7+. If you have an older distribution you’ll need to upgrade first.
  2. You have a valid RHEL subscription. I’ll be using the Free developer subscription that entitles you to run up to 16 Red Hat Enterprise Linux installations, as well as access to other Red Hat products.
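If you’re not sure what you’re starting from, the distribution and version are recorded in /etc/os-release on any modern systemd-based distribution, so a quick sanity check before going further is:

```shell
# Show the distribution name and major version to confirm prerequisite 1
grep -E '^(NAME|VERSION_ID)=' /etc/os-release
```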

If you need to upgrade from an older version of RHEL or CentOS, you should check out the leapp upgrade tool.

DISCLAIMER: Now before we start, I am running this in my homelab environment, not a production environment. I’m doing this as a trial run before I do an actual Oracle Linux to RHEL conversion in production for myself. This is for my own experience and practice, and I advise you to practice the conversion in a test environment before running it on production servers too. I’m not responsible for any damage done to your own servers. If you run into any issues and you have an active RHEL subscription, you should contact Red Hat for support.

I have a running Oracle Linux 8 system that I’ll be using to convert to RHEL 8.

I’ve also set this server up to host a simple WordPress website with Apache, PHP 8, and MariaDB.

The first step in converting to RHEL is to download the convert2rhel tool. I’ve downloaded the tool from GitHub and installed it on the Oracle Linux machine. If you’re converting from CentOS and can connect your system to Red Hat Satellite or console.redhat.com, you will be able to enable the Convert2RHEL repos and manage the conversion from there.

$ wget https://github.com/oamg/convert2rhel/releases/download/v2.1.0/convert2rhel-2.1.0-1.el8.noarch.rpm

$ dnf localinstall ./convert2rhel-2.1.0-1.el8.noarch.rpm

Once it’s installed, run convert2rhel analyze to determine if the system can be converted.

$ convert2rhel analyze

After a minute or so, the analyze tool spat out a whole bunch of red error messages, but this is the point of analyzing first. My issues were with the firewalld service running and with the default Oracle Linux kernel. Oracle Linux generally installs the Unbreakable Enterprise Kernel (UEK), which is incompatible with the Red Hat kernel, so I’ll need to fix both of those issues before continuing.

I’ll fix the kernel issue first.

$ grubby --default-kernel

/boot/vmlinuz-5.15.0-209.161.7.2.el8uek.x86_64

$ grubby --info=ALL | grep ^kernel

kernel="/boot/vmlinuz-5.15.0-209.161.7.2.el8uek.x86_64"
kernel="/boot/vmlinuz-4.18.0-553.16.1.el8_10.x86_64"
kernel="/boot/vmlinuz-0-rescue-83229607f01f471dbd78c219e5e4fc07"

The default kernel on the system is the UEK kernel, but there’s a Red Hat kernel already installed, so I’ll set that one as default and reboot.

$ grubby --set-default /boot/vmlinuz-4.18.0-553.16.1.el8_10.x86_64

$ grubby --default-kernel

/boot/vmlinuz-4.18.0-553.16.1.el8_10.x86_64

Once the machine has rebooted, I’ll re-run the analyze command.

The kernel error message has been resolved, so now it’s just the firewalld error. There’s also an error message about the system not being connected to a Red Hat subscription, which is fine; we’ll fix that shortly.

I’m just going to run the suggested commands to remove the Firewalld error.

$ sed -i -- 's/^CleanupModulesOnExit.*/CleanupModulesOnExit=no/g' /etc/firewalld/firewalld.conf

$ firewall-cmd --reload
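If you’re curious what that sed expression does before running it against the real config, you can try it on a throwaway copy first — a sketch using a one-line sample file rather than the real firewalld.conf:

```shell
# Create a sample file with the setting the analyzer complains about
printf 'CleanupModulesOnExit=yes\n' > /tmp/firewalld-demo.conf

# The same substitution the suggested command applies to /etc/firewalld/firewalld.conf
sed -i -- 's/^CleanupModulesOnExit.*/CleanupModulesOnExit=no/g' /tmp/firewalld-demo.conf

cat /tmp/firewalld-demo.conf
```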

In the convert2rhel man page there are a few options for authenticating to the subscription manager, including passing your username and password or an activation key at the command line. But the option I’m going to use is the -c option for using a config file. The convert2rhel tool has installed a config file at /etc/convert2rhel.ini which you can override.

$ cp /etc/convert2rhel.ini ~

I’ve copied the ini file to my root user’s home directory and updated it with my RHSM subscription details.

[subscription_manager]
username       = <insert_username>
password       = <insert_password>

I’ll re-run the analyze tool, hopefully one last time, to check the subscription and then we should be good to go.

$ convert2rhel analyze -c /root/convert2rhel.ini

Everything looked good. There was a warning about third-party packages being installed, because it detected the convert2rhel tool itself, which wasn’t installed from a supported repository, so I’m going to ignore that. Otherwise, let’s do this.

$ convert2rhel -c /root/convert2rhel.ini

The conversion took a few minutes, but it completed successfully, so I’ll reboot the machine.

The system rebooted into Red Hat Enterprise Linux and everything looks great.

I checked the system was registered correctly and changed the hostname because I previously had it set to oracle-linux.localnet.com.

$ subscription-manager status
+-------------------------------------------+
   System Status Details
+-------------------------------------------+
Overall Status: Disabled
Content Access Mode is set to Simple Content Access. This host has access to content, regardless of subscription status.

System Purpose Status: Disabled

This is a test machine, so the subscription status is fine. I installed the insights-client and registered the system.

$ dnf install insights-client
$ insights-client --register

Lastly, I’ll check that the WordPress installation is still working.

I noticed the hostname of the httpd configuration was still the previous oracle-linux hostname, so I’ll change that.

Everything looks great. The conversion from Oracle Linux to Red Hat Enterprise Linux was successful.

Categories
System Administration

Exploring OpenShift

OpenShift is Red Hat’s container orchestration platform built on top of Kubernetes. I love working with containers and Kubernetes, and as I’m also a big fan of Red Hat technologies I wanted to become more familiar with working with OpenShift.

I’ve also been studying Red Hat’s OpenShift training courses, including OpenShift Developer and OpenShift Administrator, so it makes sense to have an environment to work in and deploy some applications.

I’m going to install OpenShift Local on my Fedora Linux development machine.

First you’ll need to download the crc tool from Red Hat. You’ll need a developer account and to log in to https://console.redhat.com/openshift/create/local to access the software.

Click the “Download OpenShift Local” button.

$ cd ~/Downloads
$ tar xvf crc-linux-amd64.tar.xz

Once it’s downloaded and you’ve extracted the tar archive, copy the crc binary to a bin directory under your home directory.

$ mkdir -p ~/bin
$ cp ~/Downloads/crc-linux-*-amd64/crc ~/bin

If your user’s bin directory isn’t in your $PATH, you’ll need to add it.

$ export PATH=$PATH:$HOME/bin
$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
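The effect of adding a directory to $PATH can be seen with a quick throwaway example (using /tmp here so it doesn’t touch your real ~/bin):

```shell
# Create a demo bin directory with a tiny script in it
mkdir -p /tmp/demo-bin
printf '#!/bin/sh\necho hello from demo-bin\n' > /tmp/demo-bin/hello
chmod +x /tmp/demo-bin/hello

# Once the directory is on PATH, the script can be run by name from anywhere
export PATH=$PATH:/tmp/demo-bin
hello
```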

Next, set up the OpenShift environment with the crc setup command.

$ crc setup

This will take a while as it needs to download about 4GB for the OpenShift environment.

Once it’s downloaded, start the local cluster.

$ crc start

The crc tool will ask for the pull secret as it’s required to pull the OpenShift images from Red Hat. You can get the secret from console.redhat.com.

Copy the pull secret and paste it into your terminal when asked.

If everything worked correctly you should now have a running OpenShift instance on your machine.

The crc tool will give you a URL and login information once it’s ready. Open up your browser and log in to the dashboard.

You can change between the Administrator and the Developer perspective by clicking the dropdown under “Administrator”. This doesn’t change your permissions, but it changes the view of the dashboard to make it more focused on either administrative tasks or developer tasks.

Containers and Kubernetes are an important technology for modern application deployment and OpenShift is a really powerful addition to Kubernetes. I’m really enjoying getting more familiar with OpenShift.

Categories
System Administration

Building a Custom Ansible Execution Environment

Recently I’ve been working on an Ansible upgrade project that included building out an Ansible Automation Platform installation and upgrading legacy ansible code to modern standards. The ansible code that we were working with had been written mostly targeting Enterprise Linux versions 6 and 7 and was using pre-2.9 Ansible coding standards.

The newer versions of Ansible and Ansible Automation Platform utilise Execution Environments to run the ansible engine against a host. An Execution Environment is a container built with Ansible dependencies, Python libraries and Ansible Collections baked in.

On top of the legacy Ansible code that I was working with, the codebase does a lot of “magic” configuration for setting things up across the environment, so I had to make sure that everything worked like it did previously. I tested a few of the off-the-shelf execution environments, none of which worked for what we needed.

In this post I wanted to detail a quick tutorial on building a custom execution environment for running your Ansible code.

I’m using Fedora Linux 39 to set up a development environment, but most Linux distributions should follow similar steps.

From the command line, install the required dependencies. As execution environments are containers, we need a container runtime and for that we’ll use Podman. We also need some build tools.

$ sudo dnf install podman python3-pip

Now to install the Ansible dependencies.

$ python3 -m pip install ansible-navigator ansible-builder

Ansible Navigator is the new interface for running Ansible and is great for testing out different execution environments and your ansible code as you’re developing. I briefly demonstrated using Ansible Navigator in my article about using Ansible to configure Linux servers. Ansible Builder provides the tools to create the container images.

If you’ve ever built Docker containers before, the steps for EEs are very similar, just with the Ansible Builder wrapper. Create a folder to store your files.

$ mkdir custom-ee && cd custom-ee

The main file we need to create is the execution-environment.yml file, which Ansible builder uses to build the image.

---
version: 3

images:
  base_image:
    name: quay.io/centos/centos:stream9

dependencies:
  python_interpreter:
    package_system: python3.11
    python_path: /usr/bin/python3.11
  ansible_core:
    package_pip: ansible-core>=2.15
  ansible_runner:
    package_pip: ansible-runner

  galaxy: requirements.yml
  system: bindep.txt
  python: |
    netaddr
    receptorctl

additional_build_steps:
  append_base:
    - RUN $PYCMD -m pip install -U pip
  append_final:
    - COPY --from=quay.io/ansible/receptor:devel /usr/bin/receptor /usr/bin/receptor
    - RUN mkdir -p /var/run/receptor
    - RUN git lfs install --system
    - RUN alternatives --install /usr/bin/python python /usr/bin/python3.11 311

The main parts of the file are fairly self-explanatory, but from the top:

  • We’re using version 3 of the ansible builder spec.
  • The base container image we’re building from is CentOS Stream 9, pulled from Quay.io.
  • We want to use Python 3.11 inside the container.
  • We want an Ansible core version higher than 2.15.

In the dependencies section, we can specify additional software our image requires. The galaxy entry lists Ansible collections from the Galaxy repository, system is software installed using DNF on a Linux system, and python is the Python dependencies we need, since Ansible is written in Python and requires certain libraries to be available depending on your requirements.

The Galaxy collections are being defined in an external file called requirements.yml which is in the working directory with the execution-environment.yml file. It’s simply a YAML file with the following entries:

---
collections:
  - name: ansible.posix
  - name: ansible.utils
  - name: ansible.netcommon
  - name: community.general

My project requires the ansible.posix, ansible.utils, ansible.netcommon and community.general collections. Previously, all of these collections would have been part of the ansible codebase and installed along with Ansible; however, the Ansible project has split them out into collections, making Ansible core smaller and more modular. You might not need these exact collections, or you might require different ones depending on your environment, so check out the Ansible documentation.

Next is the bindep.txt file for the system binary dependencies. These are installed in our image, which is CentOS, using DNF.

epel-release [platform:rpm]
python3.11-devel [platform:rpm]
python3-libselinux [platform:rpm]
python3-libsemanage [platform:rpm]
python3-policycoreutils [platform:rpm]
sshpass [platform:rpm]
rsync [platform:rpm]
git-core [platform:rpm]
git-lfs [platform:rpm]

Again, you might require different dependencies, so check the documentation for the Ansible modules you’re using.

Under the Python section, I’ve defined the Python dependencies directly rather than using a separate file. If you need a separate file, it’s called requirements.txt.

    netaddr
    receptorctl

Netaddr is a Python library for working with IP addresses, which the ansible codebase I was working with needed, and receptorctl is a Python library for working with Receptor, a network service mesh implementation that Ansible uses to distribute work across execution nodes.

With all of that defined, we can build the image.

$ ansible-builder build --tag=custom-ee:1.1

The custom-ee tag is the image name that we’ll reference from Ansible. The ansible-builder command runs Podman to build the container image; the build should take a few minutes. If everything went according to plan, you should see a success message.

Because the images are just standard Podman images, you can run the podman images command to see it. You should see ‘localhost/custom-ee’, or whatever you tagged your image with, in the output.

$ podman images

If the build was successful and the image is available, you can test the image with Ansible Navigator. I’m going to test against a minimal RHEL 9 installation that I have running. In the ansible-navigator command, you can specify the --eei flag to change the EE from the default, or you can add a directive in an ansible-navigator.yml file in your ansible project, such as the following:

ansible-navigator:
  execution-environment:
    image: localhost/custom-ee:1.1
    pull:
      policy: missing
  playbook-artifact:
    enable: false

If you’re using Ansible Automation Platform you can pull the EE from a container registry or Private Automation Hub and specify which EE to use in your Templates.

$ ansible-navigator run web.yml -m stdout --eei localhost/custom-ee:1.1

You can also inspect the image with podman inspect, using the image hash from the podman images command.

$ podman inspect 8e53f19f86e4

Once you’ve got the EE working how you need it, you can push it to either a public or private container registry for use in your environment.

Categories
System Administration WordPress

Installing WordPress Multisite on Linux

WordPress is one of the most popular content management systems in the world, estimated to be running over 40% of all websites. It’s easy to use, has a massive theme and plugin ecosystem, and it’s super flexible for both personal websites and businesses.

I’ve been using WordPress since the start of my career over 15 years ago. It’s always been my first choice as a Website platform and is the option that I recommend for many use cases.

In this post I’ll outline the steps for installing and configuring WordPress as a multisite. WordPress Multisite is a powerful feature that allows you to run multiple WordPress sites from a single WordPress installation. This is particularly useful for businesses or individuals who need to manage multiple websites but want the convenience of a single dashboard. With Multisite, you can create a network of sites with shared plugins and themes, making it easier to maintain consistency and control across all your sites.

I built my first WordPress Multisite a few years ago, around 2018, when I worked as Technical Lead for a publishing company. We had 30 websites, all with similar functionality and design, that I thought would suit the multisite features. It was painful having to manage all 30 sites as separate instances with separate user logins, and having to keep all 30 up to date. Moving the sites into a multisite reduced the maintenance load and the cost of running that many websites, and since then I’ve built and managed several fairly large multisite installations.

I’ve chosen to install WordPress Multisite on an Enterprise Linux distribution such as CentOS or AlmaLinux. Enterprise Linux distributions are well known for their stability, security, and support, and are used heavily in web hosting. Using any of these distributions as a web server for WordPress ensures a reliable and secure hosting environment, critical for maintaining website performance.

Before you begin, ensure you have the following:

  • A server running RHEL, Oracle Linux, CentOS, or AlmaLinux.
  • A non-root user with sudo privileges.

I won’t detail where to host your server in this article. I assume for the sake of convenience that you’ve already got a Linux server running, either in a public cloud such as Amazon AWS or on a virtual machine running on your own PC. Any of the Enterprise Linux variants should do fine; the steps should be the same regardless.

Install Apache, PHP and MySQL

Apache is one of the most popular web servers in the world and is well-supported by WordPress and the web development community. It’s usually the default choice when installing WordPress.

$ sudo dnf install httpd -y
$ sudo systemctl enable --now httpd

MariaDB is a popular open-source relational database management system and is fully compatible with MySQL, which is the database system that WordPress uses.

$ sudo dnf install mariadb-server mariadb -y
$ sudo systemctl enable --now mariadb

Run the following command to secure your MariaDB installation.

$ sudo mysql_secure_installation

Follow the prompts to set the root password and remove anonymous users, disallow root login remotely, remove test databases, and reload privilege tables.

PHP is the programming language that WordPress is built on. Your server needs to have PHP installed so that it can execute the code and produce your website. By default, Enterprise Linux 8 will install PHP 7.2, which is a bit old and past its end-of-life, so we’ll install PHP 8.2 from App Streams instead.

$ sudo dnf module list php
$ sudo dnf module enable php:8.2
$ sudo dnf install php php-mysqlnd -y
$ sudo systemctl restart httpd

Log in to MariaDB and create a database and user for WordPress.

$ sudo mysql -u root -p

Then, run the following commands.

CREATE DATABASE wordpress;
GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpressuser'@'localhost' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
EXIT;

Change ‘wordpressuser’ and ‘password’ to a username and secure password of your choice. These are the credentials that WordPress will use to connect to the database, so it’s critical that they are secure.
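If you need a quick way to generate a secure password for the database user, openssl (installed on most Enterprise Linux systems) can produce one — just an example, any good password generator will do:

```shell
# Generate a random 24-byte, base64-encoded password (32 characters)
openssl rand -base64 24
```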

Navigate to the web root directory and download the latest WordPress package.

$ cd /var/www/
$ sudo rm -rf html/
$ sudo wget https://wordpress.org/latest.tar.gz
$ sudo tar -xzvf latest.tar.gz
$ sudo mv wordpress html
$ sudo chown -R apache:apache /var/www/html
$ sudo chmod -R 755 /var/www/html

Create an Apache configuration file for WordPress.

$ sudo vim /etc/httpd/conf.d/wordpress.conf

Add the following configuration.

<VirtualHost *:80>
    ServerAdmin [email protected]
    DocumentRoot /var/www/html
    ServerName example.com
    ServerAlias www.example.com

    <Directory /var/www/html>
        Options FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>

    ErrorLog /var/log/httpd/wordpress-error.log
    CustomLog /var/log/httpd/wordpress-access.log combined
</VirtualHost>

Include the email address and domain name you wish to use for your website. In a WordPress multisite the domain name will be the primary network domain, however when you add subsites to the network you can change the subsite domain names from the WordPress dashboard.

Configuring SELinux

After configuring Apache, you might need to configure SELinux to allow Apache to serve WordPress files and communicate with the database. If you’ve followed the above configuration and your website doesn’t appear to work, or you’re getting forbidden error messages, you will need to complete the following configuration.

Note: Many people online suggest disabling SELinux out of convenience. Please don’t do that. SELinux is built into the Linux kernel to provide security access controls and is important for maintaining the security of your Linux system; it’s better to configure SELinux properly than to disable it.

Set the proper file context for WordPress files

$ sudo semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/html(/.*)?"

$ sudo restorecon -Rv /var/www/html

Allow Apache to connect to the database

$ sudo setsebool -P httpd_can_network_connect_db 1

Restart Apache

$ sudo systemctl restart httpd

You should now be able to access your WordPress installation. Open your web browser and navigate to http://your-server-ip. Follow the on-screen instructions to complete the initial installation.

If everything went correctly and the installation was able to connect to the database properly, you should now be able to log in to your WordPress admin dashboard by navigating to http://your-server-ip/wp-admin.

After the initial installation, follow these steps to configure WordPress Multisite with subdomains:

Edit wp-config.php: Add the following line above the line that says /* That's all, stop editing! Happy publishing. */:

define('WP_ALLOW_MULTISITE', true);
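If you prefer to script the edit, sed can insert the line above the marker — demonstrated here on a throwaway sample file rather than your real wp-config.php, and assuming the default wording of the marker comment:

```shell
# Create a sample file containing the default "stop editing" marker
cat > /tmp/wp-config-demo.php <<'EOF'
<?php
/* That's all, stop editing! Happy publishing. */
EOF

# Insert the multisite define above the marker line
sed -i "/stop editing/i define('WP_ALLOW_MULTISITE', true);" /tmp/wp-config-demo.php

grep WP_ALLOW_MULTISITE /tmp/wp-config-demo.php
```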

Install the Network: Refresh your WordPress dashboard. Navigate to Tools -> Network Setup. Select “Sub-domains” and click “Install”.

You will most likely need to enable the Apache mod_rewrite module to allow for WordPress to rewrite the site URLs.

Enter the following command, and if there’s no output you’ll need to enable the module by editing the file /etc/httpd/conf.modules.d/00-base.conf and uncommenting the line for rewrite_module.

$ httpd -M | grep rewrite
 rewrite_module (shared)

Update wp-config.php and .htaccess: WordPress will provide some code to add to your wp-config.php and .htaccess files. Edit each file and add the provided code. In wp-config.php, add the following just below the define('WP_ALLOW_MULTISITE', true); line:

define('MULTISITE', true);
define('SUBDOMAIN_INSTALL', true);
define('DOMAIN_CURRENT_SITE', 'example.com');
define('PATH_CURRENT_SITE', '/');
define('SITE_ID_CURRENT_SITE', 1);
define('BLOG_ID_CURRENT_SITE', 1);

.htaccess:

Replace the existing WordPress rules with the provided Multisite rules:

RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]

# add a trailing slash to /wp-admin
RewriteRule ^wp-admin$ wp-admin/ [R=301,L]

RewriteCond %{REQUEST_FILENAME} -f [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^ - [L]
RewriteRule ^(wp-(content|admin|includes).*) $1 [L]
RewriteRule ^(.*\.php)$ $1 [L]
RewriteRule . index.php [L]

You might need to restart Apache.

$ sudo systemctl restart httpd

Refresh your web browser and log back in to WordPress. If everything was successful, you should be able to access the Network Admin section of the WordPress dashboard.

From here you can add sub-sites, you can install themes and plugins that all sub-sites will be able to use, and you’ll be able to manage user access across each site in your network.

Configuring the Linux Firewall

To ensure that your WordPress site is accessible from the web, you will need to configure the firewall on your server to allow HTTP, HTTPS and SSH traffic, while restricting access to everything else.

First, check if firewalld is running on your server:

$ sudo systemctl status firewalld

If it is not running, start and enable it:

$ sudo systemctl enable --now firewalld

To allow HTTP (port 80) and HTTPS (port 443) and SSH (port 22) traffic through the firewall, run the following commands:

$ sudo firewall-cmd --permanent --add-service=http
$ sudo firewall-cmd --permanent --add-service=https
$ sudo firewall-cmd --permanent --add-service=ssh

After making these changes, reload the firewall to apply the new rules:

$ sudo firewall-cmd --reload

To verify that the rules have been added correctly, you can list the current firewall rules:

$ sudo firewall-cmd --list-all

Security Improvements

By default WordPress is relatively secure if you keep it up to date; however, it’s only as secure as its weakest part. By following a few basic security practices you’ll minimise the risk of your website being vulnerable to attack.

Update WordPress regularly
Ensure your WordPress installation, themes, and plugins are always up-to-date to protect against vulnerabilities.

Use strong passwords
Use complex passwords for your WordPress admin account, database user, and any other user accounts.

Install security plugins
Consider using security plugins like Wordfence to add an extra layer of protection. Wordfence lets you limit login attempts, scans your site for security vulnerabilities, and automatically blocks malicious activity.

Secure Your .htaccess File
Add rules to your .htaccess file to prevent unauthorized access to sensitive files:

<Files wp-config.php>
  Require all denied
</Files>

<Files .htaccess>
  Require all denied
</Files>

Enable SSL
Install an SSL certificate to encrypt data transmitted between your server and your users. You can obtain a free SSL certificate from Let’s Encrypt and configure Apache to use it.

Another option for providing SSL certificates is to host your website behind Cloudflare and use their SSL certificates and Web Application firewall. I won’t detail Cloudflare setup in this post but it’s well worth considering.

Regular backups
Set up regular backups of your WordPress site, including the database and files, to ensure you can quickly recover in case of an incident.
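A minimal sketch of what a backup script might look like — the paths here are examples, and the database dump line is commented out since it needs a running MariaDB and valid credentials:

```shell
BACKUP_DIR=/tmp/wp-backups
SITE_DIR=/tmp/demo-site          # stand-in for /var/www/html
STAMP=$(date +%F)

mkdir -p "$BACKUP_DIR" "$SITE_DIR"
echo '<?php // demo file' > "$SITE_DIR/index.php"

# Database dump (requires MariaDB):
# mysqldump -u wordpressuser -p wordpress > "$BACKUP_DIR/db-$STAMP.sql"

# Archive the site files with a dated filename
tar -czf "$BACKUP_DIR/files-$STAMP.tar.gz" -C "$SITE_DIR" .
ls "$BACKUP_DIR"
```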

You should now have a robust and flexible platform configured to host multiple websites from a single dashboard running on a solid, Enterprise grade Linux operating system.

Categories
Cloud Computing

Automating Server Deployments in AWS with Terraform

Previously I discussed deploying Enterprise Linux in AWS which I demonstrated by using the AWS console. This is a common way to deploy servers to the cloud, however doing server deployments manually can create a situation where you’re stuck with static images that are difficult to replicate when your infrastructure grows.

One of the benefits of Cloud Computing is that the infrastructure is programmable, meaning we can write code that can automate tasks for us. You can spend the time defining all of the settings and configurations once and then anytime you need to deploy your server again, whether it’s adding more servers to a cluster for high-availability, or re-deploying a server that’s died, you don’t have to reinvent the wheel and risk making costly mistakes. This also makes managing large-scale infrastructure much easier when provisioning can be scripted and stored in version control.

Terraform is a tool that’s used to automate infrastructure provisioning and can be used with AWS to make deploying servers fast, easy and reproducible.

Terraform lets you use a configuration language to describe your infrastructure and then goes out to your cloud platform and builds your environment. Similar to Ansible, Terraform allows you to describe the state you want your infrastructure to be in, and then makes sure that state is achieved.

You’ll need a few prerequisites to get the most out of this tutorial. I’ll be writing the Terraform configurations on my desktop running Fedora Linux, and I’ll be using the AWS account that was set up previously.

Depending on your environment, you might need to follow different instructions to get Terraform installed. Most versions of Linux will be similar. In Fedora Linux, run the following commands:

$ sudo dnf install -y dnf-plugins-core
$ sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
$ sudo dnf -y install terraform
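
You can quickly confirm the install worked and see which version you got:

```shell
$ terraform version
```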

To talk to the AWS infrastructure APIs, you’ll also need to install the awscli tool. Download and install it with the following commands:

$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscli.zip"
$ unzip awscli.zip
$ sudo ./aws/install

Next you’ll need to create some AWS credentials to allow Terraform to authenticate to AWS; these should be separate from the credentials you log in with. When you create an AWS account, the “main” account is known as the root account. This is the all-powerful administrator account and can do and access anything in your AWS console. When you set up your infrastructure, particularly when using automation tools like Terraform, you should create a non-root user account so you don’t inadvertently mess anything up. Head over to AWS IAM and create a new user.

AWS IAM and permissions settings are far beyond the scope of this post, however for the purposes of this demonstration ensure your new user has a policy that allows access to EC2, and create the access keys that the awscli tool will use to authenticate.

From your local machine type:

$ aws configure

And enter the access key and secret when prompted. You’ll also be asked to set your default region. In AWS, the region is the geographical location where you want your services to run; as I’m in Sydney I’ll set my region appropriately.
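
The prompts look something like this (the key values shown are placeholders):

```shell
$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: ****************************************
Default region name [None]: ap-southeast-2
Default output format [None]: json
```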

We should be ready to start using Terraform now. I’ll create a folder in my local user’s home directory called ‘terraform’.

Inside the terraform folder, create a file called ‘main.tf’ and enter the following:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" {
  region  = "ap-southeast-2"
}

resource "aws_instance" "rhel_server" {
  ami           = "ami-086918d8178bfe266"
  instance_type = "t2.micro"

  tags = {
    Name = "RHEL Test Server"
  }
}

Terraform uses a configuration language to define the desired infrastructure configuration. The important parts to take note of are the “provider” where you’re telling Terraform to use the AWS plugin, and the “resource” section where you define the instance you’re creating.

Here I’ve set the region to ap-southeast-2, which is the AWS region for Sydney, and in the resource section I’ve specified a t2.micro instance type, the same as we used previously. I’ve labelled the resource “rhel_server”, given the instance a Name tag of “RHEL Test Server” and provided an AMI.

The AMI is the unique identifier that AWS uses to determine the OS image you want. Each image in the AMI marketplace has a different AMI code, which you can find from the console in the same location where you select the OS you want to use.
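
If you’d rather not click through the console, the awscli can also search for AMIs. As a sketch, the following looks up the most recent RHEL 9 image; the owner ID is the account I believe Red Hat publishes official images from, which you should verify for your region before relying on it:

```shell
$ aws ec2 describe-images --owners 309956199498 \
    --filters "Name=name,Values=RHEL-9*" "Name=architecture,Values=x86_64" \
    --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
    --output text
```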

Once you’ve created the main.tf file you’ll need to initialise the terraform directory by running ‘terraform init’. This command reads the configuration file and downloads the required provider plugins.

Next you should run ‘terraform fmt’ and ‘terraform validate’ to properly format and validate your configuration.

Type ‘terraform apply’ to run your configuration. Terraform will evaluate the current state of your AWS infrastructure and determine what needs to be done to match your configuration. After a few moments Terraform will present its execution plan and ask you to confirm. Type ‘yes’ if the output looks good.
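
If you want to preview the changes first without being prompted to apply anything, ‘terraform plan’ prints the same execution plan without touching your infrastructure:

```shell
$ terraform plan
```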

If you see the green output at the bottom of the screen saying ‘Apply complete’, you’ve successfully deployed an image to AWS using Terraform. Typing ‘terraform show’ will show you the current state of the deployed instance; add the -json flag if you’d prefer machine-readable output.

And you can also check the AWS console to confirm.

Once you’re finished with the instance, type ‘terraform destroy’ to clean up and terminate the running instance. Terminating instances when you’re finished with them is the best way to keep your AWS costs low, so you don’t get billed thousands of dollars at the end of the month.

Using Terraform to deploy infrastructure to the cloud is really powerful and flexible, allowing you to define exactly what you want running without having to manually deploy resources.

Categories
Cloud Computing

Deploying Enterprise Linux in AWS

In a previous post I discussed installing Enterprise Linux in a virtual machine, this time I wanted to write about deploying a server to the cloud.

Cloud computing platforms like Amazon’s AWS allow you to build and run all kinds of infrastructure and services on-demand without having to purchase and maintain expensive physical computing hardware. You can deploy a server in minutes and have the capability to scale your workload as much as you need. I’ve been running production servers in AWS for a few years now and of all the cloud platforms, it’s the one I’m most familiar with.

I assume you’ve registered for an AWS account already, if not, head over to https://aws.amazon.com and set one up.

AWS is huge. There are services and infrastructure available for pretty much anything you can imagine, from basic Linux and Windows servers to machine learning and AI and everything in between. It can be quite overwhelming the first time you log into the AWS console, however for this post we can safely ignore almost all of it.

Amazon’s EC2 is a service that lets us deploy server images, whether those are images AWS has supplied, community-contributed images, or something we’ve built ourselves. An image is essentially a preconfigured operating system, much like what we built previously when installing Linux in a virtual machine, that we can deploy from the AWS console.

From the Services menu, or the list of All Services above, select EC2.

You should see a dashboard that looks like the following.

Again, for the most part you can ignore everything there for now.

Click on the big orange button that says “Launch Instance”.

Here you get to select the operating system you want to deploy. Depending on what you want, feel free to select any of the Quick Start OS images; most of them are self-explanatory, such as Windows, macOS or Ubuntu. Amazon Linux is a variant of Linux based closely on Red Hat / CentOS that has been tuned to work well with AWS.

I’m going to select Red Hat Enterprise Linux 9.

Give your instance a name. For this example I’ll just call mine “RHEL Server”, but you should give your server a name that matches its purpose or another easily identifiable name.

Next you’ll want to select the instance type. This is basically the size of the server you want. AWS has a huge range of instance types to choose from depending on the purpose of the server you’re building. Be careful, though: the instance type dictates much of the price you’ll pay each month, so don’t spin up massive servers unless you know you can pay for them.

In this example I’m going to deploy a t2.micro instance, which is Free Tier eligible. That’s perfect because it means I can deploy it, play around for a bit, and then shut it down when I’m ready without paying for it.

Below the instance type you want to select the key pair you’ll use to connect to the instance. The key pair allows you to securely connect to the instance using SSH without having to use passwords and is required by AWS. If you haven’t already set one up, do so now by clicking the link next to the dropdown. Make sure you download the key you create and store it somewhere safe.

Further down in the Network settings, I’m going to select “Allow SSH traffic from” and select “My IP” from the dropdown. This just restricts access to the server from the Internet to only the IP address you’re connecting to the console from.

If you’re setting this up from home you likely have an IP address assigned dynamically by your ISP. For home Internet access this is mostly fine, but it can cause issues when setting up AWS server access: if your ISP changes your home IP address you can be locked out of your server. For this example, it’s fine.

I’ve deselected the “Allow HTTP traffic from the Internet” checkbox as I won’t be setting up a web server at the moment.

That should be it for the basic configuration. If everything looks OK, you can click the orange “Launch Instance” button.

After a few seconds you should see your running instance appear.

Your server is now ready for use. The base images only have the bare minimum software installed to give you a useful OS. From here you’ll need to configure the applications and services that you’ll be running.

To connect to the server, open a terminal and establish an SSH connection. Remember, we’ve restricted access to only our own IP address and we need to use the Key that was configured earlier. The default username that’s created with the image is ec2-user.

$ ssh ec2-user@[IP-Address] -i ~/key.pem
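
If SSH complains that your private key permissions are too open, restrict the key file so only your user can read it:

```shell
$ chmod 400 ~/key.pem
```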

You now have a Red Hat Enterprise Linux server running in the AWS Cloud. From here you can configure your server to do anything you want, you could even try using Ansible for configuration management.

Once you’ve finished playing with your server, make sure you remember to terminate the instance so that you keep your AWS bill under control.

There’s much more to setting up and running production servers in AWS than just what I covered in this post, however this is a pretty good starting point for getting a server up and running quickly.

Categories
System Administration

Installing Enterprise Linux

In this post I’m going to demonstrate the installation of Enterprise Linux in a Virtual lab environment.

I’ll be installing both Red Hat Enterprise Linux 9, because it’s the leading Enterprise Linux distribution, and CentOS Stream 9, because it’s the upstream community release of RHEL. The steps outlined here should be the same for all variants of Red Hat Enterprise Linux, including Oracle Linux, Rocky Linux and AlmaLinux.

I’ve chosen Enterprise Linux as it’s commonly deployed in datacentres and server infrastructure for large companies across the world. Being familiar with Red Hat technologies is super important if you want to work with Linux professionally. Even as a professional, having a home lab or test environment lets you play and experiment without messing with your production environment.

Red Hat Linux was the first Linux distribution I ever used back in 2002 with Red Hat Linux 7.3. I’ve primarily used Red Hat based Linux distributions for both home and work ever since.

First of all, download the CentOS installation ISO from the official website. I’ll be installing version 9 for x86_64. You can select any of the ISO images; the DVD ISO, for example, contains the entire distribution at around 10GB. I’ll be selecting the boot ISO as it’s a smaller initial download and I don’t need every package that comes on the full DVD.

We’ll also need the RHEL installation ISO from Red Hat. To get access to the Red Hat Enterprise Linux software you’ll need to register for a free developer account at https://developers.redhat.com/. The developer account allows you to run up to 16 RHEL machines for free, which is perfect for a home lab or test environment. Once you’ve registered for the developer account, head over to https://developers.redhat.com/products/rhel/download and download the ISO.

Unless you’re installing Linux on a physical machine, which is great (I personally run Fedora Linux on my main personal machine), I’m assuming you’re using a lab environment, so you should have VirtualBox or similar virtual machine software installed. I’ll be using Proxmox, but you could just as easily use VMware, Qemu/KVM or VirtualBox. The installation steps covered below will also work if you’re installing Linux on a bare metal PC.

I already have a Proxmox home lab running on 2 old machines so I won’t go into setting up Proxmox here. Whatever virtualisation software you’re using, create a VM and configure it appropriately. I’m using the defaults of 2GB RAM, 32GiB storage and 1 CPU.

Once the virtual machine has been created you can click the “Start” button to boot the virtual machine and start up the installation process.

When the machine boots up you will be presented with the “Welcome” screen. Select your language if you need to change the default and click Continue.

On the Installation Summary screen you’ll likely see a bunch of red warning messages. This is completely normal; it’s just the installer letting you know what still needs to be configured before continuing with the installation. The installation steps for RHEL and CentOS are the same, except for one additional step when installing RHEL: registering the installation with your Red Hat subscription.

Putting the licensing requirements aside, registering your machines with Red Hat as a developer or home lab user gives you access to the Red Hat software repositories for installing additional applications, and has the added benefit of giving you access to things like Red Hat Insights, letting you explore and use the same tools and software that Red Hat offers to enterprise customers, completely for free. This is awesome because if you’re working in the industry as a Linux systems administrator, having experience with Red Hat’s products beyond RHEL is very helpful.

Let’s quickly register our Red Hat system so we can continue with the installation. I registered with my developer account information but deselected the system purpose and Insights checkboxes; we can change these options later once we start experimenting with our systems.

Next, select the Network & Host Name option. If your Host PC is connected to the Internet, which I’m assuming it is as you’re reading this, you should just be able to toggle the Ethernet button to On and the network should auto-connect.

If you also want to give your machine a hostname instead of localhost.localdomain, this is where you can do that. I’m not going to do that here as it’s easy to change later. Click Done.

Next, if Installation Source is still red you’ll need to configure it. This decides where the packages will be installed from. If you downloaded the full DVD ISO you can likely install from the local source, but if, like me, you downloaded the boot ISO you may need to select the remote location to download the rest of the packages from. Both RHEL and CentOS should pick a mirror automatically once the network is connected, but if not you’ll need to search online for your closest mirror URL.

Next select Software Selection. This screen is where you pick which packages or package groups to install. I’m going to pick “Minimal Install” for both machines, though feel free to pick an environment that suits your purposes.

Next you’ll need to configure the Installation Destination. This is to prepare the virtual hard drive (or the physical hard drive if you’re installing this on bare metal) for the file system. By default, the installation will configure Logical Volume Manager (LVM) and use the entire disk, which is normally fine unless you know you need to configure manual disk partitions. In a production environment you might want to create separate partitions for /usr, /var, /tmp and /opt depending on the purposes of the server to allow you to manage the storage appropriately.
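
For reference, that kind of manual layout is often expressed in Kickstart syntax when automating installs. This is only a sketch with assumed volume names and sizes, not a recommendation for any particular workload:

```
# Kickstart-style storage layout (sizes in MiB; names and sizes are assumptions)
part /boot --fstype=xfs --size=1024
part pv.01 --size=1 --grow
volgroup vg_system pv.01
logvol /    --vgname=vg_system --name=lv_root --size=10240
logvol /var --vgname=vg_system --name=lv_var  --size=8192
logvol swap --vgname=vg_system --name=lv_swap --size=2048
```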

Ensure ‘Automatic’ is selected under Storage Configuration and select Done and then Accept Changes.

Finally we’ll configure the user accounts. I’m going to leave the root account disabled and create a non-root user with administrative privileges.

When creating a standard user account, make sure to select “Make this user administrator”, which will add your user account to the “wheel” group, allowing you to perform administrative tasks without logging in directly as root. Click Done.

That should be the final configuration step you need to perform.

If there is no more red text on the Summary screen you can click “Begin Installation” to continue. Depending on whether you’re installing from the DVD ISO or over the network, which software packages you chose to install, and your Internet speed, the installation might take a while.

Once the installation is complete you can reboot the machines.

Login as your non-root user.

The final thing I’ll do is make a note of each machine’s IP address so that I can connect to them via SSH instead of interacting with them through the Proxmox or virtual machine console. Run the following command on each machine:

$ ip a

I then add them both to the ~/.ssh/config file on my desktop machine. The config file would look similar to this:

Host rhel
	Hostname 192.168.1.2
	User dave

Host centos
	Hostname 192.168.1.3
	User dave
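
With those entries in place, connecting is as simple as using the alias:

```shell
$ ssh rhel
```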

Congratulations and enjoy Enterprise Linux.