Categories
System Administration

Converting from RHEL to AlmaLinux

The best thing about open source, and one of the reasons why I love the entire Linux ecosystem, is choice. With open source software you have the ability to choose what OS or software you run, how you run it, and what you can do with it. If you don’t like the decisions that have been made, or you want to do things in a different way, you have total freedom to do something about it or find an alternative.

And I think vendor lock-in can be really bad for innovation, and for freedom. One of my long-time goals has always been to build my own Linux distribution, not to compete with anyone or because I think I can do things any better, but really just for choice. To say I can, to gain the skills to be able to do it, and so that if anything should ever happen to the software I use, I can do something about it.

AlmaLinux is an entirely free, community-driven Enterprise Linux distribution, binary-compatible with Red Hat Enterprise Linux, that started life to fill the gap when CentOS Linux was discontinued. I’m personally a big fan of Red Hat Linux and the range of Red Hat-alike products. Red Hat Linux 7.3 was the first Linux distribution I ever used and I’ve mostly run Fedora or CentOS on my personal machines ever since.

I really like the path that AlmaLinux chose to take with its distribution. I think the Linux world needs a community Enterprise OS, I think the changes to CentOS, CentOS Stream and Red Hat Enterprise Linux make sense, and I think it’s important to try to work together cooperatively. Linux started life as a cooperative project; open source developers and companies like Red Hat have both made Linux into what it is today.

In this article I wanted to demonstrate converting from Red Hat Enterprise Linux to AlmaLinux. In a previous post I wrote about converting from an alternative Enterprise Linux (Oracle Linux) to Red Hat, so in the spirit of sharing and freedom of choice, I thought I’d go the opposite direction this time.

Just to be clear, this is not a comment either way about Red Hat or any of the alternative Enterprise Linux distributions. I’m not trying to say one is better than another, or that you should pick one over another. I actually really value having the ability to choose. Red Hat make a fantastic product; upstream Fedora Linux, built by the open source community, is a fantastic distribution and the one I run on my personal machine; and all of the CentOS/RHEL binary-compatible forks and downstream distributions are excellent as well in my opinion.

So I’m starting with a fresh Red Hat Enterprise Linux 9 server running in Proxmox. I installed this machine from an ISO I downloaded from Red Hat and it’s registered with my developer subscription, which I did during the install.

For the sake of it, I’ve also got an Apache web server running.

AlmaLinux, like most of the Enterprise Linux distributions, have provided a migration script to convert from one distribution to another, similar to the Convert2RHEL tool from Red Hat.

Download the script from GitHub, and run it.

$ curl -O https://raw.githubusercontent.com/AlmaLinux/almalinux-deploy/master/almalinux-deploy.sh

$ sudo bash almalinux-deploy.sh

Running the script shows that the migration is supported, and it starts doing its thing.

The migration took about the length of time it took me to make a coffee and some toast, though admittedly this is a small server with no real data or applications running, so your mileage may vary. Still it was quite quick and painless.

Rebooting the server shows everything seemed to work correctly.
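One quick way to verify the conversion, beyond everything simply booting, is to check /etc/os-release, which on a converted system should now identify the OS as AlmaLinux:

$ cat /etc/os-release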

Categories
System Administration

Converting from Oracle Linux/Rocky/CentOS to RHEL

In this post I wanted to demonstrate converting from one of the RHEL compatible Enterprise Linux distributions, like CentOS, Rocky Linux or Oracle Linux, to Red Hat Enterprise Linux. I’ll be demonstrating from an Oracle Linux server running in Proxmox, however these steps will work regardless of the RHEL compatible distribution you are starting from as long as it’s one of the supported systems.

You must be running one of the RHEL compatible distributions already. The conversion does not work from other Linux distributions, such as Debian or Ubuntu, and it also does not work if you’re starting from CentOS Stream.

There are really two main prerequisites that you’ll need before attempting this conversion.

  1. You are already running a RHEL compatible distribution of at least version 7. If you have an older distribution you’ll need to upgrade first.
  2. You have a valid RHEL subscription. I’ll be using the free Developer subscription, which entitles you to run up to 16 Red Hat Enterprise Linux installations, as well as access other Red Hat products.

If you need to upgrade from an older version of RHEL or CentOS, you should check out the leapp upgrade tool.
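The leapp workflow itself is fairly simple. As a rough sketch (package names and steps can vary between releases, so check the upgrade documentation for your version):

$ sudo dnf install leapp-upgrade
$ sudo leapp preupgrade
$ sudo leapp upgrade
$ sudo reboot

The preupgrade step generates a report of anything that needs fixing before the actual upgrade, similar to the analyze step used by convert2rhel.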

DISCLAIMER: Now before we start, I am running this in my homelab environment, not a production environment. I’m doing this as a trial run before I do an actual Oracle Linux to RHEL conversion in production for myself. This is for my own experience and practice, and I advise you to practice the conversion in test environments before running on production servers too. I’m not responsible for any damage done to your own servers. If you run into any issues and you have an active RHEL subscription you should contact Red Hat for support.

I have a running Oracle Linux 8 system that I’ll be using to convert to RHEL 8.

I’ve also set this server up to host a simple WordPress website with Apache, PHP 8, and MariaDB.

The first step in converting to RHEL is to download the convert2rhel tool. I’ve downloaded the tool from GitHub and installed it on the Oracle Linux machine. If you’re converting from CentOS and can connect your system to Red Hat Satellite or console.redhat.com, you will be able to enable the Convert2RHEL repos and manage the conversion from there.

$ wget https://github.com/oamg/convert2rhel/releases/download/v2.1.0/convert2rhel-2.1.0-1.el8.noarch.rpm

$ sudo dnf localinstall ./convert2rhel-2.1.0-1.el8.noarch.rpm

Once it’s installed, run convert2rhel analyze to determine if the system can be converted.

$ convert2rhel analyze

After a minute or so, the analyze tool spat out a whole bunch of red error messages, but this is the point of analyzing first. My issues were the running firewalld service and the default Oracle Linux kernel. Oracle Linux generally installs the Unbreakable Enterprise Kernel (UEK), which isn’t compatible with the conversion, so I’ll need to fix both of those before continuing.

I’ll fix the kernel issue first.

$ grubby --default-kernel

/boot/vmlinuz-5.15.0-209.161.7.2.el8uek.x86_64

$ grubby --info=ALL | grep ^kernel

kernel="/boot/vmlinuz-5.15.0-209.161.7.2.el8uek.x86_64"
kernel="/boot/vmlinuz-4.18.0-553.16.1.el8_10.x86_64"
kernel="/boot/vmlinuz-0-rescue-83229607f01f471dbd78c219e5e4fc07"

The default kernel on the system is the UEK kernel, but there’s a Red Hat kernel already installed, so I’ll set that one as default and reboot.

$ grubby --set-default /boot/vmlinuz-4.18.0-553.16.1.el8_10.x86_64

$ grubby --default-kernel

/boot/vmlinuz-4.18.0-553.16.1.el8_10.x86_64

Once the machine has rebooted, I’ll re-run the analyze command.

The kernel error message has been resolved, so now it’s just the firewalld error. There’s also an error message about the system not being connected to a Red Hat subscription, which is fine; we’ll fix that shortly.

I’m just going to run the suggested commands to resolve the firewalld error.

$ sed -i -- 's/^CleanupModulesOnExit.*/CleanupModulesOnExit=no/g' /etc/firewalld/firewalld.conf

$ firewall-cmd --reload

In the convert2rhel man page there are a few options for authenticating to the Subscription Manager, including passing your username and password or an activation key at the command line. The option I’m going to use is the -c option for specifying a config file. The convert2rhel tool has installed a config file at /etc/convert2rhel.ini which you can copy and override.

$ cp /etc/convert2rhel.ini ~

[subscription_manager]
username       = <insert_username>
password       = <insert_password>

I’ve copied the ini file to my root user’s home directory and updated it with my RHSM subscription details.

I’ll re-run the analyze tool, hopefully one last time, to check the subscription and then we should be good to go.

$ convert2rhel analyze -c /root/convert2rhel.ini

Everything looked good. There was a warning about third-party packages being installed, because it detected the convert2rhel tool itself, which wasn’t installed from a supported repository; I’m going to ignore that. Otherwise, let’s do this.

$ convert2rhel -c /root/convert2rhel.ini

The conversion took a few minutes, but by the end it completed successfully, so I’ll reboot the machine.

The system rebooted into Red Hat Enterprise Linux and everything looks great.

I checked that the system was registered correctly and changed the hostname, because I previously had it set to oracle-linux.localnet.com.
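Changing the hostname is a one-liner with hostnamectl; the new name here is just an example:

$ sudo hostnamectl set-hostname rhel8.localnet.com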

$ subscription-manager status
+-------------------------------------------+
   System Status Details
+-------------------------------------------+
Overall Status: Disabled
Content Access Mode is set to Simple Content Access. This host has access to content, regardless of subscription status.

System Purpose Status: Disabled

This is a test machine, so the subscription status is fine. I then installed the insights-client and registered the system.

$ dnf install insights-client
$ insights-client --register

Lastly, I’ll check that the WordPress installation is still working.

I noticed the hostname of the httpd configuration was still the previous oracle-linux hostname, so I’ll change that.

Everything looks great. The conversion from Oracle Linux to Red Hat Enterprise Linux was successful.

Categories
System Administration

Exploring OpenShift

OpenShift is Red Hat’s container orchestration platform built on top of Kubernetes. I love working with containers and Kubernetes, and as I’m also a big fan of Red Hat technologies I wanted to become more familiar with working with OpenShift.

I’ve also been studying Red Hat’s OpenShift training courses, including OpenShift Developer and OpenShift Administrator, so it makes sense to have an environment to work in and deploy some applications.

I’m going to install OpenShift Local on my Fedora Linux development machine.

First you’ll need to download the crc tool from Red Hat. You’ll need a developer account and to log in to https://console.redhat.com/openshift/create/local to access the software.

Click the “Download OpenShift Local” button.

$ cd ~/Downloads
$ tar xvf crc-linux-amd64.tar.xz

Once it’s downloaded and you’ve extracted the tar archive, copy the crc tool to your user’s home directory.

$ mkdir -p ~/bin
$ cp ~/Downloads/crc-linux-*-amd64/crc ~/bin

If your user’s bin directory isn’t in your $PATH, you’ll need to add it.

$ export PATH=$PATH:$HOME/bin
$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc

Next, set up OpenShift with the crc setup command.

$ crc setup

This will take a while as it needs to download about 4GB for the OpenShift environment.

Once it’s downloaded, start the local cluster.

$ crc start

The crc tool will ask for the pull secret as it’s required to pull the OpenShift images from Red Hat. You can get the secret from console.redhat.com.

Copy the pull secret and paste it into your terminal when asked.

If everything worked correctly you should now have a running OpenShift instance on your machine.

The crc tool will give you a URL and login information once it’s ready. Open up your browser and log in to the dashboard.
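If you ever need the console URL and credentials again, the crc tool can print them on demand:

$ crc console --credentials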

You can change between the Administrator and the Developer perspective by clicking the dropdown under “Administrator”. This doesn’t change your permissions, but it changes the view of the dashboard to make it more focused on either administrative tasks or developer tasks.

Containers and Kubernetes are an important technology for modern application deployment and OpenShift is a really powerful addition to Kubernetes. I’m really enjoying getting more familiar with OpenShift.

Categories
System Administration

Building a Custom Ansible Execution Environment

Recently I’ve been working on an Ansible upgrade project that included building out an Ansible Automation Platform installation and upgrading legacy Ansible code to modern standards. The Ansible code that we were working with had been written mostly targeting Enterprise Linux versions 6 and 7 and was using pre-2.9 Ansible coding standards.

The newer versions of Ansible and Ansible Automation Platform utilise Execution Environments to run the Ansible engine against a host. An Execution Environment is a container image with the Ansible dependencies, Python libraries and Ansible Collections baked in.

On top of the legacy Ansible code that I was working with, the codebase does a lot of “magic” configuration for setting things up across the environment, so I had to make sure that everything worked like it did previously. I tested a few of the off-the-shelf execution environments, none of which worked for what we needed.

In this post I wanted to detail a quick tutorial on building a custom execution environment for running your Ansible code.

I’m using Fedora Linux 39 to set up a development environment, but most Linux distributions should follow similar steps.

From the command line, install the required dependencies. As execution environments are containers, we need a container runtime and for that we’ll use Podman. We also need some build tools.

$ sudo dnf install podman python3-pip

Now to install the Ansible dependencies.

$ python3 -m pip install ansible-navigator ansible-builder

Ansible Navigator is the new interface for running Ansible and is great for testing out different execution environments and your Ansible code as you develop. I briefly demonstrated using Ansible Navigator in my article about using Ansible to configure Linux servers. Ansible Builder provides the tools to create the container images.

If you’ve ever built Docker containers before, the steps for EEs are very similar, just with the Ansible Builder wrapper. Create a folder to store your files.

$ mkdir custom-ee && cd custom-ee

The main file we need to create is the execution-environment.yml file, which Ansible builder uses to build the image.

---
version: 3

images:
  base_image:
    name: quay.io/centos/centos:stream9

dependencies:
  python_interpreter:
    package_system: python3.11
    python_path: /usr/bin/python3.11
  ansible_core:
    package_pip: ansible-core>=2.15
  ansible_runner:
    package_pip: ansible-runner

  galaxy: requirements.yml
  system: bindep.txt
  python: |
    netaddr
    receptorctl

additional_build_steps:
  append_base:
    - RUN $PYCMD -m pip install -U pip
  append_final:
    - COPY --from=quay.io/ansible/receptor:devel /usr/bin/receptor /usr/bin/receptor
    - RUN mkdir -p /var/run/receptor
    - RUN git lfs install --system
    - RUN alternatives --install /usr/bin/python python /usr/bin/python3.11 311

The main parts of the file are fairly self-explanatory, but from the top:

  • We’re using version 3 of the ansible builder spec.
  • The base container image we’re building from is CentOS stream 9 pulled from Quay.io.
  • We want to use Python 3.11 inside the container.
  • We want an Ansible core version higher than 2.15.

In the dependencies section, we can specify additional software our image requires. The galaxy entry lists Ansible collections from the Galaxy repository, system lists the software installed using DNF on a Linux system, and python lists the Python dependencies we need, since Ansible is written in Python and requires certain libraries depending on your requirements.

The Galaxy collections are being defined in an external file called requirements.yml which is in the working directory with the execution-environment.yml file. It’s simply a YAML file with the following entries:

---
collections:
  - name: ansible.posix
  - name: ansible.utils
  - name: ansible.netcommon
  - name: community.general

My project requires the ansible.posix, ansible.utils, ansible.netcommon and community.general collections. Previously, all of these would have been part of the Ansible codebase and installed along with Ansible; however, the Ansible project has decided to split them out into separate collections, making Ansible core smaller and more modular. You might not need these exact collections, or you might require different ones depending on your environment, so check out the Ansible documentation.

Next is the bindep.txt file for the system binary dependencies. These are installed in our image, which is CentOS, using DNF.

epel-release [platform:rpm]
python3.11-devel [platform:rpm]
python3-libselinux [platform:rpm]
python3-libsemanage [platform:rpm]
python3-policycoreutils [platform:rpm]
sshpass [platform:rpm]
rsync [platform:rpm]
git-core [platform:rpm]
git-lfs [platform:rpm]

Again, you might require different dependencies, so check the documentation for the Ansible modules you’re using.

Under the python section, I’ve defined the Python dependencies directly rather than using a separate file. If you need a separate file, it’s called requirements.txt.

    netaddr
    receptorctl

Netaddr is the Python library for working with IP addresses, which the Ansible codebase I was working with needed, and receptorctl is a Python library for working with Receptor, a network service mesh implementation that Ansible uses to distribute work across execution nodes.

With all of that defined, we can build the image.

$ ansible-builder build --tag=custom-ee:1.1

The custom-ee tag is the name of the image that we’ll reference from Ansible. The ansible-builder command runs Podman to build the container image; the build should take a few minutes. If everything went according to plan, you should see a success message.

Because the images are just standard Podman images, you can run the podman images command to see it. You should see the output display ‘localhost/custom-ee’ or whatever you tagged your image with.

$ podman images

If the build was successful and the image is available, you can test the image with Ansible Navigator. I’m going to test with a minimal RHEL 9 installation that I have running. In the ansible-navigator command, you can specify the --eei flag to change the EE from the default, or you can add a directive in an ansible-navigator.yml file in your Ansible project, such as the following:

ansible-navigator:
  execution-environment:
    image: localhost/custom-ee:1.1
    pull:
      policy: missing
  playbook-artifact:
    enable: false

If you’re using Ansible Automation Platform you can pull the EE from a container registry or Private Automation Hub and specify which EE to use in your Templates.

$ ansible-navigator run web.yml -m stdout --eei localhost/custom-ee:1.1

You can also inspect the image with podman inspect, using the image hash from the podman images command.

$ podman inspect 8e53f19f86e4

Once you’ve got the EE working how you need it, you can push it to either a public or private container registry for use in your environment.
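Pushing works like any other Podman image. As a sketch, assuming you’re pushing to a Quay.io account (the namespace here is a placeholder; substitute your own registry and namespace):

$ podman login quay.io
$ podman tag localhost/custom-ee:1.1 quay.io/my-namespace/custom-ee:1.1
$ podman push quay.io/my-namespace/custom-ee:1.1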

Categories
System Administration

Building a Minecraft Server With Ubuntu Linux

I’ve recently built a Minecraft server in my homelab to give my kids something new to play with. All of my kids love playing Minecraft and we already have multiple copies on all of the different devices in the house.

In this post I wanted to explain the steps to getting a Minecraft server running on Ubuntu Linux. I have an Ubuntu 24.04 LTS virtual machine running in Proxmox with 4GB of RAM and 50GB storage, which should hopefully be enough for hosting the kids playing together.

Minecraft requires Java to run. Install the latest version of OpenJDK:

sudo apt install openjdk-21-jdk-headless -y

Verify the installation:

java -version

For security reasons, create a new user specifically for running the Minecraft server:

sudo adduser --system --home /opt/minecraft --shell /bin/bash minecraft

Navigate to the Minecraft user’s home directory and download the latest Minecraft server jar file from https://www.minecraft.net/en-us/download/server

sudo su - minecraft 
mkdir /opt/minecraft/server
cd /opt/minecraft/server

wget https://piston-data.mojang.com/v1/objects/145ff0858209bcfc164859ba735d4199aafa1eea/server.jar -O server.jar

First, run the server to generate the eula.txt file:

java -Xmx1024M -Xms1024M -jar server.jar nogui

Accept the EULA by editing the eula.txt file:

vim eula.txt

Change eula=false to eula=true and save the file.

Open the server.properties file to configure your server settings:

vim server.properties

Here, you can customize settings such as server name, game mode, difficulty, and more.
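As an example, these are the kinds of settings I tend to change for a small family server. The values here are just a sketch; the full list of options is documented on the Minecraft wiki:

motd=Family Minecraft Server
gamemode=survival
difficulty=easy
max-players=10
white-list=true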

To manage the Minecraft server easily, create a systemd service file:

sudo vim /etc/systemd/system/minecraft.service

Add the following content:

[Unit]
Description=Minecraft Server
After=network.target

[Service]
User=minecraft
WorkingDirectory=/opt/minecraft/server
ExecStart=/usr/bin/java -Xmx1024M -Xms1024M -jar server.jar nogui
Restart=on-failure

[Install]
WantedBy=multi-user.target

Reload systemd to apply the new service:

sudo systemctl daemon-reload

Start the Minecraft server:

sudo systemctl enable --now minecraft

If you have a firewall enabled, allow traffic on Minecraft’s default port (25565):

sudo ufw allow 25565/tcp
sudo ufw allow ssh
sudo ufw enable

Your Minecraft server should now be up and running. To connect, open Minecraft on your client machine, go to Multiplayer, and add your server using your Ubuntu machine’s IP address.

Congratulations! You’ve successfully set up a Minecraft server on Ubuntu Linux. You can now customize your server further, add plugins, or even create scripts to automate server management tasks. Happy gaming!

Categories
System Administration WordPress

Installing WordPress Multisite on Linux

WordPress is one of the most popular content management systems in the world, estimated to be running over 40% of all websites. It’s easy to use, has a massive theme and plugin ecosystem, and it’s super flexible for both personal websites and businesses.

I’ve been using WordPress since the start of my career over 15 years ago. It’s always been my first choice as a Website platform and is the option that I recommend for many use cases.

In this post I’ll outline the steps for installing and configuring WordPress as a multisite. WordPress Multisite is a powerful feature that allows you to run multiple WordPress sites from a single WordPress installation. This is particularly useful for businesses or individuals who need to manage multiple websites but want the convenience of a single dashboard. With Multisite, you can create a network of sites with shared plugins and themes, making it easier to maintain consistency and control across all your sites.

I built my first WordPress Multisite a few years ago, around 2018, when I worked as Technical Lead for a publishing company. We had 30 websites, all with similar functionality and design, that I thought would suit the multisite features. It was painful having to manage all 30 sites as separate instances with separate user logins, and having to keep all 30 up to date. Moving the sites into a multisite reduced the maintenance load and the cost of running that number of websites, and since then I’ve built and managed several fairly large multisite installations.

I’ve chosen to install WordPress Multisite on an Enterprise Linux distribution such as CentOS or AlmaLinux. Enterprise Linux distributions are well known for their stability, security, and support, and are used heavily in web hosting. Using any of these distributions as a web server for WordPress ensures a reliable and secure hosting environment, critical for maintaining website performance.

Before you begin, ensure you have the following:

  • A server running RHEL, Oracle Linux, CentOS, or AlmaLinux.
  • A non-root user with sudo privileges.

I won’t detail where to host your server in this article. I assume, for the sake of convenience, that you’ve already got a Linux server running either in a public cloud such as Amazon AWS or on a virtual machine running on your own PC. Any of the Enterprise Linux variants should do fine; the steps should be the same regardless.

Install Apache, PHP and MySQL

Apache is one of the most popular web servers in the world and is well-supported by WordPress and the web development community. It’s usually the default choice when installing WordPress.

$ sudo dnf install httpd -y
$ sudo systemctl enable --now httpd

MariaDB is a popular open-source relational database management system and is fully compatible with MySQL, which is the database system that WordPress uses.

$ sudo dnf install mariadb-server mariadb -y
$ sudo systemctl enable --now mariadb

Run the following command to secure your MariaDB installation.

$ sudo mysql_secure_installation

Follow the prompts to set the root password and remove anonymous users, disallow root login remotely, remove test databases, and reload privilege tables.

PHP is the programming language that WordPress is built on. Your server needs to have PHP installed so that it can execute the code and produce your website. By default, Enterprise Linux 8 will install PHP 7.2, which is a bit old and past its end-of-life, so we’ll install PHP 8.2 from Application Streams instead.

$ sudo dnf module list php
$ sudo dnf module enable php:8.2
$ sudo dnf install php php-mysqlnd -y
$ sudo systemctl restart httpd

Log in to MariaDB and create a database and user for WordPress.

$ sudo mysql -u root -p

Then, run the following commands.

CREATE DATABASE wordpress;
GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpressuser'@'localhost' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
EXIT;

Change ‘wordpressuser’ and ‘password’ to a username and secure password of your choice. These are the details that WordPress will use to connect to the database, so it’s critical that they are kept secure.

Navigate to the web root directory and download the latest WordPress package.

$ cd /var/www/
$ sudo rm -rf html/
$ sudo wget https://wordpress.org/latest.tar.gz
$ sudo tar -xzvf latest.tar.gz
$ sudo mv wordpress html
$ sudo chown -R apache:apache /var/www/html
$ sudo chmod -R 755 /var/www/html

Create an Apache configuration file for WordPress.

$ sudo vim /etc/httpd/conf.d/wordpress.conf

Add the following configuration.

<VirtualHost *:80>
    ServerAdmin [email protected]
    DocumentRoot /var/www/html
    ServerName example.com
    ServerAlias www.example.com

    <Directory /var/www/html>
        Options FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>

    ErrorLog /var/log/httpd/wordpress-error.log
    CustomLog /var/log/httpd/wordpress-access.log combined
</VirtualHost>

Include the email address and domain name you wish to use for your website. In a WordPress multisite the domain name will be the primary network domain, however when you add subsites to the network you can change the subsite domain names from the WordPress dashboard.

Configuring SELinux

After configuring Apache, you might need to configure SELinux to allow Apache to serve WordPress files and communicate with the database. If you’ve followed the above configuration and your website doesn’t appear to work, or you’re getting forbidden error messages, you will need to complete the following configuration.

Note: Many people online suggest disabling SELinux out of convenience. Please don’t do that. SELinux is built into the Linux kernel to provide security access controls and is important for maintaining the security of your Linux system; it’s better to configure SELinux properly than to disable it.

Set the proper file context for WordPress files

$ sudo semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/html(/.*)?"

$ sudo restorecon -Rv /var/www/html

Allow Apache to connect to the database

$ sudo setsebool -P httpd_can_network_connect_db 1

Restart Apache

$ sudo systemctl restart httpd

You should now be able to access your WordPress installation. Open your web browser and navigate to http://your-server-ip. Follow the on-screen instructions to complete the initial installation.

If everything went correctly and the installation was able to connect to the database properly, you should now be able to log in to your WordPress admin dashboard by navigating to http://your-server-ip/wp-admin.

After the initial installation, follow these steps to configure WordPress Multisite with subdomains:

Edit wp-config.php: Add the following lines above the line that says /* That's all, stop editing! Happy publishing. */:

define('WP_ALLOW_MULTISITE', true);

Install the Network: Refresh your WordPress dashboard. Navigate to Tools -> Network Setup. Select “Sub-domains” and click “Install”.

You will most likely need to enable the Apache mod_rewrite module to allow for WordPress to rewrite the site URLs.

Enter the following command, and if there’s no output you’ll need to enable the module by editing the file /etc/httpd/conf.modules.d/00-base.conf and uncommenting the line for rewrite_module.

$ httpd -M | grep rewrite
 rewrite_module (shared)

Update wp-config.php and .htaccess: WordPress will provide some code to add to your wp-config.php and .htaccess files. Edit these files and add the provided code. In wp-config.php, add the following just below the line define('WP_ALLOW_MULTISITE', true);:

define('MULTISITE', true);
define('SUBDOMAIN_INSTALL', true);
define('DOMAIN_CURRENT_SITE', 'example.com');
define('PATH_CURRENT_SITE', '/');
define('SITE_ID_CURRENT_SITE', 1);
define('BLOG_ID_CURRENT_SITE', 1);

.htaccess:

Replace the existing WordPress rules with the provided Multisite rules:

RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]

# add a trailing slash to /wp-admin
RewriteRule ^wp-admin$ wp-admin/ [R=301,L]

RewriteCond %{REQUEST_FILENAME} -f [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^ - [L]
RewriteRule ^(wp-(content|admin|includes).*) $1 [L]
RewriteRule ^(.*\.php)$ $1 [L]
RewriteRule . index.php [L]

If necessary, you might need to restart Apache.

$ sudo systemctl restart httpd

You should be able to refresh your web browser and log in to WordPress again, and if everything was successful, you should be able to access the Network Admin section of the WordPress dashboard.

From here you can add sub-sites, you can install themes and plugins that all sub-sites will be able to use, and you’ll be able to manage user access across each site in your network.

Configuring the Linux Firewall

To ensure that your WordPress site is accessible from the web, you will need to configure the firewall on your server to allow HTTP and HTTPS traffic as well as SSH traffic, while restricting access to everything else.

First, check if firewalld is running on your server:

$ sudo systemctl status firewalld

If it is not running, start and enable it:

$ sudo systemctl enable --now firewalld

To allow HTTP (port 80) and HTTPS (port 443) and SSH (port 22) traffic through the firewall, run the following commands:

$ sudo firewall-cmd --permanent --add-service=http
$ sudo firewall-cmd --permanent --add-service=https
$ sudo firewall-cmd --permanent --add-service=ssh

After making these changes, reload the firewall to apply the new rules:

$ sudo firewall-cmd --reload

To verify that the rules have been added correctly, you can list the current firewall rules:

$ sudo firewall-cmd --list-all

Security Improvements

By default WordPress is relatively secure if you keep it up to date; however, it’s only as secure as its weakest part. By following a few basic security practices you’ll minimise the risk of your website being vulnerable to attack.

Update WordPress regularly
Ensure your WordPress installation, themes, and plugins are always up-to-date to protect against vulnerabilities.

Use strong passwords
Use complex passwords for your WordPress admin account, database user, and any other user accounts.

Install security plugins
Consider using security plugins like Wordfence to add an extra layer of protection. Wordfence lets you limit login attempts, scans your site for security vulnerabilities and automatically blocks malicious activity.

Secure Your .htaccess File
Add rules to your .htaccess file to prevent unauthorized access to sensitive files. On Apache 2.4 (the version shipped with current Enterprise Linux releases) the syntax is:

<Files wp-config.php>
  Require all denied
</Files>

<Files .htaccess>
  Require all denied
</Files>

Enable SSL
Install an SSL certificate to encrypt data transmitted between your server and your users. You can obtain a free SSL certificate from Let’s Encrypt and configure Apache to use it.
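As a sketch of what that can look like with Certbot on an Enterprise Linux server (this assumes the EPEL repository is enabled, and example.com stands in for your own domain):

```shell
# Install Certbot and its Apache plugin (from EPEL)
$ sudo dnf install -y certbot python3-certbot-apache

# Request a certificate and let Certbot configure Apache automatically
$ sudo certbot --apache -d example.com -d www.example.com

# Let's Encrypt certificates expire after 90 days; check the renewal timer is active
$ sudo systemctl status certbot-renew.timer
```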

Another option for providing SSL certificates is to host your website behind Cloudflare and use their SSL certificates and Web Application firewall. I won’t detail Cloudflare setup in this post but it’s well worth considering.

Regular backups
Set up regular backups of your WordPress site, including the database and files, to ensure you can quickly recover in case of an incident.
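As a rough sketch, a nightly backup could be as simple as the following script run from cron. The paths and database name are placeholders for your own setup, and mysqldump assumes credentials are available (for example via ~/.my.cnf):

```shell
#!/bin/bash
# Hypothetical WordPress backup sketch -- adjust paths and names for your environment
STAMP=$(date +%Y%m%d)
BACKUP_DIR=/var/backups/wordpress
mkdir -p "$BACKUP_DIR"

# Dump the WordPress database
mysqldump wordpress > "$BACKUP_DIR/db-$STAMP.sql"

# Archive the site files
tar -czf "$BACKUP_DIR/files-$STAMP.tar.gz" -C /var/www/html .
```

Remember to copy the backups somewhere separate; backups stored on the same server won’t help if the server itself is lost.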

You should now have a robust and flexible platform configured to host multiple websites from a single dashboard, running on a solid, enterprise-grade Linux operating system.

Categories
Cloud Computing

Automating Server Deployments in AWS with Terraform

Previously I discussed deploying Enterprise Linux in AWS, which I demonstrated using the AWS console. This is a common way to deploy servers to the cloud; however, doing deployments manually can leave you stuck with static images that are difficult to replicate when your infrastructure grows.

One of the benefits of Cloud Computing is that the infrastructure is programmable, meaning we can write code that automates tasks for us. You spend the time defining all of the settings and configuration once, and then any time you need to deploy the server again, whether that’s adding more servers to a cluster for high availability or re-deploying a server that’s died, you don’t have to reinvent the wheel and risk making costly mistakes. This also makes managing large-scale infrastructure much easier, since provisioning can be scripted and stored in version control.

Terraform is a tool that’s used to automate infrastructure provisioning and can be used with AWS to make deploying servers fast, easy and reproducible.

Terraform lets you use a configuration language to describe your infrastructure and then goes out to your cloud platform and builds your environment. Similar to Ansible, Terraform allows you to describe the state you want your infrastructure to be in, and then makes sure that state is achieved.

You’ll need a few prerequisites to get the most out of this tutorial. Specifically, I’ll be writing the Terraform configurations on my Fedora Linux desktop, and I’ll be using the AWS account that was set up previously.

Depending on your environment, you might need to follow different instructions to get Terraform installed. Most versions of Linux will be similar. In Fedora Linux, run the following commands:

$ sudo dnf install -y dnf-plugins-core
$ sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
$ sudo dnf -y install terraform

To be able to speak to the AWS infrastructure APIs, you’ll need the awscli tool. Download and install it with the following commands:

$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscli.zip"
$ unzip awscli.zip
$ sudo ./aws/install

Next you’ll need to create AWS credentials so that Terraform can authenticate to AWS; these should be separate from the credentials you log in with. When you create an AWS account, the “main” account is known as the root account. This is the all-powerful administrator account and can do and access anything in your AWS console. When you set up your infrastructure, particularly when using automation tools like Terraform, you should create a non-root user account so you don’t inadvertently break anything. Head over to AWS IAM and create a new user.

AWS IAM and permissions settings are far beyond the scope of this post, however for the purposes of this demonstration ensure your new user has a policy that allows access to EC2, and set up the access keys that the awscli tool will use to authenticate.

From your local machine type:

$ aws configure

And enter the access key and secret when prompted. You’ll also be asked to set your default region. In AWS, the region is the geographical location where you want your services to run; as I’m in Sydney I’ll set mine to ap-southeast-2.
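The prompts look like the following (the key values shown here are placeholders):

```shell
$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: ap-southeast-2
Default output format [None]: json
```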

We should be ready to start using Terraform now. I’ll create a folder in my local user’s home directory called ‘terraform’.

Inside the terraform folder, create a file called ‘main.tf’ and enter the following:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" {
  region  = "ap-southeast-2"
}

resource "aws_instance" "rhel_server" {
  ami           = "ami-086918d8178bfe266"
  instance_type = "t2.micro"

  tags = {
    Name = "RHEL Test Server"
  }
}

Terraform uses a configuration language to define the desired infrastructure configuration. The important parts to take note of are the “provider” where you’re telling Terraform to use the AWS plugin, and the “resource” section where you define the instance you’re creating.

Here I’ve set the region to ap-southeast-2, which is the AWS region for Sydney, and in the resource section I’ve specified a t2.micro instance type, the same as we used previously. I’ve labelled the resource “rhel_server”, given the instance a Name tag of “RHEL Test Server” and provided an AMI.

The AMI is the unique identifier that AWS uses to determine the OS image you want. Each image in the AMI marketplace has a different AMI code, which you can find from the console in the same location where you select the OS you want to use.

Once you’ve created the main.tf file you’ll need to initialise the terraform directory by running ‘terraform init’. This command reads the configuration file and downloads the required provider plugins.

Next you should run ‘terraform fmt’ and ‘terraform validate’ to properly format and validate your configuration.

Type ‘terraform apply’ to run your configuration. Terraform will evaluate the current state of your AWS infrastructure and determine what needs to be done to match your configuration. After a few moments you should be presented with the option to proceed where Terraform will ask you to confirm. Type ‘yes’ if the output looks good.
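Putting those steps together, the whole workflow from inside the terraform directory is:

```shell
$ terraform init      # download the required provider plugins
$ terraform fmt       # format the configuration files
$ terraform validate  # check the configuration for errors
$ terraform apply     # build the infrastructure (asks for confirmation)
```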

If you see the green output at the bottom of the screen saying ‘Apply complete’, you’ve successfully deployed an image to AWS using Terraform. Typing ‘terraform show’ will show you the current image configuration in AWS in JSON format.

And you can also check the AWS console to confirm.

Once you’re finished with the instance, type ‘terraform destroy’ to clean up and terminate the running instance. Terminating instances when you’re finished with them is the best way to keep your AWS costs low so you don’t get billed thousands of dollars at the end of the month.

Using Terraform to deploy infrastructure to the cloud is really powerful and flexible, allowing you to define exactly what you want running without having to manually deploy resources.

Categories
Cloud Computing

Deploying Enterprise Linux in AWS

In a previous post I discussed installing Enterprise Linux in a virtual machine, this time I wanted to write about deploying a server to the cloud.

Cloud Computing platforms like Amazon’s AWS allow you to build and run all kinds of Infrastructure and services on-demand without having to purchase and maintain expensive physical computing hardware. You can deploy a server in minutes and have the capability to scale your workload as much as you need. I’ve been running production servers in AWS for a few years now and of all the cloud platforms, it’s the one I’m most familiar with.

I assume you’ve registered for an AWS account already, if not, head over to https://aws.amazon.com and set one up.

AWS is huge. There are services and infrastructure available for pretty much anything you can imagine, from basic Linux and Windows servers to Machine Learning and AI and everything in between. It can be quite overwhelming the first time you log into the AWS console, however for this post we can safely ignore pretty much all of it.

Amazon’s EC2 is a service that lets us deploy server images, whether those are images that AWS has supplied, community-contributed images, or something we’ve built ourselves. An image is essentially a preconfigured operating system, much like what we built previously when installing Linux in a virtual machine, that we can deploy from the AWS console.

From the Services menu, or the list of All Services above, select EC2.

You should see a dashboard that looks like the following.

Again, for the most part you can ignore everything there for now.

Click on the big orange button that says “Launch Instance”.

Here you get to select the operating system you want to deploy. Feel free to select any of the Quick Start OS images; most of them are self-explanatory, such as Windows, macOS or Ubuntu. Amazon Linux is a variant of Linux based closely on Red Hat / CentOS that has been tuned to work well with AWS.

I’m going to select Red Hat Enterprise Linux 9.

Give your instance a name. For this example I’ll just call mine “RHEL Server”, but you should give your server a name that matches its purpose or another easily identifiable name.

Next you’ll want to select the instance type. This is basically the size of the server you want. AWS has a huge range of instance types you can choose from depending on the purpose of the server you’re building. Be careful, though: the instance type dictates much of the price you’ll pay each month, so don’t spin up massive servers unless you know you can pay for them.

In this example I’m going to deploy a t2.micro instance, which is Free Tier eligible. That’s perfect because it means I can deploy it, play around for a bit, and then shut it down when I’m ready without paying for it.

Below the instance type you want to select the key pair you’ll use to connect to the instance. The key pair allows you to securely connect to the instance using SSH without having to use passwords and is required by AWS. If you haven’t already set one up do so now by clicking the link next to the dropdown. Make sure you download the key you create and store it somewhere safe.

Further down in the Network settings, I’m going to select “Allow SSH traffic from” and select “My IP” from the dropdown. This just restricts access to the server from the Internet to only the IP address you’re connecting to the console from.

If you’re setting this up from home, you likely have an IP address assigned dynamically by your ISP. For home Internet access this is mostly fine, but it can cause issues when setting up AWS server access: if your ISP changes your home IP address, you can be locked out of your server. For this example, this is fine.

I’ve deselected the “Allow HTTP traffic from the Internet” checkbox as I won’t be setting up a web server at the moment.

That should be it for the basic configuration. If everything looks OK, you can click the orange “Launch Instance” button.

After a few seconds you should see your running instance appear.

Your server is now ready for use. The base images only have the bare minimum software installed to give you a useful OS. From here you’ll need to configure the applications and services that you’ll be running.

To connect to the server, open a terminal and establish an SSH connection. Remember, we’ve restricted access to our own IP address only, and we need to use the key that was configured earlier. The default username created with the image is ec2-user.

$ ssh ec2-user@[IP-Address] -i ~/key.pem

You now have a Red Hat Enterprise Linux server running in the AWS Cloud. From here you can configure your server to do anything you want, you could even try using Ansible for configuration management.

Once you’ve finished playing with your server, make sure you remember to terminate the instance so that you keep your AWS bill under control.

There’s much more to setting up and running production servers in AWS than just what I covered in this post, however this is a pretty good starting point for getting a server up and running quickly.

Categories
Security

How to Improve the Security of Your Website

As a website owner, keeping your site secure should be a top priority. Cyberattacks, data breaches, and malware infections can harm your reputation, disrupt your operations, and put your users at risk. Fortunately, there are practical steps you can take to enhance your website’s security.

Keep Your Software Updated

One of the most common vulnerabilities in websites comes from outdated software. Whether it’s your content management system (CMS), plugins, themes, or server software, failing to update can leave your site exposed to known exploits.

  • Enable automatic updates whenever possible.
  • Regularly check for and install updates manually.
  • Remove unused plugins or themes to minimize potential vulnerabilities.

Use Strong, Unique Passwords

Weak passwords are an open invitation to hackers. Using strong, unique passwords for all website accounts significantly reduces the risk of unauthorised access.

  • Use a password manager to generate and store complex passwords.
  • Implement multi-factor authentication (MFA) for an additional layer of security.
  • Regularly update passwords, especially for admin accounts.

Enable HTTPS

Switching from HTTP to HTTPS encrypts data transmitted between your website and its users, protecting sensitive information like login credentials and payment details.

  • Obtain an SSL certificate from a trusted provider. Many hosting companies offer free SSL options.
  • Configure your site to redirect all traffic to HTTPS.
  • Regularly check your SSL certificate’s validity and renew it as needed.
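One common way to redirect all traffic to HTTPS with Apache is a rewrite rule like the following, placed in your .htaccess file or the site’s VirtualHost. This is a sketch; tools like Certbot can also add an equivalent redirect for you:

```apache
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```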

Perform Regular Backups

A robust backup strategy ensures you can quickly recover your website if it’s compromised.

  • Schedule automated backups for your website files and database.
  • Store backups in a secure, separate location.
  • Test backups periodically to ensure they can be restored without issues.

Limit User Privileges

Not everyone needs full administrative access to your website. Assign roles and permissions based on what each user needs to do.

  • Create separate accounts for each user.
  • Use the principle of least privilege (give users only the access they need).
  • Regularly review user accounts and remove those no longer required.

Install Security Plugins or Tools

Security plugins and tools can help monitor your site for vulnerabilities, malware, and suspicious activity.

  • Use a security plugin like Wordfence or Sucuri for WordPress sites.
  • Enable firewalls to block malicious traffic.
  • Regularly scan your site for malware and vulnerabilities.

Perform Security Assessments

A website security assessment identifies vulnerabilities and potential risks before they’re exploited.

  • Schedule regular security assessments with a professional.
  • Implement recommended changes to address vulnerabilities.
  • Stay informed about the latest threats in website security.

Protect Against DDoS Attacks

Distributed Denial of Service (DDoS) attacks can overwhelm your server and take your site offline.

  • Use a Content Delivery Network (CDN) like Cloudflare to distribute traffic and absorb attacks.
  • Configure rate limiting to restrict the number of requests a user can make.
  • Monitor traffic for unusual spikes that could indicate an attack.

Final Thoughts

Website security isn’t a one-time task—it’s an ongoing effort. By implementing these measures and staying vigilant, you can significantly reduce the risk of cyberattacks and keep your site safe for you and your visitors.

Need expert assistance to secure your website? Get in contact for a comprehensive security assessment and tailored protection solutions. Together, we can make your website a fortress against cyber threats.

Categories
Security

Vulnerability Scanning with Nessus

All software has bugs. Everyone has experienced waiting for a laptop, tablet or phone to install some critical update, or had their computer crash with a spinning wheel of death or a blue screen.

Bugs in software are generally faults in the programming code or mistakes in the logic of the code. Programmers make mistakes, programming languages and platforms are developed with assumptions that don’t quite match how the software is used, and users sometimes intentionally manipulate or force a program to do something it was never intended to do.

Programming mistakes can be found during the development and testing phase, or when users start interacting with the software; sometimes they aren’t found for years and years, hidden deep in thousands or millions of lines of code.

Modern software is incredibly complex. The Linux kernel, which is the heart of all Linux-based operating systems and the part that directly interacts with the computer hardware, has over 27 million lines of code, and that’s not including all the software a user generally interacts with like desktop UIs, games and web browsers.

So with millions and millions of lines of code, and billions of users and devices all running different configurations and environments for various purposes, it’s next to impossible to be 100% certain there are no bugs.

But not all bugs are vulnerabilities.

A vulnerability is a weakness in a program or system that could be exploited by an attacker. A bug could be anything: a misplaced semicolon that causes a script to fail might be just an annoyance, unless an attacker can manipulate that annoyance, chaining it with other bugs or weaknesses, to gain access to a system or cause the software to do something unexpected, like dumping out the list of your users and passwords.

Ethical hackers and security experts dedicate their time and energy to hunting for these bugs, to find them before the attackers do, and to fix them. It’s essentially an arms race between security professionals and attackers to find the bugs first. Previously unknown vulnerabilities, known as zero-days (for the zero days that vendors or developers have had to implement a fix), can be worth big money.

Once bugs are found, and fixes produced in the form of updates or even firewall rules to prevent a bug being exploited, users need to be notified about them.

Tenable Nessus is an industry-standard vulnerability scanner that can be used against single systems, applications or entire fleets of machines to help find vulnerabilities before they’re exploited. Nessus won’t find unknown vulnerabilities, but it uses its database of already-discovered vulnerabilities to scan systems and provide a report that can be used to patch your systems.

I’ve had the opportunity to deploy Nessus agents across a fleet of hundreds of Linux servers and run extensive scans from Tenable cloud.

In this article I’ll describe setting up Nessus in Linux to scan remote hosts. Having knowledge of the vulnerabilities present in your environment is critical in defending against Cyber attack. Using Nessus you can scan hosts across your network and generate reports on the vulnerabilities discovered so that they can be remediated before an incident occurs.

First things first, head over to the Nessus downloads page and download the package appropriate for the machine you’ll be using. As I’m using Kali Linux, I’ll download the latest Debian amd64 version.

Next open a terminal window and install the package.

Note: There are a number of ways to install a package in Linux. In this example I used apt from the terminal; you can also use dpkg, or open a file explorer and click on the package to open the GUI software manager.
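For example, using apt from the directory where the package was downloaded (the exact filename depends on the version you grabbed):

```shell
$ cd ~/Downloads
$ sudo apt install ./Nessus-*.deb
```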

Once the package has installed, you can start the Nessus service by typing

$ sudo systemctl start nessusd.service

and then navigating to https://127.0.0.1:8834.

If you see a security warning it’s OK to click ‘Accept’ and continue.

I’ll select Nessus Essentials as it’s the free version, and click continue.

Next you’ll be asked to register for an activation key. You can either do this by filling in the form presented on the next screen which will send a verification email to your email address, or you can register for a key on the Tenable website. I just filled in the form and pasted the key into the next page.

Once you’ve activated Nessus you’ll have to wait a few minutes for setup to complete. Nessus needs to download and install plugins and initialise the installation before it can be used; this can take a while depending on the resources available on your machine.

Once setup has completed you’ll be presented with the dashboard and a prompt to create your first scan.

For the purposes of this demo, I’ve got another virtual machine running CentOS Linux that I’ll scan. Type the IP address of the potentially vulnerable host you wish to scan and click ‘Submit’ followed by ‘Run Scan’.

Nessus will then kick off a basic network scan to identify vulnerabilities on the host. Note, though, that this basic scan won’t produce an exhaustive list of all vulnerabilities on the host. The basic network scan only probes the host from the outside and can’t determine an extensive amount of detail. For that, you’ll need to configure Nessus further, possibly even installing agents on the host that can probe deeper into the system. For now though, this is good enough.

Once the scan completes you can review the results.

As you can see, Nessus identified 14 potential vulnerabilities that can be investigated further. For this scan there’s nothing incredibly interesting, as the machine I scanned is a basic CentOS install with no open services, so I didn’t expect to find much.