Categories
System Administration

Exploring OpenShift

OpenShift is Red Hat’s container orchestration platform built on top of Kubernetes. I love working with containers and Kubernetes, and as I’m also a big fan of Red Hat technologies, I wanted to become more familiar with OpenShift.

I’ve also been studying Red Hat’s OpenShift training courses, including OpenShift Developer and OpenShift Administrator, so it makes sense to have an environment to work in and deploy some applications to.

I’m going to install OpenShift Local on my Fedora Linux development machine.

First, you’ll need to download the crc tool from Red Hat. You’ll need a developer account, and you’ll have to log in to https://console.redhat.com/openshift/create/local to access the software.

Click the “Download OpenShift Local” button.

$ cd ~/Downloads
$ tar xvf crc-linux-amd64.tar.xz

Once it’s downloaded and you’ve extracted the tar archive, copy the crc binary to a bin directory in your user’s home directory.

$ mkdir -p ~/bin
$ cp ~/Downloads/crc-linux-*-amd64/crc ~/bin

If your user’s bin directory isn’t in your $PATH, you’ll need to add it.

$ export PATH=$PATH:$HOME/bin
$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc

Next, prepare your machine for OpenShift Local with the crc setup command.

$ crc setup

This will take a while as it needs to download about 4GB for the OpenShift environment.

Once it’s downloaded, start the local cluster.

$ crc start

The crc tool will ask for the pull secret as it’s required to pull the OpenShift images from Red Hat. You can get the secret from console.redhat.com.

Copy the pull secret and paste it into your terminal when asked.
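If you’d rather not paste the secret in interactively, crc can also read it from a file. Save the pull secret somewhere safe and point crc start at it; the flag below is what recent crc releases use, so check crc start --help if it isn’t recognised.

$ crc start --pull-secret-file ~/Downloads/pull-secret.txt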

If everything worked correctly you should now have a running OpenShift instance on your machine.

The crc tool will give you a URL and login information once it’s ready. Open up your browser and log in to the dashboard.
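If you need those details again later, crc can print them for you, and it also bundles the oc command-line client. Roughly, it looks like this; the developer account normally uses developer/developer and the API is usually exposed at https://api.crc.testing:6443, but go by what crc console --credentials reports on your machine.

$ crc console --credentials
$ eval $(crc oc-env)
$ oc login -u developer -p developer https://api.crc.testing:6443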

You can change between the Administrator and the Developer perspective by clicking the dropdown under “Administrator”. This doesn’t change your permissions, but it changes the view of the dashboard so it’s focused on either administrative tasks or developer tasks.

Containers and Kubernetes are important technologies for modern application deployment, and OpenShift is a really powerful addition to Kubernetes. I’m really enjoying getting more familiar with it.

Categories
Cloud Computing

Automating Server Deployments in AWS with Terraform

Previously I discussed deploying Enterprise Linux in AWS, which I demonstrated using the AWS console. This is a common way to deploy servers to the cloud; however, doing server deployments manually can leave you stuck with static images that are difficult to replicate when your infrastructure grows.

One of the benefits of cloud computing is that the infrastructure is programmable, meaning we can write code that automates tasks for us. You can spend the time defining all of the settings and configuration once, and then any time you need to deploy the server again, whether you’re adding more servers to a cluster for high availability or re-deploying a server that’s died, you don’t have to reinvent the wheel and risk making costly mistakes. This also makes managing large-scale infrastructure much easier, since provisioning can be scripted and stored in version control.

Terraform is a tool that’s used to automate infrastructure provisioning, and it can be used with AWS to make deploying servers fast, easy and reproducible.

Terraform lets you use a configuration language to describe your infrastructure and then goes out to your cloud platform and builds your environment. Similar to Ansible, Terraform allows you to describe the state you want your infrastructure to be in, and then makes sure that state is achieved.

You’ll need a few prerequisites to get the most out of this tutorial. Specifically, I’ll be using Fedora Linux on my desktop, where I’ll write the Terraform configuration, and I’ll be using the AWS account that was set up previously.

Depending on your environment, you might need to follow different instructions to get Terraform installed. Most Linux distributions will be similar. On Fedora Linux, run the following commands:

$ sudo dnf install -y dnf-plugins-core
$ sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
$ sudo dnf -y install terraform
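You can quickly confirm Terraform is installed and on your PATH before moving on.

$ terraform -version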

To be able to talk to the AWS infrastructure APIs, you’ll need the awscli tool. Download it and install it with the following commands:

$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscli.zip"
$ unzip awscli.zip
$ sudo ./aws/install

Next you’ll need to create some AWS credentials to allow Terraform to authenticate to AWS; these need to be separate credentials from the ones you log in with. When you create an AWS account, the “main” account is known as the root account. This is the all-powerful administrator account and can do and access anything in your AWS console. When you set up your infrastructure, particularly when using automation tools like Terraform, you should create a non-root user account so you don’t inadvertently mess anything up. Head over to AWS IAM and create a new user.

AWS IAM and permissions settings are far beyond the scope of this post; however, for the purposes of this demonstration, ensure your new user has a policy that allows access to EC2, and set up the access keys that the awscli tool will use to authenticate.
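If you already have administrator credentials configured for the awscli, you could also create the user from the terminal instead of the console. This is just a sketch; the user name is my choice and AmazonEC2FullAccess is a broad managed policy, so tighten it up for anything beyond a demo.

$ aws iam create-user --user-name terraform
$ aws iam attach-user-policy --user-name terraform \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
$ aws iam create-access-key --user-name terraform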

From your local machine type:

$ aws configure

And enter the access key and secret when prompted. You’ll also be asked to set your default region. In AWS, the region is the geographical location where you want your services to run; as I’m in Sydney, I’ll set mine to ap-southeast-2.
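The session looks something like this (the key values here are placeholders), and aws sts get-caller-identity is a quick way to confirm the credentials work.

$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: ****************************************
Default region name [None]: ap-southeast-2
Default output format [None]: json
$ aws sts get-caller-identity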

We should be ready to start using Terraform now. I’ll create a folder in my local user’s home directory called ‘terraform’.
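That’s just:

$ mkdir ~/terraform
$ cd ~/terraform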

Inside the terraform folder, create a file called ‘main.tf’ and enter the following:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" {
  region  = "ap-southeast-2"
}

resource "aws_instance" "rhel_server" {
  ami           = "ami-086918d8178bfe266"
  instance_type = "t2.micro"

  tags = {
    Name = "RHEL Test Server"
  }
}

Terraform uses a configuration language to define the desired infrastructure configuration. The important parts to take note of are the “provider” block, where you tell Terraform to use the AWS plugin, and the “resource” section, where you define the instance you’re creating.

Here I’ve set the region to ap-southeast-2, which is the AWS region for Sydney, and in the resource section I’ve specified a t2.micro instance type, which is the same as we used previously. I’ve named the resource “rhel_server”, tagged it “RHEL Test Server”, and provided an AMI.

The AMI (Amazon Machine Image) ID is the unique identifier that AWS uses to determine the OS image you want. Each image in the AMI marketplace has a different AMI ID, which you can find in the console in the same location where you select the OS you want to use.
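If you’d rather look it up from the terminal, the awscli can search the image catalogue too. This is a rough sketch; the owner ID below should be Red Hat’s official AWS account, but double-check the ID and the name filter against what the console shows before relying on it.

$ aws ec2 describe-images \
    --owners 309956199498 \
    --filters "Name=name,Values=RHEL-9*" "Name=architecture,Values=x86_64" \
    --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
    --output text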

Once you’ve created the main.tf file you’ll need to initialise the terraform directory by running ‘terraform init’. This command reads the configuration file and downloads the required provider plugins.

Next you should run ‘terraform fmt’ and ‘terraform validate’ to properly format and validate your configuration.
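The whole check looks like this. Running terraform plan as well is optional, but it’s a useful dry run that shows what Terraform intends to change before you apply anything.

$ terraform fmt
$ terraform validate
$ terraform plan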

Type ‘terraform apply’ to run your configuration. Terraform will evaluate the current state of your AWS infrastructure and determine what needs to be done to match your configuration. After a few moments you’ll be shown the planned changes and asked to confirm. Type ‘yes’ if the output looks good.

If you see the green output at the bottom of the screen saying ‘Apply complete’, you’ve successfully deployed an image to AWS using Terraform. Typing ‘terraform show’ will show you the current state of the deployed instance (add the -json flag if you want machine-readable output).

And you can also check the AWS console to confirm.

Once you’re finished with the instance, type ‘terraform destroy’ to clean up and terminate the running instance. Terminating instances when you’re finished with them is the best way to keep your AWS costs low so you don’t get billed thousands of dollars at the end of the month.

Using Terraform to deploy infrastructure to the cloud is really powerful and flexible, allowing you to define exactly what you want running without having to manually deploy resources.

Categories
Cloud Computing

Deploying Enterprise Linux in AWS

In a previous post I discussed installing Enterprise Linux in a virtual machine; this time I want to write about deploying a server to the cloud.

Cloud computing platforms like Amazon’s AWS allow you to build and run all kinds of infrastructure and services on demand without having to purchase and maintain expensive physical computing hardware. You can deploy a server in minutes and have the capability to scale your workload as much as you need. I’ve been running production servers in AWS for a few years now, and of all the cloud platforms it’s the one I’m most familiar with.

I assume you’ve registered for an AWS account already, if not, head over to https://aws.amazon.com and set one up.

AWS is huge. There are services and infrastructure available for pretty much anything you can imagine, from basic Linux and Windows servers to machine learning and AI and everything in between. It can be quite overwhelming the first time you log into the AWS console; however, for this post we can safely ignore pretty much all of that.

Amazon’s EC2 is a service that lets us deploy server images, whether those are images that AWS has supplied, community-contributed images, or something that we’ve built ourselves. Basically, an image is almost like what we built previously when installing Linux in a virtual machine: a preconfigured operating system that we can deploy from the AWS console.

From the Services menu, or the list of All Services above, select EC2.

You should see a dashboard that looks like the following.

Again, for the most part you can ignore everything there for now.

Click on the big orange button that says “Launch Instance”.

Here you get to select the operating system you want to deploy. Depending on what you want, feel free to select any of the Quick Start OS images; most of them are self-explanatory, such as Windows, macOS or Ubuntu. Amazon Linux is a variant of Linux based closely on Red Hat / CentOS that has been tuned to work well with AWS.

I’m going to select Red Hat Enterprise Linux 9.

Give your instance a name. For this example I’ll just call mine “RHEL Server”, but you should give your server a name that matches its purpose or another easily identifiable name.

Next you’ll want to select the Instance type. This is basically the size of the server you want. AWS has a huge range of different instance types you can choose from depending on the purpose of the server you’re building. However, you need to be careful because the instance type dictates much of the price you’ll pay each month so don’t spin up massive servers unless you know you can pay for them.

In this example I’m going to deploy a t2.micro instance, which is Free Tier eligible. That’s perfect because it means I can deploy it, play around for a bit, and then shut it down when I’m ready without paying for it.

Below the instance type you want to select the key pair you’ll use to connect to the instance. The key pair allows you to securely connect to the instance using SSH without having to use passwords, and it’s required by AWS. If you haven’t already set one up, do so now by clicking the link next to the dropdown. Make sure you download the key you create and store it somewhere safe.
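One small note: SSH will refuse to use a private key that other users can read, so once you’ve downloaded the key it’s worth locking its permissions down, wherever you’ve saved it.

$ chmod 400 ~/key.pem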

Further down in the Network settings, I’m going to select “Allow SSH traffic from” and select “My IP” from the dropdown. This just restricts access to the server from the Internet to only the IP address you’re connecting to the console from.

If you’re setting this up from your home, you likely have an IP address assigned dynamically by your ISP. For the most part this is great for home Internet access, but it can cause issues when setting up AWS server access: if your ISP changes your home IP address, you can be locked out of your server. For this example, this is fine.

I’ve deselected the “Allow HTTP traffic from the Internet” checkbox as I won’t be setting up a web server at the moment.

That should be it for the basic configuration. If everything looks OK, you can click the orange “Launch Instance” button.

After a few seconds you should see your running instance appear.

Your server is now ready for use. The base images only have the bare minimum software installed to give you a useful OS. From here you’ll need to configure the applications and services that you’ll be running.

To connect to the server, open a terminal and establish an SSH connection. Remember, we’ve restricted access to only our own IP address, and we need to use the key that was configured earlier. The default username that’s created with the image is ec2-user.

$ ssh ec2-user@[IP-Address] -i ~/key.pem

You now have a Red Hat Enterprise Linux server running in the AWS Cloud. From here you can configure your server to do anything you want, you could even try using Ansible for configuration management.

Once you’ve finished playing with your server, make sure you remember to terminate the instance so that you keep your AWS bill under control.
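You can do this from the instance state menu in the console, or, if you happen to have the awscli installed and configured, from the terminal. The instance ID below is just a placeholder; list your instances first and substitute your own.

$ aws ec2 describe-instances --query 'Reservations[].Instances[].[InstanceId,State.Name]' --output text
$ aws ec2 terminate-instances --instance-ids i-0123456789abcdef0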

There’s much more to setting up and running production servers in AWS than just what I covered in this post, however this is a pretty good starting point for getting a server up and running quickly.