Automating Server Deployments in AWS with Terraform

Previously I discussed deploying Enterprise Linux in AWS, which I demonstrated using the AWS console. This is a common way to deploy servers to the cloud; however, doing server deployments manually can leave you stuck with static images that are difficult to replicate when your infrastructure grows.

One of the benefits of cloud computing is that the infrastructure is programmable, meaning we can write code to automate tasks for us. You can spend the time defining all of the settings and configuration once, and then any time you need to deploy your server again, whether it’s adding more servers to a cluster for high availability or re-deploying a server that has died, you don’t have to reinvent the wheel and risk making costly mistakes. This also makes managing large-scale infrastructure much easier, since provisioning can be scripted and stored in version control.

Terraform is a tool for automating infrastructure provisioning, and it can be used with AWS to make deploying servers fast, easy and reproducible.

Terraform lets you describe your infrastructure in a configuration language, then connects to your cloud platform and builds your environment. Like Ansible, Terraform is declarative: you describe the state you want your infrastructure to be in, and Terraform makes sure that state is achieved.

You’ll need a few prerequisites to get the most out of this tutorial. Specifically, I’ll be using Fedora Linux on my desktop, where I’ll write the Terraform configurations, and I’ll be using the AWS account that was set up previously.

Depending on your environment, you might need to follow different instructions to install Terraform, though most Linux distributions will be similar. On Fedora Linux, run the following commands:

$ sudo dnf install -y dnf-plugins-core
$ sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
$ sudo dnf -y install terraform

To talk to the AWS infrastructure APIs, you’ll need the awscli tool. Download and install it with the following commands:

$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscli.zip"
$ unzip awscli.zip
$ sudo ./aws/install

Next you’ll need to create some AWS credentials to allow Terraform to authenticate to AWS; these need to be separate from the credentials you log in with. When you create an AWS account, the “main” account is known as the root account. This is the all-powerful administrator account that can do and access anything in your AWS console. When you set up your infrastructure, particularly when using automation tools like Terraform, you should create a non-root user account so you don’t inadvertently break anything. Head over to AWS IAM and create a new user.

AWS IAM and permissions settings are far beyond the scope of this post; for the purposes of this demonstration, ensure your new user has a policy that allows access to EC2, and set up the access keys that the awscli tool will use to authenticate.

From your local machine, type:

$ aws configure

Enter the access key and secret when prompted. You’ll also be asked to set your default region; in AWS, the region is the geographical location where your services run. As I’m in Sydney, I’ll set my region appropriately.

We should now be ready to start using Terraform. I’ll create a folder called ‘terraform’ in my local user’s home directory.
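Creating the working directory and moving into it is just two commands (the path here is simply my choice; any directory works):

```shell
# Create a dedicated directory for the Terraform configuration and enter it
mkdir -p ~/terraform
cd ~/terraform
```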

Inside the terraform folder, create a file called ‘main.tf’ and enter the following:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" {
  region  = "ap-southeast-2"
}

resource "aws_instance" "rhel_server" {
  ami           = "ami-086918d8178bfe266"
  instance_type = "t2.micro"

  tags = {
    Name = "RHEL Test Server"
  }
}

Terraform uses its own configuration language to define the desired infrastructure. The important parts to take note of are the “provider” block, where you’re telling Terraform to use the AWS plugin, and the “resource” block, where you define the instance you’re creating.

Here I’ve set the region to ap-southeast-2, which is the AWS region for Sydney, and in the resource block I’ve specified a t2.micro instance type, the same as we used previously. I’ve named the resource “rhel_server”, given the instance a Name tag of “RHEL Test Server”, and provided an AMI.
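As an optional extra, you can have Terraform report details about the instance after it’s created by adding an output block to main.tf. This isn’t part of the configuration above; `public_ip` is one of the attributes the aws_instance resource exports:

```hcl
# Optional: print the instance's public IP after "terraform apply"
output "instance_public_ip" {
  value = aws_instance.rhel_server.public_ip
}
```

After an apply, ‘terraform output instance_public_ip’ prints just that value, which is handy if you want to script an SSH connection to the new server.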

The AMI is the unique identifier AWS uses to determine which OS image you want. Each image in the AMI catalog has a different AMI ID, which you can find in the console in the same location where you select the OS you want to use.
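Rather than hard-coding an AMI ID (which differs between regions and changes as new images are released), you can also have Terraform look the ID up with a data source. This is a sketch; the name filter and Red Hat’s AWS account ID are assumptions you should verify against the console before relying on them:

```hcl
# Optional: look up the most recent official RHEL 9 AMI instead of hard-coding the ID.
# The owner ID and name filter below are assumptions -- verify them in the console.
data "aws_ami" "rhel" {
  most_recent = true
  owners      = ["309956199498"] # Red Hat's AWS account (verify)

  filter {
    name   = "name"
    values = ["RHEL-9*"]
  }
}
```

With this in place, the resource block can reference `data.aws_ami.rhel.id` instead of the literal AMI string.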

Once you’ve created the main.tf file, you’ll need to initialise the Terraform directory:

$ terraform init

This command reads the configuration file and downloads the required provider plugins.

Next, run ‘terraform fmt’ to format your configuration consistently and ‘terraform validate’ to check it for errors:

$ terraform fmt
$ terraform validate

Now run your configuration:

$ terraform apply

Terraform will evaluate the current state of your AWS infrastructure and determine what needs to be done to match your configuration. After a few moments, it will present a plan of the changes and ask you to confirm. Type ‘yes’ if the output looks good. (To preview the plan without being offered the option to apply it, you can run ‘terraform plan’ instead.)

If you see the green output at the bottom of the screen saying ‘Apply complete’, you’ve successfully deployed an instance to AWS using Terraform. Running ‘terraform show’ will display the current state of your deployed resources (add the ‘-json’ flag if you want machine-readable JSON output).

And you can also check the AWS console to confirm.

Once you’re finished with the instance, run ‘terraform destroy’ to clean up and terminate it. Terminating instances when you’re finished with them is the best way to keep your AWS costs low, so you don’t get billed thousands of dollars at the end of the month.

Using Terraform to deploy infrastructure to the cloud is really powerful and flexible, allowing you to define exactly what you want running without having to manually deploy resources.