The popularity of cloud infrastructure services has grown hugely over the last few years. Companies value the flexibility and reliability such services provide. The simplicity of the solutions delivered by cloud providers should remove the burden from the shoulders of busy Dev and Ops people and let them focus on real customer needs.
Unfortunately the reality is not necessarily so simple. When you start your journey in the cloud you will discover new challenges, one of which is the creation and provisioning of your new infrastructure. Simple setups can be created within minutes using the web console or the CLI, but these are not the best ways to create an environment with 100 machines.
AWS provides many different interfaces which allow you to automate the infrastructure process. You can use the REST API or the CLI to create your own scripts. This is probably the most flexible solution, but it can also be time consuming.
Terraform from HashiCorp gives you similar flexibility without having to spend weeks writing Bash or Python scripts to provision your cloud.
Terraform to the rescue: plan, apply, update, destroy.
Plan
Infrastructure diagram
To demonstrate the use of Terraform we need to introduce some example infrastructure: let's provision a structure which will support a simple web service running in AWS. This web service will expose an API via a web proxy server. The service also requires a database and this database should have a separate EC2 instance to ease database maintenance. Instances responsible for business logic will be hidden in a private subnet and only the web proxy server will be available to the wider internet. At the same time our service needs to connect to external resources - therefore a NAT instance will take the responsibility of managing network connections from within the private subnet. All of these resources will constitute a single Virtual Private Cloud (VPC).
Provider
We are now ready to introduce Terraform. We need to create configuration files which describe the components required to build our infrastructure. Configuration files can be written in the HashiCorp Configuration Language (HCL) or in JSON. All configuration files should have the extension .tf and be stored in the same directory; Terraform automatically combines all resources defined in the .tf files.
Before we add any resource we have to know where our resources are going to exist. To do that we have to create a provider definition.
A Terraform provider is the mechanism used for managing resources; in our case we'll use the AWS provider. Our first configuration file might look like this:
provider-config.tf
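A minimal version might look like this (the credential values below are placeholders):

```hcl
# provider-config.tf
provider "aws" {
  access_key = "YOUR_ACCESS_KEY"
  secret_key = "YOUR_SECRET_KEY"
  region     = "eu-west-1"
}
```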
Obviously we don't want to keep our secrets in a file which will potentially be stored in version control. We also want flexibility in choosing the region in which to provision our environment. Terraform lets us introduce variables for this.
First we have to declare the variables we want to use (see provider-variables.tf below). A variable declaration introduces the name, structure and default value of each variable used in the configuration files. We will override these default values later.
provider-variables.tf
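A sketch of the declarations; the variable names are our own choice and are assumed throughout the rest of this post:

```hcl
# provider-variables.tf
variable "aws_access_key" {}
variable "aws_secret_key" {}

variable "aws_region" {
  default = "eu-west-1"
}
```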
Now we can update provider-config.tf.
provider-config.tf
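The provider definition can now reference the variables instead of hardcoded values:

```hcl
# provider-config.tf
provider "aws" {
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret_key}"
  region     = "${var.aws_region}"
}
```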
VPC
When we know how to connect to our provider we can introduce resources. A resource definition in Terraform contains the type of a resource and its name. Resource types are predefined by Terraform and represent the building blocks we can instantiate in the cloud. Each resource also has a predefined set of config properties which describe it in detail. For a full list of supported AWS resource types, see the AWS provider documentation on the Terraform website.
Resource Syntax
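The general shape of a resource definition is:

```hcl
resource "TYPE" "NAME" {
  property = "value"
}
```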
In our case we have to define a VPC for all our resources to reside in. We need to assign our VPC to a specific range of addresses by defining a CIDR block. Each VPC also needs an internet gateway.
Once again, instead of hardcoding values, we will declare variables specific to our VPC definition.
We can also introduce a variable which defines the name of our environment. This name will allow us to tag resources and recognise to which environment a given resource belongs.
environment-variables.tf
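A sketch, assuming we call the variable environment_name:

```hcl
# environment-variables.tf
variable "environment_name" {
  default = "my-first-vpc"
}
```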
vpc-config.tf
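A minimal sketch of the VPC and its internet gateway (the resource name environment matches the reference discussed below):

```hcl
# vpc-config.tf
resource "aws_vpc" "environment" {
  cidr_block = "${var.vpc_cidr}"

  tags {
    Name = "${var.environment_name}"
  }
}

resource "aws_internet_gateway" "environment" {
  vpc_id = "${aws_vpc.environment.id}"

  tags {
    Name = "${var.environment_name}"
  }
}
```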
vpc-variables.tf
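And the matching variable declaration (the CIDR range is illustrative):

```hcl
# vpc-variables.tf
variable "vpc_cidr" {
  default = "10.0.0.0/16"
}
```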
We can use the type of a resource and its name to reference properties exposed by that resource, as we did to reference the VPC id above. Some properties of a resource, such as internal ids or names, are defined by Terraform during creation of the resource; others are already defined in our scripts.
In the above example we assign an internet gateway to our VPC by referencing the id property of the aws_vpc resource named environment.
Subnets
Our example VPC should contain two subnets. For each subnet we have to define the range of addresses available (a CIDR block) and an availability zone, and of course we have to assign the subnet to the VPC. Again, we declare variables instead of defining values directly in the configuration script.
subnets-config.tf
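A sketch of the two subnets, assuming the names public and private:

```hcl
# subnets-config.tf
resource "aws_subnet" "public" {
  vpc_id            = "${aws_vpc.environment.id}"
  cidr_block        = "${var.public_subnet_cidr}"
  availability_zone = "${var.availability_zone}"

  tags {
    Name = "${var.environment_name}-public"
  }
}

resource "aws_subnet" "private" {
  vpc_id            = "${aws_vpc.environment.id}"
  cidr_block        = "${var.private_subnet_cidr}"
  availability_zone = "${var.availability_zone}"

  tags {
    Name = "${var.environment_name}-private"
  }
}
```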
subnets-variables.tf
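With illustrative defaults:

```hcl
# subnets-variables.tf
variable "public_subnet_cidr" {
  default = "10.0.1.0/24"
}

variable "private_subnet_cidr" {
  default = "10.0.2.0/24"
}

variable "availability_zone" {
  default = "eu-west-1a"
}
```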
Route tables
Each subnet in a VPC must be associated with a route table. This time we have an unusual situation: we reference a resource which has not been defined yet (${aws_instance.nat.id}). The order of files is not important to Terraform; it combines all files and, based on that knowledge, prepares a plan of execution. For that reason we can refer to resources defined in different files. Terraform will produce an error when preparing the plan if a referenced resource is not defined anywhere.
route_tables-config.tf
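A sketch of both route tables; the public subnet routes through the internet gateway, while the private subnet routes through the NAT instance defined later in instances-config.tf:

```hcl
# route_tables-config.tf
resource "aws_route_table" "public" {
  vpc_id = "${aws_vpc.environment.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.environment.id}"
  }
}

resource "aws_route_table_association" "public" {
  subnet_id      = "${aws_subnet.public.id}"
  route_table_id = "${aws_route_table.public.id}"
}

resource "aws_route_table" "private" {
  vpc_id = "${aws_vpc.environment.id}"

  # Reference to a resource defined in another file -- see above.
  route {
    cidr_block  = "0.0.0.0/0"
    instance_id = "${aws_instance.nat.id}"
  }
}

resource "aws_route_table_association" "private" {
  subnet_id      = "${aws_subnet.private.id}"
  route_table_id = "${aws_route_table.private.id}"
}
```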
Security groups
A definition of any EC2 instance requires assigning it to a security group. Security groups are another type of resource in Terraform. Once again configuration of this resource type is straightforward. Depending on our needs we can define inbound (ingress) and outbound (egress) rules for the desired range of ports, protocols and addresses.
security_groups-config.tf
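One illustrative group for the web proxy, allowing HTTP from anywhere, SSH from within the VPC, and all outbound traffic:

```hcl
# security_groups-config.tf
resource "aws_security_group" "web_proxy" {
  name   = "${var.environment_name}-web-proxy"
  vpc_id = "${aws_vpc.environment.id}"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${var.vpc_cidr}"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```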
EC2 Instances
With all the above resources declared we can finally define our EC2 instances.
Our NAT instance and web proxy instance each require an Elastic IP (the aws_eip resource). We also need to choose an instance type for each EC2 instance.
instances-config.tf
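A sketch of the NAT and web proxy instances with their Elastic IPs; the service and database instances in the private subnet follow the same pattern. The AMI ids come from variables and are placeholders:

```hcl
# instances-config.tf
resource "aws_instance" "nat" {
  ami               = "${var.nat_ami}"
  instance_type     = "${var.nat_instance_type}"
  subnet_id         = "${aws_subnet.public.id}"
  source_dest_check = false # required for a NAT instance

  tags {
    Name = "${var.environment_name}-nat"
  }
}

resource "aws_eip" "nat" {
  instance = "${aws_instance.nat.id}"
  vpc      = true
}

resource "aws_instance" "web_proxy" {
  ami                    = "${var.web_proxy_ami}"
  instance_type          = "${var.web_proxy_instance_type}"
  subnet_id              = "${aws_subnet.public.id}"
  vpc_security_group_ids = ["${aws_security_group.web_proxy.id}"]

  tags {
    Name = "${var.environment_name}-web-proxy"
  }
}

resource "aws_eip" "web_proxy" {
  instance = "${aws_instance.web_proxy.id}"
  vpc      = true
}
```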
instances-variables.tf
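With placeholder AMI ids; pick images appropriate for your region:

```hcl
# instances-variables.tf
variable "nat_ami" {
  default = "ami-xxxxxxxx" # an Amazon-provided NAT AMI, placeholder
}

variable "nat_instance_type" {
  default = "t2.micro"
}

variable "web_proxy_ami" {
  default = "ami-xxxxxxxx" # placeholder
}

variable "web_proxy_instance_type" {
  default = "t2.micro"
}
```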
Verify
Our VPC definition is now ready. But how do we know that everything is ready for provisioning? We can verify all our hard work: all we have to do is ask Terraform to prepare a plan by executing the following command:
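```
terraform plan
```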
Terraform will combine all available files and prepare an execution plan. This means that all definitions are verified before you start provisioning your resources. The output from this command also gives an overview of the operations that will be performed during execution of the plan.
We have to remember that the plan represents only a dry run. We don't connect to AWS at this point, so we will not find any errors which might occur in the cloud; for example, the plan will not show an error if you have already exceeded your limit of available EC2 instances.
Apply
So far we have used only default values to run our plan, which is not particularly useful when you want to create a real environment.
To apply our execution plan we have to prepare a file containing the values of our variables. We can override the default values by creating a file with the extension .tfvars. This file consists of simple key = value assignments, where each key is the name of a variable and each value is the value assigned to it. An example variables file for our first environment might look like this:
my_first_vpc_environment.tfvars
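An illustrative set of values (the keys must match the variable names declared earlier):

```
aws_access_key   = "YOUR_ACCESS_KEY"
aws_secret_key   = "YOUR_SECRET_KEY"
aws_region       = "eu-west-1"
environment_name = "my-first-vpc"
vpc_cidr         = "10.0.0.0/16"
```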
We can verify the plan again and if we decide that we are ready we can apply it by executing the following command:
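```
terraform apply -var-file=my_first_vpc_environment.tfvars
```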
Terraform will now connect to AWS and try to create all the resources defined in the Terraform scripts. The output of this command is a state file, terraform.tfstate, which contains all the information about the environment we just provisioned. At the time of writing the state file must be kept for future executions (e.g. in your version control system), because Terraform uses it to determine the differences between the state of the cloud and the current definition stored in your scripts.
Change
Sometimes we have to change our environment. This requires just a change to your configuration. Based on the state of your existing environment and your updated configuration, Terraform is able to prepare a new plan and apply the changes to your infrastructure.
You can use the plan and apply commands in the same way as for a new environment. This time Terraform compares the state stored in the state file generated on the initial run with your updated configuration, and plans/applies any newly introduced changes.
Destroy
Everything has to come to an end sometime. When the time comes we can execute this command:
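```
terraform destroy -var-file=my_first_vpc_environment.tfvars
```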
Terraform once again reuses our state file and will remove all resources defined there.
What's next?
In this post we have only scratched the surface of Terraform. The AWS provider is one of many providers available, and Terraform allows us to combine different providers, making it possible to provision environments across multiple clouds.
Terraform also has other useful features. For example, outputs let you expose values based on any available variables and resource properties. You can use them to generate documentation, config files or just human-readable text files.
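A hypothetical output exposing the web proxy's public Elastic IP might look like this:

```hcl
# outputs.tf
output "web_proxy_public_ip" {
  value = "${aws_eip.web_proxy.public_ip}"
}
```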
Now our infrastructure can be managed in code. We can check it into source control, raise pull requests in GitHub and provide living documentation for our environment topology. We can lay the foundation for our deployments and tools like Puppet, Chef and Docker.
All code examples described here can be found on GitHub.