
Let’s talk about some of the best practices that should be followed while using Terraform.

Terraform is a popular open-source IaC (infrastructure as code) tool for defining and provisioning complete infrastructure.

Since its launch in 2014, Terraform’s adoption has grown globally, and more and more developers are learning it to deploy infrastructure in their organizations.

If you have started using Terraform, you should adopt these best practices for better production infrastructure provisioning.

If you are a newbie, check out this Terraform for beginners article.


Proper Directory Structure

When you are working on a large production infrastructure project using Terraform, follow a proper directory structure to handle the complexities that occur in the project. It is best to have separate directories for different purposes.

For example, if you are using Terraform in development, staging, and production environments, have separate directories for each of them.

geekflare@geekflare:~$ tree terraform_project/
terraform_project/
├── dev
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
├── modules
│   ├── ec2
│   │   ├── main.tf
│   │   └── variables.tf
│   └── vpc
│       ├── main.tf
│       └── variables.tf
├── prod
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
└── stg
    ├── main.tf
    ├── outputs.tf
    └── variables.tf

6 directories, 13 files

Even the Terraform configuration files should be separated, because the configuration of a growing infrastructure becomes complex over time.

For example, you could write all your Terraform code (modules, resources, variables, outputs) inside a single file, but keeping variables and outputs in separate files makes the code more readable and easier to understand.

Naming Convention

Naming conventions are used in Terraform to make things easily understandable.

For example, let’s say you want to create three workspaces for the different environments in a project. Rather than naming them env1, env2, env3, you should call them dev, stage, and prod. From the names alone, it is clear that there is a separate workspace for each environment.
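Creating those workspaces with descriptive names is straightforward (a sketch; the workspace names are just the ones suggested above):

```shell
# create one workspace per environment
terraform workspace new dev
terraform workspace new stage
terraform workspace new prod

# list them; the current workspace is marked with *
terraform workspace list
```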

Similar conventions should be followed for resources, variables, modules, etc. Resource type names in Terraform always start with the provider name, followed by an underscore and the rest of the name.

For example, the resource type for a route table in AWS is aws_route_table.

So, if you follow the naming conventions right, it will be easier to understand even complex codes.
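Putting this together, a descriptively named resource might look like the following (a sketch; the VPC reference aws_vpc.main is assumed to exist elsewhere in the configuration):

```hcl
# "private" describes the route table's role, rather than a generic label like "rt1"
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id
}
```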

Use Shared Modules

It is strongly suggested to use the official Terraform modules wherever available. There is no need to reinvent a module that already exists, and it saves a lot of time and pain. The Terraform Registry has plenty of modules readily available; make changes to an existing module as per your needs.

Also, each module should concentrate on only one aspect of the infrastructure, such as creating an AWS EC2 instance, setting MySQL database, etc.

For example, if you want to use AWS VPC in your Terraform code, you can use the simple VPC module:

module "vpc_example_simple-vpc" {
  source  = "terraform-aws-modules/vpc/aws//examples/simple-vpc"
  version = "2.48.0"
}

Latest Version

The Terraform development community is very active, and new functionality is released frequently. It is recommended to move to the latest version of Terraform when a new major release happens, as it is easy to upgrade one version at a time.

If you skip multiple major releases, upgrading becomes very complex.

Run the terraform -v command to check for a new update.

geekflare@geekflare:~$ terraform -v
Terraform v0.11.14
Your version of Terraform is out of date! The latest version
is 0.12.0. You can update by downloading from
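You can also record which Terraform version a configuration expects by pinning it inside the terraform block (a sketch; the constraint shown is illustrative):

```hcl
terraform {
  # fail fast if someone runs this configuration with an unexpected CLI version
  required_version = ">= 0.12"
}
```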

Backup System State

Always backup the state files of Terraform.

These files keep track of the metadata and resources of the infrastructure. By default, the state file, called terraform.tfstate, is stored locally inside the workspace directory.

Without this file, Terraform will not be able to figure out which resources are deployed on the infrastructure, so it is essential to have a backup of the state file. By default, a file named terraform.tfstate.backup is created to keep a backup of the previous state.

geekflare@geekflare:~$ tree terraform_demo/
terraform_demo/
├── terraform.tfstate
└── terraform.tfstate.backup

0 directories, 2 files

If you want to write the backup state file to some other location, use the -backup flag in the terraform command and give the location path.
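For example (a sketch; the backup path is illustrative):

```shell
# write the backup of the previous state to a custom location
terraform apply -backup=./state_backups/terraform.tfstate.backup
```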

Most of the time, multiple developers work on a project. To give all of them access to the state file, it should be stored in a remote location, such as an S3 bucket configured as the backend; other configurations can then read it with the terraform_remote_state data source.

The following example reads the VPC state stored in S3:

data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "s3-terraform-bucket"
    key    = "vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

Lock State File

There can be multiple scenarios where more than one developer tries to run the Terraform configuration at the same time. This can lead to the corruption of the Terraform state file or even data loss. The locking mechanism prevents such scenarios by making sure that only one person runs the Terraform configuration at a time, so there are no conflicts.

Here is an example of locking a state file stored at a remote location, using DynamoDB.

resource "aws_dynamodb_table" "terraform_state_lock" {
  name           = "terraform-locking"
  read_capacity  = 3
  write_capacity = 3
  hash_key       = "LockingID"

  attribute {
    name = "LockingID"
    type = "S"
  }
}

terraform {
  backend "s3" {
    bucket         = "s3-terraform-bucket"
    key            = "vpc/terraform.tfstate"
    region         = "us-east-2"
    dynamodb_table = "terraform-locking"
  }
}

When multiple users try to access the state file, the DynamoDB table name and its primary key (LockingID) are used for state locking and for maintaining consistency.

Note: not all backends support locking.

Use self Variable

self is a special kind of variable that is used when you don’t know the value of an attribute before deploying the infrastructure.

Let’s say you want to use the IP address of an instance that will be deployed only after the terraform apply command, so you don’t know the IP address until it is up and running.

In such cases, you use self, and the syntax is self.ATTRIBUTE. So, in this case, you would use self.ipv4_address to get the IP address of the instance. These expressions are only allowed in connection and provisioner blocks of the Terraform configuration.

connection {
  host        = self.ipv4_address
  type        = "ssh"
  user        = var.users[2]
  private_key = file(var.private_key_path)
}

Minimize Blast Radius

The blast radius is the measure of damage that can happen if things do not go as planned.

For example, if you are applying some Terraform configuration to the infrastructure and it does not get applied correctly, what is the amount of damage to the infrastructure?

To minimize the blast radius, it is always suggested to push only a few changes to the infrastructure at a time. If something goes wrong, the damage will be minimal and can be corrected quickly. Deploying plenty of changes at once is very risky.
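One way to limit the scope of a single run is the -target flag, which restricts the operation to a given resource or module (a sketch; module.vpc is an assumed module name in your configuration):

```shell
# plan and apply changes only for the vpc module, leaving everything else untouched
terraform plan -target=module.vpc
terraform apply -target=module.vpc
```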

Use var-file

In Terraform, you can create a file with the .tfvars extension and pass it to the terraform apply command using the -var-file flag. This helps you in passing variables that you don’t want to put in the Terraform configuration code.

It is always suggested to pass variables such as a password or secret key locally through -var-file rather than saving them inside the Terraform configuration or in a remote version control system.

For example, if you want to launch an EC2 instance using Terraform, you can pass the access key and secret key using -var-file.

Create a file terraform.tfvars and put the keys in this file.

geekflare@geekflare:~$ gedit terraform.tfvars

access_key = "AKIAIOSFODNN7EXAMPLE"
secret_key = "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"

Now, use this var file in the terraform command.

geekflare@geekflare:~$ terraform apply -var-file=/home/geekflare/terraform.tfvars

Use Docker

When you are running CI/CD pipeline build jobs, it is suggested to use Docker containers. Terraform provides official Docker images that you can use. If you ever change the CI/CD server, the same containerized toolchain moves with you.

Before deploying infrastructure to the production environment, you can also test the configuration in Docker containers, which are very easy to deploy. By combining Terraform and Docker, you get portable, reusable, repeatable infrastructure.
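Running Terraform from the official image might look like this (a sketch; hashicorp/terraform is the official image, and the tag and mount path are illustrative):

```shell
# mount the current project into the container and run terraform there;
# the image's entrypoint is the terraform binary itself
docker run --rm -v $(pwd):/workspace -w /workspace hashicorp/terraform:0.12.0 plan
```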


I hope these best practices help you write better Terraform configurations. Go ahead and start implementing them in your Terraform projects for better results.

  • Avi
    Avi is a tech enthusiast with expertise in trending technologies such as DevOps, Cloud Computing, Big Data and many more. He is passionate about learning cutting-edge technologies and sharing his knowledge with others through…
