Learn Terraform: The Ultimate terraform tutorial [PART-2]

In the previous guide, Learn Terraform: The Ultimate terraform tutorial [PART-1], you got a jump start into the Terraform world; now it's time to gain the more advanced knowledge of Terraform that you need to become a Terraform pro.

In this Learn Terraform: The Ultimate terraform tutorial [PART-2] guide, you will learn more advanced Terraform concepts such as the terraform lifecycle, terraform functions, terraform modules, terraform provisioners, the terraform init, terraform plan, and terraform apply commands, and many more.

Without further delay, let’s get into it.


What are Terraform modules?

Terraform modules contain the Terraform configuration files that may manage a single resource or a group of resources. For example, if you manage a single resource in a single Terraform configuration file, that is also a Terraform module, and if you manage multiple resources defined across different files that are later combined in one place, that is also known as a Terraform module or a root module.

A Terraform root module can have multiple individual child modules, data blocks, resource blocks, and so on. To call a child module, you will need to explicitly define the location of the child module using the source argument as shown below.

  • In the below code the EFS module lives in the modules/EFS subdirectory of the current directory, so you define the local path as ./modules/EFS.
module "efs" {                            # Module and Label is efs
  source               = "./modules/EFS"  # Define the Path of Child Module                             
  subnets              = var.subnet_ids
  efs_file_system_name = var.efs_file_system_name
  security_groups      = [module.SG.efs_sg_id]
  role_arn             = var.role_arn
}
  • In some cases the modules are stored in the Terraform Registry, GitHub, Bitbucket, a Mercurial repository, an S3 bucket, etc., and to use these repositories as your source you need to declare them as shown below.
module "mymodule1" {                              # Local Path located  Module
  source = "./consul"
}

module "mymodule2" {                              # Terraform Registry located Module
  source = ".hasicorp/consul/aws"
  version = "0.1.0"
}

module "mymodule3" {                              # GIT located  Module
  source = "github.com/automateinfra/"
}

module "mymodule4" {                              # Mercurial located  Module
  source = "hg::https://automateinfra.com/vpc.hg"
}

module "mymodule5" {                               # S3 Bucket located  Module
  source = "s3::https://s3-eu-west-1.amazonaws.com/vpc.zip"
}
The diagram displaying the root module (module1 and module2) containing the child modules such as (ec2, rds, s3, etc.)
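
For the module.SG.efs_sg_id reference in the first example above to work, the child module has to declare a matching output; a minimal sketch of what ./modules/SG might contain (the security group resource name is an assumption, not part of the original example):

# ./modules/SG/outputs.tf  (illustrative sketch)

output "efs_sg_id" {
  value = aws_security_group.efs_sg.id   # Assumed security group resource defined inside the child module
}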

Terraform provisioner

Did you know that Terraform allows you to perform actions on your local machine or on a remote machine, such as running a command on the local machine, copying files from the local to a remote machine or vice versa, passing data into virtual machines, etc.? All of this can be done by using a Terraform provisioner.

Terraform provisioners allow you to pass data into a resource that cannot be passed when creating the resource. Multiple Terraform provisioners can be specified within a resource block, and they are executed in the order they're defined in the configuration file.

The terraform provisioners interact with remote servers over SSH or WinRM. Most cloud computing platforms provide mechanisms to pass data to instances at the time of their creation such that the data is immediately available on system boot. Still, you can pass the data with Terraform provisioners even after creating the resource.

Terraform provisioners allow you to declare conditions such as when = destroy and on_failure = continue, and if you wish to run Terraform provisioners that aren't directly associated with a specific resource, use null_resource.
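
As a quick illustration (not part of the original example), a null_resource can carry a provisioner that only runs at destroy time and keeps going if the command fails:

resource "null_resource" "cleanup" {
  provisioner "local-exec" {
    when       = destroy
    on_failure = continue
    command    = "echo 'resources are being destroyed' > cleanup.log"   # Illustrative command only
  }
}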

Let’s look at the example below to declare multiple terraform provisioners.

  • The below code creates two resources: resource1 creates an AWS EC2 instance, and the other works with Terraform provisioners and performs actions on the AWS EC2 instance, such as copying the Apache installation instructions from the local machine to the remote machine and then using that file to install Apache on the AWS EC2 instance.

resource "aws_instance" "resource1" {
  instance_type = "t2.micro"
  ami           = "ami-9876"
  timeouts {                     # Customize your operations longevity
   create = "60m"
   delete = "2h"
   }
}

resource "aws_instance" "resource2" {

  provisioner "local-exec" {
    command = "echo 'Automateinfra.com' >text.txt"
  }
  provisioner "file" {
    source      = "text.txt"
    destination = "/tmp/text.txt"
  }
  provisioner "remote-exec" {
    inline = [
      "apt install apache2 -f /tmp/text.txt",
    ]
  }
}


Terraform Lifecycle

The Terraform lifecycle defines how resources should be treated, such as ignoring changes to tags or preventing the destruction of the infrastructure.

There are mainly three arguments that you can declare within the Terraform lifecycle:

  1. create_before_destroy: By default, Terraform destroys the existing object and then creates a new replacement object, but with the create_before_destroy argument within the terraform lifecycle, the new replacement object is created first, and then the legacy or prior object is destroyed.
  2. prevent_destroy: Terraform rejects any plan that would destroy the existing object if you declare prevent_destroy within the terraform lifecycle (see the sketch after the example below).
  3. ignore_changes: When you execute Terraform commands, if there are any differences or changes required in the infrastructure, Terraform informs you by default; however, if you need to ignore certain changes, consider using ignore_changes inside the terraform lifecycle.
  • In the below code, aws_instance will ignore any tag changes for the instance, and for azurerm_resource_group, the new resource group is created first and the old one is destroyed once the new replacement is ready.
resource "aws_instance" "automate" {
  lifecycle {
    ignore_changes = [
      tags,
    ]
  }
}

resource "azurerm_resource_group" "automate" {
  lifecycle {
    create_before_destroy = true
  }
}
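
The prevent_destroy argument mentioned above works the same way; a minimal sketch (the S3 bucket resource and name are purely illustrative):

resource "aws_s3_bucket" "important" {
  bucket = "my-important-bucket"   # Illustrative bucket name

  lifecycle {
    prevent_destroy = true         # Any plan that would destroy this bucket is rejected with an error
  }
}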

Terraform jsonencode example with Terraform json

If you need to generate JSON from within your Terraform code, consider using the terraform jsonencode function. This is a quick section about terraform jsonencode, so let's look at a basic Terraform jsonencode example with Terraform json.

  • The below code creates an IAM role policy in which you are defining the policy statement in json format.
resource "aws_iam_role_policy" "example" {
  name   = "example"
  role   = aws_iam_role.example.name
  policy = jsonencode({
    "Statement" = [{
      # This policy allows software running on the EC2 instance to access the S3 API
      "Action" = "s3:*",
      "Effect" = "Allow",
    }],
  })
}

Terraform locals

Terraform locals are values that are declared once but can be referred to multiple times in resource or module blocks without repeating them.

Terraform locals help you decrease the number of lines of code and reduce repetition.

locals {                                         # Declaring the set of related locals in a single block
  instance = "t2.micro"
  name     = "myinstance"
}

locals {                                         # Using the Local values 
  common_tags = {
    instance_type = local.instance
    instance_name = local.name
  }
}

resource "aws_instance" "instance1" {            # Using the newly created Local values
  tags = local.common_tags
}

resource "aws_instance" "instance2" {             # Using the newly created Local values
  tags = local.common_tags
}

Terraform conditional expression

There are multiple times when you will encounter conditional expressions in Terraform. Let's look at some important Terraform conditional expression examples below, which will help you whenever you use Terraform. Let's get into it.

  • Below are examples of how to retrieve outputs with different conditions.
aws_instance.myinstance.id      # This returns the ID of the EC2 instance.
aws_instance.myinstance[0].id   # This returns the ID of the first EC2 instance.
aws_instance.myinstance[1].id   # This returns the ID of the second EC2 instance.
aws_instance.myinstance[*].id   # This returns the IDs of all the EC2 instances.
  • Now, let us see a few more complex examples where different conditions are applied to retrieve outputs.
[for value in aws_instance.myinstance:value.id]    # Returns all instance values by their ids.
var.a != "auto" ? var.a : "default-a"              # if var.a is auto then use var.a else default-a
[for a in var.list : a.instance[0].name]           # var.list[*].instance[0].name
[for a in var.list: upper(a)]                      # iterates over each item in var.list and lists upper case 
{for a in var.list : a => upper(a)}     # maps each original value to its upper case, e.g. {"a" = "A", "c" = "C"}

Terraform dynamic block conditional

A Terraform dynamic block is used when a resource or module block needs repeated nested blocks whose number or content cannot be written statically and instead depends on a separate collection, such as a variable, a local value, or the outputs of other resources.

For example application = "${aws_elastic_beanstalk_application.tftest.name}" .

Also, while creating a resource, you cannot simply repeat an argument such as name or value multiple times, so when you need to emit a repeated nested block (such as setting), you can use a dynamic block. Below is a basic example of a dynamic setting block.

resource "aws_elastic_beanstalk_environment" "tfenvtest" {
  name                = "tf-test-name"
  application         = "${aws_elastic_beanstalk_application.tftest.name}"
  solution_stack_name = "64bit Amazon Linux 2018.03 v2.11.4 running Go 1.12.6"

  dynamic "setting" {
    for_each = var.settings
    content {
      namespace = setting.value["namespace"]
      name = setting.value["name"]
      value = setting.value["value"]
    }
  }
}
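
The var.settings collection iterated above is not shown in the original snippet; one plausible way to declare it (shape and values are assumptions) is:

variable "settings" {
  type = list(object({
    namespace = string
    name      = string
    value     = string
  }))
  default = [
    {
      namespace = "aws:autoscaling:asg"   # Illustrative setting only
      name      = "MinSize"
      value     = "1"
    }
  ]
}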

Terraform functions

Terraform includes multiple functions, also known as built-in functions, that you can call from within expressions to transform and combine values. The syntax for a function call is the function name followed by comma-separated arguments in parentheses: min, join, element, jsonencode, etc.

min(2,3,4)                                 # The output of this function is 2

join("","","hello", "Automate", "infra")   # The output of this function is hello, Automate , infra

element(["a", "b", "c"], length(["a", "b", "c"])-1)   # The output of this function is c

lookup({a="ay", b="bee"}, "c", "unknown?")          # The output of this function is unknown

jsonencode({"hello"="Automate"})          # The output of this function is {"hello":"Automate"}

jsondecode("{\"hello\": \"Automate\"}")   # The output of this function is { "hello"="Automate"}                                                           
                                                                 

Terraform can function

The Terraform can function evaluates the given expression and returns a boolean value (true if the expression is valid, false if evaluating it produces any errors). This special function can catch errors produced when evaluating its argument.

locals {
  instance = {
    myinstance1 = "t2.micro"
    myinstance2 = "t2.medium"
  }
}

can(local.instance.myinstance1) #  This returns true
can(local.instance.myinstance3) #  This returns false

variable "time" {
  validation {
     condition  = can(formatdate("","var.time"))   # Checking the 2nd argument
  }
}

Terraform try function

Terraform try function evaluates all of its argument expressions in turn and returns the result of the first one that does not produce any errors.

As you can check below, the Terraform try function evaluates the expressions and returns the first one that succeeds: t2.micro in the first case, and the fallback "second-option" in the second case.

locals {
  instance = {
    myinstance1 = "t2.micro"
    myinstance2 = "t2.medium"
  }
}

try(local.instance.myinstance1, "second-option") # This is available in local so output is t2.micro
try(local.instance.myinstance3, "second-option") # This is not available in local so output is second-option

Terraform templatefile function

The Terraform templatefile function reads the file at a given path and renders the content of the file as a template using the supplied template variables.

Syntax: templatefile(path,vars)
  • Let's understand an example of the Terraform templatefile function with lists. Given below is the backend.tpl template file. When you execute the templatefile() function, it renders backend.tpl and substitutes each address and the port into the backend line.
# backend.tpl

%{ for addr in ipaddr ~}     # Loop directive
backend ${addr}:${port}      # Prints this line for each address
%{ endfor ~}                 # Closes the loop directive
templatefile("${path.module}/backend.tpl", { port = 8080, ipaddr = ["1.1.1.1", "2.2.2.2"] })

backend 1.1.1.1:8080
backend 2.2.2.2:8080
  • Let's check out another example of the Terraform templatefile function, but this time with maps. When you execute the templatefile() function, it renders backend.tmpl and prints a set line for each key/value pair in the config map (a = automate and i = infra).
# backend.tmpl

%{ for key,value in config }
set ${key} = ${value}
%{ endfor ~}
templatefile("${path.module}/backend.tmpl,
     { 
        config = {
              "a" = "automate"
              "i" = "infra"
           } 
      })

set a = automate
set i = infra

Terraform data source

A Terraform data source allows you to fetch data defined outside of Terraform, defined by another separate Terraform configuration, or modified by functions. After fetching the data, you can use it as input for other resources.

Let's learn with a basic example. In the below code, you will notice that a data block fetches the details of an existing instance using the provided instance_id.

data "aws_instance" "my-machine1" {          # Fetching the instance
  instance_id = "i-0a0269cf952a02832"
  }
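
Once fetched, the data source's attributes can be referenced elsewhere with the data. prefix, for example in an output:

output "fetched_instance_type" {
  value = data.aws_instance.my-machine1.instance_type   # Reads an attribute of the fetched instance
}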

Terraform State file

The main function of the Terraform state file is to store the Terraform state, which contains the bindings between objects in remote systems and the resources defined in your Terraform configuration files. The Terraform state file is by default stored locally on the machine where you run the Terraform commands, with the name terraform.tfstate.

The Terraform state is stored in JSON format. When you run the terraform show or terraform output command, it reads the data from the Terraform state file. Also, you can import existing infrastructure that you created by other means, such as manually or using scripts, into the Terraform state file.

When you work alone, it is OK to keep the Terraform state file on your local machine, but when you work in a team, consider storing it in a shared location such as AWS S3. While Terraform is writing to the state file, for example during terraform apply, the state file gets locked, which prevents someone else from writing to it simultaneously and avoids it being corrupted.

You can store your remote state file in S3, Terraform Cloud, HashiCorp Consul, Google Cloud Storage, Azure Blob Storage, etc.


Terraform backend [terraform backend s3]

A Terraform backend is the location where the Terraform state file resides. The Terraform state file tracks the details of all resources that were provisioned or will be provisioned with Terraform commands such as terraform plan or terraform apply.

There are two types of backend: a local backend, which resides wherever you run Terraform from, be it a Linux machine, a Windows machine, or anywhere else, and a remote backend, which could be a SaaS-based URL or a storage location such as an AWS S3 bucket.

Let's take a look at how you can configure a local backend or a remote backend with terraform backend s3.

# Local Backend
# Whenever the state file is created or updated, it is stored on the local machine.

terraform {
  backend "local" {
    path = "relative/path/to/terraform.tfstate"
  }
}

# Configuring Terraform to use the remote terraform backend s3.
# Whenever the state file is created or updated, it is stored in the AWS S3 bucket.

terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-2"
  }
}

Terraform Command Line or Terraform CLI

The Terraform command-line interface, or Terraform CLI, is used via the terraform command, which accepts a variety of subcommands such as terraform init or terraform plan. Below is a list of the commonly used subcommands.

  1. terraform init: It initializes the providers, module version requirements, and backend configuration.
    • terraform init -input=true → Ask for input if necessary; otherwise the run will fail.
    • terraform init -lock=false → Disable locking of the state file (not recommended).
    • terraform init -upgrade → Upgrade modules and plugins.
  2. terraform get: It only downloads and initializes the modules.
  3. terraform plan: Determines the state of all resources declared in the configuration and compares it with the real infrastructure. It uses the Terraform state file data for the comparison and the provider's API to perform this step.
    • terraform plan -compact-warnings → Show only a summary of warnings.
    • terraform plan -out=path → Save the execution plan to a file.
    • terraform plan -var-file=abc.tfvars → Use a specific .tfvars file.
  4. terraform apply: Applies the changes in the specific provider's account.
    • terraform apply -backup=path → Path to the backup file.
    • terraform apply -lock=true → Locks the state file.
    • terraform apply -state=path → Path to the state file.
    • terraform apply -var-file=abc.tfvars
    • terraform apply -auto-approve
  5. terraform destroy: It will destroy Terraform-managed infrastructure.
    • terraform destroy -auto-approve
  6. terraform console: Provides console for evaluating expressions
    • terraform console -state=path → Path to local state file
  7. terraform fmt: Formats the configuration files into the canonical format.
    • terraform fmt -check → Checks the input format.
    • terraform fmt -recursive → Formats subdirectories as well.
    • terraform fmt -diff → Displays the differences.
  8. terraform validate: It validates the configuration files.
    • terraform validate -json → Output in JSON format.
  9. terraform graph: It generates a visual representation of the execution plan.
    • terraform graph -draw-cycles
    • terraform graph -type=plan
  10. terraform output: Extracts the values of output variables from the state file.
    • terraform output -json
    • terraform output -state=path
  11. terraform state list: It lists all the resources present in the state file.
    • terraform state list -id=id → (ID of the resource)
    • terraform state list -state=path → (Path of the state file)
  12. terraform state show: It shows the attributes of a specific resource.
    • terraform state show -state=path
  13. terraform import: It imports an existing resource into Terraform (see the example after this list).
  14. terraform refresh: It reconciles the Terraform state file. If any resources you created using Terraform were modified manually or by any other means, refresh will sync them back into the state file.
  15. terraform state rm: Removes a resource from the state file.
  16. terraform state mv: Moves a resource within the state file.
  17. terraform state pull: Manually download and output the state from the remote state.
  18. terraform state push: Manually upload a local state file to the remote state.
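
For example, terraform import takes a resource address from your configuration and the real-world ID of the object (the instance ID below is the same illustrative one used earlier in this guide):

terraform import aws_instance.my-machine i-0a0269cf952a02832   # Import an existing EC2 instance into the state file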

Quick Glance of Terraform CLI Commands

Initialize      | Provision         | Modify Config       | Check infra          | Manipulate State
terraform init  | terraform plan    | terraform fmt       | terraform graph      | terraform state list
terraform get   | terraform apply   | terraform validate  | terraform output     | terraform state show
                | terraform destroy | terraform console   | terraform state show | terraform state mv/rm
                |                   |                     | terraform state list | terraform state pull/push
Terraform CLI commands

Terraform ec2 instance example (terraform aws ec2)

Let’s wrap up this ultimate guide with a basic Terraform ec2 instance example or terraform aws ec2.

  • Assuming you already have Terraform installed on your machine.
  • First create a folder of your choice in any directory and a file named main.tf inside it and copy/paste the below content.
# This is main.tf terraform file.

resource "aws_instance" "my-machine" {
  ami = "ami-0a91cd140a1fc148a"
  for_each  = {
      key1 = "t2.micro"
	  key2 = "t2.medium"
   }
    instance_type      = each.value
	key_name       = each.key
    tags =  {
	   Name  = each.value
	}
}

resource "aws_iam_user" "accounts" {
  for_each = toset( ["Account11", "Account12", "Account13", "Account14"] )
  name     = each.key
}
  • Create another file vars.tf inside the same folder and copy/paste the below content.

# This is vars.tf terraform file.

variable "tag_ec2" {
  type = list(string)
  default = ["ec21a","ec21b"]
}
  • Finally, create another file output.tf again in the same folder and copy/paste the below content.
# This is  output.tf terraform file

output "aws_instance" {
   value = "${aws_instance.my-machine.*.id}"
}
output "aws_iam_user" {
   value = "${aws_iam_user.accounts.*.name}"
}

Make sure your machine has Terraform role attached or Terraform credentials configured properly before you run the below Terraform commands.

terraform -version  # It gives Terraform Version information
Finding Terraform version
  • Now initialize Terraform by running the terraform init command in the same working directory where you have all the above Terraform configuration files.
terraform init   # To initialize the terraform 
Initializing the terraform using terraform init command
  • Next, run the terraform plan command. This command provides a blueprint of the resources that will be deployed before actually deploying them.
terraform plan   
Running the terraform plan command
terraform validate   # To validate all terraform configuration files.
Running the terraform validate command
  • Now run the terraform show command, which provides human-readable output from the state file or a saved plan file.
terraform show   # To provide human-readable output from a state or plan file.
Running the terraform show command
  • To list all the resources within the Terraform state file, run the terraform state list command.
terraform state list 
Running the terraform state list command
terraform apply  # To Actually apply the resources 
Running the terraform apply command
  • To get a graphical view of all the resources in the configuration files, run the terraform graph command.
terraform graph  
Running the terraform graph command
  • To destroy the resources that were provisioned using Terraform, run the terraform destroy command.
terraform destroy   # Destroys all your resources or the one which you specified 
Running the terraform destroy command


Conclusion

Now that you have learned everything you should know about Terraform, you are surely going to be the Terraform leader in your upcoming projects, team, or organization.

So with that, what are you planning to automate using Terraform in your next adventure?

Learn Terraform: The Ultimate terraform tutorial [PART-1]

If you are looking to learn Terraform, then you are in the right place; this Learn Terraform: The Ultimate terraform tutorial guide will help you gain the complete knowledge that you need, from the basics to becoming a Terraform pro.

Terraform is an infrastructure as code tool to build and change infrastructure effectively and in a simpler way. With Terraform, you can work with various cloud providers such as Amazon AWS, Oracle, Microsoft Azure, Google Cloud, and many more.

Let’s get started with Learn Terraform: The Ultimate terraform tutorial without further delay.

Prerequisites

What is terraform?

Let’s kick off this tutorial with What is Terraform? Terraform is a tool for building, versioning, and updating the infrastructure. It is written in GO Language, and the syntax language of Terraform configuration files is HCL, i.e., HashiCorp Configuration Language, which is way easier than YAML or JSON.

Terraform has been in use for quite a while now and has several key features that make this tool more powerful such as

  • Infrastructure as code: Terraform configuration files are written in a high-level language that is easy for humans to understand.
  • Execution Plan: Terraform provides you with in-depth details of the execution plan, such as what Terraform will provision and which resources it will create, before deploying the actual code.
  • Resource Graph: The graph is an easy and quick way to identify and understand the resources Terraform manages and their dependencies.

Terraform files and Terraform directory structure

Now that you have a basic idea of Terraform and some of its key features, let's dive into the Terraform files and Terraform directory structure that will help you write Terraform configuration files later in this tutorial.

Terraform code, that is, Terraform configuration files, is written in a tree-like structure to ease the overall understanding of the code, using the .tf, .tf.json, or .tfvars format. These configuration files are placed inside Terraform modules.

Terraform modules are at the top level of the hierarchy where configuration files reside. Terraform modules can further call other child Terraform modules from local directories, from anywhere on disk, or from the Terraform Registry.

Terraform modules folder structure

A Terraform module mainly contains five files, main.tf, vars.tf, providers.tf, output.tf, and terraform.tfvars, along with the .terraform directory.

  1. main.tf – The Terraform main.tf file contains the main code where you define which resources you need to build, update, or manage.
  2. vars.tf – The Terraform vars.tf file contains the input variables, which are customizable and are used inside the main.tf configuration file.
  3. output.tf – The Terraform output.tf file is where you declare which output parameters you wish to fetch after Terraform has been executed, that is, after the terraform apply command.
  4. .terraform – This directory contains cached provider and module plugins and also contains the last known backend configuration. It is managed by Terraform and created after you run the terraform init command.
  5. terraform.tfvars – The terraform.tfvars file contains the values that need to be passed for the variables that are referred to in main.tf and actually declared in the vars.tf file.
  6. providers.tf – The providers.tf file is the most important file, where you define your Terraform providers, such as the terraform aws provider or terraform azure provider, to authenticate with the cloud provider.


How to declare Terraform variables

In the previous section, you learned Terraform files and the Terraform directory structure. Moving further, it is important to learn how to declare Terraform variables in the Terraform configuration file (vars.tf).

Declaring the variables allows you to share modules across different Terraform configurations, making your module reusable. There are different types of variables used in Terraform, such as boolean, list, string, maps, etc. Let’s see how different types of terraform variables are declared.

  • Each input variable in the module must be declared using a variable block as shown below.
  • The label after the variable keyword is a name for the variable, which should be unique within the same module
  • The following arguments can be used within the variable block:
    • default – A default value makes the variable optional; if no value is passed, the default declared in this block is used.
    • type – This argument declares the value type.
    • description – You can provide the description of the input variable.
    • validation – To define validation rules, if any.
    • sensitive – If you mark the variable as sensitive, then Terraform will not print its value while executing.
    • nullable – Specify whether the variable can be assigned a null value.
variable "variable1" {                        
  type        = bool
  default     = false
  description = "boolean type variable"
}

variable  "variable2" {                       
   type    = map
   default = {
      us-east-1 = "image-1"
      us-east-2 = "image2"
    }

   description = "map type  variable"
}

variable "variable3" {                   
  type    = list(string)
  default = []
  description = "list type variable"
}

variable "variable4" {
  type    = string
  default = "hello"
  description = "String type variable"
}                        

variable "variable5" {                        
 type =  list(object({
  instancetype        = string
  minsize             = number
  maxsize             = number
  private_subnets     = list(string)
  elb_private_subnets = list(string)
            }))

 description = "List(Object) type variable"
}


variable "variable6" {                      
 type = map(object({
  instancetype        = string
  minsize             = number
  maxsize             = number
  private_subnets     = list(string)
  elb_private_subnets = list(string)
  }))
 description = "Map(object) type variable"
}


variable "variable7" {
  validation {
 # Condition 1 - Checks Length upto 4 char and Later
    condition = "length(var.image_id) > 4 && substring(var.image_id,0,4) == "ami-"
    condition = can(regex("^ami-",var.image_id)    
# Condition 2 - It checks Regular Expression and if any error it prints in terraform error_message =" Wrong Value" 
  }

  type = string
  description = "string type variable containing conditions"
}
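
Values for variables declared like the ones above are typically supplied through a terraform.tfvars file; a minimal sketch for variable2 and variable4 (the values are illustrative):

# terraform.tfvars (illustrative values)

variable2 = {
  us-east-1 = "image-1"
  us-east-2 = "image2"
}

variable4 = "hello terraform"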

Terraform loads variable values from the following sources, where later sources take precedence over earlier ones:

  1. Specifying environment variables like export TF_VAR_id='["id1","id2"]'
  2. Specifying the variables in the terraform.tfvars file
  3. Specifying the variables in the terraform.tfvars.json file
  4. Specifying the variables in the *.auto.tfvars or *.auto.tfvars.json file
  5. Specifying the variables on the command line with -var and -var-file options

How to declare Terraform Output Variables

In the previous section, you learned how to use Terraform variables in the Terraform configuration file. As learned earlier, modules contain one more important file: outputs.tf, which contains Terraform output variables.

  • In the below output.tf file you can see there are three different Terraform output variables:
  • output1 that will store and display the ARN of the instance after running the terraform apply command.
  • output2 that will store and display the public IP address of the instance after running the terraform apply command.
  • output3 that will store but, because of the sensitive argument, doesn't display the private IP address of the instance after running the terraform apply command.
# Output variable which will store the arn of instance and display after terraform apply command.

output "output1" {
  value = aws_instance.my-machine.arn
}

# Output variable which will store instance public IP and display after terraform apply command
 
output "output2" {
  value       = aws_instance.my-machine.public_ip
  description = "The public IP address of the instance."
}

output "output3" {
  value = aws_instance.server.private_ip
# Using sensitive to prevent Terraform from showing the ouput values in terrafom plan and apply command.  
  senstive = true                             
}
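
After terraform apply has finished, a single output can be read back from the state with the terraform output command, for example:

terraform output output2   # Prints the public IP stored in the output2 output variable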

How to declare Terraform resource block

You are doing great in learning the Terraform configuration files, but did you know your modules contain one more important file, main.tf, which allows you to create, update, and manage resources with Terraform, such as creating an AWS VPC? To manage a resource, you need to define it in a Terraform resource block.

# Below Code is a resource block in Terraform

resource "aws _vpc" "main" {    # <BLOCK TYPE> "<BLOCK LABEL>" "<BLOCK LABEL>" {
cidr_block = var.block          # <IDENTIFIER> =  <EXPRESSION>  #Argument (assigns value to name)
}                             

Declaring Terraform resource block in HCL format.

Now that you have an idea about the syntax of terraform resource block let’s check out an example where you will see resource creation using Terraform configuration file in HCL format.

  • The below code creates two resources: resource1 creates an AWS EC2 instance, and the other works with Terraform provisioners to install Apache on the EC2 instance. The timeouts block customizes how long certain operations are allowed to take.

There are some special arguments that can be used with resources such as depends_on, count, lifecycle, for_each and provider, and lastly provisioners.

resource "aws_instance" "resource1" {
  instance_type = "t2.micro"
  ami           = "ami-9876"
  timeouts {                          # Customize your operations longevity
   create = "60m"
   delete = "2h"
   }
}

resource "aws_instance" "resource2" {
  provisioner "local-exec" {
    command = "echo 'Automateinfra.com' >text.txt"
  }
  provisioner "file" {
    source      = "text.txt"
    destination = "/tmp/text.txt"
  }
  provisioner "remote-exec" {
    inline = [
      "apt install apache2 -f /tmp/text.txt",
    ]
  }
}

Declaring Terraform resource block in terraform JSON format.

Terraform language can also be expressed in terraform JSON syntax, which is harder for humans to read and edit but easier to generate and parse programmatically, as shown below.

  • The below example is the same one you previously created using the HCL configuration, but this time it uses the Terraform JSON syntax. Here also, the code creates two resources: resource1 → an AWS EC2 instance, while the other resource works with Terraform provisioners to install Apache on the EC2 instance.
{
  "resource": {
    "aws_instance": {
      "resource1": {
        "instance_type": "t2.micro",
        "ami": "ami-9876"
      }
    }
  }
}


{
  "resource": {
    "aws_instance": {
      "resource2": {
        "provisioner": [
          {
            "local-exec": {
              "command": "echo 'Automateinfra.com' >text.txt"
            }
          },
          {
            "file": {
              "source": "example.txt",
              "destination": "/tmp/text.txt"
            }
          },
          {
            "remote-exec": {
              "inline": ["apt install apache2 -f tmp/text.txt"]
            }
          }
        ]
      }
    }
  }
}

Declaring Terraform depends_on

Now that you have learned how to declare a Terraform resource block in HCL format, note that within the resource block, as discussed earlier, you can declare special arguments such as depends_on. Let's learn how to use the terraform depends_on meta argument.

Use the depends_on meta-argument to handle hidden resource or module dependencies that Terraform can’t automatically handle.

  • In the below example, while creating the aws_rds_cluster resource you need information about the aws_db_subnet_group, so aws_rds_cluster is dependent on it, and in order to specify that dependency you declare the depends_on meta argument within aws_rds_cluster.
resource "aws_db_subnet_group" "dbsubg" {
    name = "${var.dbsubg}" 
    subnet_ids = "${var.subnet_ids}"
    tags = "${var.tag-dbsubnetgroup}"
}


# Component 4 - DB Cluster and DB Instance

resource "aws_rds_cluster" "main" {
  depends_on                   = [aws_db_subnet_group.dbsubg]    # This RDS cluster is dependent on Subnet Group


Using Terraform count meta argument

Another special argument is terraform count.

By default, Terraform creates a single resource defined in a Terraform resource block. But at times, you want to manage multiple objects of the same kind, such as creating four AWS EC2 instances of the same type in the AWS cloud, without writing a separate block for each instance. Let's learn how to use the Terraform count meta argument.

  • In the below code, Terraform will create 4 instances of the t2.micro type with the ami-0742a572c2ce45ebf AMI, as shown below.
resource "aws_instance" "my-machine" {
  count = 4 
  
  ami = "ami-0742a572c2ce45ebf"
  instance_type = "t2.micro"
  tags = {
    Name = "my-machine-${count.index}"
         }
}
Using Terraform count to create four ec2 instances
  • Similarly in the below code terraform will create 4 AWS IAM users named user1, user2, user3 and user4.
resource "aws_iam_user" "users" {
  count = length(var.user_name)
  name = var.user_name[count.index]
}

variable "user_name" {
  type = list(string)
  default = ["user1","user2","user3","user4"]
}
Using Terraform count to create four IAM users

Terraform for_each module

Earlier in the previous section, you learned that terraform count is used to create multiple resources with the same characteristics. If you need to create multiple resources in one go but with different parameters for each, then terraform for_each is for you.

The for_each meta-argument accepts a map or a set of strings and creates an instance for each item in that map or set. Let’s look at the example below to better understand terraform for_each.

Example-1 Terraform for_each module

  • In the below example, you will notice for_each contains two keys (key1 and key2) and two values (t2.micro and t2.medium) inside the for each loop. When the code is executed then for each loop will create:
    • One instance with key as “key1” and instance type as “t2.micro”
    • Another instance with key as “key2” and instance type as “t2.medium”.
  • Also, the below code will create different IAM users with the names Account1, Account2, Account3, and Account4.
resource "aws_instance" "my-machine" {
  ami = "ami-0a91cd140a1fc148a"
  for_each  = {
      key1 = "t2.micro"
      key2 = "t2.medium"
   }
  instance_type    = each.value	
  key_name         = each.key
  tags =  {
   Name = each.value 
	}
}

resource "aws_iam_user" "accounts" {
  for_each = toset( ["Account1", "Account2", "Account3", "Account4"] )
  name     = each.key
}
Terraform for_each module example 1 to launch ec2 instances and IAM users

Example-2 Terraform for_each module

  • In the below example, you will notice for_each is set to a variable of type map(object) which has all the required arguments, such as instance_type, key_name, associate_public_ip_address, and tags. When the code is executed, each of these arguments gets a specific value for every instance.
resource "aws_instance" "web1" {
  ami                         = "ami-0a91cd140a1fc148a"
  for_each                    = var.myinstance
  instance_type               = each.value["instance_type"]
  key_name                    = each.value["key_name"]
  associate_public_ip_address = each.value["associate_public_ip_address"]
  tags                        = each.value["tags"]
}

variable "myinstance" {
  type = map(object({
    instance_type               = string
    key_name                    = string
    associate_public_ip_address = bool
    tags                        = map(string)
  }))
}

myinstance = {
  Instance1 = {
    instance_type               = "t2.micro"
    key_name                    = "key1"
    associate_public_ip_address = true
    tags = {
      Name = "Instance1"
    }
  },
  Instance2 = {
    instance_type               = "t2.medium"
    key_name                    = "key2"
    associate_public_ip_address = true
    tags = {
      Name = "Instance2"
    }
  }
}
Terraform for_each module example 2 to launch multiple ec2 instances

Example-3 Terraform for_each module

  • In the below example, similarly, you will notice that instance_type uses toset, which contains two values (t2.micro and t2.medium). When the code is executed, the instance type takes each value from the set defined inside toset.
locals {
  instance_type = toset([
    "t2.micro",
    "t2.medium",
  ])
}

resource "aws_instance" "server" {
  for_each      = local.instance_type

  ami           = "ami-0a91cd140a1fc148a"
  instance_type = each.key
  
  tags = {
    Name = "Ubuntu-${each.key}"
  }
}
Terraform for_each module example 3 to launch multiple ec2 instances

Terraform provider

Terraform depends on plugins to connect to or interact with cloud providers or API services, and to do this, you need a Terraform provider. There are several Terraform providers stored in the Terraform Registry, such as the terraform aws provider or the terraform azure provider.

Terraform configurations must declare which providers they require so that Terraform can install and use them. Some providers require configuration (like endpoint URLs or cloud regions) before use. Some providers offer local utilities, like generating random strings or passwords. You can create single or multiple configurations for a single provider, and you can have multiple providers in your code.

Providers are stored inside the Terraform Registry; some are in-house providers (companies create their own providers). Providers are written in the Go language.

Let’s learn how to define a single provider and then define the provider’s configurations inside terraform.

# Defining the Provider requirement 

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
    postgresql = {
      source = "cyrilgdn/postgresql"
    }
  }
  required_version = ">= 0.13"   # New way to define version 
}


# Defining the Provider Configurations and names are Local here i.e aws,postgres,random

provider "aws" {
  assume_role {
  role_arn = var.role_arn
  }
  region = var.region
}

provider "random" {}

provider "postgresql" {
  host                 = aws_rds_cluster.main.endpoint
  username             = username
  password             = password
}

Defining multiple aws providers terraform

In the previous section, you learned how to use the Terraform aws provider to connect to AWS resources, which is great, but with that, you can only work in one particular AWS region. However, consider using multiple aws provider configurations if you need to work with multiple regions.

  • To create multiple configurations for a given provider, you should include multiple provider blocks with the same provider name, but to use the additional non-default configuration, use the alias meta-argument as shown below.
  • In the below code, there is one aws Terraform provider named aws that works with the us-east-1 region by default, and if you need to work with another region, consider declaring the same provider again but with a different region and an alias argument.
  • For creating a resource in the us-west-1 region, set provider = aws.<alias-name> in the resource block as shown below.
# Defining Default provider block with region us-east-1

provider "aws" {      
  region = "us-east-1"
}

# Name of the provider is same that is aws with region us-west-1 thats why used ALIAS

provider "aws" {    
  alias = "west"
  region = "us-west-1"
}

# No need to define default Provider here if using Default Provider 

resource "aws_instance" "resource-us-east-1" {}  

# Define Alias Provider here to use west region  

resource "aws_instance" "resource-us-west-1" {    
  provider = aws.west
}
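
If a child module should also use the aliased provider, you can pass it explicitly with the providers argument in the module block; a minimal sketch (the module path is illustrative):

module "vpc_west" {
  source = "./modules/vpc"   # Illustrative local module path

  providers = {
    aws = aws.west           # The child module's default aws provider now points to the west alias
  }
}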

Quick note on Terraform versions: In Terraform v0.12 there was no way to specify a provider source, but from Terraform v0.13 onwards you have the option to add a source address.

# This is how you define provider in Terraform v0.13 and onwards
terraform {          
  required_providers {
    aws = {
      source = "hasicorp/aws"
      version = "~>1.0"
}}}

# This is how you define provider in Terraform v 0.12
terraform {               
  required_providers {
    aws = "~/>1.0"
}}


Conclusion

In this Ultimate Guide, you learned what Terraform is and what a Terraform provider is, and you understood how to declare the terraform aws provider and use it to interact with cloud services.

Now that you have gained a handful of knowledge on Terraform, continue with the PART-2 guide and become a Terraform pro.

Learn Terraform: The Ultimate terraform tutorial [PART-2]

How to Install and Setup Terraform on Windows Machine step by step

There are lots of automation tools and scripts available, and one of the finest tools to automate your infrastructure is Terraform, which is also known as infrastructure as code. Learn how to install and set up Terraform on a Windows machine step by step.

Table of Content

  1. What is Terraform ?
  2. Prerequisites
  3. How to Install Terraform on Windows 10 machine
  4. Creating an IAM user in AWS account with programmatic access
  5. Configuring the IAM user Credentials on Windows Machine
  6. Run Terraform commands from Windows machine
  7. Launch an EC2 instance using Terraform
  8. Conclusion

What is Terraform?

Terraform is a tool for building, versioning, and changing cloud infrastructure. Terraform is written in the Go language, and the syntax language of its configuration files is HCL, which stands for HashiCorp Configuration Language and is much easier than YAML or JSON.

Terraform has been in use for quite a while now. I would say it's an amazing tool to build and change infrastructure in a very effective and simple way. It's used with a variety of cloud providers such as Amazon AWS, Oracle, Microsoft Azure, Google Cloud, and many more. I hope you will love to learn it and utilize it.

Prerequisites

How to Install Terraform on a Windows machine

  • Open your favorite browser and download the appropriate version of Terraform from HashiCorp's download page. This tutorial downloads Terraform version 0.13.0, but you will find the latest versions on the HashiCorp download page.
  • Make a folder on your C:\ drive where you can put the Terraform executable, something like C:\tools where you keep binaries.
  • Extract the zip file to the folder C:\tools
  • Now Open your Start Menu and type in “environment” and the first thing that comes up should be Edit the System Environment Variables option. Click on that and you should see this window.
  • Now, under System Variables, look for Path and edit it.
  • Click New and add the folder path where terraform.exe is located to the bottom of the list. Adding terraform.exe to PATH allows you to execute the terraform command from anywhere in the system.
  • Click OK on each of the menus.
  • Now, Open Command Prompt or PowerShell to check if terraform is properly added in PATH by running the command terraform from any location.
On Windows Machine command Prompt
On Windows Machine PowerShell
  • Verify the installation was successful by entering terraform --version. If it returns a version, you’re good to go.

Creating an IAM user in AWS account with programmatic access

For Terraform to connect to AWS services, you need an IAM user with an access key ID and secret key in the AWS account, which you will configure on your local machine to connect to the AWS account.

There are two ways to connect to an AWS account: the first is providing a username and password on the AWS login page in the browser, and the other is to configure the access key ID and secret key on your machine and then use command-line tools to connect programmatically.

  1. Open your favorite web browser and navigate to the AWS Management Console and log in.
  2. While in the Console, click on the search bar at the top, search for ‘IAM’, and click on the IAM menu item.
  1. To create a user, click on Users → Add user, provide the user name myuser, and make sure to tick the Programmatic access checkbox in Access type, which enables an access key ID and secret access key; then hit the Permissions button.
  1. Now select the “Attach existing policies directly” option in the set permissions and look for the “Administrator” policy using filter policies in the search box. This policy will allow myuser to have full access to AWS services.
  1. Finally click on Create user.
  2. Now, the user is created successfully and you will see an option to download a .csv file. Download this file which contains IAM users i.e. myuser Access key ID and Secret access key which you will use later in the tutorial to connect to AWS service from your local machine.

Configuring the IAM user Credentials on Windows Machine

Now you have the IAM user myuser created. The next step is to set the downloaded myuser credentials on the local machine, which you will use to connect to AWS services via API calls.

  1. Create a new file, C:\Users\your_profile\.aws\credentials on your local machine.
  2. Next, Enter the Access key ID and Secret access key from the downloaded csv file into the credentials file in the same format and save the file.
[default]     # Profile Name
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = vIaGXXXXXXXXXXXXXXXXXXXX

The credentials file helps you set your profile. This way, you can create multiple profiles and avoid confusion while connecting to specific AWS accounts.

  1. Similarly, create another file C:\Users\your_profile\.aws\config in the same directory
  2. Next, add the “region” into the config file and make sure to add the name of the profile which you provided in the credentials file, and save the file. This file allows you to work with a specific region.
[default]   # Profile Name
region = us-east-2
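
If you later add a named profile (for example a hypothetical [dev] section in both files), the Terraform AWS provider can be pointed at it explicitly; a minimal sketch:

provider "aws" {
  profile = "dev"        # Hypothetical named profile from the credentials/config files
  region  = "us-east-2"
}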

Run Terraform commands from Windows machine

By now, you have already installed Terraform on your Windows machine and configured the IAM user (myuser) credentials so that Terraform can use them to connect to AWS services in the Amazon account.

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf: This file contains the code that creates or imports AWS resources.
  • vars.tf: This file defines the variable types and optionally sets default values.
  • output.tf: This file helps in generating the output of AWS resources. The output is generated after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables which we created in vars.tf.
  • provider.tf: This file is very important. You need to provide the details of the provider, such as AWS, Oracle, or Google, so that Terraform can communicate with that provider and then work with its resources.

Launch an EC2 Instance Using Terraform

In this demonstration, we will create a simple Amazon Web Service (AWS) EC2 instance and run Terraform commands on a Windows machine.

  • Create a folder on your desktop or any location on the Windows machine (I prefer the Desktop).
  • Now create a file main.tf inside the folder you’re in and paste the below content
resource "aws_instance" "my-machine" {  # Resource block to define what to create
  ami = var.ami           # ami is required as we need ami in order to create an instance
  instance_type = var.instance_type             # Similarly we need instance_type
}
  • Create one more file vars.tf inside the same folder and paste the below content
variable "ami" {       
# Declare the variable ami which you used in main.tf
  type = string      
}

variable "instance_type" {        
# Declare the variable instance_type used in main.tf
  type = string 
}

Next, selecting the instance type is important. Click here to see a list of the different instance types. To find the image ID (AMI), navigate to the Launch Instance Wizard and search for ubuntu in the search box to get all the Ubuntu image IDs. This tutorial will use the Ubuntu Server 18.04 LTS image.

  • Create one more file output.tf inside the same folder and paste the below content
output "ec2_arn" {
  value = aws_instance.my-machine.arn 
# Value depends on resource name and type ( same as that of main.tf)
}  
  • Create one more file provider.tf inside the same folder and paste the below content:
provider "aws" {     
# Defining the Provider Amazon  as we need to run this on AWS   
  region = "us-east-1"
}
  • Create one more file terraform.tfvars inside the same folder and paste the below content
ami = "ami-013f17f36f8b1fefb" 
instance_type = "t2.micro"
  • Now your files and code are ready for execution.
  • Initialize Terraform using the below command.
terraform init
  • Terraform initialized successfully; now it's time to see the plan, which is a kind of blueprint before deployment. We generally use plan to confirm whether the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, it's time to actually deploy the code using apply.
terraform apply

Great job, the Terraform commands executed successfully. Now we should have the EC2 instance launched in AWS.

It generally takes a minute or so to launch an instance, and yes, we can see that the instance has now launched successfully in the us-east-1 region as expected.

Conclusion

In this tutorial, you learned what Terraform is, how to install and set up Terraform on a Windows machine, and how to launch an EC2 instance in an AWS account using Terraform.

Keep Terraforming !!

Hope this tutorial will help you in understanding and setting up Terraform on Windows machines. Please share with your friends.

How to Launch AWS Elastic beanstalk using Terraform

If you want to scale instances, place a load balancer in front of them, host a website, and store all data in a database, nothing could be better than Amazon Elastic Beanstalk, which provides a common platform for all of this.

With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without having to worry about the infrastructure that runs those applications.

In this tutorial, we will learn how to set up Amazon Elastic Beanstalk using Terraform on AWS step by step and then upload the code to run a simple application.

Let’s get started.


What is AWS Elastic beanstalk?

AWS Elastic Beanstalk is one of the most widely used Amazon Web Services. It is a service that provides a platform for various languages such as Python, Go, Ruby, Java, .NET, and PHP for hosting applications.

The only thing you need to do in Elastic Beanstalk is upload your code, and the rest, such as scaling, load balancing, and monitoring, will be taken care of by Elastic Beanstalk itself.

Elastic Beanstalk makes the life of developers and cloud admins or sysadmins much easier compared to setting up each service individually and interlinking them. Some of the key benefits of AWS Elastic Beanstalk are:

  • It scales the applications up or down as per the required traffic.
  • As the infrastructure is managed and taken care of by AWS Elastic Beanstalk, developers and admins don't need to spend much time on it.
  • It is fast and easy to set up.
  • You can interlink it with lots of other AWS services of your own choice, or skip them, such as linking an application, classic, or network load balancer.

Prerequisites

  • An Ubuntu machine to run Terraform, preferably version 18.04 or later. If you don't have a machine, you can create an EC2 instance in your AWS account. It is recommended to have 4 GB of RAM and at least 5 GB of drive space.
  • The Ubuntu machine should have an IAM role attached with AWS Elastic Beanstalk creation permissions or admin rights, or an access key and secret key configured in the AWS CLI.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

How to Install Terraform on Ubuntu 18.04 LTS

Before writing the Terraform configuration file for AWS Elastic Beanstalk, let's install Terraform on an Ubuntu machine.

  • Login to Ubuntu machine using your favorite SSH Client.
  • Update your already existing system packages.
sudo apt update
  • Next, download the Terraform zip file into the opt directory.
wget https://releases.hashicorp.com/terraform/0.14.8/terraform_0.14.8_linux_amd64.zip
Downloading the terraform on ubuntu machine
  • Next, install the zip package, which is required to unzip the Terraform zip file that you just downloaded.
sudo apt-get install zip -y
  • Now, unzip the Terraform zip file that you just downloaded.
unzip terraform*.zip
  • Move the executable to the /usr/local/bin directory so that Terraform can be executed from any directory of the Ubuntu machine.
sudo mv terraform /usr/local/bin
  • Verify Terraform by running the terraform command and checking the Terraform version.
terraform               # To check if terraform is installed 
terraform -version      # To check the terraform version  
Checking the terraform command
  • This confirms that Terraform has been successfully installed on the Ubuntu 18.04 machine.
Checking the terraform version

Building Terraform configuration files for AWS Elastic beanstalk

Now that you have Terraform installed on your machine, it’s time to build the Terraform configuration files for AWS Elastic Beanstalk that you will use to launch it in the AWS Cloud.

Assuming you are still logged in to the Ubuntu machine:

  • Create a folder in the opt directory, name it terraform-elasticbeanstalk-demo, and switch to it.
mkdir /opt/terraform-elasticbeanstalk-demo
cd /opt/terraform-elasticbeanstalk-demo
  • Create a file named main.tf in the /opt/terraform-elasticbeanstalk-demo directory and copy/paste the below content into it. The below Terraform configuration creates the AWS Elastic Beanstalk application and environment that are required for the application to be deployed.
# Create elastic beanstalk application


resource "aws_elastic_beanstalk_application" "elasticapp" {
  name = var.elasticapp
}

# Create elastic beanstalk Environment

resource "aws_elastic_beanstalk_environment" "beanstalkappenv" {
  name                = var.beanstalkappenv
  application         = aws_elastic_beanstalk_application.elasticapp.name
  solution_stack_name = var.solution_stack_name
  tier                = var.tier

  setting {
    namespace = "aws:ec2:vpc"
    name      = "VPCId"
    value     = var.vpc_id
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "IamInstanceProfile"
    value     =  "aws-elasticbeanstalk-ec2-role"
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "AssociatePublicIpAddress"
    value     =  "True"
  }

  setting {
    namespace = "aws:ec2:vpc"
    name      = "Subnets"
    value     = join(",", var.public_subnets)
  }
  setting {
    namespace = "aws:elasticbeanstalk:environment:process:default"
    name      = "MatcherHTTPCode"
    value     = "200"
  }
  setting {
    namespace = "aws:elasticbeanstalk:environment"
    name      = "LoadBalancerType"
    value     = "application"
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "InstanceType"
    value     = "t2.medium"
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "ELBScheme"
    value     = "internet facing"
  }
  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MinSize"
    value     = 1
  }
  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MaxSize"
    value     = 2
  }
  setting {
    namespace = "aws:elasticbeanstalk:healthreporting:system"
    name      = "SystemType"
    value     = "enhanced"
  }

}
  • Create another file named vars.tf in the /opt/terraform-elasticbeanstalk-demo directory and copy/paste the below content into it. The variables file contains all the variables that are referred to in the main.tf file.
variable "elasticapp" {
  default = "myapp"
}
variable "beanstalkappenv" {
  default = "myenv"
}
variable "solution_stack_name" {
  type = string
}
variable "tier" {
  type = string
}

variable "vpc_id" {}
variable "public_subnets" {}
variable "elb_public_subnets" {}
  • Create another file named provider.tf in the /opt/terraform-elasticbeanstalk-demo directory and copy/paste the below content into it. The provider.tf file authenticates and allows Terraform to connect to the AWS cloud.
provider "aws" {
  region = "us-east-2"
}
  • Finally, create one more file named terraform.tfvars in the /opt/terraform-elasticbeanstalk-demo directory and copy/paste the below content into it. Only the variables declared in vars.tf are set here; the instance type and Auto Scaling group sizes are already hard-coded as settings in main.tf.
vpc_id              = "vpc-XXXXXXXXX"
public_subnets      = ["subnet-XXXXXXXXXX", "subnet-XXXXXXXXX"] # Service Subnet
elb_public_subnets  = ["subnet-XXXXXXXXXX", "subnet-XXXXXXXXX"] # ELB Subnet
tier                = "WebServer"
solution_stack_name = "64bit Amazon Linux 2 v3.2.0 running Python 3.8"
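  • Optionally, you can also create an output.tf file in the same directory so that Terraform prints the environment endpoint after terraform apply. This is a minimal sketch; cname and endpoint_url are attributes exported by the aws_elastic_beanstalk_environment resource.
output "beanstalk_environment_cname" {
  value = aws_elastic_beanstalk_environment.beanstalkappenv.cname         # CNAME of the environment
}

output "beanstalk_environment_endpoint" {
  value = aws_elastic_beanstalk_environment.beanstalkappenv.endpoint_url  # Endpoint (load balancer) URL
}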
  • Now run the tree command on your Ubuntu machine; your folder structure should look something like below.
tree command on your ubuntu machine and your folder structure

Deploying Terraform configuration to Launch AWS Elastic beanstalk

Now that all the Terraform configuration files are set up, they won’t do much until you run the Terraform commands to deploy them.

  • To deploy AWS Elastic Beanstalk, the first thing you need to do is initialize Terraform by running the terraform init command.
terraform init

As you see below, Terraform was initialized successfully; now, it’s time to run terraform plan.

Terraform was initialized successfully
  • Next, run the terraform plan command. The terraform plan command provides information about which resources will be provisioned or destroyed by Terraform.
terraform plan
Running Terraform plan command
  • Finally, run the terraform apply command, which actually deploys the code and provisions the AWS Elastic Beanstalk resources.
terraform apply

Verifying AWS Elastic Beanstalk in the AWS Cloud

Great Job; terraform commands were executed successfully. Now it’s time to validate the AWS Elastic beanstalk launched in AWS Cloud.

  • Navigate to the AWS Cloud console and then to the AWS Elastic Beanstalk service. Once you reach the Elastic Beanstalk screen, you will see the environment and application name that you specified in the Terraform configuration.
AWS Elastic Beanstalk service page
  • Next, on the AWS Elastic Beanstalk service page, click on the application URL and you will see something like below.
AWS Elastic Beanstalk service link


Conclusion

In this tutorial, you learned what AWS Elastic beanstalk is and how to set up Amazon Elastic beanstalk using Terraform on AWS step by step.

Now that you have AWS Elastic beanstalk launched on AWS using Terraform, which applications do you plan to deploy on it next?

How to create Secrets in AWS Secrets Manager using Terraform in an Amazon account

While deploying in the Amazon AWS cloud, are you saving your passwords in text files, configuration files, or deployment files? That’s very risky and can expose your passwords to attackers. But no worries: you have come to the right place to learn how to use AWS Secrets Manager, which solves these security concerns by encrypting all of your stored passwords and decrypting them only when they are retrieved.

In this tutorial, you will learn how to create Secrets in AWS Secrets Manager using Terraform in the Amazon account. Let’s get started.

What are AWS Secrets and AWS Secrets Manager?

There was a time when all the passwords for databases or applications were kept in configuration files. Even when such files are kept reasonably secure, they can be compromised if not taken care of. And if you are required to update the credentials, it can take hours to apply the change to every single file; if you miss any of the files, it can bring the entire application down immediately.

The AWS Secrets Manager service addresses all of the above issues by letting you retrieve AWS secrets (passwords) programmatically. Another major benefit is that it can rotate your credentials on a schedule you define. AWS Secrets Manager keeps important user information and passwords safe and secure.

The application connects with Secrets Manager to retrieve secrets and then connects with the database.
Admin retrieving the secrets from the AWS Secrets Manager and applying them in the database


Prerequisites

  • An Ubuntu machine, version 20.04 preferred; if you don’t have one, you can create an AWS EC2 instance in your AWS account with the recommended 4GB RAM and at least 5GB of drive space.
  • The Ubuntu machine should have an IAM role attached with full access to create AWS secrets in AWS Secrets Manager, or administrator permissions.
  • Terraform installed on the Ubuntu machine. Refer to How to Install Terraform on an Ubuntu machine.

You may incur a small charge for creating an EC2 instance on Amazon Managed Web Service.

Terraform files and Terraform directory structure

Now that you have Terraform installed, let’s dive into the Terraform files and the Terraform directory structure that will help you write the Terraform configuration files later in this tutorial.

Terraform code, that is, Terraform configuration files, are written in a tree-like structure to ease the overall understanding of code with .tf format or .tf.json or .tfvars format. These configuration files are placed inside the Terraform modules.

Terraform modules are at the top level of the hierarchy, where configuration files reside. Terraform modules can in turn call other child Terraform modules from local directories, from anywhere on disk, or from the Terraform Registry.

Terraform mainly works with five files — main.tf, vars.tf, providers.tf, output.tf, and terraform.tfvars — along with the .terraform directory.

  1. main.tf – The Terraform main.tf file contains the main code where you define which resources you need to build, update, or manage.
  2. vars.tf – The Terraform vars.tf file contains the input variables, which are customizable and are referenced inside the main.tf configuration file.
  3. output.tf – The Terraform output.tf file is where you declare which output parameters you wish to fetch after Terraform has been executed, that is, after the terraform apply command.
  4. .terraform – This directory contains cached provider and module plugins as well as the last known backend configuration. It is managed by Terraform and created after you run the terraform init command.
  5. terraform.tfvars – This file contains the values that need to be passed for the variables that are referred to in main.tf and actually declared in vars.tf.
  6. providers.tf – The providers.tf file is the most important file, where you define your Terraform providers, such as the Terraform AWS provider or the Terraform Azure provider, to authenticate with the cloud provider.
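For example, a typical Terraform module directory (using a hypothetical folder name terraform-demo) looks like the layout below; the tree command used later in this guide prints a similar structure.

terraform-demo/
├── main.tf
├── vars.tf
├── output.tf
├── provider.tf
└── terraform.tfvars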

Building Terraform Configuration to create AWS Secrets and Secrets versions on AWS

Now that you have a sound knowledge of what Terraform configuration files look like and the purpose of each one, let’s create the Terraform configuration files required to create AWS secrets.

  • Log in to the Ubuntu machine using your favorite SSH client.
  • Create a folder in the opt directory named terraform-demo-secrets and switch to that folder.
mkdir /opt/terraform-demo-secrets
cd /opt/terraform-demo-secrets
  • Create a file named main.tf and copy/paste the below content. The file below creates the following components:
    • A random password for the adminaccount user in the AWS secret (Masteraccoundb)
    • A secret named Masteraccoundb
    • A secret version that will hold the AWS secret (Masteraccoundb) values
# Firstly create a random generated password to use in secrets.

resource "random_password" "password" {
  length           = 16
  special          = true
  override_special = "_%@"
}

# Creating a AWS secret for database master account (Masteraccoundb)

resource "aws_secretsmanager_secret" "secretmasterDB" {
   name = "Masteraccoundb"
}

# Creating a AWS secret versions for database master account (Masteraccoundb)

resource "aws_secretsmanager_secret_version" "sversion" {
  secret_id = aws_secretsmanager_secret.secretmasterDB.id
  secret_string = <<EOF
   {
    "username": "adminaccount",
    "password": "${random_password.password.result}"
   }
EOF
}

# Importing the AWS secrets created previously using arn.

data "aws_secretsmanager_secret" "secretmasterDB" {
  arn = aws_secretsmanager_secret.secretmasterDB.arn
}

# Importing the AWS secret version created previously using arn.

data "aws_secretsmanager_secret_version" "creds" {
  secret_id = data.aws_secretsmanager_secret.secretmasterDB.arn
}

# After importing the secrets storing into Locals

locals {
  db_creds = jsondecode(data.aws_secretsmanager_secret_version.creds.secret_string)
}
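  • Optionally, you can also output the ARN of the secret so it is printed after terraform apply. This is a minimal sketch using the arn attribute exported by the aws_secretsmanager_secret resource; avoid outputting the secret value itself in plain text.
output "secret_arn" {
  value = aws_secretsmanager_secret.secretmasterDB.arn   # ARN of the Masteraccoundb secret
}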
  • Create another file, name it provider.tf, and copy/paste the below content.
provider "aws" {
  region = "us-east-2"
}
Checking all the files in the terraform-demo-secrets folder
  • Now your files and code are ready for execution. Initialize Terraform using the terraform init command.
terraform init
Initializing the terraform using the terraform init command
  • Terraform initialized successfully; now it’s time to run the plan command, which gives you the details of the deployment. Run the terraform plan command to confirm that the correct resources are going to be provisioned or destroyed.
terraform plan
Running the terraform plan command
The output of the terraform plan command
  • After verification, it’s time to actually deploy the code using the terraform apply command.
terraform apply
Running the terraform apply command
  • Great job; the terraform commands were executed successfully. Now open your AWS account and navigate to AWS Secrets Manager.

As you can see, the AWS secret has been created successfully in the AWS account. Click on the secret (Masteraccoundb) and further click on Retrieve secret value button.

Verifying the AWS secret
  • Click on Retrieve secret value to see the values stored for the AWS Secret.
Retrieve the AWS secret value

As you can see, the secret keys and values were successfully added exactly as you defined them in the Terraform configuration file.

Verifying the AWS secret values
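As mentioned earlier, AWS Secrets Manager can also rotate a secret on a schedule you define. The optional sketch below assumes you already have a rotation Lambda function; the Lambda ARN shown is hypothetical.

resource "aws_secretsmanager_secret_rotation" "rotation" {
  secret_id           = aws_secretsmanager_secret.secretmasterDB.id
  rotation_lambda_arn = "arn:aws:lambda:us-east-2:111122223333:function:my-rotation-lambda"  # Hypothetical rotation Lambda ARN

  rotation_rules {
    automatically_after_days = 30   # Rotate the credentials every 30 days
  }
}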

Creating Postgres database using Terraform with AWS Secrets in AWS Secret Manager

Now that the secret keys and values have been added as defined in the Terraform configuration file, the next step is to use these AWS secrets as the credentials for the database master account while creating the database.

  • Open the same Terraform configuration file main.tf again and copy/paste the below code at the bottom of the file. As you can see, the code below creates the database cluster using the AWS secrets master_username = local.db_creds.username and master_password = local.db_creds.password.
resource "aws_rds_cluster" "main" { 
  cluster_identifier = "democluster"
  database_name = "maindb"
  master_username = local.db_creds.username
  master_password = local.db_creds.password
  port = 5432
  engine = "aurora-postgresql"
  engine_version = "11.6"
  db_subnet_group_name = "dbsubntg"  # Make sure you create this DB subnet group manually beforehand
  storage_encrypted = true 
}


resource "aws_rds_cluster_instance" "main" { 
  count = 2
  identifier = "myinstance-${count.index + 1}"
  cluster_identifier = "${aws_rds_cluster.main.id}"
  instance_class = "db.r4.large"
  engine = "aurora-postgresql"
  engine_version = "11.6"
  db_subnet_group_name = "dbsubntg"
  publicly_accessible = true 
}
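  • Optionally, add outputs below the cluster block so the writer and reader endpoints are printed once the cluster is created; endpoint and reader_endpoint are attributes exported by the aws_rds_cluster resource.
output "rds_cluster_endpoint" {
  value = aws_rds_cluster.main.endpoint          # Writer endpoint of the Aurora cluster
}

output "rds_cluster_reader_endpoint" {
  value = aws_rds_cluster.main.reader_endpoint   # Read-only endpoint of the Aurora cluster
}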
  • Again execute the terraform init → terraform plan → terraform apply commands.
terraform apply
terraform apply command created the database successfully using the AWS Secrets
  • Now navigate to the AWS RDS service in your Amazon account and check the Postgres cluster that was just created.
Navigating to the AWS RDS service on Amazon account
  • Finally, click on democluster and you should see that the AWS secrets created earlier by Terraform were successfully applied to the Postgres database in AWS RDS.
AWS secrets created earlier by Terraform are successfully applied in the Postgres database in AWS RDS


Conclusion

In this tutorial, you learned what AWS secrets and AWS Secrets Manager are, how to create AWS secrets in AWS Secrets Manager, and how to create a Postgres database that uses AWS secrets as its master account credentials.

Now that you have secured your database credentials by storing them in AWS secrets, what do you plan to secure next?

How to work with multiple Terraform Provisioners

Have you ever needed to pass data or a script to a compute resource after it has been created? Most of you have probably only worked with passing user data or scripts at the time of creation.

You have come to the right place to learn about Terraform provisioners, one of the most widely used Terraform features, which solve the problem of working with data after a resource is created or with already existing resources.

Table of content

  1. What are Terraform provisioners?
  2. What are the different actions performed by Terraform provisioners?
  3. Prerequisites
  4. How to Install Terraform on Ubuntu 18.04 LTS
  5. Terraform Configuration Files and Structure
  6. Working with Various terraform provisioners on AWS EC2 instance
  7. Conclusion

What are Terraform provisioners?

Most cloud computing platforms provide ways to pass data into instances, such as an EC2 instance or any other compute resource, at the time of their creation so that the data is immediately available on system boot; for example, by passing user_data. You can also bake data into the AMI used to create the EC2 instance.

But what if you need to provide the data after the resource is created, or to a resource that already exists? This is where Terraform provisioners come in: they pass the data after a resource is created or to existing resources.

Terraform provisioners that interact with remote servers over SSH or WinRM can be used to pass such data by logging in to the server and providing it directly.

What are the different actions performed by Terraform provisioners?

  1. They perform a specific action on the local machine, meaning they generate output on the same machine (local-exec).
  2. They perform a specific action on a remote machine, meaning they generate output on the remote machine (remote-exec).
  3. They copy files onto remote machines (file).
  4. They are used to pass data into a resource that cannot be passed at the time the resource is created.
  5. You can add conditions to a provisioner, such as when = destroy or on_failure = continue, as shown in the sketch after this list.
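For example, here is a minimal sketch of a destroy-time provisioner that keeps going even if the command fails; the AMI ID is just a placeholder.

resource "aws_instance" "example" {
  ami           = "ami-0a91cd140a1fc148a"   # Placeholder AMI ID
  instance_type = "t2.micro"

  provisioner "local-exec" {
    when       = destroy                    # Run only when the instance is destroyed
    on_failure = continue                   # Do not abort the destroy if the command fails
    command    = "echo 'instance destroyed' >> destroy.log"
  }
}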

Prerequisites

  • An Ubuntu machine to run Terraform, preferably version 18.04 or later; if you don’t have one, you can create an EC2 instance in your AWS account.
  • 4GB of RAM is recommended.
  • At least 5GB of drive space.
  • The Ubuntu machine should have an IAM role attached with all EC2 permissions, or ideally administrator permissions for working with demos.

You may incur a small charge for creating an EC2 instance on Amazon Managed Web Service.

How to Install Terraform on Ubuntu 18.04 LTS

  • Update your already existing system packages.
sudo apt update
  • Download the latest version of terraform in opt directory
wget https://releases.hashicorp.com/terraform/0.14.8/terraform_0.14.8_linux_amd64.zip
  • Install the zip package, which is required to unzip the download.
sudo apt-get install zip -y
  • Unzip the Terraform download zip file.
unzip terraform*.zip
  • Move the executable to executable directory
sudo mv terraform /usr/local/bin
  • Verify the terraform by checking terraform command and version of terraform
terraform               # To check if terraform is installed 

terraform -version      # To check the terraform version  
  • This confirms that terraform has been successfully installed on ubuntu 18.04 machine.

Terraform Configuration Files and Structure

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf : This file contains the code that creates or imports AWS resources.
  • vars.tf : This file defines the variable types and optionally sets default values.
  • output.tf: This file helps generate output from AWS resources; the output is produced after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. You provide the details of providers such as AWS, Oracle, or Google in it so that Terraform can communicate with the provider and work with its resources.

Working with Various terraform provisioners on AWS EC2 instance

Now, let’s dive into a demo where you will use multiple provisioners. This tutorial will create a key pair (public and private keys) that the provisioners use to connect and log in to the machine over SSH. A local-exec provisioner then executes a command locally on the machine, a remote-exec provisioner installs software on the AWS EC2 instance, and finally a file provisioner uploads a file to the EC2 instance.

  • Create a file main.tf and paste the below code.
resource "aws_key_pair" "deployer" {     # Creating the Key pair on AWS 
  key_name   = "deployer-key"
  public_key = "${file("~/.ssh/id_rsa.pub")}" # Generated private and public key on local machine
}
 
resource "aws_instance" "my-machine" {        # Creating the instance
 
  ami = "ami-0a91cd140a1fc148a"
  key_name = aws_key_pair.deployer.key_name
  instance_type = "t2.micro"
 
  provisioner  "local-exec" {                  # Provisioner 1
        command = "echo ${aws_instance.my-machine.private_ip} >> ip.txt"
        on_failure = continue
       }
 
  provisioner  "remote-exec" {            # Provisioner 2 [needs SSH/Winrm connection]
      connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = "${file("~/.ssh/id_rsa")}"
      agent       = false
      host        = aws_instance.my-machine.public_ip       # Using my instance to connect
      timeout     = "30s"
    }
      inline = [
        "sudo apt install -y apache2",
      ]
  }
 
  provisioner "file" {                    # Provisioner 3 [needs SSH/Winrm connection]
    source      = "C:\\Users\\4014566\\Desktop\\service-policy.json"
    destination = "/tmp/file.json"
    connection {
      type        = "ssh"
      user        = "ubuntu"
      host        = aws_instance.my-machine.public_ip
      private_key = "${file("~/.ssh/id_rsa")}"
      agent       = false
      timeout     = "30s"
    }
  }
}
  • Create a file provider.tf and paste the below code
provider "aws" {
  region = "us-east-2"
}
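  • Optionally, you can also add an output.tf file so the public IP of the instance is printed after terraform apply, which makes it easy to verify the SSH connection used by the remote-exec and file provisioners.
output "instance_public_ip" {
  value = aws_instance.my-machine.public_ip   # Public IP the provisioners connect to over SSH
}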
  • Now your files and code are ready for execution. Initialize Terraform:
terraform init
  • Terraform initialized successfully; now it’s time to see the plan, which is a kind of blueprint before deployment. We generally use the plan to confirm that the correct resources are going to be provisioned or destroyed.
terraform plan
  • After plan verification, run the apply command to deploy the code.
terraform apply
  • Let’s now verify the commands and their execution.
command executed locally on the ubuntu machine using local exec
command executed on remote machine using other remote-exec and file provisioners

Great job; the Terraform commands executed successfully, both locally and on the remote machine in AWS.

Conclusion

In this tutorial, we demonstrated some benefits of Terraform provisioners and learned how to work with various Terraform provisioners on AWS step by step.

Hopefully this tutorial helped you understand Terraform and work with various Terraform provisioners on the Amazon cloud. Please share it with your friends.

How to Launch AWS S3 bucket on Amazon using Terraform

Do you struggle with log rotation, does your system hang when lots of logs are generated on disk, or do you run out of space to keep your important deployment JARs or WARs? These are challenges everyone has faced while working with data-center applications or low-capacity VMs.

It is the right time to move all your logs, deployment code, and scripts to storage that is unlimited, safe, secure, and quick, and this is exactly what Amazon’s AWS S3 provides. So in this tutorial we will go through what AWS S3 is, its features, and how to launch an S3 bucket using Terraform.

Table of content

  1. What is AWS Amazon S3 bucket?
  2. Prerequisites
  3. How to Install Terraform on Ubuntu 18.04 LTS
  4. Terraform Configuration Files and Structure
  5. Launch AWS S3 bucket on Amazon Web Service using Terraform
  6. Upload an object to AWS S3 bucket
  7. Conclusion

What is Amazon AWS S3 bucket?

Why is it called S3? The name comes from the three words of its full form: Simple Storage Service. The AWS S3 service stores an unlimited amount of data safely and efficiently. The basic architecture of AWS S3 is simple: everything in AWS S3 is an object, such as PDF files, zip files, text files, or WAR files, and a bucket is the container where all these objects reside.

AWS S3 Service ➡️ Bucket ➡️ Objects ➡️ PDF , HTML DOCS, WAR , ZIP FILES etc

Some of the features of AWS S3 bucket are:

  • In order to store data in a bucket, you need to upload it.
  • To keep your bucket more secure, grant only the necessary permissions to IAM roles or IAM users.
  • Bucket names are globally unique, which means a given bucket name can exist only once across all accounts and regions.
  • Up to 100 buckets can be created per AWS account by default; beyond that you need to raise a service limit increase request with Amazon.
  • The owner of a bucket is specific to the AWS account that created it.
  • Buckets are created in a specific region, such as us-east-1, us-east-2, us-west-1, or us-west-2.
  • Bucket objects are accessed using the AWS S3 API.
  • Buckets can be made publicly visible, meaning anybody on the internet can access them, so it is always recommended to keep public access blocked for all buckets unless absolutely required.

Prerequisites

  • An Ubuntu machine to run Terraform, preferably version 18.04 or later; if you don’t have one, you can create an EC2 instance in your AWS account.
  • 4GB of RAM is recommended.
  • At least 5GB of drive space.
  • The Ubuntu machine should have an IAM role attached with full access to create AWS S3 buckets, or ideally administrator permissions for working with demos.
  • If you wish to create the bucket manually, click here for the setup instructions, but we will use Terraform for this demo.

You may incur a small charge for creating an EC2 instance on Amazon Managed Web Service.

How to Install Terraform on Ubuntu 18.04 LTS

  • Update your already existing system packages.
sudo apt update
  • Download the latest version of terraform in opt directory
wget https://releases.hashicorp.com/terraform/0.14.8/terraform_0.14.8_linux_amd64.zip
  • Install zip package which will be required to unzip
sudo apt-get install zip -y
  • unzip the Terraform download zip file
unzip terraform*.zip
  • Move the executable to executable directory
sudo mv terraform /usr/local/bin
  • Verify the terraform by checking terraform command and version of terraform
terraform               # To check if terraform is installed 

terraform -version      # To check the terraform version  
  • This confirms that terraform has been successfully installed on ubuntu 18.04 machine.

Terraform Configuration Files and Structure

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf : This file contains the code that creates or imports AWS resources.
  • vars.tf : This file defines the variable types and optionally sets default values.
  • output.tf: This file helps generate output from AWS resources; the output is produced after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. You provide the details of providers such as AWS, Oracle, or Google in it so that Terraform can communicate with the provider and work with its resources.

Launch AWS S3 bucket on AWS using Terraform

Now we will create all the configuration files required to create an S3 bucket in an AWS account.

  • Create a folder in the opt directory, name it terraform-s3-demo, and switch to it.
mkdir /opt/terraform-s3-demo
cd /opt/terraform-s3-demo
  • Create main.tf file under terraform-s3-demo folder and paste the content below.
# Bucket Access

resource "aws_s3_bucket_public_access_block" "publicaccess" {
  bucket = aws_s3_bucket.demobucket.id
  block_public_acls       = false
  block_public_policy     = false
}

# Creating the encryption key which will encrypt the bucket objects

resource "aws_kms_key" "mykey" {
  deletion_window_in_days = "20"
}

# Creating the bucket

resource "aws_s3_bucket" "demobucket" {

  bucket          = var.bucket
  force_destroy   = var.force_destroy

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = aws_kms_key.mykey.arn
        sse_algorithm     = "aws:kms"
      }
    }
  }
  versioning {
    enabled               = true
  }
  lifecycle_rule {
    prefix  = "log/"
    enabled = true
    expiration {
      date = var.date
    }
  }
}
  • Create vars.tf file under terraform-s3-demo folder and paste the content below
variable "bucket" {
 type = string
}
variable "force_destroy" {
 type = string
}
variable "date" {
 type = string
}
  • Create provider.tf file under terraform-s3-demo folder and paste the content below.
provider "aws" {
  region = "us-east-2"
}
  • Create terraform.tfvars file under terraform-s3-demo folder and paste the content below.
bucket          = "terraformdemobucket"
force_destroy   = false
date = "2022-01-12"
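  • Optionally, create an output.tf file under the terraform-s3-demo folder to print the bucket’s ARN and regional domain name after terraform apply; both arn and bucket_regional_domain_name are attributes exported by the aws_s3_bucket resource.
output "bucket_arn" {
  value = aws_s3_bucket.demobucket.arn                           # ARN of the bucket
}

output "bucket_regional_domain_name" {
  value = aws_s3_bucket.demobucket.bucket_regional_domain_name   # Regional endpoint of the bucket
}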
  • Now your files and code are ready for execution. Initialize Terraform:
terraform init
  • Terraform initialized successfully; now it’s time to see the plan, which is a kind of blueprint before deployment. We generally use the plan to confirm that the correct resources are going to be provisioned or destroyed.
terraform plan
  • After verification, it’s time to actually deploy the code using apply.
terraform apply

The Terraform command execution completed successfully, and you should now have an AWS S3 bucket launched in AWS. Verify it by navigating to your AWS account, searching for the AWS S3 service, and checking that the bucket has been created.

Upload an object in AWS S3 bucket

All the files and folders inside AWS S3 buckets are known as objects. You can upload an object from the console as shown in the steps below, or directly with Terraform (see the sketch after these steps).

  • Let’s upload a sample text file to the bucket. First, click on the terraformdemobucket.
  • Now click on Upload and then Add files.
  • Choose any file from your system; we used sample.txt.
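If you prefer to upload the object with Terraform instead of the console, the aws_s3_bucket_object resource can do it. This is a minimal sketch assuming a local file named sample.txt exists in the same folder as your configuration.

resource "aws_s3_bucket_object" "sample" {
  bucket = aws_s3_bucket.demobucket.id   # Upload into the bucket created above
  key    = "sample.txt"                  # Object name inside the bucket
  source = "sample.txt"                  # Local file to upload
}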

Conclusion

In this tutorial, we demonstrated some benefits of Amazon AWS S3 and learned how to set up an AWS S3 bucket using Terraform step by step. A huge amount of application and website data is stored on AWS S3, and it is also a popular choice for hosting static websites.

Hopefully this tutorial helped you understand Terraform and provision AWS S3 on the Amazon cloud. Please share it with your friends.

How to Launch multiple EC2 instances on AWS using Terraform count and for_each

Creating many instances on a cloud provider is a common requirement in any organization or project. If you are asked to create 10 EC2 machines in a particular AWS account using the console UI, it will take a lot of time and effort. There are automated ways to create multiple instances quickly, and with Terraform it is very simple and easy.

In this tutorial, we will learn how to create multiple EC2 instances in an AWS account using Terraform code.

Table of content

  1. What is terraform?
  2. Prerequisites
  3. How to Install Terraform on Ubuntu 18.04 LTS
  4. Launch multiple EC2 instances of same type using count on AWS using Terraform
  5. Launch multiple EC2 instances of different type using for_each on AWS using Terraform
  6. Conclusion

What is Terraform?

Terraform is a tool for building, versioning, and changing infrastructure. Terraform is written in the Go language, and the syntax of its configuration files is HCL, which stands for HashiCorp Configuration Language and is much easier to work with than YAML or JSON.

Terraform has been in use for quite a while now. It is an amazing tool for building and changing infrastructure in a very effective and simple way. It can be used with a variety of cloud providers such as Amazon AWS, Oracle, Microsoft Azure, Google Cloud, and many more. I hope you will enjoy learning and using it.

Prerequisites

  • An Ubuntu machine, preferably version 18.04 or later; if you don’t have one, you can create an EC2 instance in your AWS account.
  • 4GB of RAM is recommended.
  • At least 5GB of drive space.
  • The Ubuntu machine should have an IAM role attached with AWS EC2 instance creation permissions, which we will use later in the tutorial with Terraform.

You may incur a small charge for creating an EC2 instance on Amazon Managed Web Service.

How to Install Terraform on Ubuntu 18.04 LTS

  • Update your already existing system packages.
sudo apt update
  • Download the latest version of terraform in opt directory
wget https://releases.hashicorp.com/terraform/0.14.8/terraform_0.14.8_linux_amd64.zip
  • Install zip package which will be required to unzip
sudo apt-get install zip -y
  • unzip the Terraform download zip file
unzip terraform*.zip
  • Move the executable to executable directory
sudo mv terraform /usr/local/bin
  • Verify the terraform by checking terraform command and version of terraform
terraform               # To check if terraform is installed 

terraform -version      # To check the terraform version  
  • This confirms that terraform has been successfully installed on ubuntu 18.04 machine.

Terraform Configuration Files and Structure

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf : This file contains the code that creates or imports AWS resources.
  • vars.tf : This file defines the variable types and optionally sets default values.
  • output.tf: This file helps generate output from AWS resources; the output is produced after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. You provide the details of providers such as AWS, Oracle, or Google in it so that Terraform can communicate with the provider and work with its resources.

Launch multiple EC2 instances of same type using count on AWS using Terraform

Now, in this demonstration, we will create multiple EC2 instances using the count and for_each parameters in Terraform. Let’s create all the configuration files required for creating EC2 instances in an AWS account using Terraform.

  • Create a folder in opt directory and name it as terraform-demo 
mkdir /opt/terraform-demo
cd /opt/terraform-demo
  • Create main.tf file under terraform-demo folder and paste the content below.
resource "aws_instance" "my-machine" {
  count = 4     # Here we are creating identical 4 machines.
  
  ami = var.ami
  instance_type = var.instance_type
  tags = {
    Name = "my-machine-${count.index}"
         }
}
  • Create vars.tf file under terraform-demo folder and paste the content below
                                            # Creating a Variable for ami
variable "ami" {       
  type = string
}
                                           # Creating a Variable for instance_type
variable "instance_type" {    
  type = string
}
  • Create terraform.tfvars file under terraform-demo folder and paste the content below.
 ami = "ami-0742a572c2ce45ebf"
 instance_type = "t2.micro"
  • Create an output.tf file under the terraform-demo folder and paste the content below. Note that the value depends on the resource name and type (the same as in main.tf).

output "ec2_machines" {
  value = aws_instance.my-machine.*.arn  # Here * indicates there is more than one ARN, as we used count = 4
}

  • Create a provider.tf file under the terraform-demo folder and paste the content below.

provider "aws" {      # Defining the provider as Amazon, as we need to run this on AWS
  region = "us-east-1"
}
  • Now your files and code are ready for execution . Initialize the terraform
terraform init
  • Terraform initialized successfully; now it’s time to run the terraform plan command.
  • The Terraform plan is a sort of blueprint before deployment, used to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, it’s time to actually deploy the code using apply.
terraform apply

Great job, the terraform commands executed successfully. You should now have four EC2 instances launched in AWS.

Launch multiple EC2 instances of different type using for_each on AWS using Terraform

  • In the previous example, we created more than one resource, but all with the same attributes, such as instance_type.
  • Note: We use for_each in Terraform when we need to create more than one resource but with different attributes, such as different instance types or key names.

main.tf

resource "aws_instance" "my-machine" {
  ami = var.ami

  for_each = {              # for_each iterates over each key and value
    key1 = "t2.micro"       # Instance 1 will have key1 with t2.micro instance type
    key2 = "t2.medium"      # Instance 2 will have key2 with t2.medium instance type
  }

  instance_type = each.value
  key_name      = each.key

  tags = {
    Name = each.value
  }
}

vars.tf

variable "tag_ec2" {
  type = list(string)
  default = ["ec21a","ec21b"]
}
                                           
variable "ami" {       # Creating a Variable for ami
  type = string
}

terraform.tfvars

ami = "ami-0742a572c2ce45ebf"   # Only ami needs a value here; the instance types come from the for_each map in main.tf
  • Now the code is ready for execution. Initialize Terraform, run the plan, and then use apply to deploy the code as described above.
terraform init 
terraform plan
terraform apply
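Note that with for_each, aws_instance.my-machine is a map keyed by key1 and key2 rather than a list, so the splat syntax used in the count example will not work for outputs. A minimal sketch that iterates over the map instead:

output "ec2_machines" {
  value = { for key, machine in aws_instance.my-machine : key => machine.arn }   # Map of key => instance ARN
}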

Conclusion

Terraform is a great open-source tool that provides easy-to-write code and configuration files to work with, and it is one of the best infrastructure-as-code tools to start with. You should now have an idea of how to launch multiple EC2 instances on AWS using Terraform count and for_each.

Hopefully this tutorial helped you understand Terraform and run multiple instances in the cloud. Please share it with your friends.

Terraform Cheat Sheet and Command Line

Terraform has been a hot topic lately and is now widely used across organizations. Terraform is an open-source infrastructure-as-code tool that helps provision infrastructure and services on various cloud providers and third-party vendors. It is a great tool, simple to learn, and has a very broad scope for DevOps and IT engineers.

If you wish to learn each Terraform component and topic, I have covered them in the Terraform pocket guide.

Link: terraform-pocket-guide

In this tutorial, I will only cover the commands used while working with Terraform. I am sure this tutorial is going to be very useful for Terraform beginners and experienced professionals alike.

TERRAFORM COMMANDS:

  1. terraform init: This command initializes the providers , modules and backend configurations which are required to run the terraform.
    • terraform init -input=true # If you wish to provide inputs use input as true else terraform will fail.
    • terraform init -lock=false # This command will disable lock of terraform state file (Not recommended)
    • terraform init -upgrade # This command will upgrade all modules and plugins required to run the terraform configuration
  2. terraform get: This command will initialize or download the modules only.
  3. terraform plan: This command provides the state of all resources and compares it with the real infrastructure. It uses the terraform state file data to compare, and it uses the provider’s API to perform this step.
    • terraform plan -compact-warnings # This will provide summary of warnings while running terraform plan
    • terraform plan -out=path # This command will save the execution plan
    • terraform plan -var-file=abc.tfvars # This will use a specific tfvars file, which contains environment-wise values
  4. terraform apply: To apply changes in specific providers account
    • terraform apply -backup=path # This command will backup the terraform state file
    • terraform apply -lock=true # This command will lock the terraform state file
    • terraform apply -state=path # This command will prompt to provide the path of the terraform state file
    • terraform apply -var-file=abc.tfvars # This command will allow you to specify the tfvars file which contains environment-wise variable values.
    • terraform apply -auto-approve # This command will not prompt to approve the apply command.
  5. terraform destroy: It will destroy the terraform managed infrastructure.
    • terraform destroy -auto-approve # This command will not prompt to approve the destroy command.
  6. terraform console: This command will evaluate the expressions in configuration files.
    • terraform console -state=path # This command will prompt to provide the path to local state file
  7. terraform fmt:  This command will format the configuration file to a proper format
    • terraform fmt -check # This command will check the input format
    • terraform fmt -recursive # This command will format subdirectories as well
    • terraform fmt -diff # This command will display the differences, if any, between the latest formatting and the previous one.
  8. terraform validate:  It validates the configuration files
    • terraform validate -json : Output is in json format
  9. terraform graph: It Generates visual representation of execution plan
    • terraform graph -draw-cycles
    • terraform graph -type=plan
  10. terraform output : This is to extract the values of an output variable from state file.
    • terraform output -json
    • terraform output -state=path
  11. terraform state list: This command will list all the resources present in terraform state file
    • terraform state list -id=<resource-id> # This command will search for a particular resource using its resource ID in the terraform state file.
    • terraform state list -state=path # This command will prompt you to provide the path of state file and then provides the list of all resources in terraform state file
  12. terraform state show: This command shows the attributes of a specific resource.
    • terraform state show -state=path # This command will prompt you to provide the path of the state file and then show the attributes of the specific resource.
  13. terraform import : This command will import an existing resource that was not created using Terraform into the terraform state file, so it will be managed by Terraform from the next run onwards.
  14. terraform refresh: This command will reconcile the terraform state file. If resources you created with Terraform were later modified by some other means, such as the CLI or the console UI, refresh will sync those changes into the terraform state file.
  15. terraform state rm : This command will remove resources from the terraform state file
  16. terraform state mv : This command moves resources within the terraform state file from one address to another
  17. terraform state pull: This command will manually download the terraform state file from remote state in your local machine
  18. terraform state push: This command will manually upload the local state file to remote state
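For example, a common workflow is to save the execution plan to a file and then apply exactly that saved plan:

terraform plan -out=tfplan    # Save the execution plan to a file named tfplan
terraform apply tfplan        # Apply exactly the saved plan without re-planning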

Quick Glance of Terraform CLI Commands

Initialize         Provision           Modify Config        Check infra             Manipulate State
terraform init     terraform plan      terraform fmt        terraform graph         terraform state list
terraform get      terraform apply     terraform validate   terraform output        terraform state show
                   terraform destroy   terraform console    terraform state show    terraform state mv/rm
                                                            terraform state list    terraform state pull/push

Conclusion:

In this tutorial, we covered Terraform commands that are very useful for beginners and experienced professionals. I am sure the summary of Terraform commands listed in this tutorial will be of massive use.

Keep Terraforming !! Please share with your friends if you like it.

How to Install Terraform on an Ubuntu machine

Many automation tools and scripts are available in the market, but one of the most widely used and easiest to use is Terraform, also known as an infrastructure-as-code tool.

In this tutorial, you’ll install Terraform on Ubuntu 20.04. You’ll then work with Terraform to create a service on Amazon Web Services. So let’s get started.

What is Terraform?

Terraform is a tool for building, versioning, and changing the infrastructure. Terraform is Written in GO Language, and the syntax language of configuration files is HashiCorp configuration language(HCL) which is much easier than yaml or json.

Terraform is used with various cloud providers such as Amazon AWS, Oracle, Microsoft Azure, Google Cloud, etc.


Prerequisites

  • An Ubuntu machine, preferably version 18.04 or later; if you don’t have one, you can create an EC2 instance in your AWS account. 4GB of RAM and at least 5GB of drive space are recommended.
  • The Ubuntu machine should have an IAM role attached with AWS EC2 instance creation permissions or admin rights.

You may incur a small charge for creating an EC2 instance on Amazon Managed Web Service.

How to Install Terraform on Ubuntu 20.04 LTS

Now that you have a basic idea about Terraform, let’s kick off this tutorial by installing Terraform on the Ubuntu 20.04 machine.

  • First, log in to the Ubuntu machine using your favorite SSH client, such as PuTTY.
  • Next, update the existing system packages on the Ubuntu machine by running the below command.
sudo apt update
  • Now, download the latest version of Terraform into the opt directory. You can install Terraform in any directory, but it is recommended to use the opt directory for software installations.
wget https://releases.hashicorp.com/terraform/0.14.8/terraform_0.14.8_linux_amd64.zip
  • Further, install the zip package, which you will need in the next step to unzip the Terraform zip file.
sudo apt-get install zip -y
  • Now, unzip the Terraform zip file using the unzip command.
unzip terraform*.zip
  • Next, move the Terraform executable file to executable directory such as /usr/local/bin so that you can run Terraform from any directory of your Ubuntu machine.
sudo mv terraform /usr/local/bin
  • Finally, verify the Terraform installation by running terraform command or terraform version command.
terraform  # To check if terraform is installed 

terraform -version # To check the terraform version 
Checking the Terraform installation
  • This confirms that Terraform has been successfully installed on the Ubuntu 20.04 machine.
Checking the Terraform installation by running terraform version command

Terraform files and Terraform directory structure

Now that you have Terraform installed, let’s dive into the Terraform files and the Terraform directory structure that will help you write the Terraform configuration files later in this tutorial.

Terraform code, that is, Terraform configuration files, are written in a tree-like structure to ease the overall understanding of code with .tf format or .tf.json or .tfvars format. These configuration files are placed inside the Terraform modules.

Terraform modules are at the top level of the hierarchy, where configuration files reside. Terraform modules can in turn call other child Terraform modules from local directories, from anywhere on disk, or from the Terraform Registry.

Terraform mainly works with five files — main.tf, vars.tf, providers.tf, output.tf, and terraform.tfvars — along with the .terraform directory.

  1. main.tf – The Terraform main.tf file contains the main code where you define which resources you need to build, update, or manage.
  2. vars.tf – The Terraform vars.tf file contains the input variables, which are customizable and are referenced inside the main.tf configuration file.
  3. output.tf – The Terraform output.tf file is where you declare which output parameters you wish to fetch after Terraform has been executed, that is, after the terraform apply command.
  4. .terraform – This directory contains cached provider and module plugins as well as the last known backend configuration. It is managed by Terraform and created after you run the terraform init command.
  5. terraform.tfvars – This file contains the values that need to be passed for the variables that are referred to in main.tf and actually declared in vars.tf.
  6. providers.tf – The providers.tf file is the most important file, where you define your Terraform providers, such as the Terraform AWS provider or the Terraform Azure provider, to authenticate with the cloud provider.

Terraform ec2 instance example (terraform aws ec2)

Let’s wrap up this ultimate guide with a basic Terraform ec2 instance example or terraform aws ec2.

  • Assuming you already have Terraform installed on your machine.
  • First, create a folder terraform-demo in the opt directory. This folder will contain all the configuration files that Terraform needs to build the EC2 instance.
mkdir /opt/terraform-demo
cd /opt/terraform-demo
  • Now create main.tf file under terraform-demo folder and copy/paste the content below.
resource "aws_instance" "my-machine" {          # This is Resource block where we define what we need to create

  ami = var.ami                                 # ami is required as we need ami in order to create an instance
  instance_type = var.instance_type             # Similarly we need instance_type
}

  • Create one more file named vars.tf under the terraform-demo folder and copy/paste the content below. The vars.tf file contains the variables that you referred to in the main.tf file.
variable "ami" {                       # We are declaring the variable ami here which we used in main.tf
  type = string      
}

variable "instance_type" {             # We are declaring the variable instance_type here which we used in main.tf
  type = string 
}
  • Create one more file, output.tf, under the terraform-demo folder and paste the content below. This file will allow Terraform to display the output after running the terraform apply command.
output "ec2_arn" {
  value = aws_instance.my-machine.arn    
}  
  • Create provider.tf file under terraform-demo folder and paste the content below.
provider "aws" {     # Defining the Provider Amazon  as we need to run this on AWS  
  region = "us-east-2"
}
  • Create a terraform.tfvars file under the terraform-demo folder and paste the content below. This file contains the values of the Terraform variables declared in the vars.tf file.
ami = "ami-013f17f36f8b1fefb" 
instance_type = "t2.micro"
  • Now, run the tree command, which will show you the folder structure. You should see something like below.
tree command
  • Now your files and code are ready for execution. Initialize Terraform using the terraform init command.
terraform init
Initializing the terraform using the terraform init command
  • Terraform initialized successfully; now it’s time to run the plan command, which gives you the details of the deployment. Run the terraform plan command to confirm that the correct resources are going to be provisioned or destroyed.
terraform plan
Running the terraform plan command
  • After verification, it’s time to actually deploy the code using the terraform apply command.
terraform apply
Running the terraform apply command
The output of the terraform apply command

Great Job; terraform commands were executed successfully. Now you should have the AWS EC2 instance launched in AWS Cloud.

It generally takes a minute or so to launch an instance, and yes, you can see that the instance is successfully launched now in the us-east-2 region as expected.


Conclusion

In this tutorial, you learned what Terraform is, how to install Terraform on an Ubuntu machine, and how to launch an EC2 instance in an AWS account using Terraform. Keep Terraforming!!

Now that you have the AWS EC2 instance launched, what are you planning to deploy on the newly created AWS EC2?