How to Launch AWS Elasticsearch using Terraform in Amazon Account

It is important to have a search engine for your website or applications. When you also want capabilities such as scaling, high availability and cluster management handled for you, Amazon provides its own managed service known as Amazon Elasticsearch.

In this tutorial you will learn what Amazon Elasticsearch is and how to create an Amazon Elasticsearch domain using Terraform.

Table of Contents

  1. What Is Amazon Elasticsearch Service?
  2. Prerequisites:
  3. Terraform Configuration Files and Structure
  4. Configure Terraform files for AWS Elasticsearch
  5. Verify AWS Elasticsearch in Amazon Account
  6. Conclusion

What Is Amazon Elasticsearch Service?

Amazon Elasticsearch Service is a managed service that deploys and scales Elasticsearch clusters in the cloud. Elasticsearch is an open-source search and analytics engine used for real-time application monitoring and log analytics.

Amazon Elasticsearch Service provisions all resources for the Elasticsearch cluster and launches it. It also automatically replaces failed Elasticsearch nodes in the cluster.

Features of Amazon Elasticsearch Service

  • It can scale up to 3 PB of attached storage
  • It works with various instance types.
  • It integrates easily with other services such as IAM for security, VPC, AWS S3 for loading data, AWS CloudWatch for monitoring, and AWS SNS for alert notifications.

Prerequisites:

Terraform Configuration Files and Structure

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf: This file contains the code that creates or imports AWS resources.
  • vars.tf: This file declares the variable types and optionally sets default values.
  • output.tf: This file defines the outputs of the AWS resources. The output is displayed after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. You provide the details of the provider, such as AWS, Oracle or Google, so that Terraform can communicate with that provider and work with its resources.

Configure Terraform files for AWS Elasticsearch

In this demonstration we will create a simple Amazon Elasticsearch domain using Terraform from a Windows machine.

  • Create a folder on the desktop of your Windows machine and name it Terraform-Elasticsearch.
  • Now create a file main.tf inside that folder and paste the below content.
resource "aws_elasticsearch_domain" "es" {
  domain_name           = var.domain
  elasticsearch_version = "7.10"

  cluster_config {
    instance_type = var.instance_type
  }
  snapshot_options {
    automated_snapshot_start_hour = 23
  }
  vpc_options {
    subnet_ids = ["subnet-0d8c53ffee6d4c59e"]
  }
  ebs_options {
    ebs_enabled = var.ebs_volume_size > 0 ? true : false
    volume_size = var.ebs_volume_size
    volume_type = var.volume_type
  }
  tags = {
    Domain = var.tag_domain
  }
}


resource "aws_elasticsearch_domain_policy" "main" {
  domain_name = aws_elasticsearch_domain.es.domain_name
  access_policies = <<POLICIES
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "es:*",
            "Principal": "*",
            "Effect": "Allow",
            "Resource": "${aws_elasticsearch_domain.es.arn}/*"
        }
    ]
}
POLICIES
}
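
The access policy above allows any principal ("Principal": "*") to call Elasticsearch actions against the domain, which is acceptable for a VPC-only demo. If you would rather restrict access to a specific IAM role, a hedged variant could look like the following; the account ID and role name are placeholders you would replace with your own:

resource "aws_elasticsearch_domain_policy" "restricted" {
  domain_name = aws_elasticsearch_domain.es.domain_name
  access_policies = <<POLICIES
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "es:*",
            "Principal": { "AWS": "arn:aws:iam::111122223333:role/es-access-role" },
            "Effect": "Allow",
            "Resource": "${aws_elasticsearch_domain.es.arn}/*"
        }
    ]
}
POLICIES
}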
  • Create one more file vars.tf inside the same folder and paste the below content
variable "domain" {
    type = string
}
variable "instance_type" {
    type = string
}
variable "tag_domain" {
    type = string
}
variable "volume_type" {
    type = string
}
variable "ebs_volume_size" {}
  • Create one more file output.tf inside the same folder and paste the below content
output "arn" {
    value = aws_elasticsearch_domain.es.arn
} 
output "domain_id" {
    value = aws_elasticsearch_domain.es.domain_id
} 
output "domain_name" {
    value = aws_elasticsearch_domain.es.domain_name
} 
output "endpoint" {
    value = aws_elasticsearch_domain.es.endpoint
} 
output "kibana_endpoint" {
    value = aws_elasticsearch_domain.es.kibana_endpoint
}
  • Create one more file provider.tf inside the same folder and paste the below content:
provider "aws" {      # Defining the Provider Amazon  as we need to run this on AWS   
  region = "us-east-1"
}
  • Create one more file terraform.tfvars inside the same folder and paste the below content
domain = "newdomain" 
instance_type = "r4.large.elasticsearch"
tag_domain = "NewDomain"
volume_type = "gp2"
ebs_volume_size = 10
  • Now your files and code are ready for execution.
  • Initialize Terraform using the below command.
terraform init
  • Terraform initialized successfully; now it's time to see the plan, which is a kind of blueprint before deployment. We generally use plan to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, it's time to actually deploy the code using apply.
terraform apply
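
Once apply finishes, the values defined in output.tf (for example the domain endpoint and the Kibana endpoint) are printed at the end of the run. They can also be read again at any time from the same folder with the terraform output command:

terraform output endpoint
terraform output kibana_endpoint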

Verify AWS Elasticsearch in Amazon Account

The Terraform commands (init, plan and apply) all ran successfully. Now let's verify on the AWS Management Console that everything was created properly.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘Elasticsearch’, and click on the Elasticsearch menu item.
  • Now you will see that newdomain has been created successfully.
  • Click on newdomain to see all the details.

Conclusion

In this tutorial you learned what Amazon Elasticsearch is and how to create an Amazon Elasticsearch domain using Terraform.

Now that you have a strong fundamental understanding of AWS Elasticsearch, which website are you going to implement Elasticsearch for with Terraform?

How to Setup AWS WAF and Web ACL using Terraform on Amazon Cloud

It is always good practice to monitor and make sure your applications or websites are fully protected. AWS provides a service known as AWS WAF that protects your web applications from common web exploits.

Let's learn everything about AWS WAF (Web Application Firewall) and use Terraform to create it.

Table of Contents

  1. What is AWS WAF ?
  2. Prerequisites
  3. Terraform Configuration Files and Structure
  4. Configure Terraform files for AWS WAF
  5. Deploy AWS WAF using Terraform commands
  6. Conclusion

What is AWS WAF ?

AWS WAF stands for Amazon Web Services Web Application Firewall. Using AWS WAF you can monitor all the HTTP or HTTPS requests from users that are forwarded to Amazon CloudFront, Amazon Load Balancer, Amazon API Gateway REST APIs, etc. It also controls who can access the required content or data based on specific conditions such as the source IP address.

AWS WAF protects your web applications from common web exploits. For a more detailed view of AWS WAF, please see the other blog post, What is AWS Web Application Firewall?

Prerequisites:

Terraform Configuration Files and Structure

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf: This file contains the code that creates or imports AWS resources.
  • vars.tf: This file declares the variable types and optionally sets default values.
  • output.tf: This file defines the outputs of the AWS resources. The output is displayed after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. You provide the details of the provider, such as AWS, Oracle or Google, so that Terraform can communicate with that provider and work with its resources.

Configure Terraform files for AWS WAF

In this demonstration we will create a simple AWS WAF setup using Terraform on a Windows machine.

  • Create a folder on your desktop or any location on your Windows machine (I prefer the desktop). Now create a file main.tf inside that folder and paste the below content.
# Creating the IP Set

resource "aws_waf_ipset" "ipset" {
   name = "MyFirstipset"
   ip_set_descriptors {
     type = "IPV4"
     value = "10.111.0.0/20"
   }
}

# Creating the Rule which will be applied on Web ACL component

resource "aws_waf_rule" "waf_rule" { 
  depends_on = [aws_waf_ipset.ipset]
  name        = var.waf_rule_name
  metric_name = var.waf_rule_metrics
  predicates {
    data_id = aws_waf_ipset.ipset.id
    negated = false
    type    = "IPMatch"
  }
}

# Creating the Rule Group which will be applied on Web ACL component

resource "aws_waf_rule_group" "rule_group" {  
  name        = var.waf_rule_group_name
  metric_name = var.waf_rule_group_metrics

  activated_rule {
    action {
      type = "COUNT"
    }
    priority = 50
    rule_id  = aws_waf_rule.waf_rule.id
  }
}

# Creating the Web ACL component in AWS WAF

resource "aws_waf_web_acl" "waf_acl" {
  depends_on = [ 
     aws_waf_rule.waf_rule,
     aws_waf_ipset.ipset,
      ]
  name        = var.web_acl_name
  metric_name = var.web_acl_metrics

  default_action {
    type = "ALLOW"
  }
  rules {
    action {
      type = "BLOCK"
    }
    priority = 1
    rule_id  = aws_waf_rule.waf_rule.id
    type     = "REGULAR"
 }
}
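
The aws_waf_ipset above matches a single CIDR block. If you need to match more than one range, the ip_set_descriptors block can simply be repeated; a small sketch is shown below, where the second CIDR is only an illustration and should be replaced with your own range:

resource "aws_waf_ipset" "ipset" {
   name = "MyFirstipset"
   ip_set_descriptors {
     type  = "IPV4"
     value = "10.111.0.0/20"
   }
   ip_set_descriptors {
     type  = "IPV4"
     value = "192.0.2.0/24"   # Example second range; replace with your own CIDR
   }
}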
  • Create one more file vars.tf inside the same folder and paste the below content
variable "web_acl_name" {
  type = string
}
variable "web_acl_metics" {
  type = string
}
variable "waf_rule_name" {
  type = string
}
variable "waf_rule_metrics" {
  type = string
}
variable "waf_rule_group_name" {
  type = string
}
variable "waf_rule_group_metrics" {
  type = string
}
  • Create one more file output.tf inside the same folder and paste the below content
output "aws_waf_rule_arn" {
   value = aws_waf_rule.waf_rule.arn
}

output "aws_waf_rule_id" {
   value = aws_waf_rule.waf_rule.id
}

output "aws_waf_web_acl_arn" {
   value = aws_waf_web_acl.waf_acl.arn
}

output "aws_waf_web_acl_id" {
   value = aws_waf_web_acl.waf_acl.id
}

output "aws_waf_rule_group_arn" {
   value = aws_waf_rule_group.rule_group.arn
}

output "aws_waf_rule_group_id" {
   value = aws_waf_rule_group.rule_group.id
}
  • Create one more file provider.tf inside the same folder and paste the below content
provider "aws" {      
  region = "us-east-1"
}
  • Again, Create one more file terraform.tfvars inside the same folder and paste the below content
web_acl_name = "myFirstwebacl"
web_acl_metrics = "myFirstwebaclmetrics"
waf_rule_name = "myFirstwafrulename"
waf_rule_metrics = "myFirstwafrulemetrics"
waf_rule_group_name = "myFirstwaf_rule_group_name"
waf_rule_group_metrics = "myFirstwafrulgroupmetrics"
  • Now your files and code are all set.

Deploy AWS WAF using Terraform commands

  • Now, let's initialize Terraform by running the following init command.
terraform init
  • Terraform initialized successfully; now it's time to see the plan, which is a kind of blueprint before deployment. We generally use plan to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, it's time to actually deploy the code using the apply command.
terraform apply

By now, you should have created the Web ACL and other components of AWS WAF with Terraform. Let’s verify by manually checking in the AWS Management Console.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘WAF’, and click on the WAF menu item.
  • Now you should be on the AWS WAF page. Let's verify each component, starting with the Web ACL.
  • Now verify the IP Set.
  • Now verify the rules that are in the Web ACL.
  • Next, let's verify the Web ACL rule groups.

Conclusion

In this tutorial you learned about AWS WAF, the Web Application Firewall, and how to set it up in the Amazon cloud using Terraform.

It is very important to protect your website from attacks. So which website do you plan to protect?

Hope this tutorial helped you and if so please comment and share it with your friends.

How to Install and Setup Terraform on Windows Machine step by step

There are lots of automation tools and scripts available for automating infrastructure, and one of the finest is Terraform, an infrastructure-as-code tool.

Learn how to Install and Setup Terraform on Windows Machine step by step.

Table of Content

  1. What is Terraform ?
  2. Prerequisites
  3. How to Install Terraform on Windows 10 machine
  4. Creating an IAM user in AWS account with programmatic access
  5. Configuring the IAM user Credentials on Windows Machine
  6. Run Terraform commands from Windows machine
  7. Launch an EC2 instance using Terraform
  8. Conclusion

What is Terraform ?

Terraform is a tool for building, versioning and changing cloud infrastructure. Terraform is written in Go, and its configuration files are written in HCL (HashiCorp Configuration Language), which is much easier to work with than YAML or JSON.

Terraform has been in use for quite a while now. I would say it is an amazing tool for building and changing infrastructure in a very effective and simple way. It works with a variety of cloud providers such as Amazon AWS, Oracle, Microsoft Azure, Google Cloud and many more. I hope you will love learning and using it.

Prerequisites

How to Install Terraform on Windows machine

  • Open your favorite browser and download the appropriate version of Terraform from HashiCorp's download page. This tutorial uses Terraform version 0.13.0.
  • Make a folder on your C:\ drive where you can put the Terraform executable, something like C:\tools where you keep binaries.
  • Extract the zip file to the folder C:\tools.
  • Now open your Start Menu, type in "environment", and the first result should be the Edit the System Environment Variables option. Click on it.
  • Under System Variables, look for Path and edit it.
  • Click New and add the folder path where terraform.exe is located to the bottom of the list.
  • Click OK on each of the menus.
  • Now open Command Prompt or PowerShell and check that Terraform has been properly added to PATH by running the terraform command from any location.
On Windows Machine command Prompt
On Windows Machine PowerShell
  • Verify the installation was successful by entering terraform --version. If it returns a version, you’re good to go.

Creating an IAM user in AWS account with programmatic access

For Terraform to connect to AWS services, you need an IAM user with an access key ID and secret access key in the AWS account; you will configure these credentials on your local machine so that it can connect to the AWS account.

There are two ways to connect to an AWS account: the first is to provide a username and password on the AWS login page in the browser, and the other is to configure an access key ID and secret access key on your machine and then use command-line tools to connect programmatically.

  1. Open your favorite web browser and navigate to the AWS Management Console and log in.
  2. While in the Console, click on the search bar at the top, search for 'IAM', and click on the IAM menu item.
  3. To create a user, click on Users → Add user, provide the user name myuser, and make sure to tick the Programmatic access checkbox under Access type, which enables an access key ID and secret access key. Then hit the Permissions button.
  4. Now select the "Attach existing policies directly" option under Set permissions and look for the "AdministratorAccess" policy using the filter policies search box. This policy will allow myuser to have full access to AWS services.
  5. Finally, click on Create user.
  6. The user is now created successfully and you will see an option to download a .csv file. Download this file; it contains the IAM user's (myuser) access key ID and secret access key, which you will use later in the tutorial to connect to AWS services from your local machine.

Configuring the IAM user Credentials on Windows Machine

Now you have the IAM user myuser created. The next step is to set up the downloaded myuser credentials on the local machine, which you will use to connect to AWS services via API calls.

  1. Create a new file, C:\Users\your_profile\.aws\credentials on your local machine.
  2. Next, enter the access key ID and secret access key from the downloaded .csv file into the credentials file in the same format and save the file.
[default]     # Profile Name
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = vIaGXXXXXXXXXXXXXXXXXXXX

The credentials file lets you define profiles. This way, you can create multiple profiles and avoid confusion when connecting to specific AWS accounts.

  3. Similarly, create another file, C:\Users\your_profile\.aws\config, in the same directory.
  4. Next, add the region to the config file, make sure to use the same profile name that you provided in the credentials file, and save the file. This file allows you to work with a specific region.
[default]   # Profile Name
region = us-east-2
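
The [default] profile is picked up automatically. If you keep credentials for several accounts (as mentioned above), you can add another named section such as [dev] to both the credentials and config files and point Terraform's AWS provider at it; the profile name dev below is only an example:

provider "aws" {
  region  = "us-east-2"
  profile = "dev"   # "dev" is an example named profile defined in the credentials file
}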

Run Terraform commands from Windows machine

By now, you have already installed Terraform on your Windows machine and configured the IAM user (myuser) credentials so that Terraform can use them to connect to AWS services in the Amazon account.

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf: This file contains the code that creates or imports AWS resources.
  • vars.tf: This file declares the variable types and optionally sets default values.
  • output.tf: This file defines the outputs of the AWS resources. The output is displayed after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. You provide the details of the provider, such as AWS, Oracle or Google, so that Terraform can communicate with that provider and work with its resources.

Launch an EC2 Instance Using Terraform

In this demonstration we will create a simple Amazon Web Services (AWS) EC2 instance and run Terraform commands on a Windows machine.

  • Create a folder on your desktop or any location on your Windows machine (I prefer the desktop).
  • Now create a file main.tf inside that folder and paste the below content.
resource "aws_instance" "my-machine" {  # Resource block to define what to create
  ami = var.ami         # ami is required as we need ami in order to create an instance
  instance_type = var.instance_type             # Similarly we need instance_type
}
  • Create one more file vars.tf inside the same folder and paste the below content
variable "ami" {         # Declare the variable ami which you used in main.tf
  type = string      
}

variable "instance_type" {        # Declare the variable instance_type used in main.tf
  type = string 
}

Next, selecting the instance type is important. Click here to see a list of different instance types. To find the image ID (AMI), navigate to the Launch Instance Wizard and search for ubuntu in the search box to get all the Ubuntu image IDs. This tutorial uses the Ubuntu Server 18.04 LTS image.
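
If you prefer not to look up the AMI ID manually, Terraform can also resolve the latest Ubuntu 18.04 image with an aws_ami data source. This is optional and not part of the original demo; a minimal sketch (099720109477 is Canonical's owner account ID) is below, after which main.tf could reference data.aws_ami.ubuntu.id instead of var.ami:

data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]   # Canonical's AWS account

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
  }
}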

  • Create one more file output.tf inside the same folder and paste the below content
output "ec2_arn" {
  value = aws_instance.my-machine.arn     # Value depends on resource name and type ( same as that of main.tf)
}  
  • Create one more file provider.tf inside the same folder and paste the below content:
provider "aws" {      # Defining the Provider Amazon  as we need to run this on AWS   
  region = "us-east-1"
}
  • Create one more file terraform.tfvars inside the same folder and paste the below content
ami = "ami-013f17f36f8b1fefb" 
instance_type = "t2.micro"
  • Now your files and code are ready for execution.
  • Initialize Terraform using the below command.
terraform init
  • Terraform initialized successfully; now it's time to see the plan, which is a kind of blueprint before deployment. We generally use plan to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, it's time to actually deploy the code using apply.
terraform apply

Great job, the Terraform commands executed successfully. Now we should have an EC2 instance launched in AWS.

It generally takes a minute or so to launch an instance, and we can see that the instance is now successfully launched in the us-east-1 region as expected.
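
If you were only experimenting, you can remove everything this configuration created by running the destroy command from the same folder:

terraform destroy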

Conclusion

In this tutorial you learned what Terraform is, how to install and set up Terraform on a Windows machine, and how to launch an EC2 instance in an AWS account using Terraform.

Keep Terraforming !!

Hope this tutorial helps you in understanding and setting up Terraform on a Windows machine. Please share it with your friends.

How to create an AWS EKS cluster using Terraform and connect to the Kubernetes cluster from an Ubuntu machine

Working with container orchestration tools like Kubernetes has always been a top priority. Moving from self-managed Kubernetes to EKS has benefited almost everyone who has used it, because AWS manages the cluster infrastructure, deployments and scaling for you.

Having said that, there is still some work to do, such as creating the AWS EKS cluster with the right permissions and policies, so why not automate this as well? That is exactly what this tutorial does: we will use Terraform to automate the creation of the EKS cluster.

So in this tutorial we will configure a few Terraform files, after which you can create clusters repeatedly with very little effort. Please follow along.

Table of content

  • What is AWS EKS
  • Prerequisites
  • How to install Terraform on ubuntu machine
  • Terraform Configuration Files and Structure
  • Configure Terraform files to create AWS EKS cluster
  • Configure & Connect your Ubuntu Machine to communicate with your cluster
  • Conclusion

What is AWS EKS (Amazon Elastic Kubernetes Service)?

Amazon provides its own service, AWS EKS, where you can host Kubernetes without worrying about infrastructure such as Kubernetes nodes or the Kubernetes installation itself. It gives you a managed platform to host Kubernetes.

Some features of Amazon EKS (Elastic Kubernetes Service):

  • It spreads and scales across multiple Availability Zones so that there is always high availability.
  • It automatically scales and replaces any impacted or unhealthy node.
  • It integrates with various other AWS services such as IAM, VPC, ECR and ELB.
  • It is a very secure service.

How does AWS EKS service work?

  • The first step in EKS is to create an EKS cluster using the AWS CLI or the AWS Management Console.
  • Next, you can run your workloads on your own EC2 machines, or deploy to AWS Fargate, which manages the compute for you.
  • Then connect to the Kubernetes cluster with kubectl commands.
  • Finally, deploy and run applications on the EKS cluster.

Prerequisites

  • An Ubuntu machine to run Terraform, preferably version 18.04 or later. If you don't have a machine, you can create an EC2 instance in your AWS account.
  • 4 GB of RAM is recommended.
  • At least 5 GB of drive space.
  • The Ubuntu machine should have an IAM role attached with full AWS EKS permissions, or administrator permissions if you just want to work through the demo.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

How to Install Terraform on Ubuntu 18.04 LTS

  • Update your already existing system packages.
sudo apt update
  • Download the Terraform 0.14.8 release into the opt directory.
wget https://releases.hashicorp.com/terraform/0.14.8/terraform_0.14.8_linux_amd64.zip
  • Install the zip package, which will be required to unzip the archive.
sudo apt-get install zip -y
  • Unzip the downloaded Terraform zip file.
unzip terraform*.zip
  • Move the executable to a directory on the PATH, such as /usr/local/bin.
sudo mv terraform /usr/local/bin
  • Verify Terraform by running the terraform command and checking its version.
terraform               # To check if terraform is installed 

terraform -version      # To check the terraform version  
  • This confirms that Terraform has been successfully installed on the Ubuntu 18.04 machine.

Terraform Configuration Files and Structure

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf: This file contains the code that creates or imports AWS resources.
  • vars.tf: This file declares the variable types and optionally sets default values.
  • output.tf: This file defines the outputs of the AWS resources. The output is displayed after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. You provide the details of the provider, such as AWS, Oracle or Google, so that Terraform can communicate with that provider and work with its resources.

Configure Terraform files to create AWS EKS cluster

In this demonstration we will create an IAM role and attach the required IAM policies to it. Later in the tutorial we will create the Kubernetes cluster with proper network configuration. Let's get started and configure the Terraform files required to create AWS EKS in the AWS account.

  • Create a folder inside opt directory
mkdir /opt/terraform-eks-demo
cd /opt/terraform-eks-demo
  • Now create a file main.tf inside the directory you’re in
vi main.tf
  • This is our main.tf file; paste the below code inside it.
# Creating IAM role with assume policy so that it can be assumed while connecting with Kubernetes cluster.

resource "aws_iam_role" "iam-role-eks-cluster" {
  name = "terraform-eks-cluster"
  assume_role_policy = <<POLICY
{
 "Version": "2012-10-17",
 "Statement": [
   {
   "Effect": "Allow",
   "Principal": {
    "Service": "eks.amazonaws.com"
   },
   "Action": "sts:AssumeRole"
   }
  ]
 }
POLICY
}

# Attach both EKS-Service and EKS-Cluster policies to the role.

resource "aws_iam_role_policy_attachment" "eks-cluster-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = "${aws_iam_role.iam-role-eks-cluster.name}"
}

resource "aws_iam_role_policy_attachment" "eks-cluster-AmazonEKSServicePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
  role       = "${aws_iam_role.iam-role-eks-cluster.name}"
}

# Create security group for AWS EKS.

resource "aws_security_group" "eks-cluster" {
  name        = "SG-eks-cluster"
  vpc_id      = "vpc-XXXXXXXXXXX"  # Use your VPC here

  egress {                   # Outbound Rule
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {                  # Inbound Rule
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

}

# Create EKS cluster

resource "aws_eks_cluster" "eks_cluster" {
  name     = "terraformEKScluster"
  role_arn =  "${aws_iam_role.iam-role-eks-cluster.arn}"
  version  = "1.19"

  vpc_config {             # Configure EKS with vpc and network settings 
   security_group_ids = ["${aws_security_group.eks-cluster.id}"]
   subnet_ids         = ["subnet-XXXXX","subnet-XXXXX"] # Use Your Subnets here
    }

  depends_on = [
    aws_iam_role_policy_attachment.eks-cluster-AmazonEKSClusterPolicy,
    aws_iam_role_policy_attachment.eks-cluster-AmazonEKSServicePolicy,
   ]
}



# Creating IAM role for EKS worker nodes with an assume-role policy so that EC2 instances can assume it


resource "aws_iam_role" "eks_nodes" {
  name = "eks-node-group"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_nodes.name
}

resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_nodes.name
}

resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_nodes.name
}

# Create EKS cluster node group

resource "aws_eks_node_group" "node" {
  cluster_name    = aws_eks_cluster.eks_cluster.name
  node_group_name = "node_tuto"
  node_role_arn   = aws_iam_role.eks_nodes.arn
  subnet_ids      = ["subnet-","subnet-"]

  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }

  # Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
  ]
}
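
One thing to note: this demo does not show a separate provider.tf. Assuming the same structure as the earlier demos, a minimal provider file in the same directory would look like the sketch below (us-east-2 matches the region used later when updating the kubeconfig):

provider "aws" {
  region = "us-east-2"   # Region where the VPC, subnets and EKS cluster live
}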
 
 
  • Below is what the directory of our demo should look like.
  • Now your files and code are ready for execution. Initialize Terraform.
terraform init
  • Terraform initialized successfully; now it's time to see the plan, which is a kind of blueprint before deployment. We generally use plan to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, it's time to actually deploy the code using apply.
terraform apply
  • Generally the EKS cluster takes a few minutes to launch.
  • Let's verify our AWS EKS cluster and the other components created by Terraform.
IAM Role with proper permissions.

  • Now verify Amazon EKS cluster
  • Finally verify the node group of the cluster.

Configure & Connect your Ubuntu Machine to communicate with your cluster

Up to now we created the Kubernetes cluster in AWS EKS with the proper IAM role permissions and configuration. Please make sure the AWS credentials configured on your local machine match the same IAM user or IAM role that was used while creating the cluster; in other words, use the same IAM credentials on the local machine that were used to create the Kubernetes cluster.

In this demonstration we are using the IAM role credentials of the EC2 instance from which we created AWS EKS using Terraform, so we can perform the steps below on the same machine.

  • Prerequisites: make sure you have the AWS CLI and kubectl installed on the Ubuntu machine in order to make the connection. If you don't have these two, don't worry; please visit our other article to find both installations.
  • On the Ubuntu machine, update the kubeconfig so that your local machine can communicate with the Kubernetes cluster in AWS EKS.
aws eks update-kubeconfig --region us-east-2 --name terraformEKScluster
  • Now, finally, test the communication between the local machine and the cluster.
kubectl get svc

Great, we can see the connectivity from our local machine to the Kubernetes cluster.

Conclusion:

In this tutorial, we first went through a detailed view of what AWS Elastic Kubernetes Service is and how to create a Kubernetes cluster using Terraform. Then we showcased the connection between the Kubernetes cluster and the kubectl client on an Ubuntu machine.

Hope you enjoyed this tutorial with its detailed explanations and practical examples. If you liked it, please share it with your friends and spread the word.

How to Launch AWS Elastic beanstalk using Terraform

Working on Amazon Web Services is an amazing thing in itself, as Amazon provides lots of services to free you from the hassle of setting up the entire infrastructure step by step. Suppose you want to create three instances, put a load balancer in front of them, host a website, and store all the data in a database.

No doubt we have the AWS EC2 service for compute, the AWS RDS service for relational databases, and ELB (Elastic Load Balancing) for load balancers. But what if we could have a common platform to work with all these services together? Wouldn't that be much easier for all of us? Yes, it would, and that is exactly what AWS Elastic Beanstalk does; it is one of the best services you can use on Amazon Web Services.

In this tutorial, we will learn how to set up AWS Elastic Beanstalk using Terraform step by step and then upload code to run a simple application.

Table of content

  1. What is Elastic beanstalk?
  2. Prerequisites
  3. How to Install Terraform on Ubuntu 18.04 LTS
  4. Terraform Configuration Files and Structure
  5. Launch Elastic beanstalk on Amazon Web Service using Terraform
  6. Conclusion

What is Elastic beanstalk?

AWS Elastic Beanstalk is one of the best services you can use on Amazon Web Services. It works with a variety of languages such as Python, Go, Ruby, Java, .NET and PHP for hosting applications. The only thing you need to do in Elastic Beanstalk is write your code in any of these high-level languages and upload it to AWS Elastic Beanstalk; the rest, such as scaling, load balancing and monitoring, is taken care of by Elastic Beanstalk itself.

Elastic Beanstalk makes life much easier for developers as well as cloud admins or sysadmins compared to setting up each service individually and interlinking them.

  • Some of the key benefits of AWS Elastic Beanstalk are:
    1. It scales applications up or down according to the traffic.
    2. Because the infrastructure is managed by AWS Elastic Beanstalk, developers and admins don't need to spend much time on it.
    3. It is fast and easy to set up.
    4. You can interlink it with lots of other AWS services of your choice, such as an application, classic or network load balancer, or skip them entirely.

Prerequisites

  • An Ubuntu machine to run Terraform, preferably version 18.04 or later. If you don't have a machine, you can create an EC2 instance in your AWS account.
  • 4 GB of RAM is recommended.
  • At least 5 GB of drive space.
  • The Ubuntu machine should have an IAM role attached with AWS Elastic Beanstalk creation permissions, or administrator permissions if you just want to work through the demo.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

How to Install Terraform on Ubuntu 18.04 LTS

  • Update your already existing system packages.
sudo apt update
  • Download the Terraform 0.14.8 release into the opt directory.
wget https://releases.hashicorp.com/terraform/0.14.8/terraform_0.14.8_linux_amd64.zip
  • Install the zip package, which will be required to unzip the archive.
sudo apt-get install zip -y
  • Unzip the downloaded Terraform zip file.
unzip terraform*.zip
  • Move the executable to a directory on the PATH, such as /usr/local/bin.
sudo mv terraform /usr/local/bin
  • Verify Terraform by running the terraform command and checking its version.
terraform               # To check if terraform is installed 

terraform -version      # To check the terraform version  
  • This confirms that Terraform has been successfully installed on the Ubuntu 18.04 machine.

Terraform Configuration Files and Structure

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf: This file contains the code that creates or imports AWS resources.
  • vars.tf: This file declares the variable types and optionally sets default values.
  • output.tf: This file defines the outputs of the AWS resources. The output is displayed after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. You provide the details of the provider, such as AWS, Oracle or Google, so that Terraform can communicate with that provider and work with its resources.

Launch AWS Elastic beanstalk on AWS using Terraform

  • Now you will create all the configuration files that are required to create Elastic Beanstalk in the AWS account.
  • Create a folder in the opt directory, name it terraform-elasticbeanstalk-demo, and create all the files under this folder.

main.tf

# Create elastic beanstalk application


resource "aws_elastic_beanstalk_application" "elasticapp" {
  name = var.elasticapp
}

# Create elastic beanstalk Environment

resource "aws_elastic_beanstalk_environment" "beanstalkappenv" {
  name                = var.beanstalkappenv
  application         = aws_elastic_beanstalk_application.elasticapp.name
  solution_stack_name = var.solution_stack_name
  tier                = var.tier

  setting {
    namespace = "aws:ec2:vpc"
    name      = "VPCId"
    value     = var.vpc_id
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "IamInstanceProfile"
    value     =  "aws-elasticbeanstalk-ec2-role"
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "AssociatePublicIpAddress"
    value     =  "True"
  }

  setting {
    namespace = "aws:ec2:vpc"
    name      = "Subnets"
    value     = join(",", var.public_subnets)
  }
  setting {
    namespace = "aws:elasticbeanstalk:environment:process:default"
    name      = "MatcherHTTPCode"
    value     = "200"
  }
  setting {
    namespace = "aws:elasticbeanstalk:environment"
    name      = "LoadBalancerType"
    value     = "application"
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "InstanceType"
    value     = var.instance_type
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "ELBScheme"
    value     = "internet facing"
  }
  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MinSize"
    value     = var.minsize
  }
  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MaxSize"
    value     = var.maxsize
  }
  setting {
    namespace = "aws:elasticbeanstalk:healthreporting:system"
    name      = "SystemType"
    value     = "enhanced"
  }

}

vars.tf

variable "elasticapp" {
  default = "myapp"
}
variable "beanstalkappenv" {
  default = "myenv"
}
variable "solution_stack_name" {
  type = string
}
variable "tier" {
  type = string
}

variable "vpc_id" {}
variable "public_subnets" {}
variable "elb_public_subnets" {}

provider.tf

provider "aws" {
  region = "us-east-2"
}

terraform.tfvars

vpc_id              = "vpc-XXXXXXXXX"
instance_type       = "t2.medium"
minsize             = 1
maxsize             = 2
public_subnets     = ["subnet-XXXXXXXXXX", "subnet-XXXXXXXXX"] # Service Subnet
elb_public_subnets = ["subnet-XXXXXXXXXX", "subnet-XXXXXXXXX"] # ELB Subnet
tier                = "WebServer"
solution_stack_name = "64bit Amazon Linux 2 v3.2.0 running Python 3.8"
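
This demo does not include an output.tf. If you would like Terraform to print the environment URL after apply, an optional sketch using the cname attribute exported by aws_elastic_beanstalk_environment is:

output "beanstalk_cname" {
  value = aws_elastic_beanstalk_environment.beanstalkappenv.cname   # CNAME of the Beanstalk environment
}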
  • Now your files and code are ready for execution. Initialize Terraform.
terraform init
  • Terraform initialized successfully; now it's time to see the plan, which is a kind of blueprint before deployment. We generally use plan to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, it's time to actually deploy the code using apply.
terraform apply

Great job, the Terraform commands executed successfully. Now we should have AWS Elastic Beanstalk launched in AWS.

  • Now go to the AWS account, search for the AWS Elastic Beanstalk service, and once it opens, click on myenv and check the URL.

Conclusion

In this tutorial, we demonstrated some benefits of AWS Elastic Beanstalk and learned how to set it up using Terraform on AWS step by step.

Hope this tutorial helps you in understanding Terraform and provisioning Elastic Beanstalk on the Amazon cloud. Please share it with your friends.

How to create Secrets in AWS Secrets Manager using Terraform in Amazon account.

Are you saving your passwords in text files, configuration files or deployment files while deploying to Amazon AWS accounts? That is very risky, but no worries: you have come to the right place to learn about and use AWS Secrets Manager, which solves these security concerns by encrypting all of your stored passwords and decrypting them only when they are retrieved.

Table of content

  1. What is AWS Secrets and Secret Manager?
  2. Prerequisites
  3. How to Install Terraform on Ubuntu 18.04 LTS
  4. Terraform Configuration Files and Structure
  5. Configure Terraform File to Create AWS Secrets and Secrets versions on AWS
  6. Create Postgres database using terraform with database master account credentials as AWS Secrets
  7. Conclusion

What is AWS Secrets?

There was a time when all database or application passwords were kept in configuration files. Although they were kept secure, they could still be compromised if not taken care of. If you needed to update the credentials, it used to take hours to apply the change to every single file, and if you missed any file, it could bring the entire application down immediately.

Now here comes an AWS service, Secrets Manager, which solves all the above issues by letting you retrieve passwords programmatically. Another major benefit of using AWS Secrets Manager is that it can rotate your credentials on any schedule you define.

We are using AWS Secrets Manager so that we can keep our main and important passwords safe and secure.

The application connects to Secrets Manager to retrieve the secrets and then connects to the database.

Prerequisites

  • An Ubuntu machine to run Terraform, preferably version 18.04 or later. If you don't have a machine, you can create an EC2 instance in your AWS account.
  • 4 GB of RAM is recommended.
  • At least 5 GB of drive space.
  • The Ubuntu machine should have an IAM role attached with full access to create AWS secrets, or administrator permissions if you just want to work through the demo.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

How to Install Terraform on Ubuntu 18.04 LTS

  • Update your already existing system packages.
sudo apt update
  • Download the Terraform 0.14.8 release into the opt directory.
wget https://releases.hashicorp.com/terraform/0.14.8/terraform_0.14.8_linux_amd64.zip
  • Install the zip package, which will be required to unzip the archive.
sudo apt-get install zip -y
  • Unzip the downloaded Terraform zip file.
unzip terraform*.zip
  • Move the executable to a directory on the PATH, such as /usr/local/bin.
sudo mv terraform /usr/local/bin
  • Verify Terraform by running the terraform command and checking its version.
terraform               # To check if terraform is installed 

terraform -version      # To check the terraform version  
  • This confirms that Terraform has been successfully installed on the Ubuntu 18.04 machine.

Terraform Configuration Files and Structure

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf: This file contains the code that creates or imports AWS resources.
  • vars.tf: This file declares the variable types and optionally sets default values.
  • output.tf: This file defines the outputs of the AWS resources. The output is displayed after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. You provide the details of the provider, such as AWS, Oracle or Google, so that Terraform can communicate with that provider and work with its resources.

Configure Terraform File to Create AWS Secrets and Secrets versions on AWS

Now, let's create the Terraform configuration files required to create the secrets.

  • Create a folder in the opt directory, name it terraform-demo-secrets, and create all the files under this folder.

main.tf

# Firstly we will create a random generated password which we will use in secrets.

resource "random_password" "password" {
  length           = 16
  special          = true
  override_special = "_%@"
}


# Now create secret and secret versions for database master account 

resource "aws_secretsmanager_secret" "secretmasterDB" {
   name = "Masteraccoundb"
}

resource "aws_secretsmanager_secret_version" "sversion" {
  secret_id = aws_secretsmanager_secret.secretmasterDB.id
  secret_string = <<EOF
   {
    "username": "adminaccount",
    "password": "${random_password.password.result}"
   }
EOF
}


# Let's read back the secret we just created and store it so that we can use it later.


data "aws_secretsmanager_secret" "secretmasterDB" {
  arn = aws_secretsmanager_secret.secretmasterDB.arn
}

data "aws_secretsmanager_secret_version" "creds" {
  secret_id = data.aws_secretsmanager_secret.secretmasterDB.arn
}

# After Importing the secrets Storing the Imported Secrets into Locals

locals {
  db_creds = jsondecode(
  data.aws_secretsmanager_secret_version.creds.secret_string
   )
}

provider.tf

provider "aws" {
  region = "us-east-2"
}
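
Optionally, if you want Terraform to print the ARN of the new secret after apply (without exposing the secret value itself), a small output.tf sketch could be added alongside these files:

output "secret_arn" {
  value = aws_secretsmanager_secret.secretmasterDB.arn   # ARN of the secret created above
}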
  • Now your files and code are ready for execution. Initialize Terraform.
terraform init
  • Terraform initialized successfully; now it's time to see the plan, which is a kind of blueprint before deployment. We generally use plan to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, it's time to actually deploy the code using apply.
terraform apply
  • Now open your AWS account and search for AWS Secrets Manager.
  • Click on the secret we created, in our case Masteraccoundb, and scroll down a little.
  • Click on Retrieve secret value.

We can see that the secret was created successfully using Terraform. The next step is to use this secret as the credentials for the database master account while creating the database.

Create Postgres database using terraform with database master account credentials as AWS Secrets

  • Open the same main.tf again and paste the below code at the bottom
resource "aws_rds_cluster" "main" { 
  cluster_identifier = "democluster"
  database_name = "maindb"
  master_username = local.db_creds.username
  master_password = local.db_creds.password
  port = 5432
  engine = "aurora-postgresql"
  engine_version = "11.6"
  db_subnet_group_name = "dbsubntg"  # Make sure you create this before manually
  storage_encrypted = true 
}


resource "aws_rds_cluster_instance" "main" { 
  count = 2
  identifier = "myinstance-${count.index + 1}"
  cluster_identifier = "${aws_rds_cluster.main.id}"
  instance_class = "db.r4.large"
  engine = "aurora-postgresql"
  engine_version = "11.6"
  db_subnet_group_name = "dbsubntg"
  publicly_accessible = true 
}
  • This time you can directly run the apply command since this is a demo, although it is recommended to run terraform init, then plan, and then apply.
terraform apply
  • Now go to the AWS RDS service in the Amazon account and check the Postgres cluster.
  • Now click on democluster and then hop over to the configuration details.

Conclusion

In this tutorial, we demonstrated AWS Secrets Manager, learned how to create AWS secrets, and then created a Postgres database using those secrets as the master account credentials.

Hope this tutorial helps you in understanding Terraform and provisioning AWS secrets on the Amazon cloud. Please share it with your friends.

How to work with multiple Terraform Provisioners

Have you ever needed to pass data or a script to a compute resource after it has been created? Most of you might have passed user data or scripts at the time of creation.

You have come to the right place to learn about the most widely used Terraform provisioners, which solve the problem of working with a resource after it has been created, or with resources that already exist.

Table of content

  1. What is Terraform provisioners
  2. What are different actions performed by terraform provisioners.
  3. Prerequisites
  4. How to Install Terraform on Ubuntu 18.04 LTS
  5. Terraform Configuration Files and Structure
  6. Working with Various terraform provisioners on AWS EC2 instance
  7. Conclusion

What are Terraform provisioners?

Most cloud computing platforms provide ways to pass data into instances, such as an EC2 instance or any other compute resource, at the time of creation so that the data is immediately available on system boot. This is possible, for example, by passing user_data. We can also bake data into the AMIs used to create EC2 instances.

But what if we need to provide the data after the resource is created or is already in place? This is where Terraform provisioners come in: they pass data to a resource after it is created, or to resources that already exist.

Terraform provisioners that interact with remote servers over SSH or WinRM can be used to pass such data by logging in to the server and providing it directly.

What are the different actions performed by Terraform provisioners?

  1. They can perform a specific action on the local machine, that is, generate output on the same machine where Terraform runs (local-exec).
  2. They can perform a specific action on a remote machine, that is, generate output on the remote machine (remote-exec).
  3. They can copy files to remote machines (file provisioner).
  4. They are used to pass data into a resource that cannot be passed at the time of resource creation.
  5. You can add conditions to a provisioner such as when = destroy or on_failure = continue (see the sketch after this list).
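
As a minimal illustration of point 5 above, the sketch below uses a null_resource (from the null provider, installed automatically by terraform init) with a destroy-time local-exec provisioner that is allowed to fail without stopping the run; the echo command is just a placeholder:

resource "null_resource" "cleanup_example" {
  provisioner "local-exec" {
    when       = destroy      # Run only when the resource is destroyed
    on_failure = continue     # Do not abort the run if this command fails
    command    = "echo 'this runs only at destroy time'"
  }
}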

Prerequisites

  • An Ubuntu machine to run Terraform, preferably version 18.04 or later. If you don't have a machine, you can create an EC2 instance in your AWS account.
  • 4 GB of RAM is recommended.
  • At least 5 GB of drive space.
  • The Ubuntu machine should have an IAM role attached with all EC2 permissions, or administrator permissions if you just want to work through the demo.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

How to Install Terraform on Ubuntu 18.04 LTS

  • Update your already existing system packages.
sudo apt update
  • Download the Terraform 0.14.8 release into the opt directory.
wget https://releases.hashicorp.com/terraform/0.14.8/terraform_0.14.8_linux_amd64.zip
  • Install the zip package, which will be required to unzip the archive.
sudo apt-get install zip -y
  • Unzip the downloaded Terraform zip file.
unzip terraform*.zip
  • Move the executable to a directory on the PATH, such as /usr/local/bin.
sudo mv terraform /usr/local/bin
  • Verify Terraform by running the terraform command and checking its version.
terraform               # To check if terraform is installed 

terraform -version      # To check the terraform version  
  • This confirms that Terraform has been successfully installed on the Ubuntu 18.04 machine.

Terraform Configuration Files and Structure

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf: This file contains the code that creates or imports AWS resources.
  • vars.tf: This file declares the variable types and optionally sets default values.
  • output.tf: This file defines the outputs of the AWS resources. The output is displayed after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. You provide the details of the provider, such as AWS, Oracle or Google, so that Terraform can communicate with that provider and work with its resources.

Working with Various terraform provisioners on AWS EC2 instance

Now, let's dive into the demo where you will use multiple provisioners. This tutorial creates a key pair (public and private keys) that the provisioners use to connect and log in to the machine over SSH. The local-exec provisioner executes a command locally on the machine running Terraform. Next, the remote-exec provisioner installs software on the AWS EC2 instance, and finally the file provisioner uploads a file to the EC2 instance.

  • Create a file main.tf and paste the below code.
resource "aws_key_pair" "deployer" {     # Creating the Key pair on AWS 
  key_name   = "deployer-key"
  public_key = "${file("~/.ssh/id_rsa.pub")}" # Generated private and public key on local machine
}
 
resource "aws_instance" "my-machine" {        # Creating the instance
 
  ami = "ami-0a91cd140a1fc148a"
  key_name = aws_key_pair.deployer.key_name
  instance_type = "t2.micro"
 
  provisioner  "local-exec" {                  # Provisioner 1
        command = "echo ${aws_instance.my-machine.private_ip} >> ip.txt"
        on_failure = continue
       }
 
  provisioner  "remote-exec" {            # Provisioner 2 [needs SSH/Winrm connection]
      connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = "${file("~/.ssh/id_rsa")}"
      agent       = false
      host        = aws_instance.my-machine.public_ip       # Using my instance to connect
      timeout     = "30s"
    }
      inline = [
        "sudo apt install -y apache2",
      ]
  }
 
  provisioner "file" {                    # Provisioner 3 [needs SSH/Winrm connection]
    source      = "C:\\Users\\4014566\\Desktop\\service-policy.json"
    destination = "/tmp/file.json"
    connection {
      type        = "ssh"
      user        = "ubuntu"
      host        = aws_instance.my-machine.public_ip
      private_key = "${file("~/.ssh/id_rsa")}"
      agent       = false
      timeout     = "30s"
    }
  }
}
  • Create a file provider.tf and paste the below code
provider "aws" {
  region = "us-east-2"
}
  • Now your files and code are ready for execution. Initialize Terraform.
terraform init
  • Terraform initialized successfully; now it's time to see the plan, which is a kind of blueprint before deployment. We generally use plan to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
  • After plan verification, run the apply command to deploy the code.
terraform apply
  • Let's now verify the commands and execution.
command executed locally on the ubuntu machine using local exec
command executed on remote machine using other remote-exec and file provisioners

Great job, the Terraform provisioners executed successfully both locally and on the remote machine in AWS.

Conclusion

In this tutorial, we demonstrated some benefits of Terraform provisioners and learned how to work with the various provisioners on AWS step by step.

Hope this tutorial helps you in understanding Terraform and working with the various Terraform provisioners on the Amazon cloud. Please share it with your friends.

How to Launch AWS S3 bucket on Amazon using Terraform

Do you have issues with log rotation, does your system hang when lots of logs are generated on the disk, or do you lack space to keep your important deployment JARs or WARs? These are all challenges everyone has faced while working with datacenter applications or low-capacity VMs.

It is the right time to move all your logs, deployment code and scripts to a better place, and this is very much possible with Amazon's AWS S3, which provides unlimited storage that is safe, secure and quick. So in this tutorial we will go through what AWS S3 is, its features, and how to launch an S3 bucket using Terraform.

Table of content

  1. What is AWS Amazon S3 bucket?
  2. Prerequisites
  3. How to Install Terraform on Ubuntu 18.04 LTS
  4. Terraform Configuration Files and Structure
  5. Launch AWS S3 bucket on Amazon Web Service using Terraform
  6. Upload an object to AWS S3 bucket
  7. Conclusion

What is Amazon AWS S3 bucket?

AWS S3, why is it called S3? The name comes from its full form, Simple Storage Service: three words starting with "S". AWS S3 helps store unlimited amounts of data safely and efficiently. Its architecture is very simple: everything stored in AWS S3 is an object, such as a PDF file, zip file, text file or WAR file, and a bucket is the container in which all these objects reside.

AWS S3 Service ➡️ Bucket ➡️ Objects ➡️ PDF , HTML DOCS, WAR , ZIP FILES etc

Some of the features of AWS S3 bucket are:

  • In order to store data in a bucket you first need to upload it.
  • To keep your bucket more secure, grant only the necessary permissions to IAM roles or IAM users.
  • Bucket names are globally unique, which means a given bucket name can exist only once across all accounts and all regions.
  • By default you can create 100 buckets in an AWS account; beyond that you need to request a limit increase from Amazon.
  • The owner of a bucket is specific to the AWS account that created it.
  • Buckets are created in a specific region such as us-east-1, us-east-2, us-west-1 or us-west-2.
  • Bucket objects are accessed through the AWS S3 API.
  • Buckets can be made publicly visible, meaning anybody on the internet can access them, so it is always recommended to keep public access blocked for all buckets unless it is really required (see the sketch after this list).
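As a quick reference for that last recommendation, here is a minimal, hypothetical Terraform sketch of blocking all public access on a bucket (the resource names here are illustrative and not part of this tutorial's code):
# Hypothetical example: block every form of public access on a bucket
resource "aws_s3_bucket_public_access_block" "example" {
  bucket = aws_s3_bucket.example.id   # assumes a bucket resource named "example"

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}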

Prerequisites

  • Ubuntu machine to run Terraform, preferably version 18.04 or later; if you don't have one you can create an EC2 instance in your AWS account.
  • Recommended to have 4GB RAM
  • At least 5GB of drive space
  • The Ubuntu machine should have an IAM role attached with full access to create AWS S3 buckets, or ideally administrator permissions, which makes working with demos easier.
  • If you wish to create the bucket manually, click here for the setup instructions, but we will use Terraform for this demo.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

How to Install Terraform on Ubuntu 18.04 LTS

  • Update your already existing system packages.
sudo apt update
  • Download the Terraform release zip into the opt directory.
wget https://releases.hashicorp.com/terraform/0.14.8/terraform_0.14.8_linux_amd64.zip
  • Install zip package which will be required to unzip
sudo apt-get install zip -y
  • Unzip the downloaded Terraform zip file.
unzip terraform*.zip
  • Move the executable to a directory on your PATH.
sudo mv terraform /usr/local/bin
  • Verify the installation by running the terraform command and checking the Terraform version.
terraform               # To check if terraform is installed 

terraform -version      # To check the terraform version  
  • This confirms that Terraform has been successfully installed on the Ubuntu 18.04 machine.

Terraform Configuration Files and Structure

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf : This file contains the code that creates or imports AWS resources.
  • vars.tf : This file defines the variable types and optionally sets their values.
  • output.tf: This file generates output from AWS resources; the output is displayed after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. You need to provide the details of the provider, such as AWS, Oracle or Google, so that Terraform can communicate with that provider and work with its resources.

Launch AWS S3 bucket on AWS using Terraform

Now we will create all the configuration files required to create the S3 bucket in your AWS account.

  • Create a folder in opt directory and name it as terraform-s3-demo 
mkdir /opt/terraform-s3-demo
cd /opt/terraform-s3-demo
  • Create main.tf file under terraform-s3-demo folder and paste the content below.
# Bucket Access

resource "aws_s3_bucket_public_access_block" "publicaccess" {
  bucket = aws_s3_bucket.demobucket.id
  block_public_acls       = false
  block_public_policy     = false
}

# Creating the encryption key which will encrypt the bucket objects

resource "aws_kms_key" "mykey" {
  deletion_window_in_days = "20"
}

# Creating the bucket

resource "aws_s3_bucket" "demobucket" {

  bucket          = var.bucket
  force_destroy   = var.force_destroy

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = aws_kms_key.mykey.arn
        sse_algorithm     = "aws:kms"
      }
    }
  }
  versioning {
    enabled               = true
  }
  lifecycle_rule {
    prefix  = "log/"
    enabled = true
    expiration {
      date = var.date
    }
  }
}
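Note: the inline server_side_encryption_configuration, versioning and lifecycle_rule blocks above work with the 3.x releases of the AWS provider. If you run the demo against AWS provider 4.x or newer, these settings have moved into separate resources; a minimal sketch of the versioning part under that assumption:
# Only needed with AWS provider 4.x+: versioning becomes a standalone resource
resource "aws_s3_bucket_versioning" "demobucket" {
  bucket = aws_s3_bucket.demobucket.id

  versioning_configuration {
    status = "Enabled"
  }
}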
  • Create vars.tf file under terraform-s3-demo folder and paste the content below
variable "bucket" {
 type = string
}
variable "force_destroy" {
 type = string
}
variable "date" {
 type = string
}
  • Create provider.tf file under terraform-s3-demo folder and paste the content below.
provider "aws" {
  region = "us-east-2"
}
  • Create terraform.tfvars file under terraform-s3-demo folder and paste the content below.
bucket          = "terraformdemobucket"   # bucket names are globally unique, so change this if the name is already taken
force_destroy   = false
date = "2022-01-12"
  • Now your files and code are ready for execution. Initialize Terraform.
terraform init
  • Terraform initialized successfully. Now it's time to review the plan, which acts as a blueprint before deployment. We generally run plan to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, it is time to actually deploy the code using apply.
terraform apply

The Terraform run completed successfully, and you should now have an AWS S3 bucket launched in AWS. Verify it by navigating to your AWS account, searching for the AWS S3 service and checking that the bucket has been created.

Upload an object in AWS S3 bucket

All the files and folders inside an AWS S3 bucket are known as objects.

  • Let us upload a sample text file to the bucket. First, click on the terraformdemobucket.
  • Now click on Upload and then Add files.
  • Choose any file from your system; we used sample.txt.
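If you prefer to keep everything in Terraform instead of using the console, an object can also be uploaded with the aws_s3_bucket_object resource. This is a minimal sketch, assuming a local sample.txt sitting next to your configuration files; it is an alternative, not part of the demo above.
# Alternative to the console upload: push sample.txt to the bucket with Terraform
resource "aws_s3_bucket_object" "sample" {
  bucket = aws_s3_bucket.demobucket.id
  key    = "sample.txt"   # object key (name) inside the bucket
  source = "sample.txt"   # local file path, relative to the working directory
}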

Conclusion

In this tutorial, we demonstrated some benefits of Amazon AWS S3 and learnt how to set up an AWS S3 bucket using Terraform step by step. A huge amount of application and website data is stored on AWS S3, and it is a popular choice for hosting static websites.

Hope this tutorial helps you in understanding Terraform and provisioning AWS S3 on the Amazon cloud. Please share it with your friends.

How to Launch multiple EC2 instances on AWS using Terraform count and for_each

Creating many instances in any cloud provider is a common requirement for organizations and projects. If you were asked to create 10 EC2 machines in a particular AWS account using the console UI, it would take hours and a lot of effort. There are automated ways to create multiple instances in far less time, and with Terraform it is very simple and easy.

In this tutorial, we will learn to create multiple EC2 instances in an AWS account using Terraform code.

Table of Contents

  1. What is terraform?
  2. Prerequisites
  3. How to Install Terraform on Ubuntu 18.04 LTS
  4. Launch multiple EC2 instances of same type using count on AWS using Terraform
  5. Launch multiple EC2 instances of different type using for_each on AWS using Terraform
  6. Conclusion

What is Terraform?

Terraform is a tool for building, versioning and changing infrastructure. Terraform is written in Go, and its configuration files are written in HCL (HashiCorp Configuration Language), which many find easier to read and write than YAML or JSON.

Terraform has been in use for quite a while now. It is an amazing tool for building and changing infrastructure in a very effective and simple way. It works with a variety of cloud providers such as Amazon AWS, Oracle, Microsoft Azure, Google Cloud and many more. I hope you will enjoy learning and using it.
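To give a feel for HCL's block-and-argument syntax, here is a tiny, purely illustrative snippet (all names and values here are made up and not part of the demos below):
# HCL is organised into blocks with labels and key = value arguments
variable "environment" {
  type    = string
  default = "dev"
}

resource "aws_instance" "example" {
  ami           = "ami-12345678"   # placeholder AMI id
  instance_type = "t2.micro"

  tags = {
    Environment = var.environment
  }
}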

Prerequisites

  • Ubuntu machine preferably version 18.04 or later; if you don't have one you can create an EC2 instance in your AWS account.
  • Recommended to have 4GB RAM
  • At least 5GB of drive space
  • The Ubuntu machine should have an IAM role attached that allows AWS EC2 instance creation, which we will use later in this tutorial with Terraform.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

How to Install Terraform on Ubuntu 18.04 LTS

  • Update your already existing system packages.
sudo apt update
  • Download the Terraform release zip into the opt directory.
wget https://releases.hashicorp.com/terraform/0.14.8/terraform_0.14.8_linux_amd64.zip
  • Install zip package which will be required to unzip
sudo apt-get install zip -y
  • Unzip the downloaded Terraform zip file.
unzip terraform*.zip
  • Move the executable to a directory on your PATH.
sudo mv terraform /usr/local/bin
  • Verify the installation by running the terraform command and checking the Terraform version.
terraform               # To check if terraform is installed 

terraform -version      # To check the terraform version  
  • This confirms that Terraform has been successfully installed on the Ubuntu 18.04 machine.

Terraform Configuration Files and Structure

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf : This file contains the code that creates or imports AWS resources.
  • vars.tf : This file defines the variable types and optionally sets their values.
  • output.tf: This file generates output from AWS resources; the output is displayed after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. You need to provide the details of the provider, such as AWS, Oracle or Google, so that Terraform can communicate with that provider and work with its resources.

Launch multiple EC2 instances of same type using count on AWS using Terraform

Now, in this demonstration we will create multiple EC2 instances using the count and for_each parameters in Terraform. Let's create all the configuration files required for creating EC2 instances in an AWS account using Terraform.

  • Create a folder in opt directory and name it as terraform-demo 
mkdir /opt/terraform-demo
cd /opt/terraform-demo
  • Create main.tf file under terraform-demo folder and paste the content below.
resource "aws_instance" "my-machine" {
  count = 4     # Here we are creating 4 identical machines.
  
  ami = var.ami
  instance_type = var.instance_type
  tags = {
    Name = "my-machine-${count.index}"
         }
}
  • Create vars.tf file under terraform-demo folder and paste the content below
variable "ami" {                 # Creating a variable for ami
  type = string
}

variable "instance_type" {       # Creating a variable for instance_type
  type = string
}
  • Create terraform.tfvars file under terraform-demo folder and paste the content below.
 ami = "ami-0742a572c2ce45ebf"
 instance_type = "t2.micro"
  • Create output.tf file under terraform-demo folder and paste the content below.

Note: the output value must reference the resource type and name exactly as declared in main.tf.

output "ec2_machines" {
  value = aws_instance.my-machine.*.arn  # The splat (*) collects the ARNs of all 4 instances created with count
}
 

  • Create provider.tf file under terraform-demo folder and paste the content below.

provider "aws" {      # Defining the Provider Amazon  as we need to run this on AWS   
  region = "us-east-1"
}
  • Now your files and code are ready for execution. Initialize Terraform.
terraform init
  • Terraform initialized successfully, now it's time to run the terraform plan command.
  • The plan acts as a blueprint before deployment, confirming that the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, it is time to actually deploy the code using apply.
terraform apply

Great job, the Terraform run completed successfully. You should now have four EC2 instances launched in AWS.

Launch multiple EC2 instances of different type using for_each on AWS using Terraform

  • In the previous example we created more than one resource, but all with the same attributes such as instance_type.
  • Note: We use for_each in Terraform when we need to create more than one resource with different attributes, such as a different instance_type or key for each instance.

main.tf

resource "aws_instance" "my-machine" {
  ami = var.ami
  for_each  = {                     # for_each iterates over each key and values
      key1 = "t2.micro"             # Instance 1 will have key1 with t2.micro instance type
      key2 = "t2.medium"            # Instance 2 will have key2 with t2.medium instance type
        }
        instance_type  = each.value
	key_name       = each.key
    tags =  {
	   Name  = each.value
	}
}

vars.tf

variable "tag_ec2" {
  type = list(string)
  default = ["ec21a","ec21b"]
}
                                           
variable "ami" {       # Creating a Variable for ami
  type = string
}

terraform.tfvars

ami = "ami-0742a572c2ce45ebf"
instance_type = "t2.micro"
  • Now the code is ready for execution. Initialize Terraform, run the plan, and then use apply to deploy the code as described above.
terraform init 
terraform plan
terraform apply

Conclusion

Terraform is a great open-source tool that uses simple, readable configuration files, and it is one of the best infrastructure-as-code tools to start with. You should now have a good idea of how to launch multiple EC2 instances on AWS using Terraform's count and for_each.

Hope this tutorial helps you in understanding Terraform and running multiple instances in the cloud. Please share it with your friends.