Learn Terraform: The Ultimate terraform tutorial [PART-2]

In the previous guide, Learn Terraform: The Ultimate terraform tutorial [PART-1], you got a jump start into the Terraform world; why not gain the more advanced knowledge of Terraform that you need to become a Terraform pro?

In this Learn Terraform: The Ultimate terraform tutorial [PART-2] guide, you will learn more advanced Terraform concepts such as the terraform lifecycle, terraform functions, terraform modules, terraform provisioners, the terraform init, terraform plan, and terraform apply commands, and many more.

Without further delay, let’s get into it.


What are Terraform modules?

Terraform modules contain the terraform configuration files that may manage a single resource or a group of resources. For example, if you are managing a single resource in a single terraform configuration file, that is a Terraform module, and if you manage multiple resources defined in different files that are later clubbed together in a single configuration, that is also known as a Terraform module or a root module.

A Terraform root module can have multiple individual child modules, data blocks, resource blocks, and so on. To call a child module, you will need to explicitly define the location of the child module using the source argument as shown below.

  • In the below code the module EFS is located in the modules/EFS subdirectory of the current directory, so you define the local path as ./modules/EFS.
module "efs" {                            # Module and Label is efs
  source               = "./modules/EFS"  # Define the Path of Child Module                             
  subnets              = var.subnet_ids
  efs_file_system_name = var.efs_file_system_name
  security_groups      = [module.SG.efs_sg_id]
  role_arn             = var.role_arn
}
  • In some cases the modules are stored in the Terraform Registry, GitHub, Bitbucket, a Mercurial repo, an S3 bucket, etc., and to use these repositories as your source, you need to declare the source as shown below.
module "mymodule1" {                              # Local Path located  Module
  source = "./consul"
}

module "mymodule2" {                              # Terraform Registry located Module
  source = ".hasicorp/consul/aws"
  version = "0.1.0"
}

module "mymodule3" {                              # GIT located  Module
  source = "github.com/automateinfra/"
}

module "mymodule4" {                              # Mercurial located  Module
  source = "hg::https://automateinfra.com/vpc.hg"
}

module "mymodule5" {                               # S3 Bucket located  Module
  source = "s3::https://s3-eu-west-1.amazonaws.com/vpc.zip"
}
The diagram displays the root modules (module1 and module2) containing child modules such as ec2, rds, s3, etc.

Terraform provisioner

Did you know that Terraform allows you to perform actions on your local machine or a remote machine, such as running a command on the local machine, copying files from local to remote machines or vice versa, passing data into virtual machines, etc.? All of this can be done using Terraform provisioners.

Terraform provisioners allow you to pass data into a resource that cannot be passed when creating the resource. Multiple terraform provisioners can be specified within a resource block and are executed in the order they’re defined in the configuration file.

Terraform provisioners interact with remote servers over SSH or WinRM. Most cloud computing platforms provide mechanisms to pass data to instances at the time of their creation such that the data is immediately available on system boot. Still, with Terraform provisioners you can pass data even after the resource has been created.

Terraform provisioners allow you to declare conditions such as when = destroy and on_failure = continue, and if you wish to run terraform provisioners that aren’t directly associated with a specific resource, use null_resource.
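As a minimal sketch, here is a null_resource declaring both of these conditions; the echo commands are hypothetical placeholders.

resource "null_resource" "example" {
  provisioner "local-exec" {
    command    = "echo 'resource created'"
    on_failure = continue        # Continue the apply even if this command fails
  }

  provisioner "local-exec" {
    when    = destroy            # Runs only when this resource is destroyed
    command = "echo 'resource destroyed'"
  }
}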

Let’s look at the example below to declare multiple terraform provisioners.

  • Below code creates two resources: resource1 creates an AWS EC2 instance, and resource2 works with Terraform provisioners and performs actions on the AWS EC2 instance, such as copying a file from the local machine to the remote machine and then installing Apache on the instance.

resource "aws_instance" "resource1" {
  instance_type = "t2.micro"
  ami           = "ami-9876"
  timeouts {                     # Customize your operations longevity
   create = "60m"
   delete = "2h"
   }
}

resource "aws_instance" "resource2" {

  provisioner "local-exec" {
    command = "echo 'Automateinfra.com' >text.txt"
  }
  provisioner "file" {
    source      = "text.txt"
    destination = "/tmp/text.txt"
  }
  provisioner "remote-exec" {
    inline = [
      "apt install apache2 -f /tmp/text.txt",
    ]
  }
}


Terraform Lifecycle

The Terraform lifecycle defines how resources should be treated, such as ignoring changes to tags or preventing the destruction of the infrastructure.

There are mainly three arguments that you can declare within the Terraform lifecycle:

  1. create_before_destroy: By default Terraform destroys the existing object and then creates a new replacement object, but with the create_before_destroy argument within the terraform lifecycle the new replacement object is created first, and then the legacy or prior object is destroyed.
  2. prevent_destroy: Terraform skips the destruction of the existing object if you declare prevent_destroy within the terraform lifecycle.
  3. ignore_changes: When you execute Terraform commands, if there are any differences or changes required in the infrastructure, terraform by default informs you; however, if you need to ignore certain changes, consider using ignore_changes inside the terraform lifecycle.
  • In the below code aws_instance will ignore any tag changes for the instance, and for azurerm_resource_group the new resource group is created first and the old one destroyed only once the replacement is ready.
resource "aws_instance" "automate" {
  lifecycle {
    ignore_changes = [
      tags,
    ]
  }
}

resource "azurerm_resource_group" "automate" {
  lifecycle {
    create_before_destroy = true
  }
}

Terraform jsonencode example with Terraform json

If you need to encode JSON within your terraform code, consider using the terraform jsonencode function. This is a quick section about terraform jsonencode, so let’s look at a basic Terraform jsonencode example with Terraform JSON.

  • The below code creates an IAM role policy in which the policy statement is defined in JSON format.
resource "aws_iam_role_policy" "example" {
  name   = "example"
  role   = aws_iam_role.example.name
  policy = jsonencode({
    "Statement" = [{
      # This policy allows software running on the EC2 instance to access the S3 API
      "Action" = "s3:*",
      "Effect" = "Allow",
    }],
  })
}

Terraform locals

Terraform locals are values that are declared once and can be referred to multiple times in resource or module blocks without repeating them.

Terraform locals help you decrease the number of code lines and reduce repetitive code.

locals {                                         # Declaring the set of related locals in a single block
  instance = "t2.micro"
  name     = "myinstance"
}

locals {                                         # Using the local values
  common_tags = {
    instance_type = local.instance
    instance_name = local.name
  }
}

resource "aws_instance" "instance1" {            # Using the newly created Local values
  tags = local.common_tags
}

resource "aws_instance" "instance2" {             # Using the newly created Local values
  tags = local.common_tags
}

Terraform conditional expression

There are multiple times when you will encounter conditional expressions in Terraform. Let’s look at some important terraform conditional expression examples below, which will always help you when using Terraform. Let’s get into it.

  • Below are examples of how to retrieve outputs with different conditions.
aws_instance.myinstance.id      # This will provide you a result with the EC2 instance details.
aws_instance.myinstance[0].id   # This will provide you a result with the first EC2 instance details.
aws_instance.myinstance[1].id   # This will provide you a result with the second EC2 instance details.
aws_instance.myinstance.*.id    # This will provide you a result with all EC2 instance details.
  • Now, let us see a few complex examples where different conditions are applied to retrieve outputs.
[for value in aws_instance.myinstance : value.id]  # Returns the ids of all instances.
var.a != "auto" ? var.a : "default-a"              # If var.a is not "auto" then use var.a, else use "default-a"
[for a in var.list : a.instance[0].name]           # Equivalent to var.list[*].instance[0].name
[for a in var.list : upper(a)]                     # Iterates over each item in var.list and returns it in upper case
{for a in var.list : a => upper(a)}     # Maps each value to its upper case version, e.g. {"a"="A","c"="C"}

Terraform dynamic block conditional

A Terraform dynamic block is used when a resource or module block cannot accept a static value of an argument and instead depends on separate objects that are related to, or embedded within, other blocks or outputs.

For example application = "${aws_elastic_beanstalk_application.tftest.name}" .

Also, while creating a resource, when you need to repeat a nested block such as setting (with name and value) multiple times, you can use a dynamic block. Below is a basic example of a dynamic setting block.

resource "aws_elastic_beanstalk_environment" "tfenvtest" {
  name                = "tf-test-name"
  application         = "${aws_elastic_beanstalk_application.tftest.name}"
  solution_stack_name = "64bit Amazon Linux 2018.03 v2.11.4 running Go 1.12.6"

  dynamic "setting" {
    for_each = var.settings
    content {
      namespace = setting.value["namespace"]
      name = setting.value["name"]
      value = setting.value["value"]
    }
  }
}
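For completeness, here is a hedged sketch of how the var.settings referenced above could be declared; the namespace, name, and value entries are hypothetical example values.

variable "settings" {
  type = list(object({
    namespace = string
    name      = string
    value     = string
  }))
  default = [
    {
      namespace = "aws:autoscaling:asg"   # Hypothetical example setting
      name      = "MinSize"
      value     = "1"
    }
  ]
}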

Terraform functions

Terraform includes multiple terraform functions, also known as built-in functions, that you can call from within expressions to transform and combine values. The syntax of a function call is the function name followed by comma-separated arguments in parentheses: min, join, element, jsonencode, etc.

min(2,3,4)                                 # The output of this function is 2

join(", ", ["hello", "Automate", "infra"]) # The output of this function is hello, Automate, infra

element(["a", "b", "c"], length(["a", "b", "c"])-1)   # The output of this function is c

lookup({a="ay", b="bee"}, "c", "unknown?")          # The output of this function is unknown?

jsonencode({"hello"="Automate"})          # The output of this function is {"hello":"Automate"}

jsondecode("{\"hello\": \"Automate\"}")   # The output of this function is { "hello" = "Automate" }

Terraform can function

The Terraform can function evaluates the given expression or condition and accordingly returns a boolean value (true if the expression is valid, false if the result has any errors). This special function can catch errors produced when evaluating its argument.

locals {
  instance = {
    myinstance1 = "t2.micro"
    myinstance2 = "t2.medium"
  }
}

can(local.instance.myinstance1) #  This is true
can(local.instance.myinstance3) #  This is false

variable "time" {
  validation {
     condition  = can(formatdate("","var.time"))   # Checking the 2nd argument
  }
}

Terraform try function

Terraform try function evaluates all of its argument expressions in turn and returns the result of the first one that does not produce any errors.

As you can check below, the terraform try function checks the expression and returns the first option that works: t2.micro in the first case, and second-option in the second case.

locals {
  instance = {
    myinstance1 = "t2.micro"
    myinstance2 = "t2.medium"
  }
}

try(local.instance.myinstance1, "second-option") # This is available in local so output is t2.micro
try(local.instance.myinstance3, "second-option") # This is not available in local so output is second-option

Terraform templatefile function

The Terraform templatefile function reads the file at a given path and renders the content present in the file as a template using the supplied template variables.

Syntax: templatefile(path, vars)
  • Let’s understand the example of the Terraform templatefile function with lists. Given below is the backend.tpl template file. When you execute the templatefile() function, it renders backend.tpl and substitutes each address and the port into the backend lines.
# backend.tpl

%{ for addr in ipaddr ~}     # Iteration via directive
backend ${addr}:${port}      # Prints this line for each address
%{ endfor ~}                 # End of the for directive
templatefile("${path.module}/backend.tpl", { port = 8080, ipaddr = ["1.1.1.1", "2.2.2.2"] })

backend 1.1.1.1:8080
backend 2.2.2.2:8080
  • Let’s check out another example of the Terraform templatefile function, but this time with maps. When you execute the templatefile() function, it renders backend.tmpl and prints a set line for each key/value pair of the config map passed in the templatefile call (a=automate and i=infra).
# backend.tmpl

%{ for key,value in config }
set ${key} = ${value}
%{ endfor ~}
templatefile("${path.module}/backend.tmpl,
     { 
        config = {
              "a" = "automate"
              "i" = "infra"
           } 
      })

set a = automate
set i = infra

Terraform data source

A Terraform data source allows you to fetch data defined outside of Terraform, defined by another separate Terraform configuration, or modified by functions. After fetching the data, the Terraform data source can use it as input and apply it to other resources.

Let’s learn with a basic example. In the below code, you will notice that the data block fetches the details of an existing instance with the provided instance_id.

data "aws_instance" "my-machine1" {          # Fetching the instance
  instance_id = "i-0a0269cf952a02832"
  }
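Once fetched, the data source’s attributes can be referenced like any other value. Below is a minimal sketch, assuming the data block above, that exposes the fetched instance’s public IP as an output.

output "fetched_public_ip" {
  value = data.aws_instance.my-machine1.public_ip   # Attribute exported by the aws_instance data source
}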

Terraform State file

The main function of the Terraform state file is to store the terraform state, which contains the bindings between objects in remote systems and the resources defined in your Terraform configuration files. The Terraform state file is by default stored locally on the machine where you run the Terraform commands, with the name terraform.tfstate.

The Terraform state is stored in JSON format. When you run the terraform show or terraform output command, it fetches the output in JSON format from the Terraform state file. Also, you can import existing infrastructure that you created by other means, such as manually or using scripts, into the Terraform state file.

When you are an individual, it is ok to keep the Terraform state file on your local machine, but when you work in a team, consider storing it in a shared location such as AWS S3, etc. While Terraform is writing changes to your resources, the Terraform state file gets locked, which prevents someone else from using it simultaneously and avoids it being corrupted.

You can store your remote state file in S3, Terraform Cloud, HashiCorp Consul, Google Cloud Storage, Azure Blob Storage, etc.


Terraform backend [terraform backend s3]

A Terraform backend is the location where the terraform state file resides. The Terraform state file contains all the details of the resources, and tracks those which were provisioned or will be provisioned with Terraform, such as with the terraform plan or terraform apply command.

There are two types of backends: one is local, which resides where you run terraform from (it could be a Linux machine, a Windows machine, or wherever you run it from), and the other is a remote backend, which could be a SaaS-based URL or a storage location such as an AWS S3 bucket.

Let’s take a look at how you can configure a local backend or a remote backend with terraform backend s3.

# Local Backend
# Whenever the state file is created or updated, it is stored on the local machine.

terraform {
  backend "local" {
    path = "relative/path/to/terraform.tfstate"
  }
}

# Configuring Terraform to use the remote terraform backend s3.
# Whenever the state file is created or updated, it is stored in the AWS S3 bucket.

terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-2"
  }
}
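Because the state locking mentioned earlier matters most with a shared remote backend, here is a hedged sketch of the same s3 backend with locking enabled through a DynamoDB table; the table name is hypothetical, and the table (with a LockID partition key) must already exist.

terraform {
  backend "s3" {
    bucket         = "mybucket"
    key            = "path/to/my/key"
    region         = "us-east-2"
    dynamodb_table = "terraform-state-lock"   # Enables state locking
  }
}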

Terraform Command Line or Terraform CLI

The Terraform command-line interface, or Terraform CLI, is used via the terraform command, which accepts a variety of subcommands such as terraform init or terraform plan. Below is the list of the most commonly used subcommands.

  1. terraform init: It initializes the providers, module version requirements, and backend configuration.
    • terraform init -input=true → You can give inputs, else terraform will fail.
    • terraform init -lock=false → Disable locking of the state file (not recommended).
    • terraform init -upgrade → Upgrade modules and plugins.
  2. terraform get: It is used only to initialize or download the modules.
  3. terraform plan: Determines the state of all resources the configuration declares and compares it with the real infrastructure. It uses the terraform state file data for the comparison and the provider's API to perform this step.
    • terraform plan -compact-warnings → Only a summary of warnings.
    • terraform plan -out=path → Saves the execution plan.
    • terraform plan -var-file=abc.tfvars → To use a specific tfvars file.
  4. terraform apply: To apply the changes in a specific provider's account.
    • terraform apply -backup=path → To back up the state file.
    • terraform apply -lock=true → Locks the state file.
    • terraform apply -state=path → Path to the state file.
    • terraform apply -var-file=abc.tfvars
    • terraform apply -auto-approve
  5. terraform destroy: It will destroy the terraform-managed infrastructure.
    • terraform destroy -auto-approve
  6. terraform console: Provides a console for evaluating expressions.
    • terraform console -state=path → Path to the local state file.
  7. terraform fmt: Formats the configuration files into the proper format.
    • terraform fmt -check → Checks the input format.
    • terraform fmt -recursive → It formats subdirectories as well.
    • terraform fmt -diff → Displays the difference.
  8. terraform validate: It validates the configuration files.
    • terraform validate -json → Output is in JSON format.
  9. terraform graph: It generates a visual representation of the execution plan.
    • terraform graph -draw-cycles
    • terraform graph -type=plan
  10. terraform output: This is to extract the value of an output variable from the state file.
    • terraform output -json
    • terraform output -state=path
  11. terraform state list: It lists all the resources present in the state file.
    • terraform state list -id=id → (Id of the resource)
    • terraform state list -state=path → (Path of the state file)
  12. terraform state show: It shows the attributes of a specific resource.
    • terraform state show -state=path
  13. terraform import: It will import an existing resource into terraform.
  14. terraform refresh: It will reconcile the terraform state file: if the resources you created using terraform are modified manually or by any other means, refresh will sync them into the state file.
  15. terraform state rm: Removes a resource from the state file.
  16. terraform state mv: Moves a resource within the state file.
  17. terraform state pull: Manually download and output the state from the remote state.
  18. terraform state push: Manually upload a local state file to the remote state.

Quick Glance of Terraform CLI Commands

Initialize     | Provision         | Modify Config      | Check infra          | Manipulate State
terraform init | terraform plan    | terraform fmt      | terraform graph      | terraform state list
terraform get  | terraform apply   | terraform validate | terraform output     | terraform state show
               | terraform destroy | terraform console  | terraform state show | terraform state mv/rm
               |                   |                    | terraform state list | terraform state pull/push

Terraform CLI commands

Terraform ec2 instance example (terraform aws ec2)

Let’s wrap up this ultimate guide with a basic Terraform ec2 instance example or terraform aws ec2.

  • Assuming you already have Terraform installed on your machine.
  • First create a folder of your choice in any directory and a file named main.tf inside it and copy/paste the below content.
# This is the main.tf terraform file.

resource "aws_instance" "my-machine" {
  ami = "ami-0a91cd140a1fc148a"
  for_each = {
    key1 = "t2.micro"
    key2 = "t2.medium"
  }
  instance_type = each.value
  key_name      = each.key
  tags = {
    Name = each.value
  }
}

resource "aws_iam_user" "accounts" {
  for_each = toset( ["Account11", "Account12", "Account13", "Account14"] )
  name     = each.key
}
  • Create another file vars.tf inside the same folder and copy/paste the below content.

#  This is var.tf terraform file.

variable "tag_ec2" {
  type = list(string)
  default = ["ec21a","ec21b"]
}
  • Finally, create another file output.tf again in the same folder and copy/paste the below content.
# This is the output.tf terraform file

# for_each resources are maps, so wrap them in values() to collect attributes.
output "aws_instance" {
   value = values(aws_instance.my-machine)[*].id
}
output "aws_iam_user" {
   value = values(aws_iam_user.accounts)[*].name
}

Make sure your machine has an IAM role attached or AWS credentials configured properly before you run the below Terraform commands.

terraform -version  # It gives Terraform Version information
Finding the Terraform version
  • Now initialize terraform by running the terraform init command in the same working directory where you have all the above terraform configuration files.
terraform init   # To initialize the terraform 
Initializing terraform using the terraform init command
  • Next, run the terraform plan command. This command provides a blueprint of all the resources that will be deployed, before actually deploying them.
terraform plan   
Running the terraform plan command
  • Next, validate all your configuration files by running the terraform validate command.
terraform validate   # To validate all terraform configuration files.
Running the terraform validate command
  • Now run the terraform show command, which provides human-readable output of the state or plan file. Note that the plan file gets generated only after the terraform plan command.
terraform show   # To provide human-readable output from a state or plan file.
Running the terraform show command
  • To list all the resources within the terraform state file, run the terraform state list command.
terraform state list 
Running the terraform state list command
  • To actually provision the resources, run the terraform apply command.
terraform apply  # To actually apply the resources
Running the terraform apply command
  • To get a graphical view of all the resources in the configuration files, run the terraform graph command.
terraform graph  
Running the terraform graph command
  • To destroy the resources that were provisioned using Terraform, run the terraform destroy command.
terraform destroy   # Destroys all your resources or the one which you specified 
Running the terraform destroy command


Conclusion

Now that you have learned everything you should know about Terraform, you are surely going to be the Terraform leader in your upcoming projects, teams, or organizations.

So with that, what are you planning to automate using Terraform in your next adventure?

Learn Terraform: The Ultimate terraform tutorial [PART-1]

If you are looking to learn Terraform, then you are in the right place; this Learn Terraform: The Ultimate terraform tutorial guide will simply help you gain the complete knowledge that you need, from the basics to becoming a Terraform pro.

Terraform is an infrastructure as code tool that lets you build and change infrastructure effectively and in a simpler way. With Terraform, you can work with various cloud providers such as Amazon AWS, Oracle, Microsoft Azure, Google Cloud, and many more.

Let’s get started with Learn Terraform: The Ultimate terraform tutorial without further delay.

Prerequisites

What is terraform?

Let’s kick off this tutorial with: what is Terraform? Terraform is a tool for building, versioning, and updating infrastructure. It is written in the Go language, and the syntax language of Terraform configuration files is HCL, i.e., HashiCorp Configuration Language, which is way easier than YAML or JSON.

Terraform has been in use for quite a while now and has several key features that make this tool more powerful, such as:

  • Infrastructure as code: Terraform configuration files are written in an infrastructure as code language, a high-level language that is easy for humans to understand.
  • Execution plan: Terraform provides you with the in-depth details of the execution plan, such as what terraform will provision before deploying the actual code and which resources it will create.
  • Resource graph: The graph is an easier and quicker way to identify, manage, and understand the resources.

Terraform files and Terraform directory structure

Now that you have a basic idea of Terraform and some of its key features, let’s dive into Terraform files and the Terraform directory structure that will help you write the Terraform configuration files later in this tutorial.

Terraform code, that is, the Terraform configuration files, is written in a tree-like structure to ease the overall understanding of the code, with the .tf, .tf.json, or .tfvars format. These configuration files are placed inside Terraform modules.

Terraform modules are at the top level of the hierarchy where configuration files reside. Terraform modules can further call other child Terraform modules from local directories, from anywhere on disk, or from the Terraform Registry.

Terraform modules folder structure

Terraform mainly contains five files: main.tf, vars.tf, providers.tf, output.tf, and terraform.tfvars, plus the .terraform directory.

  1. main.tf – The Terraform main.tf file contains the main code where you define which resources you need to build, update, or manage.
  2. vars.tf – The Terraform vars.tf file contains the input variables, which are customizable and are consumed inside the main.tf configuration file.
  3. output.tf – The Terraform output.tf file is where you declare the output parameters that you wish to fetch after Terraform has been executed, that is, after the terraform apply command.
  4. .terraform – This directory contains the cached provider and module plugins, and also contains the last known backend configuration. It is managed by terraform and created after you run the terraform init command.
  5. terraform.tfvars – This file contains the values that need to be passed for the variables that are referred to in main.tf and actually declared in vars.tf (see the sketch after this list).
  6. providers.tf – The providers.tf is the most important file, where you define your terraform providers, such as the terraform aws provider or terraform azure provider, to authenticate with the cloud provider.
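To illustrate how vars.tf and terraform.tfvars relate, here is a minimal sketch using a hypothetical region variable.

# vars.tf - declare the variable
variable "region" {
  type    = string
  default = "us-east-1"
}

# terraform.tfvars - supply the value (loaded automatically by terraform)
region = "us-east-2"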


How to declare Terraform variables

In the previous section, you learned Terraform files and the Terraform directory structure. Moving further, it is important to learn how to declare Terraform variables in the Terraform configuration file (vars.tf).

Declaring the variables allows you to share modules across different Terraform configurations, making your module reusable. There are different types of variables used in Terraform, such as boolean, list, string, maps, etc. Let’s see how different types of terraform variables are declared.

  • Each input variable in the module must be declared using a variable block as shown below.
  • The label after the variable keyword is the name of the variable, which should be unique within the same module.
  • The following arguments can be used within the variable block:
    • default – A default value allows you to declare the value in this block only and makes the variable optional.
    • type – This argument declares the value type.
    • description – You can provide a description of the input variable.
    • validation – To define validation rules, if any.
    • sensitive – If you specify the value as sensitive, then terraform will not print the value while executing.
    • nullable – Specify null if you don't need any value for the variable.
variable "variable1" {                        
  type        = bool
  default     = false
  description = "boolean type variable"
}

variable  "variable2" {                       
   type    = map
   default = {
      us-east-1 = "image-1"
      us-east-2 = "image2"
    }

   description = "map type  variable"
}

variable "variable3" {                   
  type    = list(string)
  default = []
  description = "list type variable"
}

variable "variable4" {
  type    = string
  default = "hello"
  description = "String type variable"
}                        

variable "variable5" {                        
 type =  list(object({
  instancetype        = string
  minsize             = number
  maxsize             = number
  private_subnets     = list(string)
  elb_private_subnets = list(string)
            }))

 description = "List(Object) type variable"
}


variable "variable6" {                      
 type = map(object({
  instancetype        = string
  minsize             = number
  maxsize             = number
  private_subnets     = list(string)
  elb_private_subnets = list(string)
  }))
 description = "Map(object) type variable"
}


variable "variable7" {
  validation {
 # Condition 1 - Checks Length upto 4 char and Later
    condition = "length(var.image_id) > 4 && substring(var.image_id,0,4) == "ami-"
    condition = can(regex("^ami-",var.image_id)    
# Condition 2 - It checks Regular Expression and if any error it prints in terraform error_message =" Wrong Value" 
  }

  type = string
  description = "string type variable containing conditions"
}

Terraform variables follow the below order of precedence, from lowest to highest (later sources take precedence over earlier ones).

  1. Specifying the environment variables like export TF_VAR_id='["id1","id2"]'
  2. Specifying the variables in the terraform.tfvars file
  3. Specifying the variables in the terraform.tfvars.json file
  4. Specifying the variables in the *.auto.tfvars or *.auto.tfvars.json files
  5. Specifying the variables on the command line with the -var and -var-file options (see the example after this list)
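For example, the highest-precedence option from item 5 looks like this; the variable name and tfvars file name are hypothetical:

terraform apply -var="region=us-east-2" -var-file="testing.tfvars"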

How to declare Terraform Output Variables

In the previous section, you learned how to use terraform variables in the Terraform configuration file. As learned earlier, modules contain one more important file: output.tf, which contains the terraform output variables.

  • In the below output.tf file you can see there are three different terraform output variables named:
  • output1, which will store and display the arn of the instance after running the terraform apply command.
  • output2, which will store and display the public IP address of the instance after running the terraform apply command.
  • output3, which will store but, because of the sensitive argument, not display the private IP address of the instance after running the terraform apply command.
# Output variable which will store the arn of instance and display after terraform apply command.

output "output1" {
  value = aws_instance.my-machine.arn
}

# Output variable which will store instance public IP and display after terraform apply command
 
output "output2" {
  value       = aws_instance.my-machine.public_ip
  description = "The public IP address of the instance."
}

output "output3" {
  value = aws_instance.server.private_ip
# Using sensitive to prevent Terraform from showing the ouput values in terrafom plan and apply command.  
  senstive = true                             
}

How to declare Terraform resource block

You are doing great in learning the terraform configuration file, but do you know that your modules contain one more important file, the main.tf file, which allows you to manage, create, and update resources with Terraform, such as creating an AWS VPC, etc.? To manage a resource, you need to define it in a terraform resource block.

# Below Code is a resource block in Terraform

resource "aws _vpc" "main" {    # <BLOCK TYPE> "<BLOCK LABEL>" "<BLOCK LABEL>" {
cidr_block = var.block          # <IDENTIFIER> =  <EXPRESSION>  #Argument (assigns value to name)
}                             

Declaring Terraform resource block in HCL format.

Now that you have an idea about the syntax of terraform resource block let’s check out an example where you will see resource creation using Terraform configuration file in HCL format.

  • Below code creates two resources: resource1 creates an AWS EC2 instance, and resource2 works with Terraform provisioners to copy a file to the instance and install apache on it. Timeouts customize how long certain operations are allowed to take.

There are some special arguments that can be used with resources such as depends_on, count, lifecycle, for_each and provider, and lastly provisioners.

resource "aws_instance" "resource1" {
  instance_type = "t2.micro"
  ami           = "ami-9876"
  timeouts {                          # Customize your operations longevity
   create = "60m"
   delete = "2h"
   }
}

resource "aws_instance" "resource2" {
  provisioner "local-exec" {
    command = "echo 'Automateinfra.com' >text.txt"
  }
  provisioner "file" {
    source      = "text.txt"
    destination = "/tmp/text.txt"
  }
  provisioner "remote-exec" {
    inline = [
      "apt install apache2 -f /tmp/text.txt",
    ]
  }
}

Declaring Terraform resource block in terraform JSON format.

Terraform language can also be expressed in terraform JSON syntax, which is harder for humans to read and edit but easier to generate and parse programmatically, as shown below.

  • The below example is the same one which you previously created using the HCL configuration, but this time it uses terraform JSON syntax. Here also the code creates two resources: resource1 → an AWS EC2 instance, and the other resource works with Terraform provisioners to install apache on the EC2 instance.
{
  "resource": {
    "aws_instance": {
      "resource1": {
        "instance_type": "t2.micro",
        "ami": "ami-9876"
      }
    }
  }
}


{
  "resource": {
    "aws_instance": {
      "resource2": {
        "provisioner": [
          {
            "local-exec": {
              "command": "echo 'Automateinfra.com' >text.txt"
            }
          },
          {
            "file": {
              "source": "example.txt",
              "destination": "/tmp/text.txt"
            }
          },
          {
            "remote-exec": {
              "inline": ["apt install apache2 -f tmp/text.txt"]
            }
          }
        ]
      }
    }
  }
}

Declaring Terraform depends_on

Now that you have learned how to declare a Terraform resource block in HCL format, recall that within the resource block, as discussed earlier, you can declare special arguments such as depends_on. Let’s learn how to use the terraform depends_on meta argument.

Use the depends_on meta-argument to handle hidden resource or module dependencies that Terraform can’t automatically handle.

  • In the below example, while creating the resource aws_rds_cluster you need the information from aws_db_subnet_group, so aws_rds_cluster is dependent on it, and in order to specify the dependency you need to declare the depends_on meta argument within aws_rds_cluster.
resource "aws_db_subnet_group" "dbsubg" {
    name = "${var.dbsubg}" 
    subnet_ids = "${var.subnet_ids}"
    tags = "${var.tag-dbsubnetgroup}"
}


# Component 4 - DB Cluster and DB Instance

resource "aws_rds_cluster" "main" {
  depends_on                   = [aws_db_subnet_group.dbsubg]    # This RDS cluster is dependent on Subnet Group


Using Terraform count meta argument

Another special argument is terraform count. Let’s learn how to use the terraform count meta argument.

By default, terraform creates a single resource defined in a terraform resource block. But at times, you want to manage multiple objects of the same kind, such as creating four AWS EC2 instances of the same type in the AWS cloud, without writing a separate block for each instance. Let’s learn how to use the Terraform count meta argument to do this.

  • In the below code terraform will create 4 instances of t2.micro type with the (ami-0742a572c2ce45ebf) ami, as shown below.
resource "aws_instance" "my-machine" {
  count = 4 
  
  ami = "ami-0742a572c2ce45ebf"
  instance_type = "t2.micro"
  tags = {
    Name = "my-machine-${count.index}"
         }
}
Using Terraform count to create four ec2 instances
  • Similarly, in the below code terraform will create 4 AWS IAM users named user1, user2, user3 and user4.
resource "aws_iam_user" "users" {
  count = length(var.user_name)
  name = var.user_name[count.index]
}

variable "user_name" {
  type = list(string)
  default = ["user1","user2","user3","user4"]
}
Using Terraform count to create four IAM users

Terraform for_each module

Earlier in the previous section, you learned that terraform count is used to create multiple resources with the same characteristics. If you need to create multiple resources in one go but with certain parameters changed, then the terraform for_each meta argument is for you.

The for_each meta-argument accepts a map or a set of strings and creates an instance for each item in that map or set. Let’s look at the example below to better understand terraform for_each.

Example-1 Terraform for_each module

  • In the below example, you will notice for_each contains two keys (key1 and key2) and two values (t2.micro and t2.medium) inside the for each loop. When the code is executed then for each loop will create:
    • One instance with key as “key1” and instance type as “t2.micro”
    • Another instance with key as “key2” and instance type as “t2.medium”.
  • Also the below code will create four different IAM users named Account1, Account2, Account3 and Account4.
resource "aws_instance" "my-machine" {
  ami = "ami-0a91cd140a1fc148a"
  for_each  = {
      key1 = "t2.micro"
      key2 = "t2.medium"
   }
  instance_type    = each.value	
  key_name         = each.key
  tags =  {
   Name = each.value 
	}
}

resource "aws_iam_user" "accounts" {
  for_each = toset( ["Account1", "Account2", "Account3", "Account4"] )
  name     = each.key
}
Terraform for_each module example 1 to launch ec2 instances and IAM users

Example-2 Terraform for_each module

  • In the below example, you will notice for_each is a variable of type map(object) which has all the defined arguments such as (instance_type, key_name, associate_public_ip_address and tags). When the code is executed, each of these arguments gets its specific value.
resource "aws_instance" "web1" {
  ami                         = "ami-0a91cd140a1fc148a"
  for_each                    = var.myinstance
  instance_type               = each.value["instance_type"]
  key_name                    = each.value["key_name"]
  associate_public_ip_address = each.value["associate_public_ip_address"]
  tags                        = each.value["tags"]
}

variable "myinstance" {
  type = map(object({
    instance_type               = string
    key_name                    = string
    associate_public_ip_address = bool
    tags                        = map(string)
  }))
}

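# A hypothetical terraform.tfvars entry supplying the values for var.myinstance: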
myinstance = {
  Instance1 = {
    instance_type               = "t2.micro"
    key_name                    = "key1"
    associate_public_ip_address = true
    tags = {
      Name = "Instance1"
    }
  },
  Instance2 = {
    instance_type               = "t2.medium"
    key_name                    = "key2"
    associate_public_ip_address = true
    tags = {
      Name = "Instance2"
    }
  }
}
Terraform for_each module example 2 to launch multiple ec2 instances

Example-3 Terraform for_each module

  • In the below example, you will similarly notice that instance_type uses toset, which contains two values (t2.micro and t2.medium). When the code is executed, the instance type takes each value from inside toset.
locals {
  instance_type = toset([
    "t2.micro",
    "t2.medium",
  ])
}

resource "aws_instance" "server" {
  for_each      = local.instance_type

  ami           = "ami-0a91cd140a1fc148a"
  instance_type = each.key
  
  tags = {
    Name = "Ubuntu-${each.key}"
  }
}
Terraform for_each module example 3 to launch multiple ec2 instances

Terraform provider

Terraform depends on plugins to connect or interact with cloud providers or API services, and to perform this, you need a Terraform provider. There are several terraform providers stored in the Terraform Registry, such as the terraform provider aws (aws terraform provider) or terraform azure.

Terraform configurations must declare which providers they require so that Terraform can install and use them. Some providers require configuration (like endpoint URLs or cloud regions) before they can be used. Providers can also offer local utilities, like generating random strings or passwords. You can create multiple or single configurations for a single provider, and you can have multiple providers in your code.

Providers are stored inside the Terraform Registry; some are in-house providers (companies that create their own providers). Providers are written in the Go language.

Let’s learn how to define a single provider and then define the provider’s configurations inside terraform.

# Defining the Provider requirement 

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
    postgresql = {
      source = "cyrilgdn/postgresql"
    }
  }
  required_version = ">= 0.13"   # New way to define version 
}


# Defining the Provider Configurations and names are Local here i.e aws,postgres,random

provider "aws" {
  assume_role {
  role_arn = var.role_arn
  }
  region = var.region
}

provider "random" {}

provider "postgresql" {
  host                 = aws_rds_cluster.main.endpoint
  username             = username
  password             = password
}

Defining multiple aws providers terraform

In the previous section, you learned how to use the aws provider in terraform to connect to AWS resources, which is great, but with that you can only work in one particular aws region. However, consider using multiple aws provider configurations in Terraform if you need to work with multiple regions.

  • To create multiple configurations for a given provider, you should include multiple provider blocks with the same provider name, but to use the additional non-default configuration, use the alias meta-argument as shown below.
  • In the below code, there is one aws terraform provider named aws that works with the us-east-1 region by default, and if you need to work with another region, consider declaring the same provider again but with a different region and the alias argument.
  • For creating a resource in the us-west-1 region, declare provider = aws.<alias-name> in the resource block as shown below.
# Defining Default provider block with region us-east-1

provider "aws" {      
  region = us-east-1
}

# Name of the provider is same that is aws with region us-west-1 thats why used ALIAS

provider "aws" {    
  alias = "west"
  region = us-west-1
}

# No need to define default Provider here if using Default Provider 

resource "aws_instance" "resource-us-east-1" {}  

# Define Alias Provider here to use west region  

resource "aws_instance" "resource-us-west-1" {    
  provider = aws.west
}

A quick note on Terraform versions: in Terraform v0.12 there was no way to give a source, but from Terraform v0.13 onwards you have the option to add a source address.

# This is how you define a provider in Terraform v0.13 and onwards
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 1.0"
    }
  }
}

# This is how you define a provider in Terraform v0.12
terraform {
  required_providers {
    aws = "~> 1.0"
  }
}


Conclusion

In this ultimate guide, you learned what terraform is, what a terraform provider is, and understood how to declare the terraform provider aws and use it to interact with cloud services.

Now that you have gained a handful of knowledge on Terraform, continue with the PART-2 guide and become a pro of Terraform.

Learn Terraform: The Ultimate terraform tutorial [PART-2]

Ultimate Ansible interview questions and answers

If you are preparing for a DevOps interview or for an Ansible administrator, consider this guide as your friend to practice ansible interview questions and answers and help you pass the exam or interview.

Without further delay, let’s get into this Ultimate Ansible interview questions and answers guide, where you will have three Papers to practice containing 20 questions of Ansible.

PAPER-1

Q1. What is Ansible ?

Answer: Ansible is an open-source configuration management tool written in the Python language. Ansible is used to deploy or configure software, tools, or files on remote machines quickly using the SSH protocol.

Q2. What are the advantages of Ansible?

Answer: Ansible is simple to manage, agent-less, and has great performance, as it is quick to deploy and doesn’t require much effort to set up, and no doubt it is reliable.

Q3. What are the things which Ansible can do ?

Answer: Deployment of apps such as apache tomcat or AWS EC2 instances, configuration management such as configuring multiple files on different remote nodes, automating tasks, and IT orchestration.

Q4. Is it possible to have Ansible control node on windows ?

Answer: No, you can have the Ansible controller host or node only on a Linux-based operating system; however, you can configure a Windows machine as one of your remote hosts.

Q5. What are requirements when the remote host is windows machine ?

Answer: Ansible needs PowerShell 3.0 and at least .NET 4.0 to be installed on the Windows host, and a WinRM listener should be created and activated before you actually deploy or configure the remote node as a Windows machine.

Q6. What are the different components of Ansible?

Answer: APIs, modules, hosts, playbooks, cloud, networking, and inventories.

Q7.What are Ansible adhoc commands ?

Answer: Ansible adhoc commands are single-line commands that are generally used for testing purposes or when you need to take an action that is not repeatable and rarely used, such as restarting a service on a machine, etc. Below is an example of an ansible adhoc command.

The below command starts the apache service on the remote node.

ansible all -m ansible.builtin.service -a "name=apache2 state=started"

Q8. What is the ansible command to check uptime of all servers ?

Answer: Below is the ansible command to check the uptime of the servers. This command will provide you an output stating how long the remote node has been up.

ansible all -a /usr/bin/uptime 
Ansible ad hoc command to check the server’s uptime, which is 33 days

Q9. How to Install the Apache service using ansible command ?

Answer: To install the apache service using an ansible command, you can use the ansible adhoc command as shown below. In the below command, the -b flag is to become root.

ansible all -m apt -a  "name=apache2 state=latest" -b  

Q10.What are the steps or commands to Install Ansible on Ubuntu Machine ?

Answer: You will need to execute the below commands to install Ansible on an Ubuntu machine.

# Update your system packages using apt update command
sudo apt update 
# Install below prerequisites package to work with PPA repository.
sudo apt install software-properties-common 
# Install Ansible PPA repository (Personal Package repository) 
sudo apt-add-repository --yes --update ppa:ansible/ansible
# Finally Install ansible
sudo apt install ansible

Q11. What are Ansible facts in Ansible?

Answer: Ansible facts allow you to fetch or access data or values, such as the hostname or IP address, from remote hosts, which are then stored.

Below is an example showing how you can print the Ansible facts using an ansible playbook named main.yml.

# main.yml 
---
- name: Ansible demo
  hosts: web
  remote_user: ubuntu
  tasks:
    - name: Print all available facts
      ansible.builtin.debug:
        var: ansible_facts
 ansible-playbook main.yml
The output of the Ansible facts using ansible-playbook

Q12. What are Ansible tasks ?

Answer: Ansible tasks are a group of tasks that an ansible playbook needs to perform, such as copying files, installing packages, editing configurations on a remote node, restarting services on a remote node, etc.

Let’s look at a basic Ansible task. In the below code, the Ansible task makes sure the apache (httpd) service is running on the remote node.

tasks:
  - name: make sure apache is running
    service:
      name: httpd
      state: started

Q13. What are Ansible Roles ?

Answer: Ansible roles are a way to structurally maintain your playbooks such that you can easily understand and work on them. An Ansible role basically contains different folders for simplicity, such that it lets you load the files from the files folder, variables from the variables folder, handlers, tasks, etc.

You can create different Ansible roles and reuse them as many times as you need.
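For instance, below is a minimal sketch of a playbook applying a role, assuming a role named apache exists under the roles/ directory.

# site.yml - applies the hypothetical "apache" role to the web hosts
---
- hosts: web
  roles:
    - apache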

Q14. Command to Create a user on linux machine using Ansible?

Answer: To create a user on a linux machine using Ansible, you can use the ansible adhoc command as shown below.

ansible all -m ansible.builtin.user -a "name=name password=password" -b

Q15. What is Ansible Tower ?

Answer: Ansible Tower is a web-based solution that makes Ansible even easier for IT teams to use. Ansible Tower can be used for up to 10 nodes. It captures all recent activity, like the status of hosts, it integrates notifications about all necessary updates, and it also schedules Ansible jobs very well.

Q16. How to connect with remote machines in Ansible?

Answer: After installing Ansible, configure the Ansible inventory with the list of hosts, grouping them accordingly, and finally connect to them using the SSH protocol. After you configure the Ansible inventory, you can test the connectivity between the Ansible controller and the remote nodes using the ping module to ping all the nodes in your inventory.

ansible all -m ping

You should see output for each host in your inventory, similar to this:

aserver.example.org | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}

Q17. Does Ansible support AWS?

Answer: Yes. There are lots of AWS modules present in Ansible that can be used to manage AWS resources. Refer to the Ansible collections in the Amazon namespace.

Q18. Which Ansible module allows you to copy files from a remote machine to the control machine?

Answer: The Ansible fetch module. The Ansible fetch module is used for fetching files from remote machines and storing them locally in a file tree, organized by hostname.

- name: Store file from remote node directory to host directory 
  ansible.builtin.fetch:
    src: /tmp/remote_node_file
    dest: /tmp/fetched_host_file

Q19. Where you can find Ansible inventory by default ?

Answer: The default location of Ansible inventory is /etc/ansible/hosts.

Q20. How can you check the Ansible version?

Answer: To check the ansible version, run the ansible --version command below.

ansible --version
Checking the ansible version by using the ansible --version command


PAPER-2

Q1. Is Ansible agentless ?

Answer: Yes, Ansible is an open-source tool that is agentless. Agentless here means that when you install Ansible on the host controller and use it to deploy or configure changes on remote nodes, the remote node doesn’t require any agent or software to be installed.

Q2. What is Primary use of Ansible ?

Answer: Ansible can be used in IT infrastructure to manage and deploy software applications on remote machines.

Q3. What are Ansible Hosts or remote nodes ?

Answer: Ansible hosts are machines or nodes on which the Ansible controller host deploys the software. Ansible hosts could be Linux, RedHat, Windows, etc.

Q4. What is CI (Continuous Integration) ?

Answer: CI, also known as continuous integration, is primarily used by developers. Successful continuous integration means that whenever there is a change in code, the developer's code is built, tested, and then pushed to a shared repository.

Q5. What is the main purpose of role in Ansible ?

Answer: The main purpose of an Ansible role is to reuse content by following the proper Ansible folder structure. These folders contain multiple configuration files or content that need to be declared in various places and in various modules; to minimize re-coding work, roles are used.

Q6. What is Control Node?

Answer: The control node is the node on which Ansible is installed. Before you install Ansible on the control node, make sure Python is already installed on the machine.

Q7. Can you have Windows machine as Controller node ?

Answer: No.

Q8. What are the other names of Ansible hosts?

Answer: Ansible hosts can also be called managed nodes. Ansible is not installed on managed nodes.

Q9. What is host file in Ansible ?

Answer: The inventory file is also known as the host file in Ansible, and it is stored by default at /etc/ansible/hosts.

Q10. What is collections in Ansible ?

Answer: Ansible collections are a distribution format that can include playbooks, roles, modules, and plugins.

Q11. What is an Ansible module in Ansible?

Answer: Ansible contains various Ansible modules that each have a specific purpose, such as copying data, adding a user, and many more. You can invoke a single module within a task defined in a playbook, or several different modules in a playbook.

Q12. What is task in Ansible ?

Answer: To perform any action, you need a task. Similarly, in Ansible you need a task to run modules. With an Ansible ad hoc command you can execute a task only once.

Q13. What is Ansible Playbook?

Answer: An Ansible playbook is an ordered list of tasks that you run; playbooks are designed to be human-readable and are developed in a basic text language. For example, in the below ansible playbook there are two tasks: the first is to create a user named adam, and the other task is to create a user named shanky on the remote node.

---
- name: Ansible Create user functionality module demo
  hosts: web # Defining the remote server
  tasks:

    - name: Add the user 'Adam' with a specific uid and a primary group of 'sudo'
      ansible.builtin.user:
        name: adam
        comment: Adam
        uid: 1095
        group: sudo
        createhome: yes        # Defaults to yes
        home: /home/adam   # Defaults to /home/<username>

    - name: Add the user 'Shanky' with a specific uid and a primary group of 'sudo'
      ansible.builtin.user:
        name: shanky
        comment: shanky
        uid: 1089
        group: sudo
        createhome: yes        # Defaults to yes
        home: /home/shanky  # Defaults to /home/<username>



Creating two users using ansible-playbook

Q14. Where do you create basic inventory in Ansible?

Answer: /etc/ansible/hosts

Q15. What is Ansible Tower ?

Answer: Ansible Tower is a web-based solution that makes Ansible even easier for IT teams to use. Ansible Tower can be used for up to 10 nodes. It captures all recent activity, like the status of hosts, it integrates notifications about all necessary updates, and it also schedules Ansible jobs very well.

Q16. What is the command for running the Ansible playbook?

Answer: The below is the command to run or execute the ansible-playbook.

ansible-playbook my_playbook

Q17. On which protocol does Ansible communicate to remote node?

Answer: SSH

Q18. How to use ping module to ping all the nodes?

Answer: Below is the command which you can use to ping all the remote nodes.

ansible all -m ping

Q19. Provide an example to run a live command on all of your nodes?

Answer:

ansible all -a "/bin/echo hello"
Printing hello on the remote node using an ansible command

Q20. How to run ansible command with privilege escalation (sudo and similar) ?

Answer: Below command executes the ansible command with root access by using --become flag.

ansible all -m ping -u adam --become

PAPER-3

Q1. Which module allows you to create a directory?

Answer: Ansible file module allows you to create a directory.

Q2. How do you define the number of parallel processes used while communicating with hosts?

Answer: By setting the forks value in Ansible; to set forks, you need to edit the ansible.cfg file.
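
For example, a minimal ansible.cfg sketch; the value 20 is only an illustration (the default is 5):

[defaults]
forks = 20        # number of parallel processes Ansible uses against hosts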

Q3. Is Ansible agentless configuration management Tool ?

Answer: Yes

Q4. What is an Ansible inventory?

Answer: Ansible works against managed nodes or hosts to create or manage the infrastructure. You list these hosts or nodes in a file known as the inventory. An inventory can be in one of two formats: INI or YAML.

Q5. How to create an Ansible inventory in the INI format?

Answer:

automate2.mylabserver.com
[httpd]
automate3.mylabserver.com
automate4.mylabserver.com
[labserver]
automate[2:6].mylabserver.com

Q6. How to create an Ansible inventory in the YAML format?

Answer:

all:
  hosts:
    automate2.mylabserver.com:
  children:
    httpd:
      hosts:
        automate3.mylabserver.com:
        automate4.mylabserver.com:
    labserver:
      hosts:
        automate[2:6].mylabserver.com:

Q7. What is an Ansible tag?

Answer: Ansible tags let you run or skip selected parts of a playbook. You can apply Ansible tags at the block level, playbook level, individual task level, or role level, as shown below.

tasks:
- name: Install the servers
  ansible.builtin.yum:
    name:
    - httpd
    - memcached
    state: present
  tags:
  - packages
  - webservers
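
You can then limit a run to particular tags; for example (the playbook name here is hypothetical):

ansible-playbook my_playbook.yml --tags "packages"          # run only tasks tagged 'packages'
ansible-playbook my_playbook.yml --skip-tags "webservers"   # skip tasks tagged 'webservers'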

Q8. What are the key things required for a playbook?

Answer: Hosts should be configured in the inventory, tasks should be declared in the playbook, and Ansible should already be installed.

Q9. How to reuse existing Ansible tasks?

Answer: You can reuse tasks by importing them with import_tasks. Ansible import_tasks imports a list of tasks to be added to the current playbook for subsequent execution.
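
For example, a minimal sketch assuming a task file named common_tasks.yml sits next to the playbook:

  tasks:
    - name: Bring in the shared tasks
      ansible.builtin.import_tasks: common_tasks.yml   # hypothetical task file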

Q10. How can you secure the data in an Ansible playbook?

Answer: You can secure the data using ansible-vault to encrypt it and later decrypt it. Ansible Vault is a feature of Ansible that allows you to keep sensitive data such as passwords or keys in encrypted files, rather than as plaintext in playbooks or roles.
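
For example, assuming a variables file named secrets.yml:

ansible-vault encrypt secrets.yml    # encrypt an existing file
ansible-vault view secrets.yml       # view the encrypted file after entering the vault password
ansible-vault decrypt secrets.yml    # decrypt the file back to plaintext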

Q11. What is Ansible Galaxy?

Answer: Ansible Galaxy is a repository for Ansible Roles that are available to drop directly into your Playbooks to streamline your automation projects.

Q12. How can you download roles from Ansible Galaxy ?

Answer: The below command allows you to download roles from Ansible Galaxy.

ansible-galaxy install username.role_name

Q13. What are variables in Ansible?

Answer: Ansible variables are assigned values that are used later in computation. You create variables either by defining them in a file, passing them at the command line, or registering the return value or values of a task as a new variable; after that, you can reference them in your tasks and templates.
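
For example, a minimal sketch with a hypothetical variable named app_port:

---
- hosts: web
  vars:
    app_port: 8080                   # hypothetical variable defined in the play
  tasks:
    - name: Print the variable
      ansible.builtin.debug:
        msg: "The app runs on port {{ app_port }}"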

Q14. Which command generates an SSH key pair for connecting with remote machines?

Answer: ssh-keygen

Q15. What is Ansible Tower?

Answer: Ansible Tower is a web-based solution that makes Ansible even easier for IT teams to use. Ansible Tower can be used for up to 10 nodes free of charge. It captures all recent activity, such as the status of hosts, integrates notifications about all necessary updates, and also schedules Ansible jobs very well.

Q16. What is the command for running a playbook?

Answer: ansible-playbook my_playbook.yml

Q17. Does Ansible support AWS?

Answer: Yes. There are lots of modules present in Ansible for working with AWS.

Q18. How to create encrypted files using Ansible?

Answer: By using the below ansible-vault command.

ansible-vault create file.yml 

Q19. What are some key features of Ansible Tower?

Answer:

With Ansible Tower, you can view dashboards and see whatever is going on in real time, including job updates and who ran a playbook or ansible command. It also provides integrated notifications, lets you schedule Ansible jobs, and lets you run remote ansible commands.

Q20. What is the first-line syntax of any ansible playbook?

Answer: The first line of an ansible playbook is three dashes (---).

---   # The first line syntax of an ansible playbook


Conclusion

In this ultimate guide, you had a chance to revise everything you need to pass an Ansible interview, with Ansible interview questions and answers.

Now you have sound knowledge of Ansible and its various components, modules, and features, and you are ready for your upcoming interview.

The Ultimate Python interview questions: learn python the hard way

If you are preparing for a DevOps interview or a Python developer role, consider this guide your friend to practice Python interview questions and answers and help you pass the exam or interview.

Without further delay, let's get into this Ultimate Python interview questions and answers guide, where you will have 20 questions to practice.

PAPER

Q1. What is difference between List and Tuple in Python?

Answer: Lists are mutable, i.e., they can be modified, while tuples are immutable. As you can see in the below code, the list can be modified; the tuple cannot.

List = ["a","i",20]
Tuples = ("a","i",20)
List = ["a","i",20]
Tuples = ("a","i",20)
print(List)
print(Tuples)
List[1] = 22
print(List)
Tuples[1] = "n"
print(Tuples)

After running the above code, you will notice that the list is modified, but assigning to the tuple raises a TypeError.

Running Python code to modify a list and a tuple

Q2. What are the key features of Python?

Answer: Python is an interpreted language, which means it doesn't need to be compiled. It is easy to understand and write, and it can be used in various automation areas such as AI technologies. At the time of writing, the latest version of Python is 3.10.

Q3. Give Examples of some famous Python modules?

Answer: Some examples of Python modules are os, json, sys, math, and random.

Q4. What are local and global variables in Python?

Answer: Local variables are declared inside a function, and their scope is limited to that function, whereas global variables are variables whose scope spans the entire program.

Q5. What are functions in Python ?

Answer: A function is a block of code which is only executed when called. To declare a function, use the syntax def <function-name>(): and call it using <function-name>() as shown below.

def function(n):
    a, b = 0, 1
    while a < n:         # print the Fibonacci numbers below n
        print(a)
        a, b = b, a + b
    print(n)

function(200)
python shanky.py
Running the Python function

Q6. What is __init__?

Answer: __init__ is a method that runs automatically when an object or instance of a class is created; memory is allocated for the new object and __init__ initializes its attributes. Every class has an __init__ method, either defined explicitly or inherited.
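
For example, a minimal sketch:

class Person:
    def __init__(self, name):       # runs automatically when Person(...) is called
        self.name = name            # initialize the instance attribute

p = Person("Adam")
print(p.name)                       # prints Adam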

Q7. What is the lambda function in Python?

Answer: Lambda functions are also known as anonymous functions; a lambda takes any number of parameters but is written as a single expression.

square = lambda num: num ** 2
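
You can then call it like an ordinary function:

print(square(5))    # prints 25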

Q8. What is self in Python?

Answer: self refers to the current instance or object of a class. It is used with __init__ and other instance methods and is explicitly included as the first parameter.


Q9. How can you randomize a list in Python?

Answer: To randomize a list in Python, consider importing the random module as shown below.

from random import shuffle

my_list = ["a", "b", "c", "d", "e"]   # renamed to avoid shadowing the built-in 'list'
shuffle(my_list)
print(my_list)

Q10. What do *args and **kwargs mean?

Answer: *args is used when you are not sure how many positional arguments will be passed to a function. Similarly, **kwargs handles keyword arguments when you are not sure how many will be passed; they are received in dictionary format.
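
For example, a minimal sketch:

def demo(*args, **kwargs):
    print(args)      # tuple of positional arguments
    print(kwargs)    # dictionary of keyword arguments

demo(1, 2, name="adam")    # prints (1, 2) and {'name': 'adam'}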

Q11. Does Python support OOPs concepts?

Answer: Yes, Python does support OOPs concepts by letting you create classes and objects.

Q12. Name some Python libraries.

Answer: Python libraries are collections of built-in modules (written in C) that provide access to system functionality such as file I/O that would otherwise be inaccessible to Python programmers, as well as modules written in Python that provide standardized solutions for many everyday programming problems. Examples include Pandas and NumPy.

Q13. What are various ways to import the modules in python?

Answer:

import math                  # Standard import
import math as m             # Import with an alias
from flask import Flask      # Import a specific name from a module

Q14. What is Python Flask?

Answer: Python Flask is a web framework that makes a developer's life easier by reusing code and providing extensions to build reliable, scalable, and maintainable web apps. With the Flask web framework, you can create anything from static to dynamic applications and work with API requests.

There are other Python web frameworks apart from Flask, such as Tornado, Pyramid, and Django.

Related POST: Python Flask Tutorial: All about Python flask

Q15. How can we open a file in Python?

Answer:

with open("myfile.txt", "r") as newfile:

Q16. How can you see the statistics of a file located in a directory?

Answer: To see the stats of a file located in a directory, consider using the os module.

import os

os.stat("file_name") # These stats include st_mode, the file type and permissions, and st_atime, the time the item was last accessed.

Q17. How do you define method and URL Bindings in Python Flask?

Answer:

@app.route("/login", methods = ["POST"])

Related POST: Python Flask Tutorial: All about Python flask

Q18. How does a Python Flask app get executed?

Answer:

if __name__ == '__main__':
    app.run(debug=True)

Q19. Which built-in function evaluates a value or expression and returns a Boolean result?

Answer: The bool() function.
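
For example:

print(bool("hello"))    # True  (non-empty string)
print(bool(0))          # False (zero is falsy)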

Q20. How can you make a Python script executable in Unix?

Answer: Below are the steps you need to make a Python script executable in Unix.

  • Define the path of the Python interpreter in the first line of the script (the shebang).
#!/usr/local/bin/python
  • Next, make the script executable by using the below command.
chmod +x abc.py
  • Finally, run the script.
./abc.py


Conclusion

In this ultimate guide, you had a chance to revise everything you need to pass a Python interview, with Python interview questions and answers.

Now you have sound knowledge of Python and its various components, modules, and features, and you are ready for your upcoming interview.

How to create a new Docker image using Dockerfile: Dockerfile Example

Are you looking to create your own Docker image? Docker images are basic software applications or operating systems, but when you need software with advanced functionalities of your choice, consider creating a new docker image with a dockerfile.

In this tutorial, you will learn how to create your own Docker image using a dockerfile, which contains a set of instructions and the arguments for each instruction. Let's get started.

Prerequisites

If you’d like to follow along step-by-step, you will need the following installed:

  • Ubuntu machine with Docker installed. This tutorial uses Ubuntu 21.10 machine.
  • Docker v19.03.8 installed.


What is Dockerfile?

If you are new to dockerfiles, you should know what a dockerfile is. A Dockerfile is a text file that contains all the instructions a user could call on the command line to assemble an image from a base image. The instructions could include using the base image, updating the repository, installing dependencies, copying source code, etc.

Docker can build images automatically by reading the instructions from a Dockerfile. Each instruction in a Dockerfile creates another layer (each instruction takes its own space). While building a new image using the docker build command (which is done by the Docker daemon), if any instruction fails and you rebuild the image, the previously cached layers are reused in the build.

A new Docker image can be built by simply executing the docker build command, or if your Dockerfile is at a different path, use the -f flag.

docker build . 
docker build -f /path/to/a/Dockerfile .

Dockerfile instructions or Dockerfile Arguments

  • FROM: The FROM instruction initializes a new build stage and sets the base image for subsequent instructions. FROM may appear multiple times in a Dockerfile.
  • ARG: ARG is the only instruction that may come before FROM. The ARG instruction defines a variable that users can pass while building the image using the docker build command, such as with the
 --build-arg <varname>=<value> flag
  • EXPOSE: The EXPOSE instruction informs Docker about the ports the container listens on. The EXPOSE instruction does not actually publish the port; it just documents, for admins, which ports are intended to be published.
  • ENV: The ENV instruction sets environment variables in the form of key-value pairs.
  • ADD: The ADD instruction copies new files, directories, or remote file URLs from your docker host and adds them to the filesystem of the image.
  • VOLUME: The VOLUME instruction creates a mount point and acts as an externally mounted volume from the docker host or other containers.
  • RUN: The RUN instruction executes any commands in a new layer on top of the current image and commits the results. RUN can be declared in two ways: the shell way or the executable way.
  • Shell way: the command is run in a shell, i.e., /bin/sh. If you need to run multiple commands, use a backslash.
  • Executable way: RUN ["executable", "param1", "param2"]. If you need to use any shell other than /bin/sh, you should consider the executable way.
RUN /bin/bash -c 'source $HOME/.bashrc; echo $HOME' # Shell way
RUN ["/bin/bash", "-c", "echo HOME"] # Executable way (other than /bin/sh)
  • CMD: The CMD instruction executes a command within the container, just like the docker run command does. There can be only one CMD instruction in a Dockerfile; if you list more than one, only the last takes effect. CMD has three forms, as shown below.
    • CMD ["executable","param1","param2"] (exec form, this is the preferred form)
    • CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
    • CMD command param1 param2 (shell form)

Let's take an example in the below dockerfile; if you need your container to sleep for 5 seconds and then exit, use the below instruction.

FROM ubuntu
CMD sleep 5

Run the below docker run command to create a container; it then sleeps for 5 seconds, and then the container exits.

docker run <new-image>

But if you wish to modify the sleep time, such as to 10, then you will need to either change the value manually in the Dockerfile or add the value in the docker run command.

FROM ubuntu
CMD sleep 10    # Manually changing the sleep time in the dockerfile
docker run <new-image> sleep 10  # Manually changing the sleep time on the command line

You can also use an entrypoint, shown below, to automatically prepend the sleep command in the docker run command, where you just need to provide the value of your choice.

FROM ubuntu
ENTRYPOINT ["sleep"]

With an entrypoint, when you execute the docker run command with the new image, sleep automatically gets prepended to the command as shown below; the only thing you need to specify is the number of seconds it should sleep.

docker run <new-image> sleep <add-value-of-your-choice>

So, in the case of the CMD instruction, command-line parameters passed to docker run completely replace the defaults, whereas in the case of ENTRYPOINT, the parameters passed are appended.

If you don't provide the <add-value-of-your-choice>, this will result in an error. To avoid the error, you should consider using both CMD and ENTRYPOINT, but make sure to define both in JSON (exec) format.

FROM ubuntu
ENTRYPOINT ["sleep"]   # This command will always run sleep
CMD ["5"]              # If you pass a parameter on the command line it is used; otherwise 5 by default
  • ENTRYPOINT: An ENTRYPOINT allows you to run a command as an executable in a container. ENTRYPOINT is preferred when defining a container with a specific executable. You cannot override an ENTRYPOINT when starting a container unless you add the --entrypoint flag.
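
With the combined ENTRYPOINT and CMD shown above, both of the following work; the image name here is a placeholder:

docker run <new-image>        # sleeps for the default 5 seconds
docker run <new-image> 10     # 10 is appended to sleep, overriding the CMD default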

Dockerfile Example

Up to now, you learned how to declare dockerfile instructions and the executable forms of each instruction, but unless you create a dockerfile and build a new image with these instructions, they don't do much. So let's learn and understand by creating a new dockerfile. Let's begin.

  • Login to the ubuntu machine using your favorite SSH client.
  • Create a folder under the home directory named dockerfile-demo and switch to this directory.
mkdir ~/dockerfile-demo
cd ~/dockerfile-demo/
  • Create a file inside the ~/dockerfile-demo directory named dockerfile and copy/paste the below code. The below code contains the FROM instruction, which sets the base image as ubuntu, runs the update and nginx installation commands, and builds the new image. Once you run the docker container, "Image created" is printed on the container's terminal using the echo command.
FROM ubuntu:20.04
MAINTAINER shanky@automateinfra.com
RUN apt-get update
RUN apt-get install -y nginx
CMD ["echo", "Image created"]
docker build -t docker-image:tag1 .
Building the docker image and tagging successfully

As you can see below, once the docker container is started, "Image created" is printed on the container's screen.

Running the container


Conclusion

In this tutorial, you learned what a dockerfile is, many dockerfile instructions and their executable forms, and finally, how to create your own Docker image using a dockerfile.

So which application are you planning to run using the newly created docker image?

Ultimate docker interview questions for DevOps

If you are looking to crack your DevOps engineer interview, docker is one of the important topics you should prepare. In this guide, work through the docker interview questions for DevOps that you should know.

Let’s go!

Q1. What is Docker ?

Answer: Docker is a lightweight containerization technology. It allows you to automate deployment in portable containers which are built from docker images.

Q2. What is Docker Engine?

Answer: Docker Engine is the server where docker is installed. The docker client and server remain on the same server or remote host, and clients can connect with the server using the CLI or RESTful APIs.

Q3. What is use of Dockerfile and what are common instructions used in docker file?

Answer: We can either pull a docker image and use it directly to build our apps, or we can create one more layer on top of it according to the need; that's where a Dockerfile comes into play. With a Dockerfile, you can design the image accordingly. Some common instructions are FROM, LABEL, RUN, and CMD.

Q4. What are the states of Docker containers?

Answer: Running, Exited, Restarting, and Paused.

Q5. What is DockerHUB ?

Answer: DockerHub is a cloud-based registry for docker images. You can either pull or push your images to DockerHub.

Q6. Where are Docker Volumes stored ?

Answer: Docker Volumes are stored in /var/lib/docker/volumes.
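
You can confirm this on your own docker host; for example, assuming a volume named vol1 exists:

docker volume ls              # list all volumes on the host
docker volume inspect vol1    # the Mountpoint field shows /var/lib/docker/volumes/vol1/_data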

Q7. Write a Dockerfile that uses a Python base image, creates a working directory, and copies the current directory into it.

Answer:

FROM python:3
WORKDIR /app
COPY . /app

Q8. What is the medium of communication between docker client and server?

Answer: Communication between the docker client and server is taken care of by the REST API, socket.IO, and the TCP protocol.

Q9. How to start Docker container and create it ?

Answer: docker run -i -t centos:6 will create the container as well as run it. Ideally, just to create a container, you use docker container create <image-name>.

Q10.What is difference between EXPOSE PORT and PUBLISH Port ?

Answer: Exposing a port means the port is only exposed locally, i.e., to the container. Publishing a port maps it to the docker host, allowing access from the outside world.

Q11. How can you publish Port i.e Map Host to container port ? Provide an example with command

Answer: docker container run -d -p 80:80 nginx

Here -p maps the host port to the container port, and -d is detached mode, i.e., the container runs in the background and you just see the container ID displayed on the terminal.

Q12. How do you mount a volume in docker ?

Answer: docker container run -d --name mycontainer --mount source=vol1,target=/app nginx

Q13. How Can you run multiple containers in single service ?

Answer: We can achieve this by using docker swarm or docker compose. Docker compose uses YAML-formatted files.

Q14. Where do you configure logging driver in docker?

Answer: We can do that in the daemon.json file (by default at /etc/docker/daemon.json).

Q15. How can we go inside the container ?

Answer: docker exec -it <container_id> /bin/bash (or simply bash)

Q16. How can you scale your Docker containers?

Answer: By using the docker-compose command: docker-compose --file scale.yml scale myservice=5

Q17. Describe the Workflow from Docker file to Container execution ?

Answer: Dockerfile ➤ docker build ➤ Docker image (or pull from registry) ➤ docker run -it ➤ Docker container ➤ docker exec -it ➤ Bash

Q18. How to monitor your docker in production ?

Answer:

docker stats: get information about CPU, memory, usage, etc.

docker events: check activities of containers, such as attach, detach, die, rename, commit, etc.

Q19. Is Docker swarm an approach to orchestrate containers ?

Answer: Yes, it is one of them; another is Kubernetes.

Q20. How can you check docker version?

Answer: The docker version command, which gives you the client and server information together.

Q21. How can you tag your Docker image?

Answer: docker tag <image_id> <repository>:<tag>

Conclusion

In this guide, you learned some of the basic docker interview questions for DevOps that you should know.

There are more interview guides published on automateinfra.com; which one did you like the most?

How to Install Apache tomcat using Ansible.

If you are looking to install apache tomcat instances, Ansible is a great way to do it.

Ansible is an agentless automation tool that manages machines over the SSH protocol by default. Once installed, Ansible does not add a database, and there will be no daemons to start or keep running.

With Ansible, you can create an ansible playbook and use it to deploy dozens of tomcat in one go. In this tutorial, you will learn how to install apache tomcat using Ansible. Let’s get started.

Prerequisites

This post will be a step-by-step tutorial. If you’d like to follow along, be sure you have:

  • An Ansible controller host. This tutorial will be using Ansible v2.9.18.
  • A remote Linux computer to test out the tomcat installation. This tutorial uses Ubuntu 20.04.3 LTS as the remote node.
  • An inventory file and one or more hosts are configured to run Ansible commands and playbooks. The remote Linux computer is called webserver, and this tutorial uses an inventory group called web.

Ensure your remote machine's IP address is inside /etc/ansible/hosts (either as a single remote machine or defined as a group).

Building tomcat Ansible-playbook on the Ansible Controller

Ansible is an automation tool used for deploying applications and systems easily; it could be cloud provisioning, services, orchestration, etc. Ansible uses the YAML language to build playbooks, which are then used to deploy or configure the required change. To deploy tomcat, let's move ahead and create the ansible playbook.

  • SSH or log in to any of your Linux machines.
  • Create a file named my_playbook3.yml inside the /etc/ansible folder and paste the below code.

The below playbook contains all the tasks to install tomcat on the remote node. The first task updates your system packages using the apt module, and the following tasks create the tomcat user and group, install Java, install tomcat, and create the necessary folders and permissions for the tomcat directory.

---
- name: Install Apache Tomcat10 using ansible
  hosts: webserver
  remote_user: ubuntu
  become: true
  tasks:
    - name: Update the System Packages
      apt:
        upgrade: yes
        update_cache: yes

    - name: Create a Tomcat User
      user:
        name: tomcat

    - name: Create a Tomcat Group
      group:
        name: tomcat

    - name: Install JAVA
      apt:
        name: default-jdk
        state: present


    - name: Create a Tomcat Directory
      file:
        path: /opt/tomcat10
        state: directory
        owner: tomcat
        group: tomcat
        mode: 0755
        recurse: yes

    - name: download & unarchive tomcat10 
      unarchive:
        src: https://mirrors.estointernet.in/apache/tomcat/tomcat-10/v10.0.4/bin/apache-tomcat- 10.0.4.tar.gz
        dest: /opt/tomcat10
        remote_src: yes
        extra_opts: [--strip-components=1]

    - name: Change ownership of tomcat directory
      file:
        path: /opt/tomcat10
        owner: tomcat
        group: tomcat
        mode: "u+rwx,g+rx,o=rx"
        recurse: yes
        state: directory

    - name: Copy Tomcat service from local to remote
      copy:
        src: /etc/tomcat.service
        dest: /etc/systemd/system/
        mode: 0755

    - name: Start and Enable Tomcat 10 on the server
      systemd:
        name: tomcat
        state: started
        enabled: true
        daemon_reload: true

Running Ansible-playbook on the Ansible Controller

Earlier, in the previous section, you created the ansible playbook, which is great, but it is not doing much unless you deploy it. Deploy the playbook using the ansible-playbook command.

Assuming you are logged into Ansible controller:

  • Now run the playbook using the below ansible-playbook command.
ansible-playbook my_playbook3.yml

As you can see below, all the tasks completed successfully. If the status of a TASK shows ok, the task was already complete; for a changed status, Ansible performed the task on the remote node.

Running the ansible-playbook on the Ansible controller host
  • Next, verify on the remote machine that Apache Tomcat is installed and started successfully, using the below commands.
systemctl status tomcat 
service tomcat status
Verifying the tomcat service on the remote node
  • You can also verify by running the process commands below.
ps -ef | grep tomcat
ps -aux | grep tomcat
Checking the tomcat process


Tomcat files and Tomcat directories on a remote node

Now that you have successfully installed tomcat on the remote node and verified the tomcat service, it is equally important to check the tomcat files created and the purpose of each of them.

  • Firstly, all the tomcat files and directories are stored under <tomcat-installation-directory>/*.

Your installation directory is represented by the environment variable $CATALINA_HOME.

  • The tomcat directories and files should be owned by the tomcat user.
  • The tomcat user should be a member of the tomcat group.
Verify all files of tomcat
  • <tomcat-installation-directory>/bin: This directory consists of startup and shutdown scripts (startup.sh and shutdown.sh) to run or stop the tomcat directly without using the tomcat service configured.
Verify the installation directory of tomcat
  • <tomcat-installation-directory>/conf: This is a very crucial directory where tomcat keeps all its configuration files.
Verify the Tomcat configuration directory
  • <tomcat-installation-directory>/logs: In case you get any errors while running tomcat, look at your safeguard, i.e., the logs; tomcat creates its own logs under this directory.
Verify the Tomcat logs directory
  • <tomcat-installation-directory>/webapps: This is the directory where you place your code, such as a .war, and run your applications. It is highly recommended to stop tomcat, deploy your application inside this directory, and then start tomcat.
Verify the Tomcat code directory

Conclusion

In this tutorial, we covered in depth how you can install Apache Tomcat 10 on Ubuntu using an Ansible controller, and finally discussed the files and directories that matter most to Apache tomcat admins and developers. If you wish to run your applications easily on a lightweight web server, Apache Tomcat is your friend.

The Ultimate Handbook of Linux command cheat sheet

If you are new to the Linux operating system, then this handbook is a Linux command cheat sheet for you that will help you jumpstart into the Linux world.

In this ultimate handbook of the Linux command cheat sheet, you will find a variety of Linux commands that are used every day in the Linux administrator role.

Prerequisites

This post will be a step-by-step tutorial. If you’d like to follow along, be sure you have:

  • CentOS 8 machine, or preferably CentOS 7 plus; if you don't have any machine, you can create an ec2 instance on an AWS account. Recommended to have 4GB RAM and at least 5GB of drive space.

How to check the Linux version or the kernel version: you can use the uname --kernel-name --kernel-release --machine command.

Linux command cheat sheet

In this section, you will see the below table containing the purpose of each command and the command to execute. You should try these commands on your CentOS machine, which will help you get command of the Linux operating system.

| Purpose | Command to execute |
| --- | --- |
| System uptime information | uptime |
| Kernel information | uname -r |
| The IP address of the machine | hostname -I |
| Last reboot | last reboot |
| Who you are logged in as | whoami |
| Who is online | w or who |
| CPU information | cat /proc/cpuinfo |
| Memory information | cat /proc/meminfo |
| Free and used memory | free -h |
| Hardware info | dmidecode |
| Processes | top |
| List files opened by a user | lsof -u user |
| All the currently running processes | ps -ef |
| Your currently running processes | ps |
| MySQL process | ps -ef \| grep mysql |
| Date & time settings | chronyd or ntpd |
| Displaying the current date and time | date or timedatectl |
| Command to manage systemd | systemctl |
| System-wide locale settings | /etc/locale.conf |
| Listing available system locale settings | localectl list-locales |
| Current status of the system locale settings | localectl status |
| Configuring network access | nmcli or nmtui |
| Modify an existing connection | nmcli con mod "con-name" |
| Display all connections | nmcli con show |
| Display the active connection | nmcli con show --active |
| Linux software packages | RPM packages |
| Repo directory | /etc/yum.repos.d/ |
| Available repositories | subscription-manager repos --list |
| Currently enabled repositories | yum repolist |
| Searching for packages | yum search string |
| Installing a package | yum install package_name |
| Updating all packages | yum update |
| Updating a package | yum update package_name |
| Uninstalling a package | yum remove package_name |
| Installed and available packages | yum list all |
| Installed packages | yum list installed |
| Enable a service | systemctl enable service_name |
| Disable a service | systemctl disable service_name |
| Firewall service | firewalld |
| Current status of the firewall | systemctl status firewalld |
| Start the firewall | systemctl start firewalld |
| Additional layer of system security | SELinux |
| Display the current SELinux mode | getenforce |
| Change the state of SELinux | setenforce Enforcing/Permissive |
| Permanent SELinux state | /etc/selinux/config |
| SSH connection | Facilitates client-server communication |
| Disabling SSH root login | vi /etc/ssh/sshd_config, set PermitRootLogin no, then systemctl restart sshd |
| Min UID value in useradd | UID_MIN 1000 |
| Max UID value in useradd | UID_MAX 60000 |
| Adding a user to a group | usermod -a -G group_name user_name |
| Reserved user IDs (<1000) | Details in /etc/login.defs |
Brilliant Guide to Linux command line sheet

Conclusion

In this ultimate handbook of the Linux command cheat sheet, you learned various Linux commands that you can use in the Linux administrator role.

Now that you are a Linux command pro, consider creating shell scripts with all these commands and trying them out.

How to Protect Ubuntu network and ubuntu firewall

In today's world, there are high chances that attackers may attack your system, because machines may be open to the world in one way or another. To get rid of vulnerabilities and attacks, you should consider protecting the Ubuntu network and the Ubuntu firewall.

In this tutorial, you will learn how to protect the Ubuntu network and Ubuntu firewall in various ways, to keep your machine far away from attacks and more secure.

Let’s go!

Prerequisites

This post will be a step-by-step tutorial. If you’d like to follow along, be sure you have:

  • Ubuntu 21.04 machine, or preferably Ubuntu 18.04 plus; if you don't have any machine, you can create an ec2 instance on an AWS account. Recommended to have 4GB RAM and at least 5GB of drive space.

Checking ubuntu vulnerabilities

Let's kick off this tutorial by first checking what could be the potential sources of Ubuntu vulnerabilities. There are multiple points one must consider to get rid of vulnerabilities:

  • Regularly review which applications and software are needed; if software is no longer needed, make sure to purge or uninstall it.
  • Remove services that are no longer needed; for example, to run an apache2 web server you only need the apache2 service, so avoid installing extra services that are not required.
  • Disable services that are no longer required in the long run and are open to the world.

How to check the list of running services on ubuntu 21.04 machine

To check the list of running services on the ubuntu 21.04 machine consider running the below service command.

service --status-all | grep '\[ + \]'
List of running services on ubuntu 21.04

How to stop ubuntu service running on ubuntu 21.04 machine

To stop the ubuntu service that is no longer required on the ubuntu 21.04 machine run the following command.

service <service-name> stop        # Stop a service
service apport stop                # Example: stop the apport service
service <service-name> status      # Check the status of a service
Stopping a ubuntu service running on the ubuntu 21.04 machine

Scanning ubuntu 21.04 machine using various tools

There are various tools you can use to scan the ubuntu machine and look for any connectivity issues or unknown IPs attacking your system. Let's start with Nmap.

Running nmap commands

Nmap, also known as network mapper, is most widely used to analyze networks, monitor host details and connection audits, and check all ports and connectivity on your machine or a remote machine.

After you run the Nmap command, you will see it display all the closed ports, DNS records, etc.

nmap ip-address
nmap ip-address command
  • You can also run the Nmap command using the hostname instead of the IP address of the network component.
nmap Hostname      # nmap ip-10-111-4-53  
nmap hostname command

Scanning ubuntu 21.04 with Rootkit hunter or Rkhunter

The Rootkit Hunter (rkhunter) tool is used to find issues such as incorrect file and directory permissions, hash changes, executables with incorrect file permissions, and hidden files.

  • Run rkhunter command to check the system files and configurations.
rkhunter -c  # Run a system check on our own machine
Running Rootkit Hunter
Rootkit Hunter summary

Protecting ubuntu 21.04 using tcptrace command

Another way to protect Linux or ubuntu 21.04 machines is by running the tcptrace command. tcptrace is used to trace TCP packet information on both the receiving and sending ends of connections.

  • Run tcptrace to check the tcp connections as shown below.
 tcptrace -houtput
Running the tcptrace command on ubuntu 21.04

How To Set Up the UFW Firewall on Linux

Without a firewall, there are no rules or restrictions on your network traffic and that leads to a number of negative consequences. Linux system comes with a default firewall configuration tool, which is Uncomplicated Firewall (UFW). But how do you set up a UFW firewall?

The UFW service enables you to implement selective or restrictive policies regarding access to your system and is an interface to iptables. Let's check the ufw commands in detail to understand better.

  • To install the ufw firewall on the ubuntu machine, run the below command. (Note: ufw is usually already installed, but run this in case it is not available.)
apt install ufw 
Install the ufw firewall on the ubuntu machine
  • Now, check if ufw is successfully installed on your machine by running service ufw status command.
service ufw status
Checking the ufw firewall on the ubuntu machine
  • To check the UFW status on ubuntu 21.04 machine without running service command run the following command.
ufw status                 # Don't use service command 
Checking the ufw firewall status on the ubuntu machine
  • To enable ufw on ubuntu 21.04 run the ufw enable command.
ufw enable  # select yes to proceed 
Enabling the ufw firewall on the ubuntu machine
  • To allow SSH (port 22) and port 80, use the ufw allow command.
ufw allow ssh
ufw allow 80
Allowing ports through the ufw firewall on the ubuntu machine
  • To enable logging for ufw, run the ufw logging on command; to disable it, run ufw logging off.
ufw logging on 
Enabling the logging using ufw command on ubuntu machine

UFW commands for the ubuntu machine (ufw firewall, ufw allow port, ufw limit, etc.)

Let's quickly check the list of all UFW commands that are useful for network connectivity on the ubuntu 21.04 machine. Here is a quick summary of ufw commands.

ufw enable ✏ ufw enable command enables the firewall on the machine.
ufw disable ✏ ufw disable command disables the firewall on the machine.
ufw reload ✏ufw reload command reloads the firewall to ensure changes are applied
ufw logging on|off ✏ ufw logging on|off command enables or disables ufw logging on the machine
ufw allow ✏ ufw allow command adds an allow rule on the machine.
ufw deny ✏ ufw deny command adds a deny rule on the machine.
ufw reject ✏ ufw reject command adds a reject rule on the machine.
ufw limit ✏ ufw limit command adds a limit rule on the machine.
ufw delete ✏ ufw delete command deletes the rule on the machine.
ufw status ✏ ufw status command shows the firewall status on the machine.
ufw version ✏ ufw version command displays version information on the machine

The Uncomplicated Firewall (ufw) is a front-end for iptables and is particularly well-suited for host-based firewalls, so you can block or allow traffic based on IP address, NIC, port, network, and more. You can set iptables to log all actions or just specific actions.

sudo iptables -L     #  Lists the currently set firewall rules
Lists the currently set firewall rules using iptables
sudo iptables -L -vn   #  Lists the currently set firewall rules with more details
Lists the currently set firewall rules in detail using iptables
sudo iptables -F    #  Deletes the currently set firewall rules 

Conclusion

Throughout this tutorial, you've seen that setting up a firewall and protecting the ubuntu machine is important, and that there are multiple ways to do it, such as using UFW, Nmap commands, and tcptrace.

You should now have a good understanding of how to protect the Ubuntu network and Ubuntu firewall; why not build on this newfound knowledge?

How to Install Tomcat on Ubuntu Machine.

If you are looking to deploy your critical web applications on a web server, nothing could be better than Apache Tomcat.

Tomcat is a lightweight and widely used web server based on the implementation of Java servlets, JSP, and the Java expression language. Tomcat provides a pure Java HTTP web server environment where Java code runs. Many applications are hosted on tomcat, as it is open source, which is a win for system operators.

In this tutorial, you will learn how to install Apache Tomcat 10.0 on an Ubuntu Linux machine.

Table of Contents

  1. Prerequisites
  2. How to Install Java 11 on ubuntu 18.04 machine
  3. How to Install Tomcat 10 on ubuntu 18.04 machine
  4. Files and Directories in Tomcat
  5. Conclusion

Prerequisites

This post will be a step-by-step tutorial. To follow along, be sure you have an Ubuntu machine to install Tomcat on; this tutorial uses Ubuntu 18.04.

Apache Tomcat is supported on all Windows, Linux, and macOS operating systems.

How to Install Java 11 on ubuntu 18.04 machine

As previously specified, Tomcat requires Java to be installed, as tomcat implements Java-based technologies. If you don't have Java installed, let's learn how to install Java version 11 on the ubuntu 18.04 machine.

  • Connect to Ubuntu machine using your favorite SSH client.
  • Next, install Java by running the apt install command. default-jdk is an open-source Java runtime that is most widely used.
# Installing Java Version: Java SE 11 (LTS)
sudo apt install default-jdk 
  • After the apt install default-jdk command executes successfully, verify that Java has been installed by running the below command.
java -version               # To check the Installed Java Version
Checking the Java version using the java -version command
  • To verify the Java you can also check the location of java binaries using which and whereis commands.
which java                 # Provides the location of the executable file
whereis java               # Provides the location of all the files related to Java
Checking the Java binaries
  • Run the below command to check the installation path of Java. The system should respond with the path where Java is installed.
update-alternatives --list java

If you have multiple Java versions installed on your machine and you want to switch from one version to another, consider running the update-alternatives --config java command.

Checking the installation path of Java

How to install Tomcat 10 on ubuntu 18.04 machine

Now you have Java installed successfully on the ubuntu 18.04 machine, which is great. Next, you need to install tomcat. Installing Tomcat is a straightforward task; let's check it out.

  • Create a folder named tomcat inside the opt directory using the mkdir command.
cd /opt
mkdir tomcat
  • Download the binary distribution of Tomcat 10 using the curl command as shown below.
curl -O https://mirrors.estointernet.in/apache/tomcat/tomcat-10/v10.0.4/bin/apache-tomcat-10.0.4.tar.gz
Download the binary distribution of Tomcat 10
  • Extract the tomcat archive that you just downloaded using the tar command. After you execute the tar command, you should see the tomcat folder in the opt directory.
 sudo tar xzvf apache-tomcat-10.0.4.tar.gz -C /opt/tomcat --strip-components=1
Extract the tomcat archive
  • Now you have tomcat installed, but you should run tomcat as a tomcat user that is part of the tomcat group.
  • Create a new group named tomcat using the groupadd command.
sudo groupadd tomcat
  • Next, create a tomcat user using the useradd command and make it part of the tomcat group.
    • -s /bin/false denotes that nobody can log in as this user
    • /opt/tomcat will be tomcat home directory.
sudo useradd -s /bin/false -g tomcat -d /opt/tomcat tomcat      
  • To run tomcat as the tomcat user, assign the tomcat user and tomcat group to the tomcat directory.
cd /opt/tomcat                     # Go to tomcat directory
sudo chgrp -R tomcat /opt/tomcat   # tomcat group given group ownership on /opt/tomcat
sudo chmod -R g+r conf             # Assign Read permission to tomcat group on conf
sudo chmod g+x conf                # Assign Execute permission to tomcat group on conf
sudo chown -R tomcat /opt/tomcat   # Assign tomcat as owner of the directory
  • Now you are ready to start the tomcat application, but it is always recommended to run the application via a service, because a service brings the application back up if the system reboots accidentally or by any means. Let's create the tomcat service.
  • To create the tomcat service create a new file named tomcat.service as shown below.
sudo vi /etc/systemd/system/tomcat.service
  • Next, copy and paste the below code in tomcat.service
[Unit]
Description=Apache Tomcat 
After=network.target

[Service]
Type=forking

Environment=JAVA_HOME=/usr/lib/jvm/java-1.11.0-openjdk-amd64
Environment=CATALINA_PID=/opt/tomcat/temp/tomcat.pid
Environment=CATALINA_HOME=/opt/tomcat
Environment=CATALINA_BASE=/opt/tomcat

ExecStart=/opt/tomcat/bin/startup.sh
ExecStop=/opt/tomcat/bin/shutdown.sh

User=tomcat
Group=tomcat
UMask=0007
Restart=always

[Install]
WantedBy=multi-user.target
  • Next, run the below commands to load the tomcat.service file and then start and enable the service.
sudo systemctl daemon-reload    # Reload systemd so it picks up the new unit file
sudo systemctl start tomcat     # Start the tomcat service
sudo systemctl enable tomcat    # Enable the tomcat service at boot
sudo systemctl status tomcat    # Check the status of the tomcat service
Checking the status of the tomcat service

By now, Apache Tomcat 10 service should be started and running successfully. To verify the tomcat application, navigate to the default webpage on the browser and type <IP-address-of your-tomcat-server>:8080.

Make sure to check your inbound rules and confirm that port 8080 is open.

Verify the tomcat application

Conclusion

In this tutorial, you have learned how to install Apache Tomcat on an Ubuntu server. Deploying Java applications on Apache Tomcat is a quick and easy process!

Apache Tomcat is the most widely used open-source tool for Java developers to run their web applications. Now that you have installed Apache Tomcat, which Java application are you going to deploy and manage next?

How to Work with Ansible When and Other Conditionals

If you need to execute Ansible tasks based on different conditions, then you're in for a treat. Ansible when and other conditionals let you evaluate conditions, such as the OS of a host, or whether one task depends on the previous task.

In this tutorial, you’re going to learn how to work with Ansible when and other conditionals so you can execute tasks without messing things up.
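
As a small taste, here is a minimal sketch of a when condition based on the OS family:

tasks:
  - name: Install Apache only on Debian-based systems
    ansible.builtin.apt:
      name: apache2
      state: present
    when: ansible_facts['os_family'] == "Debian"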

Click here and Continue reading