How to Install AWS CLI Version 2 and Setup AWS credentials

The AWS CLI (AWS Command Line Interface) enables you to interact with AWS services across AWS accounts using commands in your command-line shell, either from your local environment or remotely. The AWS CLI provides direct access to the public APIs of AWS services.

You can control multiple AWS services from the command line and automate them through scripts. You can run AWS CLI commands from a Linux shell such as bash, zsh or tcsh, and on a Windows machine you can use Command Prompt or PowerShell to execute AWS CLI commands.

The AWS CLI is available in two versions; let's learn how to install AWS CLI version 2.

Table of Contents

  1. Installing AWS CLI Version 2 on windows machine
  2. Creating an IAM user in AWS account with programmatic access
  3. Configure AWS credentials using aws configure
  4. Verify aws configure from AWS CLI by running a simple command
  5. Configuring AWS credentials using Named profile.
  6. Verify Named profile from AWS CLI by running a simple command.
  7. Configuring AWS credentials using environment variable
  8. Conclusion

Installing AWS CLI Version 2 on windows machine

  • Download the installer for AWS CLI version 2 on your Windows machine from here
  • Select I accept the terms and then click the Next button
  • Do a custom setup if needed, such as the installation location, and then click the Next button
  • Now you are ready to install AWS CLI version 2
  • Click Finish and now verify the AWS CLI
  • Verify the AWS CLI version by going to the command prompt and typing
aws --version

Now AWS CLI version 2 is successfully installed on the Windows machine.

Creating an IAM user in AWS account with programmatic access

There are two ways to connect to an AWS account: the first is providing a username and password on the AWS login page in a browser, and the other is to configure the Access key ID and secret access key of an IAM user on your machine and then use command-line tools such as the AWS CLI to connect programmatically.

For the AWS CLI to connect to an AWS service, you should already have an Access key ID and secret access key, which you will configure on your local machine to connect to your AWS account.

Let's learn how to create an IAM user with an Access key ID and secret access key!

  1. Open your favorite web browser and navigate to the AWS Management Console and log in.
  2. While in the Console, click on the search bar at the top, search for ‘IAM’, and click on the IAM menu item.
  3. To create a user, click on Users → Add user, provide the user name myuser, and make sure to tick the Programmatic access checkbox in Access type, which enables an access key ID and secret access key; then hit the Permissions button.
  4. Now select the "Attach existing policies directly" option under set permissions and look for the "AdministratorAccess" policy using the filter policies search box. This policy will allow myuser to have full access to AWS services.
  5. Finally click on Create user.
  6. Now, the user is created successfully and you will see an option to download a .csv file. Download this file, which contains the IAM user's (myuser) Access key ID and Secret access key, which you will use later in the tutorial to connect to AWS services from your local machine.

Configure AWS credentials using aws configure

Now you have an IAM user with an Access key ID and secret access key, but the AWS CLI cannot do anything until you configure AWS credentials. Once you configure the credentials, the AWS CLI allows you to connect to the AWS account and execute commands.

  • Configure AWS Credentials by running the aws configure command on command prompt
aws configure
  • Enter the details such as the AWS Access key ID, Secret Access Key, and region. You can leave the output format blank to use the default (json), or set it to text or table.
  • Once AWS is configured successfully, verify by navigating to C:\Users\YOUR_USER\.aws and checking that the two files, credentials and config, are present.
  • Now open both files and verify their contents, as shown below.
  • Now, your AWS credentials are configured successfully using aws configure.
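For reference, after running aws configure with the default profile, the two files typically look like the following (the key values here are placeholders, not real credentials):

# C:\Users\YOUR_USER\.aws\credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = vIaGXXXXXXXXXXXXXXXXXXXX

# C:\Users\YOUR_USER\.aws\config
[default]
region = us-east-1
output = json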

Verify aws configure from AWS CLI by running a simple command

Now, you can test whether the AWS Access key ID, Secret Access Key, and region you configured in the AWS CLI are working by going to the command prompt and running the following command.

aws ec2 describe-instances
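If you would rather not depend on having any EC2 instances in the account, a lighter-weight check is the following command, which simply returns the account and IAM identity your configured credentials resolve to:

aws sts get-caller-identity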

Configuring AWS credentials using Named profile.

A named profile is a collection of settings and credentials that you can apply to an AWS CLI command. When you specify a profile to run a command, its settings and credentials are used to run that command.

Earlier you created one IAM user and configured AWS credentials using aws configure; now let's learn how to store named profiles.

  1. Open the credentials file which was created earlier by aws configure, or create a file at C:\Users\your_profile\.aws\credentials on your Windows machine.
  2. Now, you can provide multiple Access key IDs and Secret access keys in the credentials file, one per named profile, in the format shown below, and save the file.
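For example, a credentials file holding the default profile plus an additional named profile called sandbox (the key values are placeholders you should replace with your own) would look like this:

[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = vIaGXXXXXXXXXXXXXXXXXXXX

[sandbox]
aws_access_key_id = AKIAYYYYYYYYYYYYYYYY
aws_secret_access_key = wJbHYYYYYYYYYYYYYYYYYYYY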

The credentials file helps you set up your profiles. This way, you can create multiple profiles and avoid confusion while connecting to specific AWS accounts.

  1. Similarly, create another file C:\Users\your_profile\.aws\config in the same directory.
  2. Next, add the region for each profile into the config file, make sure to use the same profile names you provided in the credentials file, and save the file in the format shown below. This file allows you to work with a specific region.
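For example, a config file matching the credentials file above could look like the following; note that in the config file a named profile entry is prefixed with the word profile, while the default profile is not (the regions are just examples):

[default]
region = us-east-1

[profile sandbox]
region = us-east-2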

~/.aws/credentials (Linux & Mac) or %USERPROFILE%\.aws\credentials (Windows)

~/.aws/config (Linux & Mac) or %USERPROFILE%\.aws\config (Windows)

Verify Named profile from AWS CLI by running a simple command

Let's open a command prompt and run the below command to verify the sandbox profile which you created earlier in the two files (%USERPROFILE%\.aws\credentials and %USERPROFILE%\.aws\config).

aws ec2 describe-instances --profile sandbox

If you get a response, it shows you were able to configure the named profile successfully.

Configuring AWS credentials using environment variable

Let's open a command prompt and set the AWS access key and secret key using environment variables. Using set to set an environment variable changes the value only for the current command prompt session, or until you set the variable to a different value.
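For example, on the Windows command prompt the session-scoped variables could be set as below (the key values are placeholders); the AWS CLI automatically picks these up for subsequent commands in the same session:

set AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
set AWS_SECRET_ACCESS_KEY=vIaGXXXXXXXXXXXXXXXXXXXX
set AWS_DEFAULT_REGION=us-east-1
aws ec2 describe-instances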

Conclusion

In this tutorial, you learned how to install the AWS CLI and configure it using an AWS Access key ID, Secret Access Key, and region. You also learned how to generate an Access key ID and Secret Access Key by creating an IAM user.

Python Tutorial 1 – All you need to know !!

Table of Content

  1. Understanding the difference between high level and low level language.
  2. Interpreted v/s Compiled Language
  3. How Python Works ?
  4. Python Interpreter
  5. Introduction to Python
  6. Python Standard Library
  7. Python Implementations
  8. Python Installation
    • Python Installation on Linux Machine
    • Python Installation on Windows Machine
    • Python Installation on MacOS
  9. Python Keywords
  10. Python Data types
  11. Conclusion

Understanding the difference between High & Low level Languages

High Level Language: A high level language is easier to understand, that is, it is human readable. It is either compiled or interpreted. It consumes more memory and is slower in execution. It is portable. It requires a compiler or interpreter for translation.

The fastest translator that converts a high level language into machine code is the compiler.

Low Level Language: Low level languages are machine friendly, that is, machines can read the code but humans cannot easily. They consume less memory and are fast to execute. They cannot be ported. They require an assembler for translation.

Introduction to Python

Python is a high level language which is used for designing, deploying and testing in lots of places. It is consistently ranked among today's most popular programming languages. It is a dynamic, object oriented language that also supports procedural styles, and it runs on all major hardware platforms. Python is an interpreted language.

Interpreted v/s Compiled Language

Compiled Language: A compiled language is first compiled into the instructions of the target machine, that is, machine code. For example: C, C++, C#, COBOL.

Interpreted Language: An interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them to have been previously compiled into a machine language program; such languages are known as interpreted languages. For example: JavaScript, Perl, Python, BASIC.

How Python Works ?

  • The first step is to write a Python program, such as test.py
  • Then the Python interpreter compiles the program internally and converts it into bytecode, that is, test.pyc
  • Now the bytecode (test.pyc) is further converted into machine code (such as 10101010100010101010) using a virtual machine
  • Finally the program is executed and the output is displayed.

Python Interpreter

Python includes both an interpreter and a compiler, and the compiler is invoked implicitly.

  • In the case of Python version 2, the Python interpreter compiles the source file such as file.py and keeps the result in the same directory with the extension .pyc (file.pyc)
  • In the case of Python version 3, the Python interpreter compiles the source file such as file.py and keeps the result in the __pycache__ subdirectory
  • Python does not save the compiled bytecode when you run a script directly; rather, Python recompiles the script each time you run it.
  • Python saves bytecode files only for modules you import; however, running the python command with the -B flag avoids saving compiled bytecode to disk.
  • You can also directly execute a Python script on a Unix operating system if you add a shebang line at the top of your script.
#! /usr/bin/env python
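To see the bytecode-caching behaviour from the bullets above in action, the following minimal sketch (using a throwaway module name, mymodule) shows the __pycache__ directory being written on import and the -B flag suppressing it:

echo "print('hello from mymodule')" > mymodule.py
python3 -c "import mymodule"       # Python 3 writes __pycache__/mymodule.cpython-XX.pyc
python3 -B -c "import mymodule"    # -B prevents Python from writing .pyc files to disk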

Python Standard Library

The Python standard library contains several well designed Python modules for convenient reuse, such as representing data, processing text, processing data, interacting with the operating system and filesystems, and web programming. Python modules are basically Python programs in a file (for example abc.py) which can be imported.

There are also extension modules that allow Python code to access functionality supplied by the underlying OS or other software components, such as GUIs, databases, networking and XML parsing. You can also wrap existing C/C++ libraries into Python extension modules.
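As a small sketch of what this convenient reuse looks like in practice, the snippet below uses only standard library modules (os, json, pathlib) to interact with the filesystem and process data; no third party installation is needed:

import os
import json
from pathlib import Path

# Interact with the operating system and filesystem
print(os.getcwd())                      # current working directory
for path in Path(".").glob("*.py"):     # list Python files in this directory
    print(path.name)

# Process data as JSON text
data = {"language": "Python", "modules": ["os", "json", "pathlib"]}
print(json.dumps(data, indent=2))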

Python Implementations

Python is more than a language; you can utilize implementations of Python in many ways, such as:

  • CPython: CPython is an interpreter, compiler, and set of built-in and optional extension modules, all coded in the C language. Python code is converted into bytecode before being interpreted.
  • IronPython: A Python implementation for the Microsoft-designed Common Language Runtime (CLR), most commonly known as .NET, which is now open source and ported to Linux and macOS.
  • PyPy: PyPy is a fast and flexible implementation of Python, coded in a subset of Python itself, able to target several lower-level languages and virtual machines using advanced techniques such as type inference.
  • Jython: A Python implementation for any Java Virtual Machine (JVM) compliant with Java 7 or better. With Jython, you can use all Java libraries and frameworks; it supports only Python 2 as of now.
  • IPython: Enhances standard CPython to make it more powerful and convenient for interactive use. IPython extends the interpreter's capabilities by allowing abbreviated function call syntax, using the question mark to query an object's documentation, and so on.

Python Installation

Python Installation on Linux Machine

If you are working on the latest platforms, you will usually find Python already installed on the system; you can check by typing the python command.

At times Python is not installed but the binaries are available, which you can install using the RPM tool or APT on Linux machines; for Windows, use the MSI (Microsoft Installer).

(Screenshots: installing Python on an Ubuntu 16 server and an Ubuntu 18 server)
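For example, on an Ubuntu server where Python 3 is not present, it could be installed with APT roughly as follows:

sudo apt update
sudo apt install python3
python3 --version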

Python Installation on Windows Machine

Python can be installed on Windows in a few steps; the installation steps can be found here.

Python Installation on MacOS

Python 2 comes preinstalled on macOS, but you should always install the latest Python version. The popular third-party macOS open source package manager Homebrew offers, among many other packages, excellent versions of Python, both v2 and v3.

  • To install Homebrew, open Terminal or your favorite OS X terminal emulator and run
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
  • Add homebrew directory at the top of your PATH environment variable.
export PATH="/usr/local/opt/python/libexec/bin:$PATH"
  • Now install Python3 using the following commands.
brew install python3
  • Verify the installation of Python using the below command.
python3 --version

Python Keywords

Python reserves certain words for special purposes, known as keywords, such as:

  • continue
  • def
  • del
  • break
  • class
  • if
  • return

Python Data types

Whatever program you write in Python works with data, and this data contains values known as objects. Each object has a type, known as its data type. These data types are either mutable, that is modifiable, or immutable, that is unmodifiable.

Python contains numerous built in types such as:

  1. Numbers : These are of two types that is integer and floating point.
    • Decimal integer: 1 , 2
    • Binary integer: 0b010101
    • Octal integer: 0o1
    • Hexadecimal integer: 0x1
    • Floating point: 0.0 , 2.e0
  2. Strings

Python strings are collections of characters surrounded by quotes " ". There are different ways in which strings are created.

  • Using str(), which converts a value to a string:
str("This is method 1 to display string")
  • Directly calling it in quotes
"Hello, this is method2 to display string"
  • Using Format:

The format() method uses curly brackets {} as placeholders which are replaced by the values you pass in.

Example 1

In the below example you will notice that the first curly bracket is replaced by the first value, that is a, and the second is replaced by b.

'{} {}'.format('a','b')

Example 2

In the below example, if you provide a numerical value inside the curly braces, it is treated as an index and the corresponding value is retrieved, so '{0} {0}' prints 'a a'.

'{0} {0}'.format('a','b')

Example 3

In the below example, if you provide key-value pairs, then values are substituted according to their keys.

'{a} {b}'.format(a='apple', b='ball')
  • Using f string

f-strings are prefixed with either f or F before the first quotation mark. Let's take an example.

a=1
f"a is {a}" 
  • Template strings are designed to offer a simple string substitution mechanism. These built-in methods work for tasks where simple word substitutions are necessary.
from string import Template
new_value = Template("$a b c d")       #  a will be substituted here
x = new_value.substitute(a = "Automation")
y = new_value.substitute(a = "Automate")
print(x,y)
  3. Tuples: They are an immutable ordered sequence of items, that is, they cannot be modified. The items of a tuple are arbitrary objects and may be of different types. For example (10,20,30) or (3.14,5.14,6.14)

4. Lists: The list is a mutable ordered sequence of items. The items of a list are arbitrary objects and may be of different types. For example [2,3,"automate"]

5. Dictionaries: A dictionary is written as key:value pairs, where the key is an expression giving the item's key and the value is an expression giving the item's value. For example {'x':42, 'y':3.14, 'z':7} # Dictionary with three items and str keys

6. Sets: A set stores multiple items in a single variable. It contains unordered and unindexed data. Sets cannot have two items with the same value. An example of a set is fruits = {"apple", "banana", "cherry"}

Data type         Mutable or immutable
String            Immutable (cannot be modified)
Tuples            Immutable (cannot be modified)
Integers          Immutable (cannot be modified)
List              Mutable (can be modified)
Sets              Mutable (can be modified)
Floating point    Immutable (cannot be modified)
Dictionaries      Mutable (can be modified)
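A quick way to see mutability in action is the snippet below: modifying a list succeeds, while attempting the same on a tuple raises a TypeError.

my_list = [2, 3, "automate"]
my_list[0] = 99                 # lists are mutable, so this works
print(my_list)                  # [99, 3, 'automate']

my_tuple = (10, 20, 30)
try:
    my_tuple[0] = 99            # tuples are immutable, so this raises TypeError
except TypeError as err:
    print("Cannot modify a tuple:", err)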

Conclusion

In this tutorial you learned a basic introduction to Python and why it is an interpreted, high level language. You also learned lots of details about Python data types, keywords and how Python works, along with a handful of examples. Hope this tutorial helps you, and if you like it please share it.

Jump to another tutorial-2 for some more interesting facts about Python.

How to Launch AWS Elasticsearch using Terraform in Amazon Account

It is important to have a search engine for your website or applications. When it comes to automating great features such as load balancing and scalability for websites, Amazon provides its own managed service known as Amazon Elasticsearch.

In this tutorial you will learn what Amazon Elasticsearch is and how to create an Amazon Elasticsearch domain using Terraform.

Table of Contents

  1. What Is Amazon Elasticsearch Service?
  2. Prerequisites:
  3. Terraform Configuration Files and Structure
  4. Configure Terraform files for AWS Elasticsearch
  5. Verify AWS Elasticsearch in Amazon Account
  6. Conclusion

What Is Amazon Elasticsearch Service?

Amazon Elasticsearch Service is a managed service which deploys and scales Elasticsearch clusters in the cloud. Elasticsearch is an open source analytics and search engine which is used to perform real time application monitoring and log analytics.

Amazon Elasticsearch Service provisions all the resources for Elasticsearch clusters and launches them. It also automatically replaces failed Elasticsearch nodes in the cluster.

Features of Amazon Elasticsearch Service

  • It can scale up to 3 PB of attached storage.
  • It works with various instance types.
  • It easily integrates with other services, such as IAM for security, VPC, AWS S3 for loading data, AWS CloudWatch for monitoring and AWS SNS for alert notifications.

Prerequisites:

  • You must have an AWS account. If you don't have one, please create an AWS account.
  • You must have an IAM user with Administrator rights and credentials configured using the AWS CLI or a named profile.
  • Terraform must be installed on your machine.

Terraform Configuration Files and Structure

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf: This file contains the code that creates or imports AWS resources.
  • vars.tf: This file defines the variable types and optionally sets default values.
  • output.tf: This file defines the outputs of AWS resources. The output is generated after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. You need to provide the details of the provider, such as AWS, Oracle or Google, so that Terraform can communicate with that provider and work with its resources.

Configure Terraform files for AWS Elasticsearch

In this demonstration we will create a simple Amazon Elasticsearch domain using Terraform from a Windows machine.

  • Create a folder on your desktop on the Windows machine and name it Terraform-Elasticsearch
  • Now create a file main.tf inside the folder and paste the below content
resource "aws_elasticsearch_domain" "es" {
  domain_name           = var.domain
  elasticsearch_version = "7.10"

  cluster_config {
    instance_type = var.instance_type
  }
  snapshot_options {
    automated_snapshot_start_hour = 23
  }
  vpc_options {
    subnet_ids = ["subnet-0d8c53ffee6d4c59e"]
  }
  ebs_options {
    ebs_enabled = var.ebs_volume_size > 0 ? true : false
    volume_size = var.ebs_volume_size
    volume_type = var.volume_type
  }
  tags = {
    Domain = var.tag_domain
  }
}


resource "aws_elasticsearch_domain_policy" "main" {
  domain_name = aws_elasticsearch_domain.es.domain_name
  access_policies = <<POLICIES
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "es:*",
            "Principal": "*",
            "Effect": "Allow",
            "Resource": "${aws_elasticsearch_domain.es.arn}/*"
        }
    ]
}
POLICIES
}
  • Create one more file vars.tf inside the same folder and paste the below content
variable "domain" {
    type = string
}
variable "instance_type" {
    type = string
}
variable "tag_domain" {
    type = string
}
variable "volume_type" {
    type = string
}
variable "ebs_volume_size" {}
  • Create one more file output.tf inside the same folder and paste the below content
output "arn" {
    value = aws_elasticsearch_domain.es.arn
} 
output "domain_id" {
    value = aws_elasticsearch_domain.es.domain_id
} 
output "domain_name" {
    value = aws_elasticsearch_domain.es.domain_name
} 
output "endpoint" {
    value = aws_elasticsearch_domain.es.endpoint
} 
output "kibana_endpoint" {
    value = aws_elasticsearch_domain.es.kibana_endpoint
}
  • Create one more file provider.tf inside the same folder and paste the below content:
provider "aws" {      # Defining the Provider Amazon  as we need to run this on AWS   
  region = "us-east-1"
}
  • Create one more file terraform.tfvars inside the same folder and paste the below content
domain = "newdomain" 
instance_type = "r4.large.elasticsearch"
tag_domain = "NewDomain"
volume_type = "gp2"
ebs_volume_size = 10
  • Now your files and code are ready for execution .
  • Initialize the terraform using below command.
terraform init
  • Terraform initialized successfully; now it's time to run the plan, which is like a blueprint before deployment. We generally use plan to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, now it's time to actually deploy the code using apply.
terraform apply
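Once apply completes, you can optionally print the values declared in output.tf from the same folder; for example, the commands below print the Elasticsearch and Kibana endpoints:

terraform output endpoint
terraform output kibana_endpoint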

Verify AWS Elasticsearch in Amazon Account

The Terraform commands (init, plan and apply) all ran successfully. Now let's verify on the AWS Management Console that all the resources were created properly.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘Elasticsearch’, and click on the Elasticsearch menu item.
  • Now you will see that the newdomain is created successfully
  • Click on newdomain to see all the details.

Conclusion

In this tutorial you learned what Amazon Elasticsearch is and how to create an Amazon Elasticsearch domain using Terraform.

So, now you have a strong fundamental understanding of AWS Elasticsearch; which website are you going to put on Elasticsearch with Terraform?

Getting Started with Amazon Elasticsearch Service and Kibana

It is important to have a search engine for your website or applications. When it comes to automating great features such as load balancing and scalability for websites, Amazon provides its own managed service known as Amazon Elasticsearch.

In this tutorial you will learn what Amazon Elasticsearch is and how to create an Amazon Elasticsearch domain using the AWS Management Console and then search the data using Kibana.

Table of contents

  1. What Is Amazon Elasticsearch Service?
  2. Creating the Amazon Elasticsearch Service domain
  3. Upload data to Amazon Elasticsearch for indexing
  4. Search documents using Kibana in Amazon Elasticsearch
  5. Conclusion

What Is Amazon Elasticsearch Service?

Amazon Elasticsearch Service is a managed service which deploys and scales Elasticsearch clusters in the cloud. Elasticsearch is an open source analytics and search engine which is used to perform real time application monitoring and log analytics.

Amazon Elasticsearch Service provisions all the resources for Elasticsearch clusters and launches them. It also automatically replaces failed Elasticsearch nodes in the cluster.

Features of Amazon Elasticsearch Service

  • It can scale up to 3 PB of attached storage.
  • It works with various instance types.
  • It easily integrates with other services, such as IAM for security, VPC, AWS S3 for loading data, AWS CloudWatch for monitoring and AWS SNS for alert notifications.

Creating the Amazon Elasticsearch Service domain

In this tutorial you will see how to create an Elasticsearch cluster using the AWS Management Console. Let's start.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘Elasticsearch’, and click on the Elasticsearch menu item.
  • Now, one thing to note here is that the name of an Amazon Elasticsearch domain is the same as that of the Elasticsearch cluster; that is, domains are clusters with the settings, instance types, instance counts, and storage resources that you specify.
  • Now, click on Create a new domain.
  • Select the deployment type as development and testing domain
  • Now Under Configure domain provide the Elasticsearch domain name as “firstdomain” . A domain is the collection of resources needed to run Elasticsearch. The domain name will be part of your domain endpoint.
  • Under Data nodes, choose the t3.small.elasticsearch and ignore rest of the settings and click on NEXT
  • Under Network configuration, choose Public access. For Fine-grained access control, choose Create master user. Provide a user name as user and password as Admin@123. Fine-grained access control keeps your data safe.
  • For Domain access policy, choose Allow open access to the domain. Access policies control whether a request is accepted or rejected when it reaches the Amazon Elasticsearch Service domain
  • Now click on NEXT till the end and create the domain. It takes a few minutes for the domain to launch.
  • Click on the firstdomain Elasticsearch domain

Upload data to Amazon Elasticsearch for indexing

  • You can load streaming data into your Amazon Elasticsearch Service (Amazon ES) domain from many different sources. Some sources, like Amazon Kinesis Data Firehose and Amazon Cloud Watch Logs, have built-in support for Amazon ES. Others, like Amazon S3, Amazon Kinesis Data Streams, and Amazon DynamoDB, use AWS Lambda functions as event handlers
  • In this tutorial we will directly use a sample data to upload the data.
  • Click on the Kibana link shown on the domain dashboard, log in using the username user and password Admin@123, and then click on Add data
  • As this is just a demonstration, let's use sample data and add the sample e-commerce orders.
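If you also want to index a document of your own, a single document can be pushed to the domain with curl, roughly as sketched below; YOUR_DOMAIN_ENDPOINT is a placeholder for the endpoint shown on the domain page, orders is a hypothetical index name, and the credentials are the master user you created earlier:

curl -XPUT -u user:Admin@123 "https://YOUR_DOMAIN_ENDPOINT/orders/_doc/1" -H "Content-Type: application/json" -d '{"order_id": 1, "item": "notebook"}'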

Search documents using Kibana in Amazon Elasticsearch

Kibana is a popular open source visualization tool which works with the AWS Elasticsearch service. It provides an interface to monitor and search the indexes. Let's use Kibana to search the sample data which you just uploaded in AWS ES.

  • Click on Discover option from the main menu to search the data.
  • Now you will notice that Kibana will search the data and populate for you. You can modify the timelines and many other fields accordingly.

Kibana returned the data when we searched the dashboard using the sample data which you uploaded.

Conclusion

In this tutorial you learned what Amazon Elasticsearch is and how to create an Amazon Elasticsearch domain using the AWS Management Console. You also learned how to upload sample data to AWS ES, although this can be done in various ways, such as from S3, DynamoDB, etc.

So, now you have a strong fundamental understanding of AWS Elasticsearch; which site are you going to implement it in?

How to Setup AWS WAF and Web ACL using Terraform on Amazon Cloud

It is always a good practice to monitor and make sure your applications or websites are fully protected. AWS cloud provides a service known as AWS WAF that protects your web applications from common web exploits.

Let's learn everything about AWS WAF (Web Application Firewall) and use Terraform to create it.

Table of Contents

  1. What is AWS WAF ?
  2. Prerequisites
  3. Terraform Configuration Files and Structure
  4. Configure Terraform files for AWS WAF
  5. Deploy AWS WAF using Terraform commands
  6. Conclusion

What is AWS WAF ?

AWS WAF stands for Amazon Web Services Web Application Firewall. Using AWS WAF you can monitor all the HTTP or HTTPS requests that are forwarded from users to Amazon CloudFront, Amazon Load Balancer, Amazon API Gateway REST APIs, etc. It also controls who can access the required content or data based on specific conditions such as the source IP address.

AWS WAF protects your web applications from common web exploits. For a detailed view of AWS WAF, please see the other blog post: What is AWS Web Application Firewall?

Prerequisites:

  • You must have an AWS account. If you don't have one, please create an AWS account.
  • You must have an IAM user with Administrator rights and credentials configured using the AWS CLI or a named profile.
  • Terraform must be installed on your machine.

Terraform Configuration Files and Structure

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf: This file contains the code that creates or imports AWS resources.
  • vars.tf: This file defines the variable types and optionally sets default values.
  • output.tf: This file defines the outputs of AWS resources. The output is generated after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. You need to provide the details of the provider, such as AWS, Oracle or Google, so that Terraform can communicate with that provider and work with its resources.

Configure Terraform files for AWS WAF

In this demonstration we will create a simple Amazon WAF instance using Terraform on Windows machine.

  • Create a folder on your desktop or any location on windows Machine ( I prefer it on Desktop). Now create a file main.tf inside the folder you’re in and paste the below content
# Creating the IP Set

resource "aws_waf_ipset" "ipset" {
   name = "MyFirstipset"
   ip_set_descriptors {
     type = "IPV4"
     value = "10.111.0.0/20"
   }
}

# Creating the Rule which will be applied on Web ACL component

resource "aws_waf_rule" "waf_rule" { 
  depends_on = [aws_waf_ipset.ipset]
  name        = var.waf_rule_name
  metric_name = var.waf_rule_metrics
  predicates {
    data_id = aws_waf_ipset.ipset.id
    negated = false
    type    = "IPMatch"
  }
}

# Creating the Rule Group which will be applied on Web ACL component

resource "aws_waf_rule_group" "rule_group" {  
  name        = var.waf_rule_group_name
  metric_name = var.waf_rule_group_metrics

  activated_rule {
    action {
      type = "COUNT"
    }
    priority = 50
    rule_id  = aws_waf_rule.waf_rule.id
  }
}

# Creating the Web ACL component in AWS WAF

resource "aws_waf_web_acl" "waf_acl" {
  depends_on = [ 
     aws_waf_rule.waf_rule,
     aws_waf_ipset.ipset,
      ]
  name        = var.web_acl_name
  metric_name = var.web_acl_metics

  default_action {
    type = "ALLOW"
  }
  rules {
    action {
      type = "BLOCK"
    }
    priority = 1
    rule_id  = aws_waf_rule.waf_rule.id
    type     = "REGULAR"
 }
}
  • Create one more file vars.tf inside the same folder and paste the below content
variable "web_acl_name" {
  type = string
}
variable "web_acl_metics" {
  type = string
}
variable "waf_rule_name" {
  type = string
}
variable "waf_rule_metrics" {
  type = string
}
variable "waf_rule_group_name" {
  type = string
}
variable "waf_rule_group_metrics" {
  type = string
}
  • Create one more file output.tf inside the same folder and paste the below content
output "aws_waf_rule_arn" {
   value = aws_waf_rule.waf_rule.arn
}

output "aws_waf_rule_id" {
   value = aws_waf_rule.waf_rule.id
}

output "aws_waf_web_acl_arn" {
   value = aws_waf_web_acl.waf_acl.arn
}

output "aws_waf_web_acl_id" {
   value = aws_waf_web_acl.waf_acl.id
}

output "aws_waf_rule_group_arn" {
   value = aws_waf_rule_group.rule_group.arn
}

output "aws_waf_rule_group_id" {
   value = aws_waf_rule_group.rule_group.id
}
  • Create one more file provider.tf inside the same folder and paste the below content
provider "aws" {      
  region = "us-east-1"
}
  • Again, Create one more file terraform.tfvars inside the same folder and paste the below content
web_acl_name = "myFirstwebacl"
web_acl_metics = "myFirstwebaclmetics"
waf_rule_name = "myFirstwafrulename"
waf_rule_metrics = "myFirstwafrulemetrics"
waf_rule_group_name = "myFirstwaf_rule_group_name"
waf_rule_group_metrics = "myFirstwafrulgroupmetrics"
  • Now your files and code are all set and your directory should look something like below.

Deploy AWS WAF using Terraform commands

  • Now, let's initialize Terraform by running the following init command.
terraform init
  • Terraform initialized successfully; now it's time to run the plan, which is like a blueprint before deployment. We generally use plan to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, now it's time to actually deploy the code using the apply command.
terraform apply
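As an optional sanity check before opening the console, you can print the IDs and ARNs declared in output.tf and list the (classic) WAF web ACLs from the AWS CLI, assuming your CLI credentials are configured as described earlier:

terraform output
aws waf list-web-acls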

By now, you should have created the Web ACL and other components of AWS WAF with Terraform. Let’s verify by manually checking in the AWS Management Console.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘WAF’, and click on the WAF menu item.
  • Now you should be on the AWS WAF page; let's verify each component starting from the Web ACL.
  • Now verify the IP set
  • Now, verify the rules which are in the Web ACL.
  • Next, let's verify the Web ACL rule groups.

Conclusion

In this tutorial you learned about AWS WAF, that is the Web Application Firewall, and how to set it up in the Amazon cloud using Terraform.

It is very important to protect your website from attacks. So which Website do you plan to protect ?

Hope this tutorial helped you and if so please comment and share it with your friends.

How to Install and Setup Terraform on Windows Machine step by step

There are lots of automation tools and scripts available for provisioning infrastructure, and one of the finest tools to automate your infrastructure is Terraform, which is also known as infrastructure as code.

Learn how to Install and Setup Terraform on Windows Machine step by step.

Table of Content

  1. What is Terraform ?
  2. Prerequisites
  3. How to Install Terraform on Windows 10 machine
  4. Creating an IAM user in AWS account with programmatic access
  5. Configuring the IAM user Credentials on Windows Machine
  6. Run Terraform commands from Windows machine
  7. Launch a EC2 instance using Terraform
  8. Conclusion

What is Terraform ?

Terraform is a tool for building, versioning and changing cloud infrastructure. Terraform is written in the Go language, and the syntax of its configuration files is HCL, which stands for HashiCorp Configuration Language and is much easier than YAML or JSON.

Terraform has been in use for quite a while now. I would say it's an amazing tool to build and change infrastructure in a very effective and simple way. It's used with a variety of cloud providers such as Amazon AWS, Oracle, Microsoft Azure, Google Cloud and many more. I hope you will love to learn it and utilize it.

Prerequisites

  • You must have an AWS account. If you don't have one, please create an AWS account.
  • A Windows machine on which you can install Terraform.

How to Install Terraform on Windows machine

  • Open your favorite browser and download the appropriate version of Terraform from HashiCorp’s download Page. This tutorial will download terraform 0.13.0 version
  • Make a folder on your C:\ drive where you can put the Terraform executable something Like  C:\tools where you can put binaries.
  • Extract the zip file to the folder C:\tools
  • Now Open your Start Menu and type in “environment” and the first thing that comes up should be Edit the System Environment Variables option. Click on that and you should see this window.
  • Now Under System Variables and look for Path and edit it
  • Click New and add the folder path where terraform.exe is located to the bottom of the list
  • Click OK on each of the menus.
  • Now, Open Command Prompt or PowerShell to check if terraform is properly added in PATH by running the command terraform from any location.
(Screenshots: running the terraform command in Command Prompt and PowerShell)
  • Verify the installation was successful by entering terraform --version. If it returns a version, you’re good to go.

Creating an IAM user in AWS account with programmatic access

For Terraform to connect to an AWS service, you should have an IAM user with an Access key ID and secret access key in the AWS account, which you will configure on your local machine so that Terraform can connect to the AWS account.

There are two ways to connect to an AWS account, the first is providing a username and password on the AWS login page on the browser and the other way is to configure Access key ID and secret keys on your machine and then use command-line tools to connect programmatically.

  1. Open your favorite web browser and navigate to the AWS Management Console and log in.
  2. While in the Console, click on the search bar at the top, search for ‘IAM’, and click on the IAM menu item.
  3. To create a user, click on Users → Add user, provide the user name myuser, and make sure to tick the Programmatic access checkbox in Access type, which enables an access key ID and secret access key; then hit the Permissions button.
  4. Now select the "Attach existing policies directly" option under set permissions and look for the "AdministratorAccess" policy using the filter policies search box. This policy will allow myuser to have full access to AWS services.
  5. Finally click on Create user.
  6. Now, the user is created successfully and you will see an option to download a .csv file. Download this file, which contains the IAM user's (myuser) Access key ID and Secret access key, which you will use later in the tutorial to connect to AWS services from your local machine.

Configuring the IAM user Credentials on Windows Machine

Now, you have an IAM user myuser created. The next step is to set up the downloaded myuser credentials on the local machine, which you will use to connect to AWS services via API calls.

  1. Create a new file, C:\Users\your_profile\.aws\credentials on your local machine.
  2. Next, Enter the Access key ID and Secret access key from the downloaded csv file into the credentials file in the same format and save the file.
[default]     # Profile Name
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = vIaGXXXXXXXXXXXXXXXXXXXX

credentials files help you to set your profile. By this way, it helps you to create multiple profiles and avoid confusion while connecting to specific AWS accounts.

  1. Similarly, create another file C:\Users\your_profile\.aws\config in the same directory
  2. Next, add the “region” into the config file and make sure to add the name of the profile which you provided in the credentials file, and save the file. This file allows you to work with a specific region.
[default]   # Profile Name
region = us-east-2

Run Terraform commands from Windows machine

By Now , you have already installed Terraform on your windows Machine, Configured IAM user (myuser) credentials so that Terraform can use it and connect to AWS services in Amazon account.

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf: This file contains the code that creates or imports AWS resources.
  • vars.tf: This file defines the variable types and optionally sets default values.
  • output.tf: This file defines the outputs of AWS resources. The output is generated after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. You need to provide the details of the provider, such as AWS, Oracle or Google, so that Terraform can communicate with that provider and work with its resources.

Launch a EC2 Instance Using Terraform

In this demonstration we will create a simple Amazon Web Service (AWS) EC2 instance and run Terraform commands on Windows machine.

  • Create a folder on your desktop or any location on windows Machine ( I prefer it on Desktop)
  • Now create a file main.tf inside the folder you’re in and paste the below content
resource "aws_instance" "my-machine" {  # Resource block to define what to create
  ami = var.ami         # ami is required as we need ami in order to create an instance
  instance_type = var.instance_type             # Similarly we need instance_type
}
  • Create one more file vars.tf inside the same folder and paste the below content
variable "ami" {         # Declare the variable ami which you used in main.tf
  type = string      
}

variable "instance_type" {        # Declare the variable instance_type used in main.tf
  type = string 
}

Next, selecting the instance type is important. Click here to see a list of different instance types. To find the image ID (ami), navigate to the Launch Instance Wizard and search for ubuntu in the search box to get all the Ubuntu image IDs. This tutorial will use the Ubuntu Server 18.04 LTS image.

  • Create one more file output.tf inside the same folder and paste the below content
output "ec2_arn" {
  value = aws_instance.my-machine.arn     # Value depends on resource name and type ( same as that of main.tf)
}  
  • Create one more file provider.tf inside the same folder and paste the below content:
provider "aws" {      # Defining the Provider Amazon  as we need to run this on AWS   
  region = "us-east-1"
}
  • Create one more file terraform.tfvars inside the same folder and paste the below content
ami = "ami-013f17f36f8b1fefb" 
instance_type = "t2.micro"
  • Now your files and code are ready for execution .
  • Initialize the terraform using below command.
terraform init
  • Terraform initialized successfully; now it's time to run the plan, which is like a blueprint before deployment. We generally use plan to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, now it's time to actually deploy the code using apply.
terraform apply
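As an optional check, you can print the instance ARN declared in output.tf and list instances in the region set in provider.tf from the AWS CLI:

terraform output ec2_arn
aws ec2 describe-instances --region us-east-1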

Great job, the Terraform commands executed successfully. Now we should have an EC2 instance launched in AWS.

It generally takes a minute or so to launch an instance, and yes, we can see that the instance launched successfully in the us-east-1 region as expected.

Conclusion

In this tutorial you learned what Terraform is, how to install and set up Terraform on a Windows machine, and launched an EC2 instance in an AWS account using Terraform.

Keep Terraforming !!

Hope this tutorial helps you in understanding and setting up Terraform on a Windows machine. Please share it with your friends.

What is AWS WAF (Web Application Firewall) and how to Setup WAF in AWS account.

It is always a good practice to monitor and make sure your applications or websites are fully protected. AWS cloud provides a service known as AWS WAF that protects your web applications from common web exploits.

Let's learn everything about AWS WAF (Web Application Firewall).

Table of Content

  1. What is AWS WAF( Web Application Firewall) ?
  2. Components of AWS WAF ( Web Application Firewall)
  3. Prerequisites
  4. Getting started with AWS WAF ( Web Application Firewall)
  5. Conclusion

What is AWS WAF ?

AWS WAF stands for Amazon Web Services Web Application Firewall. Using AWS WAF you can monitor all the HTTP or HTTPS requests that are forwarded from users to Amazon CloudFront, Amazon Load Balancer, Amazon API Gateway REST APIs, etc. It also controls who can access the required content or data based on specific conditions such as the source IP address.

AWS WAF protects your web applications from common web exploits.

Benefits of AWS WAF

  • This is helpful when you want Amazon CloudFront, Amazon Load Balancer, or Amazon API Gateway REST APIs to serve content to particular users or block particular users.
  • You can configure AWS WAF to count the requests that match those properties without allowing or blocking those requests
  • Protects you from web attacks using conditions you specify.
  • It provides you real time metrics and details of web requests.

Components of AWS WAF

The AWS WAF service contains some important components; let's discuss each of them now.

Web ACL (Web Access Control List): It is used to protect a set of AWS resources. After you create a web ACL, you add rules inside it. Rules define specific conditions which are applied to web requests coming from users and how to handle those web requests. You also set a default action in the web ACL that decides whether to allow or block requests that pass through these rules.

Rules: Rules contain statements which define the criteria. If the criteria are matched, the web requests are handled according to the rule's action; otherwise they fall through to the next rule or the default action. Rules are based on criteria like IP addresses or address ranges, country or geographical location, strings that appear in the request, etc.

AWS Managed Rules rule group: You can use rules individually or in reusable rule groups. There are two types of rule groups: AWS Managed rule groups and rule groups you manage yourself.

IP sets and regex pattern sets: AWS WAF stores some more complex information in sets that you use by referencing them in your rules.

  • An IP set is a group of IP addresses and IP address ranges that you want to use together in a rule statement. IP sets are AWS resources.
  • A regex pattern set provides a collection of regular expressions that you want to use together in a rule statement. Regex pattern sets are AWS resources.

Prerequisites

  • You must have an AWS account in order to set up AWS WAF. If you don't have one, please create an AWS account.
  • You must have an IAM user with Administrator rights and credentials set up using the AWS CLI or an AWS profile.

Getting started with AWS WAF

In order to work with and set up AWS WAF, the most important component to create is the Web ACL. In AWS WAF there is no single "WAF" resource that gets created; it is just the name of the service that works with CloudFront, Load Balancers and many more services. Let's get started.

Creating a Web ACL

You use a Web Access Control List (ACL) to protect a set of AWS resources. You create a web ACL and define it by adding rules that block or allow requests and specify to what extent they should be allowed or blocked. You can use individual rules or groups of rules. To create a Web ACL:

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘WAF’, and click on the WAF menu item.
  • Now click on Create Web ACL
  • Now Provide the Name , Cloud Watch metric Name and Resource type as choose CloudFront distributions.
  • Next, click on Add AWS Resources, select the CloudFront distribution which you already have, and then hit NEXT
  • Now, in Add rules and rule groups, select Add my own rules and rule groups, which means you need to add the values on your own.
    • Provide the name as myrule123
    • In Type choose Regular Rule
    • Inspect as Header
    • Header field as User-Agent
  • Select "if a request matches the statement" for this tutorial; however, you can also use other available options such as a string match condition, a geo match condition or an IP match condition.
  • While building the rules there are 3 types of Rule Actions options available such as
    • Count: AWS WAF counts the request but doesn’t determine whether to allow it or block it
    • Allow: AWS WAF allows the request to be forwarded to the protected AWS resource
    • Block: AWS WAF blocks the request and sends a block response back to the client.
  • You can instruct AWS WAF to insert custom headers into the original HTTP request for rule actions or web ACL default actions that are set to allow or count. You can only add to the request. You can’t modify or replace any part of the original request
  • Hit the next button till the end and then Create Web ACL
  • The rules you added above were manual rules, but at times you need to add AWS Managed rules; to do that, select AWS Managed rule groups and add the rule groups you need.
  • So your Web ACL is ready and should look as shown below.
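As an optional check from the command line, web ACLs created for CloudFront can also be listed with the AWS CLI; CloudFront-scoped web ACLs live in us-east-1, and this assumes your CLI credentials are already configured:

aws wafv2 list-web-acls --scope CLOUDFRONT --region us-east-1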

The most important component of the AWS WAF service is the Web ACL, which you created, and inside it you created rules and applied them. Once a Web ACL is created with rules, you associate it with CloudFront, Load Balancers, etc. to protect them from being exploited by attacks.

Conclusion

In this tutorial you learned about AWS WAF, that is the Web Application Firewall, and how to set it up in the Amazon cloud. It is very important to protect your website from attacks. So which website do you plan to protect?

What is CloudFront and how to Setup CloudFront with AWS S3 and ALB Distributions

Internet users are always impressed by websites with high speed and fast loading times. Why not have a website that loads and delivers its content within seconds?

In this tutorial you will learn what CloudFront is and how to set up CloudFront distributions in the Amazon cloud. CloudFront helps users retrieve their content quickly by utilizing the concept of caching.

Table of Content

  1. What is Cloud Front?
  2. Prerequisites
  3. Creating an IAM user in AWS account with programmatic access
  4. Configuring the IAM user Credentials on local Machine
  5. Setting up Amazon CloudFront
  6. How to Use Custom URLs in CloudFront by Adding Alternate Domain Names (CNAMEs)
  7. Using Amazon EC2 or Other Custom Origins
  8. Conclusion

What is Cloud Front?

CloudFront is an Amazon web service that helps speed up the distribution of content, either static or dynamic, such as .html, .css, .js, images and many more, to users. CloudFront delivers the content from edge locations when a request is made by users.

By utilizing CloudFront, the content is delivered to users very quickly from edge locations. In case the content is not available in an edge location, CloudFront requests it from the configured origin. Origins can be an AWS S3 bucket, an HTTP server, a Load Balancer, etc.

Use cases of Cloud Front

  • It accelerates the delivery of your static website content such as images, style sheets, JavaScript and so on.
  • Live streaming of video
  • Also, using Lambda@Edge with CloudFront adds more ways to customize CloudFront.

How CloudFront delivers content to your users

  • A user makes a request to a website or application, let's say an HTML page http://www.example.com/mypage.html
  • The DNS server routes the request to a CloudFront edge location.
  • CloudFront checks if the request can be fulfilled from the edge location.
  • If the edge location has the files, then CloudFront sends them back to the user; otherwise
  • CloudFront queries the origin server
  • The origin server sends the files back to the edge location, and then CloudFront sends them back to the user.

How CloudFront works with regional edge caches

This kind of cache brings the content closer to the users to help performance. Regional edge caches help with all types of content, particularly content which becomes less popular over time, such as user-generated content (video, photos, or artwork) and e-commerce assets such as product photos and videos.

This cache sits in between the origin server and the edge locations. An edge location stores content in its cache, but when the content gets too old it removes it from the cache. That is where the regional edge cache comes in, which has a wider coverage to store lots of content.

Prerequisites

  • You must have an AWS account in order to set up AWS CloudFront. If you don't have one, please create an AWS account.
  • You must have an IAM user with Administrator rights and credentials set up using the AWS CLI or an AWS profile. You will see the steps to create the IAM user and configure credentials below.
  • AWS S3 bucket

Creating an IAM user in AWS account with programmatic access

In order to connect to an AWS service, you should have an IAM user with an Access key ID and secret access key in the AWS account, which you will configure on your local machine to connect to the AWS account.

There are two ways to connect to an AWS account, the first is providing a username and password on the AWS login page on the browser and the other way is to configure Access key ID and secret keys on your machine and then use command-line tools to connect programmatically.

  1. Open your favorite web browser and navigate to the AWS Management Console and log in.
  2. While in the Console, click on the search bar at the top, search for ‘IAM’, and click on the IAM menu item.
  3. To create a user, click on Users → Add user, provide the user name myuser, and make sure to tick the Programmatic access checkbox in Access type, which enables an access key ID and secret access key; then hit the Permissions button.
  4. Now select the "Attach existing policies directly" option under set permissions and look for the "AdministratorAccess" policy using the filter policies search box. This policy will allow myuser to have full access to AWS services.
  5. Finally click on Create user.
  6. Now, the user is created successfully and you will see an option to download a .csv file. Download this file, which contains the IAM user's (myuser) Access key ID and Secret access key, which you will use later in the tutorial to connect to AWS services from your local machine.

Configuring the IAM user Credentials on local Machine

Now, you have an IAM user myuser created. The next step is to set up the downloaded myuser credentials on the local machine, which you will use to connect to AWS services via API calls.

  1. Create a new file, C:\Users\your_profile\.aws\credentials on your local machine.
  2. Next, Enter the Access key ID and Secret access key from the downloaded csv file into the credentials file in the same format and save the file.
[default]     # Profile Name
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = vIaGXXXXXXXXXXXXXXXXXXXX

credentials files help you to set your profile. By this way, it helps you to create multiple profiles and avoid confusion while connecting to specific AWS accounts.

  1. Similarly, create another file C:\Users\your_profile\.aws\config in the same directory
  2. Next, add the “region” into the config file and make sure to add the name of the profile which you provided in the credentials file, and save the file. This file allows you to work with a specific region.
[default]   # Profile Name
region = us-east-2

Setting up Amazon CloudFront

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘CloudFront’, and click on the CloudFront menu item.
  • Click on Create distributions and then Get Started
  • In the Origin setting provide the S3 bucket name and keep other values as default.
  • For the settings under Default Cache Behavior Set and Distribution Settings, accept the default values and then click on Create distribution.
  • The AWS S3 bucket was already created before we started this tutorial. Let's upload an index.html file (containing the text hello) to the bucket and set the permissions to public access as shown below.
  • Now check the Amazon S3 URL to verify that your content is publicly accessible
  • Check the CloudFront URL by hitting <Domain Name>/index.html, and it should show the same result as your index.html file contains
domainname/index.html
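A quick way to confirm that CloudFront is serving the object is to request it with curl and inspect the response headers; on a repeat request you should typically see an x-cache header reporting a hit from CloudFront (the domain below is the example distribution domain used later in this tutorial):

curl -I http://dsx78lsseoju7.cloudfront.net/index.html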

How to Use Custom URLs in CloudFront by Adding Alternate Domain Names (CNAMEs)

In CloudFront, as seen above, the CloudFront URL is generated with a domain name *.cloudfront.net by default. If you wish to use your own domain name, that is a CNAME such as abc.com, in the URL, you can assign it yourself.

  • In our case by default the URL is :
http://dsx78lsseoju7.cloudfront.net/index.html
  • If you wish to use an alternate domain such as the one below, follow the steps below
http://abc.com/index.html
  • Go back to CloudFront Page and look for the distribution where you need to change the domain and click on Edit
  • Provide the domain name; you must already have an SSL certificate in place for it.
  • Finally Create an alias resource record set in Route 53 by visiting Route53 Page .
  • Go to the Route53 Page by searching on the top of the AWS Page
  • Click on the Hosted Zone and then click on Create Record
  • Now, here, provide the name of the record (which can be anything), the record type, and set Route traffic to the CloudFront distribution

After successfully creating the Route 53 record, you can verify that the index page (http://mydomain.abc.com/index.html) works fine.

Using Amazon EC2 or Other Custom Origins

A custom origin can be an Amazon Elastic Compute Cloud (AWS EC2) instance, for example an HTTP server. You need to provide the DNS name of the server as the custom origin.

Below are some key points to keep in mind while setting the custom origin as AWS EC2.

  • Host and serve the same content on all servers in the same way.
  • Restrict access requests to the HTTP and HTTPS ports that your custom origin (the EC2 instance) listens on.
  • Synchronize the clocks of all servers in your implementation.
  • Use an Elastic Load Balancing load balancer to handle traffic across multiple Amazon EC2 instances
  • When you create your CloudFront distribution, specify the URL of the load balancer for the domain name of your origin server

Conclusion

In this tutorial you learnt what CloudFront is and how to set up CloudFront distributions in the Amazon cloud. CloudFront helps users retrieve their content quickly by utilizing caching.

By now you know what CloudFront is and how to set it up. What are you going to manage with CloudFront next?

The Ultimate Guide: Getting Started with Groovy and Groovy Scripts

Groovy is a powerful, dynamic language with static-typing and static-compilation capabilities for the Java platform, aimed at improving developer productivity. Groovy syntax is simple and easy. It saves a lot of code and effort compared to doing the same thing in Java, thus increasing developer productivity.

In this tutorial you will learn what Groovy is and how to install Groovy on Windows and Linux machines. Later you will see two examples which help you kickstart writing Groovy scripts.

Table of Content

  1. What is Groovy?
  2. Prerequisites
  3. How to Install Groovy on Windows Machine
  4. How to Install Groovy on Ubuntu Machine
  5. Groovy Syntax
  6. Groovy Examples
  7. Conclusion

What is Groovy?

Groovy is a powerful static as well as dynamic language which is almost the same as Java with a few differences. Groovy is vastly used in Jenkins pipelines. It integrates with Java libraries very well to deliver powerful enhancements and features, including domain-specific language authoring and scripting capabilities.

Basic Features of Groovy

  • Groovy supports all Java libraries and it has its own libraries as well.
  • Its syntax is similar to Java's, but simpler
  • It has both static and dynamic nature
  • It has great extensibility for the language and tooling.
  • Last but not least, it is a free, open-source language used by a lot of developers.

Prerequisites

  • Ubuntu 18 Machine or Windows machine
  • Make sure Java 8 or later is installed on the machine. To check the Java version, run the following command.
java --version
On Ubuntu Machine
On Windows Machine

How to Install Groovy on Ubuntu Machine

Installing Groovy on an Ubuntu machine is pretty straightforward. Let's install Groovy on an Ubuntu 18 machine.

  • First, update the Ubuntu official repositories by running the apt command.
sudo apt update
  • Now, download and run the SDKMAN installation script (Groovy will be installed through SDKMAN) by running the curl command.
curl -s get.sdkman.io | bash
  • Now install Groovy using the sdk command, as sketched below.
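A minimal sketch of this step, assuming SDKMAN was installed to its default location by the script above:

source "$HOME/.sdkman/bin/sdkman-init.sh"    # load the sdk command into the current shell
sdk install groovy                           # install the latest stable Groovy
groovy -version                              # verify the installation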

How to install Groovy on Windows machine

  • On the Groovy download page you will see the Windows installer package; once you click on it, the file downloads automatically.
  • Now click on the downloaded Windows installer package and the installation will begin.
  • Accept the license agreement
  • Make sure you select Typical for Setup Type and click on Install
  • Now Groovy is successfully installed on the Windows machine. Open the Groovy Console from the Start menu and run a simple command to test.

Groovy Syntax

Shebang line

  • The shebang line allows you to run Groovy scripts directly from the command line, provided Groovy is installed and the groovy command is available on the PATH
#!/usr/bin/env groovy
println "Hello from the shebang line"

Strings

  • Strings are basically a chain of characters. Groovy strings can be written with single quotes ', double quotes ", or even triple quotes '''
'This is an example of a single-quoted string'

"This is an example of a double-quoted string"

def threequotes = '''
line1
line2
line3
'''

String interpolation

Groovy expressions can be interpolated, which is just like replacing a placeholder with its value. Placeholders in Groovy are surrounded by ${} or prefixed with $. Also, if you pass a GString to a method that requires a plain String, convert it by calling toString() on the GString, as in the sketch below.

def name  = "automate"
def greet =  "Hello $name"
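Continuing the snippet above, a small sketch of printing the interpolated value and converting the GString to a plain String when an API expects one:

println greet                       // prints: Hello automate
String plain = greet.toString()     // GString converted to a plain java.lang.String
println plain.getClass().getName()  // prints: java.lang.String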

Groovy Examples

Lets now see two examples of Groovy

  1. JsonSlurper: JsonSlurper is a class that parses JSON text or reader content into Groovy data structures. The example below shows:
    • creating an instance of the JsonSlurper class
    • using the parseText method of the JsonSlurper class to parse some JSON text
    • accessing the values in the JSON string via their keys.
import groovy.json.JsonSlurper 

class Example {
   static void main(String[] args) {
      def jsonSlurper = new JsonSlurper() // creating an instance of the JsonSlurper class
      def object = jsonSlurper.parseText('{ "name":  "John", "ID" : "1"}') 
 	
      println(object.name);
      println(object.ID);
   } 
}
  2. Catching Exceptions
    • Accessing an array with an index value which is greater than the size of the array
class Example {
   static void main(String[] args) {
      try {
         def arr = new int[3];
         arr[5] = 5;
      }catch(ArrayIndexOutOfBoundsException ex) {
         println("Catching the Array out of Bounds exception");
      }catch(Exception ex) {
         println("Catching the exception");
      }
		
      println("Let's move on after the exception");
   } 
}

Conclusion

This tutorial was pretty straightforward and meant to get you started with Groovy. In this tutorial you learnt what Groovy is and how to install Groovy on Windows and Linux machines. Later you worked through two examples which help you kickstart writing Groovy scripts.

Groovy is used in various places such as Jenkins pipelines. What do you plan to code with Groovy next?

The Ultimate Guide: Getting Started with GitLab

With so much software development and testing around different applications and products, you certainly need an effective way to deploy them. With so many microservices and so much code, it becomes very crucial for developers and system engineers to collaborate and get a successful product ready.

Managing the code is now very well taken care of by Git, which is a distributed code repository; on top of it, deployment becomes very effective and easily managed with the help of GitLab.

In this tutorial you will learn all about GitLab, managing pipelines, projects, and much more that a DevOps engineer should know to get started.

Table of Content

  1. What is GitLab?
  2. Prerequisites
  3. Creating Projects on GitLab
  4. Creating a Repository on GitLab
  5. Creating a Branch on GitLab
  6. Get started with GitLab CI/CD Pipelines
  7. Pipeline Architecture
  8. Conclusion

What is GitLab?

Git is a distributed version control system designed to handle small to large projects with speed and efficiency. On top of Git, GitLab is a fully integrated platform to manage the DevOps lifecycle and software development.

It is a single application to manage the entire DevOps lifecycle.

Prerequisites

  • You should have a GitLab account handy. If you don't have one, create it from here

Creating Projects on GitLab

GitLab projects hold all the files, folders, code and documents you need to build your applications.

  • To create a project in GitLab click on Projects on the top and then click on Create a Project
  • Now click on Create blank project
  • On the Blank project tab, provide the project name; as this is a demo, we will keep this repository Private.
  • Now the project is successfully created.
  • You are ready to add files, either by creating/uploading them manually on GitLab
  • or by pushing the files from the command line, by cloning the repository and adding the files as shown below.
git clone https://gitlab.com/XXXXXXXXX/XXXXX.git
cd firstgitlab
touch README.md
git add README.md
git commit -m "add README"
git push -u origin master

Creating a Repository on GitLab

A repository is a place where you store all your code and related files. It is part of a Project. You can create multiple repositories in a single project.

To create a new repository, all you need to do is create a new project or fork an existing project. Once you create a new project, you can add new files via UI or via command line.

Creating a Branch on GitLab

  • By now you have seen GitLab project creation. By default, if you add any file it will be checked into the master branch.
  • Click on New file, then select Dockerfile, add the content, and commit the file by adding a comment.
  • You will see that the Dockerfile is now added to the master branch under the FirstGitLab project.
  • So far we created a file which by default gets added to the master branch. But if you need a separate branch, click on Branches and then hit New branch.
  • Provide a name for the new branch.

Get started with GitLab CI/CD Pipelines

Before you start CI/CD part on GitLab make sure to have following

  • runners: runners are agents that run your CI/CD jobs. To check the available runners, go to Settings > CI/CD and expand Runners. As long as you have at least one active available runner, you will be able to run the job.
  • .gitlab-ci.yml file: in this file you define your CI/CD jobs, the decisions the runner should take under specific conditions, the structure of the jobs, and the order of the jobs. Go to Project overview, click on New file, and name it .gitlab-ci.yml
  • Now Paste the below content
build-job: 
    stage: build 
    script:
       - echo "Hello, $GITLAB_USER_LOGIN"
test-job:
    stage: test
    script: 
       - echo "Testing CI/CD Pipeline"
deploy-job:
    stage: deploy
    script:
       - echo "Deploy from the $CI_COMMIT_BRANCH branch" 
  • Now a pipeline should automatically trigger for this pipeline configuration. Click on Pipelines to validate and view the status of the pipeline.
  • To view details of a job, click the job name, for example build.
  • Pipelines can be scheduled to run automatically as and when required.

Pipeline Architecture

Pipelines are the fundamental building blocks for CI/CD in GitLab. There are three main ways to structure your pipelines, each with their own advantages. These methods can be mixed and matched if needed:

  • Basic: Good for straightforward projects where all the configuration is stored in one place. This is the simplest pipeline in GitLab. It runs everything in the build stage at the same time and, once all of those finish, it runs everything in the test stage the same way, and so on.

If Build A completes, the pipeline waits for Build B; once both are complete, it moves to the test stage. Similarly, if Test B completes it waits for Test A, and once both are complete they move to the deploy stage.

Directed Acyclic Graph: Good for large, complex projects that need efficient execution, where you want everything to run as quickly as possible.

If Build A and Test A are both completed, the corresponding deploy job can start even if Test B is still running, as in the sketch below.
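A minimal sketch of a DAG-style pipeline using the needs keyword (the job names here are placeholders):

build_a:
  stage: build
  script:
    - echo "Building A"
test_a:
  stage: test
  needs: ["build_a"]
  script:
    - echo "Testing A"
deploy_a:
  stage: deploy
  needs: ["test_a"]
  script:
    - echo "Deploying A"

With needs, test_a starts as soon as build_a finishes instead of waiting for the whole build stage.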

Child/Parent Pipelines: Good for monorepos and projects with lots of independently defined components. These pipelines are defined mostly using the trigger keyword, as in the sketch below.
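A minimal sketch of a parent job that triggers a child pipeline (the path to the child configuration file is a placeholder):

trigger-microservice-a:
  trigger:
    include: microservice_a/.gitlab-ci.yml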

Conclusion

GitLab is the first single application for software development, security, and operations that enables continuous DevOps. GitLab makes the software lifecycle faster and improves the speed of business.

GitLab provides solutions for each of the stages of the DevOps lifecycle. So which application are you going to build?

Hope you learnt a lot from this guide and that it helped you. If you liked it, please share.

Helm Charts: A Simple way to deploy application on Kubernetes

Kubernetes deployments can be done manually, but it may take lots of effort and tons of hours to build and organize YAML files in a structured way. Helm charts are one of the best practices for building efficient clusters in Kubernetes.

In this tutorial you will learn step by step how to create a Helm chart, set it up, and deploy it on a web server. Helm charts simplify application deployment on a Kubernetes cluster.

Table of content

  1. What is Helm and Helm charts
  2. Prerequisites
  3. Installing Helm on windows machine
  4. Installing Helm on ubuntu machine
  5. Installing Minikube on Ubuntu machine
  6. Creating Helm charts
  7. Configure Helm Chart
  8. Deploy Helm chart
  9. View the Deployed Application
  10. Verify the Pods which we created using Helm chart
  11. Conclusion

What is Helm and Helm charts

Helm is a package manager for Kubernetes which makes application deployment and management easier. Helm is a command line tool which allows you to create Helm charts.

A Helm chart is a collection of templates and settings which defines a set of Kubernetes resources. In a Helm chart we define all the resources which are needed as part of the application. Helm communicates with the Kubernetes cluster using the REST API.

Working with Helm charts makes the job of deployment and management easier. It also supports versioning.

Prerequisites

  • Docker should be installed on ubuntu machine.
  • kubectl should be installed on ubuntu machine.

Installing Helm on windows machine

To install Helm on Windows machine

  • Download the Windows amd64 zip of Helm and extract it to the preferred location
  • Now open a command prompt, hop to the same location, and type helm.exe
  • Now, check the version of helm

Installing Helm on ubuntu machine

To install Helm on ubuntu machine

  • Download the  latest version of Helm package
 wget https://get.helm.sh/helm-v3.4.1-linux-amd64.tar.gz
  •  Unpack the helm package manager
tar xvf helm-v3.4.1-linux-amd64.tar.gz
  • Now move linux-amd64/helm to /usr/local/bin
sudo mv linux-amd64/helm /usr/local/bin
  • Check the version of helm
helm version

Installing Minikube on Ubuntu machine

minikube is a local Kubernetes distribution focused on making it easy to learn and develop for Kubernetes. Let's install it.

  • Download and Install the minikube package on ubuntu machine.
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb

sudo dpkg -i minikube_latest_amd64.deb
  • Now start minikube as a normal user (not as the root user).
minikube start
  • Now verify if minikube is installed properly by running the following commands
minikube status

Creating Helm charts

Before we create a Helm chart, make sure helm is installed properly. To check, run the below command.

which helm
  • Starting a new Helm chart requires one simple command
helm create automate
  • As soon as the chart is created, a folder with the same name (automate) is created containing different files.

Configure Helm Chart

By now the Helm chart has been created using just a single command. But to deploy using the Helm chart we need to configure a few of the files which got generated with the helm create command.

  1. Chart.yaml: contains details of the chart such as the name, description, API version to be used, chart version to be deployed, etc. You don't need to update this file.
  2. templates directory: next, the most important part of the chart is the templates directory, which holds all the configurations for your application that will be deployed into the cluster, such as ingress.yaml, service.yaml, etc. You don't need any modification in this directory either.
  3. charts: this folder contains no files initially. Other dependent charts are added here if required (optional). Skip this as well.
  4. values.yaml: the main file which contains all the configuration related to the deployment. Customize the values.yaml file according to your deployment:
    • replicaCount: set to 1, which means only one pod will come up (no change required)
    • pullPolicy: update to Always.
    • nameOverride: automate-app
    • fullnameOverride: automate-chart
    • There are two networking options available: a) ClusterIP, which exposes the service on a cluster-internal IP, and b) NodePort, which exposes the service on each Kubernetes node's IP address. We will use NodePort here.

Your values.yaml should look something like the one below.

replicaCount: 1

image:
  repository: nginx
  pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: "automate-app"
fullnameOverride: "automate-chart"

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: "automateinfra"

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
service:
  type: NodePort
  port: 80

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}

Deploy Helm chart

Now that you've made the necessary modifications, you can deploy the Helm chart with a single command: give the release a name, point to the chart directory, and pass the values file (optionally targeting a namespace with --namespace):

helm install automate-chart automate/ --values automate/values.yaml
  • The helm install command deploys the app. Now run both export commands as shown in the helm install command's output.
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services automate-chart)

export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")

View the Deployed Application

  • Run echo command as shown in the output of helm install command.
echo http://$NODE_IP:$NODE_PORT

Verify the Pods which we created using Helm chart

You already saw that the application was deployed successfully and the Nginx page loaded. But to verify from the Kubernetes end, let's run the following commands.

kubectl get nodes

kubectl get pods

Conclusion

After following the outlined step-by-step instructions, you have a Helm chart created, set up, and deployed on a web server. Helm charts simplify application deployment on a Kubernetes cluster.

Hope you liked this tutorial and it helped you. Please share with your friends.

The Ultimate Guide: Getting Started with Python( Python for beginners)

Python’s standard library is very extensive, offering a wide range of facilities . The library contains built-in modules (written in C) that provide access to system functionality such as file I/O that would otherwise be inaccessible to Python programmers, as well as modules written in Python that provide standardized solutions for many problems that occur in everyday programming

In this tutorial we will learn everything a beginner and a DevOps engineer should know about Python. We will cover the basic definition of Python and some brilliant examples which will be enough to get you started with Python, and for sure you will love it.

Table of content

  1. What is Python?
  2. Prerequisites
  3. Variables
  4. Strings
  5. Dictionary
  6. Lists
  7. Python Built-in functions
  8. Handling Exceptions
  9. Python Functions
  10. Python Searching
  11. Conclusion

What is Python?

Python is a high-level, object-oriented, interactive, general-purpose scripting programming language. Python can be used as a backend as well as a frontend language. It focuses on objects over functions.

Python is also an interpreted language: code is compiled to bytecode and executed by the interpreter at runtime rather than being compiled to machine code ahead of time. It works with a variety of protocols such as HTTPS, FTP, SMTP and many more. The latest version at the time of writing is 3.9.2. Python works very well with most editors, such as Atom, Notepad++ and Vim.

Python works on Windows, Linux, macOS and more. On Windows you can run commands in the Windows terminal, and on Linux and macOS it runs easily in the shell without needing to save the program every time.

Prerequisites

  • Python doesn't come installed on Windows, so make sure you have Python installed on your Windows machine. To see how to install Python on Windows click here.
  • For macOS and Linux, Python comes installed by default but could be an older version such as Python 2. To check the Python version run the command.
python
This shows the older version, Python 2
  • Also use the command python3 (in lowercase) to check if Python 3 is installed.
python3
  • In case both commands show either Python 2 or "python not found", run the following command to install Python 3.
sudo apt install python3

Variables

Variables store information; it could be a number, a symbol, a name, etc., which can be referenced later. Let's see some examples of Python variables.

  • There are few points one must remember when using variables such as
    • Variables cannot start with digits
    • Spaces are not allowed in variables.
    • Avoid using Python keywords

Example 1:

  • In the below example, var is a variable and the value of var is "this is a variable"
var="this is a variable" # Defining the variable
print(var)    # Printing the value of variable

Example 2:

  • In the below example we declare three variables:
    • first_word and second_word store the values
    • add_words substitutes the variables with their values
first_word="hello"
second_word="devops"
add_words=f"{first_word}{second_word}"
print(add_words)
  • If you wish to print the words on different lines then use "\n" as below
first_word="hello"
second_word="devops"
add_words=f"{first_word}\n{second_word}"
print(add_words)

Strings

Python strings are collections of characters surrounded by quotes " ". There are different ways in which strings can be created.

  1. Using the str() built-in:
str("This is method 1 to display string")

2. Directly calling it in quotes

"Hello, this is method2 to display string"

3. Using Format:

This was introduced in Python3 and uses curly brackets {} to replace the values.

Example 1

In the below example you will notice that the first curly bracket is replaced by the first value, that is 'a', and the second is replaced by 'b'.

'{} {}'.format('a','b')

Example 2

In the below example, if you provide a numerical value inside the curly braces it is treated as an index and the corresponding positional value is retrieved. Here both placeholders use index 0, so 'a' is printed twice.

'{0} {0}'.format('a','b')

Example 3

In the below example, if you provide key-value pairs then the values are substituted according to the keys.

'{a} {b}'.format(a='apple', b='ball')

4. Using f string

f-strings are prefixed with either f or F before the first quotation mark. Let's take an example.

a=1
f"a is {a}" 

5. Template strings are designed to offer a simple string substitution mechanism. These built-in methods work for tasks where simple word substitutions are necessary.

from string import Template
new_value = Template("$a b c d")       #  a will be substituted here
x = new_value.substitute(a = "Automation")
y = new_value.substitute(a = "Automate")
print(x,y)

Dictionary

In simple words, dictionaries are key-value pairs where keys can be numbers, strings or custom objects. A dictionary is written as key-value pairs separated by commas within curly braces.

map = {'key-1': 'value-1', 'key-2': 'value-2'}
  • You can access the particular key using following way
map['key-1']

Lets see an example to access values using get() method

my_dictionary = {'key-1': 'value-1', 'key-2': 'value-2'}
my_dictionary.get('key-1')    # returns the value of key-1, which is value-1
print(my_dictionary.values()) # prints the values of all keys
print(my_dictionary.keys())   # prints all the keys
my_dictionary.get('key-3')    # returns None, as key-3 is missing
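A couple of further operations you may find handy, continuing the same dictionary (the key names below are just examples):

my_dictionary['key-3'] = 'value-3'              # add or update a key
print(my_dictionary.get('key-4', 'not found'))  # get() with a default instead of None
del my_dictionary['key-3']                      # remove a key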

Lists

Lists are ordered collections of items, represented using square brackets containing a comma-separated list of items.

[0, 1, 2, 3, 4, 5, 6, 7, 8, 9] # Example of List
  • We can add or remove items from a list using built-in methods such as pop(), insert(), append() and many more. Let us see an example.

The contents of one list can be added to another using the extend method:

list1 =['a', 'b', 'c', 'd']
print(list1)                        # Printing only List 1
list2 = ['e', 'f']
list2.extend(list1)
print(list2)                        # Printing List 2 and also 1
  • Use insert() to add one new guest to the beginning of your list.
  • Use insert() to add one new guest to the middle of your list.
  • Use append() to add one new guest to the end of your list, as in the sketch below.
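For example, a small sketch of these methods on a hypothetical guest list:

guests = ['bob', 'carol']
guests.insert(0, 'alice')                # add one new guest to the beginning
guests.insert(len(guests) // 2, 'dave')  # add one new guest to the middle
guests.append('eve')                     # add one new guest to the end
guests.pop()                             # remove the last guest
print(guests)                            # ['alice', 'dave', 'bob', 'carol']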

Python Built-in functions

There are various functions already embedded in the Python library; these are known as built-in functions. You invoke a function by typing the function name, followed by parentheses.

  • To check the Python version on windows or Linux machine run the following command.
python3 --version
  • To print the output of a program , use the print command.
print("Hello Devops")
  • To generate a list of number through a range built-in function run the following command.
list(range(0,10))

Handling Exceptions

Exceptions are errors which cause a program to stop if not handled properly. There are many built-in exceptions, such as IOError, KeyError, and ImportError. Let's see a simple example below.

  • Here we define a list of characters and store it in the variable devops
  • while True means the try block keeps executing until an exception breaks out of the loop.
  • .pop() is a built-in method to remove items one by one.
  • As soon as all the characters are removed, the except block catches the IndexError and prints the message.
devops = ['d','e','v','o','p','s']
 
while True:
    try:
        devop = devops.pop()
        print(devop)
    except IndexError as e:
        print("I think I did lot of pop ")
        print(e)
        break
 
Output:
 
s
p
o
v
e
d
I think I did lot of pop
pop from empty list

Python Functions

Earlier in this tutorial we saw that there are numerous built-in functions, some of which you used above. But you can also define and create your own functions. Let's see the syntax of a function.

def <FUNCTION NAME>(<PARAMETERS>):
    <CODE BLOCK>
<FUNCTION NAME>(<ARGUMENTS>)

Lets look at some of the Python functions examples

EXAMPLE 1

  • Here each argument uses the order of the arguments to assign values; these are known as positional arguments.
  • a and b are parameters, which are required to run the function
  • 1 and 2 are arguments, which are used to pass values to the function (arguments are pieces of information passed from a function call to a function)
def my_function(a,b):
  print(f" value of a is {a}")
  print(f" value of b is {b}")
my_function(1, 2)

EXAMPLE 2:

  • With default arguments, assign each parameter a default value:
def my_function(a=3,b=4):
  print(f" value of a is {a}")
  print(f" value of b is {b}")
my_function()

EXAMPLE 3

Passing an arbitrary number of arguments: when you are not sure about the number of arguments to be passed, we call them arbitrary. Let's look at an example.

  • Find the even numbers among the arguments

mylist = []
def myfunc(*args):      #  args is to take any number of arguments together in myfunc
    for item in args:
        if int(item)%2 == 0:
            mylist.append(item)
    print(mylist)
myfunc(5,6,7,8,9)

EXAMPLE 4

  • IF condition: print the smaller of the two numbers if both numbers are even, else print the greater of the two numbers

def two_of_less(a,b):    # Defining the Function where a and b variables are parameters
    if a%2==0 and b%2==0:
      print(min(a,b))       # using built in function min()
    if a%2==1 or b%2==1:
      print(max(a,b))       # using built in function max()
two_of_less(2,4)

EXAMPLE 5

  • Write a function that takes a two-word string and reports whether both words begin with the same letter

def check(a):
    m = a.split()
    if m[0][0] == m[1][0]:
        print("Both the words in the string start with the same letter")
    else:
        print("Both the words in the string don't start with the same letter")
check('devops Engineer')

Python Searching

The need to match patterns in strings comes up again and again. You could be looking for an identifier in a log file or checking user input for keywords or a myriad of other cases.

Regular expressions use a string of characters to define search patterns. The Python re package offers regular expression operations similar to those found in Perl.

Lets look at example which will give you overall picture of in built functions which we can use with re module.

  • You can use the re.search function, which returns a re.Match object only if there is a match.
import re
import datetime
 
name_list = '''Ezra Sharma <esharma@automateinfra.com>,
   ...: Rostam Bat   <rostam@automateinfra.com>,
   ...: Chris Taylor <ctaylor@automateinfra.com,
   ...: Bobbi Baio <bbaio@automateinfra.com'''
 
# Some commonly used ones are \w, which is equivalent to [a-zA-Z0-9_] and \d, which is equivalent to [0-9]. 
# You can use the + modifier to match for multiple characters:
 
print(re.search(r'Rostam', name_list))
print(re.search('[RB]obb[yi]',  name_list))
print(re.search(r'Chr[a-z][a-z]', name_list))
print(re.search(r'[A-Za-z]+', name_list))
print(re.search(r'[A-Za-z]{5}', name_list))
print(re.search(r'[A-Za-z]{7}', name_list))
print(re.search(r'[A-Za-z]+@[a-z]+\.[a-z]+', name_list))
print(re.search(r'\w+', name_list))
print(re.search(r'\w+\@\w+\.\w+', name_list))
print(re.search(r'(\w+)\@(\w+)\.(\w+)', name_list))
 

OUTPUT

<re.Match object; span=(49, 55), match='Rostam'>
<re.Match object; span=(147, 152), match='Bobbi'>
<re.Match object; span=(98, 103), match='Chris'>
<re.Match object; span=(0, 4), match='Ezra'>
<re.Match object; span=(5, 10), match='Sharm'>
<re.Match object; span=(13, 20), match='esharma'>
<re.Match object; span=(13, 38), match='esharma@automateinfra.com'>
<re.Match object; span=(0, 4), match='Ezra'>
<re.Match object; span=(13, 38), match='esharma@automateinfra.com'>
<re.Match object; span=(13, 38), match='esharma@automateinfra.com'>

Conclusion

In this tutorial you learnt everything a beginner and a DevOps engineer should know to get started. This tutorial covered the definition of Python and some brilliant examples which are enough to get you started with Python, and for sure you will love it.

By Now, you are ready to build some exciting python programs. Hope you liked this tutorial and please share it with your friends.

How to Launch an AWS Redshift Cluster using the AWS Management Console in an Amazon account

Although there are lots of storage services which store ample data, when it comes to analyzing that data, performance has always remained a challenge. Performance issues include being unable to retrieve data in time, storage leakage, etc.

To solve these issues, Amazon provides its own managed service for both storing gigabytes to terabytes of data and then analyzing it, and that service is AWS Redshift.

In this tutorial you will learn about Amazon's data warehouse and analytics service AWS Redshift, what an AWS Redshift cluster is, and how to create an AWS Redshift cluster using the AWS Management Console.

Table of Content

  1. What is Amazon Redshift?
  2. What is Amazon Redshift Cluster?
  3. Amazon Redshift Cluster overview
  4. Prerequisites
  5. How to Create a basic Redshift Cluster using AWS Management console
  6. Conclusion

What is Amazon Redshift?

Amazon Redshift is an AWS analytics service which is used to analyze data. Amazon Redshift allows us to store massive amounts of data and analyze it using queries on the database. It is a fully managed service, which means you don't need to worry about scalability and infrastructure.

The first step before uploading the data is to create a set of nodes, which is known as an Amazon Redshift cluster. Once the cluster is created you can upload tons of data (in gigabytes) and then start analyzing it.

Amazon Redshift manages everything for you, such as monitoring, scaling, applying patches, upgrades, and capacity, whatever is required at the infrastructure end.

What is Amazon Redshift Cluster?

An Amazon Redshift cluster can contain a single node or more than one node, depending on the requirements. A set of nodes is known as a cluster. A multi-node AWS Redshift cluster contains one leader node, and the other nodes are known as compute nodes.

You can create AWS Redshift cluster using various ways such as:

  • AWS Command Line interface ( AWS CLI )
  • AWS Management console
  • AWS SDK (Software Development Kit) libraries

Amazon Redshift Cluster overview

Lets see some of the concepts of Amazon Redshift cluster.

  • Redshift cluster snapshots can be created either manually or automatically and are stored in an AWS S3 bucket.
  • An administrator assigns IAM permissions on the Redshift cluster if any users want to access it.
  • Amazon CloudWatch is primarily used to capture the health and performance of the Amazon Redshift cluster.
  • As soon as you create an Amazon Redshift cluster, one database is also created. This database is used to query and analyze the data. While you provision the cluster you need to provide a master user, which is the superuser for the database and has all rights.
  • When a client queries the Redshift cluster, all requests are received by the leader node, which parses them and develops query execution plans. The leader node coordinates with the compute nodes and then provides the final results to the clients.

Prerequisites

  • You must have an AWS account in order to set up an AWS Redshift cluster. If you don't have an AWS account, please create one from here: AWS account.
  • You must have access to create IAM role and AWS Redshift cluster.
  • (Optional) : If you have AWS Administrator rights then it will be helpful.

How to Create a basic Redshift Cluster using AWS Management console

Before we start creating a Redshift cluster we need an IAM role which Redshift will assume to work with other services such as AWS S3 etc. So lets get started.

  • Open your browser, go to the AWS Management Console, search for IAM at the top, and click on Roles
  • Next, click on Create Role.
  • Next, select the service as Redshift
  • Now scroll down to the bottom and you will see "Select your use case"; here choose Redshift – Customizable, then choose Next: Permissions.
  • Now attach the AmazonS3ReadOnlyAccess policy and click Next
  • Next, skip tagging for now, just click on Next: Tags and then Review, and finally hit Create Role.
  • The IAM role is created successfully; keep the IAM role ARN handy with you.
  • Now in the AWS Management Console search for Redshift at the top of the page.
  • Now click on Create cluster and provide the name of the cluster. As this is a demo, we will use the free trial cluster.
  • Now provide the database details and save them for the future. Also associate the IAM role which we created earlier.
  • Finally click on Create cluster
  • By now the AWS Redshift cluster is created successfully and available for use.
  • Let's validate our database connection by running a simple query. Click on Query data
  • Now enter the database credentials to make the connection to the AWS Redshift cluster (the dev database was created by default)
  • Now run a query as below
    • Some of the tables inside the database, like events and date, were created by default.
select * from date

This confirms that the AWS Redshift cluster was created successfully and we are able to run queries on it.

Conclusion

In this tutorial we learnt about Amazon's data warehouse and analytics service AWS Redshift, what an AWS Redshift cluster is, and how to create an AWS Redshift cluster using the AWS Management Console.

By learning this service, you are now ready to work with gigabytes and terabytes of data and analyze it with the best performance.

Ultimate Guide on how to add apt-repository and PPA repositories and working with ubuntu repository

As a Linux administrator it is very important to know how you manage your applications and software. Every command and every package installation requires critical attention before executing it.

So in this ultimate guide we will learn everything you should know about Ubuntu repositories, how to add apt and PPA repositories, and how to work with Ubuntu repositories and apt commands.

Table of Content

  1. What is ubuntu repository?
  2. How to add a ubuntu repository?
  3. Manually Adding apt-repository in ubuntu
  4. Adding PPA Repositories
  5. Working with Ubuntu repositories
  6. How apt or apt-get command work with Ubuntu Repository
  7. Conclusion

What is ubuntu repository?

APT repository is a network shared server or a local directory containing deb packages and metadata files that are readable by the APT tools. When installing packages using the Ubuntu Software Center or the command line utilities such as apt or apt-get the packages are downloaded from one or more apt software repositories.

On Ubuntu and all other Debian based distributions, the apt software repositories are defined in the /etc/apt/sources.list file or in separate files under the /etc/apt/sources.list.d/ directory.

The names of the repository files inside the /etc/apt/sources.list.d/ directory must end with .list.

How to add apt-repository in ubuntu ?

add-apt-repository is basically a Python script that helps you add repositories on Ubuntu.

Let's take an example of adding the MongoDB repository on an Ubuntu machine.

  • The add-apt-repository utility is included in the software-properties-common package.
sudo apt update
sudo apt install software-properties-common
  • Import the repository's public key by running the apt-key command
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4
  • Add the MongoDB repository using the command below.
sudo add-apt-repository 'deb [arch=amd64] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse'
Verify in /etc/apt/sources.list that the repository has been added successfully

Manually Adding apt-repository in ubuntu

To add a repository manually on Ubuntu, edit the /etc/apt/sources.list file and add the apt repository line to the file.

To add the repository open the sources.list file with your favorite editor

sudo vi /etc/apt/sources.list

Add the repository line to the end of the file:

deb [arch=amd64] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse
  • If required, add the public key manually, for which you can use the wget or curl command, as sketched below.
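For example, a hedged sketch using wget to fetch and register a key; the URL below is MongoDB's published 4.0 key and is only an illustration, so use the key URL documented by your repository's vendor:

wget -qO - https://www.mongodb.org/static/pgp/server-4.0.asc | sudo apt-key add -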

Adding PPA Repositories

Personal Package Archives (PPA) allows you to upload Ubuntu source packages that are built and published with Launchpad as an apt repository.

When you add a PPA repository the add-apt-repository command creates a new file under the /etc/apt/sources.list.d/ directory.

Let's take an example of adding the Ansible PPA repository on an Ubuntu machine.

  • The PPA tooling is included in the software-properties-common package, similar to add-apt-repository
sudo apt update
sudo apt install software-properties-common
  • Add PPA ansible Repository in the system.
sudo apt-add-repository --yes --update ppa:ansible/ansible 
#  PPA is Personal Package Archive 
  • Let's check that the directory /etc/apt/sources.list.d/ now contains the Ansible PPA repository file, as sketched below.
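A quick way to confirm this from the shell, assuming the generated file name contains "ansible" as it does for a standard PPA setup:

ls /etc/apt/sources.list.d/ | grep -i ansible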

Working with Ubuntu repositories

Repositories on an Ubuntu machine are basically file servers or network shares that hold lots of packages; these could be .deb packages or files which are readable by the apt or apt-get command. They are defined in:

/etc/apt/sources.list or 

/etc/apt/sources.list.d

What do sources.list and sources.list.d contain?

  • Software in Ubuntu's repositories is divided into four categories or components: main, restricted, universe and multiverse.
    • main: contains applications that are free software and fully supported by Ubuntu.
    • multiverse: contains software that is not free and requires a license.
    • restricted: contains proprietary drivers and software; since the source is closed, the Ubuntu team cannot fix issues itself and can only report them back to the author.
    • universe: contains all possible software that is free and open source, but Ubuntu does not guarantee regular patches.
deb http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ bionic main restricted
deb-src http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ bionic main restricted
deb http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ bionic-updates main restricted
deb-src http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ bionic-updates main restricted
deb http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ bionic universe
deb-src http://security.ubuntu.com/ubuntu bionic-security multiverse
  • deb or deb-src indicates either .deb packages or source code
  • http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ is the repository URL
  • bionic, bionic-updates and bionic-security are the distribution code names
  • main, restricted, universe and multiverse are the repository categories.

How apt or apt-get command work with Ubuntu Repository

APT stands for Advanced Packaging Tool, which performs functions such as installation of new software packages, upgrade of existing software packages, updating of the package list index, and even upgrading the entire Ubuntu system, by connecting to the repositories defined under /etc/apt/sources.list or /etc/apt/sources.list.d/.

Let us see an example of how apt command works with ubuntu repositories.

  • Install below three packages
apt install curl

apt install wget

apt install telnet
  • You will notice that all the above packages are already up to date.
  • Now run the apt update command to refresh the repositories. The apt update output contains three types of lines:
    • Hit: there is no change in the package version from the previous version
    • Ign: the package is being ignored
    • Get: a new version is available. apt will download information about the version (not the package itself). You can see the download information (size in kB) on the 'Get' lines in the screenshot above.
apt update
  • After the command completes, it reports whether any packages need an upgrade. In our case it shows 37 packages can be upgraded. Let's see the list of packages which can be upgraded by running the following command.
apt list --upgradable

You can either upgrade a single package or upgrade all packages together.

To upgrade a single package use : apt install <package-name>

To upgrade all packages use : apt upgrade

  • Let's just update the curl package by running the apt install command and verify
 apt install curl
  • You will notice that updating curl upgraded two packages which were related to curl, and the remaining 35 are still not upgraded.
  • Now let's upgrade the remaining 35 packages together by running the apt upgrade command.
apt upgrade
  • Let's run the apt update command again to verify whether Ubuntu still requires any software to be upgraded. The command output should read "All packages are up to date"
apt update

Conclusion

In this tutorial we learnt everything about Ubuntu repositories, how to add various repositories, and how to work with them. Finally we saw how the apt command works with Ubuntu repositories.

This ultimate guide gives you a very strong understanding of package management, which is one of the most important things for a Linux administrator. Hope you liked this tutorial and found it helpful. Please share.

How to Connect Windows to Linux and Linux to Windows using PowerShell 7 SSH Remoting ( PS Remoting Over SSH)

PowerShell Remoting has various benefits. It started on Windows, where Windows administrators used it to work remotely with tons of Windows machines over the WinRM protocol. With automation and Unix distributions spreading across the world and required by every single IT engineer, PowerShell introduced PSRemoting over SSH in PowerShell 7 to connect Windows to Linux and Linux to Windows remotely.

In this tutorial we will learn how to set up PS Remoting on a Windows machine and on a Linux machine using PS Remoting over SSH (supported in PowerShell 7). Finally we will connect both Windows to Linux and Linux to Windows. Let's get started.

Table of Content

  1. What is PSRemoting or PowerShell Remoting Over WinRM?
  2. What is PSRemoting or PowerShell Remoting Over SSH?
  3. Prerequisites
  4. Step by step set up SSH remoting on Windows
  5. Step by step set up SSH remoting on Ubuntu
  6. Test the OpenSSH connectivity from Windows machine to Linux using PSRemoting
  7. Test the OpenSSH connectivity from Linux to Windows machine using PSRemoting
  8. Conclusion

What is PSRemoting or PowerShell Remoting?

PowerShell Remoting is a feature of PowerShell. With PowerShell Remoting you can connect with a single or tons of servers at a single time.

PS Remoting Over SSH (Windows to Linux and Windows to Windows)

WS-Management or Web services management or WS-Man provides a common way for systems to access and exchange management information across the IT infrastructure.

Microsoft implemented WS-Management (WS-Man) in WinRM, that is Windows Remote Management, which allows hardware and operating systems from different vendors to connect to each other. For WinRM to obtain data from remote computers, you must configure a WinRM listener. A WinRM listener can work on both the HTTP and HTTPS protocols.

PS Remoting Over WinRM (Linux to Windows)

When PowerShell Remoting takes place between two servers, that is, one server tries to run commands remotely on the other server, the source server connects to the destination server on a WinRM listener. To configure PSRemoting on a local or remote machine, please visit the link.

What is PSRemoting or PowerShell Remoting Over SSH?

Microsoft introduced PowerShell 7 Remoting over SSH, which allows true multiplatform PowerShell remoting between Linux, macOS, and Windows. PowerShell SSH remoting creates a PowerShell host process on the target machine as an SSH subsystem. Normally, Windows PowerShell remoting uses WinRM for connection negotiation and data transport. However, WinRM is only available on Windows-based machines. That means Linux machines can connect to Windows, or Windows can connect to Windows, over WinRM, but Windows cannot connect to Linux.

With PowerShell 7 Remoting over SSH Now its possible to remote between Linux, macOS, and Windows.

PS Remoting Over SSH ( Windows to Linux , Linux to Windows)

Prerequisites

  • A Microsoft Windows Server 2019 Standard machine. This machine should also have PowerShell 7 installed. If you don't have PowerShell installed, please follow here to install it.
  • Make sure you have a local account set up on the Windows Server 2019 machine. We will be using the "automate" user.
  • Make sure you set the password for the ubuntu user on the Ubuntu machine; if you already have it, ignore this.
  • An Ubuntu machine with PowerShell 7 installed.

Step by step set up SSH remoting on Windows

Here we will discuss about how to setup SSH remoting on Windows Machine and run the PSRemoting commands.

  • Assuming you are on a Windows 2019 Standard machine with PowerShell 7 installed, let's verify it once.
  • Before SSH is set up on the Windows machine, if you try to make an SSH session with the Linux machine you will receive an error message like this.
  • The next step is to install the OpenSSH client and server on the Windows 2019 Standard server. Let's use the PowerShell utility Add-WindowsCapability and run the commands.
Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0
 
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
  • Once OpenSSH is installed successfully, we need to start the OpenSSH service.
Start-Service sshd
Set-Service sshd -StartupType Automatic
  • Now edit the OpenSSH configuration file sshd_config, located in C:\Windows\System32\OpenSSH or in C:\ProgramData\ssh\sshd_config, by adding a Subsystem entry for PowerShell.
Subsystem powershell c:/progra~1/powershell/7/pwsh.exe -sshs -NoLogo -NoProfile
  • Also make sure OpenSSH configuration file sshd_config has PasswordAuthentication set to yes
  • Restart the service
Restart-Service sshd
  • SSH remoting is now properly set on Windows Machine

Step by step set up SSH remoting on Ubuntu

Previously we configured SSH remoting on the Windows machine; now we need to perform similar steps on the Ubuntu machine with various commands.

  • PowerShell 7 must be installed on ubuntu machine
  • Install OpenSSH client and server on ubuntu machine
sudo apt install openssh-client
sudo apt install openssh-server
  • Similarly Edit the Sshd_config file in ubuntu machine
vi /etc/ssh/sshd_config
  • Paste the below content (Add the Subsystem for PowerShell) and make sure PasswordAuthentication set to yes
Subsystem powershell /usr/bin/pwsh -sshs -NoLogo -NoProfile
  • Restart the service
sudo service sshd restart

Test the OpenSSH connectivity from Windows machine to Linux using PSRemoting

Now that we are set with the Windows and Ubuntu SSH remoting steps, let's verify the SSH connectivity from the Windows machine to the Ubuntu machine.

Verification Method 1

  • Create a session and then enter into session and run commands from windows PowerShell to Linux PowerShell
New-PSSession -Hostname  54.221.35.44 -UserName ubuntu # Windows to Linux Create Session

Enter-PSSession -Hostname 54.221.35.44 -UserName ubuntu # Windows to Linux Enter Session

Verification Method 2

  • Create the session and then test the connectivity from Windows machine to Linux using Invoke-Command command
$SessionParams = @{
     HostName = "54.221.35.44"
     UserName = "ubuntu"
     SSHTransport = $true
 }
Invoke-Command @SessionParams -ScriptBlock {Get-Process}

Test the OpenSSH connectivity from Linux to Windows machine using PSRemoting

Let's verify the SSH connectivity from the Ubuntu machine to the Windows machine.

  • Open PowerShell on the Ubuntu machine with the following command
pwsh
  • Although you are on the Ubuntu machine, let's verify the Ubuntu version [optional step]
  • Now SSH into the Windows machine using the following command
ssh automate@3.143.233.234
  • Here we go, you can clearly see that we have SSHed into the Windows machine successfully

Conclusion

PowerShell Remoting has various benefits. It started on Windows, where Windows administrators used it to work remotely with tons of Windows machines over the WinRM protocol. With automation and Unix distributions spreading across the world and required by every single IT engineer, and to solve the problem of connecting Windows to Linux and Linux to Windows, PowerShell introduced PSRemoting over SSH to connect Windows to Linux and Linux to Windows remotely with an easy setup.

Hope you find this tutorial helpful. If you like please share it with your friends.

What is PSRemoting or PowerShell Remoting and how to Enable PS Remoting

PSRemoting or PowerShell Remoting is PowerShell-based remoting which allows you to connect to one or thousands of remote computers and execute commands on them. PSRemoting allows you to sit in one place and execute commands on remote machines as if you were executing them physically on the servers.

In this tutorial you will learn what is PS Remoting that is PowerShell Remoting and how to enable PowerShell Remoting locally and on remote machines.

Table of Content

  1. What is PSRemoting or PowerShell Remoting?
  2. Prerequisites
  3. How to Enable PS Remoting Locally on system?
  4. How to Enable PS Remoting on remote system?
  5. Conclusion

What is PSRemoting or PowerShell Remoting?

PowerShell Remoting is a feature of PowerShell. With PowerShell Remoting you can connect with a single or tons of servers at a single time.

WS-Management or Web services management or WS-Man provides a common way for systems to access and exchange management information across the IT infrastructure.

Microsoft implemented WS-Management (WS-Man) in WinRM, that is Windows Remote Management, which allows hardware and operating systems from different vendors to connect to each other. For WinRM to obtain data from remote computers, you must configure a WinRM listener. A WinRM listener can work on both the HTTP and HTTPS protocols.

When PowerShell Remoting takes place between two servers, that is, one server tries to run commands remotely on the other server, the source server connects to the destination server on a WinRM listener.

How to check WinRM listeners on Windows Host?

To check the WinRM listeners on windows host use the following command

 winrm e winrm/config/listener

Prerequisites

  • Make sure you have a Windows machine with PowerShell 7 installed. If you don't have it, install it from here.

How to Enable PS Remoting Locally on system?

There are two ways in which you can enable PSRemoting on the local machine.

Use Enable-PSRemoting to Enable PS Remoting Locally on system

  • Invoke the command Enable-PSRemoting; this performs the following functions:
    • Starts the WinRM service
    • Creates a listener on port 5985 for HTTP
    • Registers and enables PowerShell session configurations
    • Sets PowerShell session configurations to allow remote sessions
    • Restarts the WinRM service

Enable-PSRemoting  # By Default its enabled in Windows
  • On a server OS, like Windows Server 2019, the firewall rule for public networks allows remote connections from other devices on the same network. On a client OS, like Windows 10, you will receive an error stating that you are on a public network.
Command Ran on Windows 2019 server
Command Ran on Windows 10 Machine
  • If you want to ignore the error message caused by the network profile on a client like Windows 10, use the following command
Enable-PSRemoting -SkipNetworkProfileCheck

Use WinRM to Enable PS Remoting Locally on system

  • We can use WinRM quickconfig command as well to enable PS Remoting on local machine
winrm quickconfig

How to Enable PS Remoting on remote system?

There are two ways in which you can enable PSRemoting on the remote machine.

Use PS exec to Enable PS Remoting on remote system

  • Using PsExec you can run a command on a remote machine after connecting to it. When you run the PsExec command below, it initializes a PowerShell session on the remote machine and then runs the Enable-PSRemoting command.
.\psexec.exe \\3.143.113.23 -h -s powershell.exe Enable-PSRemoting -Force # 3.143.113.23 is remote machine's IP address

Use WMI to Enable PS Remoting on remote system

You can also use PowerShell and the Invoke-CimMethod cmdlet. Using Invoke-CimMethod, you can instruct PowerShell to connect to the remote computer over DCOM and invoke WMI methods.

$SessionArgs = @{
     ComputerName  = 'WIN-U22NTASS3O7'
     Credential    = Get-Credential
     SessionOption = New-CimSessionOption -Protocol Dcom
 }
 $MethodArgs = @{
     ClassName     = 'Win32_Process'
     MethodName    = 'Create'
     CimSession    = New-CimSession @SessionArgs
     Arguments     = @{
         CommandLine = "powershell Start-Process powershell -ArgumentList 'Enable-PSRemoting -Force'"
     }
 }
 Invoke-CimMethod @MethodArgs

Conclusion

In this tutorial, you learned what PSRemoting is and how to enable it with various methods, locally on the machine as well as remotely. This gives you a great opportunity to automate across many remote machines at once.

Getting Started with PowerShell Commands which Every Devops should Know.

PowerShell is a strong tool which contains rich command utilities that can make life easier for developers and DevOps engineers. In this tutorial we will learn about important commands which are run in PowerShell, with practicals to get you started.

Table of Content

  1. What is PowerShell ?
  2. Prerequisites
  3. Getting Started with PowerShell commands
  4. Wrapping Up

What is PowerShell ?

PowerShell is a command line tool or command line shell which helps in the automation of various tasks, allows you to run scripts, and helps you manage a variety of configurations. PowerShell runs on Windows, Linux and macOS.

PowerShell is built on the .NET Common Language Runtime (CLR). It currently works on the .NET 5.0 framework as its runtime.

Features of PowerShell

  • It provides tab completion
  • It works with all .NET Frameworks objects
  • It allows pipelines of commands.
  • It has built support for various file formats such as JSON, CSV and XML

Prerequisites

Getting Started with PowerShell commands

PowerShell is a command line shell or command line tool. There are tons of commands which are already built into PowerShell, and these commands are known as cmdlets.

  • There are mainly three types of commands in PowerShell:
    • Alias
    • cmdlets
    • Function
  • To check the current version of PowerShell
$PSVersionTable
  • To check the execution policy of PowerShell
    • Restricted indicates that users are not allowed to run the scripts unless restrictions are removed.
Get-ExecutionPolicy
  • To Update the execution policy of PowerShell
    • The RemoteSigned policy allows users to run scripts
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned    # Run as Administrator
  • To Check all the commands on PowerShell
Get-Command
  • To get help with command execution and about the command on Powershell
Get-Help
  • To check the status of the Windows Time service (w32time)
Get-Service -Name w32time
  • To look up the command behind a short PowerShell alias, use Get-Alias.
Get-Alias -Name gcm
Get-Alias -Name gm
  • To list the folder structure and files under a given path.
 Get-ChildItem -Path C:\
  • To open system logs using PowerShell command
Show-EventLog
  • To check details of a specific process, such as the Chrome browser
 Get-Process chrome
  • To get content of a particular file
Get-Content .\.gitignore
  • To get drives in the current session
Get-PSDrive
  • To remove a particular file or folder, use the following command.
Remove-Item .\date.txt

Wrapping up

This was a pretty straightforward tutorial covering basic PowerShell commands. We mainly covered Get-Command, Get-Service and other cmdlets which can be used with PowerShell. Hope this was a useful tutorial to get you started with running commands on PowerShell.

How to Install PowerShell 7.1.3 on Ubuntu and Windows Machine Step by Step.

With so many Windows and Linux administrators in the world, automation has always been a top requirement. PowerShell is one of the most widely used command-line shells and gives you a strong ability to perform tasks on remote operating systems very easily.

In this tutorial we will go through the basic definition of PowerShell, its benefits and features, and finally how to install the latest PowerShell on both Windows and Ubuntu machines.

Table of content

  1. What is PowerShell?
  2. Working with PowerShell
  3. Install PowerShell 7.1.3 on Windows Machine
  4. How to Install PowerShell 7.1.3 on Ubuntu Machine
  5. Conclusion

What is PowerShell?

PowerShell is a command-line shell that helps automate various tasks, allows you to run scripts, and helps you manage a variety of configurations. PowerShell runs on Windows, Linux and macOS.

PowerShell is built on the .NET Common Language Runtime (CLR). It currently uses .NET 5.0 as its runtime.

Features of PowerShell

  • It provides tab completion
  • It works with all .NET Framework objects
  • It allows pipelining of commands
  • It has built-in support for various file formats such as JSON, CSV and XML

Working with PowerShell

PowerShell is a command-line shell that was originally meant for Windows automation, but it has grown widely and gained lots of features and benefits. Lets check out some of the key benefits.

  • PowerShell can be used for cloud management, such as retrieving or deploying new resources.
  • PowerShell can be used with continuous integration and continuous deployment pipelines, i.e. CI/CD.
  • PowerShell is now widely used by DevOps and sysops engineers.
  • PowerShell comes with hundreds of preinstalled commands.
  • PowerShell commands are called cmdlets.

To check the version of PowerShell there are various commands, but lets run the following:

$PSVersionTable.PSVersion

Install PowerShell 7.1.3 on Windows Machine

By default, Windows PowerShell is already present on the Windows machine. To verify, click on the Start menu and look for PowerShell.

  • Verify the current version of PowerShell by running the following command.
Get-Host | Select-Object Version
  • Download the PowerShell 7.1.3 release archive (zip) for Windows from the PowerShell GitHub releases page and extract it on the desktop
  • Execute the pwsh.exe
  • Now you should see PowerShell version 7.1.3 when you run the following command.
  • Lets verify PowerShell by invoking the Get-Command

How to Install PowerShell 7.1.3 on Ubuntu Machine

We will install PowerShell on Ubuntu 18.04 via the Microsoft package repository. So lets dive in and start.

  • Update the list of packages
sudo apt-get update
  • Install pre-requisite packages.
sudo apt-get install -y wget apt-transport-https software-properties-common
  • Download the Microsoft repository GPG keys
wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb

The apt software repositories are defined in the /etc/apt/sources.list file or in separate files under the /etc/apt/sources.list.d/ directory.
  • Register the Microsoft repository GPG keys. You will notice that as soon as we run the below command, a repository entry is added inside the /etc/apt/sources.list.d directory (you can confirm this as shown right after this step).
sudo dpkg -i packages-microsoft-prod.deb
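A quick way to confirm the repository was registered, as mentioned above (a minimal sketch; the exact file name may vary by release):

ls /etc/apt/sources.list.d/                       # a microsoft-prod.list entry should appear
cat /etc/apt/sources.list.d/microsoft-prod.list   # shows the repository URL that apt will use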
  • Update the Repository again
sudo apt-get update
  • Enable the “universe” repositories
sudo add-apt-repository universe
  • Install PowerShell
sudo apt-get install -y powershell
  • Start PowerShell
pwsh
  • Lets verify PowerShell by invoking the Get-Command
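If you prefer to verify from the Ubuntu shell without starting an interactive session, here is a minimal sketch (assuming pwsh is on your PATH):

pwsh -Version                                          # should print PowerShell 7.1.3
pwsh -Command 'Get-Command | Select-Object -First 5'   # run Get-Command non-interactively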

Conclusion

This tutorial was pretty straightforward and meant to get you started with PowerShell. We defined what PowerShell is and what its benefits are, and then installed the latest PowerShell 7.1.3 on both Ubuntu and Windows machines. Hope this tutorial helps you with your PowerShell setup, and please share it if you like.

How to Create Dockerfile step by step and Build Docker Images using Dockerfile

There were days when an organization used to buy a physical server and a system administrator was asked to make the system ready: installing the OS, adding software and configuring the network, so applications used to take months to get deployed.

Now the same work can be done in literally 5 minutes. Yes, it can be done by launching Docker containers built from a Dockerfile, the layer-based file used to build Docker images. If you would like to know more, follow along.

In this tutorial we will learn everything about Dockerfile: how to create a Dockerfile and the commands used inside it, also known as Dockerfile instructions. The Dockerfile can then be used to create a customized Docker image. Lets jump in to understand each bit of it.

Table of content

  1. What is Dockerfile?
  2. Prerequisites
  3. How to Create Dockerfile ( Dockerfile commands or Dockerfile Instructions)
  4. How to build a Docker Image and run a container using Dockerfile
  5. Conclusion

What is Dockerfile?

A Dockerfile is used to create a customized Docker image on top of a base Docker image. It is a text file that contains all the commands to build or assemble a new Docker image. Using the docker build command we can create new customized Docker images, which are basically additional layers that sit on top of the base image. Using the newly built Docker image we can run containers in the usual way.


Prerequisites

  • You must have an Ubuntu machine, preferably version 18.04+, and if you don’t have any machine you can create an EC2 instance on your AWS account
  • Docker must be installed on the Ubuntu machine. If you don’t have it installed, follow here

How to Create Dockerfile ( Dockerfile commands)

  • There are two forms in which Dockerfile instructions can be written
    • Shell form <instruction> command
    • Exec form <instruction> [“executable”, “param1”, “param2”]
# Shell form
ENV name John Doe
ENTRYPOINT echo "Hello, $name"
# exec form
RUN ["apt-get", "install", "python3"]
CMD ["/bin/echo", "Hello world"]
ENTRYPOINT ["/bin/echo", "Hello world"]
  • To build docker Image from Dockerfile
docker build .                            # build using the Dockerfile in the current directory

docker build -f /path-of-Docker-file .    # or point to a Dockerfile at another path
  • Environmental variables inside Docker file can be written as $var_name or ${var_name}
WORKDIR ${HOME}  # This is equivalent to WORKDIR ~
ADD . $HOME      # This is equivalent to ADD . ~
  • FROM command is used when we need to build a new Docker Image using Base Image
    • Below command will set ubuntu:14.04 as the base image.
FROM base:${CODE_VERSION}

FROM ubuntu:14.04
  • The RUN command is executed while building the image; it runs on top of the current image and creates a new layer. You can have multiple RUN commands in a Dockerfile
RUN echo $VERSION
# RUN <command> (shell form)
# RUN ["executable", "param1", "param2"] (exec form)
  • ADD command copies files from the host into the container image
    • Below command will add file.txt from the folder directory on the host to the container’s /etc directory
ADD folder/file.txt /etc/
  • CMD command sets the default command to run if you don’t specify any command while starting a container.
    • It can be overridden by the user passing an argument while running the container.
    • If you specify multiple CMD commands, only the last one takes effect
CMD ["Bash"]

EXAMPLE

  • Lets assume a single line Docker file containing following code
CMD ["echo", "Hello World"]
  • Lets create a docker Image
docker build . 
  • Run a container to see CMD command actions
sudo docker run [image_name]
  • Check the Output of the command
O/p:  Hello World
  • Run a container with an argument to see CMD command actions
sudo docker run [image_name] hostname
  • Check the Output of the command
O/P: 067687387283 # Which is containers hostname
  • Maintainer allows you to add author details
MAINTAINER support@automateinfra.com
  • EXPOSE informs Docker about the port the container should listen on
    • Below we are setting the container to listen on port 8080
EXPOSE 8080
  • The ENV command sets an environment variable in the new container
    • Below we are setting the HOME environment variable to /root
ENV HOME /root
  • USER command Sets the default user within the container
USER ansible
  • VOLUME command creates a shared volume that can be shared among containers or by the host machine
VOLUME ["/var/www", "/var/log/apache2", "/etc/apache2"]
  • WORKDIR command sets the default working directory for the container
WORKDIR app/
  • ARG command allows users to pass variables at build time with the docker build command.
    • Syntax  --build-arg <varname>=<value> 
ARG username
docker build --build-arg username=automateinfra .
  • LABEL instruction adds metadata to an image using key=value pairs
LABEL version="1.0" maintainer="support@automateinfra.com"
  • SHELL command allows you to override the default shell used for the shell form of commands.
    • On Linux the default shell is ["/bin/sh", "-c"]
    • On Windows the default shell is ["cmd", "/S", "/C"]

# Executed as: cmd /S /C powershell -command Write-Host default
RUN powershell -command Write-Host default

# After SHELL is changed, executed as: PowerShell -command Write-Host hello
SHELL ["PowerShell", "-command"]
RUN Write-Host hello
  • ENTRYPOINT is also used for running a command, but with a difference from the CMD command
    • Unlike CMD, a command-line argument passed while running the container does not override ENTRYPOINT; it is appended to it as a parameter.

EXAMPLE

  • Lets assume a single line Docker file containing following code
ENTRYPOINT ["echo", "Hello World"]
  • Lets create a docker Image
docker build . 
  • Run a container to see ENTRYPOINT command actions
sudo docker run [image_name]
  • Check the Output of the command
O/p:  Hello World
  • Run a container with an argument to see ENTRYPOINT command actions
sudo docker run [image_name] parameter
  • Check the Output of the command
O/P: Hello World parameter

How to build a Docker Image and run a container using Dockerfile

Now we should be good with how Dockerfile is created using different commands. Lets now dive in to see some of the examples to get you started.

EXAMPLE 1

  • Create a folder under opt directory and name it as dockerfile-demo1
cd /opt
mkdir dockerfile-demo1
cd dockerfile-demo1
  • Create a Dockerfile with your favorite editor
vi Dockerfile
  • Command which we will use for Dockerfile
    • FROM: It sets the base image as ubuntu
    • RUN: It runs the following commands in the container
    • ADD: It adds the file from a folder
    • WORKDIR: It tells about the working directory
    • ENV: It sets an environment variable
    • CMD: It runs a command when the container starts
  • Paste the below content
FROM ubuntu:14.04
RUN \
    apt-get -y update && \
    apt-get -y upgrade && \
    apt-get -y install git curl unzip man wget telnet
ADD folder/.bashrc /root/.bashrc
WORKDIR /root
ENV HOME /root
CMD ["bash"]
  • Now, build a Docker Image using the following command
 docker build -t image1 .
  • Lets verify the Docker Image by running the following command.
docker images
  • Now, it’s time to check if the Docker image is working successfully. So lets run a container and then verify all the Dockerfile commands inside the container.
docker run -i -t 5d983653b8f4
Looks great, we can see that all the commands we used in the Dockerfile were executed and the Docker image was created. We tested this on a container built from that Docker image; a quick verification sketch follows below.
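As a minimal sketch, here are a few commands you can run inside that container to confirm the Dockerfile instructions took effect (assuming the container shell from the docker run above is open):

pwd                    # should print /root, set by WORKDIR
echo $HOME             # should print /root, set by ENV HOME
git --version          # installed by the RUN instruction
curl --version         # installed by the RUN instruction
ls -l /root/.bashrc    # copied in by the ADD instruction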

EXAMPLE 2

  • Create a folder under opt directory and name it as dockerfile-demo2
cd /opt
mkdir dockerfile-demo2
cd dockerfile-demo2
  • Create a Dockerfile with your favorite editor
vi Dockerfile
  • Paste the below content
FROM ubuntu:14.04


ARG LABEL_NAME
LABEL org.label-schema.name="$LABEL_NAME"
SHELL ["/bin/sh", "-c"]


RUN apt-get update && \
    apt-get install -y sudo curl git gcc make openssl libssl-dev libbz2-dev libreadline-dev libsqlite3-dev zlib1g-dev libffi-dev


USER ubuntu
WORKDIR /home/ubuntu


ENV LANG en_US.UTF-8
CMD ["echo", "Hello World"]


  • Now, build a Docker Image using the following command
 docker build --build-arg LABEL_NAME=mylabel  -t imagetwo .
  • Lets verify the Docker Image by running the following command.
docker images
  • Now, it’s time to check if the Docker image is working successfully. So lets run a container and then verify all the Dockerfile commands inside the container.
docker run -i -t 2716c9e6c4af
  • CMD command ran successfully
  • As we defined a custom CMD, the container displays the echo output and exits. Lets go inside the container and check the other details.
    • User is ubuntu
    • Working directory is /home/ubuntu
    • Curl is installed
    • ENV LANG is also set.
docker run -i -t 2716c9e6c4af /bin/bash
  • Finally, if you want to check the LABEL command, you can see it on the host by inspecting the Docker image (see the sketch after the command below).
docker inspect 2716c9e6c4af
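If you only want the label value rather than the full JSON, here is a minimal sketch using docker inspect’s Go-template format flag (the image ID is the one from your own build):

docker inspect --format '{{ index .Config.Labels "org.label-schema.name" }}' 2716c9e6c4af   # should print mylabel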

Looks great, we can see that all the commands we used in the Dockerfile were executed and the Docker image was created. We tested this on a container built from that Docker image.

Conclusion

In this tutorial we learnt in depth the commands used inside a Dockerfile to build a Docker image. We covered several of these commands using examples in the demonstration.

We also learnt how to create a Docker image, run containers, and verify that those commands were executed successfully. The Dockerfile is a very important concept for building new Docker images on top of a base Docker image.

Now you know how to create a Dockerfile, use it to build an image, and run containers. Hope this tutorial helps you with Docker concepts. Please share if you like it.

How to create Node.js Docker Image and Push to Docker Hub using Jenkins Pipeline

Creating an application on Docker is a huge benefit because of its lightweight technology and security. Docker images are stored safely on Docker Hub. But how can we create Docker images and push them to Docker Hub automatically? It’s possible with none other than Jenkins.

In this tutorial we will learn how to create a Docker image for a Node.js application and push it to Docker Hub using Jenkins.

Table of content

  1. What is Jenkins Pipeline?
  2. What is Node.js?
  3. Prerequisites
  4. How to Install node.js and node.js framework on ubuntu machine
  5. Create Node.js Application
  6. Create Docker file for Node.js application
  7. Push the code and configuration to GIT Repository
  8. Create Jenkins file for creating a docker image of Node.js application and pushing to Docker hub
  9. Configure Jenkins to Deploy Docker Image and Push to Docker Hub
  10. Conclusion

What is Jenkins Pipeline?

Jenkins Pipeline is a group of plugins which helps deliver a complete continuous delivery pipeline in Jenkins. The Jenkins Pipeline plugin is automatically installed when installing Jenkins with the suggested plugins. The pipeline covers everything from building the code to deployment of the software right up to the customer. Jenkins Pipeline allows you to write complex operations and code deployments as code in a DSL (domain-specific language), in a text file called a “Jenkinsfile” which is checked into the repository.

  • Benefits of Jenkins pipeline
    • The pipeline is written as code, which is easier to manage and review.
    • Even if Jenkins stops, you can still continue to work on the Jenkinsfile, since it lives in the repository.
    • With code capabilities you can add waiting, approvals, stops and many other functionalities.
    • It supports various extensions & plugins.

Related: the-ultimate-guide-getting-started-with-Jenkins-pipeline

What is Node.js?

Node.js is an open source JavaScript runtime environment. Now, what is JavaScript? Basically, JavaScript is a language used alongside other web technologies to create web pages and add dynamic features such as rollovers and graphics.

Node.js runs as a single process without wasting much memory or CPU, and it never blocks on I/O, which is why its performance is very efficient. Node.js also allows multiple connections at the same time.

Node.js has become a big advantage for JavaScript developers, as they can now create apps using JavaScript for both the frontend and the backend.

Building applications that run in the browser is a completely different story from creating a Node.js application, although both use the JavaScript language.

Prerequisites

  • You must have an Ubuntu machine, preferably version 18.04+, and if you don’t have any machine you can create an EC2 instance on your AWS account
  • Docker must be installed on the Ubuntu machine.
  • Make sure you have a GitHub account and a repository created. If you don’t have one, follow here

How to Install node.js and node.js framework on ubuntu machine

  • Create a folder under opt directory
cd /opt
mkdir nodejs-jenkins
cd nodejs-jenkins
  • Install node.js on ubuntu machine
sudo apt install nodejs
  • Install the Node.js package manager (npm). Project modules will later be installed into the node_modules folder inside the same directory.
sudo apt install npm
  • Initialize the Node.js project. This command will generate a package.json file containing the project metadata and other required details.
npm init

The package.json created after initializing the project will list all the dependencies required to run it. Let us add one highly recommended dependency, the Express web framework.

npm install express --save

Create Node.js Application

  • Create the Node.js application. Lets create a file named main.js in the same folder /opt/nodejs-jenkins with the content below (a quick local test sketch follows the code).
var express = require('express')    //Load express module with `require` directive
var app = express() 

//Define request response in root URL (/)
app.get('/', function (req, res) {
  res.send('Hello Welcome to Automateinfra.com')
})


app.listen(8081, function () {
  console.log('app listening on port 8081!')
})
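Before containerizing it, you can sanity-check the application locally. A minimal sketch, assuming node is installed and npm install was run in this folder:

node main.js &                      # start the app in the background
curl http://localhost:8081          # should return: Hello Welcome to Automateinfra.com
kill %1                             # stop the background app when done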

Create Dockerfile for Node.js application

A Dockerfile is used to create a customized Docker image on top of a base Docker image. It is a text file that contains all the commands to build or assemble a new Docker image. Using the docker build command we can create new customized Docker images, which are basically additional layers that sit on top of the base image. Using the newly built Docker image we can run containers in the usual way.

  • Create a Dockerfile under the same folder /opt/nodejs-jenkins. Note that Dockerfile comments must be on their own lines, otherwise they are treated as part of the instruction.
# Sets the base image
FROM node:7

RUN mkdir -p /app
# Sets the working directory in the container
WORKDIR /app
# Copy the dependencies file to the working directory
COPY package.json /app
# Install dependencies
RUN npm install
# Copy the content of the local source directory to the working directory
COPY . /app
# The application in main.js listens on port 8081
EXPOSE 8081
CMD ["npm", "run", "start"]
  • Verify this Dockerfile on the Ubuntu machine by running the following command (a run-and-test sketch follows it).
docker build .
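You can also run a container from the freshly built image and test the endpoint. A minimal sketch, assuming your package.json defines a "start" script such as "node main.js" (the image tag below is only an example):

docker build -t nodejs-jenkins:local .                               # build with a readable tag
docker run -d -p 8081:8081 --name nodejs-test nodejs-jenkins:local   # run the container in the background
curl http://localhost:8081                                           # should return the welcome message
docker rm -f nodejs-test                                             # clean up the test container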

Push the code and configuration to GIT Repository

  • Now we are ready with our code and configurations as below.
    • Dockerfile
    • main.js
    • package.json
    • node_modules

Now push all the code into GIT repository by performing below steps

  • Initialize your new repository in the same directory /opt/nodejs-jenkins
git init
  • Add the file in git repository using the command in the same directory /opt/nodejs-jenkins
git add .
  • Again check the status of git repository using the command in the same directory /opt/nodejs-jenkins
git status
  • Commit your changes in git repository using the command in the same directory /opt/nodejs-jenkins
 git commit -m "MY FIRST COMMIT"
  • Add the remote repository which we created earlier as a origin in the same directory /opt/nodejs-jenkins
git remote add origin https://github.com/Engineercloud/nodejs-jenkins.git
  • Push the changes in the remote branch ( Enter your credentials when prompted)
git push -u origin master
  • Verify the code on GIT HUB by visiting the repository link

Create Jenkins file for creating a docker image of Node.js application and pushing to dockerhub

  • Create a file named Jenkinsfile with your favorite editor and paste the below content.
    • Make sure to change sXXXXXXX410/dockerdemo to your Docker Hub username and repository name
node {
     def app 
     stage('clone repository') {
      checkout scm  
    }
     stage('Build docker Image'){
      app = docker.build("sXXXXX410/dockerdemo")
    }
     stage('Test Image'){
       app.inside {
         sh 'echo "TEST PASSED"' 
      }  
    }
     stage('Push Image'){
       docker.withRegistry('https://registry.hub.docker.com', 'git') {
         app.push("${env.BUILD_NUMBER}")
         app.push("latest")
       }
    }
}
  • Now push this file as well to GitHub using git commands, or simply create the file directly in the repository. Finally, the repository should look something like this.

Configure Jenkins to Deploy Docker Image and Push to Docker Hub

  • Assuming you have Jenkins installed.
  • Now, create a multibranch pipeline Jenkins job and name it nodejs-image-dockerhub by clicking on New Item and selecting Multibranch Pipeline on the Dashboard.
  • Now click on the nodejs-image-dockerhub job, click on Configure, provide the git URL, and then hit Save.
  • As we will connect to Docker Hub, we need to add Docker Hub credentials. Click on Dashboard >> Manage Jenkins >> Manage credentials >> click on global >> Add credentials.
  • Now go to the Jenkins server and make sure the jenkins user is added to the docker group (a verification sketch follows the commands below).
sudo groupadd docker
sudo usermod -a -G docker jenkins
service docker restart
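To confirm the group change took effect, a minimal sketch (run after the Jenkins service has been restarted so it picks up the new group):

id jenkins                      # the docker group should appear in the groups list
sudo -u jenkins docker ps       # should list containers without a permission error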
  • Make sure the jenkins user has sudo permissions
sudo vi  /etc/sudoers

jenkins ALL=(ALL) NOPASSWD: ALL

Now we are all set to run our first Jenkins pipeline. Click on Scan Multibranch Pipeline and you will notice your branch name. Then click on the branch and click on Build Now.

  • Now verify if Docker image has been successfully pushed to docker hub by visiting Dockerhub repository.

Conclusion

In this tutorial we covered what Jenkins Pipeline is and what Node.js is. We also demonstrated how to create a Dockerfile, a Jenkinsfile and a Node.js application, pushed them into a repository, and finally used Jenkins to build the Docker image and push it to Docker Hub.

This is an in-depth and very useful post for anybody who wants to work with Docker and Jenkins together for automation. Hope you liked it, and if so please share it.

The Ultimate Guide : Getting Started with Jenkins Pipeline

Application deployment is a daily task for developers and operations teams. With Jenkins you can handle your deployments, but for long deployment processes you need a way to make things simple and deploy in a structured way.

To bring simplicity to the deployment process, Jenkins Pipelines are your best friend. They make the whole process flow as smoothly as a river. Having said that, in this tutorial we will cover all about CI/CD and go in depth on Jenkins Pipeline and the Jenkinsfile.

Table of content

  1. What is CI/CD ( Continuous Integration and Continuous deployments)?
  2. What is Jenkins Pipeline?
  3. How to create a basic Jenkins Pipeline
  4. Handling Parameters in Jenkins Pipeline
  5. How to work with Input Parameters
  6. Conclusion

What is CI/CD ( Continuous Integration and Continuous deployments)

With CI/CD, products are delivered to clients in a very smart and effective way by using different automated stages. CI/CD saves tons of time for both developers and operations teams, and there is far less chance of human error. CI/CD stands for continuous integration and continuous deployment. It automates everything from integration to deployment.

Continuous Integration

CI, also known as continuous integration, is primarily used by developers. Successful continuous integration means the developer’s code is built and tested whenever there is a change in code, and then pushed to the shared repository.

Developers push code changes every day, multiple times a day. For every push to the repository, you can create a set of scripts to build and test your application automatically. These scripts help decrease the chances that you introduce errors in your application.

This practice is known as Continuous Integration. Each change submitted to an application, even to development branches, is built and tested automatically and continuously.

Continuous Delivery

Continuous delivery is a step beyond continuous integration. In this case the application is not only continuously built and tested each time the code is pushed, it is also kept continuously ready to deploy. However, with continuous delivery, you trigger the deployments manually.

Continuous delivery checks the code automatically, but it requires human intervention to deploy the changes.

Continuous Deployment

Continuous deployment is a step beyond continuous delivery; the only difference between deployment and delivery is that deployment automatically takes the code from the shared repository and deploys the changes to environments such as production, where customers can see those changes. This is the final stage of the CI/CD pipeline. With continuous deployment it takes hardly a few minutes to deploy the code to an environment, and it depends on heavy automated pre-deployment testing.

Examples of CI/CD Platform:

  • Spinnaker and Screwdriver are platforms built for CD
  • GitLab, Bamboo, CircleCI, Travis CI and GoCD are platforms built for CI/CD

What is Jenkins Pipeline?

Jenkins Pipeline is a group of plugins which helps deliver a complete continuous delivery pipeline in Jenkins. The Jenkins Pipeline plugin is automatically installed when installing Jenkins with the suggested plugins. The pipeline covers everything from building the code to deployment of the software right up to the customer. Jenkins Pipeline allows you to write complex operations and code deployments as code in a DSL (domain-specific language), in a text file called a “Jenkinsfile” which is checked into the repository.

  • Benefits of Jenkins pipeline
    • The pipeline is written as code, which is easier to manage and review.
    • Even if Jenkins stops, you can still continue to work on the Jenkinsfile, since it lives in the repository.
    • With code capabilities you can add waiting, approvals, stops and many other functionalities.
    • It supports various extensions & plugins.
  • The Jenkinsfile can be written in two syntaxes ( DSL: Domain Specific Language)
    • Declarative Pipeline : This is newer and writing code with this is much easier
    • Scripted Pipeline : This is older and writing code with this is a little more complicated
  • Scripted pipeline syntax can be generated from
http://Jenkins-server:8080/pipeline-syntax/
  • Declarative Pipeline syntax can be generated from
http://Jenkins-server:8080/directive-generator/

  • Jenkins Pipeline supports various environmental variables such as
    • BUILD_NUMBER: Displays the build number
    • BUILD_TAG: Displays the tag which is jenkins-${JOB_NAME}-${BUILD_NUMBER}
    • BUILD_URL: Displays the URL of the result of Build
    • JAVA_HOME: Path of Java home
    • NODE_NAME: It specifies the name of the node. For example, it is set to master for the Jenkins controller
    • JOB_NAME: Name of the Job
  • You can set the environmental variables dynamically in pipeline as well
    environment {
        AWS_ACCESS_KEY_ID     = credentials('jenkins-aws-secret-key-id')
        AWS_SECRET_ACCESS_KEY = credentials('jenkins-aws-secret-access-key')
        MY_KUBECONFIG = credentials('my-kubeconfig')
   }
  • Lets take an example of a Jenkinsfile and understand the basic terms one by one
    • pipeline: It is Declarative Pipeline-specific syntax
    • agent: Allows Jenkins to allocate an executor or a node, for example a Jenkins slave
    • stages: It includes the multiple tasks which the pipeline needs to perform. It can have a single task as well.
    • stage: A stage is one single task under stages.
    • steps: These are the steps which need to be executed in every stage.
    • sh: sh is one of the steps; it executes a shell command.
pipeline {
   agent any 
    stages {
        stage('Testing the Jenkins Version') {
            steps {
                echo 'Hello, Jenkins'
                sh 'service jenkins status'
               //  sh("kubectl --kubeconfig $MY_KUBECONFIG get pods")
            }
        }
    }
}

How to create a basic Jenkins Pipeline

  • Install Jenkins on the ubuntu machine. Please find the steps to install Jenkins from here
  • Once you have Jenkins Machine , visit Jenkins URL and Navigate to New Item
  • Choose Pipeline from the option and provide it a name such as pipeline-demo and click OK
  • Now add a Description such as my demo pipeline and add a Pipeline script as below
pipeline {
   agent any 
    stages {
        stage('Testing the Jenkins Version') {
            steps {
                echo 'Hello, Jenkins'
                sh 'service jenkins status'
            }
        }
    }
}
  • Click on Save & Finally click on Build Now
  • Lets verify the code execution from the console output of the job. Click on the build number and then open Console Output.

Handling Parameters in Jenkins Pipeline

If you wish to use Build with Parameters, those parameters are accessible using the params keyword in the pipeline.

Lets see a quick example. In the below code we have Profile as a parameter, and it can be accessed as ${params.Profile}. Lets paste the code in the pipeline script as we did earlier.

pipeline {
  agent any
  parameters {
    string(name: 'Profile', defaultValue: 'devops-engineer', description: 'I am devops guy') 
}
 stages {
    stage('Testing DEVOPS') {
       steps {
          echo "${params.Profile} is a cloud profile"
       }
     }
   }
}
  • Lets build the Jenkins pipeline now.
  • Next verify the console output
  • Similarly, we can use different parameter types, such as the following
pipeline {
    agent any
    parameters {
        string(name: 'PERSON', defaultValue: 'AutomateInfra', description: 'PERSON')
        text(name: 'BIOGRAPHY', defaultValue: '', description: 'BIOGRAPHY')
        booleanParam(name: 'TOGGLE', defaultValue: true, description: 'TOGGLE')
        choice(name: 'CHOICE', choices: ['One', 'Two', 'Three'], description: 'CHOICE')
        password(name: 'PASSWORD', defaultValue: 'SECRET', description: 'PASSWORD')
    }
    stages {
        stage('All-Parameters') {
            steps {
                echo "I am ${params.PERSON}"
                echo "Biography: ${params.BIOGRAPHY}"
                echo "Toggle: ${params.TOGGLE}"
                echo "Choice: ${params.CHOICE}"
                echo "Password: ${params.PASSWORD}"
            }
        }
    }
}

How to work with Input Parameters

The input parameter allows you to provide an input using an input step. The pipeline is paused until the input is provided. Lets see a quick example in which the Jenkins job will prompt with a “Should we continue?” message. Until we approve, it will remain paused; if we reject, it will abort.

pipeline {
    agent any
    stages {
        stage('Testing input condition') {
            input {
                message "Should we continue?"
                ok "Yes, we should."
                submitter "automateinfra"
                parameters {
                    string(name: 'PERSON', defaultValue: 'Automate', description: 'Person')
                }
            }
            steps {
                echo "Hello, ${PERSON}, nice to meet you."
            }
        }
    }
}
  • Lets paste the content in Jenkins pipeline script and click on build now.
  • Let us verify by clicking on build Now

Conclusion

In this tutorial we learnt what CI/CD is and about Jenkins, the open source CI/CD tool. We covered how to write a pipeline and the syntax of Jenkins Pipeline using its DSL (domain-specific language). We also went in depth on Jenkins Pipeline, and created and executed a basic Jenkins pipeline.

Hope this tutorial gives you a kick start on how to work with Jenkins pipelines and execute them. If you like this, please share it.

How to Create and Invoke AWS Lambda function using Terraform step by step

Managing your applications on servers and hardware has always been a challenge for developers and system administrators. Some of the challenges are memory leaks, storage issues, systems that stop responding, files corrupted by human error, and many more. To avoid this, AWS launched its widely used and cost-effective serverless service, AWS Lambda, which works with almost all popular languages.

AWS Lambda doesn’t require any hardware or servers to work on; it runs on serverless technology. In this tutorial we will learn how to create a Lambda function and invoke it using the AWS Management Console and Terraform. Now lets dive in.

Table of Content

  1. What is AWS Lambda ?
  2. Prerequisites
  3. How to create a basic Lambda function using AWS Management console
  4. How to Install terraform on ubuntu 18.04 LTS ?
  5. Terraform Configuration Files and Structure
  6. Configure terraform files to build AWS Lambda using Terraform
  7. Conclusion

What is AWS Lambda ?

AWS Lambda is a serverless AWS service which doesn’t require any infrastructure to run. The AWS Lambda service runs code without needing any server to manage it. It is a very scalable service: when required, it can scale up to thousands of requests per second. The best part of this service is that you only pay for the compute time you use. With this service you don’t require any administration such as managing memory, CPU, network and other resources.

AWS Lambda runs code in various languages such as Node.js, Python, Ruby, Java, Go and .NET. AWS Lambda is generally triggered by certain events, such as:

  • Change in AWS S3 ( Simple Storage service ) data like upload, delete or update.
  • Update of any tables in DynamoDB
  • API Gateway requests
  • Any data process in Amazon kinesis

AWS Lambda allows you to create a function, invoke it, and then monitor it with logs or data traces.

Prerequisites

  • You must have an AWS account with full Lambda access in order to set up a Lambda function. If you don’t have an AWS account, please create an account from here AWS account.
  • Ubuntu machine to run terraform, if you don’t have any machine you can create a ec2 instance on AWS account
  • Recommended to have 4GB RAM
  • At least 5GB of drive space
  • The Ubuntu machine should have an IAM role attached with Lambda function creation permissions, or administrator permissions, which is always easiest for demos.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

How to create a basic Lambda function using AWS Management console

  • Open the AWS Management Console and search for Lambda in the search bar at the top
  • Once the Lambda page opens, click on Create function
  • Now, for the demo we will use Author from scratch as the function type, provide the name of the function and the language in which you would like to code, and finally click on Create function
  • After the function is successfully created, click on TEST
  • Now enter the name of the event
  • Now, again click on TEST

This confirms that a very basic sample AWS Lambda function was created and invoked successfully. You can also invoke it from the AWS CLI, as sketched below.
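A minimal sketch of invoking the same function from the AWS CLI, where my-demo-function is a placeholder for the name you gave your function:

aws lambda list-functions --query 'Functions[].FunctionName'      # confirm the function exists
aws lambda invoke --function-name my-demo-function response.json  # invoke it and save the payload
cat response.json                                                 # inspect the returned payload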

How to Install Terraform on Ubuntu 18.04 LTS

  • Update your already existing system packages.
sudo apt update
  • Download the latest version of terraform in opt directory
wget https://releases.hashicorp.com/terraform/0.14.8/terraform_0.14.8_linux_amd64.zip
  • Install zip package which will be required to unzip
sudo apt-get install zip -y
  • unzip the Terraform download zip file
unzip terraform*.zip
  • Move the executable to a directory on your PATH
sudo mv terraform /usr/local/bin
  • Verify the terraform by checking terraform command and version of terraform
terraform               # To check if terraform is installed 

terraform -version      # To check the terraform version  
  • This confirms that terraform has been successfully installed on ubuntu 18.04 machine.

Terraform Configuration Files and Structure

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf : This file contains code that creates or imports AWS resources.
  • vars.tf : This file defines the variable types and optionally sets default values.
  • output.tf: This file declares the outputs of the AWS resources. The output is displayed after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables which we declared in vars.tf
  • provider.tf: This file is very important. You need to provide the details of the provider, such as AWS, Oracle or Google, so that Terraform can communicate with that provider and work with its resources.

Configure Terraform files to build AWS Lambda using Terraform

In this demonstration we will create an IAM role and an IAM policy which will be assumed by Lambda to invoke the function. Later in this tutorial we will create and invoke the Lambda function with the proper configuration. Lets get started and configure the Terraform files required for creating the AWS Lambda function in an AWS account.

  • Create a folder inside opt directory
mkdir /opt/terraform-lambda-demo
cd /opt/terraform-lambda-demo
  • Now create a file main.tf inside the directory you’re in
vi main.tf
  • Paste the below content in main.tf file

main.tf

# To Create IAM role and attach a policy so that Lambda can assume the role

resource "aws_iam_role" "lambda_role" {
 count  = var.create_function ? 1 : 0
 name   = var.iam_role_lambda
 assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
# Generates IAM Policy document in JSON format.

data "aws_iam_policy_document" "doc" {
  statement {
  actions    = var.actions
  effect     = "Allow"
  resources  = ["*"]
    }
}

# IAM policy for logging from a lambda

resource "aws_iam_policy" "iam-policy" {

 count        = var.create_function ? 1 : 0
  name         = var.iam_policy_name
  path         = "/"
  description  = "IAM policy for logging from a lambda"
  policy       = data.aws_iam_policy_document.doc.json
}

# Policy Attachment on role.

resource "aws_iam_role_policy_attachment" "policy_attach" {
  count       = var.create_function ? 1 : 0
  role        = join("", aws_iam_role.lambda_role.*.name)
  policy_arn  = join("", aws_iam_policy.iam-policy.*.arn)
}

# Lambda Layers allow you to reuse code across multiple lambda functions.
# A layer is a .zip file archive that contains libraries, a custom runtime, or other dependencies.
# Layers let you keep your deployment package small, which makes development easier.

resource "aws_lambda_layer_version" "layer_version" {
  count               = length(var.names) > 0 && var.create_function ? length(var.names) : 0
  filename            = length(var.file_name) > 0 ?  element(var.file_name,count.index) : null
  layer_name          = element(var.names, count.index)
  compatible_runtimes = element(var.compatible_runtimes, count.index)
}

# Generates an archive from content, a file, or directory of files.

data "archive_file" "default" {
  count       = var.create_function && var.filename != null ? 1 : 0
  type        = "zip"
  source_dir  = "${path.module}/files/"
  output_path = "${path.module}/myzip/python.zip"
}

# Create a lambda function

resource "aws_lambda_function" "lambda-func" {
  count                          = var.create_function ? 1 :0
  filename                       = var.filename != null ? "${path.module}/myzip/python.zip"  : null
  function_name                  = var.function_name
  role                           = join("",aws_iam_role.lambda_role.*.arn)
  handler                        = var.handler
  layers                         = aws_lambda_layer_version.layer_version.*.arn
  runtime                        = var.runtime
  depends_on                     = [aws_iam_role_policy_attachment.policy_attach]
}

# Give External source (like CloudWatch Event, SNS or S3) permission to access the Lambda function.


resource "aws_lambda_permission" "default" {
  count   = length(var.lambda_actions) > 0 && var.create_function ? length(var.lambda_actions) : 0
  action        = element(var.lambda_actions,count.index)
  function_name = join("",aws_lambda_function.lambda-func.*.function_name)
  principal     = element(var.principal,count.index)

}
  • Now create another file, vars.tf, which should contain all the variables.

vars.tf

variable "create_function" {
  description = "Controls whether Lambda function should be created"
  type = bool
  default = true  
}
variable "iam_role_lambda" {}
variable "runtime" {}
variable "handler" {}
variable "actions" {
  type = list(any)
  default = []
  description = "The actions for Iam Role Policy."
}
 
variable "iam_policy_name" {}
variable "function_name" {}
variable "names" {
  type        = list(any)
  default     = []
  description = "A unique name for your Lambda Layer."
}
 
variable "file_name" {
  type        = list(any)
  default     = []
  description = "A unique file_name for your Lambda Layer."
}
variable "filename" {}
 
variable "create_layer" {
  description = "Controls whether layer should be created"
  type = bool
  default = false  
}
 
variable "lambda_actions" {
  type        = list(any)
  default     = []
  description = "The AWS Lambda action you want to allow in this statement. (e.g. lambda:InvokeFunction)."
}
 
variable "principal" {
  type        = list(any)
  default     = []
  description = "The principal who is getting this permission. e.g. s3.amazonaws.com, an AWS account ID, or any valid AWS service principal such as events.amazonaws.com or sns.amazonaws.com."
}
 
variable "compatible_runtimes" {
  type        = list(any)
  default     = []
  description = "A list of Runtimes this layer is compatible with. Up to 5 runtimes can be specified."
}
  • Next is to set the values of variables which we declared earlier in vars.tf. Lets create another file and name it terraform.tfvars

terraform.tfvars

iam_role_lambda = "iam_role_lambda"
actions = [
    "logs:CreateLogStream",
    "logs:CreateLogGroup",
    "logs:PutLogEvents"
]
lambda_actions = [
     "lambda:InvokeFunction"
  ]
principal= [
      "events.amazonaws.com" , "sns.amazonaws.com"
]
compatible_runtimes = [
     ["python3.8"]
]
runtime  = "python3.8"
iam_policy_name = "iam_policy_name"
 names = [
    "python_layer"
  ]
file_name = ["myzip/python.zip" ]
  
filename = "files"   
handler = "index.lambda_handler"
function_name = "terraformfunction"
  • Now create a directory called files inside /opt/terraform-lambda-demo and create a file inside it named index.py
cd /opt/terraform-lambda-demo
mkdir files/
vi /opt/terraform-lambda-demo/files/index.py

NOTE: We will use Python for this Lambda function

  • Paste the below Python code in /opt/terraform-lambda-demo/files/index.py which will be executed.
# index.py

import os
import json

def lambda_handler(event, context):
    json_region = os.environ['AWS_REGION']
    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json"
        },
        "body": json.dumps({
            "Region ": json_region
        })
    }
  • Your folder structure should look like below
  • Now your files and code are ready for execution. Initialize Terraform
terraform init
  • Terraform initialized successfully; now it’s time to see the plan, which is a kind of blueprint before deployment. We generally use plan to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, it’s time to actually deploy the code using apply.
terraform apply
  • Let us verify in the AWS Management Console that the AWS Lambda function was created successfully
  • Invoke the Lambda function and validate the response (a CLI sketch follows below)
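If you prefer the command line, a minimal sketch to invoke the function created above (function_name was set to terraformfunction in terraform.tfvars):

aws lambda invoke --function-name terraformfunction response.json
cat response.json     # should show statusCode 200 and the region returned by index.py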

Great, the Lambda function executed successfully and we can see a proper response from the Python application.

Conclusion

In this demonstration we learnt how to create an AWS Lambda function using the AWS Management Console and invoke it. Later in this tutorial we learnt to create and invoke the Lambda function with the proper configuration using Terraform.

Lambda is AWS’s serverless and cost-effective service, which is widely used everywhere, and this will help you get started with it in your organization. Once you are familiar with AWS Lambda, I am sure you will forget about using servers and deploying code on servers.

Please like and share with your friends if you like it. Hope this tutorial will be helpful.

Brilliant Guide to Check all possible ways to view Disk usage on Ubuntu Machine

Monitoring application or system disk utilization has always been a crucial responsibility of any IT engineer. In the IT world, with so much software, automation and tooling, it is very important to keep track of disk utilization regularly.

Having said that, in this tutorial we will show you the best commands and tools to work with disk utilization. Please follow along to read and see these commands and their usage.

Table of content

  1. Check Disk Space using disk free or disk filesystems command ( df )
  2. Check Disk Space using disk usage command ( du )
  3. Check Disk Usage using ls command
  4. Check Disk Usage using pydf command
  5. Check Disk Usage using Ncdu command( Ncurses Disk Usage )
  6. Check Disk Usage using duc command
  7. conclusion

Check Disk Space using disk free or disk filesystems command (df)

It stands for disk free. This command provides us information about the available space and used space on a file system. There are multiple parameters which can be passed along with this utility to provide additional outputs. Lets look at some of the commands from this utility.

  • To see all disk space available on all the mounted file systems on ubuntu machine.
df
  • To see all disk space available on all the mounted file systems on ubuntu machine in human readable format.
    • You will notice a difference between this command’s output and the previous one: instead of 1K-blocks you will see sizes in a human-readable format.
df -h
  • To check the disk usage along with type of filesystem
df -T
  • To check disk usage of particular Filesystem
df /dev/xvda1
  • To check disk usage of multiple directories.
df -h  /opt /var /etc /lib
  • To check only Percent of used disk space
df -h --output=source,pcent
  • To check data usage based on filesystem wise
df -h -t ext4

Check Disk Space using disk usage command ( du )

The du command provides disk usage information, covering space utilization of files and directories. Lets see some of the examples.

  • To check disk usage of directory
du /lib # Here we are taking lib directory
  • To check disk usage of directory with different block size type .
    • M for MB
    • G for GB
    • T for TB
du -BM /var
  • To check disk usage according to the size
    • Here s represents summarize
    • Here k represents size in KB , you can use M, G or T and so on
    • Here sort represents sort
    • Here n represents in numerical order
    • Here r represents in reverse order
du -sk /opt/* | sort -nr
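Building on the same idea, here is a minimal sketch to show the ten largest first-level directories under /var in human-readable sizes (sort -h understands the K/M/G suffixes):

du -h --max-depth=1 /var 2>/dev/null | sort -rh | head -10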

Check Disk Usage using ls command

The ls command is used for listing files, but it also provides information about the disk space used by directories and files. Lets see some of these commands.

  • To list the files in human readable format.
ls -lh
  • To list files in descending order of size (the -S flag sorts by file size, largest first).
ls -lS

Check Disk Usage using pydf command

pydf is a python based command-line tool which is used to display disk usage with different colors. Lets dive into command now.

  • To check the disk usage with pydf
pydf -h 

Check Disk Usage using Ncdu command (Ncurses Disk Usage)

Ncdu is a disk utility for Unix systems. It provides a text-based user interface built on the ncurses programming library. Let us see a command from Ncdu.

ncdu

Check Disk Usage using duc command

Duc is a command-line utility which creates and maintains a disk usage database and lets you query it.

  • Before we run a command using duc be sure to install duc package.
sudo apt install duc
  • duc is successfully installed; now lets run a command to index a directory
duc index /usr
  • To list the disk usage using duc command with user interface
duc ui /usr

Conclusion

There are various ways to identify and view disk usage on a Linux or Ubuntu operating system. In this tutorial we learnt and showed the best commands and disk utilities to work with. Now you are ready to troubleshoot disk usage issues, work with your files or applications, and identify disk utilization.

Hope this tutorial gave you an in-depth understanding of the best commands to work with disk usage. Hoping you never face any disk issues in your organization. Please share if you like.

How to create AWS EKS cluster using Terraform and connect Kubernetes cluster with ubuntu machine.

Working with a container orchestrator like Kubernetes has always been a top priority. Moving from self-managed Kubernetes to EKS has benefited almost everyone who has used it, as AWS manages the infrastructure, deployments and scaling of the cluster very well.

Having said that, there is still some work to do, such as creating the AWS EKS cluster with the right permissions and policies, so why not automate this as well? That is exactly what will happen in this tutorial, as we will use Terraform to automate the creation of the EKS cluster.

So in this tutorial we will configure a few Terraform files, and then you will be able to create as many clusters as you like with just a few commands. Please follow along.

Table of content

  • What is AWS EKS
  • Prerequisites
  • How to install Terraform on ubuntu machine
  • Terraform Configuration Files and Structure
  • Configure Terraform files to create AWS EKS cluster
  • Configure & Connect your Ubuntu Machine to communicate with your cluster
  • Conclusion

What is AWS EKS ( Amazon Elastic Kubernetes Services) ?

Amazon provides its own service, AWS EKS, where you can host Kubernetes without worrying about infrastructure such as Kubernetes nodes and the installation of Kubernetes itself. It gives you a managed platform to host Kubernetes.

Some features of Amazon EKS ( Elastic kubernetes service)

  • It expands and scales across many availability zones so that there is always high availability.
  • It automatically scales and replaces any impacted or unhealthy node.
  • It is integrated with various other AWS services such as IAM, VPC, ECR & ELB etc.
  • It is a very secure service.

How does AWS EKS service work?

  • The first step in EKS is to create an EKS cluster using the AWS CLI or the AWS Management Console.
  • Next, you can have your own EC2 machines where you deploy applications, or deploy to AWS Fargate, which manages them for you.
  • Now connect to the Kubernetes cluster with kubectl commands.
  • Finally, deploy and run applications on the EKS cluster.

Prerequisites

  • An Ubuntu machine to run Terraform, preferably version 18.04+; if you don’t have any machine you can create an EC2 instance on your AWS account
  • Recommended to have 4GB RAM
  • At least 5GB of drive space
  • The Ubuntu machine should have an IAM role attached with full AWS EKS permissions, or administrator permissions, which is always easiest for demos.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

How to Install Terraform on Ubuntu 18.04 LTS

  • Update your already existing system packages.
sudo apt update
  • Download the latest version of terraform in opt directory
wget https://releases.hashicorp.com/terraform/0.14.8/terraform_0.14.8_linux_amd64.zip
  • Install zip package which will be required to unzip
sudo apt-get install zip -y
  • unzip the Terraform download zip file
unzip terraform*.zip
  • Move the executable to a directory on your PATH
sudo mv terraform /usr/local/bin
  • Verify the terraform by checking terraform command and version of terraform
terraform               # To check if terraform is installed 

terraform -version      # To check the terraform version  
  • This confirms that terraform has been successfully installed on ubuntu 18.04 machine.

Terraform Configuration Files and Structure

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf : This file contains code that creates or imports AWS resources.
  • vars.tf : This file defines the variable types and optionally sets default values.
  • output.tf: This file declares the outputs of the AWS resources. The output is displayed after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables which we declared in vars.tf
  • provider.tf: This file is very important. You need to provide the details of the provider, such as AWS, Oracle or Google, so that Terraform can communicate with that provider and work with its resources.

Configure Terraform files to create AWS EKS cluster

In this demonstration we will create an IAM role and an IAM policy which we will attach to that role. Later in this tutorial we will create the Kubernetes cluster with the proper network configuration. Lets get started and configure the Terraform files required for creating AWS EKS in an AWS account.

  • Create a folder inside opt directory
mkdir /opt/terraform-eks-demo
cd /opt/terraform-eks-demo
  • Now create a file main.tf inside the directory you’re in
vi main.tf
  • This is our main.tf file and paste the below code inside the file.
# Creating IAM role with assume policy so that it can be assumed while connecting with Kubernetes cluster.

resource "aws_iam_role" "iam-role-eks-cluster" {
  name = "terraform-eks-cluster"
  assume_role_policy = <<POLICY
{
 "Version": "2012-10-17",
 "Statement": [
   {
   "Effect": "Allow",
   "Principal": {
    "Service": "eks.amazonaws.com"
   },
   "Action": "sts:AssumeRole"
   }
  ]
 }
POLICY
}

# Attach both EKS-Service and EKS-Cluster policies to the role.

resource "aws_iam_role_policy_attachment" "eks-cluster-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = "${aws_iam_role.iam-role-eks-cluster.name}"
}

resource "aws_iam_role_policy_attachment" "eks-cluster-AmazonEKSServicePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
  role       = "${aws_iam_role.iam-role-eks-cluster.name}"
}

# Create security group for AWS EKS.

resource "aws_security_group" "eks-cluster" {
  name        = "SG-eks-cluster"
  vpc_id      = "vpc-XXXXXXXXXXX"  # Use your VPC here

  egress {                   # Outbound Rule
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {                  # Inbound Rule
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

}

# Create EKS cluster

resource "aws_eks_cluster" "eks_cluster" {
  name     = "terraformEKScluster"
  role_arn =  "${aws_iam_role.iam-role-eks-cluster.arn}"
  version  = "1.19"

  vpc_config {             # Configure EKS with vpc and network settings 
   security_group_ids = ["${aws_security_group.eks-cluster.id}"]
   subnet_ids         = ["subnet-XXXXX","subnet-XXXXX"] # Use Your Subnets here
    }

  depends_on = [
    "aws_iam_role_policy_attachment.eks-cluster-AmazonEKSClusterPolicy",
    "aws_iam_role_policy_attachment.eks-cluster-AmazonEKSServicePolicy",
   ]
}



# Creating IAM role for EKS worker nodes with an assume-role policy so that EC2 instances can assume it.


resource "aws_iam_role" "eks_nodes" {
  name = "eks-node-group"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_nodes.name
}

resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_nodes.name
}

resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_nodes.name
}

# Create EKS cluster node group

resource "aws_eks_node_group" "node" {
  cluster_name    = aws_eks_cluster.eks_cluster.name
  node_group_name = "node_tuto"
  node_role_arn   = aws_iam_role.eks_nodes.arn
  subnet_ids      = ["subnet-","subnet-"]

  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }

  # Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
  ]
}
 
 
  • Below is how the directory of our demo should look.
  • Now your files and code are ready for execution. Initialize Terraform:
terraform init
  • Terraform initialized successfully; now it's time to see the plan, which acts as a blueprint before deployment. We generally use the plan to confirm which resources are going to be provisioned or destroyed.
terraform plan
  • After verification, it's time to actually deploy the code using apply.
terraform apply
  • The EKS cluster generally takes a few minutes to launch.
  • Let's verify the AWS EKS cluster and the other components created by Terraform; a few AWS CLI checks are also shown after this list.
  • Verify the IAM role with the proper permissions.
  • Now verify the Amazon EKS cluster.
  • Finally, verify the node group of the cluster.
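
If you prefer to verify from the command line instead of the AWS console, the checks below are one way to do it (a sketch that assumes the resource names used in the configuration above and the us-east-2 region; adjust them to your setup):

# Confirm the IAM role exists and has the EKS policies attached
aws iam get-role --role-name terraform-eks-cluster
aws iam list-attached-role-policies --role-name terraform-eks-cluster

# Confirm the cluster is ACTIVE
aws eks describe-cluster --name terraformEKScluster --region us-east-2 --query cluster.status

# Confirm the node group was created
aws eks list-nodegroups --cluster-name terraformEKScluster --region us-east-2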

Configure & Connect your Ubuntu Machine to communicate with your cluster

Up to now we have created the Kubernetes cluster in AWS EKS with the proper IAM role permissions and configuration, but please make sure the AWS credentials configured on your local machine match the same IAM user or IAM role that was used to create the cluster. In other words, use the same IAM role credentials on the local machine that were used to create the Kubernetes cluster.

In this demonstration we are using the IAM role credentials of the EC2 instance from which we created AWS EKS using Terraform, so we can perform the steps below on the same machine.

  • Prerequisites: make sure you have the AWS CLI and kubectl installed on the Ubuntu machine in order to make the connection. If you don't have these two, don't worry; please visit our other article to find both installations.
  • On the Ubuntu machine, configure the kubeconfig file so that your local machine can communicate with the Kubernetes cluster in AWS EKS
aws eks update-kubeconfig --region us-east-2 --name terraformEKScluster
  • Now, finally, test the communication between the local machine and the cluster.
kubectl get svc

Great, we can see the connectivity from our local machine to the Kubernetes cluster.

Conclusion:

In this tutorial, we first went through a detailed view of what AWS Elastic Kubernetes Service is and how to create a Kubernetes cluster using Terraform. Then we demonstrated the connection between the Kubernetes cluster and the kubectl client on an Ubuntu machine.

Hope you had a great time going through this tutorial, with its detailed explanations and practical examples. If you liked it, please share it with your friends and spread the word.

Getting Started with Amazon Elastic kubernetes Service (AWS EKS)

Kubernetes is a scalable open-source tool that manages container orchestration very effectively. It provides a platform to deploy your applications with a few commands. AWS EKS stands for Amazon Elastic Kubernetes Service, an AWS-managed service that takes care of everything from managing the infrastructure to deployments and further scaling of containerized applications.

In this tutorial, you will learn everything from the basics of Kubernetes to Amazon EKS.

Table of Contents

  1. What is Kubernetes?
  2. What is Amazon Elastic Kubernetes Service (Amazon EKS)
  3. Prerequisites
  4. Install Kubectl in Windows
  5. Install Kubectl in Linux
  6. How to create new Kubernetes cluster in Amazon EKS
  7. Configure & Connect your Local machine to communicate with your cluster
  8. Conclusion

What is Kubernetes?

Kubernetes is an open-source container orchestration engine for automating the deployment, scaling, and management of containerized applications. Originally developed by Google, it is also known as k8s. It can run on any platform, such as on-premises, hybrid, or public cloud.

Features of kubernetes

  1. Kubernetes scales very well.
  2. Load balancing
  3. Automatic restarts if required
  4. Self-healing and automatic rollbacks
  5. You can manage configuration as well, such as secrets or passwords
  6. Kubernetes can mount various types of storage, such as EFS and local storage.
  7. Kubernetes works automatically with various volume and storage plugins such as NFS, Flocker, etc.

Kubernetes Components

  • Pod: Pods are groups of one or more containers that share storage and network.
  • Service: Services are used when you want to expose an application outside of your local environment.
  • Ingress: Ingress helps in exposing HTTP/HTTPS routes from the outside world to the services in your cluster.
  • ConfigMap: Pods consume ConfigMaps as environment variables, command-line arguments, or configuration files.
  • Secrets: As the name suggests, Secrets store sensitive information such as passwords, OAuth tokens, SSH keys, etc.
  • Volumes: These provide persistent storage for containers.
  • Deployment: A Deployment is an additional layer that defines how Pods and containers should be created, using YAML files. A quick way to list these objects with kubectl is shown after this list.
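
To make these objects concrete, here is one way to list them with kubectl on a running cluster (a sketch; it assumes your kubeconfig already points at a working cluster and that such objects exist):

# List the common Kubernetes objects described above
kubectl get pods,services,deployments,daemonsets -A    # workloads and services across all namespaces
kubectl get configmaps,secrets -n default              # configuration data and sensitive data
kubectl get ingress -A                                  # HTTP/HTTPS routing rules
kubectl get persistentvolumes                           # cluster-wide storage volumes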

What is AWS EKS (Amazon Elastic Kubernetes Services) ?

Amazon provides its own managed service, AWS EKS, where you can host Kubernetes without needing to install, operate, and maintain your own Kubernetes control plane or nodes. It gives you a platform to host the Kubernetes control plane and the applications or services inside it. Some basic points about EKS are as follows:

  • It runs and scales the Kubernetes control plane across multiple Availability Zones so that there is always high availability.
  • It automatically scales control plane instances and replaces any instance that is impacted or unhealthy.
  • It is integrated with various other AWS services, such as IAM for authentication, VPC for isolation, ECR for container images, and ELB for load distribution.
  • It is a very secure service.

How does AWS EKS service work?

  • The first step in EKS is to create an EKS cluster using the AWS CLI or the AWS Management Console; a CLI sketch is shown after this list.
  • Next, launch self-managed EC2 instances where you deploy your applications, or deploy the workloads to AWS Fargate, which manages the compute for you.
  • After the cluster is set up, connect to the Kubernetes cluster using kubectl commands.
  • Finally, deploy and run applications on the EKS cluster.
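
As a rough illustration of the first step, a cluster can also be created from the AWS CLI instead of the console (a sketch only; the cluster name, account ID, role ARN, subnet IDs, and security group ID below are placeholders you must replace with your own values):

# Create an EKS cluster from the CLI (all values are placeholders)
aws eks create-cluster \
  --name my-eks-cluster \
  --role-arn arn:aws:iam::111122223333:role/eks-cluster-role \
  --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc

# Wait until the cluster status becomes ACTIVE
aws eks wait cluster-active --name my-eks-cluster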

Prerequisites

  • You must have an AWS account with full access to AWS EKS in order to set up the cluster. If you don't have an AWS account, please create one from here: AWS account.
  • The AWS CLI installed. If you don't have it already, install it from here.

Install Kubectl on Windows machines

  • Open PowerShell and run the command.
curl -o kubectl.exe https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/windows/amd64/kubectl.exe
  • Now verify in the C drive that the binary file has been downloaded successfully.
  • Now run the kubectl binary file and verify the client.
  • Verify its version with the following command
kubectl version --short --client

Install Kubectl on Linux machine

  • Download the kubectl binary using the curl command on the Ubuntu machine, under the home directory, i.e. $HOME
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
  • Apply execute permissions to the binary
chmod +x ./kubectl
  • Copy the binary to a folder in your PATH
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
  • Verify the kubectl version on ubuntu machine
kubectl version --short --client

Amazon EKS Clusters

An Amazon EKS cluster has the following components:

  1. The Amazon EKS control plane is not shared between accounts or with any other cluster. The control plane contains at least two API servers, which are exposed via the Amazon EKS endpoint associated with the cluster, and three etcd instances backed by Amazon EBS volumes that are encrypted using AWS KMS. Amazon EKS automatically monitors the load on the control plane and replaces unhealthy instances when needed. Amazon EKS uses Amazon VPC network policies to restrict traffic between control plane components to within a single cluster.
  2. Amazon EKS nodes are registered with the control plane via the API server endpoint and a certificate file that is created for your cluster. Your Amazon EKS cluster can schedule pods on any combination of Self-managed nodes, Amazon EKS Managed node groups, and AWS Fargate.
    • Self-managed nodes
      • Can run containers that require Windows and Linux.
      • Can run workloads that require Arm processors.
      • All of your pods on each of your nodes share a kernel runtime environment with other pods.
      • If the pod requires more resources than requested, and resources are available on the node, the pod can use additional resources.
      • Can assign IP addresses to pods from a different CIDR block than the IP address assigned to the node.
      • Can SSH into node
    • Amazon EKS Managed node groups
      • Can run containers that require Linux.
      • Can run workloads that require Arm processors.
      • All of your pods on each of your nodes share a kernel runtime environment with other pods.
      • If the pod requires more resources than requested, and resources are available on the node, the pod can use additional resources.
      • Can assign IP addresses to pods from a different CIDR block than the IP address assigned to the node.
      • Can SSH into node
    • AWS Fargate
      • Can run containers that require Linux.
      • Each pod has a dedicated kernel.
      • The pod can be re-deployed using a larger vCPU and memory configuration if needed.
      • There is no node to manage.
      • As there is no node, you cannot SSH into one.
  3. Workloads: A pod contains one or more containers. Workloads define the applications running on a Kubernetes cluster, and every workload controls pods. There are five types of workloads on a cluster.
    • Deployment: Ensures that a specific number of pods run and includes logic to deploy changes
    • ReplicaSet: Ensures that a specific number of pods run. Can be controlled by deployments.
    • StatefulSet: Manages the deployment of stateful applications
    • DaemonSet: Ensures that a copy of a pod runs on all (or some) nodes in the cluster
    • Job: Creates one or more pods and ensures that a specified number of them run to completion
  • By default, Amazon EKS clusters have three workloads (a kubectl check is shown after this list):
    • coredns: A Deployment that deploys two pods that provide name resolution for all pods in the cluster.
    • aws-node: A DaemonSet that deploys one pod to each Amazon EC2 node in your cluster; it runs the AWS VPC CNI controller, which provides VPC networking functionality to the pods and nodes in your cluster.
    • kube-proxy: A DaemonSet that deploys one pod to each Amazon EC2 node in your cluster and maintains the network rules on nodes that enable network communication to your pods.
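
On a running EKS cluster, one way to see these default workloads is with kubectl (it assumes your kubeconfig already points at the cluster; all three live in the kube-system namespace):

# The default EKS workloads run in the kube-system namespace
kubectl get deployment coredns -n kube-system
kubectl get daemonset aws-node kube-proxy -n kube-system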

Creating Kubernetes cluster in Amazon EKS

In this demonstration we will create and set up a Kubernetes cluster in Amazon EKS using the AWS Management Console and AWS CLI commands. Before we start, make sure you have a VPC created and an IAM role with full access to EKS.

  • We already have one VPC in every AWS account by default. If you wish to create another VPC specifically for AWS EKS in AWS account you can create it.
  • Hop over to the IAM service and create an IAM policy with full EKS permissions.
  • Click on Create policy and then click on choose service
  • Now give a name to the policy and click create
  • Now go to IAM roles and create a role.
  • Now choose the EKS service and then select EKS cluster as your use case:
  • Give a name to the role and then hit Create role.
  • Now attach the policy we created to the IAM role.
  • Also, a point to note here: please add STS permissions to the role's trust relationship, which will be required when the client makes requests.
  • Make sure your JSON trust policy looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*",
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
  • Now that we are done with the IAM role and policy attachment, we can create and work with the Kubernetes cluster.
  • Go to the AWS EKS console and click on Create cluster.
  • Now add all the configurations.
  • Now add the VPC details and two public subnets if you have them; you can skip the subnets for now.
  • Keep hitting NEXT and finally click on Create cluster.
  • Let's verify that the cluster is up and active. It takes some time for the cluster to come up.

The Kubernetes cluster on AWS EKS is now successfully created. Next, let's initiate communication from the kubectl client we installed to the Kubernetes cluster.

Configure & Connect your Local machine to communicate with your cluster

Up to now we have created the Kubernetes cluster in AWS EKS with the proper IAM role permissions and configuration, but please make sure the AWS credentials configured on your local machine match the same IAM user or IAM role that was used to create the cluster. In other words, use the same IAM user or IAM role credentials on the local machine that were used to create the Kubernetes cluster.

  • Open Visual Studio Code, Git Bash, or the command prompt.
  • Now configure the kubeconfig file so that your local machine can communicate with the Kubernetes cluster in AWS EKS
aws eks update-kubeconfig --region us-east-2 --name Myekscluster
  • Now, finally, test the communication between the local machine and the cluster.
kubectl get svc

Great, you can see the connectivity from our local machine to the Kubernetes cluster!

Create nodes on Kubernetes cluster

An Amazon EKS cluster can schedule pods on any combination of self-managed nodes, Amazon EKS managed node groups, and AWS Fargate.

Amazon EKS Managed node group

  • With Amazon EKS managed node groups you don't need to separately provision or register Amazon EC2 instances. All the managed nodes are part of an Amazon EC2 Auto Scaling group.
  • You can add a managed node group to new or existing clusters using the Amazon EKS console, eksctl, the AWS CLI, the AWS API, or AWS CloudFormation. Managed node groups manage the Amazon EC2 instances for you.
  • A managed node group's Auto Scaling group spans all of the subnets that you specify when you create the group.
  • Amazon EKS managed node groups can be launched in both public and private subnets.
  • You can create multiple managed node groups within a single cluster.

Creating Managed node group using AWS Management Console.

  • Go to the Amazon EKS page, navigate to the Configuration tab, select the Compute tab, and then choose Add Node Group. Fill in all the details, such as the name and the node IAM role that you created earlier while creating the cluster.
  • Next, on the Set compute and scaling configuration page, enter all the details such as the instance type and capacity type, and then click on NEXT.
  • Now add the networking details such as the VPC, subnets, and SSH key.
  • You can also find the details of the nodes from your local machine by running the following commands; a CLI alternative for creating the node group itself is sketched after these commands.
aws eks update-kubeconfig --region us-east-2 --name "YOUR_CLUSTER_NAME"
kubectl get nodes --watch
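
If you prefer the AWS CLI over the console, a managed node group can also be created with a command along these lines (a sketch only; the cluster name, account ID, node role ARN, and subnet IDs are placeholders):

# Create a managed node group from the CLI (all values are placeholders)
aws eks create-nodegroup \
  --cluster-name my-eks-cluster \
  --nodegroup-name my-node-group \
  --node-role arn:aws:iam::111122223333:role/eks-node-group-role \
  --subnets subnet-aaaa subnet-bbbb \
  --scaling-config minSize=1,maxSize=2,desiredSize=1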

Let's learn how to create Fargate (Linux) nodes and use them in the Kubernetes cluster.

  • In order to create Fargate (Linux) nodes, the first thing you need to do is create a Fargate profile. This profile is needed because when any pod gets deployed on Fargate, it is first matched against the desired configuration in the profile and only then deployed. The configuration contains permissions such as the ability of the pod to pull container images from ECR. You can find the steps to create a Fargate profile from here, and a CLI sketch follows this step.
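
For reference, a Fargate profile can also be created from the AWS CLI, roughly like this (a sketch only; the profile name, cluster name, pod execution role ARN, subnet IDs, and namespace are placeholders, and Fargate profiles accept only private subnets):

# Create a Fargate profile so pods in the selected namespace run on Fargate (all values are placeholders)
aws eks create-fargate-profile \
  --fargate-profile-name my-fargate-profile \
  --cluster-name my-eks-cluster \
  --pod-execution-role-arn arn:aws:iam::111122223333:role/eks-fargate-pod-execution-role \
  --subnets subnet-private-aaaa subnet-private-bbbb \
  --selectors namespace=default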

Conclusion

In this tutorial, we learnt a lot. First we went through a detailed view of what Kubernetes is and what Amazon Elastic Kubernetes Service, i.e. AWS EKS, is. Then we learnt how to install the Kubernetes client kubectl on Windows as well as on a Linux machine, and finally we created a Kubernetes cluster and connected to it using the kubectl client.

Hope you had a wonderful experience going through this ultimate guide to Kubernetes and EKS. If you liked it, please share it with your friends and spread the word.

How to Delete EBS Snapshots from AWS account using Shell script

AWS EBS, that is Elastic Block Store, is a very important and useful service provided by AWS. It is persistent block storage and is used by various applications deployed on AWS EC2 instances. Automation plays a vital role in provisioning and managing all of the infrastructure and its related components.

Having said that, in this tutorial we will learn what AWS EBS and EBS snapshots are, many useful things about storage types, and how to delete EBS snapshots using a shell script on AWS, step by step.

AWS EBS is like a pen drive for your instances: use it when necessary and share it with other instances.

Table of Contents

  1. What is Shell script ?
  2. What is AWS EBS ?
  3. What are EBS Snapshots in AWS ?
  4. Prerequisites
  5. Install AWS CLI Version 2 on windows machine
  6. How to Delete EBS Snapshots from AWS account using shell script
  7. Conclusion

What is Shell Scripting or Bash Scripting?

A shell script is simply a text file with a list of commands that could also be executed one by one on a terminal or shell. To make things a little easier and quicker, we write them in a single file and run them together as a group.

The main tasks performed by shell scripts are file manipulation, printing text, and program execution. We can include environment variables in a script so they can be used in multiple places; scripts that mainly run other programs and perform various activities around them are known as wrapper scripts.

A good shell script will have comments, preceded by a pound sign or hash mark (#), describing the steps. We can also include conditions or pipe commands together to build more creative scripts.

When we execute a shell script or function, a command interpreter goes through the text line by line, loop by loop, test by test, and executes each statement as it is reached, from top to bottom. A tiny example is shown below.
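
To tie these ideas together, here is a tiny illustrative script (not the snapshot-deletion script of this tutorial, just a minimal example of comments, a variable, and a loop):

#!/bin/bash
# A minimal shell script: a comment, a variable, and a loop

GREETING="Hello from a shell script"     # a variable used below

echo "$GREETING"

# Loop over a few items and print each one
for region in us-east-1 us-east-2 us-west-2; do
    echo "Region: $region"
done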

What is EBS ?

EBS stands for Amazon Elastic Block Store, which is persistent storage, much like a pen drive or hard disk. You can attach EBS volumes to AWS EC2 instances, and it is entirely possible to create your own file system on top of these EBS volumes.

EBS volumes are attached to AWS EC2 instances but are not tied to the instance's lifecycle; they remain persistent.

Amazon Elastic block store (EBS)

Key features of EBS

  • EBS volumes can be created in any Availability Zone.
  • An EBS volume cannot be directly attached to an instance in a different Availability Zone. You would need to create a snapshot, which is like a backup copy, restore that snapshot to a new volume, and then use the new volume in the other Availability Zone.

What are HDD and SSD storage ?

HDD ( Hard disk drive )

A hard disk drive is an older technology. It depends on spinning disks and platters to read and write data. A motor spins the platter whenever a request comes in to read or write data. The platter contains tracks, and each track contains several sectors. These drives run slowly and are less costly.

SSD ( Solid State drive )

A solid state drive is a newer technology. It uses flash memory, so it consumes less energy, runs much faster than an HDD, and is highly durable. It depends on electronic rather than mechanical parts, so it is easier to maintain and more efficient. SSDs are more costly than HDDs.

  • EBS volumes are further classified into 4 types:
    • General Purpose SSD: Used for general workloads such as booting a machine or test labs.
    • Provisioned IOPS SSD: Used for scalable, high-IOPS applications.
    • Throughput Optimized HDD: Low-cost magnetic storage whose performance depends on throughput rather than IOPS, used for workloads such as EMR and data warehouses.
    • Cold HDD: Also low-cost magnetic storage whose performance depends on throughput rather than IOPS.

How to create AWS EBS manually in AWS account?

  • You must have an AWS account to create AWS EBS volumes. If you don't have one, please create an AWS account.
  • Log in to the AWS console and search for the AWS EC2 service at the top.
  • Click on Create volume.
  • Now fill in all the details such as the volume type, size, IOPS, tags, etc.
  • Now click on Create volume and verify it.

What are EBS Snapshots in AWS ?

We just discussed EBS, which is storage. There is a high chance you will need backups to stay on the safe side. EBS snapshots are essentially backups of EBS volumes. You can back up your EBS volumes with point-in-time snapshots, which are incremental and are stored in Amazon S3. This saves a lot of time and space because each snapshot only stores the differences from the previous backup.

How to create EBS snapshots?

  • Go to AWS EBS console
  • Choose the AWS EBS volume for which you wish to create Snapshot
  • Add the description and Tag and then click on Create Snapshot.
  • Verify the Snapshot
  • If you wish to create AWS EBS snapshots using the AWS CLI, run the command below (make sure you have the AWS CLI installed; if not, the installation is explained later in this tutorial). A quick way to verify the snapshot afterwards is shown after the command.
aws ec2 create-snapshot --volume-id <vol-1234567890> --description "My volume snapshot"
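
Once the snapshot has been requested, one way to confirm it from the CLI is to list the snapshots owned by your account (the query fields shown are just a convenient subset):

# List snapshots owned by this account, with a few useful columns
aws ec2 describe-snapshots \
  --owner-ids self \
  --query "Snapshots[*].[SnapshotId,VolumeId,State,StartTime]" \
  --output table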

Prerequisites

  1. An AWS account to create the AWS IAM user. If you don't have one, please create an AWS account.
  2. Windows 7 or a later edition where you will execute the shell script.
  3. Python must be installed on the Windows machine, as it is required by the AWS CLI. If you want to install Python on the Windows machine, follow the steps here.
  4. You must have Git Bash already installed on your Windows machine. If you don't have it, install it from here.
  5. A code editor for writing the shell script on the Windows machine. I would recommend using Visual Studio Code; if you wish to install it, please find the steps here.

In this demo, we will use a shell script to delete EBS snapshots. In order to run shell scripts from your local Windows machine, you will need the AWS CLI installed and configured. So first let's install the AWS CLI and then configure it.

Install AWS CLI Version 2 on windows machine

  • Download the installer for the AWS CLI on the Windows machine from here