How to Create and Invoke an AWS Lambda Function Using Terraform, Step by Step

Managing applications on your own servers and hardware has always been a challenge for developers and system administrators. Common problems include memory leaks, storage issues, unresponsive systems, files corrupted by human error, and many more. To address this, AWS launched one of the most widely used and cost-effective "serverless" services, AWS Lambda, which works with almost all major code languages.

AWS Lambda doesn’t require any hardware or servers for you to manage; it is built on serverless technology. In this tutorial we will learn how to create a Lambda function and invoke it using both the AWS Management Console and Terraform. Now let’s dive in.

Table of Contents

  1. What is AWS Lambda?
  2. Prerequisites
  3. How to create a basic Lambda function using the AWS Management Console
  4. How to Install Terraform on Ubuntu 18.04 LTS
  5. Terraform Configuration Files and Structure
  6. Configure Terraform files to build AWS Lambda using Terraform
  7. Conclusion

What is AWS Lambda?

AWS Lambda is a serverless AWS service that doesn’t require you to provision any infrastructure. It runs your code without any servers to manage. It is a very scalable service and, when required, can handle even thousands of requests per second. The best part of this service is that you pay only for the compute time you actually use. With this service you don’t require any kind of administration such as managing memory, CPU, network, and other resources.

AWS Lambda runs code in various languages such as Node.js, Python, Ruby, Java, Go, and .NET. AWS Lambda is generally triggered by certain events, such as:

  • A change in AWS S3 (Simple Storage Service) data, such as an upload, delete, or update.
  • An update to any table in DynamoDB.
  • API Gateway requests.
  • Data processing in Amazon Kinesis.

AWS Lambda allows you to create a function, invoke it, and then monitor it with logs or data traces.

Prerequisites

  • You must have an AWS account with full Lambda access in order to set up a Lambda function. If you don’t have an AWS account, please create one from here: AWS account.
  • An Ubuntu machine to run Terraform; if you don’t have one, you can create an EC2 instance in your AWS account.
  • 4GB RAM is recommended.
  • At least 5GB of drive space.
  • The Ubuntu machine should have an IAM role attached with Lambda function creation permissions; administrator permissions also work well for demos.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

How to create a basic Lambda function using the AWS Management Console

  • Open the AWS Management Console and search for Lambda at the top.
  • Once the Lambda page opens, click on Create function.
  • For this demo we will use Author from scratch as the function type. Provide the name of the function and the language in which you would like to code, and finally click on Create function.
  • After the function is successfully created, click on TEST.
  • Now enter the name of the test event.
  • Now click on TEST again.

This confirms that a very basic sample AWS Lambda function was created and invoked successfully.

How to Install Terraform on Ubuntu 18.04 LTS

  • Update your existing system packages.
sudo apt update
  • Download Terraform (version 0.14.8 here) into the /opt directory.
wget https://releases.hashicorp.com/terraform/0.14.8/terraform_0.14.8_linux_amd64.zip
  • Install the zip package, which is required to unzip the archive.
sudo apt-get install zip -y
  • Unzip the downloaded Terraform zip file.
unzip terraform*.zip
  • Move the executable to a directory on your PATH.
sudo mv terraform /usr/local/bin
  • Verify the installation by running the terraform command and checking the Terraform version.
terraform               # To check if terraform is installed 

terraform -version      # To check the terraform version  
  • This confirms that Terraform has been successfully installed on the Ubuntu 18.04 machine.

Terraform Configuration Files and Structure

Let us first understand Terraform configuration files before running Terraform commands.

  • main.tf: This file contains the code that creates or imports AWS resources.
  • vars.tf: This file defines the variables and their types, and optionally sets default values.
  • output.tf: This file generates output from AWS resources; the output is displayed after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables we declared in vars.tf.
  • provider.tf: This file is very important. You provide the details of the provider (such as AWS, Oracle, or Google) so that Terraform can communicate with that provider and work with its resources. A minimal sketch follows this list.
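For reference, below is a minimal provider.tf sketch. The AWS provider is the one this tutorial needs; the region value is an assumption for this demo, so change it to the region you work in.
# provider.tf (minimal sketch; the region value is an assumption for this demo)
provider "aws" {
  region = "us-east-2"   # change to your own region
}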

Configure Terraform files to build AWS Lambda using Terraform

In this demonstration we will create an IAM role and an IAM policy that Lambda will assume in order to run the function. Later in this tutorial we will create and invoke the Lambda function with the proper configuration. Let’s get started and configure the Terraform files required to create an AWS Lambda function in your AWS account.

  • Create a folder inside the /opt directory and move into it.
mkdir /opt/terraform-lambda-demo
cd /opt/terraform-lambda-demo
  • Now create a file named main.tf inside the directory you’re in.
vi main.tf
  • Paste the below content into the main.tf file.

main.tf

# Create an IAM role with a trust policy so that Lambda can assume the role

resource "aws_iam_role" "lambda_role" {
 count  = var.create_function ? 1 : 0
 name   = var.iam_role_lambda
 assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
# Generates IAM Policy document in JSON format.

data "aws_iam_policy_document" "doc" {
  statement {
    actions   = var.actions
    effect    = "Allow"
    resources = ["*"]
  }
}

# IAM policy for logging from a lambda

resource "aws_iam_policy" "iam-policy" {

  count        = var.create_function ? 1 : 0
  name         = var.iam_policy_name
  path         = "/"
  description  = "IAM policy for logging from a lambda"
  policy       = data.aws_iam_policy_document.doc.json
}

# Policy Attachment on role.

resource "aws_iam_role_policy_attachment" "policy_attach" {
  count       = var.create_function ? 1 : 0
  role        = join("", aws_iam_role.lambda_role.*.name)
  policy_arn  = join("", aws_iam_policy.iam-policy.*.arn)
}

# Lambda Layers allow you to reuse code across multiple Lambda functions.
# A layer is a .zip file archive that contains libraries, a custom runtime, or other dependencies.
# Layers let you keep your deployment package small, which makes development easier.

resource "aws_lambda_layer_version" "layer_version" {
  count               = length(var.names) > 0 && var.create_function ? length(var.names) : 0
  filename            = length(var.file_name) > 0 ?  element(var.file_name,count.index) : null
  layer_name          = element(var.names, count.index)
  compatible_runtimes = element(var.compatible_runtimes, count.index)
}

# Generates an archive from content, a file, or directory of files.

data "archive_file" "default" {
  count       = var.create_function && var.filename != null ? 1 : 0
  type        = "zip"
  source_dir  = "${path.module}/files/"
  output_path = "${path.module}/myzip/python.zip"
}

# Create a lambda function

resource "aws_lambda_function" "lambda-func" {
  count                          = var.create_function ? 1 : 0
  filename                       = var.filename != null ? "${path.module}/myzip/python.zip"  : null
  function_name                  = var.function_name
  role                           = join("",aws_iam_role.lambda_role.*.arn)
  handler                        = var.handler
  layers                         = aws_lambda_layer_version.layer_version.*.arn
  runtime                        = var.runtime
  depends_on                     = [aws_iam_role_policy_attachment.policy_attach]
}

# Give External source (like CloudWatch Event, SNS or S3) permission to access the Lambda function.


resource "aws_lambda_permission" "default" {
  count   = length(var.lambda_actions) > 0 && var.create_function ? length(var.lambda_actions) : 0
  action        = element(var.lambda_actions,count.index)
  function_name = join("",aws_lambda_function.lambda-func.*.function_name)
  principal     = element(var.principal,count.index)

}
  • Now create another file, vars.tf, which should contain all the variables.

vars.tf

variable "create_function" {
  description = "Controls whether Lambda function should be created"
  type = bool
  default = true  
}
variable "iam_role_lambda" {}
variable "runtime" {}
variable "handler" {}
variable "actions" {
  type = list(any)
  default = []
  description = "The actions for Iam Role Policy."
}
 
variable "iam_policy_name" {}
variable "function_name" {}
variable "names" {
  type        = list(any)
  default     = []
  description = "A unique name for your Lambda Layer."
}
 
variable "file_name" {
  type        = list(any)
  default     = []
  description = "A unique file_name for your Lambda Layer."
}
variable "filename" {}
 
variable "create_layer" {
  description = "Controls whether layer should be created"
  type = bool
  default = false  
}
 
variable "lambda_actions" {
  type        = list(any)
  default     = []
  description = "The AWS Lambda action you want to allow in this statement. (e.g. lambda:InvokeFunction)."
}
 
variable "principal" {
  type        = list(any)
  default     = []
  description = "The principal who is getting this permission. e.g. s3.amazonaws.com, an AWS account ID, or any valid AWS service principal such as events.amazonaws.com or sns.amazonaws.com."
}
 
variable "compatible_runtimes" {
  type        = list(any)
  default     = []
  description = "A list of Runtimes this layer is compatible with. Up to 5 runtimes can be specified."
}
  • Next, set the values of the variables we declared earlier in vars.tf. Let’s create another file and name it terraform.tfvars.

terraform.tfvars

iam_role_lambda = "iam_role_lambda"
actions = [
    "logs:CreateLogStream",
    "logs:CreateLogGroup",
    "logs:PutLogEvents"
]
lambda_actions = [
     "lambda:InvokeFunction"
  ]
principal= [
      "events.amazonaws.com" , "sns.amazonaws.com"
]
compatible_runtimes = [
     ["python3.8"]
]
runtime  = "python3.8"
iam_policy_name = "iam_policy_name"
names = [
  "python_layer"
]
file_name = ["myzip/python.zip"]

filename = "files"
handler = "index.lambda_handler"
function_name = "terraformfunction"
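Optionally, since the file structure described earlier mentions output.tf, below is a minimal sketch that prints the function’s ARN after apply. The output name lambda_arn is just an example.
# output.tf (optional minimal sketch; the output name is an example)
output "lambda_arn" {
  value = join("", aws_lambda_function.lambda-func.*.arn)
}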
  • Now create a directory called files inside /opt/terraform-lambda-demo and create a file inside it named index.py
cd /opt/terraform-lambda-demo
mkdir files/
vi files/index.py

NOTE: We will use Python for this Lambda function

  • Paste the below Python code into /opt/terraform-lambda-demo/files/index.py; this is the code the function will execute.
# index.py

import os
import json

def lambda_handler(event, context):
    json_region = os.environ['AWS_REGION']
    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json"
        },
        "body": json.dumps({
            "Region ": json_region
        })
    }
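Before deploying, you can optionally smoke-test the handler locally. This is a minimal sketch, assuming Python 3 is installed on the machine; AWS_REGION is set manually here only because Lambda normally provides it.
cd /opt/terraform-lambda-demo
AWS_REGION=us-east-2 python3 -c "from files.index import lambda_handler; print(lambda_handler({}, None))"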
  • Your folder structure should look like below.
  • Now your files and code are ready for execution. Initialize Terraform:
terraform init
  • Terraform initialized successfully; now it’s time to see the plan, which acts as a blueprint before deployment. We generally use plan to confirm whether the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, it’s time to actually deploy the code using apply.
terraform apply
  • Let us verify in the AWS Management Console that the Lambda function was created successfully.
  • Invoke the Lambda function and validate the response.

Great, the Lambda function executed successfully and we can see a proper response from the Python application.
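If you prefer the command line, you can also invoke the function with the AWS CLI. A minimal sketch, assuming the function name terraformfunction from terraform.tfvars and a configured AWS CLI:
aws lambda invoke --function-name terraformfunction response.json    # writes the function's response to response.json
cat response.json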

Conclusion

In this demonstration we learned how to create and invoke an AWS Lambda function using the AWS Management Console, and then how to create and invoke a Lambda function with the proper configuration using Terraform.

Lambda is a serverless, cost-effective AWS service that is widely used, and this tutorial should help you get started with it in your organization. Once you are familiar with AWS Lambda, I am sure you will forget about managing servers and deploying code onto them.

Please like and share with your friends if you found this useful. Hope this tutorial was helpful.

How to Create an AWS EKS Cluster Using Terraform and Connect the Kubernetes Cluster to an Ubuntu Machine

Working with a container orchestrator like Kubernetes has always been a top priority. Moving from self-managed Kubernetes to EKS has benefited nearly everyone who has used it, as AWS manages the infrastructure, deployments, and cluster scaling for you.

Having said that, there is still some work to do, such as creating the EKS cluster with the right permissions and policies, so why not automate this as well? That is exactly what we will do in this tutorial, using Terraform to automate the creation of the EKS cluster.

In this tutorial we will configure a few Terraform files, after which you can create as many clusters as you need with a few commands. Please follow along.

Table of Contents

  • What is AWS EKS?
  • Prerequisites
  • How to Install Terraform on Ubuntu 18.04 LTS
  • Terraform Configuration Files and Structure
  • Configure Terraform files to create AWS EKS cluster
  • Configure & Connect your Ubuntu Machine to communicate with your cluster
  • Conclusion

What is AWS EKS (Amazon Elastic Kubernetes Service)?

Amazon provides its own managed service, AWS EKS, where you can host Kubernetes without worrying about infrastructure such as Kubernetes nodes or installing Kubernetes yourself. It gives you a platform on which to host Kubernetes.

Some features of Amazon EKS (Elastic Kubernetes Service):

  • It expands and scales across many Availability Zones so that there is always high availability.
  • It automatically scales and replaces any impacted or unhealthy nodes.
  • It integrates with various other AWS services such as IAM, VPC, ECR, and ELB.
  • It is a very secure service.

How does AWS EKS service work?

  • The first step in EKS is to create an EKS cluster using the AWS CLI or the AWS Management Console.
  • Next, you can use your own EC2 machines to deploy applications, or deploy to AWS Fargate, which manages the compute for you.
  • Then connect to the Kubernetes cluster with kubectl commands.
  • Finally, deploy and run applications on the EKS cluster.

Prerequisites

  • An Ubuntu machine (preferably version 18.04 or later) to run Terraform; if you don’t have one, you can create an EC2 instance in your AWS account.
  • 4GB RAM is recommended.
  • At least 5GB of drive space.
  • The Ubuntu machine should have an IAM role attached with full AWS EKS permissions; administrator permissions also work well for demos.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

How to Install Terraform on Ubuntu 18.04 LTS

  • Update your existing system packages.
sudo apt update
  • Download Terraform (version 0.14.8 here) into the /opt directory.
wget https://releases.hashicorp.com/terraform/0.14.8/terraform_0.14.8_linux_amd64.zip
  • Install the zip package, which is required to unzip the archive.
sudo apt-get install zip -y
  • Unzip the downloaded Terraform zip file.
unzip terraform*.zip
  • Move the executable to a directory on your PATH.
sudo mv terraform /usr/local/bin
  • Verify the installation by running the terraform command and checking the Terraform version.
terraform               # To check if terraform is installed 

terraform -version      # To check the terraform version  
  • This confirms that Terraform has been successfully installed on the Ubuntu 18.04 machine.

Terraform Configuration Files and Structure

Let us first understand Terraform configuration files before running Terraform commands.

  • main.tf: This file contains the code that creates or imports AWS resources.
  • vars.tf: This file defines the variables and their types, and optionally sets default values.
  • output.tf: This file generates output from AWS resources; the output is displayed after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables we declared in vars.tf.
  • provider.tf: This file is very important. You provide the details of the provider (such as AWS, Oracle, or Google) so that Terraform can communicate with that provider and work with its resources.

Configure Terraform files to create AWS EKS cluster

In this demonstration we will create an IAM role and attach the required IAM policies to it. Later in this tutorial we will create the Kubernetes cluster with the proper network configuration. Let’s get started and configure the Terraform files required to create an AWS EKS cluster in your AWS account.

  • Create a folder inside the /opt directory and move into it.
mkdir /opt/terraform-eks-demo
cd /opt/terraform-eks-demo
  • Now create a file named main.tf inside the directory you’re in.
vi main.tf
  • This is our main.tf file; paste the below code inside the file.
# Create an IAM role with an assume-role policy so that EKS can assume it when managing the Kubernetes cluster.

resource "aws_iam_role" "iam-role-eks-cluster" {
  name = "terraform-eks-cluster"
  assume_role_policy = <<POLICY
{
 "Version": "2012-10-17",
 "Statement": [
   {
   "Effect": "Allow",
   "Principal": {
    "Service": "eks.amazonaws.com"
   },
   "Action": "sts:AssumeRole"
   }
  ]
 }
POLICY
}

# Attach both EKS-Service and EKS-Cluster policies to the role.

resource "aws_iam_role_policy_attachment" "eks-cluster-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = "${aws_iam_role.iam-role-eks-cluster.name}"
}

resource "aws_iam_role_policy_attachment" "eks-cluster-AmazonEKSServicePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
  role       = "${aws_iam_role.iam-role-eks-cluster.name}"
}

# Create security group for AWS EKS.

resource "aws_security_group" "eks-cluster" {
  name        = "SG-eks-cluster"
  vpc_id      = "vpc-XXXXXXXXXXX"  # Use your VPC here

  egress {                   # Outbound Rule
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {                  # Inbound Rule
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

}

# Create EKS cluster

resource "aws_eks_cluster" "eks_cluster" {
  name     = "terraformEKScluster"
  role_arn =  "${aws_iam_role.iam-role-eks-cluster.arn}"
  version  = "1.19"

  vpc_config {             # Configure EKS with vpc and network settings 
   security_group_ids = ["${aws_security_group.eks-cluster.id}"]
   subnet_ids         = ["subnet-XXXXX","subnet-XXXXX"] # Use Your Subnets here
    }

  depends_on = [
    aws_iam_role_policy_attachment.eks-cluster-AmazonEKSClusterPolicy,
    aws_iam_role_policy_attachment.eks-cluster-AmazonEKSServicePolicy,
   ]
}



# Create an IAM role for the EKS worker nodes with an assume-role policy so that EC2 can assume it.


resource "aws_iam_role" "eks_nodes" {
  name = "eks-node-group"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_nodes.name
}

resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_nodes.name
}

resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_nodes.name
}

# Create EKS cluster node group

resource "aws_eks_node_group" "node" {
  cluster_name    = aws_eks_cluster.eks_cluster.name
  node_group_name = "node_tuto"
  node_role_arn   = aws_iam_role.eks_nodes.arn
  subnet_ids      = ["subnet-XXXXX","subnet-XXXXX"] # Use your subnets here

  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }

  # Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
  ]
}
 
 
  • Below is what the directory of our demo should look like.
  • Now your files and code are ready for execution. Initialize Terraform:
terraform init
  • Terraform initialized successfully; now it’s time to see the plan, which acts as a blueprint before deployment. We generally use plan to confirm whether the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, it’s time to actually deploy the code using apply.
terraform apply
  • Generally the EKS cluster takes a few minutes to launch.
  • Let’s verify the AWS EKS cluster and the other components which were created by Terraform.
First, the IAM role with the proper permissions.

  • Now verify the Amazon EKS cluster.
  • Finally, verify the node group of the cluster. You can also check the cluster status from the CLI, as shown below.
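A quick CLI check is also possible. A minimal sketch, assuming the cluster name terraformEKScluster from main.tf and the us-east-2 region used later in this tutorial:
aws eks describe-cluster --name terraformEKScluster --region us-east-2 --query cluster.status --output text    # should print ACTIVE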

Configure & Connect your Ubuntu Machine to communicate with your cluster

Up to now we created the Kubernetes cluster in AWS EKS with the proper IAM role permissions and configuration. Please make sure the AWS credentials configured on your local machine match the same IAM user or IAM role used while creating the cluster; that is, use the same IAM role credentials on the local machine that we used to create the Kubernetes cluster.

In this demonstration we are using the IAM role credentials of the EC2 instance from which we created the EKS cluster using Terraform, so we can perform the steps below on the same machine.

  • Prerequisites: make sure you have the AWS CLI and kubectl installed on the Ubuntu machine in order to make the connection. If you don’t have these two, don’t worry; please visit our other article for both installations.
  • On the Ubuntu machine, configure kubeconfig so your local machine can communicate with the Kubernetes cluster in AWS EKS.
aws eks update-kubeconfig --region us-east-2 --name terraformEKScluster
  • Now finally test the communication between the local machine and the cluster.
kubectl get svc

Great, we can see the connectivity from our local machine to the Kubernetes cluster.
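As an additional sanity check, you can confirm that the worker nodes from the node group have registered with the cluster (node names will vary):
kubectl get nodes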

Conclusion

In this tutorial, we first went through a detailed view of what AWS Elastic Kubernetes Service is and how to create a Kubernetes cluster using Terraform. Then we showcased the connection between the Kubernetes cluster and the kubectl client on an Ubuntu machine.

Hope you had a great time with this tutorial’s detailed explanations and hands-on practicals. If you liked it, please share it with your friends and spread the word.

Getting Started with Amazon Elastic Kubernetes Service (AWS EKS)

Kubernetes is a scalable open source tool that manages container orchestration very effectively. It provides a platform to deploy your applications with a few commands. AWS EKS stands for Amazon Elastic Kubernetes Service, an AWS managed service that takes care of everything from managing infrastructure to deployments to scaling containerized applications.

In this tutorial, you will learn everything from the basics of Kubernetes to Amazon EKS.

Table of Contents

  1. What is Kubernetes?
  2. What is Amazon Elastic Kubernetes Service (Amazon EKS)?
  3. Prerequisites
  4. Install Kubectl on Windows
  5. Install Kubectl on Linux
  6. How to create a new Kubernetes cluster in Amazon EKS
  7. Configure & Connect your Local machine to communicate with your cluster
  8. Conclusion

What is Kubernetes?

Kubernetes is an open source container orchestration engine for automating the deployment, scaling, and management of containerized applications. Originally developed by Google, it is also known as k8s. It can run on any platform: on premises, hybrid, or public cloud.

Features of Kubernetes

  1. Kubernetes scales very well.
  2. Load balancing.
  3. Automatic restarts if required.
  4. Self-healing and automatic rollbacks.
  5. You can manage configuration as well, such as secrets or passwords.
  6. Kubernetes can mount various storage systems, such as EFS and local storage.
  7. Kubernetes also works automatically with volume plugins such as NFS, Flocker, etc.

Kubernetes Components

  • Pod: Pods are group of containers which have shared storage and network.
  • Service: Services are used when you want to expose the application outside of your local environment.
  • Ingress: Ingress helps in exposing http/https routes from outside world to the services in your cluster.
  • ConfigMap: Pod consume configmap as environmental values or command line argument in configuration file .
  • Secrets: Secrets as name suggest it stores sensitive information such as password, OAuth tokens, SSH keys etc.
  • Volumes: These are persistent storage for containers.
  • Deployment: Deployment is additional layer which helps to define how Pod and containers should be created using yaml files.
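As a quick illustration of a few of these components, here is a minimal kubectl sketch; the resource name nginx-demo is just an example.
kubectl create deployment nginx-demo --image=nginx                    # a Deployment that manages Pods
kubectl expose deployment nginx-demo --port=80 --type=LoadBalancer    # a Service that exposes the Pods
kubectl get pods,svc                                                  # inspect the Pods and the Service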

What is AWS EKS (Amazon Elastic Kubernetes Service)?

Amazon provides its own managed service, AWS EKS, where you can host Kubernetes without needing to install, operate, and maintain your own Kubernetes control plane or nodes. It gives you a platform to host the Kubernetes control plane and the applications or services inside it. Some basic points related to EKS follow:

  • It expands and scales the Kubernetes control plane across many Availability Zones so that there is always high availability.
  • It automatically scales and replaces control plane instances if any instance is impacted or unhealthy.
  • It is integrated with various other AWS services, such as IAM for authentication, VPC for isolation, ECR for container images, and ELB for load distribution.
  • It is a very secure service.

How does AWS EKS service work?

  • The first step in EKS is to create an EKS cluster using the AWS CLI or the AWS Management Console.
  • Next, launch self-managed EC2 instances where you deploy applications, or deploy workloads to AWS Fargate, which manages the compute for you.
  • After the cluster is set up, connect to the Kubernetes cluster using kubectl commands.
  • Finally, deploy and run applications on the EKS cluster.

Prerequisites

  • You must have an AWS account with full access to AWS EKS in order to set up the cluster. If you don’t have an AWS account, please create one from here: AWS account.
  • The AWS CLI installed. If you don’t have it already, install it from here.

Install Kubectl on Windows machines

  • Open PowerShell and run the command.
curl -o kubectl.exe https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/windows/amd64/kubectl.exe
  • Now verify in the C drive that the binary has been downloaded successfully.
  • Now run the kubectl binary and verify the client.
  • Verify its version with the following command
kubectl version --short --client

Install Kubectl on Linux machine

  • Download the kubectl binary using the curl command on the Ubuntu machine under the home directory, i.e. $HOME.
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
  • Apply execute permissions to the binary
chmod +x ./kubectl
  • Copy the binary to a folder in your PATH
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
  • Verify the kubectl version on the Ubuntu machine.
kubectl version --short --client

Amazon EKS Clusters

An Amazon EKS cluster has the following components:

  1. The Amazon EKS control plane is not shared between accounts or with any other cluster. The control plane contains at least two API servers, which are exposed via the Amazon EKS endpoint associated with the cluster, and three etcd instances backed by Amazon EBS volumes that are encrypted using AWS KMS. Amazon EKS automatically monitors the load on the control plane and replaces unhealthy instances when needed. Amazon EKS uses Amazon VPC network policies to restrict traffic between control plane components to within a single cluster.
  2. Amazon EKS nodes are registered with the control plane via the API server endpoint and a certificate file that is created for your cluster. Your Amazon EKS cluster can schedule pods on any combination of Self-managed nodes, Amazon EKS Managed node groups, and AWS Fargate.
    • Self-managed nodes
      • Can run containers that require Windows and Linux.
      • Can run workloads that require Arm processors.
      • All of your pods on each of your nodes share a kernel runtime environment with other pods.
      • If the pod requires more resources than requested, and resources are available on the node, the pod can use additional resources.
      • Can assign IP addresses to pods from a different CIDR block than the IP address assigned to the node.
      • Can SSH into node
    • Amazon EKS Managed node groups
      • Can run containers that require Linux.
      • Can run workloads that require Arm processors.
      • All of your pods on each of your nodes share a kernel runtime environment with other pods.
      • If the pod requires more resources than requested, and resources are available on the node, the pod can use additional resources.
      • Can assign IP addresses to pods from a different CIDR block than the IP address assigned to the node.
      • Can SSH into node
    • AWS Fargate
      • Can run containers that require Linux.
      • Each pod has a dedicated kernel.
      • The pod can, however, be re-deployed using a larger vCPU and memory configuration.
      • There is no node.
      • As there is no node, you cannot SSH into one.
  3. Workloads: A pod contains one or more containers. Workloads define the applications running on a Kubernetes cluster, and every workload controls pods. There are five types of workloads on a cluster:
    • Deployment: Ensures that a specific number of pods run and includes logic to deploy changes.
    • ReplicaSet: Ensures that a specific number of pods run. Can be controlled by Deployments.
    • StatefulSet: Manages the deployment of stateful applications.
    • DaemonSet: Ensures that a copy of a pod runs on all (or some) nodes in the cluster.
    • Job: Creates one or more pods and ensures that a specified number of them run to completion.
  • By default, Amazon EKS clusters have three workloads (you can list them with the command shown after this list):
    • coredns: A Deployment that deploys two pods providing name resolution for all pods in the cluster.
    • aws-node: A DaemonSet that deploys one pod to each Amazon EC2 node in your cluster. It runs the AWS VPC CNI controller, which provides VPC networking functionality to the pods and nodes in your cluster.
    • kube-proxy: A DaemonSet that deploys one pod to each Amazon EC2 node in your cluster. It maintains network rules on nodes that enable network communication to your pods.
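You can see these default workloads yourself; they live in the kube-system namespace:
kubectl get deployments,daemonsets -n kube-system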

Creating Kubernetes cluster in Amazon EKS

In this demonstration we will create and set up a Kubernetes cluster in Amazon EKS using the AWS Management Console and AWS CLI commands. Before we start, make sure you have a VPC created and an IAM role with full EKS access permissions.

  • Every AWS account already has one VPC by default. If you wish to create another VPC specifically for AWS EKS, you can do so.
  • Hop over to the IAM service and create an IAM policy with full EKS permissions.
  • Click on Create policy and then click on Choose a service.
  • Now give the policy a name and click Create.
  • Now go to IAM roles and create a role.
  • Choose the EKS service and then select EKS Cluster as your use case.
  • Give the role a name and then hit Create role.
  • Now attach the policy we created to the IAM role.
  • One point to note here: please add STS permissions to the role’s trust relationship, which will be required when the client makes requests.
  • Make sure your JSON trust policy looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*",
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
  • Now that we are done with the IAM role and policy attachment, we can create and work with the Kubernetes cluster.
  • Go to the AWS EKS console and click on Create cluster.
  • Now add all the configurations.
  • Now add the VPC details and two public subnets if you have them. You can skip subnets for now.
  • Keep hitting Next and finally click on Create cluster.
  • Let’s verify the cluster is up and active. It takes some time for the cluster to come up.

The Kubernetes cluster on AWS EKS is now successfully created. Next, let’s initiate communication from the kubectl client we installed to the Kubernetes cluster.

Configure & Connect your Local machine to communicate with your cluster

Up to now we created the Kubernetes cluster in AWS EKS with the proper IAM role permissions and configuration. Please make sure the AWS credentials configured on your local machine match the same IAM user or IAM role used while creating the cluster; that is, use the same IAM user or IAM role credentials on the local machine that we used to create the Kubernetes cluster.

  • Open Visual Studio Code, Git Bash, or a command prompt.
  • Now configure kubeconfig so your local machine can communicate with the Kubernetes cluster in AWS EKS.
aws eks update-kubeconfig --region us-east-2 --name Myekscluster
  • Now finally test the communication between the local machine and the cluster.
kubectl get svc

Great, you can see the connectivity from our local machine to the Kubernetes cluster!

Create nodes on Kubernetes cluster

An Amazon EKS cluster can schedule pods on any combination of self-managed nodes, Amazon EKS managed node groups, and AWS Fargate.

Amazon EKS Managed node group

  • With an Amazon EKS managed node group, you don’t need to separately provision or register Amazon EC2 instances. All managed nodes are part of an Amazon EC2 Auto Scaling group.
  • You can add a managed node group to new or existing clusters using the Amazon EKS console, eksctl, the AWS CLI, the AWS API, or AWS CloudFormation. Managed node groups manage the Amazon EC2 instances for you.
  • A managed node group’s Auto Scaling group spans all of the subnets that you specify when you create the group.
  • Amazon EKS managed node groups can be launched in both public and private subnets.
  • You can create multiple managed node groups within a single cluster.

Creating a Managed node group using the AWS Management Console

  • Go to the Amazon EKS page, navigate to the Configuration tab, select the Compute tab, and then choose Add Node Group. Fill in all the details, such as the name and the node IAM role that you created earlier while creating the cluster.
  • Next, on the Set compute and scaling configuration page, enter all the details such as instance type and capacity type, and then click on Next.
  • Now add the networking details, such as VPC details, subnets, and SSH key details.
  • You can also find details of the nodes from your local machine by running the following commands.
aws eks update-kubeconfig --region us-east-2 --name "YOUR_CLUSTER_NAME"
kubectl get nodes --watch

Let’s learn how to create Fargate (Linux) nodes and use them in the Kubernetes cluster.

  • In order to create Fargate (Linux) nodes, the first thing you need to do is create a Fargate profile. This profile is needed because when any pod is deployed to Fargate, it is first matched against the desired configuration in the profile before it gets deployed. The configuration contains permissions such as the pod’s ability to pull container images from ECR. You can find the steps to create a Fargate profile here.

Conclusion

In this tutorial we learned a lot. First we went through a detailed view of what Kubernetes is and what Amazon Elastic Kubernetes Service (AWS EKS) is. Then we learned how to install the Kubernetes client kubectl on both Windows and Linux machines, and finally we created a Kubernetes cluster and connected to it using the kubectl client.

Hope you had a wonderful experience going through this ultimate guide to Kubernetes and EKS. If you liked it, please share it with your friends and spread the word.

How to Create an IAM User in an AWS Account Using a Shell Script

Are you using the correct credentials and the right permissions to log in to your AWS account? From a security point of view, it is very important to grant the right permissions to the users and identities that access an AWS account. To solve this access problem, AWS has a crucial service known as AWS IAM, that is, Identity and Access Management.

In this tutorial we will go through what AWS IAM (Identity and Access Management) is and what an IAM user is. We will also go through a shell script that creates an IAM user in an AWS account from a Windows machine.

Table of Contents

  1. What is a shell script?
  2. What is AWS IAM?
  3. What is an AWS IAM user?
  4. Prerequisites
  5. Install AWS CLI Version 2 on a Windows machine
  6. How to create an AWS IAM user in an Amazon account using a shell script
  7. Conclusion

What is Shell Scripting or Bash Scripting?

A shell script is simply a text file with a list of commands that could be executed one by one on a terminal or shell. To make things a little easier and run them together as a group, and quickly, we write them in a single file and run it.

The main tasks performed by shell scripts are file manipulation, printing text, and program execution. We can include environment variables in a script that can be used in multiple places; scripts that run other programs and perform various activities around them are known as wrapper scripts.

A good shell script will have comments, preceded by a pound sign or hash mark, #, describing the steps. Also, we can include conditions or pipe commands together to build more creative scripts.

When we execute a shell script, or function, a command interpreter goes through the ASCII text line-by-line, loop-by-loop, test-by-test, and executes each statement as each line is reached from the top to the bottom.

What is AWS IAM ?

AWS IAM stands for Identity and Access Management, an Amazon managed service. It is one of Amazon’s most important services, helping to control who can access an AWS account and which resources in the account can be accessed.

When you create an AWS account, you have control over the entire account, that is, access to everything in it; this user is known as the root user. The root user can log in to the AWS account using an email address and password.

There are some basic terms one must know before using the IAM service.

  • Resources: Resources are objects stored in IAM, such as users, roles, policies, groups, and identity providers.
  • Entities: Entities are objects that can authenticate to an AWS account, such as the root user, IAM users, federated users, and assumed IAM roles.
  • Principals: Applications or people who use entities to work with AWS services, for example Python’s AWS Boto3 SDK or a person such as Robert.
  • Identities: Identities are objects that identify themselves to other services. For example, the IAM user "user1" has access to an AWS EC2 instance; user1 presents its own identity to show it may create EC2 instances. Examples of identities are groups, users, and roles.

What is AWS IAM user?

As discussed earlier, when you create an AWS account you have control over the entire account, that is, access to everything in it, and this user is known as the root user. But the root user is a shared account with all privileges, and it is not recommended for day-to-day activity in an AWS account.

Instead of using the root user, which is a shared user, we have the IAM user identity, an individual user that can be granted permissions as required. Some users may have access to just EC2, some may have access to AWS S3, and some may have all the permissions of the root user.

How can you manually create an IAM user in an AWS account?

  • In order to create an AWS IAM user you must have an AWS account. If you don’t have one, please create it from here: AWS account.
  • Go to the AWS console and search for IAM.
  • Click on Users in the IAM dashboard.
  • Click on Add user.
  • Add the username; for Access type, select both types and add a custom password. You can change this password later if you want.
  • Click Next: Permissions and choose Attach existing policies.
  • Skip tagging for this demo and finally click on Create user.
  • The IAM user is created successfully. Now save the Access key ID and Secret access key, which are very important for further use.

Prerequisites

  1. An AWS account to create the AWS IAM user in. If you don’t have one, please create it from here: AWS account.
  2. Windows 7 or a later edition, where you will execute the shell script.
  3. Python must be installed on the Windows machine, as it is required by the AWS CLI. If you need to install Python on Windows, follow the steps here.
  4. You must have Git Bash installed on your Windows machine. If you don’t, install it from here.
  5. A code editor for writing the shell script on the Windows machine. I recommend Visual Studio Code; if you wish to install it, please find the steps here.

In this demo we will use a shell script to create an AWS IAM user. In order to run shell scripts against AWS from your local Windows machine, you need the AWS CLI installed and configured, so let’s first install the AWS CLI and then configure it.

Install AWS CLI Version 2 on a Windows machine

  • Download the installer for the AWS CLI on the Windows machine from here.
  • Select I accept the terms and then click the Next button.
  • Customize the setup, such as the installation location, if needed, and then click the Next button.
  • Now you are ready to install AWS CLI v2.
  • Click Finish and now verify the AWS CLI.
  • Verify the AWS CLI version by going to the command prompt and typing:
aws --version

AWS CLI version 2 is now successfully installed on the Windows machine; it’s time to configure AWS credentials so that our shell script can connect to the AWS account and execute commands.

  • Configure AWS credentials by running the following command at the command prompt:
aws configure
  • Enter the details such as the AWS Access Key ID, Secret Access Key, and region. You can leave the output format as the default.
  • Check the C:\Users\YOUR_USER\.aws folder on your system to confirm the AWS credentials.
  • Now your AWS credentials are configured successfully. You can double-check them with the quick command shown below.
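As a quick sanity check, the following command calls AWS STS and prints the account and identity your CLI is now using:
aws sts get-caller-identity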

How to create an AWS IAM user in an Amazon account using a shell script

Now that we have configured the AWS CLI on the Windows machine, it’s time to write our shell script to create the AWS IAM user.

  • Create a folder on your desktop and inside it create the file create-iam-user.sh
#!/bin/bash
 

# Check whether AWS credentials are set up on this system

if ! grep -q aws_access_key_id ~/.aws/config; then      # grep -q suppresses output; only the exit status is used
   if ! grep -q aws_access_key_id ~/.aws/credentials; then
      echo "AWS config not found or CLI is not installed"
      exit 1
   fi
fi


# read command will prompt you to enter the name of the IAM user you wish to create

read -r -p "Enter the username to create: " username


# Create the IAM user using the AWS CLI

aws iam create-user --user-name "${username}" --output json

# Create an access key and secret key for the user; --query extracts just the two values

credentials=$(aws iam create-access-key --user-name "${username}" --query 'AccessKey.[AccessKeyId,SecretAccessKey]'  --output text)

# cut command splits the space-separated output into its columns

access_key_id=$(echo ${credentials} | cut -d " " -f 1)
secret_access_key=$(echo ${credentials} | cut --complement -d " " -f 1)

# echo command prints the results on the screen

echo "The username ${username} has been created"
echo "The access key ID of ${username} is ${access_key_id}"
echo "The secret access key of ${username} is ${secret_access_key}"

  • Now open Visual Studio Code, open the folder containing create-iam-user.sh, and choose Bash as the terminal.
  • Now run the script
./create-iam-user.sh
  • The script ran successfully; now let’s verify the IAM user in the AWS account. You can also verify from the CLI, as shown below.
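A minimal CLI check, replacing YOUR_USERNAME with the name you entered when the script prompted you:
aws iam get-user --user-name YOUR_USERNAME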

Conclusion

In this tutorial, we demonstrated what an IAM user is and learned how to create an AWS IAM user using a shell script, step by step. With IAM you get individual access to an AWS account and can manage permissions accordingly.

Hope this tutorial helps you understand the shell script and provision AWS IAM users in the Amazon cloud. Please share it with your friends.

How to Launch Amazon DynamoDB Tables in an AWS Account

With the rising number of databases, making the right selection has become a big challenge. As data grows, our database should scale and continue to perform equally well.

Organizations have started to move toward big data, and for real-time applications we certainly need a non-relational, high-performance database. For these types of challenges AWS has always been at the top, offering various services that solve such problems; one of them is Amazon DynamoDB, which manages non-relational databases for you, can store virtually unlimited data, and performs very well.

Table of Contents

  1. What is a relational database management system?
  2. What are SQL and NoSQL databases?
  3. What is Amazon DynamoDB?
  4. Prerequisites
  5. How to create tables in DynamoDB in an AWS account
  6. Conclusion

What is a relational database management system?

  • Relational databases are based on tables and structured data.
  • Tables have relationships which are logically connected.
  • Oracle Database, MySQL, Microsoft SQL Server, IBM Db2, PostgreSQL, and SQLite (for mobile) are a few examples of RDBMS.

Figure: a relational database management system based on the relational model.

What are SQL and NoSQL databases?

SQL:

  • The full form of SQL is Structured Query Language, which is used to manage data in a relational database management system, i.e. an RDBMS.
  • SQL databases belong to the relational database management system family.
  • SQL databases follow a structured pattern, which is why they are suitable for static or predefined schemas.
  • They are good at solving complex queries and are highly scalable in nature, but in the vertical direction.
  • SQL databases follow a table-based methodology, which is why they are good for applications such as accounting systems.

NoSQL:

  • The full form of NoSQL is non-SQL or non-relational.
  • These databases are used for dynamic storage, or workloads where the data is not fixed or static.
  • These databases are not tabular in nature; instead they store data as key-value pairs.
  • They are good for big data and real-time web applications and are scalable in nature, but in the horizontal direction.
  • Some NoSQL databases are DynamoDB, FoundationDB, InfinityDB, MemcacheDB, Oracle NoSQL Database, Redis, MongoDB, Cassandra, Scylla, and HBase.

What is Amazon DynamoDB?

DynamoDB is a NoSQL database service, meaning it differs from relational databases, which consist of tables in tabular form. DynamoDB offers very fast performance and is very scalable. It is an AWS managed service, so you don’t need to worry about capacity, workload, setup, configuration, software patches, replication, or even cluster scaling.

With DynamoDB you just create tables where you can add and retrieve data; DynamoDB takes care of everything else. If you wish to monitor your resources, you can do so in the AWS console.

Whenever traffic or request volume increases, DynamoDB scales up while maintaining performance.

Basic components of Amazon DynamoDB

  • Tables: A table stores data.
    • In the below example we use a database table.
  • Items: Items are stored in a table. You can store as many items as you wish in a table.
    • In the below example, the different EmployeeID values are items.
  • Attributes: Each item contains one or more attributes.
    • In the below example, office, designation, and phone are attributes of an EmployeeID.

{
  "EmployeeID": "1",
  "office": "USA",
  "Designation": "Devops engineer",
  "Phone": "1234567890"
}


{
  "EmployeeID": "2",
  "office": "UK",
  "Designation": "Senior Devops Engineer",
  "Phone": "0123456789"
}

To work with Amazon DynamoDB, applications need APIs to communicate. There are two categories:

  • Control plane: It allows you to create and manage DynamoDB tables.
  • Data plane: It allows you to perform actions on the data in DynamoDB tables.

Prerequisites

  • You should have an AWS account with full access permissions on DynamoDB. If you don’t have an AWS account, please create one from here: AWS account.

How to create tables in DynamoDB in an AWS account

  • Go to your AWS account and search for DynamoDB at the top of the page.
  • Click on Create table, then enter the name of the table and its primary key.
  • Now click on Organisation, that is, the table name.
  • Now click on Items.
  • Add items such as address, designation, and phone number.
  • Verify that the table has the required details.

So this was the first way: using the AWS-provided web service to directly create DynamoDB tables. The other way is to download DynamoDB locally onto your machine, set it up, and then create your tables; you can find the steps here.
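You can also create the same table and add an item from the AWS CLI. A minimal sketch, assuming the table name Organisation and the EmployeeID primary key from the examples above:
# Create the table with EmployeeID as the partition (HASH) key, using on-demand billing
aws dynamodb create-table \
    --table-name Organisation \
    --attribute-definitions AttributeName=EmployeeID,AttributeType=S \
    --key-schema AttributeName=EmployeeID,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST
# Add one item matching the example above
aws dynamodb put-item \
    --table-name Organisation \
    --item '{"EmployeeID": {"S": "1"}, "office": {"S": "USA"}}'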

Conclusion

You should now have basic knowledge of relational and non-relational database management systems. We also learned about Amazon DynamoDB, which is a NoSQL database, and covered how to create tables in the Amazon DynamoDB service and store data in them.

This tutorial consists of practicals performed on our lab server with lots of hard work and effort. Please spread the word if you like it; we hope you benefit from this tutorial.

Why do we need cloud computing?

Introduction to Cloud Computing

Cloud computing offers modern businesses advantages, including allowing multiple users to view data in real time and share projects effortlessly. Earlier, people would run applications from software downloaded onto a physical server, or servers in buildings; now, with cloud computing, everyone uses the same services online from anywhere in the world.

Cloud is a model of computing where servers, networks, storage, and even apps are delivered over the internet. Organizations no longer need to make huge investments to buy equipment, train staff, and provide ongoing maintenance; all of this is handled by cloud providers. Some examples of cloud providers are Amazon Web Services, Microsoft Azure, Oracle, and IBM Cloud.

Figure: a datacenter, or servers in buildings.

Features of Cloud Computing

1. Cloud computing is flexible: if you need more services or more servers, you can easily scale up your cloud capacity, and you can scale down again just as easily.

2. Disaster recovery: the cloud is helping organizations adopt this practice. By allowing automation and creating infrastructure with different automation tools, you can redeploy and rebuild your services quickly. Backup and recovery options in the cloud are also great.

3. Never miss an update: as the service itself is not managed by your organization, the provider takes care of infrastructure and server management, including keeping software up to date.

4. Cloud services minimize expenditure: cloud computing cuts hardware costs. You simply pay as you go and enjoy the services.

5. Workstations in the cloud: you can work from anywhere in the world, at any time.

6. Cloud computing offers security: lost laptops are a billion-dollar business problem, but the loss of the sensitive data on them is an even bigger one. With the cloud, your data resides remotely with strong security and fault tolerance, which in my opinion is one of its greatest advantages.

7. Mobility: Cloud computing allows mobile access to corporate data via smartphones and devices, which is a great way to ensure that no one is ever left out of the loop.

Cloud computing adoption is on the rise every year, and it doesn’t take long to see why enterprises recognize its benefits.

Figure: cloud computing.

Three main types of cloud environment (deployment models)

1- Public cloud: A public cloud environment is owned by an outsourced cloud provider and is accessible to many businesses through the internet on a pay-per-use model. With a public cloud, all hardware, software, and other supporting infrastructure is owned and managed by the cloud provider.

Example: Microsoft Azure , AWS , Google Cloud

2- Private cloud: a private cloud is owned by a single business. Government institutions, financial institutions such as banks, mid- to large-sized companies, and any other organization dealing with sensitive information tend to prefer private clouds.

This cloud model is great for organizations concerned about sharing resources on a public cloud. It is implemented on servers owned and maintained by the organization and accessed over the internet or through a private internal network.

Example: Rackspace

3- Hybrid cloud: a hybrid cloud is used by businesses that want the benefits of both the private and public cloud deployment models.

Service Categories in the cloud

SAAS (Software as a Service): With SaaS, an organization accesses a specific software application hosted on a remote server and managed by a third-party provider.

PAAS (Platform as a Service): With PaaS, an organization accesses a pre-defined environment for software development that can be used to build, test, and run applications. This means that developers don’t need to start from scratch when creating apps.

IAAS (Infrastructure as a Service): With IaaS, an organization migrates its hardware—renting servers and data storage in the cloud rather than purchasing and maintaining its own infrastructure. IaaS provides an organization with the same technologies and capabilities as a traditional data center, including full control over server instances.

System administrators within the business are responsible for managing aspects such as databases, applications, runtime, security, etc., while the cloud provider manages the servers, hard drives, networking, storage, etc.