The Ultimate Guide on AWS EKS for Beginners [Easiest Way]

In this ultimate guide, you will learn everything a beginner should know about AWS EKS and how to manage an AWS EKS cluster.

Come on, let's begin!

Table of Contents

  1. What is AWS EKS?
  2. Why do you need AWS EKS rather than plain Kubernetes?
  3. Installing tools to work with AWS EKS Cluster
  4. Creating AWS EKS using the eksctl command line tool
  5. Adding one more Node group in the AWS EKS Cluster
  6. Cluster Autoscaler
  7. Creating and Deploying Cluster Autoscaler
  8. Nginx Deployment on the EKS cluster when Autoscaler is enabled
  9. EKS Cluster Monitoring and CloudWatch Logging
  10. What is Helm?
  11. Creating AWS EKS Cluster Admin user
  12. Creating a Read-only user for a dedicated namespace
  13. EKS Networking
  14. IAM and RBAC Integration in AWS EKS
  15. How Worker nodes join the cluster
  16. How to Scale Up and Down Kubernetes Pods
  17. Conclusion

What is AWS EKS?

Amazon Elastic Kubernetes Service (AWS EKS) is Amazon's managed service that lets you host Kubernetes without worrying about the underlying infrastructure, such as provisioning Kubernetes nodes or installing Kubernetes yourself. It gives you a ready-made platform on which to host Kubernetes.

Some features of Amazon EKS (Elastic Kubernetes Service):

  1. It expands and scales across multiple Availability Zones so that there is always high availability.
  2. It automatically scales and replaces any impacted or unhealthy nodes.
  3. It is interlinked with various other AWS services such as IAM, VPC, ECR, and ELB.
  4. It is a very secure service.

How does AWS EKS service work?

  • The first step in EKS is to create an EKS cluster using the AWS CLI, the AWS Management Console, or the eksctl command line tool.
  • Next, you can run your applications on your own EC2 machines, or deploy them to AWS Fargate, which manages the compute for you.
  • Then connect to the Kubernetes cluster with kubectl or eksctl commands.
  • Finally, deploy and run applications on the EKS cluster.

Why do you need AWS EKS rather than plain Kubernetes?

If you are working with self-managed Kubernetes, you are required to handle all of the below yourself:

  1. Create and operate K8s clusters.
  2. Deploy master nodes.
  3. Deploy etcd.
  4. Set up a CA for TLS encryption.
  5. Set up monitoring, autoscaling, and auto healing.
  6. Set up worker nodes.

But with AWS EKS you only need to manage the worker nodes; everything else (the master nodes, etcd in high availability, the API server, KubeDNS, the scheduler, the controller manager, and the cloud controller) is taken care of by Amazon EKS.

You pay 0.20 US dollars per hour for your AWS EKS cluster, which comes to about 144 US dollars per month.

Installing tools to work with AWS EKS Cluster

  1. AWS CLI: Required as a dependency of eksctl to obtain the authentication token. To install the AWS CLI, run the below command.
pip3 install --user awscli
After you install the AWS CLI, make sure to configure the access key ID and secret access key so that it can create the EKS cluster, as sketched below.
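A minimal sketch of configuring the credentials interactively (the values shown are placeholders, not real keys):

aws configure
# AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
# AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Default region name [None]: us-east-1    # match the region used later in eks.yaml
# Default output format [None]: json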
  2. eksctl: Used to set up and operate the EKS cluster. To install eksctl, run the below commands. The below command downloads the eksctl binary into the /tmp directory.
curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/v0.69.0/eksctl_Linux_amd64.tar.gz" | tar xz -C /tmp
  • Next, move the eksctl binary into an executable directory on your PATH.
sudo mv /tmp/eksctl /usr/local/bin
  • To check the version of eksctl and confirm it is properly installed, run the below command.
eksctl version
  3. kubectl: Used to interact with the Kubernetes API server. To install kubectl, first run the below command, which updates the system and installs the apt-transport-https package.
sudo apt-get update && sudo apt-get install -y apt-transport-https
  • Next, run the curl command below, which adds the GPG key used to verify packages from the Kubernetes apt repository.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  • Next, add the Kubernetes repository.
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
  • Again, update the system so the new repository takes effect.
sudo apt-get update
  • Next, install the kubectl tool.
sudo apt-get install -y kubectl
  • Next, check the version of the kubectl tool by running the below command.
kubectl version --short --client
  4. IAM user and IAM role:
  • Create an IAM user with administrator access and use that IAM user to explore AWS resources on the console. This user's credentials will also be used on the EC2 instance from which you manage the AWS EKS cluster, by passing them to the AWS CLI.
  • Also make sure to create an IAM role that you will attach to the EC2 instance from which you will manage AWS EKS and other AWS resources.

Creating AWS EKS using the eksctl command line tool

So far you have installed and set up the tools required for creating an AWS EKS cluster. To learn how to create a cluster using the eksctl command, run the help command, which lists the flags you can use while creating an AWS EKS cluster.

eksctl create cluster --help 
  1. Let's begin creating the EKS cluster. To do that, create a file named eks.yaml and copy/paste the below content.
    • apiVersion is the eksctl config file API version (eksctl.io/v1alpha5) used to interpret this file.
    • kind denotes what kind of resource/object will be created. In the below case, as you need to provision a cluster, you should use ClusterConfig.
    • metadata: data that helps uniquely identify the cluster, such as its name and region.
    • nodeGroups: provides the name of the node group and the other details required for the node groups that will be used in your EKS cluster.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-course-cluster
  region: us-east-1

nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 3
    ssh: # use existing EC2 key
      publicKeyName: eks-course
  2. Now, execute the command below to create the cluster.
eksctl create cluster -f eks.yaml
  3. Once the cluster is successfully created, run the below command to see the details of the cluster.
eksctl get cluster
  4. Next, verify the AWS EKS cluster on the AWS console.
  5. Also verify the nodes of the node groups that were created along with the cluster by running the below command.
kubectl get nodes
  6. Also, verify the nodes on the AWS console. To check the nodes, navigate to EC2 instances.
  7. Verify the node groups in the EKS cluster by running the below eksctl command.
eksctl get nodegroup --cluster EKS-course-cluster
  8. Finally, verify the Pods in the EKS cluster by running the below kubectl command.
kubectl get pods --all-namespaces

Adding one more Node group in the AWS EKS Cluster

To add another node group in the EKS cluster, follow the below steps:

  1. Create a yaml file named node_group.yaml as shown below and copy/paste the below content. Notice that the previous node group ng-1 is still present in the file; if you ran this file without it, eksctl would treat the config as the desired state and the ng-1 node group would be removed from the cluster.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-cluster
  region: us-east-1

nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 3
    ssh: # use existing EC2 key
      publicKeyName: testing
# Adding another node group, nodegroup2, with min/max capacity of 2 and 3 respectively.
  - name: nodegroup2
    minSize: 2
    maxSize: 3
    instancesDistribution:
      maxPrice: 0.2
      instanceTypes: ["t2.small", "t3.small"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
    ssh:
      publicKeyName: testing
  2. Next, run the below command to create the node group.
eksctl create nodegroup --config-file=node_group.yaml --include='nodegroup2'
  3. If you wish to delete the node group from the EKS cluster, run any one of the below commands.
eksctl delete nodegroup --cluster=EKS-cluster --name=nodegroup2
eksctl delete nodegroup --config-file=eks.yaml --include='nodegroup2' --approve
  • To scale the node group in the EKS cluster:
eksctl scale nodegroup --cluster=name_of_the_cluster --nodes=5 --name=node_grp_2

Cluster Autoscaler

The Cluster Autoscaler automatically launches additional worker nodes if more resources are needed, and shuts down worker nodes if they are underutilized. Autoscaling works within a node group, so you should create a node group with the Autoscaler feature enabled.

Cluster Autoscaler has the following features:

  • Cluster Autoscaler is used to scale the nodes within a node group up and down.
  • It runs as a deployment and makes its decisions based on CPU and memory utilization.
  • A node group can contain on-demand and spot instances.
  • There are two types of scaling:
    • Multi-AZ scaling: node group spanning multiple AZs (stateless workloads)
    • Single-AZ scaling: node group within a single AZ (stateful workloads)

Creating and Deploying Cluster Autoscaler

The main function of the Autoscaler is to dynamically add or remove nodes within a node group on the fly. The Autoscaler works as a deployment and bases its decisions on CPU/memory requests.

There are two types of scaling available: Multi-AZ (stateless workloads) vs Single-AZ (stateful workloads), since EBS volumes cannot be spread across multiple Availability Zones.

To use the Cluster Autoscaler you can add multiple node groups to the cluster as needed. In this example, let's deploy one on-demand node group pinned to a single AZ and one spot node group spanning two AZs, both with the Autoscaler enabled, as shown below.

  1. Create a file and name it autoscaler.yaml.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-cluster
  region: us-east-1

nodeGroups:
  - name: scale-east1c
    instanceType: t2.small
    desiredCapacity: 1
    maxSize: 10
    availabilityZones: ["us-east-1c"]
# iam holds all IAM attributes of a NodeGroup
# enables IAM policy for cluster-autoscaler
    iam:
      withAddonPolicies:
        autoScaler: true
    labels:
      nodegroup-type: stateful-east1c
      instance-type: onDemand
    ssh: # use existing EC2 key
      publicKeyName: eks-ssh-key
  - name: scale-spot
    desiredCapacity: 1
    maxSize: 10
    instancesDistribution:
      instanceTypes: ["t2.small", "t3.small"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 0
    availabilityZones: ["us-east-1c", "us-east-1d"]
    iam:
      withAddonPolicies:
        autoScaler: true
    labels:
      nodegroup-type: stateless-workload
      instance-type: spot
    ssh: 
      publicKeyName: eks-ssh-key

  2. Run the below command to add the node groups.
eksctl create nodegroup --config-file=autoscaler.yaml
  3. Verify the node groups with the below command.
eksctl get nodegroups --cluster=EKS-cluster
  4. Next, to deploy the Autoscaler, run the below kubectl commands.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
kubectl -n kube-system annotate deployment.apps/cluster-autoscaler cluster-autoscaler.kubernetes.io/safe-to-evict="false"
  5. To edit the deployment and set your AWS EKS cluster name, run the below kubectl command; a sketch of the part to edit follows.
kubectl -n kube-system edit deployment.apps/cluster-autoscaler
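Inside the editor, find the cluster-autoscaler container's command and replace the <YOUR CLUSTER NAME> placeholder with your cluster name. A rough sketch of the relevant snippet, with flag values following the upstream example manifest (EKS-cluster is this guide's cluster name):

    spec:
      containers:
      - command:
        - ./cluster-autoscaler
        - --v=4
        - --stderrthreshold=info
        - --cloud-provider=aws
        - --skip-nodes-with-local-storage=false
        - --expander=least-waste
        # auto-discover Auto Scaling groups tagged for this cluster
        - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/EKS-cluster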
  6. Next, describe the Autoscaler deployment by running the below kubectl command.
kubectl -n kube-system describe deployment cluster-autoscaler
  7. Finally, view the Cluster Autoscaler logs by running the below kubectl command against the kube-system namespace.
kubectl -n kube-system logs deployment.apps/cluster-autoscaler
  8. Verify the Pods, as sketched below. You should notice that the first pod belongs to Nodegroup1, the second to Nodegroup2, and the third is the Autoscaler pod itself.
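One way to check, since the node column shows which node group each pod landed on (output is abbreviated and cluster-specific):

kubectl get pods --all-namespaces -o wide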

Nginx Deployment on the EKS cluster when Autoscaler is enabled

  1. To deploy the nginx application on the EKS cluster that you just created, create a yaml file, name it nginx-deployment.yaml (the name used in the apply command below), and copy/paste the below content into it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        service: nginx
        app: nginx
    spec:
      containers:
      - image: nginx
        name: test-autoscaler
        resources:
          limits:
            cpu: 300m
            memory: 512Mi
          requests:
            cpu: 300m
            memory: 512Mi
      nodeSelector:
        instance-type: spot


  2. Now, to apply the nginx deployment, run the below command.
kubectl apply -f nginx-deployment.yaml
  3. After successful deployment, check the number of Pods.
kubectl get pods
  4. Check the number and type of nodes, filtering by the label set on the spot node group.
kubectl get nodes -l instance-type=spot
  • Scale the deployment to 3 replicas (that is, 3 pods will run).
kubectl scale --replicas=3 deployment/test-autoscaler
  • Check the logs and filter the scale-up events.
kubectl -n kube-system logs deployment.apps/cluster-autoscaler | grep -A5 "Expanding Node Group"

EKS Cluster Monitoring and CloudWatch Logging

By now you have already set up the EKS cluster, but it is also important to monitor it. To monitor your cluster, follow the below steps:

  1. Create the below eks.yaml file and copy/paste the below code into the file.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-cluster
  region: us-east-1

nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 3
    ssh: # use existing EC2 key
      publicKeyName: eks-ssh-key
cloudWatch:
  clusterLogging:
    enableTypes: ["api", "audit", "authenticator"] # To select only a few log types
    # enableTypes: ["*"]  # If you need to enable all the log types
  2. Now apply the cluster logging configuration by running the below command.
eksctl utils update-cluster-logging --config-file eks.yaml --approve 
  3. To disable all the log types, run the below command.
eksctl utils update-cluster-logging --name=EKS-cluster --disable-types all

To get container metrics using CloudWatch: first add the IAM policy CloudWatchAgentServerPolicy to all your node group roles (a hedged CLI sketch follows), and then deploy the CloudWatch agent. After you deploy it, it will have its own namespace (amazon-cloudwatch).
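A minimal sketch of attaching that policy with the AWS CLI (the role name here is hypothetical; look up your node group's actual instance role in the IAM console):

aws iam attach-role-policy \
  --role-name eksctl-EKS-cluster-nodegroup-ng-1-NodeInstanceRole \
  --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy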

  4. Now run the below command, which downloads the quickstart manifest, substitutes the cluster name and region, and applies it.
curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/master/k8s-yaml-templates/quickstart/cwagent-fluentd-quickstart.yaml | sed "s/{{cluster_name}}/EKS-cluster/;s/{{region_name}}/us-east-1/" | kubectl apply -f -
  5. To check everything that has been created in the amazon-cloudwatch namespace:
kubectl get all -n amazon-cloudwatch

To generate some load and watch the metrics in action, you can deploy a sample php-apache application and a load generator (these commands come from the Kubernetes HPA walkthrough):

kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --limits=cpu=500m --expose --port=80
kubectl run --generator=run-pod/v1 -it --rm load-generator --image=busybox /bin/sh
# Once the shell prompt appears, run:
while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done

What is Helm?

Helm is a package manager for Kubernetes, similar to apt on Ubuntu or pip for Python. Helm mainly contains three components.

  • Chart: all the dependency files and application files.
  • Config: any configuration that you would like to deploy.
  • Release: a running instance of a chart.
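A small sketch of how the three relate (mychart and demo are hypothetical names):

helm create mychart                               # scaffolds a chart: templates plus default config (values.yaml)
helm install demo ./mychart                       # "demo" is a release: a running instance of the chart
helm upgrade demo ./mychart --set replicaCount=2  # new config creates a new revision of the release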

Helm Components

  • Helm client: manages repositories and releases, and communicates with the Helm library.
  • Helm library: interacts with the Kubernetes API server.

Installing Helm

  • To install helm, first create a working directory with the below commands and then change into it.
mkdir helm && cd helm
  • Next, download and install Helm 3 using the official install script, then check the version.
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
helm version
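  • Add the official stable chart repository, which you will need before installing charts such as stable/redis below (the URL is the Helm project's stable repo).
helm repo add stable https://charts.helm.sh/stable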
  • To list all the configured repositories
helm repo list
  • To update the repositories
helm repo update
  • To search all the charts in the helm repositories
helm search repo
  • To install one of the charts (name_of_the_chart is a release name of your choosing). After running the below command, check the number of Pods running by using the kubectl get pods command.
helm install name_of_the_chart stable/redis
  • To check the deployed charts
helm ls
  • To uninstall a helm release
helm uninstall <<name-of-release-from-previous-output>>

Creating AWS EKS Cluster Admin user

To manage all resources in the EKS cluster you need dedicated users (admin or read-only) to perform tasks accordingly. Let's begin by creating an admin user first.

  1. Create IAM user in AWS console (k8s-cluster-admin) and store the access key and secret key for this user locally on your machine.
  2. Next, add the user to the mapUsers section of the aws-auth ConfigMap. But before you add the user, let's list all the ConfigMaps in the kube-system namespace, because the users are stored in aws-auth.
kubectl -n kube-system get cm
  3. Save the ConfigMap to a yaml formatted file.
kubectl -n kube-system get cm aws-auth -o yaml > aws-auth-configmap.yaml
  4. Next, edit aws-auth-configmap.yaml and add a mapUsers entry with the following information (see the sketch after this list):
    • userarn
    • username
    • groups as (system:masters), which has admin/all permissions, basically a role
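A minimal sketch of the entry to add under the ConfigMap's data section (the account ID 111122223333 is a placeholder):

mapUsers: |
  - userarn: arn:aws:iam::111122223333:user/k8s-cluster-admin
    username: k8s-cluster-admin
    groups:
      - system:masters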
  5. Run the below command to apply the changes for the newly added user.
kubectl apply -f aws-auth-configmap.yaml -n kube-system

After you apply the changes, you will notice that the AWS EKS console no longer shows warnings such as Kubernetes objects not being accessible.

  6. Now check whether the user has been properly added by running the describe command.
kubectl -n kube-system describe cm aws-auth
  7. Next, add the user's credentials to the AWS credentials file in a dedicated section (profile), and then select that profile with the export command or set it via the AWS CLI.
export AWS_PROFILE="profile_name"
  8. Finally, check which identity is currently running the AWS CLI commands.
aws sts get-caller-identity

Creating a Read-only user for a dedicated namespace

Similarly, now create a read-only user for the AWS EKS cluster. Let's follow the below steps to create a read-only user and map it to IAM in the ConfigMap.

  1. Create a namespace using the below command.
kubectl create namespace production
  2. Create an IAM user (for example, prod-viewer) on the AWS console.
  3. Create a file rolebinding.yaml and add both the Role and the RoleBinding, which define the permissions the Kubernetes user will have. Note that the two objects are separated by --- in the file.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: production
  name: prod-viewer-role
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]  # can be further limited, e.g. ["deployments", "replicasets", "pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: prod-viewer-binding
  namespace: production
subjects:
- kind: User
  name: prod-viewer
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: prod-viewer-role
  apiGroup: rbac.authorization.k8s.io
  4. Now apply the Role and RoleBinding using the below command.
kubectl apply -f rolebinding.yaml
  5. Next, edit the yaml file and apply the changes (userarn, username, and groups) as you did previously; a minimal sketch of the new entry follows the commands.
kubectl -n kube-system get cm aws-auth -o yaml > aws-auth-configmap.yaml
kubectl apply -f aws-auth-configmap.yaml -n kube-system
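Between fetching and applying, add an entry like this minimal sketch under mapUsers (111122223333 is a placeholder account ID). No system group is needed here, because the RoleBinding above grants permissions to the prod-viewer username directly:

mapUsers: |
  - userarn: arn:aws:iam::111122223333:user/prod-viewer
    username: prod-viewer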
  6. Finally, test the user and the setup, as sketched below.
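One hedged way to test, assuming the read-only user's credentials are saved under an AWS profile named prod-viewer (the pod name below is hypothetical):

export AWS_PROFILE="prod-viewer"
kubectl get pods -n production                 # allowed: get/list/watch are granted
kubectl -n production delete pod example-pod   # denied: the role has no delete verb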

EKS Networking

  • The Amazon VPC CNI plugin assigns each Pod an IP address from the VPC, linked to an ENI.
  • Pods have the same IP address inside and outside the EKS cluster within the VPC.
  • Make sure you have enough IP addresses available, for example by using a /18 CIDR, which provides more IP addresses.
  • Each EC2 instance supports only a limited number of ENIs/IP addresses, so each EC2 instance can host only a limited number of Pods (around 36 or so, depending on the instance type; see the worked example below).
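As a worked example of that limit (ENI and IP figures per AWS's published instance limits), the usual formula is:

max pods = (number of ENIs) x (IPv4 addresses per ENI - 1) + 2

For a t3.small, which supports 3 ENIs with 4 IPv4 addresses each, that gives 3 x (4 - 1) + 2 = 11 pods.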

IAM and RBAC Integration in AWS EKS

  • Authentication is done by IAM
  • Authorization is done by Kubernetes RBAC
  • You can assign RBAC directly to IAM entities.

kubectl (user sends AWS identity) >>> connects with EKS >>> verifies AWS identity (by authorizing the AWS identity with Kubernetes RBAC)

How Worker nodes join the cluster

  1. When you create a worker node, assign it an IAM role; that IAM role needs to be authorized in RBAC in order for the node to join the cluster. Add the system:bootstrappers and system:nodes groups in your ConfigMap, with the node's NodeInstanceRole ARN as the value for rolearn, as in the sketch below, and then apply it with the below command.
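A minimal sketch of aws-auth.yaml (the account ID and role name are placeholders; use your node group's actual NodeInstanceRole ARN):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/my-NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes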
kubectl apply -f aws-auth.yaml
  2. Check the current state of the cluster services and nodes.
kubectl get svc,nodes -o wide

How to Scale Up and Down Kubernetes Pods

There are three ways of scaling Kubernetes Pods up and down; let's look at each of them.

  1. Scale the deployment to 3 replicas (that is, 3 pods will run) using the kubectl scale command.
kubectl scale --replicas=3 deployment/nginx-deployment
  2. Next, update the yaml file with 3 replicas and run the below kubectl apply command (let's say you have an abc.yaml file).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        service: nginx
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx 
        resources:
          limits:
            cpu: 300m
            memory: 512Mi
          requests:
            cpu: 300m
            memory: 512Mi
      nodeSelector:
        instance-type: spot
kubectl apply -f abc.yaml
  3. You can also scale the Pods using the Kubernetes Dashboard.
  4. Apply the manifest file that you created earlier by running the below command.
kubectl apply -f nginx.yaml
  5. Next, verify that the deployment was rolled out successfully.
kubectl get deployment --all-namespaces

Conclusion

In this tutorial you learned about AWS EKS from beginner to advanced level.

Now that you have a strong understanding of AWS EKS, which applications do you plan to manage on it?

How to create AWS EKS cluster using Terraform and connect Kubernetes cluster with ubuntu machine.

Working with a container orchestrator like Kubernetes has always been a top priority, and moving from self-managed Kubernetes to EKS has benefited almost everyone who has used it, since AWS manages the infrastructure, deployments, and scaling of the cluster for you.

Having said that, there is still some work to do, such as creating AWS EKS with the right permissions and policies, so why not automate this as well? That is exactly what happens in this tutorial, as we will use Terraform to automate the creation of your EKS cluster.

So in this tutorial we will configure a few Terraform files, and then you can create as many clusters as you need with a few commands. Please follow along.

Table of Contents

  • What is AWS EKS
  • Prerequisites
  • How to install Terraform on ubuntu machine
  • Terraform Configuration Files and Structure
  • Configure Terraform files to create AWS EKS cluster
  • Configure & Connect your Ubuntu Machine to communicate with your cluster
  • Conclusion

What is AWS EKS (Amazon Elastic Kubernetes Service)?

Amazon Elastic Kubernetes Service (AWS EKS) is Amazon's managed service that lets you host Kubernetes without worrying about the underlying infrastructure, such as provisioning Kubernetes nodes or installing Kubernetes yourself. It gives you a ready-made platform on which to host Kubernetes.

Some features of Amazon EKS (Elastic Kubernetes Service):

  • It expands and scales across multiple Availability Zones so that there is always high availability.
  • It automatically scales and replaces any impacted or unhealthy nodes.
  • It is interlinked with various other AWS services such as IAM, VPC, ECR, and ELB.
  • It is a very secure service.

How does AWS EKS service work?

  • The first step in EKS is to create an EKS cluster using the AWS CLI or the AWS Management Console.
  • Next, you can run your applications on your own EC2 machines, or deploy to AWS Fargate, which manages the compute for you.
  • Now connect to the Kubernetes cluster with kubectl commands.
  • Finally, deploy and run applications on the EKS cluster.

Prerequisites

  • An Ubuntu machine to run Terraform, preferably version 18.04 or later; if you don't have a machine, you can create an EC2 instance in your AWS account.
  • Recommended: at least 4 GB RAM.
  • At least 5 GB of drive space.
  • The Ubuntu machine should have an IAM role attached with full AWS EKS permissions, or ideally administrator permissions for working with demos.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

How to Install Terraform on Ubuntu 18.04 LTS

  • Update your existing system packages.
sudo apt update
  • Download Terraform (version 0.14.8 here) onto the machine.
wget https://releases.hashicorp.com/terraform/0.14.8/terraform_0.14.8_linux_amd64.zip
  • Install the zip package, which is required to unzip the archive.
sudo apt-get install zip -y
  • Unzip the downloaded Terraform zip file.
unzip terraform*.zip
  • Move the executable to a directory on your PATH.
sudo mv terraform /usr/local/bin
  • Verify Terraform by running the terraform command and checking the version.
terraform               # To check if terraform is installed 

terraform -version      # To check the terraform version  
  • This confirms that Terraform has been successfully installed on the Ubuntu 18.04 machine.

Terraform Configuration Files and Structure

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf: This file contains code that creates or imports AWS resources.
  • vars.tf: This file defines variable types and optionally sets default values.
  • output.tf: This file generates outputs from AWS resources. The output is shown after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. You provide the details of the provider, such as AWS, Oracle, or Google, so that Terraform can communicate with that provider and work with its resources. A minimal sketch follows this list.
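A minimal provider.tf sketch (the region is an assumption; use whichever region you deploy to):

# provider.tf
provider "aws" {
  region = "us-east-1"  # assumption: N. Virginia
}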

Configure Terraform files to create AWS EKS cluster

In this demonstration we will create an IAM role and an IAM policy, and attach the policy to that role. Later in this tutorial we will create the Kubernetes cluster with proper network configuration. Let's get started and configure the Terraform files required for the creation of AWS EKS in an AWS account.

  • Create a folder inside the opt directory and change into it.
mkdir /opt/terraform-eks-demo
cd /opt/terraform-eks-demo
  • Now create a file named main.tf inside the directory you're in.
vi main.tf
  • Paste the below code into the main.tf file.
# Creating IAM role with assume policy so that it can be assumed while connecting with Kubernetes cluster.

resource "aws_iam_role" "iam-role-eks-cluster" {
  name = "terraform-eks-cluster"
  assume_role_policy = <<POLICY
{
 "Version": "2012-10-17",
 "Statement": [
   {
   "Effect": "Allow",
   "Principal": {
    "Service": "eks.amazonaws.com"
   },
   "Action": "sts:AssumeRole"
   }
  ]
 }
POLICY
}

# Attach both EKS-Service and EKS-Cluster policies to the role.

resource "aws_iam_role_policy_attachment" "eks-cluster-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = "${aws_iam_role.iam-role-eks-cluster.name}"
}

resource "aws_iam_role_policy_attachment" "eks-cluster-AmazonEKSServicePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
  role       = "${aws_iam_role.iam-role-eks-cluster.name}"
}

# Create security group for AWS EKS.

resource "aws_security_group" "eks-cluster" {
  name        = "SG-eks-cluster"
  vpc_id      = "vpc-XXXXXXXXXXX"  # Use your VPC here

  egress {                   # Outbound Rule
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {                  # Inbound Rule
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

}

# Create EKS cluster

resource "aws_eks_cluster" "eks_cluster" {
  name     = "terraformEKScluster"
  role_arn =  "${aws_iam_role.iam-role-eks-cluster.arn}"
  version  = "1.19"

  vpc_config {             # Configure EKS with vpc and network settings 
   security_group_ids = ["${aws_security_group.eks-cluster.id}"]
   subnet_ids         = ["subnet-XXXXX","subnet-XXXXX"] # Use Your Subnets here
    }

  depends_on = [
    "aws_iam_role_policy_attachment.eks-cluster-AmazonEKSClusterPolicy",
    "aws_iam_role_policy_attachment.eks-cluster-AmazonEKSServicePolicy",
   ]
}



# Creating IAM role for EKS nodes with assume policy so that it can assume 


resource "aws_iam_role" "eks_nodes" {
  name = "eks-node-group"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_nodes.name
}

resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_nodes.name
}

resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_nodes.name
}

# Create EKS cluster node group

resource "aws_eks_node_group" "node" {
  cluster_name    = aws_eks_cluster.eks_cluster.name
  node_group_name = "node_tuto"
  node_role_arn   = aws_iam_role.eks_nodes.arn
  subnet_ids      = ["subnet-","subnet-"]

  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }

  # Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
  ]
}
 
 
  • The demo directory should now look like the below.
  • Now your files and code are ready for execution. Initialize Terraform.
terraform init
  • Terraform initialized successfully; now it's time to see the plan, which is a kind of blueprint before deployment. We generally use plan to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, it's time to actually deploy the code using apply.
terraform apply
  • The EKS cluster generally takes a few minutes to launch.
  • Let's verify the AWS EKS cluster and the other components created by Terraform: first, the IAM role with proper permissions.

  • Now verify the Amazon EKS cluster.
  • Finally, verify the node group of the cluster.

Configure & Connect your Ubuntu Machine to communicate with your cluster

So far we created the Kubernetes cluster in AWS EKS with proper IAM role permissions and configuration. Please make sure the AWS credentials configured on your local machine match the same IAM user or IAM role used while creating the cluster; that is, use the same IAM role credentials on the local machine that you used to create the Kubernetes cluster.

Here in this demonstration we are using the IAM role credentials of the EC2 instance from which we created AWS EKS using Terraform, so we can perform the below steps on the same machine.

  • Prerequisites: make sure you have the AWS CLI and kubectl installed on the Ubuntu machine in order to connect. If you don't have these two, don't worry; please see our other articles for both installations.
  • On the Ubuntu machine, configure kubeconfig to enable communication from your local machine to the Kubernetes cluster in AWS EKS.
aws eks update-kubeconfig --region us-east-2 --name terraformEKScluster
  • Now finally test the communication between the local machine and the cluster.
kubectl get svc

Great, we can see the connectivity from our local machine to the Kubernetes cluster.

Conclusion:

In this tutorial, we first went through a detailed view of what AWS Elastic Kubernetes Service is and how to create a Kubernetes cluster using Terraform. Then we showcased the connection between the Kubernetes cluster and the kubectl client on an Ubuntu machine.

Hope you had a great time going through this tutorial with its detailed explanations and practical examples. If you liked it, please share it with your friends and spread the word.

Getting Started with Amazon Elastic kubernetes Service (AWS EKS)

Kubernetes is a scalable open-source tool that manages container orchestration very effectively. It provides a platform to deploy your applications with a few commands. AWS EKS stands for Amazon Elastic Kubernetes Service, an AWS managed service that takes care of everything from managing infrastructure to deployments to scaling containerized applications.

In this tutorial, you will learn everything from the basics of Kubernetes to Amazon EKS.

Table of Contents

  1. What is Kubernetes?
  2. What is Amazon Elastic Kubernetes Service (Amazon EKS)
  3. Prerequisites
  4. Install Kubectl in Windows
  5. Install Kubectl in Linux
  6. How to create new Kubernetes cluster in Amazon EKS
  7. Configure & Connect your Local machine to communicate with your cluster
  8. Conclusion

What is Kubernetes?

Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerized applications. It was originally developed by Google and is also known as k8s. It can run on any platform: on premises, hybrid, or public cloud.

Features of kubernetes

  1. Kubernetes scales very well.
  2. Load balancing.
  3. Auto restarts if required.
  4. Self healing and automatic rollbacks.
  5. You can manage configuration as well, like secrets or passwords.
  6. Kubernetes can mount various storages such as EFS and local storage.
  7. Kubernetes works well with volume providers such as NFS and Flocker automatically.

Kubernetes Components

  • Pod: Pods are groups of containers which have shared storage and network.
  • Service: Services are used when you want to expose an application outside of your local environment.
  • Ingress: Ingress helps in exposing http/https routes from the outside world to the services in your cluster.
  • ConfigMap: Pods consume ConfigMaps as environment variables, command line arguments, or configuration files (see the sketch after this list).
  • Secrets: Secrets, as the name suggests, store sensitive information such as passwords, OAuth tokens, SSH keys, etc.
  • Volumes: These provide persistent storage for containers.
  • Deployment: A Deployment is an additional layer which defines how Pods and containers should be created, using yaml files.
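A minimal sketch of a Pod consuming a ConfigMap as environment variables (app-config and APP_MODE are hypothetical names):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_MODE: "production"
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: demo
    image: nginx
    envFrom:
    - configMapRef:
        name: app-config  # every key becomes an environment variable in the container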

What is AWS EKS (Amazon Elastic Kubernetes Services) ?

Amazon provides its own managed service, AWS EKS, where you can host Kubernetes without needing to install, operate, and maintain your own Kubernetes control plane or nodes. It gives you a platform to host the Kubernetes control plane and the applications or services inside it. Some basic points related to EKS follow:

  • It expands and scales the Kubernetes control plane across many Availability Zones so that there is always high availability.
  • It automatically scales and replaces control plane instances if any instance is impacted or unhealthy.
  • It is integrated with various other AWS services, such as IAM for authentication, VPC for isolation, ECR for container images, and ELB for load distribution.
  • It is a very secure service.

How does AWS EKS service work?

  • The first step in EKS is to create an EKS cluster using the AWS CLI or the AWS Management Console.
  • Next, launch self-managed EC2 instances where you deploy applications, or deploy workloads to AWS Fargate, which manages the compute for you.
  • After the cluster is set up, connect to the Kubernetes cluster using kubectl commands.
  • Finally, deploy and run applications on the EKS cluster.

Prerequisites

  • You must have an AWS account with full access to AWS EKS in order to set up a cluster. If you don't have an AWS account, please create one here: AWS account.
  • AWS CLI installed. If you don't have it already, install it from here.

Install Kubectl on Windows machines

  • Open PowerShell and run the below command.
curl -o kubectl.exe https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/windows/amd64/kubectl.exe
  • Now verify in the C drive that the binary file has been downloaded successfully.
  • Now run the kubectl binary file and verify the client.
  • Verify its version with the following command.
kubectl version --short --client

Install Kubectl on Linux machine

  • Download the kubectl binary using the curl command on the Ubuntu machine, under the home directory, i.e. $HOME.
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
  • Apply execute permissions to the binary
chmod +x ./kubectl
  • Copy the binary to a folder in your PATH
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
  • Verify the kubectl version on ubuntu machine
kubectl version --short --client

Amazon EKS Clusters

An Amazon EKS cluster has the following components:

  1. The Amazon EKS control plane is not shared between accounts or with any other clusters. The control plane contains at least two API servers, which are exposed via the Amazon EKS endpoint associated with the cluster, and three etcd instances, which are associated with Amazon EBS volumes encrypted using AWS KMS. Amazon EKS automatically monitors the load on the control plane and removes unhealthy instances when needed. Amazon EKS uses Amazon VPC network policies to restrict traffic between control plane components to within a single cluster.
  2. Amazon EKS nodes are registered with the control plane via the API server endpoint and a certificate file that is created for your cluster. Your Amazon EKS cluster can schedule pods on any combination of Self-managed nodes, Amazon EKS Managed node groups, and AWS Fargate.
    • Self-managed nodes
      • Can run containers that require Windows and Linux.
      • Can run workloads that require Arm processors.
      • All of your pods on each of your nodes share a kernel runtime environment with other pods.
      • If the pod requires more resources than requested, and resources are available on the node, the pod can use additional resources.
      • Can assign IP addresses to pods from a different CIDR block than the IP address assigned to the node.
      • Can SSH into node
    • Amazon EKS Managed node groups
      • Can run containers that require Linux.
      • Can run workloads that require Arm processors.
      • All of your pods on each of your nodes share a kernel runtime environment with other pods.
      • If the pod requires more resources than requested, and resources are available on the node, the pod can use additional resources.
      • Can assign IP addresses to pods from a different CIDR block than the IP address assigned to the node.
      • Can SSH into node
    • AWS Fargate
      • Can run containers that require Linux.
      • Here each pod has a dedicated kernel.
      • The pod can be re-deployed using a larger vCPU and memory configuration though.
      • There is no Node.
      • As there is no Node, you cannot SSH into node
  3. Workloads: A pod contains one or more containers. Workloads define applications running on a Kubernetes cluster, and every workload controls pods. There are five types of workloads on a cluster.
    • Deployment: Ensures that a specific number of pods run and includes logic to deploy changes
    • ReplicaSet: Ensures that a specific number of pods run. Can be controlled by deployments.
    • StatefulSet : Manages the deployment of stateful applications
    • DaemonSet: Ensures that a copy of a pod runs on all (or some) nodes in the cluster
    • Job: Creates one or more pods and ensures that a specified number of them run to completion
  • By default, Amazon EKS clusters have three workloads (see the command sketch below):
    • coredns: A Deployment that deploys two pods that provide name resolution for all pods in the cluster.
    • aws-node: A DaemonSet that deploys one pod to each Amazon EC2 node in your cluster; it runs the AWS VPC CNI controller, which provides VPC networking functionality to the pods and nodes in your cluster.
    • kube-proxy: A DaemonSet that deploys one pod to each Amazon EC2 node in your cluster; it maintains network rules on nodes that enable networking communication to your pods.
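A quick way to see these defaults on a fresh cluster (output varies by EKS version):

kubectl get deployment,daemonset -n kube-system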

Creating Kubernetes cluster in Amazon EKS

In this demonstration we will create and set up a Kubernetes cluster in Amazon EKS using the AWS Management Console and AWS CLI commands. Before we start, make sure you have a VPC created and an IAM role with full access to EKS permissions.

  • We already have one default VPC in every AWS account. If you wish to create another VPC specifically for AWS EKS, you can create one.
  • Hop over to the IAM service and create an IAM policy with full EKS permissions.
  • Click on Create policy and then click on choose service.
  • Now give a name to the policy and click create.
  • Now go to IAM roles and create a role.
  • Now choose the EKS service and then select EKS cluster as your use case.
  • Give a name to the role and then hit create role.
  • Now attach the policy we created to the IAM role.
  • One more point here: please add STS permissions to the role's trust relationship, which will be required when the client makes requests.
  • Make sure your JSON trust policy looks like this.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*",
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
  • Now that we are done with the IAM role and policy attachment, we can create and work with the Kubernetes cluster.
  • Now go to the AWS EKS console and click on Create cluster.
  • Now add all the configurations.
  • Now add the VPC details and two public subnets if you have them. You can skip the subnets for now.
  • Keep hitting NEXT and finally click on Create cluster.
  • Let's verify that the cluster is up and active. It takes some time for the cluster to come up.

Now the Kubernetes cluster on AWS EKS is successfully created. Let's initiate communication from the client we installed to the Kubernetes cluster.

Configure & Connect your Local machine to communicate with your cluster

So far we created the Kubernetes cluster in AWS EKS with proper IAM role permissions and configuration. Please make sure the AWS credentials configured on your local machine match the same IAM user or IAM role used while creating the cluster; that is, use the same IAM user or IAM role credentials on the local machine that you used to create the Kubernetes cluster.

  • Open Visual Studio Code, Git Bash, or a command prompt.
  • Now configure kubeconfig to enable communication from your local machine to the Kubernetes cluster in AWS EKS.
aws eks update-kubeconfig --region us-east-2 --name Myekscluster
  • Now finally test the communication between the local machine and the cluster.
kubectl get svc

Great, you can see the connectivity from your local machine to the Kubernetes cluster!

Create nodes on Kubernetes cluster

An Amazon EKS cluster can schedule pods on any combination of self-managed nodes, Amazon EKS managed node groups, and AWS Fargate.

Amazon EKS Managed node group

  • With an Amazon EKS managed node group you don't need to separately provision or register Amazon EC2 instances. All managed nodes are part of an Amazon EC2 Auto Scaling group.
  • You can add a managed node group to new or existing clusters using the Amazon EKS console, eksctl, the AWS CLI, the AWS API, or AWS CloudFormation. Managed node groups manage the Amazon EC2 instances for you.
  • A managed node group's Auto Scaling group spans all of the subnets that you specify when you create the group.
  • Amazon EKS managed node groups can be launched in both public and private subnets.
  • You can create multiple managed node groups within a single cluster.

Creating Managed node group using AWS Management Console.

  • Go to the Amazon EKS page, navigate to the Configuration tab, select the Compute tab, and then choose Add Node Group; fill in all the details such as the name and the node IAM role that you created earlier while creating the cluster.
  • Next, on the Set compute and scaling configuration page, enter all the details such as instance type and capacity type, and then click on NEXT.
  • Now add the networking details such as VPC details, subnets, and SSH key details.
  • You can also find details of the nodes from your local machine by running the following commands.
aws eks update-kubeconfig --region us-east-2 --name "YOUR_CLUSTER_NAME"
kubectl get nodes --watch

Let's learn how to create Fargate (Linux) nodes and use them in the Kubernetes cluster.

  • In order to create Fargate (Linux) nodes, the first thing you need to do is create a Fargate profile. This profile is needed because when any pod gets deployed to Fargate, it is first matched against the desired configuration in the profile and then deployed. The configuration contains permissions such as the pod's ability to pull container images from ECR. You can find the steps to create a Fargate profile from here; a minimal eksctl sketch follows.
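A minimal eksctl sketch for creating such a profile (the cluster and profile names are placeholders):

eksctl create fargateprofile --cluster Myekscluster --name fp-default --namespace default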

Conclusion

In this tutorial we learned a lot. First we went through a detailed view of what Kubernetes is and what Amazon Elastic Kubernetes Service (AWS EKS) is. Then we learned how to install the Kubernetes client kubectl on Windows as well as Linux machines, and finally we created a Kubernetes cluster and connected to it using the kubectl client.

Hope you had a wonderful experience going through this ultimate guide to Kubernetes and EKS. If you liked it, please share it with your friends and spread the word.