The Ultimate Guide on AWS EKS for Beginners [Easiest Way]

In this Ultimate Guide you will learn, as a beginner, everything you should know about AWS EKS and how to manage your AWS EKS cluster.

Come on, let's begin!

Table of Contents

  1. What is AWS EKS?
  2. Why do you need AWS EKS rather than self-managed Kubernetes?
  3. Installing tools to work with AWS EKS Cluster
  4. Creating AWS EKS using EKSCTL command line tool
  5. Adding one more Node group in the AWS EKS Cluster
  6. Cluster Autoscaler
  7. Creating and Deploying Cluster Autoscaler
  8. Nginx Deployment on the EKS cluster when Autoscaler is enabled.
  9. EKS Cluster Monitoring and CloudWatch Logging
  10. What is Helm?
  11. Creating AWS EKS Cluster Admin user
  12. Creating Read only user for the dedicated namespace
  13. EKS Networking
  14. IAM and RBAC Integration in AWS EKS
  15. Worker nodes join the cluster
  16. How to Scale Up and Down Kubernetes Pods
  17. Conclusion

What is AWS EKS?

Amazon provides its own managed service, AWS EKS, where you can host Kubernetes without worrying about infrastructure such as Kubernetes nodes or the installation of Kubernetes itself. It gives you a platform to host Kubernetes.

Some features of Amazon EKS (Elastic Kubernetes Service):

  1. It expands and scales across many availability zones so that there is always high availability.
  2. It automatically scales and fixes any impacted or unhealthy node.
  3. It is integrated with various other AWS services such as IAM, VPC, ECR, ELB, etc.
  4. It is a very secure service.

How does AWS EKS service work?

  • The first step in EKS is to create an EKS cluster using the AWS CLI, the AWS Management Console, or the eksctl command line tool.
  • Next, you can use your own EC2 machines to deploy applications, or deploy to AWS Fargate, which manages them for you.
  • Now connect to the Kubernetes cluster with kubectl or eksctl commands.
  • Finally, deploy and run applications on the EKS cluster.

Why do you need AWS EKS rather than self-managed Kubernetes?

If you are working with self-managed Kubernetes you are required to handle all of the below things yourself:

  1. Create and Operate K8s clusters.
  2. Deploy Master Nodes
  3. Deploy Etcd
  4. Setup CA for TLS encryption.
  5. Setup Monitoring, AutoScaling and Auto healing.
  6. Setup Worker Nodes.

But with AWS EKS you only need to manage the worker nodes. Everything else (the master nodes, etcd in high availability, the API server, KubeDNS, the Scheduler, the Controller Manager, and the Cloud Controller) is taken care of by Amazon EKS.

You pay 0.20 US dollars per hour for your AWS EKS cluster, which comes to about 144 US dollars per month.
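As a quick sanity check on that figure, the hourly control-plane charge multiplies out as follows (a back-of-the-envelope sketch assuming a 30-day month; worker-node EC2 costs are billed separately on top of this):

```shell
# EKS control-plane cost: 0.20 USD/hour over a 30-day month
awk 'BEGIN { printf "%.0f\n", 0.20 * 24 * 30 }'   # prints 144
```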

Installing tools to work with AWS EKS Cluster

  1. AWS CLI: Required as a dependency of eksctl to obtain the authentication token. To install the AWS CLI run the below command.
pip3 install --user awscli
After you install the AWS CLI, make sure to set the access key and secret access key in the AWS CLI so that it can create the EKS cluster.
  2. eksctl: To set up and operate the EKS cluster. To install eksctl run the below commands. The below command downloads the eksctl binary into the /tmp directory.
curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/v0.69.0/eksctl_Linux_amd64.tar.gz" | tar xz -C /tmp
  • Next, move the eksctl binary into an executable directory.
sudo mv /tmp/eksctl /usr/local/bin
  • To check the version of eksctl and see if it is properly installed, run the below command.
eksctl version
  3. kubectl: Interacts with the k8s API server. To install the kubectl tool, first run the below command, which updates the system and installs the apt-transport-https package.
sudo apt-get update && sudo apt-get install -y apt-transport-https
  • Next, run the curl command that adds the GPG key to the system, used to verify the authenticity of the Kubernetes package repository.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  • Next, add the Kubernetes repository.
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
  • Update the system again so that the newly added repository takes effect.
sudo apt-get update
  • Next, install the kubectl tool.
sudo apt-get install -y kubectl
  • Next, check the version of the kubectl tool by running below command.
kubectl version --short --client
  4. IAM user and IAM role:
  • Create an IAM user with administrator access and use that IAM user to explore the AWS resources on the console. This is also the user whose credentials you will pass to the AWS CLI on the EC2 instance that you will use to manage the AWS EKS cluster.
  • Also make sure to create an IAM role that you will attach to the EC2 instance from which you will manage AWS EKS and other AWS resources.
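For reference, those credentials typically end up in an AWS CLI profile like the one below (a sketch; the profile name eks-admin is illustrative, and the keys shown are AWS's documented example values, not real credentials):

```ini
# ~/.aws/credentials (placeholder values, substitute your IAM user's keys)
[eks-admin]
aws_access_key_id     = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

You can then select this profile with export AWS_PROFILE=eks-admin before running eksctl.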

Creating AWS EKS using EKSCTL command line tool

So far you have installed and set up the tools that are required for creating an AWS EKS cluster. To learn how to create a cluster using the eksctl command, run the help command, which lists the flags you can use while creating an AWS EKS cluster.

eksctl create cluster --help 
  1. Let's begin creating an EKS cluster. To do that, create a file named eks.yaml and copy/paste the below content.
    • apiVersion is the Kubernetes API version that will manage the deployment.
    • kind denotes what kind of resource/object Kubernetes will create. In the below case, as you need to provision a cluster, you should give ClusterConfig.
    • metadata: data that helps uniquely identify the object, including a name string, UID, and optional namespace.
    • nodeGroups: provide the name of the node group and the other details required for the node group that will be used in your EKS cluster.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-course-cluster
  region: us-east-1

nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 3
    ssh: # use existing EC2 key
      publicKeyName: eks-course
  2. Now, execute the command below to create the cluster.
eksctl create cluster -f eks.yaml
  3. Once the cluster is successfully created, run the below command to see the details of the cluster.
eksctl get cluster
  4. Next, verify the AWS EKS cluster on the AWS console.
  5. Also verify the nodes of the nodegroups that were created along with the cluster by running the below command.
kubectl get nodes
  6. Also, verify the nodes on the AWS console. To check the nodes, navigate to EC2 instances.
  7. Verify the nodegroups in the EKS cluster by running the eksctl command.
eksctl get nodegroup --cluster EKS-course-cluster
  8. Finally, verify the number of Pods in the EKS cluster by running the below kubectl command.
kubectl get pods --all-namespaces

Adding one more Node group in the AWS EKS Cluster

To add another node group in EKS Cluster follow the below steps:

  1. Create a YAML file as shown below and copy/paste the below content. In the below file you will notice that the previous nodegroup (ng-1) is already mentioned; if you run this file without it, eksctl will treat ng-1 as removed and delete it from the cluster.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-cluster
  region: us-east-1

nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 3
    ssh: # use existing EC2 key
      publicKeyName: testing
# Adding another node group, nodegroup2, with min/max capacity of 2 and 3 respectively
  - name: nodegroup2
    minSize: 2
    maxSize: 3
    instancesDistribution:
      maxPrice: 0.2
      instanceTypes: ["t2.small", "t3.small"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
    ssh:
      publicKeyName: testing
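To see what onDemandPercentageAboveBaseCapacity: 50 means in practice, here is a small sketch of the arithmetic, assuming hypothetically that the group scales to 4 instances above the base of 0: half of them are launched On-Demand and the rest come from the Spot market.

```shell
# With onDemandBaseCapacity=0 and onDemandPercentageAboveBaseCapacity=50,
# 50% of capacity above the base is On-Demand and the remainder is Spot.
awk 'BEGIN { total=4; base=0; pct=50; od=base + int((total - base) * pct / 100); print od " on-demand, " total - od " spot" }'
```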
  2. Next, run the below command to create the nodegroup.
eksctl create nodegroup --config-file=node_group.yaml --include='nodegroup2'
  3. If you wish to delete the node group from the EKS cluster, run any one of the below commands.
eksctl delete nodegroup --cluster=EKS-cluster --name=nodegroup2
eksctl delete nodegroup --config-file=eks.yaml --include='nodegroup2' --approve
  • To scale the node group in the EKS cluster:
eksctl scale nodegroup --cluster=name_of_the_cluster --nodes=5 --name=node_grp_2

Cluster Autoscaler

The Cluster Autoscaler automatically launches additional worker nodes if more resources are needed, and shuts down worker nodes if they are underutilized. Autoscaling works within a node group, so you should create a node group with the Autoscaler feature enabled.

Cluster Autoscaler has the following features:

  • The Cluster Autoscaler is used to scale the nodes within a node group up and down.
  • It runs as a Deployment and makes its decisions based on CPU and memory requests.
  • A node group can contain On-Demand and Spot instances.
  • There are two types of scaling:
    • Multi-AZ scaling: node group spanning multiple AZs (stateless workloads)
    • Single-AZ scaling: node group within a single AZ (stateful workloads)

Creating and Deploying Cluster Autoscaler

The main function of the Autoscaler is to dynamically add or remove nodes within a nodegroup on the fly. The Autoscaler runs as a Deployment and depends on the CPU/memory requests.

There are two types of scaling available: Multi-AZ versus Single-AZ (stateful workloads), since EBS volumes cannot be spread across multiple availability zones.

To create the Cluster Autoscaler you can add multiple nodegroups in the cluster as per your needs. In this example let's deploy two node groups: one within a single AZ, and one spanning multiple AZs using Spot instances, with the Autoscaler enabled.

  1. Create a file and name it autoscaler.yaml, then copy/paste the below content.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-cluster
  region: us-east-1

nodeGroups:
  - name: scale-east1c
    instanceType: t2.small
    desiredCapacity: 1
    maxSize: 10
    availabilityZones: ["us-east-1c"]
# iam holds all IAM attributes of a NodeGroup
# enables IAM policy for cluster-autoscaler
    iam:
      withAddonPolicies:
        autoScaler: true
    labels:
      nodegroup-type: stateful-east1c
      instance-type: onDemand
    ssh: # use existing EC2 key
      publicKeyName: eks-ssh-key
  - name: scale-spot
    desiredCapacity: 1
    maxSize: 10
    instancesDistribution:
      instanceTypes: ["t2.small", "t3.small"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 0
    availabilityZones: ["us-east-1c", "us-east-1d"]
    iam:
      withAddonPolicies:
        autoScaler: true
    labels:
      nodegroup-type: stateless-workload
      instance-type: spot
    ssh: 
      publicKeyName: eks-ssh-key

  2. Run the below command to add the nodegroups.
eksctl create nodegroup --config-file=autoscaler.yaml
  3. Verify the nodegroups by running the below command.
eksctl get nodegroup --cluster=EKS-cluster
  4. Next, to deploy the Autoscaler, run the below kubectl commands.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
kubectl -n kube-system annotate deployment.apps/cluster-autoscaler cluster-autoscaler.kubernetes.io/safe-to-evict="false"
  5. To edit the deployment and set your AWS EKS cluster name, run the below kubectl command.
kubectl -n kube-system edit deployment.apps/cluster-autoscaler
  6. Next, describe the deployment of the Autoscaler by running the below kubectl command.
kubectl -n kube-system describe deployment cluster-autoscaler
  7. Finally, view the Cluster Autoscaler logs by running the below kubectl command on the kube-system namespace.
kubectl -n kube-system logs deployment.apps/cluster-autoscaler
  8. Verify the Pods. You should notice below that the first pod belongs to nodegroup1, the second to nodegroup2, and the third is the Autoscaler pod itself.

Nginx Deployment on the EKS cluster when the Autoscaler is enabled

  1. To deploy the nginx application on the EKS cluster that you just created, create a YAML file, name it something convenient (for example nginx-deployment.yaml, as used below), and copy/paste the below content into it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        service: nginx
        app: nginx
    spec:
      containers:
      - image: nginx
        name: test-autoscaler
        resources:
          limits:
            cpu: 300m
            memory: 512Mi
          requests:
            cpu: 300m
            memory: 512Mi
      nodeSelector:
        instance-type: spot
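Those resource requests also determine how many of these pods fit on a node. As a rough sketch (assuming, as an approximation, about 940m allocatable CPU on a t2.small after system reservations; the real value varies by AMI and configuration):

```shell
# Each replica requests 300m CPU; approximate pods-per-node by CPU alone
awk 'BEGIN { print int(940 / 300) }'   # prints 3
```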


  2. Now, to apply the nginx deployment, run the below command.
kubectl apply -f nginx-deployment.yaml
  3. After successful deployment, check the number of Pods.
kubectl get pods
  4. Check the number and type of nodes. Note that the label key matches the instance-type label defined in the nodegroup.
kubectl get nodes -l instance-type=spot
  • Scale the deployment to 3 replicas (that is, 3 pods).
kubectl scale --replicas=3 deployment/test-autoscaler
  • Check the logs and filter the scaling events.
kubectl -n kube-system logs deployment.apps/cluster-autoscaler | grep -A5 "Expanding Node Group"

EKS Cluster Monitoring and CloudWatch Logging

By now you have already set up the EKS cluster, but it is also important to monitor it. To monitor your cluster follow the below steps:

  1. Create the below eks.yaml file and copy/paste the below code into the file.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-cluster
  region: us-east-1

nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 3
    ssh: # use existing EC2 key
      publicKeyName: eks-ssh-key
cloudWatch:
  clusterLogging:
    enableTypes: ["api", "audit", "authenticator"] # To select only few log_types
    # enableTypes: ["*"]  # If you need to enable all the log_types
  2. Now apply the cluster logging configuration by running the below command.
eksctl utils update-cluster-logging --config-file eks.yaml --approve 
  3. To disable all the configured log types:
eksctl utils update-cluster-logging --name=EKS-cluster --disable-types all

To get container metrics using CloudWatch: first add the IAM policy (CloudWatchAgentServerPolicy) to all of your nodegroup roles, and then deploy the CloudWatch agent. After you deploy it, it will have its own namespace (amazon-cloudwatch).

  1. Now run the below command.
curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/master/k8s-yaml-templates/quickstart/cwagent-fluentd-quickstart.yaml | sed "s/{{cluster_name}}/EKS-cluster/;s/{{region_name}}/us-east-1/" | kubectl apply -f -
  2. To check everything that has been created in the amazon-cloudwatch namespace:
kubectl get all -n amazon-cloudwatch

To generate load for testing the Autoscaler, you can run a sample php-apache application and a busybox load generator.
kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --limits=cpu=500m --expose --port=80
kubectl run --generator=run-pod/v1 -it --rm load-generator --image=busybox /bin/sh
Hit enter for the command prompt, then run:
while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done

What is Helm?

Helm is a package manager for Kubernetes, similar to apt in Ubuntu or pip in Python. Helm contains mainly three components.

  • Chart: all the dependency files and application files.
  • Config: any configuration that you would like to deploy.
  • Release: a running instance of a chart.

Helm Components

  • Helm client: manages repositories and releases, and communicates with the Helm library.
  • Helm library: interacts with the Kubernetes API server.

Installing Helm

  • To install Helm, first create a directory with the below commands and then change into it.
mkdir helm && cd helm
  • Next, download and run the official Helm 3 install script, then check the version.
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
helm version
  • Next, add the official stable Helm repository, which contains sample charts to install.
helm repo add stable https://charts.helm.sh/stable
  • To list the configured repositories.
helm repo list
  • To update the repositories.
helm repo update
  • To check all the charts in the helm repository.
helm search repo
  • To install one of the charts. After running the below command, check the number of Pods running using the kubectl get pods command.
helm install name_of_the_release stable/redis
  • To check the deployed charts
helm ls
  • To uninstall a Helm release.
helm uninstall <<name-of-release-from-previous-output>>

Creating AWS EKS Cluster Admin user

To manage all resources in the EKS cluster you need dedicated users (admin or read-only) to perform tasks accordingly. Let's begin by creating an admin user first.

  1. Create IAM user in AWS console (k8s-cluster-admin) and store the access key and secret key for this user locally on your machine.
  2. Next, add the user to the mapUsers section of the aws-auth ConfigMap. But before you add a user, let's find all the ConfigMaps in the kube-system namespace, because aws-auth is where all the users are stored.
kubectl -n kube-system get cm
  3. Save the output of the kubectl command in a YAML-formatted file.
kubectl -n kube-system get cm aws-auth -o yaml > aws-auth-configmap.yaml
  4. Next, edit aws-auth-configmap.yaml and add a mapUsers entry with the following information:
    • userarn
    • username
    • groups as (system:masters), which has admin/all permissions (basically a role)
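The resulting mapUsers section looks roughly like the fragment below (a sketch; the account ID 111122223333 is a placeholder for your own account ID):

```yaml
# data section of the aws-auth ConfigMap (placeholder ARN)
mapUsers: |
  - userarn: arn:aws:iam::111122223333:user/k8s-cluster-admin
    username: k8s-cluster-admin
    groups:
      - system:masters
```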
  5. Run the below command to apply the changes for the newly added user.
kubectl apply -f aws-auth-configmap.yaml -n kube-system

After you apply the changes you will notice that the AWS EKS console no longer shows warnings such as "Kubernetes objects cannot be accessed".

  6. Now check whether the user has been properly added by running the describe command.
kubectl -n kube-system describe cm aws-auth
  7. Next, add the user to the AWS credentials file in a dedicated section (profile) and then export it using the export command, or store it via the AWS CLI.
export AWS_PROFILE="profile_name"
  8. Finally, check which user is currently running the AWS CLI commands.
aws sts get-caller-identity

Creating a read only user for the dedicated namespace

Similarly, now create a read-only user for the AWS EKS service. Follow the below steps to create a read-only user and map it to IAM in the ConfigMap.

  1. Create a namespace using the below command.
kubectl create namespace production
  2. Create an IAM user on the AWS console.
  3. Create a file rolebinding.yaml and add both the Role and the RoleBinding that include the permissions the Kubernetes user will have.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: production
  name: prod-viewer-role
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]  # can be further limited, e.g. ["deployments", "replicasets", "pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: prod-viewer-binding
  namespace: production
subjects:
- kind: User
  name: prod-viewer
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: prod-viewer-role
  apiGroup: rbac.authorization.k8s.io
  4. Now apply the Role and RoleBinding using the below command.
kubectl apply -f rolebinding.yaml
  5. Next, edit the aws-auth ConfigMap and apply the changes (userarn, username, and groups) as you did previously.
kubectl -n kube-system get cm aws-auth -o yaml > aws-auth-configmap.yaml
kubectl apply -f aws-auth-configmap.yaml -n kube-system
  6. Finally, test the user and the setup.

EKS Networking

  • The Amazon VPC CNI plugin assigns each Pod an IP address that is linked with an ENI.
  • Pods have the same IP address inside and outside the EKS cluster within the VPC.
  • Make sure to maximize available IP addresses by using a larger CIDR block such as /18.
  • Each EC2 instance supports only a limited number of ENIs/IP addresses, which means each EC2 instance can run only a limited number of Pods (for example around 36, depending on the instance type).
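The per-instance Pod limit mentioned above follows from the ENI math: max pods = ENIs × (IPv4 addresses per ENI − 1) + 2, since one IP per ENI is reserved for the node and 2 is added for host-networking pods. A small sketch for a t3.small, which supports 3 ENIs with 4 IPv4 addresses each:

```shell
# max pods = ENIs * (IPv4 per ENI - 1) + 2
awk 'BEGIN { enis=3; ips=4; print enis * (ips - 1) + 2 }'   # prints 11
```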

IAM and RBAC Integration in AWS EKS

  • Authentication is done by IAM.
  • Authorization is done by Kubernetes RBAC.
  • You can assign RBAC permissions directly to IAM entities.

kubectl (user sends AWS identity) >>> connects to EKS >>> verifies AWS identity (by authorizing the AWS identity with Kubernetes RBAC)

Worker nodes join the cluster

  1. When you create a worker node, assign it an IAM role; that IAM role needs to be authorized in RBAC in order for the node to join the cluster. Add the system:bootstrappers and system:nodes groups to your ConfigMap, with the value of rolearn set to the NodeInstanceRole, and then run the below command.
kubectl apply -f aws-auth.yaml
  2. Check the current state of cluster services and nodes.
kubectl get svc,nodes -o wide

How to Scale Up and Down Kubernetes Pods

There are three ways of scaling Kubernetes Pods up and down. Let's look at all three.

  1. Scale the deployment to 3 replicas (that is, 3 Pods) using the kubectl scale command.
kubectl scale --replicas=3 deployment/nginx-deployment
  2. Next, update the YAML file with 3 replicas and run the below kubectl apply command (let's say you have an abc.yaml file).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        service: nginx
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx 
        resources:
          limits:
            cpu: 300m
            memory: 512Mi
          requests:
            cpu: 300m
            memory: 512Mi
      nodeSelector:
        instance-type: spot
kubectl apply -f abc.yaml
  3. You can also scale the Pods using the Kubernetes Dashboard.
  4. Apply the manifest file that you created earlier by running the below command.
kubectl apply -f nginx.yaml
  5. Next, verify that the deployment has been done successfully.
kubectl get deployment --all-namespaces

Conclusion

In this tutorial you learned everything about AWS EKS, from beginner to advanced level.

Now that you have a strong understanding of AWS EKS, which applications do you plan to manage on it?
