How to Deploy a Kubernetes Stateful Application (Kubernetes StatefulSets) in an AWS EKS Cluster

Are you looking for permanent storage for your Kubernetes applications or Kubernetes Pods? If yes, you are at the right place to learn about Kubernetes StatefulSets, which manage the deployment and scaling of a set of Pods and provide guarantees about the ordering and uniqueness of those Pods.

In this tutorial, you will learn how to deploy a Kubernetes StatefulSet application step by step. Let’s get into it.


Table of Contents

  1. Prerequisites
  2. What is kubernetes statefulsets deployment?
  3. Deploying kubernetes statefulsets deployment in Kubernetes Cluster
  4. Creating Kubernetes Namespace for kubernetes stateful sets deployment
  5. Creating a Storage class required for Persistent Volume (PV)
  6. Creating a persistent volume claim (PVC)
  7. Creating Kubernetes secrets to store passwords
  8. Creating the Stateful backend deployment in the cluster
  9. Creating the Stateful Frontend WordPress deployment
  10. Kubernetes Stateful application using AWS EBS vs Kubernetes Stateful application using AWS EFS
  11. Conclusion

Prerequisites

  • AWS EKS cluster already created.
  • AWS account

What is a Kubernetes StatefulSet deployment?

Kubernetes StatefulSets manage stateful applications, such as MySQL, MongoDB, and other databases, which need persistent storage. A StatefulSet manages the deployment and scaling of a set of Pods and provides guarantees about the ordering and uniqueness of those Pods.

For a StatefulSet with N replicas, Pods are created sequentially, in order from {0..N-1}, and are terminated in reverse order, from {N-1..0}.
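For example, for a hypothetical StatefulSet named web with 3 replicas (not part of this deployment, just an illustration), the Pods get stable ordinal names and are started one after another:

kubectl get pods -l app=web
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          3m
web-1   1/1     Running   0          2m
web-2   1/1     Running   0          1m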

Deploying a Kubernetes StatefulSet deployment in the Kubernetes Cluster

In this article, you will deploy the Kubernetes StatefulSet deployment with the following components:

  1. Frontend application: a WordPress service deployed as a Kubernetes StatefulSet containing a persistent volume (AWS EBS) to store HTML pages.
  2. Backend application: a MySQL service deployed as a Kubernetes Deployment containing a persistent volume (AWS EBS) to store MySQL data.
  3. A Load Balancer on top of the frontend application. The Load Balancer routes the traffic to the WordPress Pods, and the WordPress Pods store data in the MySQL Pod by routing it via the MySQL service, as shown in the picture below.
Deploying Kubernetes stateful sets deployment in Kubernetes Cluster

Creating a Kubernetes Namespace for the Kubernetes StatefulSet deployment

Now that you know what Kubernetes StatefulSets are and which components you need to deploy them in the Kubernetes cluster, deploy everything in a dedicated namespace to keep things simple. Let’s create the Kubernetes namespace.

  • Create a Kubernetes namespace with the below command. Creating a Kubernetes namespace allows you to separate resources for a particular project, team, or environment.
kubectl create namespace stateful-deployment
Kubernetes namespace created
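You can optionally confirm that the namespace exists by listing all namespaces in the cluster:

kubectl get namespaces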

Creating a Storage class required for Persistent Volume (PV)

Once you have the Kubernetes namespace created in the Kubernetes cluster, you will need to create storage for storing the website and database data.

In the AWS EKS service, the PersistentVolume (PV) is a piece of storage in the cluster implemented via an EBS volume, which has to be declared or dynamically provisioned using Storage Classes.

  • Let’s begin by creating the storage class that is required for persistent volumes in the Kubernetes cluster. To create the storage class, first create a file gp2-storage-class.yaml and copy/paste the below code.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
  • Now, create the Storage class by running the below command.
kubectl apply -f gp2-storage-class.yaml --namespace=stateful-deployment
Creating the Kubernetes Storage class in the Kubernetes cluster.

In case you receive an error, run the below command to mark the gp2 storage class as the default storage class.

kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' --namespace=stateful-deployment
  • Next, verify all the storage classes that are present in the Kubernetes cluster.
kubectl get storageclasses --all-namespaces
Verifying the Kubernetes Storage class

Creating a persistent volume claim (PVC)

Now that you have created the storage class that persistent volumes will use, create PersistentVolumeClaims (PVCs) so that a stateful app can request a volume by specifying a PVC and mount it in its corresponding Pod.

  • Again, create a file named pvc.yaml and copy/paste the below content. The below code creates two PVCs, one for the MySQL backend service and the other for the WordPress frontend application.
apiVersion: v1
kind: PersistentVolumeClaim
# Creating persistent volume claim (PVC) for MySQL ( backend )
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
# Creating persistent volume claim (PVC) for WordPress ( frontend )
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  • Now execute the apply command to create the persistent volume claims.
kubectl apply -f pvc.yaml --namespace=stateful-deployment
Creating the Persistent Volume claims for the WordPress and MySQL applications
  • Verify the recently created persistent volume claims in the Kubernetes cluster. These PVCs are backed by AWS EBS volumes.
kubectl get pvc --namespace=stateful-deployment
Verify the recently created persistent volume claims in the Kubernetes cluster
  • Also, verify the volumes in the AWS EBS console; you will find the below two volumes.
Verifying the Persistent volume claims in AWS EBS

Creating Kubernetes secrets to store passwords

Up to now, you have created the Kubernetes namespace and persistent volume claims successfully. The MySQL application password, however, should be stored as a Kubernetes secret. So let’s jump in and create the Kubernetes secret that will be used to store the password for the MySQL application.

  • Create a secret that stores the MySQL password (mysql-pw), which will be injected as an environment variable into the container.
kubectl create secret generic mysql-pass --from-literal=password=mysql-pw --namespace=stateful-deployment
Creating Kubernetes secrets to store passwords
  • Next, verify the secrets that were recently created by using the kubectl get command.
kubectl get secrets --namespace=stateful-deployment
Verify the Kubernetes secrets that were recently created by using the kubectl get command
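If you ever need to read the password back for troubleshooting, you can decode it from the secret with the below command (this prints the plain-text password, so use it carefully):

kubectl get secret mysql-pass --namespace=stateful-deployment -o jsonpath='{.data.password}' | base64 --decode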

Creating the Stateful backend deployment in the cluster

A Kubernetes stateful deployment can use either AWS EBS or AWS EFS for persistent storage.

Now that you have the Kubernetes namespace, persistent volume claims, and secrets that the application will consume, let’s get into building the stateful backend deployment.

  • Create a file mysql.yaml for the deployment and copy/paste the below code. apiVersion is the Kubernetes API version used to manage the object. For Deployments/ReplicaSets it is apps/v1, and for Pods and Services it is v1.
apiVersion: v1
# Kind denotes what kind of resource/object Kubernetes will create
kind: Service
# metadata helps uniquely identify the object, including a name string, UID, and optional namespace.
metadata:
  name: wordpress-mysql
# Labels are key/value pairs to specify attributes of objects that are meaningful and relevant to users.
  labels:
    app: wordpress
# spec define what state you desire for the object
spec:
  ports:
    - port: 3306
# The selector field allows the Service to identify which Pods to route traffic to.
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
# Creating the environment variable MYSQL_ROOT_PASSWORD whose value will be taken from the secret
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
# The PVC created earlier is mounted here.
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
# Defining the volumes ( PVC ).
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
  • Now create the MySQL deployment and service by running the below command.
kubectl apply -f mysql.yaml --namespace=stateful-deployment
Creating the Stateful backend deployment in the cluster
  • Further, check the Pods of the MySQL backend deployment by running the below command.
kubectl get pods -o wide --namespace=stateful-deployment
Verifying the Stateful backend deployment in the cluster

In the case of a Deployment with AWS EBS, all the Kubernetes Pods are created on the same AWS EC2 node with the same Persistent Volume (EBS) attached. However, in the case of a StatefulSet with EBS, Kubernetes Pods can be created on different nodes, each with its own EBS volume attached.
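Optionally, you can also confirm that MySQL is accepting connections by opening a shell inside the Pod. The Pod name below is only a placeholder; substitute the actual name returned by the previous kubectl get pods command:

kubectl exec -it wordpress-mysql-xxxxxxxxxx-xxxxx --namespace=stateful-deployment -- mysql -uroot -pmysql-pw -e "SHOW DATABASES;"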

Creating the Stateful Frontend WordPress deployment

Previously, you created the stateful backend MySQL application deployment, which is great, but you will need to create the WordPress frontend application deployment for a complete setup. Let’s get into it now.

  • Create a file wordpress.yaml for the deployment and copy/paste the below code. apiVersion is the Kubernetes API version used to manage the object. For Deployments/ReplicaSets it is apps/v1, and for Pods and Services it is v1.
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1
# Creating the WordPress deployment as stateful where multiple EC2 will have multiple pods with diff EBS
kind: StatefulSet
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  replicas: 1
  serviceName: wordpress-stateful
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
# Below section of volume is valid only for deployments not for statefulset 
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim
# Below section is valid only for a StatefulSet, not for Deployments, as volumes will be created dynamically
  volumeClaimTemplates:
  - metadata:
      name: wordpress-persistant-storage
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: gp2
  • Now create the WordPress deployment and service by running the below command.
kubectl apply -f wordpress.yaml --namespace=stateful-deployment
  • Further, check the Pods of the WordPress deployment by running the below command; once the Pods are running, you can look up the LoadBalancer endpoint as shown after the command.
kubectl get pods -o wide --namespace=stateful-deployment
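Once the WordPress Pod is running, you can fetch the external hostname of the LoadBalancer service and open it in a browser to finish the WordPress setup. The EXTERNAL-IP column shows the DNS name of the AWS load balancer created for the service:

kubectl get service wordpress --namespace=stateful-deployment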

Kubernetes Stateful application using AWS EBS vs Kubernetes Stateful application using AWS EFS

As discussed earlier, AWS EBS volumes are tied to a single Availability Zone, so recreated Pods can only be started in the same Availability Zone as the previous AWS EBS volume.

For example, suppose you have a Pod running on an AWS EC2 instance in Availability Zone (a) with an AWS EBS volume attached in the same zone. If the Pod gets restarted on another AWS EC2 instance in the same zone, it will be able to attach the same AWS EBS volume. However, if the Pod gets restarted on an instance in a different Availability Zone (b), it won’t be able to attach the previous AWS EBS volume; instead, it will require a new AWS EBS volume in Availability Zone (b).

Kubernetes Stateful application using AWS EBS

As discussed, with AWS EBS things are a little complicated because EBS volumes are not shared volumes; they belong to a particular AZ rather than spanning multiple AZs. However, by using AWS EFS (Elastic File System) as a shared volume across multiple AZs and Pods, this is possible.

AWS EFS volumes are mounted as network file systems on multiple AWS EC2 instances regardless of the AZ, work efficiently across multiple AZs, and are highly available.

Kubernetes Stateful application using AWS EFS
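If you want to try the AWS EFS approach, a minimal StorageClass sketch is shown below. It assumes the AWS EFS CSI driver is already installed in the cluster, and fs-12345678 is a placeholder that you would replace with your own EFS file system ID.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  # Placeholder file system ID; replace with your own EFS file system ID
  fileSystemId: fs-12345678
  directoryPerms: "700"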

Conclusion

In this article, you learned how to create permanent storage for your Kubernetes applications and mount it. Also, you learned that there are two ways to mount permanent storage to Kubernetes applications by using AWS EBS and AWS EFS.

Now, which applications do you plan to deploy in the AWS EKS cluster with permanent storage?


The Ultimate Guide on AWS EKS for Beginners [Easiest Way]

In this Ultimate Guide, you will learn, as a beginner, everything you should know about AWS EKS and how to manage your AWS EKS cluster.

Come on! Let’s begin!

Table of Contents

  1. What is AWS EKS ?
  2. Why do you need AWS EKS than Kubernetes?
  3. Installing tools to work with AWS EKS Cluster
  4. Creating AWS EKS using EKSCTL command line tool
  5. Adding one more Node group in the AWS EKS Cluster
  6. Cluster Autoscaler
  7. Creating and Deploying Cluster Autoscaler
  8. Nginx Deployment on the EKS cluster when Autoscaler is enabled.
  9. EKS Cluster Monitoring and Cloud watch Logging
  10. What is Helm?
  11. Creating AWS EKS Cluster Admin user
  12. Creating Read only user for the dedicated namespace
  13. EKS Networking
  14. IAM and RBAC Integration in AWS EKS
  15. Worker nodes join the cluster
  16. How to Scale Up and Down Kubernetes Pods
  17. Conclusion

What is AWS EKS ?

Amazon provides its own service, AWS EKS, where you can host Kubernetes without worrying about infrastructure such as Kubernetes nodes, installation of Kubernetes, etc. It gives you a platform to host Kubernetes.

Some features of Amazon EKS (Elastic Kubernetes Service):

  1. It expands and scales across many availability zones so that there is always high availability.
  2. It automatically scales and fixes any impacted or unhealthy node.
  3. It is integrated with various other AWS services such as IAM, VPC, ECR, and ELB.
  4. It is a very secure service.

How does AWS EKS service work?

  • First step in EKS is to create EKS cluster using AWS CLI or AWS Management console or using eksctl command line tool.
  • Now, next you can have your own machines EC2 where you can deploy applications or deploy to AWS Fargate which manages it for you.
  • Now connect to kubernetes cluster with kubectl or eksctl commands.
  • Finally deploy and run applications on EKS cluster.

Why do you need AWS EKS than Kubernetes?

If you are working with self-managed Kubernetes, you are required to handle all the below things yourself, such as:

  1. Create and Operate K8s clusters.
  2. Deploy Master Nodes
  3. Deploy Etcd
  4. Setup CA for TLS encryption.
  5. Setup Monitoring, AutoScaling and Auto healing.
  6. Setup Worker Nodes.

But with AWS EKS you only need to manage the worker nodes; everything else, such as the master nodes, etcd in high availability, the API server, KubeDNS, the Scheduler, the Controller Manager, and the Cloud Controller, is taken care of by Amazon EKS.

You need to pay 0.20 US dollars per hour for your AWS EKS cluster, which comes to around 144 US dollars per month.

Installing tools to work with AWS EKS Cluster

  1. AWS CLI: Required as a dependency of eksctl to obtain the authentication token. To install the AWS CLI, run the below command.
pip3 install --user awscli
After you install the AWS CLI, make sure to configure the access key and secret access key in the AWS CLI so that it can create the EKS cluster; an example of the interactive configuration is shown below.
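For example, you can configure the credentials interactively; the prompts ask for the access key ID, secret access key, default region, and output format of the IAM user you intend to use:

aws configure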
  2. eksctl: To set up and operate the EKS cluster. To install eksctl, run the below commands. The below command downloads the eksctl binary into the tmp directory.
curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/v0.69.0/eksctl_Linux_amd64.tar.gz" | tar xz -C /tmp
  • Next, move the eksctl binary to the executable directory.
sudo mv /tmp/eksctl /usr/local/bin
  • To check the version of eksctl and see if it is properly installed, run the below command.
eksctl version
  3. kubectl: For interaction with the k8s API server. To install the kubectl tool, run the below first command, which updates the system and installs the https package.
sudo apt-get update && sudo apt-get install -y apt-transport-https
  • Next, run the curl command that will add the gpg key in the system to verify the authentication with the kubernetes site.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  • Next, add the kubernetes repository
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
  • Again update the system so that it takes the effect after addition of new repository.
sudo apt-get update
  • Next install kubectl tool.
sudo apt-get install -y kubectl
  • Next, check the version of the kubectl tool by running below command.
kubectl version --short --client
  4. IAM user and IAM role:
  • Create an IAM user with administrator access and use that IAM user to explore the AWS resources on the console. This is also the user whose credentials will be used in the AWS CLI on the EC2 instance that you will use to manage the AWS EKS cluster.
  • Also, make sure to create an IAM role that you will apply to the EC2 instance from where you will manage AWS EKS and other AWS resources.

Creating AWS EKS using EKSCTL command line tool

Up to now, you installed and set up the tools that are required for creating an AWS EKS cluster. To learn how to create a cluster using the eksctl command, run the help command, which lists the flags that you need to use while creating an AWS EKS cluster.

eksctl create cluster --help 
  1. Let’s begin creating an EKS cluster. To do that, create a file named eks.yaml and copy and paste the below content.
    • apiVersion is the API version of the eksctl config object that will manage the deployment.
    • Kind denotes what kind of resource/object will be created. In the below case, as you need to provision a cluster, you should use ClusterConfig.
    • metadata: Data that helps uniquely identify the cluster, such as the cluster name and region.
    • nodeGroups: Provide the name of the node group and other details required for the node groups that will be used in your EKS cluster.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-course-cluster
  region: us-east-1

nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 3
    ssh: # use existing EC2 key
      publicKeyName: eks-course
  2. Now, execute the command below to create the cluster.
eksctl create cluster -f eks.yaml
  3. Once the cluster is successfully created, run the below command to see the details of the cluster.
eksctl get cluster
  4. Next, verify the AWS EKS cluster on the AWS console.
  5. Also, verify the nodes of the node groups that were created along with the cluster by running the below command.
kubectl get nodes
  6. Also, verify the nodes on the AWS console. To check the nodes, navigate to EC2 instances.
  7. Verify the node groups in the EKS cluster by running the eksctl command.
eksctl get nodegroup --cluster EKS-cluster
  8. Finally, verify the number of Pods in the EKS cluster by running the below kubectl command.
kubectl get pods --all-namespaces

Adding one more Node group in the AWS EKS Cluster

To add another node group in EKS Cluster follow the below steps:

  1. Create a yaml file named node_group.yaml as shown below and copy/paste the below content. In the below file you will notice that the previous node group is also mentioned; otherwise, if you run this file without it, it will override the previous configuration and remove the ng-1 node group from the cluster.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-cluster
  region: us-east-1

nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 3
    ssh: # use existing EC2 key
      publicKeyName: testing
# Adding another node group, nodegroup2, with min/max capacity of 2 and 3 respectively.
  - name: nodegroup2
    minSize: 2
    maxSize: 3
    instancesDistribution:
      maxPrice: 0.2
      instanceTypes: ["t2.small", "t3.small"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
    ssh:
      publicKeyName: testing
  2. Next, run the below command that will create the node group.
eksctl create nodegroup --config-file=node_group.yaml --include='nodegroup2'
  3. If you wish to delete the node group from the EKS cluster, run any one of the below commands.
eksctl delete nodegroup --cluster=EKS-cluster --name=nodegroup2
eksctl delete nodegroup --config-file=eks.yaml --include='nodegroup2' --approve
  • To Scale the node group in EKS Cluster
eksctl scale nodegroup --cluster=name_of_the_cluster --nodes=5 --name=node_grp_2

Cluster Autoscaler

The cluster Autoscaler automatically launches additional worker nodes if more resources are needed, and shutdown worker nodes if they are underutilized. The AutoScaling works within a node group, so you should create a node group with Autoscaler feature enabled.

Cluster Autoscaler has the following features:

  • Cluster Autoscaler is used to scale up and down the nodes within the node group.
  • It runs as a deployment based on CPU and Memory utilization.
  • It can contain on demand and spot instances.
  • There are two types of scaling
    • Multi AZ Scaling: Node group with Multi AZ ( Stateless workload )
    • Single AZ Scaling: Node group with Single AZ ( Stateful workload)

Creating and Deploying Cluster Autoscaler

The main function of the Autoscaler is to dynamically add or remove nodes within the node group on the fly. The Autoscaler works as a deployment and depends on the CPU/Memory requests.

There are two types of scaling available: Multi-AZ vs. Single-AZ (stateful workload), as EBS volumes cannot be spread across multiple availability zones.

To use the Cluster Autoscaler, you can add multiple node groups in the cluster as per your needs. In this example, let’s deploy one on-demand node group in a single AZ and one spot node group across two AZs, with the Autoscaler enabled.

  1. Create a file and name it autoscaler.yaml, then copy/paste the below content.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-cluster
  region: us-east-1

nodeGroups:
  - name: scale-east1c
    instanceType: t2.small
    desiredCapacity: 1
    maxSize: 10
    availabilityZones: ["us-east-1c"]
# iam holds all IAM attributes of a NodeGroup
# enables IAM policy for cluster-autoscaler
    iam:
      withAddonPolicies:
        autoScaler: true
    labels:
      nodegroup-type: stateful-east1c
      instance-type: onDemand
    ssh: # use existing EC2 key
      publicKeyName: eks-ssh-key
  - name: scale-spot
    desiredCapacity: 1
    maxSize: 10
    instancesDistribution:
      instanceTypes: ["t2.small", "t3.small"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 0
    availabilityZones: ["us-east-1c", "us-east-1d"]
    iam:
      withAddonPolicies:
        autoScaler: true
    labels:
      nodegroup-type: stateless-workload
      instance-type: spot
    ssh: 
      publicKeyName: eks-ssh-key

availabilityZones: ["us-east-1c", "us-east-1d"]
  2. Run the below command to add the node groups.
eksctl create nodegroup --config-file=autoscaler.yaml
  3. Verify the node groups by running the below command.
eksctl get nodegroup --cluster=EKS-cluster
  4. Next, to deploy the Cluster Autoscaler, run the below kubectl commands.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
kubectl -n kube-system annotate deployment.apps/cluster-autoscaler cluster-autoscaler.kubernetes.io/safe-to-evict="false"
  5. To edit the deployment and set your AWS EKS cluster name, run the below kubectl command; the relevant part of the container command is shown in the sketch after the command.
kubectl -n kube-system edit deployment.apps/cluster-autoscaler
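In the editor, look for the cluster-autoscaler container command. In the standard autodiscovery manifest it looks roughly like the below (a sketch; the exact flags can differ between versions), and you replace <YOUR CLUSTER NAME> with EKS-cluster:

        command:
          - ./cluster-autoscaler
          - --v=4
          - --stderrthreshold=info
          - --cloud-provider=aws
          - --skip-nodes-with-local-storage=false
          - --expander=least-waste
          - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<YOUR CLUSTER NAME>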
  6. Next, describe the deployment of the Autoscaler by running the below kubectl command.
kubectl -n kube-system describe deployment cluster-autoscaler
  7. Finally, view the Cluster Autoscaler logs by running the kubectl command against the kube-system namespace.
kubectl -n kube-system logs deployment.apps/cluster-autoscaler
  8. Verify the Pods. You should notice below that the first Pod is for node group 1, the second is for node group 2, and the third is the Autoscaler Pod itself.

Nginx Deployment on the EKS cluster when Autoscaler is enabled.

  1. To deploy the nginx application on the EKS cluster that you just created, create a yaml file named nginx-deployment.yaml and copy/paste the below content into it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        service: nginx
        app: nginx
    spec:
      containers:
      - image: nginx
        name: test-autoscaler
        resources:
          limits:
            cpu: 300m
            memory: 512Mi
          requests:
            cpu: 300m
            memory: 512Mi
      nodeSelector:
        instance-type: spot


  2. Now, to apply the nginx deployment, run the below command.
kubectl apply -f nginx-deployment.yaml
  3. After successful deployment, check the number of Pods.
kubectl get pods
  4. Check the number of nodes and the type of node.
kubectl get nodes -l instance-type=spot
  • Scale the deployment to 3 replicas ( that is 3 pods will be scaled)
kubectl scale --replicas=3 deployment/test-autoscaler
  • Checking the logs and filtering the events.
kubectl -n kube-system logs deployment.apps/cluster-autoscaler | grep -A5 "Expanding Node Group"

EKS Cluster Monitoring and Cloud watch Logging

By now, you have already set up the EKS cluster, but it is also important to monitor it. To monitor your cluster, follow the below steps:

  1. Create the below eks.yaml file and copy/paste the below code into the file.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-cluster
  region: us-east-1

nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 3
    ssh: # use existing EC2 key
      publicKeyName: eks-ssh-key
cloudWatch:
  clusterLogging:
    enableTypes: ["api", "audit", "authenticator"] # To select only few log_types
    # enableTypes: ["*"]  # If you need to enable all the log_types
  2. Now apply the cluster logging configuration by running the below command.
eksctl utils update-cluster-logging --config-file eks.yaml --approve 
  3. To disable all the configured log types, run the below command.
eksctl utils update-cluster-logging --name=EKS-cluster --disable-types all

To get container metrics using CloudWatch, first add the IAM policy (CloudWatchAgentServerPolicy) to the role of all your node groups and deploy the CloudWatch agent. After you deploy it, it will have its own namespace (amazon-cloudwatch).

  4. Now run the below command.
curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/master/k8s-yaml-templates/quickstart/cwagent-fluentd-quickstart.yaml | sed "s/{{cluster_name}}/EKS-course-cluster/;s/{{region_name}}/us-east-1/" | kubectl apply -f -
  5. To check everything that has been created in the namespace, run the below command.
kubectl get all -n amazon-cloudwatch

To generate CPU load and test scaling, you can optionally deploy a sample php-apache application and run a temporary busybox Pod against it.

kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --limits=cpu=500m --expose --port=80
kubectl run --generator=run-pod/v1 -it --rm load-generator --image=busybox /bin/sh
Hit enter for the command prompt, then run the below loop.
while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done

What is Helm?

Helm is a package manager similar to what you have in Ubuntu or Python, such as apt or pip. Helm mainly contains three components.

  • Chart: All the dependency files and application files.
  • Config: Any configuration that you would like to deploy.
  • Release: A running instance of a chart.

Helm Components

  • Helm client: Manages repository, Managing releases, Communicates with Helm library.
  • Helm library: It interacts with Kubernetes API server.

Installing Helm

  • To install helm, first create a directory with the below commands and then change into that directory.
mkdir helm && cd helm
  • Next, download and install Helm 3 using the official installation script and check the version.
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
helm version
  • To find all the lists of the repo
helm repo list
  • To Update the repository
helm repo update
  • To check all the charts in the helm repository.
helm search repo
  • To install one of the charts. After running the below command then make sure to check the number of Pods running by using kubectl get pods command.
helm install name_of_the_chart stable/redis
  • To check the deployed charts
helm ls
  • To uninstall helm deployments.
helm uninstall <<name-of-release-from-previous-output>>

Creating AWS EKS Cluster Admin user

To manage all resources in the EKS cluster, you need dedicated users (either Admin or Read-only) to perform tasks accordingly. Let’s begin by creating an admin user first.

  1. Create IAM user in AWS console (k8s-cluster-admin) and store the access key and secret key for this user locally on your machine.
  2. Next, add the user to the mapUsers section of the aws-auth ConfigMap. But before you add a user, let’s find all the ConfigMaps in the kube-system namespace, because the users are stored in the aws-auth ConfigMap.
kubectl -n kube-system get cm
  3. Save the ConfigMap to a yaml formatted file.
kubectl -n kube-system get cm aws-auth -o yaml > aws-auth-configmap.yaml
  4. Next, edit aws-auth-configmap.yaml and add a mapUsers section with the following information (a sketch of this section is shown after the apply step below):
    • userarn
    • username
    • groups as ( system:masters ), which has admin/all permissions, basically a role
  5. Run the below command to apply the changes for the newly added user.
kubectl apply -f aws-auth-configmap.yaml -n kube-system
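A minimal sketch of what the mapUsers entry might look like inside the data section of the aws-auth ConfigMap; the account ID 111122223333 and the user name are placeholders for the IAM user you created:

  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/k8s-cluster-admin
      username: k8s-cluster-admin
      groups:
        - system:masters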

After you apply the changes, you will no longer see warnings in the AWS EKS console such as Kubernetes objects cannot be accessed.

  6. Now check if the user has been properly added by running the describe command.
kubectl -n kube-system describe cm aws-auth
  7. Next, add the user to the AWS credentials file in a dedicated section (profile) and then export it using the export command, or store it in the AWS CLI configuration.
export AWS_PROFILE="profile_name"
  8. Finally, check which user is currently running the AWS CLI commands.
aws sts get-caller-identity

Creating a read only user for the dedicated namespace

Similarly, now create a read-only user for the AWS EKS service. Let’s follow the below steps to create a read-only user and map it in the ConfigMap with IAM.

  1. Create a namespace using the below command.
kubectl create namespace production
  2. Create an IAM user on the AWS Console.
  3. Create a file rolebinding.yaml and add both the Role and the RoleBinding, which include the permissions that the Kubernetes user will have.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: production
  name: prod-viewer-role
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]  # can be further limited, e.g. ["deployments", "replicasets", "pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: prod-viewer-binding
  namespace: production
subjects:
- kind: User
  name: prod-viewer
  apiGroup: ""
roleRef:
  kind: Role
  name: prod-viewer-role
  apiGroup: ""
  4. Now apply the Role and RoleBinding using the below command.
kubectl apply -f rolebinding.yaml
  5. Next, edit the aws-auth ConfigMap and apply the changes such as userarn, username, and groups, as you did previously.
kubectl -n kube-system get cm aws-auth -o yaml > aws-auth-configmap.yaml
kubectl apply -f aws-auth-configmap.yaml -n kube-system
  6. Finally, test the user and the setup.

EKS Networking

  • The Amazon VPC CNI plugin assigns each Pod an IP address that is linked to an ENI.
  • Pods have the same IP address inside and outside the EKS cluster within the VPC.
  • Make sure to maximize the number of usable IP addresses by using a /18 CIDR, which has more IP addresses.
  • Each EC2 instance supports a limited number of ENIs/IP addresses, which means each EC2 instance can run a limited number of Pods (for example, around 36, depending on the instance type).

IAM and RBAC Integration in AWS EKS

  • Authentication is done by IAM
  • Authorization is done by kubernetes RBAC
  • You can assign RBAC directly to IAM entities.

kubectl (user sends AWS identity) >>> connects to EKS >>> EKS verifies the AWS identity (by authorizing the AWS identity with Kubernetes RBAC)
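Under the hood, kubectl sends a token generated from your AWS identity. You can see the same kind of token by running the below AWS CLI command (the cluster name here is just the example cluster from this guide):

aws eks get-token --cluster-name EKS-cluster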

Worker nodes join the cluster

  1. When you create a worker node, assign it an IAM Role; that IAM Role needs to be authorized in RBAC in order to join the cluster. Add the system:bootstrappers and system:nodes groups to your aws-auth ConfigMap, with the value for rolearn set to the NodeInstanceRole, and then run the below command.
kubectl apply -f aws-auth.yaml
  2. Check the current state of the cluster services and nodes.
kubectl get svc,nodes -o wide

How to Scale Up and Down Kubernetes Pods

There are three ways of scaling Kubernetes Pods up or down. Let’s look at all three.

  1. Scale the deployment to 3 replicas (that is, 3 Pods will be scaled) using the kubectl scale command.
kubectl scale --replicas=3 deployment/nginx-deployment
  2. Next, update the yaml file with 3 replicas and run the below kubectl apply command (let’s say you have an abc.yaml file).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        service: nginx
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx 
        resources:
          limits:
            cpu: 300m
            memory: 512Mi
          requests:
            cpu: 300m
            memory: 512Mi
      nodeSelector:
        instance-type: spot
kubectl apply -f abc.yaml
  3. You can also scale the Pods using the Kubernetes Dashboard.
  4. Apply the manifest file that you created earlier by running the below command.
kubectl apply -f nginx.yaml
  5. Next, verify if the deployment has been done successfully.
kubectl get deployment --all-namespaces

Conclusion

In this tutorial, you learned everything about AWS EKS from beginner to advanced level.

Now that you have a strong understanding of AWS EKS, which applications do you plan to manage on it?

How to Create an AWS EKS Cluster Using Terraform and Connect the Kubernetes Cluster with an Ubuntu Machine

If you work with container orchestration tools like Kubernetes and want to shift towards the Cloud infrastructure, consider using AWS EKS to automate containerized applications’ deployment, scaling, and management.

The AWS EKS service allows you to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane, nodes, or containerized applications.

This tutorial will teach you what AWS EKS is and how to create an AWS EKS cluster using Terraform and connect the Kubernetes cluster with the Ubuntu machine.


Table of Contents

  1. What is Amazon Kubernetes Service (AWS EKS) ?
  2. AWS EKS Working
  3. Prerequisites
  4. Terraform files and Terraform directory structure
  5. Building Terraform Configuration files to Create AWS EKS Cluster
  6. Connecting to AWS EKS cluster or kubernetes cluster
  7. Conclusion

What is Amazon Kubernetes Service (AWS EKS) ?

Amazon Kubernetes Service (AWS EKS) allows you to host Kubernetes without worrying about infrastructure components such as Kubernetes nodes, installation of Kubernetes, etc. Some features of Amazon EKS are:

  • The AWS EKS service expands and scales across many availability zones so that there is always high availability.
  • The AWS EKS service automatically scales and fixes any impacted or unhealthy node.
  • The AWS EKS service is integrated with various other AWS services such as IAM, VPC, ECR, and ELB.
  • The AWS EKS service is a secure service.

AWS EKS Working

Now that you have a basic understanding of AWS EKS, it is important to know how it works.

  • First step in AWS EKS service is to create AWS EKS cluster using AWS CLI or AWS Management console.
  • While creating the AWS EKS cluster, you have two options: either choose your own AWS EC2 instances or instances managed by AWS EKS, i.e., AWS Fargate.
  • Once the AWS EKS cluster is successfully created, connect to the Kubernetes cluster with kubectl commands.
  • Finally deploy and run applications on EKS cluster.
AWS EKS Working

Prerequisites

  • An Ubuntu machine to run the terraform commands. If you don’t have an Ubuntu machine, you can create an AWS EC2 instance on your AWS account with 4 GB RAM and at least 5 GB of drive space.
  • Terraform installed on the Ubuntu machine. If you don’t have Terraform installed, refer to Terraform on Windows Machine / Terraform on Ubuntu Machine.
  • The Ubuntu machine should have an IAM role attached with AWS EKS full permissions or admin rights.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

Terraform files and Terraform directory structure

Now that you know what the AWS EKS service is, let’s dive into the Terraform files and the Terraform directory structure that will help you write the Terraform configuration files later in this tutorial.

Terraform code, that is, Terraform configuration files, are written in a tree-like structure to ease the overall understanding of code with .tf format or .tf.json or .tfvars format. These configuration files are placed inside the Terraform modules.

Terraform modules are at the top level of the hierarchy where configuration files reside. Terraform modules can further call other child Terraform modules from local directories, anywhere on disk, or the Terraform Registry.

Terraform mainly contains five files: main.tf, vars.tf, providers.tf, output.tf, and terraform.tfvars.

  1. main.tf – The Terraform main.tf file contains the main code where you define which resources you need to build, update, or manage.
  2. vars.tf – The Terraform vars.tf file contains the input variables, which are customizable and referenced inside the main.tf configuration file.
  3. output.tf – The Terraform output.tf file is where you declare which output parameters you wish to fetch after Terraform has been executed, that is, after the terraform apply command.
  4. .terraform – This directory contains cached provider and module plugins and also holds the last known backend configuration. It is managed by Terraform and created after you run the terraform init command.
  5. terraform.tfvars – This file contains the values that need to be passed for the variables that are referenced in main.tf and actually declared in vars.tf.
  6. providers.tf – The providers.tf file is the most important file, where you define your Terraform providers, such as the Terraform AWS provider or the Terraform Azure provider, to authenticate with the cloud provider.

Building Terraform Configuration files to Create AWS EKS Cluster

Now that you know what Terraform configuration files look like and how to declare each of them, let’s learn how to build the Terraform configuration files to create the AWS EKS cluster on the AWS account before running the Terraform commands. Let’s get into it.

  • Log in to the Ubuntu machine using your favorite SSH client.
  • Create a folder in the opt directory named terraform-eks-demo and switch to that folder.
mkdir /opt/terraform-eks-demo
cd /opt/terraform-eks-demo
  • Create a file named main.tf inside the /opt/terraform-eks-demo directory and copy/paste the below content. The below file creates the following components:
    • The IAM role that can be assumed while connecting to the Kubernetes cluster.
    • The security group and nodes for AWS EKS.
    • The AWS EKS cluster and node groups.
# Creating IAM role so that it can be assumed while connecting to the Kubernetes cluster.

resource "aws_iam_role" "iam-role-eks-cluster" {
  name = "terraform-eks-cluster"
  assume_role_policy = <<POLICY
{
 "Version": "2012-10-17",
 "Statement": [
   {
   "Effect": "Allow",
   "Principal": {
    "Service": "eks.amazonaws.com"
   },
   "Action": "sts:AssumeRole"
   }
  ]
 }
POLICY
}

# Attach the AWS EKS service and AWS EKS cluster policies to the role.

resource "aws_iam_role_policy_attachment" "eks-cluster-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = "${aws_iam_role.iam-role-eks-cluster.name}"
}

resource "aws_iam_role_policy_attachment" "eks-cluster-AmazonEKSServicePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
  role       = "${aws_iam_role.iam-role-eks-cluster.name}"
}

# Create security group for AWS EKS.

resource "aws_security_group" "eks-cluster" {
  name        = "SG-eks-cluster"
# Use your VPC here
  vpc_id      = "vpc-XXXXXXXXXXX"  
 # Outbound Rule
  egress {                
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  # Inbound Rule
  ingress {                
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Creating the AWS EKS cluster

resource "aws_eks_cluster" "eks_cluster" {
  name     = "terraformEKScluster"
  role_arn =  "${aws_iam_role.iam-role-eks-cluster.arn}"
  version  = "1.19"
 # Configure EKS with vpc and network settings 
  vpc_config {            
   security_group_ids = ["${aws_security_group.eks-cluster.id}"]
# Configure subnets below
   subnet_ids         = ["subnet-XXXXX","subnet-XXXXX"] 
    }
  depends_on = [
    "aws_iam_role_policy_attachment.eks-cluster-AmazonEKSClusterPolicy",
    "aws_iam_role_policy_attachment.eks-cluster-AmazonEKSServicePolicy",
   ]
}

# Creating the IAM role for the AWS EKS nodes with an assume role policy so that EC2 instances can assume it

resource "aws_iam_role" "eks_nodes" {
  name = "eks-node-group"
  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_nodes.name
}

resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_nodes.name
}

resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_nodes.name
}

# Create AWS EKS cluster node group

resource "aws_eks_node_group" "node" {
  cluster_name    = aws_eks_cluster.eks_cluster.name
  node_group_name = "node_tuto"
  node_role_arn   = aws_iam_role.eks_nodes.arn
  subnet_ids      = ["subnet-","subnet-"]
  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }

  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
  ]
}
  • Create one more file named provider.tf inside the /opt/terraform-eks-demo directory and copy/paste the below content. The provider.tf file allows Terraform to connect to the AWS cloud.
provider "aws" {
  region = "us-east-2"
}
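Optionally, you can also pin the Terraform AWS provider version so that runs are reproducible. A minimal sketch that you could add to provider.tf (the version constraint below is only an example):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # Example version constraint; adjust to the version you have tested
      version = "~> 3.0"
    }
  }
}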
  • Now the folder structure of all the files should look like below.
The folder structure of all the files in the /opt/terraform-eks-demo directory
  • Now your files and code are ready for execution. Initialize Terraform using the terraform init command.
terraform init
Initialize the terraform using the terraform init command.
Successful execution of the terraform init command.
  • Terraform initialized successfully; now it’s time to run the plan command, which provides the details of the deployment. Run the terraform plan command to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
Running the terraform plan command
The output of the terraform plan command
  • After verification, it’s now time to actually deploy the code using the terraform apply command.
terraform apply
Terraform apply command execution

The Terraform commands terraform init → terraform plan → terraform apply all executed successfully. But it is important to manually verify the AWS EKS cluster in the AWS Management Console.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘EKS’, and click on the EKS menu item. Generally EKS cluster take few minutes to launch.
IAM Role with proper permissions.

  • Now verify Amazon EKS cluster
Verifying the AWS EKS cluster
  • Finally verify the node group of the cluster.
Verify the node group of the cluster.

Connecting to AWS EKS cluster or kubernetes cluster

Now you have a newly created AWS EKS cluster in the AWS EKS service with proper IAM role permissions and configuration; let’s learn how to connect to the AWS EKS cluster from your Ubuntu machine.

  • Configure AWS credentials on Ubuntu machine using AWS CLI.

Make sure the AWS credentials match the IAM user or IAM role that created the cluster, i.e., use the same IAM credentials on the Ubuntu machine that you used to create the Kubernetes cluster.

  • To connect to the AWS EKS cluster, you will need the AWS CLI and kubectl installed on the Ubuntu machine. If you don’t have them, refer to the installation steps covered earlier.
  • On the Ubuntu machine, configure kubeconfig using the below command to enable communication from your local machine to the Kubernetes cluster in AWS EKS.
aws eks update-kubeconfig --region us-east-2 --name terraformEKScluster
Configure kubeconfig on the Ubuntu machine
  • Once the configuration is added, test the communication between the local machine and the AWS EKS cluster using the kubectl get svc command. As you can see below, you get the service details within the cluster, which confirms the connectivity from the Ubuntu machine to the Kubernetes cluster.
kubectl get svc
Verify the Kubernetes service to test the connectivity from the Ubuntu machine to the EKS cluster


Conclusion

In this tutorial, you learned what the AWS Elastic Kubernetes Service is and how to create a Kubernetes cluster using Terraform, followed by connecting to the Kubernetes cluster using the kubectl client from the Ubuntu machine.

Now that you have the AWS EKS cluster created, which applications do you plan to deploy on it?

Kubernetes in Cloud: Getting Started with Amazon EKS or AWS EKS

Kubernetes is a scalable open-source tool that manages container orchestration extremely effectively, but does Kubernetes work in the Cloud as well? Yes, it does, with the most widely used service, AWS EKS, which stands for Amazon Elastic Kubernetes Service.

Yes, you can manage Kubernetes in public clouds, such as GCP, AWS, etc to deploy and scale containerized applications.

In this tutorial, you will learn the basics of Kubernetes, Amazon EKS, or AWS EKS.


Table of Contents

  1. What is Kubernetes?
  2. kubernetes architecture and kubernetes components
  3. What is AWS EKS (Amazon EKS) ?
  4. How does AWS EKS service work?
  5. Prerequisites
  6. AWS EKS Clusters components
  7. AWS EKS Control Plane
  8. Workload nodes
  9. How to create aws eks cluster in AWS EKS
  10. AWS EKS cluster setup: Additional nodes on AWS EKS cluster
  11. Connecting AWS EKS Cluster using aws eks update kubeconfig
  12. How to Install Kubectl on Windows machines
  13. Install Kubectl on Ubuntu machine
  14. Conclusion

What is Kubernetes?

Kubernetes is an open-source container orchestration engine for automating deployments, scaling, and managing the container’s applications. Kubernetes is an open-source Google-based tool. It is also known as k8s. It can run on any platform, such as on-premises, hybrid, or public cloud. Some of the features of Kubernetes are:

  • kubernetes cluster scales when needed and is load balanced.
  • kubernetes cluster has the capability to self-heal and automatically provide rollbacks.
  • kubernetes allows you to store configurations, secrets, or passwords.
  • Kubernetes can be mounted with various stores such as EFS and local storage.
  • Kubernetes works well with networking components such as NFS, locker, etc.

kubernetes architecture and kubernetes components

When you Install Kubernetes, you create a Kubernetes cluster that mainly contains two components master or the controller nodes and worker nodes. Nodes are the machines that contain their own Linux environment, which could be a virtual machine or either physical machine.

The application and services are deployed in the containers within the Pods inside the worker nodes. Pods contain one or more docker containers. When a Pod runs multiple containers, all the containers are considered a single entity and share the Node resources.

Bird-eye view of Kubernetes cluster
  • Pod: Pods are groups of containers that have shared storage and network.
  • Service: Services are used when you want to expose the application outside of your local environment.
  • Ingress: Ingress helps in exposing http/https routes from the outside world to the services in your cluster.
  • ConfigMap: Pod consumes configmap as environmental values or command-line arguments in the configuration file.
  • Secrets: Secrets as the name suggest stores sensitive information such as password, OAuth tokens, SSH keys, etc.
  • Volumes: These are persistent storage for containers.
  • Deployment: Deployment is an additional layer that helps to define how Pod and containers should be created using yaml files.
Kubernetes components

What is AWS EKS (Amazon EKS) ?

Amazon provides an AWS managed service AWS EKS that allows hosting Kubernetes without needing you to install, operate, and maintain Kubernetes control plane or nodes, services, etc. Some of the features of AWS EKS are:

  • AWS EKS expands and scales the Kubernetes control plane across many availability zones so that there is always high availability.
  • It automatically scales and fixes control plane instances if any instance is impacted or unhealthy.
  • It is integrated with various other AWS services, such as IAM for authentication, VPC for isolation, ECR for container images, and ELB for load distribution.
  • It is a very secure service.

How does AWS EKS service work?

Previously you learned what AWS EKS is; now, let’s learn how AWS EKS works. The first step in AWS EKS is to create an EKS cluster using the AWS CLI or the AWS Management Console, specifying whether you want self-managed AWS EC2 instances or to deploy workloads to AWS Fargate, which manages everything automatically.

Further, once the Kubernetes cluster is set up, connect to the cluster using kubectl commands and deploy applications.

AWS EKS cluster using EC2 or AWS Fargate

Prerequisites

  • You must have an AWS account in order to set up a cluster in AWS EKS, with admin rights on AWS EKS and IAM. If you don’t have an AWS account, please create one first.
  • AWS CLI installed. If you don’t have it already, install it.
  • An Ubuntu 16 or later machine.
  • A Windows 7 or later machine.

AWS EKS Clusters components

Now that you have a basic idea of the AWS EKS cluster, it is important to know the components of AWS EKS Clusters. Let’s discuss each of them now.

AWS EKS Control Plane

The AWS EKS control plane is not shared between AWS accounts or other EKS clusters. The control plane contains at least two API servers exposed via the Amazon EKS endpoint and three etcd instances associated with Amazon EBS volumes.

Amazon EKS automatically monitors the load on the control plane and removes unhealthy instances when needed. Amazon EKS uses Amazon VPC network policies to restrict traffic between control plane components within a single cluster.

AWS EKS nodes

Amazon EKS nodes are registered with the control plane via the API server endpoint and a certificate file created for your cluster. Your Amazon EKS cluster can schedule pods on AWS EKS nodes which may be self-managed, Amazon EKS Managed node groups, or AWS Fargate.

Self-managed nodes

Self-managed nodes are Windows and Linux machines that are managed by you. The nodes contain pods that share kernel runtime environments. Also, if the pod requires more resources than requested, then additional resources are aligned by you, such as memory or CPU, and you assign IP addresses from a different CIDR block than the IP address assigned to the node.

Amazon EKS Managed node groups

Previously you learned about self-managed nodes managed by you but in the case of AWS EKS managed node groups, you don’t need to provision or register Amazon EC2 instances. All the managed nodes are part of the Amazon EC2 auto-scaling group.

AWS takes care of everything starting from managing nodes, scaling, and aligning the resources such as IP address, CPU, memory. Although everything is managed by AWS still, you are allowed to SSH into the nodes. Like self-managed nodes, the nodes containing the pods share the same kernel.

You can add a managed node group to new or existing clusters using the Amazon EKS console, eksctl, AWS CLI, AWS API, or AWS Cloud Formation. Amazon EKS managed node groups can be launched in public and private subnets. You can create multiple managed node groups within a single cluster.

AWS Fargate

AWS Fargate is a serverless technology that you can use with Amazon ECS to run containers without managing servers or clusters of Amazon EC2 instances. With Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. But with AWS Fargate, the pod has a dedicated kernel. As there are no nodes, you cannot SSH into the node.

Kubernetes cluster architecture
Kubernetes cluster architecture

Workload nodes

A workload is an application running on a Kubernetes cluster. Every workload controls Pods. There are five types of workloads on a cluster.

  • Deployment: Ensures that a specific number of pods run and includes logic to deploy changes. Deployments can be rolled back and stopped.
  • ReplicaSet: Ensures that a specific number of pods run. Can be controlled by deployments. Replicasets cannot be rolled back and stopped.
  • StatefulSet: Manages the deployment of stateful applications where you need persistent storage.
  • DaemonSet: Ensures that a copy of a Pod runs on all (or some) nodes in the cluster.
  • Job: Creates one or more Pods and ensures that a specified number of them run to completion.

By default, Amazon EKS clusters run three workloads, which you can verify with the kubectl commands shown after this list:

  • coredns: Provides name resolution for all pods in the cluster.
  • aws-node: Provides VPC networking functionality to the pods and nodes in your cluster.
  • kube-proxy: Manages network rules on nodes that enable network communication to your pods.
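As a quick check (assuming your kubeconfig already points at the cluster), coredns runs as a Deployment while aws-node and kube-proxy run as DaemonSets, all in the kube-system namespace:

kubectl get deployment coredns -n kube-system      # the cluster DNS add-on
kubectl get daemonset aws-node kube-proxy -n kube-system   # VPC networking and kube-proxy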

How to create an AWS EKS cluster

Now that you have an idea of the AWS EKS cluster and its components, let's learn how to create an AWS EKS cluster and set up Amazon EKS using the AWS Management Console and AWS CLI commands.

  • Make a note of the VPC that you want to use for the AWS EKS cluster (you can also list your VPCs with the CLI command shown below the screenshot).
Choosing the correct AWS VPC
Choosing the correct AWS VPC
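If you prefer the CLI, a minimal sketch to list the VPCs in your region (assuming your AWS CLI credentials and default region are already configured):

aws ec2 describe-vpcs --query 'Vpcs[].{VpcId:VpcId,CidrBlock:CidrBlock}' --output table   # shows VPC IDs and CIDR blocks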
  • Next, on the IAM page, create an IAM policy with full EKS permissions.
Creating an IAM Policy
Creating an IAM Policy
  • Click on Create policy and then choose EKS as the service.
Choosing the configuration on IAM Policy
Choosing the configuration on IAM Policy
  • Now provide a name for the policy and click Create.
Reviewing the details and creating the IAM Policy
Reviewing the details and creating the IAM Policy
IAM Policy created successfully
IAM Policy created successfully
  • Next, navigate to IAM roles and create a role.
Choosing the Create role button
Choosing the Create role button
  • Now, for the role, choose the AWS EKS service and then select EKS Cluster as your use case:
Configure the IAM role
Configure the IAM role
Selecting the use case in IAM role
Selecting the use case in IAM role
  • Further, specify a name for the role and then click on Create role.
Creating the IAM role
Creating the IAM role
  • Now attach the IAM policy that you created previously, along with the AWS managed AmazonEKSClusterPolicy, to the IAM role.
Attaching the IAM policy on IAM role
Attaching the IAM policy on the IAM role
Adding permission on the IAM role
Adding permission on the IAM role
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Adding the Trusted entities
Adding the Trusted entities
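Equivalently, you could create and configure this role with the AWS CLI. A sketch, assuming the trust policy above is saved locally as eks-trust-policy.json and using a hypothetical role name:

aws iam create-role --role-name myAmazonEKSClusterRole --assume-role-policy-document file://eks-trust-policy.json   # role name is a placeholder
aws iam attach-role-policy --role-name myAmazonEKSClusterRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy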

Now that you have the IAM role created for the AWS EKS cluster and the IAM policies attached, let's dive into the creation of the Kubernetes cluster.

  • Now navigate to the AWS EKS console and click on Create cluster.
creating AWS EKS Cluster
creating AWS EKS Cluster
  • Next, add all the configuration related to the cluster as shown below.
Configure AWS EKS Cluster
Configure AWS EKS Cluster
  • Further, provide networking details such as the VPC, subnets, etc. You may skip the subnets for now.
Configure network settings of AWS EKS Cluster
Configure network settings of AWS EKS Cluster
  • Keep hitting Next and finally click on Create cluster. It may take a few minutes for the cluster to come up.
AWS EKS Cluster creation is in progress
AWS EKS Cluster creation is in progress
  • Let's verify that the cluster is up and active, as you can see below.
Verifying the AWS EKS Cluster
Verifying the AWS EKS Cluster
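If you would rather script the cluster creation, a roughly equivalent AWS CLI command looks like the sketch below; the role ARN, subnet IDs, and Kubernetes version are hypothetical placeholders you must replace with your own:

aws eks create-cluster --region us-east-2 --name Myekscluster \
  --kubernetes-version 1.19 \                                              # placeholder version
  --role-arn arn:aws:iam::111122223333:role/myAmazonEKSClusterRole \       # placeholder role ARN
  --resources-vpc-config subnetIds=subnet-0aaa1111,subnet-0bbb2222         # placeholder subnet IDs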

The Kubernetes cluster on AWS EKS is now successfully created. Next, let's set up communication from your local machine to the Kubernetes cluster.

AWS EKS cluster setup: Additional nodes on AWS EKS cluster

As discussed previously, the Amazon EKS cluster can schedule pods on any combination of self-managed nodes, Amazon EKS managed nodes, and AWS Fargate. In this section, let's learn how to add additional nodes using an Amazon EKS managed node group.

To create a managed node group using the AWS Management Console:

  • Navigate to the Amazon EKS page ➔ Configuration tab ➔ Compute tab ➔ Add Node Group and provide all the details, such as the name and the node IAM role that you created previously.
Checking the AWS EKS Node groups
Checking the AWS EKS Node groups

Further, specify the instance type, capacity type, and networking details such as the VPC, subnets, and SSH key, and click Create. As you can see below, the nodes are added successfully by creating a new node group.

Verifying the new nodes in the AWS EKS Node groups
Verifying the new nodes in the AWS EKS Node groups
  • To find node details from your machine, run the below commands.
aws eks update-kubeconfig --region us-east-2 --name "YOUR_CLUSTER_NAME"
kubectl get nodes --watch
AWS EKS nodes details
AWS EKS nodes details

To create Fargate (Linux) nodes, you need to create a Fargate profile: when any pod is deployed on Fargate, it is first matched against the profile's configuration before it is scheduled. The configuration contains permissions such as the pod's ability to pull the container image from Amazon ECR. To create a Fargate profile, click here.
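As a rough sketch, a Fargate profile can also be created with the AWS CLI; the profile name, pod execution role ARN, subnet ID, and namespace selector below are hypothetical placeholders:

aws eks create-fargate-profile --cluster-name Myekscluster \
  --fargate-profile-name my-fargate-profile \                                       # placeholder profile name
  --pod-execution-role-arn arn:aws:iam::111122223333:role/myFargatePodExecutionRole \  # placeholder role ARN
  --subnets subnet-0aaa1111 \                                                        # placeholder private subnet
  --selectors namespace=default                                                      # pods in this namespace run on Fargate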

Connecting to the AWS EKS Cluster using aws eks update-kubeconfig

You have created and set up the AWS EKS cluster successfully and learned how to add additional nodes to it, which is great. But do you know how to connect to the AWS EKS cluster from your local machine? Let's learn how to connect to the AWS EKS cluster using aws eks update-kubeconfig.

Make sure to configure the AWS credentials on your local machine to match the same IAM user or IAM role that you used while creating the AWS EKS cluster.
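You can confirm which identity your AWS CLI is currently using with a quick sanity check:

aws sts get-caller-identity   # shows the account, user ID, and ARN of the configured IAM identity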

  • Open Visual Studio Code, Git Bash, or a command prompt.
  • Now, update the kubeconfig to enable communication from your local machine to the Kubernetes cluster in AWS EKS.
aws eks update-kubeconfig --region us-east-2 --name Myekscluster
aws eks update-kubeconfig command
aws eks update-kubeconfig command
  • Finally, test the communication between the local machine and the cluster after adding the configuration. Great, you can see the connectivity from your local machine to the Kubernetes cluster!
kubectl get svc
Verifying the connectivity from local machine to AWS EKS cluster
Verifying the connectivity from local machine to AWS EKS cluster
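Optionally, you can also confirm the API server endpoint that kubectl is talking to:

kubectl cluster-info   # prints the control plane and CoreDNS endpoints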

How to Install Kubectl on Windows machines

Now that you have a basic idea of the AWS EKS cluster, note that it is also managed with the kubectl tool. Although you can manage the AWS EKS cluster manually with the AWS Management Console, running kubectl is easy and straightforward. Let's dive into how to install kubectl on a Windows machine.

  • Open PowerShell on your Windows machine and run the below curl command from any folder of your choice. The command downloads the kubectl binary onto the Windows machine.
curl -o kubectl.exe https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/windows/amd64/kubectl.exe
  • Now verify in the C drive that the binary file has been downloaded successfully.
Downloading the kubectl binary
Downloading the kubectl binary
  • Next, run the kubectl binary file, i.e., kubectl.exe.
Running kubectl binary
Running kubectl binary
  • Verify that kubectl is properly installed by running the kubectl version command.
kubectl version --short --client
Verifying the kubectl version
Verifying the kubectl version

Install Kubectl on Ubuntu machine

Previously you learned how to install kubectl on a Windows machine, but let's quickly check out how to install kubectl on an Ubuntu machine.

  • Log in to the Ubuntu machine using an SSH client.
  • Download the kubectl binary using the curl command on the Ubuntu machine under the home directory, i.e., $HOME.
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
Installing Kubectl on Ubuntu machine
Installing Kubectl on Ubuntu machine
  • Next, after downloading kubectl, grant execute permissions to the binary so it can run.
chmod +x ./kubectl
  • Copy the binary to a folder in your PATH so that the kubectl command can run from anywhere on your machine (see the note after these steps to make the change persistent).
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
  • Verify the kubectl version on the Ubuntu machine again by running kubectl version.
kubectl version --short --client
Kubectl version on Ubuntu machine
Kubectl version on Ubuntu machine
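Optionally, to make the PATH change from the copy step above persist across shell sessions, you can append it to your shell profile. A minimal sketch, assuming a bash shell:

echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc   # reload with: source ~/.bashrc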

Conclusion

In this tutorial, you learned about Kubernetes and Amazon Elastic Kubernetes Service, i.e., AWS EKS, how to install the Kubernetes client kubectl on Windows and Linux machines, and finally how to create an AWS EKS cluster and connect to it using the kubectl client.

Now that you have a newly launched AWS EKS cluster set up, what do you plan to deploy on it?