Kubernetes microservice architecture with a Kubernetes deployment example

In this article, we will walk through a Kubernetes microservice architecture with a complete Kubernetes deployment example.

Table of Content

  1. Prerequisites
  2. Kubernetes microservice architecture
  3. Docker run commands to deploy the microservices
  4. Preparing the Kubernetes deployment YAML file for the Voting App along with environment variables
  5. Preparing the Kubernetes deployment YAML file for the Redis app along with environment variables
  6. Preparing the Kubernetes deployment YAML file for the Postgres app along with environment variables
  7. Preparing the Kubernetes deployment YAML file for the Worker App along with environment variables
  8. Preparing the Kubernetes deployment YAML file for the Result App along with environment variables
  9. Creating the Kubernetes NodePort service YAML file
  10. Creating the Kubernetes ClusterIP service YAML file
  11. Running the Kubernetes services and deployments
  12. Conclusion

Prerequisites

This is a step-by-step tutorial; you will need:

  • An Ubuntu or other Linux machine with a running Kubernetes cluster, or minikube.
  • The kubectl command-line tool installed.

Kubernetes microservice architecture

In the below Kubernetes microservice architecture you will see a voting application: you cast a vote, and the result is displayed based on the votes collected. It consists of the following components:

  • Voting app: a Python-based UI where you cast your vote.
  • In-memory store: a Redis instance that holds your vote in memory.
  • Worker app: a .NET-based app that moves the in-memory data from Redis into the Postgres database.
  • Postgres DB app: a PostgreSQL database that collects the data and stores it persistently.
  • Result app: a UI app that fetches the data from the database and displays the vote counts to users.

Docker run commands to deploy the microservices

We will start this tutorial by showing the docker run commands you would use if you ran all of these applications directly in Docker instead of Kubernetes.

docker run -d --name=redis redis

docker run -d --name=db postgres:9.4

docker run -d --name=vote -p 5000:80 --link redis:redis voting-app

docker run -d --name=result -p 5001:80 --link db:db  result-app

docker run -d --name=worker  --link redis:redis --link db:db worker
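If you try these commands locally, a quick sanity check (assuming Docker is installed and running) is to list the containers and confirm all five are up:

docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"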

Preparing the Kubernetes deployment YAML file for the Voting App along with environment variables

Since this tutorial deploys all of the applications on Kubernetes, we will first prepare all of the YAML files and then, at the end of the tutorial, deploy them using kubectl.

In the below deployment file we are creating the voting app; it manages the pods whose labels match name: voting-app-pod and app: demo-voting-app.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting-app-deploy
  labels:
    name: voting-app-deploy
    app: demo-voting-app
spec:
  replicas: 3
  selector:
    matchLabels:
      name: voting-app-pod
      app: demo-voting-app  
  template:
    metadata:
      name: voting-app-pod
      labels:
        name: voting-app-pod
        app: demo-voting-app
    spec:
      containers:
        - name: voting-app        
          image: kodekloud/examplevotingapp_voting:v1
          resources:
            limits:
              memory: "4Gi"
              cpu: "1"
            requests:
              memory: "2Gi" 
              cpu: "2"         
          ports:
            - containerPort: 80

Preparing the Kubernetes deployment YAML file for the Redis app along with environment variables

In the below deployment file we are creating the Redis app; it manages the pods whose labels match name: redis-pod and app: demo-voting-app.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deploy
  labels:
    name: redis-deploy
    app: demo-voting-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: redis-pod
      app: demo-voting-app
  template:
    metadata:
      name: redis-pod
      labels:
        name: redis-pod
        app: demo-voting-app
    spec:
      containers:
        - name: redis
          image: redis
          resources:
            limits:
              memory: "4Gi"
              cpu: "1"
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 6379

Preparing the Kubernetes deployment YAML file for the Postgres app along with environment variables

In the below deployment file we are creating the Postgres app; it manages the pods whose labels match name: postgres-pod and app: demo-voting-app.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deploy
  labels:
    name: postgres-deploy
    app: demo-voting-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: postgres-pod
      app: demo-voting-app
  template:
    metadata:
      name: postgres-pod
      labels:
        name: postgres-pod
        app: demo-voting-app
    spec:
      containers:
        - name: postgres
          image: postgres
          resources:
            limits:
              memory: "4Gi"
              cpu: "1"
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER
              value: "postgres"
            - name: POSTGRES_PASSWORD
              value: "postgres"

Preparing the Kubernetes deployment YAML file for the Worker App along with environment variables

In the below deployment file we are creating the worker app; it manages the pods whose labels match name: worker-app-pod and app: demo-voting-app.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker-app-deploy
  labels:
    name: worker-app-deploy
    app: demo-voting-app
spec:
  selector:
    matchLabels:
      name: worker-app-pod
      app: demo-voting-app  
  replicas: 3
  template:
    metadata:
      name: worker-app-pod
      labels:
        name: worker-app-pod
        app: demo-voting-app
    spec:
      containers:
        - name: worker
          image: kodekloud/examplevotingapp_worker:v1
          resources:
            limits:
              memory: "4Gi"
              cpu: "1"
            requests:
              memory: "2Gi"
              cpu: "500m"

Preparing the Kubernetes deployment YAML file for the Result App along with environment variables

In the below deployment file we are creating the result app; it manages the pods whose labels match name: result-app-pod and app: demo-voting-app.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: result-app-deploy
  labels:
    name: result-app-deploy
    app: demo-voting-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: result-app-pod
      app: demo-voting-app
  template:
    metadata:
      name: result-app-pod
      labels:
        name: result-app-pod
        app: demo-voting-app
    spec:
      containers:
        - name: result-app
          image: kodekloud/examplevotingapp_result:v1
          resources:
            limits:
              memory: "4Gi"
              cpu: "1"
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 80

Creating the Kubernetes NodePort service YAML file

Now that we have created the deployment files for each application, the voting app and result app need to be exposed to the outside world, so we declare both of them as NodePort services as shown below.

kind: Service 
apiVersion: v1 
metadata:
  name: voting
  labels:
    name: voting-service
    app: demo-voting-app
spec:
  type: NodePort
  selector:
    name: voting-app-pod
    app: demo-voting-app
  ports:      
    - port: 80    
      targetPort: 80   
      nodePort: 30004
---
kind: Service 
apiVersion: v1 
metadata:
  name: result
  labels:
    name: result-service
    app: demo-voting-app
spec:
  type: NodePort
  selector:
    name: result-app-pod
    app: demo-voting-app
  ports:      
    - port: 80
      targetPort: 80
      nodePort: 30005
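Once these NodePort services are applied, the two UIs should be reachable on any node's IP at the declared node ports. A rough sketch of how to find the URLs (the node IP lookup and the minikube commands are assumptions that depend on your setup):

# On a regular cluster, find a node IP and browse to http://<node-ip>:30004 (voting) and http://<node-ip>:30005 (result)
kubectl get nodes -o wide

# On minikube, let minikube print the URLs for you
minikube service voting --url
minikube service result --url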

Creating the Kubernetes ClusterIP service YAML file

Now that we have created the deployment files for each application, the Redis app and Postgres app only need to be reachable from inside the cluster, so we declare both of them as ClusterIP services as shown below.

kind: Service 
apiVersion: v1 
metadata:
  name: db
  labels:
    name: postgres-service
    app: demo-voting-app
spec:
  type: ClusterIP
  selector:
    name: postgres-pod
    app: demo-voting-app
  ports:      
    - port: 5432    
      targetPort: 5432   
---
kind: Service 
apiVersion: v1 
metadata:
  name: redis
  labels:
    name: redis-service
    app: demo-voting-app
spec:
  type: ClusterIP
  selector:
    name: redis-pod
    app: demo-voting-app
  ports:      
    - port: 6379    
      targetPort: 6379   

Running the Kubernetes services and deployments

Now we will create the Kubernetes deployments and services using the commands below.

kubectl apply -f postgres-app-deploy.yml
kubectl apply -f redis-app-deploy.yml
kubectl apply -f result-app-deploy.yml
kubectl apply -f worker-app-deploy.yml
kubectl apply -f voting-app-deploy.yml



kubectl apply -f postgres-app-service.yml
kubectl apply -f redis-app-service.yml
kubectl apply -f result-app-service.yml
kubectl apply -f voting-app-service.yml
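After applying everything, a quick way to confirm that the deployments, pods, and services came up healthy (the object names will match whatever you used in your manifests):

kubectl get deployments
kubectl get pods -o wide
kubectl get svc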

Conclusion

In this article we walked through a Kubernetes microservice architecture with a complete Kubernetes deployment example.


How to Install Kubernetes on Ubuntu 20.04 step by step

If you are looking to dive into the Kubernetes world, learning how to install Kubernetes is equally important.

Kubernetes is more than just management of containers as it keeps the load balanced between the cluster nodes, provides a self-healing mechanism, zero downtime deployment capabilities, automatic rollback, and many more features.

Let’s dive into this tutorial and learn how to install Kubernetes on ubuntu 20.04.



Prerequisites

  • Two Ubuntu machines, one for the Kubernetes master (controller) node and the other for the Kubernetes worker node.
  • On both Linux machines, make sure inbound and outbound rules are open to the world, as this is only a demonstration.

In a production environment, the control plane and worker nodes need the following ports open: control plane 6443, 10250-10252, and 2379-2380 (all inbound); worker nodes 30000-32767.
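For example, if your machines use ufw as the firewall (an assumption; your environment may rely on cloud security groups instead), the ports could be opened like this:

# Control plane node
sudo ufw allow 6443/tcp
sudo ufw allow 10250:10252/tcp
sudo ufw allow 2379:2380/tcp

# Worker node
sudo ufw allow 30000:32767/tcp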

  • Docker is installed on both Ubuntu machines. To check if Docker is running, use the below command.

You may incur a small charge if you create EC2 instances on Amazon Web Services (AWS) for this tutorial.

service docker status
Checking the docker status

Set up the prerequisites for the Kubernetes installation on the Ubuntu machines

Before installing Kubernetes on Ubuntu, you should first run through a few prerequisite tasks to ensure the installation goes smoothly.

To get started, open your favorite SSH client, connect to the master and worker nodes, and follow along.

  • Install the apt-transport-https and curl packages using the apt-get install command. The apt-transport-https package allows the use of repositories accessed via the HTTP Secure protocol, and curl allows you to transfer data to or from a server, download files, etc.
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
Installing the transport-https and curl package on each ubuntu system
  • Add the GPG key for the official Kubernetes repository to your system using curl command.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  • Add the Kubernetes repository to APT sources and update the system.
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update

You can also use sudo apt-add-repository “deb http://apt.kubernetes.io/ kubernetes-xenial main” command to add the kubernetes repository

  • Finally, rerun the sudo apt update command to read the new package repository list and ensure all of the latest packages are available for installation.

Installing Kubernetes on the Master and Worker Nodes

Now that you have the prerequisite packages installed on both the master and worker nodes, it's time to set up Kubernetes. Kubernetes consists of three packages/tools: kubeadm, kubelet, and kubectl. Each of these packages contains all of the binaries and configurations necessary to set up a Kubernetes cluster.

Assuming you are still connected to the MASTER and Worker node via SSH:

  • Now install kubectl (which manages the cluster), kubeadm (which bootstraps the cluster), and kubelet (which manages pods and containers) on both machines.
sudo apt-get install -y kubelet kubeadm kubectl
Installing the kubeadm kubelet kubectl package/tool on each ubuntu machine

If you don't specify the runtime, then kubeadm automatically detects an installed container runtime. For the Docker runtime the path to the Unix socket is /var/run/docker.sock, and for containerd it is /run/containerd/containerd.sock.
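If you ever need to point kubeadm at a specific runtime explicitly, it can be done with the --cri-socket flag; the containerd socket path below is an assumption for a default containerd installation:

sudo kubeadm init --cri-socket unix:///run/containerd/containerd.sock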

Initialize Kubernetes cluster

Now you have Kubernetes installed on your controller node and worker node, but unless you initialize it, it does nothing. Kubernetes is initialized on the controller node; let's do it.

  • Initialize your cluster using the kubeadm init command on the controller node, i.e., the control plane node.

The below command tells Kubernetes the IP address where its kube-apiserver is located with the --apiserver-advertise-address parameter. In this case, that IP address is the controller node itself.

The command below also defines the range of IP addresses to use for the pod network using the --pod-network-cidr parameter. The pod network allows pods to communicate with each other. Setting the pod network like this will automatically instruct the controller node to assign IP addresses for every node.

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.111.4.79
THE CONTROLLER NODE STARTS THE CLUSTER AND ASKS YOU TO JOIN YOUR WORKER NODE
  • Once your controller node (the control plane) is initialized, run the below commands on the controller node so you can manage the Kubernetes cluster as a regular user.
# Run the below commands on the controller node to manage the Kubernetes cluster as a regular user
# Create a directory that will hold configurations such as the admin key files, which are required to connect to the cluster, and the cluster's API address.
mkdir -p $HOME/.kube
# Copy the admin configuration into the newly created directory
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# Change ownership from root to the regular (non-root) user
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Run the join command (printed by kubeadm init) on the worker node
kubeadm join 10.111.4.79:6443 --token zxicp.......................................
  • After running the command, the worker node joins the control plane successfully.
WORKER NODE JOINS THE CLUSTER
  • Now, verify the nodes on your controller node by running the kubectl command as below.
kubectl get nodes
Checking the Kubernetes nodes
  • You will notice that the status of both nodes is NotReady because no networking is configured between them. To check the network connectivity, run the kubectl command as shown below.
kubectl get pods --all-namespaces
  • Below, you can see that the coredns pods are Pending; they provide network connectivity between the nodes and must reach the Running status for the network to work.
Checking the Kubernetes Pods

To fix the networking issue, you will need to Install a Pod network on the cluster so that your Pods can talk to each other. Let’s do that !!

Install a Pod network on the cluster

Earlier, you installed Kubernetes on the controller node, and the worker node was able to join it, but to establish network connectivity between the two nodes, you need to deploy a pod network on the controller node. One of the most widely used pod networks is Flannel; let's deploy it with the kubectl apply command.

Kubernetes allows you to set up pod networks via YAML configuration files. One of the most popular pod networks is called Flannel. Flannel is responsible for allocating an IP address lease to each node.

The Flannel YAML file contains the configuration necessary for setting up the pod network.

  • Run the below kubectl apply command on the Controller node.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  • After running this command, you will see the below output.
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
  • Now re-run kubectl commands to verify if both the nodes are in ready status and the coredns pod is running.
kubectl get nodes
kubectl get pods --all-namespaces
Kubernetes network is set up.
  • To check the cluster status, run the kubectl cluster-info command.
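kubectl cluster-info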
Checking the Kubernetes cluster status


Conclusion

You should now know how to install Kubernetes on Ubuntu. Throughout this tutorial, you walked through each step to get a Kubernetes cluster set up and deploy your first application. Good job!

Now that you have a Kubernetes cluster set up, what applications will you deploy next to it?

The Ultimate Kubernetes Interview questions for Kubernetes Certification (CKA)

If you are preparing for a DevOps interview, Kubernetes interview questions, or the Kubernetes certification, consider working through this Ultimate Kubernetes Interview questions for Kubernetes Certification (CKA) tutorial, which will help you in any Kubernetes interview.

Without further delay, let’s get into this Kubernetes Interview questions for Kubernetes Certification (CKA).



Related: Kubernetes Tutorial for Kubernetes Certification [PART-1]

Related: Kubernetes Tutorial for Kubernetes Certification [PART-2]

PAPER-1

Q1. How to create kubernetes namespace using kubectl command.

Answer: Kubernetes namespace can be created using the kubectl create command.

kubectl create namespace namespace-name

Q2. How to create a kubernetes namespace named my-namespace using a manifest file?

Answer: Create the file named namespace.yaml as shown below.

apiVersion: v1
kind: Namespace
metadata: 
    name: my-namespace
  • Now execute the below kubectl command as shown below.
kubectl create -f namespace.yaml
Creating the Kubernetes namespace(my-namespace)

Q3. How to switch from one Kubernetes namespace to another Kubernetes namespace ?

Answer: To switch between two Kubernetes namespaces, run the kubectl config set-context command.

kubectl config set-context $(kubectl config current-context) --namespace my-namespace2
switch from one Kubernetes namespace to another Kubernetes namespace

Q4. How To List the Kubernetes namespaces in a Kubernetes cluster ?

Answer: Run the kubectl get command as shown below.

kubectl get namespaces

Q5. How to create the Kubernetes namespaces in a Kubernetes cluster ?

Answer: Execute the below kubectl command.

kubectl create namespace namespace-name

Q6. How to delete a Kubernetes namespace using the kubectl command?

Answer: kubectl delete command allows you to delete the Kubernetes API objects.

kubectl delete namespaces namespace-name

Q7. How to create a new Kubernetes pod with nginx image?

Answer: Use Kubectl run command to launch a new Kubernetes Pod.

kubectl run nginx-pod --image=nginx
Running kubectl run command to create a new Pod.

Q8. How to Create a new Kubernetes pod in different Kubernetes namespace?

Answer: Use the kubectl run command to launch a new Kubernetes Pod, followed by the --namespace flag.

kubectl run nginx-pod --image=nginx --namespace=kube-system
Creating a new Kubernetes pod in a different Kubernetes namespace

Q9. How to check the running Kubernetes pods in the Kubernetes cluster?

Answer:

kubectl get pods
Checking the running Kubernetes pods

Q10. How to check the running Kubernetes pods in the Kubernetes cluster in different kubernetes namespace?

Answer:

 kubectl get pods  --namespace=kube-system | grep nginx
Checking the running Kubernetes pods in different kubernetes namespace

Q11. How to check the Docker image name for a running Kubernetes pod and get all the details?

Answer: Execute the kubernetes describe command.

kubectl describe pod pod-name
Describing the kubernetes Pod

Q12. How to Check the name of the Kubernetes node on which Kubernetes pods are deployed?

Answer:

kubectl get pods -o wide
Checking the name of the Kubernetes node

Q13. How to check the details of docker containers in the Kubernetes pod ?

Answer:

kubectl describe pod pod-name
Checking the details of docker containers

Q14. What does READY status signify in kubectl command output?

Answer: The READY status gives the stats of the number of running containers and the total containers in the cluster.

kubectl get pod -o wide
Checking the Ready Status

Q15. How to delete the Kubernetes pod in the kubernetes cluster?

Answer: Use the kubectl delete command.

kubectl delete pod webapp
Deleting the Kubernetes pod

Q16. How to edit the Docker image of the container in the Kubernetes Pod ?

Answer: Use the Kubernetes edit command.

kubectl edit pod webapp

Q17. How to Create a manifest file to launch a Kubernetes pod without actually creating the Kubernetes pod?

Answer: The --dry-run=client flag should be used.

kubectl run nginx --image=nginx --dry-run=client -o yaml > my-file.yaml
launch a Kubernetes pod without actually creating the Kubernetes pod

Q18. How to check the number of Kubernetes Replicasets running in the kubernetes cluster ?

Answer: Run Kubectl get command.

kubectl get rs
kubectl get replicasets
Checking the Replicasets in kubernetes cluster

Q19. How to find the correct version of the Kubernetes Replicaset or in Kubernetes deployments ?

Answer:

kubectl explain rs | grep VERSION
Finding the Kubernetes replicaset or kubernetes deployment version

Q20. How to delete the Kubernetes Replicasets in the Kubernetes cluster?

Answer: Run the below command.

kubectl delete rs replicaset-1 replicaset-2
delete the Kubernetes Replicasets

Q21. How to edit the Kubernetes Replicasets in the Kubernetes cluster?

Answer: Run the below command.

kubectl edit rs replicaset-name

Q22. How to Scale the Kubernetes Replicasets in the Kubernetes cluster?

Answer: To scale the Kubernetes Replicasets you can use any of three below commands.

kubectl scale  --replicas=5 rs rs_name
kubectl scale --replicas=6 -f file.yml # Doesnt change the number of replicas in the file.
kubectl replace -f file.yml

Q23. How to Create the Kubernetes deployment in the kubernetes Cluster?

Answer: Use the kubernetes create command.

kubectl create deployment nginx-deployment --image=nginx
Creating the Kubernetes deployment
kubectl create deployment my-deployment --image=httpd:2.4-alpine
Creating the Kubernetes deployment

Note: Deployment strategies are of two types:

  • Recreate strategy, where all the pods of the deployment are replaced together and new pods are created.
  • Rolling update strategy, where a few pods at a time are replaced with newly created pods (see the sketch after this list).
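The strategy is selected in the deployment spec. A minimal sketch of the relevant fields (the maxSurge and maxUnavailable values below are illustrative, not required values):

spec:
  strategy:
    type: RollingUpdate   # or Recreate
    rollingUpdate:
      maxSurge: 1         # extra pods allowed above the desired count during the update
      maxUnavailable: 1   # pods that may be unavailable during the update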

To Update the deployment use the below commands.

  • To update the deployments
kubectl apply -f deployment-definition.yml
  • To update the deployment such as using nginx:1.16.1 instead of nginx:1.14.2
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record

Q24. How to Scale the Kubernetes deployment in the kubernetes Cluster?

Answer:

kubectl scale deployment my-deployment --replicas=3
Scaling the Kubernetes deployment

Q25. How to Edit the Kubernetes deployment in the kubernetes Cluster?

Answer:

kubectl edit deployment my-deployment
Editing the Kubernetes deployment

Q26. How to Describe the Kubernetes deployment in the kubernetes Cluster?

Answer:

kubectl describe deployment my-deployment
Describing the Kubernetes deployment

Q27. How to pause the Kubernetes deployment in the kubernetes Cluster?

Answer: Use the Kubectl rollout command.

kubectl rollout pause deployment.v1.apps/my-deployment
Pausing the kubernetes deployment
Viewing the Paused kubernetes deployment
  • To check the status of Rollout and then check all the revisions and rollouts you can check using below command.
kubectl rollout status deployment.v1.apps/my-deployment

kubectl rollout history deployment.v1.apps/my-deployment

Q28. How to resume the Kubernetes deployment in the kubernetes Cluster?

Answer:

kubectl rollout resume deployment.v1.apps/my-deployment
Resuming the Kubernetes deployment

Q29. How to check the history the Kubernetes deployment in the kubernetes Cluster?

Answer:

For incorrect Kubernetes deployments, such as one with an incorrect image, the rollout gets stuck. Stop watching the rollout using Ctrl+C and execute the rollout history command.

kubectl rollout history deployment.v1.apps/nginx-deployment

Q30. How to rollback to the previous kubernetes deployment version which was stable in the kubernetes Cluster?

Answer: Run the undo command as shown below.

kubectl rollout undo deployment.v1.apps/nginx-deployment

Q31. How to Create a manifest file to create a Kubernetes deployment without actually creating the Kubernetes deployment?

Answer: Use the --dry-run=client flag.

kubectl create deployment nginx --image=nginx --dry-run=client -o yaml
Creating the kubernetes deployment manifest file

Q32. How to Create a manifest file to create a Kubernetes deployment with Replicasets without actually creating the Kubernetes deployment?

Answer: Use the --dry-run=client flag.

kubectl create deployment nginx --image=nginx --replicas=4 --dry-run=client -o yaml
Creating the kubernetes deployment with replicasets with manifest file

Q33. How to Create a Kubernetes service using manifest file ?

Answer: Create the Kubernetes service definition file and then run the kubectl create command.

kubectl create -f service-definition.yml

Q34. How to Check running Kubernetes service in the kubernetes cluster?

Answer: To check the running Kubernetes services in the kubernetes cluster run below command.

kubectl get svc
kubectl get services
Checking Kubernetes service in kubernetes cluster

Q35. How to Check details of kubernetes service such as targetport, labels, endpoints in the kubernetes cluster?

Answer:

kubectl describe service service-name
Describing the Kubernetes service in kubernetes cluster

Q36. How to Create a Kubernetes NodePort service in the kubernetes cluster?

Answer: Run kubectl expose command.

kubectl expose deployment nginx-deploy --name=my-service --target-port=8080 --type=NodePort --port=8080 -o yaml -n default  # Make sure to add the nodePort separately
Kubernetes NodePort service

Q37. How to Create a Kubernetes ClusterIP service named nginx-pod running on port 6379 in the kubernetes cluster?

Answer: Create a pod then expose the Pod using kubectl expose command.

kubectl run nginx --image=nginx --namespace=kube-system
kubectl expose pod nginx --port=6379 --name nginx-pod -o yaml --namespace=kube-system
Creating the Kubernetes Pods
Kubernetes ClusterIP service
Verifying the Kubernetes ClusterIP service

Q38. How to Create a Kubernetes ClusterIP service named redis-service in the kubernetes cluster?

Answer:

kubectl create service clusterip --tcp=6379:6379  redis-service --dry-run=client -o yaml
Creating the Kubernetes ClusterIP

Q39. How to Create a Kubernetes NodePort service named redis-service in the kubernetes cluster?

Answer: Use the kubectl create service nodeport command.

kubectl create service nodeport --tcp=6379:6379  redis-service  -o yaml
Creating the Kubernetes NodePort

Q40. How to save a Kubernetes manifest file while creating a Kubernetes deployment in the kubernetes cluster?

Answer: Redirect the output to a file using > nginx-deployment.yaml.

kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > nginx-deployment.yaml


Related: Kubernetes Tutorial for Kubernetes Certification [PART-1]

Related: Kubernetes Tutorial for Kubernetes Certification [PART-2]

Conclusion

In this Ultimate guide (Kubernetes Interview questions for Kubernetes Certification (CKA), you had a chance to revise everything you needed to pass and crack the Kubernetes interview.

Now you have sound knowledge of Kubernetes and are ready for your upcoming interview.

Kubernetes Tutorial for Kubernetes Certification [PART-2]

In the previous Kubernetes Tutorial for Kubernetes Certification [PART-1], you got a jump start into the Kubernetes world; why not gain a more advanced level of knowledge of Kubernetes that you need to become a Kubernetes pro.

In this Kubernetes Tutorial for Kubernetes Certification [PART-2] guide, you will learn more advanced levels of Kubernetes concepts such as Kubernetes deployment, kubernetes volumes, Kubernetes ReplicaSets, and many more.

Without further delay, let’s get into it.


Table of Content

  1. kubernetes deployment
  2. Kubernetes ReplicaSets
  3. Kubernetes DaemonSet
  4. Kubernetes Jobs
  5. What is a kubernetes service
  6. Kubernetes ClusterIP
  7. Kubernetes NodePort
  8. kubernetes loadbalancer service
  9. Kubernetes Ingress
  10. kubernetes configmap or k8s configmap
  11. Kubernetes Secrets
  12. Kubernetes Volume and kubernetes volume mounts
  13. kubernetes stateful sets
  14. Conclusion

Introduction to YAML

The YAML format is easier to understand; to compare, let's check out the same data in three different syntaxes below.

The below is the XML syntax.

<servers>
      <server>
               <name>server1</name>
               <owner>sagar</owner>
               <status>active</status>
      </server>
</servers>

The below is the JSON syntax.

{
   "servers": [
      {
         "name": "server1",
         "owner": "sagar",
         "status": "active"
      }
   ]
}

The below is the YAML syntax.

servers:
  - name: server1
    owner: sagar
    status: active

The below is again an example of the YAML syntax.

Fruits:
  - Apple:
        Calories: 95
        Fat: 0.3
        Carbs: 25
  - Banana:
      Calories: 105
      Fat: 0.4
      Carbs: 27
  - Orange:
        Calories: 45
        Fat: 0.1
        Carbs: 11
Vegetables:
  - Carrot:
        Calories: 25
        Fat: 0.1
        Carbs: 6  
  - Tomato:
        Calories: 22
        Fat: 0.2
        Carbs: 4.8  
  - Cucumber:
        Calories: 8
        Fat: 0.1
        Carbs: 1.9          

kubernetes deployment

Kubernetes deployments allow you to create Kubernetes Pods and containers using YAML files. Using Kubernetes deployment, you specify the number of pods or replica sets you need for a particular Kubernetes deployment.

Unlike a Kubernetes ReplicaSet, a Kubernetes deployment allows you to roll back, update the rollouts, and resume or pause the deployment without causing downtime. When you create a Kubernetes deployment and define replicas, the corresponding Kubernetes ReplicaSet is also created.

A ReplicaSet ensures that a specified number of Pods are running simultaneously; however, a Kubernetes deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to pods along with a lot of other useful features.

Let’s check out an example to create Kubernetes deployments.

  • Create a file named deployment.yaml and copy/paste the below content into the file.
    • The name of the deployment is nginx-deployment, defined in the metadata.name field.
    • The deployment will create three Kubernetes Pods using the spec.replicas field.
    • The Kubernetes pod characteristics are defined using the spec.selector field.
    • Pods will be launched only if they match the deployment label defined using spec.selector.matchLabels.app.
    • Pods are labeled using spec.template.metadata.labels.app.
    • Container specifications are defined using spec.template.spec.

When you execute the kubectl apply command to create the Kubernetes object, your YAML file or request to the kube-apiserver is first converted into JSON format.

The below is an example of the Deployment YAML syntax.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment  # Name of the deployment
  labels: 
     app: nginx  # Declaring the deployment's labels.
spec:
  replicas: 3  # Declaring the number of Pods required
  selector:
    matchLabels:
      app: nginx # Pods will be launched if matches deployment Label.
  template:
    metadata:
      labels:
        app: nginx # Labels of the Pods.
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
  • Run the commands below.
kubectl create deployment --help

kubectl create -f deployment.yml

kubectl create deployment my-dep --image=busybox --replicas=3 
  • Now, run kubectl get deployments to check if the Kubernetes deployment has been created.
kubectl get deployments
Creating kubernetes deployments
  • Next, run kubectl get rs to check the Kubernetes ReplicaSets created by the Deployment,
kubectl get rs
Checking the kubernetes deployments
  • If you wish to check the labels which are automatically generated for each Pod, run the below command.
kubectl get pods --show-labels
Checking labels of Pods
  • To check the information of the deployment use the below command.
kubectl describe deployment nginx-deployment
  • To update the deployment such as using nginx:1.16.1 instead of nginx:1.14.2
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record
  • To create an NGINX Pod
kubectl run nginx --image=nginx

--dry-run=client : This will not create the resource; instead it tells you whether the resource can be created and whether the command is correct.

-o yaml : This will output the resource definition in YAML format on screen.

  • Generate a Pod manifest YAML file (-o yaml) without creating it (--dry-run=client).
kubectl run nginx --image=nginx --dry-run=client -o yaml
  • To create a deployment
kubectl create deployment --image=nginx nginx
  • To generate a Deployment YAML file. Hint: use (-o yaml) to print the deployment definition and --dry-run=client to avoid creating the deployment.
kubectl create deployment --image=nginx nginx --dry-run=client -o yaml
  • To generate a Deployment YAML file (-o yaml) without creating it (--dry-run=client) and save it to a file.
kubectl create deployment --image=nginx nginx --dry-run=client -o yaml > nginx-deployment.yaml
  • To make necessary changes to the file (for example, adding more replicas) and then create the deployment.
kubectl create -f nginx-deployment.yaml
  • The --replicas option to create a deployment with 4 replicas.
kubectl create deployment --image=nginx nginx --replicas=4 --dry-run=client -o yaml > nginx-deployment.yaml

Kubernetes Update and Rollback

  • First check the numbers of pods and make sure no resource is present in this namespace.
kubectl get pods
  • Next, create the deployment. The --record option records the changes.
kubectl create -f deployment.yml --record=true
  • Next check the status of the rollout by using below command.
kubectl rollout status deployment app-deployment
  • Next check the history of the deployment by using the below command.
kubectl rollout history deployment app-deployment
  • Next describe the deployment by using the below command.
kubectl describe deployment app-deployment
  • Next edit the deployment by using the below command, for example to change the image version.
kubectl edit deployment app-deployment
  • Next, if there are any issues with the deployment, you can undo it by using the below command.
kubectl rollout undo deployment app-deployment
  • Next check the status of the rollout by using below command.
kubectl rollout status deployment app-deployment

Imperative and Declarative Kubernetes commands

Imperative commands are run one at a time, when there are explicit steps to be performed, such as:

kubectl run --image=nginx nginx
kubectl create deployment --image=nginx nginx
kubectl expose deployment nginx --port 80
kubectl edit deployment nginx
kubectl scale deployment nginx --replicas=5
kubectl set image deployment nginx nginx=nginx:1.18
kubectl create -f nginx.yaml
kubectl edit -f nginx.yaml
kubectl delete -f nginx.yaml

Note: the edit command edits the live deployment; it doesn't change anything in the manifest file. It opens the object's configuration held in Kubernetes memory, not the original file.

Also, with imperative commands, if the object already exists and you run the create command again, the API server returns an error.

With declarative commands, you create the YAML manifest file and then run the below command.

kubectl apply -f nginx.yaml

Kubernetes ReplicaSets

Kubernetes ReplicaSets maintains a set of Kubernetes Pods running simultaneously and makes sure the pods are load-balanced properly; however, a Kubernetes deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to pods along with a lot of other useful features.

Even if you declare the replica sets as 1, kubernetes makes sure that you have this 1 pod running all the time.

Kubernetes ReplicaSets are deployed in the same way as Kubernetes deployments. For ReplicaSets, the kind is always ReplicaSet, and you can scale or delete the pods with the same kubectl commands as you used for deployments.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicasets
  labels: 
      app: nginx
spec:
  replicas: 3 
  selector:
    matchLabels:   # Replicaset Label To create replicasets only when it matches label app: nginx 
      app: nginx 
  template:
    metadata:
     labels:      # Container label app: nginx 
        app: nginx 
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
  • Next run the below command to create the kubernetes Replicaset.
kubectl apply -f replicasets.yml
kubectl create -f rc-definition.yml
  • To replace the Kubernetes ReplicaSet, run the below command.
kubectl replace -f replicasets.yml
  • To scale the Kubernetes Replicasets run the below command.

Changing the Kubernetes Replicasets doesn’t change the number of replicas in the Kubernetes manifest file.

kubectl scale --replicas=6 -f replicasets.yml 
kubectl scale  --replicas=6 replicaset name-of-the-replicaset-in-metadadata
Kubectl commands to work with kubernetes replicasets
  • To list the older ReplicationControllers, run the below command.
kubectl get replicationcontroller
  • Some of the important commands of replicasets are:
kubectl create -f replicaset-definition.yml
kubectl get replicaset
kubectl delete replicaset myapp-replicaset
kubectl replace -f replicaset-definition.yml
kubectl scale --replicas=6 -f replicaset-definition.yml

If a ReplicaSet is already running with a given set of labels and you try to create new pods with the same labels, the ReplicaSet will terminate the extra pods.

Kubernetes DaemonSet

Kubernetes DaemonSet ensures that each node in the Kubernetes cluster runs a copy of a Pod. When a node is added to the cluster, a Pod is added to that node; when a node is removed, its Pod is garbage-collected, keeping the Kubernetes cluster clean.

Generally, the node that a Kubernetes Pod runs on is chosen by the Kubernetes scheduler; however, for Kubernetes, DaemonSet pods are created and scheduled by the DaemonSet controller. To deploy, replace or update the Kubernetes Daemonset, you need to use the same Kubectl command for Kubernetes deployments.

  • Create a file named daemonset.yaml and copy/paste the below code.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
  • Now, execute the kubectl apply command to create a Kubernetes daemonset.
kubectl apply -f  daemonset.yaml
creating a Kubernetes daemonset

Kubernetes Jobs

The main function of a Kubernetes Job is to create one or more Kubernetes Pods and ensure that they complete successfully. Deleting a Kubernetes Job will remove the Pods it created, and suspending a Kubernetes Job will delete its active Pods until it is resumed again.

For example, if a Pod fails or is deleted due to a node hardware failure or a node reboot, the Kubernetes Job will start a new Pod in its place. A Kubernetes Job also allows you to run multiple Pods in parallel or, via a CronJob, on a particular schedule.

When a Kubernetes Job completes, no more Pods are created or deleted, allowing you to still view the logs of completed pods to check for errors, warnings, etc. The Kubernetes job remains until you delete it using the kubectl delete job command.

  • To create a Kubernetes Job create a file named job.yaml and copy/paste the below content into it.
apiVersion: batch/v1
kind: Job
metadata:
  name: tomcatjob
spec:
  template:
    # This is the pod template
    spec:
      containers:
      - name: tomcatcon
        image: tomcat
        command: ['sh', '-c', 'echo "Hello, Tomcat!" && sleep 3600']
      restartPolicy: OnFailure
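If you want a Job to run several Pods, the completions and parallelism fields in the Job spec control that. A minimal sketch (the numbers are only illustrative):

spec:
  completions: 5    # total number of successful Pod runs required
  parallelism: 2    # how many Pods may run at the same time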

  • To create the Kubernetes Jobs run the kubectl apply command followed by kubectl get job command to verify.
kubectl apply -f job.yaml

kubectl get jobs
creating the Kubernetes Jobs
  • To list all the Pods that belong to a Kubernetes Job use kubectl get pods command as shown below.
pods=$(kubectl get pods --selector=job-name=tomcatjob --output=jsonpath='{.items[*].metadata.name}')
echo $pods
list all the Pods that belong to a Kubernetes Job

What is a kubernetes service

Kubernetes service allows you to expose applications running on a set of Pods as a network service. Every Kubernetes Pod gets a unique IP address and DNS name, and Pods are sometimes deleted or added to match the desired state of your cluster, which leads to a problem because the IP addresses change.

To solve this, the Kubernetes Service was introduced; it assigns a stable, permanent IP address to a set of Pods as a network service. There are different Kubernetes service types: ClusterIP, NodePort, LoadBalancer, and ExternalName.

Kubernetes ClusterIP

Kubernetes ClusterIP exposes the service on an internal IP that is reachable only within the cluster. You cannot access a ClusterIP service from outside the Kubernetes cluster. When you create a Kubernetes ClusterIP service, a virtual IP is assigned.

Kubernetes ClusterIP architecture
Kubernetes ClusterIP architecture
  • Lets learn to create a ClusterIP using a file named clusterip.yaml and copy/paste the below content.
kind: Service 
apiVersion: v1 
metadata:
  name: backend-service 
spec:
  type: ClusterIP
  selector:
    app: myapp 
  ports:      
    - port: 8080     # Declaring the ClusterIP service port
# target port is the pod's port and If not set then it takes the same value as the service port
      targetPort: 80   
  • To create the ClusterIP service run the kubectl apply command followed by kubectl get service command to verify.
kubectl apply -f clusterip.yaml

kubectl get service
Creating the ClusterIP and verifying
  • To create a Service named redis-service of type ClusterIP to expose pod redis on port 6379
kubectl expose pod redis --port=6379 --name redis-service --dry-run=client -o yaml


kubectl create service clusterip redis --tcp=6379:6379 

Kubernetes NodePort

Kubernetes NodePort exposes the Kubernetes service so that it is accessible outside your cluster on a specific port called the NodePort. Each node proxies the NodePort (the same port number on every node) into your Service. The Kubernetes control plane allocates a port from the default range 30000-32767; if you want a specific port number, you can specify a value in the nodePort field.

Kubernetes NodePort architecture
Kubernetes NodePort architecture

Let’s learn how to create a simple Kubernetes NodePort service. In the below nodeport.yaml manifest file:

  • Kind should be set to Service as you are about to launch a new service.
  • The name of the service is hostname-service.
  • Expose the service on a static port on each node so that the service can be accessed from outside the cluster. When a node receives a request on the static port 30162, it forwards the request to one of the pods with the label "app: echo-hostname".
  • Three types of ports for a service are as follows:
    • nodePort – The static port assigned to each node.
    • port – The service port exposed internally in the cluster.
    • targetPort – Container port or pod Port on which application is hosted.
kind: Service 
apiVersion: v1 
metadata:
  name: hostname-service 
spec:
  type: NodePort
  selector:
    app: echo-hostname 
# Client access the Node Port which is forwarded to the service Port and to the target Port
  ports:       
    - nodePort: 30162  # Node Port
      port: 8080 # Service Port
      targetPort: 80   # Pod Port ( If not set then it takes the same service Port)

  • To create the Kubernetes NodePort service run the kubectl apply command followed by kubectl get service command to verify.
kubectl apply -f nodeport.yaml

kubectl get service
Checking Kubernetes NodePort service

If there is a single pod on a single node or multiple pods on a single node or multiple pods on multiple nodes then NodePort remains the same but with a different URL for the client.

https://node1:30008
https://node2:30008
https://node3:30008
  • To create a Service named nginx-service of type NodePort to expose pod nginx on port 80 and node port 30080, you have to generate a definition file and then add the node port manually before creating the service.
kubectl expose pod nginx --port=80 --type=NodePort --name nginx-service --dry-run=client -o yaml 


kubectl create service nodeport nginx --tcp=80:80  --node-port=30080  --dry-run=client -o yaml

kubernetes loadbalancer service

The Kubernetes LoadBalancer service exposes the service externally using a cloud provider's load balancer. If you access the service with NodePort, you need to use a different URL per node; to overcome this, use a load balancer.

  • Let’s learn how to create a simple kubernetes loadbalancer service. In the below lb.yaml manifest file:
kind: Service 
apiVersion: v1 
metadata:
  name: loadbalancer-service 
spec:
  type: LoadBalancer
  selector:
    app: echo-hostname 
# Client access the Load balancer which forwards to NodePort to the targetPort.
  ports:  
    - nodePort: 30163  # Node Port
      port: 8080 # Service Port
      targetPort: 80   # Pod Port ( If not set then it takes the same service Port)
  • To create the kubernetes Loadbalancer service run the kubectl apply command followed by kubectl get service command to verify.
kubectl apply -f lb.yaml

kubectl get service
Checking Kubernetes Load balancer service

Kubernetes Service commands

kubectl get service

kubectl get svc

kubectl describe svc <name-of-service>


Kubernetes Ingress

Earlier in the previous section, you learned how to enable the Kubernetes load balancer or NodePort service to access the Kubernetes service from outside the cluster. But as your environment grows, you need to expose the service on a proper link, configure multiple URL redirection, apply SSL certificates, etc. To achieve this, you need to have Kubernetes Ingress.

To deploy Kubernetes Ingress, you need a Kubernetes ingress controller and Ingress resources as they are not automatically deployed within a cluster. As you can see in the below image, Ingress sends all its traffic to Kubernetes Service and further to the Pods.

Kubernetes Ingress architecture

Let’s learn how to create a Kubernetes Ingress resource. The name of an Ingress object must be a valid DNS subdomain name, and annotations configure the Ingress controller. The Ingress spec configures a load balancer or proxy server and the rules.

  • If you don't specify any host within the spec parameter, then the rule applies to all inbound HTTP traffic for the IP address.
  • /testpath is the path associated with backend service and port.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
Kubernetes Ingress architecture diagram

kubernetes configmap or k8s configmap

Kubernetes ConfigMaps allow you to store non-confidential data in key-value pairs, such as environment values or command-line arguments, or as a configuration file in a volume (for example, a database subdomain name).

Kubernetes ConfigMaps does not provide secrecy or encryption. If the data you want to store are confidential, use a Secret rather than a ConfigMap.

  • There are multiple ways to use a Kubernetes ConfigMap to configure containers inside a Pod, such as:
    • By using commands in the containers.
    • As environment variables on containers.
    • Attaching it as a volume.
    • Writing code or a script that reads the ConfigMap through the Kubernetes API.
Kubernetes Configmaps architecture diagram
  • Let’s learn how to create a k8s configmap using the below manifest file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
data:
  players: "3"
  ui_properties_file_name: "user-interface.properties"
  • Now that you have created the Kubernetes ConfigMap, let's use values from the game-demo ConfigMap to configure a Pod:
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
    - name: demo
      image: alpine
      command: ["sleep", "3600"]
      env:
        # Define the environment variable
        - name: PLAYER 
          valueFrom:
            configMapKeyRef:
              name: game-demo          
              key: players 
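The same game-demo ConfigMap can also be attached as a volume, so that each key shows up as a file inside the container. A minimal sketch (the pod name and mount path are only illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-pod
spec:
  containers:
    - name: demo
      image: alpine
      command: ["sleep", "3600"]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config   # players and ui_properties_file_name appear as files here
  volumes:
    - name: config-volume
      configMap:
        name: game-demo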

Kubernetes Secrets

Kubernetes Secrets allow you to store sensitive information such as passwords, OAuth tokens, and SSH keys (and they can optionally be encrypted at rest). There are three ways to use a Kubernetes Secret with a Pod: as an environment variable on the container, attached as a file in a volume, or used by kubelet when pulling the image.

Let’s learn how to create Kubernetes Secrets using the below manifest file.

apiVersion: v1
kind: Secret
metadata:
  name: secret-basic-auth
type: kubernetes.io/basic-auth
stringData:
  username: admin
  password: password123

You can also create Kubernetes secrets using kubectl command.

kubectl create secret docker-registry secret-tiger-docker \
  --docker-username=user \
  --docker-password=pass \
  --docker-email=automateinfra@gmail.com
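To consume the secret-basic-auth Secret created above as an environment variable inside a container, a minimal sketch looks like this (the pod and variable names are only illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: secret-demo-pod
spec:
  containers:
    - name: demo
      image: alpine
      command: ["sleep", "3600"]
      env:
        - name: APP_PASSWORD
          valueFrom:
            secretKeyRef:
              name: secret-basic-auth   # the Secret created above
              key: password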

Kubernetes Volume and kubernetes volume mounts

Kubernetes volumes are used to store data for the containers in a Pod. If you store data locally in a container, it is at risk: when the pod or container dies, the data is lost. Kubernetes volumes remain persistent and are backed up easily.

A Pod can use several Kubernetes volumes at the same time. Each container in the Pod's configuration must independently specify where to mount each volume using volume mounts.

  • There are different persistent volume types that Kubernetes supports, such as:
    • AWS EBS: an AWS EBS volume mounts into your pod, provided the nodes on which the pods are running are AWS EC2 instances.
    • Azure Disk: the azureDisk volume type mounts a Microsoft Azure Data Disk into a pod.
    • Fibre Channel: allows an existing Fibre Channel block storage volume to be mounted in a Pod.
  • Let’s learn how to declare Kubernetes volume using AWS EBS configuration example.
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  volumes:
  - name: test-volume
    # This AWS EBS volume must already exist.
    awsElasticBlockStore:
      volumeID: "<volume id>"
      fsType: ext4

kubernetes stateful sets

Kubernetes stateful sets manage stateful applications such as MySQL, Databases, MongoDB, which need persistent storage. Kubernetes stateful sets manage the deployment and scaling of a set of Pods and provide guarantees about the ordering and uniqueness of these Pods.

With Kubernetes stateful sets with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1} and are terminated in reverse order, from {N-1..0}.

Let’s check out how to declare Kubernetes stateful sets configuration example below.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Kubectl apply command

When you need to perform the deployment, you run the below command using the local file.

kubectl apply -f nginx.yaml 

After the apply command, a configuration similar to the local file is created within Kubernetes and is known as the live object configuration.

Also, the local file configuration is converted into JSON format, known as the last-applied configuration, and stored within Kubernetes as an annotation. This enables Kubernetes to identify the difference between the last applied configuration and the current configuration.

annotations:
     kubectl.kubernetes.io/last-applied-configuration
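
You can inspect what kubectl stored in this annotation; for example, assuming nginx.yaml created a Deployment named nginx, either of the commands below shows the last applied configuration.

kubectl apply view-last-applied deployment nginx
kubectl get deployment nginx -o yaml | grep -A1 last-applied-configuration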

Conclusion

Now that you have learned everything you need to know about Kubernetes, you are well on your way to becoming the Kubernetes leader in your upcoming projects, team, or organization.

So with that, which applications do you plan to host on Kubernetes in your next adventure?

Kubernetes Tutorial for Kubernetes Certification [PART-1]

If you are looking to learn Kubernetes, you are at the right place; this Kubernetes Tutorial for Kubernetes Certification will help you gain the complete knowledge you need, from the basics to becoming a Kubernetes pro.

Kubernetes is more than just management of Docker containers: it keeps the load balanced between the cluster nodes, provides a self-healing mechanism, such as replacing failed containers with new healthy ones, and offers many more features.

Let’s get started with Kubernetes Tutorial for Kubernetes Certification without further delay.


Table of Content

  1. What is kubernetes?
  2. Why Kubernetes?
  3. Docker swarm vs kubernetes
  4. kubernetes Architecture: Deep dive into Kubernetes Cluster
  5. kubernetes master components or kubernetes master node
  6. Worker node in kubernetes Cluster
  7. Highly Available Kubernetes Cluster
  8. What is kubernetes namespace?
  9. Kubernetes Objects and their Specifications
  10. Kubernetes Workloads
  11. What is a kubernetes Pod? 
  12. Deploying multi container Pod
  13. Conclusion

What is kubernetes?

Kubernetes is an open-source container orchestration engine, originally developed by Google, for automating the deployment, scaling, and management of containerized applications. It is also called k8s because there are eight letters between the “K” and the “s”.

Kubernetes is a container orchestration tool, which means it orchestrates container runtimes such as Docker.

Kubernetes is portable and extensible, and it supports both declarative configuration and automation.

Kubernetes also helps with service discovery, such as exposing a container using a DNS name or its own IP address, and provides container runtime integration, zero-downtime deployment capabilities, automatic rollbacks, and automatic storage provisioning from local storage, public cloud providers, and more.

Kubernetes has the ability to scale when needed, which is known as AutoScaling. You can automatically manage configurations like secrets or passwords and mount EFS or other storage when required.

Why Kubernetes?

Now that you have a basic idea of what Kubernetes is, let's look at why it is needed. Earlier, applications used to run on physical servers, which had issues with resource allocation, such as CPU and memory. You would need more and more physical servers, which was too expensive.

To solve the resource allocation issue, virtualization was adopted in which you could isolate applications and align the necessary resources as per the need. With virtualization, you can run multiple virtual machines running from single hardware, allowing better utilization of resources and saving hardware costs.

Later came the containerization-based approach, with Docker and then Kubernetes. Containers are lightweight and allow portable deployments: they share the host's OS, CPU, and memory but have their own file systems, and they can be launched anywhere, from local machines to cloud infrastructure.

Finally, Kubernetes takes care of scaling and failover for your applications and easily manages the canary deployment of your system.

Some of the key features of Kubernetes are:

  • Kubernetes exposes a container using a DNS name or using an IP address.
  • Kubernetes allows you to mount storage system of your choice such as local storage, public cloud providers and more.
  • You can rollback the state anytime for your deployments.
  • Kubernetes replaces containers that fail or whose health checks fail.
  • Kubernetes allows you to store secrets and sensitive information such as passwords, OAuth tokens and SSH keys. Also you can update the secret information multiple times without impacting container images.

Every Kubernetes object contains two nested fields (the object spec and the object status): the spec describes the desired state you set for the object, and the status shows the current state reported by Kubernetes.

Physical Server to Virtualization to Containerization

Docker swarm vs kubernetes

In previous sections, you learned what Kubernetes is and why there is a shift from physical to virtual machines and towards docker, the container-based technology.

Docker is a lightweight tool that allows you to launch multiple containers. Still, to manage or orchestrate those containers, you need an orchestration tool such as Docker Swarm or Kubernetes.

Let’s look at some of the key differences between Docker swarm vs Kubernetes.

Docker Swarm vs Kubernetes:

  • Docker Swarm uses Docker Compose YAML files and deploys services directly on nodes; Kubernetes uses its own manifest files and schedules Pods through the control plane.
  • Docker Swarm can encrypt data between nodes; in Kubernetes, Pods communicate with each other without encryption by default.
  • Docker Swarm has no built-in autoscaling; Kubernetes can autoscale.
  • Docker Swarm is easy to install, but the cluster doesn't have many advanced features; Kubernetes installation is more difficult, but the resulting cluster is very powerful.

Docker swarm vs Kubernetes

kubernetes Architecture: Deep dive into Kubernetes Cluster

When you install Kubernetes, you create a Kubernetes cluster that mainly contains two kinds of components: master (controller) nodes and worker nodes. Nodes are machines with their own Linux environment, and each can be either a virtual machine or a physical machine.

The application and services are deployed in the containers within the Pods inside the worker nodes. Pods contain one or more docker containers. When a Pod runs multiple containers, all the containers are considered a single entity and share the Node resources.

Bird-eye view of Kubernetes cluster

kubernetes master components or kubernetes master node

The Kubernetes master components, or Kubernetes master node, manage the Kubernetes cluster's state: they store information about the different nodes, decide container placement, hold the cluster data and events, schedule new Pods, and so on.

Kubernetes master components or Kubernetes master node contains various components such as Kube-apiserver, an etcd storage, a Kube-controller-manager, and a Kube-scheduler.

Let’s learn about each Kubernetes master component or Kubernetes master node.

kube api server

The most important component in the Kubernetes master node is the kube API server, or API server, which orchestrates all the operations within the cluster. The Kubernetes cluster exposes the kube API server, which acts as a gateway and authenticator for users.

The kube API server also connects with the worker nodes and the other control plane components. It allows you to query and manipulate the state of API objects in Kubernetes, such as Pods, Namespaces, ConfigMaps, and Events, stored in the etcd server.

First, the user request is authenticated by the kube-apiserver, which validates it against etcd and then performs the operation, such as creating a Pod. Once the Pod is created, the scheduler notices it has no node assigned and, through the API server and the kubelet component of the worker node, assigns it to the appropriate node. The API server then updates the information in etcd.

The kubectl command-line interface or kubeadm uses the kube API server to execute the commands.
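
Because everything goes through the kube API server, you can also query it directly over HTTP; a quick way to try this (assuming kubectl is already configured against your cluster) is:

kubectl proxy --port=8001 &                                   # open an authenticated local proxy to the API server
curl http://localhost:8001/api/v1/namespaces/default/pods     # list Pods through the raw REST API
kubectl get --raw /api/v1/namespaces                          # or let kubectl issue the raw request for you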

If you deploy the kube API server using the kubeadm tool, the API server is installed as a Pod and its manifest file is located at the path below.

cat /etc/kubernetes/manifests/kube-apiserver.yaml
  • However, for a non-kubeadm setup, that is, if you install it manually, you download the kube-apiserver binary using the below command.
wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-apiserver
  • To check the kube-apiserver service definition, look at the below path.
 cat /etc/systemd/system/kube-apiserver.service
  • To check whether the kube API server is running in the Kubernetes cluster, use the kubectl command below. You will notice the Pod has already been created.
kubectl get pods --all-namespaces
Checking the kube API server in the Kubernetes cluster with the kubectl command
  • To check whether the kube API server process is running on the node, use the process command below.
ps -aux | grep kube-apiserver
Checking the kube API server in the Kubernetes cluster with the process command

You can also use client libraries in different languages if you want to write an application that talks to the Kubernetes API server.

etcd kubernetes

etcd is again an important component in the Kubernetes master node; it stores the cluster data, cluster state, nodes, roles, secrets, configs, Pod state, etc., in key-value pair format. etcd holds two types of state, the desired state and the current state, for all resources and keeps them in sync.

When you run a kubectl get command, the request goes to the etcd server via the kube-apiserver; likewise, when you add or update anything in the Kubernetes cluster, for example with kubectl create or kubectl apply, etcd is updated.

For example, when a user runs a kubectl command, the request goes to ➜ the kube API server (authenticator) ➜ etcd (which reads the value), and the values are pushed back to the kube API server.

Note: When you install etcd using the kubeadm tool, it is installed as a Pod and runs on port 2379

Tabular or relational database
Key-value store

Note:

What are binary files ?

The binary file is stored in the binary format. Binary files are computer readable rather than human readable.
All the executable programs are stored in binary format.

What are MSI files ?

MSI files are primarily created for software installations and utilize the Windows Installer service. MSI files are database files that carry information about software installation.

What are EXE (executable) files ?

Exe files are used by Windows operating systems to launch software programs. EXE files are self-contained executable files that may execute a number of functions, including program installation, as opposed to MSI files, which are created exclusively for software installations. Executable files are .BAT, .COM, .EXE, and .BIN.

To install ETCD you will need to perform the below steps:

  • Download etcd binaries
curl -L https://github.com/etcd-io/etcd/releases/download/v3.3.11/etcd-v3.3.11-linux-amd64.tar.gz -o etcd-v3.3.11-linux-amd64.tar.gz
  • Extract etcd binary

tar zxvf etcd-v3.3.11-linux-amd64.tar.gz

  • Run etcd service. This service by default runs on port 2379.

./etcd

After etcd is installed, it comes with the etcdctl command line tool, and we can run the below commands. ETCDCTL is the CLI tool used to interact with the etcd server and can use two API versions, Version 2 and Version 3. By default it is set to use Version 2, and each version has a different set of commands.

./etcdctl --version                 # To check the etcd version

./etcdctl  set key1 value1     # To update the data in etcd using key and value pair

./etcdctl get key1                  # To retrieve the specific keys from the etcd store.

export ETCDCTL_API=3    # To switch to API version 3 using the environment variable 
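
Note that the command set changes between API versions; with ETCDCTL_API=3, the equivalent write/read commands use put and get, for example:

export ETCDCTL_API=3
./etcdctl put key1 value1     # v3 uses put instead of set
./etcdctl get key1            # read the key back
./etcdctl version             # v3 uses 'version' without the leading dashes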

Kube scheduler

The kube scheduler is only responsible for deciding which worker node a new Pod or container should run on, based on the Pod's requirements such as CPU or memory; the actual placement is then carried out by the kubelet on that node.

Whenever the controller manager finds any discrepancy in the cluster, a request is forwarded via the kube API server so that the scheduler can fix the gap. For example, if a node changes or a Pod is created without an assigned node, then:

  • The scheduler continuously monitors the kube API server.
  • The kube API server checks with etcd, and etcd responds back to the kube API server with the required information.
  • Next, the controller manager informs the kube API server that new Pods need to be scheduled, and the scheduler picks a node for them.
  • The scheduler, through the kube API server, asks the kubelet to bind the Pod to the chosen node.
  • After the Pod is assigned, the kubelet responds back to the kube API server with the information, and the kube API server updates etcd.
Scheduling a Pod
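
For instance, the scheduler will only place a Pod on a node with enough unreserved CPU and memory to satisfy its requests; a minimal sketch of such a Pod spec (the request values are purely illustrative) is:

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"      # scheduler filters out nodes without this much unreserved CPU
        memory: "128Mi"  # and without this much unreserved memory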

Installing Scheduler

  • Deploying or installing the scheduler manually. It will be installed as a service.
wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-scheduler
  • Deploying or installing the scheduler using the kubeadm tool. Note: you will see the scheduler as a Pod if you install via kubeadm. To check, you can use the below command.
kubectl get pods -n kube-system
  • When you deploy the scheduler using kubeadm, you will find its manifest file at the below path.
cat /etc/kubernetes/manifests/kube-scheduler.yaml

Kube controller manager

Kube controller manager runs the controller process. Kubernetes comes with a set of built-in controllers that run inside the kube-controller-manager. These built-in controllers provide important core behaviors.

  • Node controller: checks the status of the nodes, i.e., when a node goes up or down. By default it checks node status every 5 seconds.
  • Replication controller: ensures the correct number of Pods is running for every replication group.
  • Endpoint controller: populates the Endpoints objects, i.e., joins Services and Pods.
  • Service account and token controller: creates default service accounts and API access tokens for new namespaces.

In Kubernetes, the kube controller manager runs control loops that watch the state of your cluster and then make or request changes where needed. Each controller tries to move the current cluster state closer to the desired state, and if there are any gaps, it forwards a request via the kube API server so they can be fixed (for example, by scheduling new Pods).

Installing Kube-controller-manager

  • Deploying or installing the kube-controller-manager manually. It will be installed as a service.
wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-controller-manager
  • When you deploy it as a service, it is located at the below path.
cat /etc/systemd/system/kube-controller-manager.service

ps -aux | grep kube-controller-manager
  • Deploying or installing the kube-controller-manager using the kubeadm tool. Note: you will see the kube-controller-manager as a Pod if you install via kubeadm. To check, you can use the below command.
kubectl get pods -n kube-system
  • When you deploy the controller manager using kubeadm, you will find its manifest file at the below path.
cat /etc/kubernetes/manifests/kube-controller-manager.yaml

Worker node in kubernetes Cluster

A worker node is part of a Kubernetes cluster and is used to manage and run containerized applications. A worker node performs actions whenever the kube API server triggers a request. Each node is managed by the control plane (master node) and contains the services necessary to run Pods.

The worker node contains various components, including the kubelet, kube-proxy, and a container runtime. These node components run on every node and maintain the details of all running Pods.

kubelet in kubernetes

The kubelet is an agent that runs on each worker node and manages the containers in its Pods after communicating with the kube API server. The kubelet listens to the kube API server and acts accordingly, for example adding or deleting containers.

Kube API server fetches the information from kubelet about the worker nodes’ health condition and, if necessary, schedules the necessary resources with the help of Scheduler.

The main functions of the kubelet are:

  • Registers the node.
  • Creates Pods.
  • Monitors nodes and Pods.

Kubelet is not installed as a pod with the kubeadm tool; you must install it manually.

kube proxy in kubernetes

Kube-proxy is a networking component that runs on each worker node in the Kubernetes cluster, forwards traffic within the worker nodes, and handles network communication so that Pods and Services can reach each other.

The job of kube-proxy is to watch for new Kubernetes Services; as soon as a Service is created, kube-proxy creates the appropriate rules on each worker node to forward traffic destined for that Service to its backend Pods. iptables rules are the most common kind of rule configured by kube-proxy.
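
On a worker node you can see the rules kube-proxy has programmed (assuming the default iptables mode); the KUBE-SERVICES and KUBE-SVC-* chain names are created by kube-proxy itself.

sudo iptables-save | grep KUBE-SERVICES | head   # Service entry-point chains
sudo iptables-save | grep KUBE-SVC | head        # per-Service forwarding chains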

Installing kube proxy

  • Deploying or installing kube-proxy manually. It will be installed as a service.
wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-proxy

Container Runtime

The container runtime is an important component responsible for providing and maintaining the runtime environment for the containers running inside the Pod. The most common container runtime has been Docker, but others such as containerd or CRI-O are also supported.

High Level summary of kubernetes Architecture

Initially a request is made to the kube-apiserver, which retrieves information from or updates it in the etcd component. If a Pod or Deployment is created, the scheduler connects to the kube API server to decide where the Pod should run. The kube API server then connects with the kubelet component of the chosen worker node to actually run the Pod.

Other than Master or Worker Node

  • Now that you know that a Kubernetes cluster contains master and worker nodes, note that it also needs a DNS server, which serves DNS records for Kubernetes Services (you can verify the DNS add-on with the command shown after this list).
  • Next, although optional, it is good practice to install the Kubernetes Dashboard (UI), which allows users to manage and troubleshoot applications running in the cluster.
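
For example, in most clusters the DNS add-on is CoreDNS, and you can confirm it is running with:

kubectl get pods -n kube-system -l k8s-app=kube-dns   # CoreDNS Pods keep the legacy kube-dns label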

Highly Available Kubernetes Cluster

Now that you have a good understanding of the Kubernetes cluster components, let's look at high availability. There are two topologies for running a highly available Kubernetes control plane:

  • With etcd co-located on the control plane nodes, known as stacked etcd.
  • With etcd running on nodes separate from the control plane nodes, known as external etcd.

etcd co-located with the control plane

When etcd is co-located with the control plane, all three components, the API server, the scheduler, and the controller manager, communicate with the local etcd instance separately.

In this case, if a node goes down, both kinds of components on it go down, i.e., the API server and etcd. To mitigate this, add more control plane nodes to make it highly available. This approach requires less infrastructure.

etcd co-located with the control plane

etcd running on separate nodes from the control plane

In the second case, with etcd running on nodes separate from the control plane, all three components, the kube API server, the scheduler, and the controller manager, communicate with an external etcd cluster.

In this case, if a control plane node goes down, your etcd is not impacted, so you have an even more highly available environment than with stacked etcd, but this approach requires more infrastructure.

etcd running on separate nodes from the control plane

What is kubernetes namespace?

Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called Kubernetes namespace.

In the Kubernetes namespace, all the resources should have a unique name, but not across namespaces. Kubernetes namespaces help different projects, teams, or customers to share a Kubernetes cluster and divide cluster resources between multiple users.

Three namespaces exist by default when you launch a Kubernetes cluster:

  • kube-system: contains all the cluster components such as etcd, the API server, networking, and the proxy server.
  • default: the namespace where your resources and other Kubernetes objects are launched unless you specify otherwise.
  • kube-public: this namespace is readable by all users, including unauthenticated ones, and is typically used for cluster-wide public information.

Let’s look at an example related to the Kubernetes namespace.

  1. If you wish to connect to a service named db-service within the same namespace, you can access the service directly as:
 mysql.connect("db-service")
  2. To access the service named db-service in another namespace, such as dev, you should access it as:
    • <service-name>.<namespace-name>.svc.cluster.local, because a DNS entry is created when you create a Service.
    • svc is the subdomain for Services.
    • cluster.local is the default domain name of the Kubernetes cluster.
mysql.connect("db-service.dev.svc.cluster.local") 

Most Kubernetes resources (e.g., pods, services, replication controllers, and others) are created in the same namespace or different depending on the requirements.

  • To list and describe the current namespaces in a cluster, run the kubectl commands below.
kubectl get namespaces
kubectl describe namespaces
  • To create a Kubernetes namespace imperatively, run the below command.
kubectl create namespace namespace-name
  • Another way to create a Kubernetes namespace is declaratively with a manifest.
apiVersion: v1
kind: Namespace
metadata: 
    name: dev
  • To switch from one Kubernetes namespace to another, run the kubectl config set-context command.

kubectl config set-context $(kubectl config current-context) --namespace my-namespace2
  • To delete a Kubernetes namespace, run the below command.
kubectl delete namespaces namespace-name
  • To allocate a resource quota to a namespace, create a file named resource.yaml with the below content and run the kubectl command that follows.
apiVersion: v1
kind: ResourceQuota
metadata: 
    name: compute-quota
    namespace: my-namespace2  # This is where you will define the namespace
spec:
  hard:
    pods: "10"
    requests.cpu: "1"
    requests.memory: 0.5Gi
    limits.cpu: "1"
    limits.memory: 10Gi
kubectl create -f resource.yaml
allocate resource quota to namespace
  • To check the resource consumption for a particular namespace run the below command.
kubectl describe resourcequota compute-quota
checking the resource consumption for a particular namespace
  • To check all the resources in all the namespaces run the below command.
kubectl get pods --all-namespaces

Kubernetes Objects and their Specifications

Kubernetes objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster, such as which containers are running inside which Pods and on which nodes, what resources are available, and whether any policies apply to the applications.

These Kubernetes objects are declared in YAML format and used during deployments. The YAML file is passed to the kubectl command, which parses it and converts it into JSON before sending it to the API server.

  • spec: while creating the object, you specify the spec field, which defines the desired characteristics of the resource you want in the Kubernetes cluster.
  • Labels are key/value pairs that are attached to objects, such as pods. Labels are intended to be used to specify identifying attributes of objects that are relevant to users. Labels are used to organize and to select subsets of objects.
  • apiVersion – Which version of the Kubernetes API you’re using to create this object
  • kind – What kind of object you want to create
  • metadata – Data that helps uniquely identify the object, including a name string, UID, and optional namespace
apiVersion: apps/v1             # Which version of the Kubernetes API you are using
kind: Deployment                # What kind of object you would like to create
metadata:                       # Data that identifies the object, like name, UID and namespace
  name: tomcat-deployment
spec:                           # What you would like to achieve using this template
  replicas: 2                   # Run 2 Pods matching the template
  selector:
    matchLabels:
      app: my_tomcat_app        # The Deployment manages Pods with this label
  template:
    metadata:
      labels:
        app: my_tomcat_app
    spec:
      containers:
      - name: my-tomcat-container
        image: tomcat
        ports:
        - containerPort: 8080
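
For example, assuming you save the cleaned-up manifest above as tomcat-deployment.yaml (a hypothetical file name), you could apply it and then use its label to select the objects it created:

kubectl apply -f tomcat-deployment.yaml
kubectl get deployment tomcat-deployment
kubectl get pods -l app=my_tomcat_app -o wide   # select the Pods by the label from the template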

Kubernetes Workloads

The workload is the applications running on the Kubernetes cluster. Workload resources manage the set of Kubernetes Pods in the form of Kubernetes deployments, Kubernetes Replica Set, Kubernetes StatefulSet, Kubernetes DaemonSet, Kubernetes Job, etc.

Kubernetes deployments, Kubernetes Replica Set, Kubernetes StatefulSet, Kubernetes DaemonSet, Kubernetes Job, etc

Let’s learn about each of these Kubernetes workloads in the upcoming sections.

What is a kubernetes Pod?

A Kubernetes Pod is the Kubernetes entity in which your containers run, hosting the applications. When traffic to the apps increases, you scale by adding more Pods on the nodes rather than adding more containers to an existing Pod.

A Kubernetes Pod contains a single container or a group of containers that share storage and a network. It is recommended to add Pods rather than adding containers to a Pod, because more containers mean a more complex structure and more interconnections.

Kubernetes Pods are created using workload resources such as a Kubernetes Deployment or Kubernetes Job with the help of a YAML file, or by directly calling the Kubernetes API, and each Pod is assigned a unique IP address.

To create a highly available application, you should consider deploying multiple Pods, known as replicas. Healing of Pods is handled by the controller manager, which keeps monitoring the health of each Pod and asks the scheduler to place a replacement Pod when needed.

All containers in a Pod can access its shared volumes, allowing them to share data, and they share the same network namespace, including the IP address and network ports. Inside a Pod, the containers can communicate with one another using localhost.

The below is an example syntax

apiVersion: v1
kind: Pod
metadata:
  name: postgres
  labels:
    tier: db-tier
spec:
  containers:
    - name: postgres
      image: postgres
      env: 
       - name: POSTGRES_PASSWORD
         value: mysecretpassword
  • To create a Kubernetes Pod create a yaml file named pod.yaml and copy/paste the below content.
# pod.yaml template file that creates pod
apiVersion: v1        # It is of type String
kind: Pod               # It is of type String
metadata:             # It is of type Dictionary and contains data about the object 
  name: nginx
  labels: 
    app: nginx
    tier: frontend
spec:                  # It is of type List and Array because it can have multiple containers
  containers:
  - name: nginx
    image: nginx
  • Now to create a Kubernetes Pod execute the kubectl command.
kubectl create -f pod.yaml # Create the Pod from the manifest file
kubectl apply -f pod.yaml  # To run the above pod.yaml manifest file
Creating a Kubernetes Pod
  • You can also use below kubectl command to run a pod in kubernetes cluster.
kubectl run nginx --image nginx  # Running a pod

kubectl get pods -o wide  # To verify the Kubernetes pods.

kubectl describe pod nginx # To describe the pod in more detail

  • Note that the --link flag in the Docker command below links containers at the Docker level; Kubernetes Pods do not use --link and instead communicate through Services.
docker run helper --link app3
  • To create a Pod with a direct API request (for example through kubectl proxy listening on localhost:8001), use a command like the one below.
curl -X POST -H 'Content-Type: application/yaml' --data-binary @pod.yaml http://localhost:8001/api/v1/namespaces/default/pods

Deploying multi container Pod

In the previous section, you learned how to launch a Pod with a single container, but you sometimes need to run Kubernetes pods with multiple containers. Let’s learn how you can achieve this.

  • To create a multi container Kubernetes Pod create a yaml file named multi-container-demo.yaml and copy/paste the below content.
apiVersion: v1
kind: Pod
metadata:
  name: multicontainer-pod
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container-1                  # Container 1
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: ubuntu-container-2                  # Container 2
    image: ubuntu
    command: ["sleep", "3600"]                # keep the container running; avoids two nginx processes fighting over port 80
  • Now to create multi container Kubernetes Pod execute the kubectl command.
kubectl apply -f multi-container-demo.yaml  # To run the above pod.yaml manifest file
  • To check the Kubernetes Pod run kubectl get pods command.
Creating multi-container Kubernetes Pod
  • To describe both the containers in the kubernetes Pods run kubectl describe command as shown below.
kubectl describe pod multicontainer-pod
describe both the containers in the Kubernetes Pods


Conclusion

In this Ultimate Guide, you learned what Kubernetes is and how its architecture works, understood the Kubernetes cluster end to end, and saw how to declare Kubernetes manifest files to launch Kubernetes Pods.

Now that you have gained a good amount of knowledge about Kubernetes, continue with the PART-2 guide and become a Kubernetes pro.

Kubernetes Tutorial for Kubernetes Certification [PART-2]

How to Deploy kubernetes stateful application or kubernetes StatefulSets in AWS EKS cluster

Are you looking for permanent storage for your Kubernetes applications or Kubernetes Pods? If yes, you are at the right place to learn about Kubernetes stateful sets that manage the deployment and scaling of a set of Pods and provide guarantees about the ordering and uniqueness of these Pods.

In this tutorial, you will learn how to deploy a Kubernetes stateful sets application deployment step by step. Let’s get into it.


Table of Content

  1. Prerequisites
  2. What is kubernetes statefulsets deployment?
  3. Deploying kubernetes statefulsets deployment in Kubernetes Cluster
  4. Creating Kubernetes Namespace for kubernetes stateful sets deployment
  5. Creating a Storage class required for Persistent Volume (PV)
  6. Creating a persistent volume claim (PVC)
  7. Creating Kubernetes secrets to store passwords
  8. Creating the Stateful backend deployment in the cluster
  9. Creating the Stateful Frontend WordPress deployment
  10. Kubernetes Stateful application using AWS EBS vs Kubernetes Stateful application using AWS EFS
  11. Conclusion

Prerequisites

  • AWS EKS cluster already created.
  • AWS account

What is kubernetes statefulsets deployment?

Kubernetes stateful sets manage stateful applications such as MySQL, Databases, MongoDB, which need persistent storage. Kubernetes stateful sets manage the deployment and scaling of a set of Pods and provide guarantees about the ordering and uniqueness of these Pods.

With Kubernetes stateful sets with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1} and are terminated in reverse order, from {N-1..0}.

Deploying kubernetes statefulsets deployment in Kubernetes Cluster

In this article, you will deploy the Kubernetes stateful sets deployment with the following components:

  1. A frontend application, a WordPress service, deployed as a Kubernetes StatefulSet with a persistent volume (AWS EBS) to store HTML pages.
  2. A backend application, a MySQL service, deployed as a Kubernetes Deployment with a persistent volume (AWS EBS) to store MySQL data.
  3. A load balancer on top of the frontend application. The load balancer will route traffic to the WordPress Pods, and the WordPress Pods will store data in the MySQL Pod by routing it via the MySQL Service, as shown in the picture below.
Deploying Kubernetes stateful sets deployment in Kubernetes Cluster

Creating Kubernetes Namespace for kubernetes stateful sets deployment

Now that you know what Kubernetes stateful sets are and which components you need to deploy them in the Kubernetes cluster, you should deploy everything in a dedicated namespace to keep things simple. Let's create the Kubernetes namespace.

  • Create a Kubernetes namespace with the below command. Creating a namespace allows you to separate a particular project, team, or environment.
kubectl create namespace stateful-deployment
Kubernetes namespace created

Creating a Storage class required for Persistent Volume (PV)

Once you have the Kubernetes namespace created in the Kubernetes cluster, you will need to create storage for storing the website and database data.

In the AWS EKS service, the PersistentVolume (PV) is a piece of storage in the cluster implemented via an EBS volume, which has to be declared or dynamically provisioned using Storage Classes.

  • Let's begin by creating the storage class required for persistent volumes in the Kubernetes cluster. To create the storage class, first create a file named gp2-storage-class.yaml and copy/paste the below code.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
  • Now, create the Storage class by running the below command.
kubectl apply -f gp2-storage-class.yaml --namespace=stateful-deployment
Creating the Kubernetes Storage class in the Kubernetes cluster.

In case you receive an error, run the below command to mark the gp2 storage class as the default.

kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' --namespace=stateful-deployment
  • Next, verify all the storage classes present in the Kubernetes cluster.
kubectl get storageclasses --all-namespaces
Verifying the Kubernetes Storage class

Creating a persistent volume claim (PVC)

Now that you have created the storage class that the persistent volumes will use, create persistent volume claims (PVCs) so that the stateful app can request volumes and mount them in its corresponding Pods.

  • Again create a file named pvc.yaml and copy/paste the below content. The below code creates two PVCs, one for the MySQL backend service and the other for the WordPress frontend application.
apiVersion: v1
kind: PersistentVolumeClaim
# Creating persistent volume claim (PVC) for MySQL ( backend )
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
# Creating persistent volume claim (PVC) for WordPress ( frontend )
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  • Now execute the apply command to create the persistent volume.
kubectl apply -f pvc.yaml --namespace=stateful-deployment
Creating the Persistent Volume claim for WordPress and MySQL application
  • Verify the recently created persistent volume claims in the Kubernetes cluster. These PVCs are backed by AWS EBS volumes.
kubectl get pvc --namespace=stateful-deployment
Verify the recently created persistent volume claims in the Kubernetes cluster
  • Also verify the storage in the AWS EBS console; you will find the two volumes below.
Verifying the Persistent volume claims in AWS EBS

Creating Kubernetes secrets to store passwords

Up to now, you have created the Kubernetes namespace and the persistent volume claims successfully. The MySQL application password will be stored as a Kubernetes Secret, so let's create the Secret that will hold the password for the MySQL application.

  • Create a Secret that stores the MySQL password (mysql-pw), which will be injected into the container as an environment variable.
kubectl create secret generic mysql-pass --from-literal=password=mysql-pw --namespace=stateful-deployment
Creating Kubernetes secrets to store passwords
  • Next, verify the secrets that were recently created by using kubectl get command.
kubectl get secrets --namespace=stateful-deployment
Verify the Kubernetes secrets that were recently created using the kubectl get command

Creating the Stateful backend deployment in the cluster

Kubernetes Stateful deployment can happen either with AWS EBS or AWS EFS

Now that you have Kubernetes namespace, Persistent volume, secrets that you will consume in the application. Let’s get into building the stateful backend deployment.

  • Create a file mysql.yaml for the deployment and copy/paste the below code. apiVersion is the Kubernetes API version used to manage the object: for Deployments/ReplicaSets it is apps/v1, and for Pods and Services it is v1.
apiVersion: v1
# Kind denotes what kind of resource/object will kubernetes will create
kind: Service
# metadata helps uniquely identify the object, including a name string, UID, and optional namespace.
metadata:
  name: wordpress-mysql
# Labels are key/value pairs to specify attributes of objects that are meaningful and relevant to users.
  labels:
    app: wordpress
# spec define what state you desire for the object
spec:
  ports:
    - port: 3306
# The selector field allows deployment to identify which Pods to manage.
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
# Creating the enviornment variable MYSQL_ROOT_PASSWORD whose value will be taken from secrets 
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
# Volumes that we created PVC will be mounted here.
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
# Defining the volumes ( PVC ).
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
  • Now create mysql deployment and service by running the below command.
kubectl apply -f mysql.yaml --namespace=stateful-deployment
Creating the Stateful backend deployment in the cluster
  • Further check the Pods of MySQL backend deployment by running below command.
kubectl get pods -o wide --namespace=stateful-deployment
Verifying the Stateful backend deployment in the cluster

In the case of a Deployment with AWS EBS, all the Kubernetes Pods are created on the same AWS EC2 node with a single persistent EBS volume attached. However, in the case of a StatefulSet with EBS, the Kubernetes Pods can be created on different nodes, each with its own EBS volume attached.

Creating the Stateful Frontend WordPress deployment

Previously, you created a Stateful backend MySQL application deployment, which is great, but you will need to create a WordPress Front application deployment for a complete setup. Let’s get into it now.

  • Create a file wordpress.yaml for the deployment and copy/paste the below code. apiVersion is the Kubernetes API version used to manage the object: for Deployments/ReplicaSets it is apps/v1, and for Pods and Services it is v1.
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1
# Creating the WordPress deployment as stateful where multiple EC2 will have multiple pods with diff EBS
kind: StatefulSet
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  replicas: 1
  serviceName: wordpress-stateful
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
# The volumes section below applies when you use an existing PVC (wp-pv-claim); this is the usual approach for a Deployment, not a StatefulSet
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim
# The volumeClaimTemplates section below is valid only for a StatefulSet, not for Deployments, as a PVC is created dynamically for each Pod
  volumeClaimTemplates:
  - metadata:
      name: wordpress-persistent-storage
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: gp2
  • Now create wordpress deployment and service by running the below command.
kubectl apply -f wordpress.yaml --namespace=stateful-deployment
  • Further check the Pods of WordPress deployment by running below command.
kubectl get pods -o wide --namespace=stateful-deployment

Kubernetes Stateful application using AWS EBS vs Kubernetes Stateful application using AWS EFS

As discussed earlier, AWS EBS volumes are tied to a single Availability Zone, so recreated Pods can only be started in the same Availability Zone as the previous AWS EBS volume.

For example, suppose you have a Pod running on an AWS EC2 instance in Availability Zone (a) with an AWS EBS volume attached in the same zone. If the Pod is restarted on another AWS EC2 instance in the same zone, it will be able to attach the same AWS EBS volume. However, if the Pod is restarted on an instance in a different Availability Zone (b), it will not be able to attach the previous AWS EBS volume and will instead require a new AWS EBS volume in Availability Zone (b).

Kubernetes Stateful application using AWS EBS

As discussed, things are a little complicated with AWS EBS because EBS volumes are not shared volumes; they belong to a particular AZ rather than spanning multiple AZs. However, by using AWS EFS (Elastic File System), a shared volume that works across multiple AZs and Pods, this limitation can be overcome.

AWS EFS volumes are mounted as network file systems on multiple AWS EC2 instances regardless of AZ; they work efficiently across multiple AZs and are highly available.
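
For instance, with the AWS EFS CSI driver installed in the cluster, a StorageClass backed by a shared EFS file system could look like the sketch below (the file system ID fs-0123456789abcdef0 is a placeholder you would replace with your own):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com          # AWS EFS CSI driver
parameters:
  provisioningMode: efs-ap            # dynamic provisioning through EFS access points
  fileSystemId: fs-0123456789abcdef0  # placeholder EFS file system ID
  directoryPerms: "700"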

Kubernetes Stateful application using AWS EFS

Conclusion

In this article, you learned how to create permanent storage for your Kubernetes applications and mount it. Also, you learned that there are two ways to mount permanent storage to Kubernetes applications by using AWS EBS and AWS EFS.

Now, which applications do you plan to deploy in the AWS EKS cluster with permanent storage?

The Ultimate Guide on AWS EKS for Beginners [Easiest Way]

In this Ultimate Guide, you will learn, as a beginner, everything you should know about AWS EKS and how to manage your AWS EKS cluster.

Come on, let's begin!

Table of Content

  1. What is AWS EKS ?
  2. Why do you need AWS EKS than Kubernetes?
  3. Installing tools to work with AWS EKS Cluster
  4. Creating AWS EKS using EKSCTL command line tool
  5. Adding one more Node group in the AWS EKS Cluster
  6. Cluster Autoscaler
  7. Creating and Deploying Cluster Autoscaler
  8. Nginx Deployment on the EKS cluster when Autoscaler is enabled.
  9. EKS Cluster Monitoring and Cloud watch Logging
  10. What is Helm?
  11. Creating AWS EKS Cluster Admin user
  12. Creating Read only user for the dedicated namespace
  13. EKS Networking
  14. IAM and RBAC Integration in AWS EKS
  15. Worker nodes join the cluster
  16. How to Scale Up and Down Kubernetes Pods
  17. Conclusion

What is AWS EKS ?

Amazon provides its own managed service, AWS EKS, where you can host Kubernetes without worrying about infrastructure such as Kubernetes control plane nodes and Kubernetes installation. It gives you a platform on which to host Kubernetes.

Some features of Amazon EKS (Elastic Kubernetes Service):

  1. It spans multiple Availability Zones so that there is always high availability.
  2. It automatically scales and replaces any impacted or unhealthy node.
  3. It integrates with various other AWS services such as IAM, VPC, ECR, and ELB.
  4. It is a very secure service.

How does AWS EKS service work?

  • The first step in EKS is to create an EKS cluster using the AWS CLI, the AWS Management Console, or the eksctl command line tool.
  • Next, you can run your workloads on your own EC2 worker nodes, or deploy to AWS Fargate, which manages the compute for you.
  • Then connect to the Kubernetes cluster with kubectl or eksctl commands, as shown below.
  • Finally, deploy and run applications on the EKS cluster.
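
For example, once the cluster exists, you can point kubectl at it with the AWS CLI (assuming a cluster named EKS-course-cluster in us-east-1, as created later in this guide):

aws eks update-kubeconfig --name EKS-course-cluster --region us-east-1   # writes the cluster credentials into ~/.kube/config
kubectl get nodes                                                        # verify kubectl now talks to the EKS cluster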

Why do you need AWS EKS than Kubernetes?

If you run Kubernetes yourself, you are required to handle all of the below things on your own:

  1. Create and Operate K8s clusters.
  2. Deploy Master Nodes
  3. Deploy Etcd
  4. Setup CA for TLS encryption.
  5. Setup Monitoring, AutoScaling and Auto healing.
  6. Setup Worker Nodes.

But with AWS EKS, you only need to manage the worker nodes; everything else, the master nodes, etcd in high availability, the API server, KubeDNS, the scheduler, the controller manager, and the cloud controller, is taken care of by Amazon EKS.

You pay 0.20 US dollars per hour for your AWS EKS cluster, which comes to about 144 US dollars per month.

Installing tools to work with AWS EKS Cluster

  1. AWS CLI: required as a dependency of eksctl to obtain the authentication token. To install the AWS CLI, run the below command.
pip3 install --user awscli
After you install the AWS CLI, make sure to configure the access key ID and secret access key so that it can create the EKS cluster, as shown below.
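The quickest way to do that is the aws configure command, which prompts for the access key, secret key, default region, and output format:

aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]: us-east-1
# Default output format [None]: json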
  2. eksctl: used to set up and operate the EKS cluster. To install eksctl, run the below commands. The below command downloads the eksctl binary into the /tmp directory.
curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/v0.69.0/eksctl_Linux_amd64.tar.gz" | tar xz -C /tmp
  • Next, move the eksctl directory in the executable directory.
sudo mv /tmp/eksctl /usr/local/bin
  • To check the version of eksctl and see if it is properly install run below command.
eksctl version
  3. kubectl: used to interact with the k8s API server. To install the kubectl tool, first run the below command, which updates the system and installs the https transport package.
sudo apt-get update && sudo apt-get install -y apt-transport-https
  • Next, run the curl command that will add the gpg key in the system to verify the authentication with the kubernetes site.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  • Next, add the kubernetes repository
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
  • Again update the system so that it takes the effect after addition of new repository.
sudo apt-get update
  • Next install kubectl tool.
sudo apt-get install -y kubectl
  • Next, check the version of the kubectl tool by running below command.
kubectl version --short --client
  4. IAM user and IAM role:
  • Create an IAM user with administrator access and use that IAM user to explore AWS resources in the console. This is also the user whose credentials you will configure in the AWS CLI on the EC2 instance from which you manage the AWS EKS cluster.
  • Also make sure to create an IAM role and attach it to the EC2 instance from which you will manage AWS EKS and other AWS resources.

Creating AWS EKS using EKSCTL command line tool

Up to now, you have installed and set up the tools required to create an AWS EKS cluster. To see how to create a cluster using the eksctl command, run the help command, which lists the flags you need to use while creating an AWS EKS cluster.

eksctl create cluster --help 
  1. Let's begin creating an EKS cluster. To do that, create a file named eks.yaml and copy and paste the below content.
    • apiVersion is the eksctl API version that will manage the deployment.
    • kind denotes what kind of resource/object will be created. In the below case, as you need to provision a cluster, you should use ClusterConfig.
    • metadata: data that helps uniquely identify the object, including a name string and, here, the AWS region.
    • nodeGroups: provides the name of the node group and the other details required for the node groups that will be used in your EKS cluster.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-course-cluster
  region: us-east-1

nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 3
    ssh: # use existing EC2 key
      publicKeyName: eks-course
  2. Now, execute the command below to create the cluster.
eksctl create cluster -f eks.yaml
  3. Once the cluster is successfully created, run the below command to see the details of the cluster.
eksctl get cluster
  4. Next, verify the AWS EKS cluster in the AWS console.
  5. Also verify the nodes of the node groups that were created along with the cluster by running the below command.
kubectl get nodes
  6. Also verify the nodes in the AWS console. To check the nodes, navigate to EC2 instances.
  7. Verify the node groups in the EKS cluster by running the eksctl command.
eksctl get nodegroup --cluster EKS-cluster
  8. Finally, verify the number of Pods in the EKS cluster by running the below kubectl command.
kubectl get pods --all-namespaces

Adding one more Node group in the AWS EKS Cluster

To add another node group in EKS Cluster follow the below steps:

  1. Create a YAML file named node_group.yaml as shown below and copy/paste the below content. In the file you will notice that the previous node group ng-1 is still listed; if you run this file without it, the change will override the previous configuration and remove the ng-1 node group from the cluster.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-cluster
  region: us-east-1

nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 3
    ssh: # use existing EC2 key
      publicKeyName: testing
# Adding the another Node group nodegroup2 with min/max capacity as 3 and 5 resp.
  - name: nodegroup2
    minSize: 2
    maxSize: 3
    instancesDistribution:
      maxPrice: 0.2
      instanceTypes: ["t2.small", "t3.small"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
    ssh:
      publicKeyName: testing
  2. Next, run the below command to create the new node group.
eksctl create nodegroup --config-file=node_group.yaml --include='nodegroup2'
  3. If you wish to delete the node group from the EKS cluster, run either of the below commands.
eksctl delete nodegroup --cluster=EKS-cluster --name=nodegroup2
eksctl delete nodegroup --config-file=eks.yaml --include='nodegroup2' --approve
  • To scale a node group in the EKS cluster, run the below command.
eksctl scale nodegroup --cluster=name_of_the_cluster --nodes=5 --name=node_grp_2

Cluster Autoscaler

The Cluster Autoscaler automatically launches additional worker nodes if more resources are needed and shuts down worker nodes if they are underutilized. The autoscaling works within a node group, so you should create node groups with the Autoscaler feature enabled.

Cluster Autoscaler has the following features:

  • Cluster Autoscaler is used to scale up and down the nodes within the node group.
  • It runs as a Deployment and makes scaling decisions based on the Pods' CPU and memory requests.
  • It can contain on demand and spot instances.
  • There are two types of scaling
    • Multi AZ Scaling: Node group with Multi AZ ( Stateless workload )
    • Single AZ Scaling: Node group with Single AZ ( Stateful workload)

Creating and Deploying Cluster Autoscaler

The main function of the Cluster Autoscaler is to dynamically add or remove nodes within a node group. The Autoscaler runs as a Deployment and bases its decisions on CPU/memory requests.

There are two types of scaling available: multi-AZ (stateless workloads) versus single-AZ (stateful workloads), because EBS volumes cannot be spread across multiple Availability Zones.

To use the Cluster Autoscaler, you can add multiple node groups to the cluster as needed. In this example, let's deploy one on-demand node group pinned to a single AZ and one spot-instance node group spread across multiple AZs, both with the Autoscaler enabled.

  1. Create a file and name it autoscaler.yaml, then copy/paste the below content.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-cluster
  region: us-east-1

nodeGroups:
  - name: scale-east1c
    instanceType: t2.small
    desiredCapacity: 1
    maxSize: 10
    availabilityZones: ["us-east-1c"]
# iam holds all IAM attributes of a NodeGroup
# enables IAM policy for cluster-autoscaler
    iam:
      withAddonPolicies:
        autoScaler: true
    labels:
      nodegroup-type: stateful-east1c
      instance-type: onDemand
    ssh: # use existing EC2 key
      publicKeyName: eks-ssh-key
  - name: scale-spot
    desiredCapacity: 1
    maxSize: 10
    instancesDistribution:
      instanceTypes: ["t2.small", "t3.small"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 0
    availabilityZones: ["us-east-1c", "us-east-1d"]
    iam:
      withAddonPolicies:
        autoScaler: true
    labels:
      nodegroup-type: stateless-workload
      instance-type: spot
    ssh: 
      publicKeyName: eks-ssh-key

availabilityZones: ["us-east-1c", "us-east-1d"]
  2. Run the below command to create the node groups defined in autoscaler.yaml.
eksctl create nodegroup --config-file=autoscaler.yaml
  3. Verify the node groups by running the below command.
eksctl get nodegroup --cluster=EKS-cluster
  4. Next, to deploy the Cluster Autoscaler, run the below kubectl commands.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
kubectl -n kube-system annotate deployment.apps/cluster-autoscaler cluster-autoscaler.kubernetes.io/safe-to-evict="false"
  5. To edit the deployment and set your AWS EKS cluster name, run the below kubectl command.
kubectl -n kube-system edit deployment.apps/cluster-autoscaler
  6. Next, describe the Autoscaler deployment by running the below kubectl command.
kubectl -n kube-system describe deployment cluster-autoscaler
  7. Finally, view the Cluster Autoscaler logs by running the kubectl command against the kube-system namespace.
kubectl -n kube-system logs deployment.apps/cluster-autoscaler
  8. Verify the Pods. You should notice that the first Pod belongs to the first node group, the second to the second node group, and the third is the Autoscaler Pod itself.

Nginx Deployment on the EKS cluster when Autoscaler is enabled.

  1. To deploy the nginx application on the EKS cluster that you just created, create a YAML file, name it nginx-deployment.yaml (or any name you find convenient), and copy/paste the below content into it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        service: nginx
        app: nginx
    spec:
      containers:
      - image: nginx
        name: test-autoscaler
        resources:
          limits:
            cpu: 300m
            memory: 512Mi
          requests:
            cpu: 300m
            memory: 512Mi
      nodeSelector:
        instance-type: spot


  2. Now, to apply the nginx deployment, run the below command.
kubectl apply -f nginx-deployment.yaml
  3. After a successful deployment, check the number of Pods.
kubectl get pods
  4. Check the number and type of nodes, filtering on the label defined in the node group.
kubectl get nodes -l instance-type=spot
  5. Scale the deployment to 3 replicas (that is, 3 Pods).
kubectl scale --replicas=3 deployment/test-autoscaler
  6. Check the Cluster Autoscaler logs and filter for scaling events.
kubectl -n kube-system logs deployment.apps/cluster-autoscaler | grep -A5 "Expanding Node Group"

EKS Cluster Monitoring and Cloud watch Logging

By now, you have already set up the EKS cluster, but it is also important to monitor it. To monitor your cluster, follow the below steps:

  1. Create the below eks.yaml file and copy/paste the below code into it.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-cluster
  region: us-east-1

nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 3
    ssh: # use existing EC2 key
      publicKeyName: eks-ssh-key
cloudWatch:
  clusterLogging:
    enableTypes: ["api", "audit", "authenticator"] # To select only few log_types
    # enableTypes: ["*"]  # If you need to enable all the log_types
  1. Now apply the cluster logging configuration by running the below command.
eksctl utils update-cluster-logging --config-file eks.yaml --approve
  1. To disable all the log types, run:
eksctl utils update-cluster-logging --name=EKS-cluster --disable-types all

To get container metrics in CloudWatch: first attach the IAM policy CloudWatchAgentServerPolicy to the instance role of all your nodegroup(s), and then deploy the CloudWatch agent. After deployment, the agent runs in its own namespace (amazon-cloudwatch).
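As a sketch, attaching the managed policy to a nodegroup instance role can be done with the AWS CLI; the role name below is a placeholder for your actual nodegroup role.
aws iam attach-role-policy --role-name <your-nodegroup-instance-role> --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy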

  1. Now run the below command to deploy the CloudWatch agent and Fluentd, substituting your cluster name and region.
curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/master/k8s-yaml-templates/quickstart/cwagent-fluentd-quickstart.yaml | sed "s/{{cluster_name}}/EKS-cluster/;s/{{region_name}}/us-east-1/" | kubectl apply -f -
  1. To check everything that has been created in the amazon-cloudwatch namespace.
kubectl get all -n amazon-cloudwatch

To test Pod-level autoscaling with the Horizontal Pod Autoscaler, deploy the php-apache example below and generate load against it.
kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --limits=cpu=500m --expose --port=80
kubectl run --generator=run-pod/v1 -it --rm load-generator --image=busybox /bin/sh
Hit enter for the command prompt, then run:
while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
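The php-apache example above is normally paired with a Horizontal Pod Autoscaler; a minimal sketch, assuming kubectl run created a Deployment named php-apache (older kubectl versions did this) and using illustrative thresholds:
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
kubectl get hpa --watch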

What is Helm?

Helm is a package manager for Kubernetes, similar to apt in Ubuntu or pip in Python. Helm mainly contains three components (a short example follows the list).

  • Chart: all the dependency files and application files.
  • Config: any configuration that you would like to deploy with the chart.
  • Release: a running instance of a chart.
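To see how the three pieces fit together, here is an illustrative Helm 3 session; the release name my-redis and the values file custom-values.yaml are made-up examples.
helm install my-redis stable/redis -f custom-values.yaml   # chart: stable/redis, config: custom-values.yaml, release: my-redis
helm status my-redis                                       # inspect the running release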

Helm Components

  • Helm client: manages repositories and releases, and communicates with the Helm library.
  • Helm library: interacts with the Kubernetes API server.

Installing Helm

  • To install helm, first create a working directory with the below command and change into it.
mkdir helm && cd helm
  • Next, download and run the official Helm 3 installation script, then verify the installed version.
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
helm version
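The next commands reference charts from the stable repository; if it is not configured yet, you can add it first (the stable repo is deprecated but still served from this archive URL):
helm repo add stable https://charts.helm.sh/stable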
  • To list the configured repositories.
helm repo list
  • To update the repositories.
helm repo update
  • To search all the charts in the configured helm repositories.
helm search repo
  • To install one of the charts. After running the below command, check the number of Pods running using the kubectl get pods command.
helm install name_of_the_release stable/redis
  • To check the deployed charts (releases).
helm ls
  • To uninstall helm deployments.
helm uninstall <<name-of-release-from-previous-output>>

Creating AWS EKS Cluster Admin user

To manage resources in the EKS cluster you need dedicated users (Admin or Read-only) to perform tasks accordingly. Let's begin by creating an admin user first.

  1. Create IAM user in AWS console (k8s-cluster-admin) and store the access key and secret key for this user locally on your machine.
  2. Next, add the user to the mapUsers section of the aws-auth configmap. But before you add a user, let's list the configmaps in the kube-system namespace, because all the user mappings are stored in aws-auth.
kubectl -n kube-system get cm
  1. Save the current aws-auth configmap to a yaml file.
kubectl -n kube-system get cm aws-auth -o yaml > aws-auth-configmap.yaml
  1. Next, edit the aws-auth-configmap.yaml and add a mapUsers entry with the following information (a sample block is shown after this list):
    • userarn
    • username
    • groups, such as system:masters, which grants admin (full) permissions
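A sample mapUsers block, added under the data section of aws-auth-configmap.yaml alongside the existing mapRoles entries, might look like this; the account ID and user name are placeholders.
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/k8s-cluster-admin
      username: k8s-cluster-admin
      groups:
        - system:masters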
  1. Run the below command to apply the changes for the newly added user.
kubectl apply -f aws-auth-configmap.yaml -n kube-system

After you apply the changes, the AWS EKS console will no longer show warnings for this user such as "Kubernetes objects cannot be accessed".

  1. Now check that the user mapping has been added properly by running the describe command.
kubectl -n kube-system describe cm aws-auth
  1. Next, add the user's access keys to the AWS credentials file under a dedicated section (profile), as in the sample below, and then select that profile using the export command.
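The dedicated profile in ~/.aws/credentials could look like the below; the profile name and keys are placeholders.
[k8s-cluster-admin]
aws_access_key_id     = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY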
export AWS_PROFILE="profile_name"
  1. Finally, check which identity is currently being used by the AWS CLI.
aws sts get-caller-identity

Creating a read only user for the dedicated namespace

Similarly, now create a read-only user for the AWS EKS service. Let's follow the below steps to create a read-only user and map it to IAM in the configmap.

  1. Create a namespace using the below command.
kubectl create namespace production
  1. Create an IAM user on the AWS console.
  1. Create a file rolebinding.yaml and add both the Role and the RoleBinding that define the permissions the kubernetes user will have.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: production
  name: prod-viewer-role
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]  # can be further limited, e.g. ["deployments", "replicasets", "pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: prod-viewer-binding
  namespace: production
subjects:
- kind: User
  name: prod-viewer
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: prod-viewer-role
  apiGroup: rbac.authorization.k8s.io
  1. Now apply the role and role bindings using the below command.
kubectl apply -f rolebinding.yaml
  1. Next, edit the aws-auth-configmap.yaml as you did previously, add a mapUsers entry for this user (its userarn and username: prod-viewer), and apply the changes.
kubectl -n kube-system get cm aws-auth -o yaml > aws-auth-configmap.yaml
kubectl apply -f aws-auth-configmap.yaml -n kube-system
  1. Finally, test the user and the setup, as sketched below.
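For example, a quick sanity check could look like the below; the prod-viewer profile name and the pod name are placeholders.
export AWS_PROFILE="prod-viewer"
kubectl get pods -n production              # should succeed (get/list/watch are allowed)
kubectl delete pod some-pod -n production   # should be denied by RBAC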

EKS Networking

  • The Amazon VPC CNI plugin assigns each Pod an IP address from the VPC, linked to an ENI on the node.
  • Pods keep the same IP address inside and outside the EKS cluster within the VPC.
  • Use a sufficiently large subnet CIDR (for example /18) so that plenty of IP addresses are available.
  • Each EC2 instance supports only a limited number of ENIs/IP addresses, so each instance can run only a limited number of Pods (around 36 or so, depending on the instance type); see the estimate below.
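As a rough rule of thumb based on the AWS VPC CNI formula, the pod capacity of a node can be estimated as shown below; the t3.small numbers are an example.
# max pods per node = ENIs x (IPv4 addresses per ENI - 1) + 2
# e.g. t3.small: 3 ENIs x (4 - 1) + 2 = 11 pods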

IAM and RBAC Integration in AWS EKS

  • Authentication is done by IAM
  • Authorization is done by kubernetes RBAC
  • You can assign RBAC directly to IAM entities.

kubectl (user sends AWS identity) >>> connects to EKS >>> EKS verifies the AWS identity >>> Kubernetes RBAC authorizes the mapped user
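You can see both halves of this flow from the command line, assuming the cluster is named EKS-cluster as above:
aws sts get-caller-identity                    # the AWS identity kubectl will present
aws eks get-token --cluster-name EKS-cluster   # the token kubectl sends to the API server
kubectl auth can-i list pods --all-namespaces  # what Kubernetes RBAC authorizes for that identity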

Worker nodes join the cluster

  1. When you create a worker node, you assign it an IAM role, and that IAM role must be authorized in RBAC for the node to join the cluster. Add the system:bootstrappers and system:nodes groups in your aws-auth ConfigMap, with the rolearn value set to the NodeInstanceRole (a sample mapRoles entry is shown below), and then apply the ConfigMap with the below command.
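A sample mapRoles entry in the aws-auth ConfigMap could look like this; the account ID and role name are placeholders for your actual NodeInstanceRole ARN.
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes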
kubectl apply -f aws-auth.yaml
  1. Check current state of cluster services and nodes
kubectl get svc,nodes -o wide

How to Scale Up and Down Kubernetes Pods

There are three ways of scaling kubernetes Pods up or down. Let's look at each of them.

  1. Scale the deployment to 3 replicas (that is, 3 pods will run) using the kubectl scale command.
kubectl scale --replicas=3 deployment/nginx-deployment
  1. Next, update the yaml file with 3 replicas and run the below kubectl apply command (let's say the file is named abc.yaml).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        service: nginx
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx 
        resources:
          limits:
            cpu: 300m
            memory: 512Mi
          requests:
            cpu: 300m
            memory: 512Mi
      nodeSelector:
        instance-type: spot
kubectl apply -f abc.yaml
  1. Finally, you can also scale the Pods using the kubernetes Dashboard.
  1. Apply the manifest file that you created earlier by running the below command.
kubectl apply -f nginx.yaml
  1. Next, verify that the deployment has been applied successfully.
kubectl get deployment --all-namespaces

Conclusion

In this tutorial you learned about AWS EKS, from beginner to advanced level.

Now that you have a strong understanding of AWS EKS, which applications do you plan to manage on it?

How to Create your first Helm Charts kubernetes

Are you spending hours or even days deploying applications in kubernetes because of dozens of unorganised deployment yaml files? Instead, why not consider helm charts, one of the best tools for deploying and managing applications on Kubernetes clusters?

In this tutorial, you will learn step-by-step how to create a Helm chart, set up, and deploy on a web server. Helm charts simplify application deployment on a Kubernetes cluster.

Without any delay, let's dive into it.


Table of Content

  1. What is Helm?
  2. What is Helm charts in kubernetes?
  3. Prerequisites
  4. How to Install Helm on windows 10
  5. How to Install Helm on Ubuntu machine
  6. Installing Minikube on Ubuntu machine
  7. Creating Helm charts
  8. Configuring the Helm chart
  9. Deploying Helm chart on ubuntu machine
  10. Verifying the Kubernetes application deployed using Helm chart
  11. Conclusion

What is Helm?

Helm is a package manager for kubernetes which makes application deployment and management easier. Helm is a command-line tool that allows you to create helm charts.

What is Helm charts in kubernetes?

Helm charts are a collection of templates and settings that define a set of kubernetes resources. In a Helm chart, you define all the resources needed as part of the application, and Helm communicates with the kubernetes cluster using its REST API.

Helm charts make it easier to deploy and manage application deployments in a kubernetes cluster, and Helm stores the various versions of the charts.

Prerequisites

  • An ubuntu machine with Kubectl and docker installed. This tutorial will use Ubuntu 20.04 version.

How to Install Helm on windows 10

Now that you have a basic idea of what helm and helm charts in kubernetes are, let's kick off this section by learning how to install helm on a Windows 10 machine.

  • Open the browser and navigate to the Github releases page where the Helm package is published: https://github.com/helm/helm/releases. On the Github page, search for the Windows amd64 download link.
Downloading the Helm package manager from Github
  • Now, extract the windows-amd64 zip to a preferred location. After extracting, you will see the helm application (helm.exe).
Extract the Helm package manager on Windows machine
  • Now open a command prompt and navigate to the path where you extracted the helm package. Next, from the same path, run helm.exe.
Executing the helm package manager
  • Once helm is installed properly, verify it by running the helm version command.
Verifying the helm package manager

How to Install Helm on Ubuntu machine

Previously you learned how to install helm on a Windows 10 machine; in this section, let's learn how to install Helm on an Ubuntu machine.

  • Log in to the Ubuntu machine using your favorite SSH client.
  • Next, download the latest version of Helm package using the below command.
 wget https://get.helm.sh/helm-v3.4.1-linux-amd64.tar.gz
Downloading the latest version of Helm package on ubuntu machine
  • Once the package is downloaded, unpack the helm package manager using the tar command.
tar xvf helm-v3.4.1-linux-amd64.tar.gz

Unpack the helm package manager
  • Now move linux-amd64/helm to /usr/local/bin so that helm command can run from anywhere on ubuntu machine.
sudo mv linux-amd64/helm /usr/local/bin
  • Finally verify helm package manager by running the helm version.
helm version
Verifying the helm package manager on ubuntu machine

Installing Minikube on Ubuntu machine

Now you have the helm package manager installed successfully on the ubuntu machine. But to deploy helm charts you need a kubernetes cluster on your machine, and one of the most widely used lightweight options is minikube, a local Kubernetes distribution focused on making it easy to learn and develop for Kubernetes.

So, let's dive in and install minikube on the Ubuntu machine.

  • First download the minikube package on ubuntu machine by running curl command as shown below.
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
  • Next, install minikube on Ubuntu machine by running dpkg command.
sudo dpkg -i minikube_latest_amd64.deb
Installing minikube on ubuntu machine
  • Now start minikube as a normal user (not as root) by running the minikube start command.
minikube start
Starting minikube on ubuntu machine
  • Now verify if minikube is installed properly by running minikube status command.
minikube status
Verifying the minikube on ubuntu machine

Creating Helm charts

Now you have helm and minikube installed successfully on the Ubuntu machine. To create a helm chart, follow the below steps.

  • In the home directory of your ubuntu machine, create a helm chart by running the below command.
helm create automate
Creating a new helm chart
  • Once the helm chart is created, a folder with the same name is also created, containing the chart files.
Viewing the files and folders of helm chart created.

Configuring the Helm chart

Now you have the Helm chart created successfully, which is great, but to deploy an application you need to configure the files that were generated earlier by the helm create command.

  • Chart.yaml: contains details of the helm chart such as the name, description, api version, chart version to be deployed, etc.
  • templates: contains the configuration files required for the application that will be deployed to the cluster, such as ingress.yaml, service.yaml, etc. For this tutorial you don't need to modify this directory.
template directory inside the helm chart folder
  • charts: this directory is empty initially; other dependent charts are added here if required. For this tutorial you don't need to modify this directory.
  • values.yaml: this file contains all the configuration related to deployments. Edit this file as below:
    • replicaCount: set to 1, which means only 1 pod will come up.
    • pullPolicy: update it to Always.
    • nameOverride: automate-app
    • fullnameOverride: automate-chart
    • There are two networking options available: a) ClusterIP, which exposes the service on a cluster-internal IP, and b) NodePort, which exposes the service on each kubernetes node's IP address. You will use NodePort for this tutorial.

Your values.yaml file should look something like the below.

replicaCount: 1

image:
  repository: nginx
  pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: "automate-app"
fullnameOverride: "automate-chart"

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: "automateinfra"

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
service:
  type: NodePort
  port: 80

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}

Deploying Helm chart on ubuntu machine

Now that you’ve made the necessary changes in the configuration file to create a Helm chart, next you need to deploy it using a helm command.

helm install automate-chart automate/ --values automate/values.yaml
Deploying the applications using helm chart
  • The helm install command deployed the application successfully; next, run the export commands below to retrieve the NODE_PORT and NODE_IP details.
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services automate-chart)

export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")

Verifying the Kubernetes application deployed using Helm chart

Previously you deployed the application using the helm install command, but it is important to verify that the application was deployed successfully. To verify, perform the below steps:

  • Run the echo command as shown below to obtain the application URL. NODE_PORT and NODE_IP were set when you executed the export commands in the previous section.
echo http://$NODE_IP:$NODE_PORT
Obtaining the kubernetes application URL
  • Next, run the curl command to test the application, for example as shown below.
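Assuming the NODE_IP and NODE_PORT variables exported earlier, the check looks like:
curl http://$NODE_IP:$NODE_PORT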
Running the Kubernetes application

Verifying the Kubernetes pods deployed using Helm charts

The application is deployed successfully and, as you can see, the nginx page loaded, but let's also verify the kubernetes nodes and pods by running the below kubectl commands.

kubectl get nodes
kubectl get pods
Kubernetes pods and Kubernetes Nodes

Conclusion

After following the outlined step-by-step instructions, you have a Helm chart created, set up, and deployed on a web server. Helm charts simplify application deployment on a Kubernetes cluster.

Which applications do you plan to deploy next on your kubernetes cluster using helm charts?

Kubernetes in Cloud: Getting Started with Amazon EKS or AWS EKS

Kubernetes is a scalable open-source tool that manages container orchestration extremely effectively, but does Kubernetes work in the cloud as well? Yes, it does, most notably through the widely used service AWS EKS, which stands for Amazon Elastic Kubernetes Service.

You can manage Kubernetes in public clouds such as GCP, AWS, etc., to deploy and scale containerized applications.

In this tutorial, you will learn the basics of Kubernetes, Amazon EKS, or AWS EKS.


Table of Content

  1. What is Kubernetes?
  2. kubernetes architecture and kubernetes components
  3. What is AWS EKS (Amazon EKS) ?
  4. How does AWS EKS service work?
  5. Prerequisites
  6. AWS EKS Clusters components
  7. AWS EKS Control Plane
  8. Workload nodes
  9. How to create aws eks cluster in AWS EKS
  10. AWS EKS cluster setup: Additional nodes on AWS EKS cluster
  11. Connecting AWS EKS Cluster using aws eks update kubeconfig
  12. How to Install Kubectl on Windows machines
  13. Install Kubectl on Ubuntu machine
  14. Conclusion

What is Kubernetes?

Kubernetes is an open-source container orchestration engine for automating deployment, scaling, and management of containerized applications. Kubernetes was originally developed by Google and is also known as k8s. It can run on any platform, such as on-premises, hybrid, or public cloud. Some of the features of Kubernetes are:

  • kubernetes cluster scales when needed and is load balanced.
  • kubernetes cluster has the capability to self-heal and automatically provide rollbacks.
  • kubernetes allows you to store configurations, secrets, or passwords.
  • Kubernetes can mount various storage backends such as EFS and local storage.
  • Kubernetes works well with various networking and storage components such as NFS, etc.

kubernetes architecture and kubernetes components

When you install Kubernetes, you create a Kubernetes cluster that mainly contains two kinds of components: master (controller) nodes and worker nodes. Nodes are the machines that run their own Linux environment, which could be a virtual machine or a physical machine.

The application and services are deployed in the containers within the Pods inside the worker nodes. Pods contain one or more docker containers. When a Pod runs multiple containers, all the containers are considered a single entity and share the Node resources.

Bird-eye view of Kubernetes cluster
  • Pod: Pods are groups of containers that have shared storage and network.
  • Service: Services are used when you want to expose the application outside of your local environment.
  • Ingress: Ingress helps in exposing http/https routes from the outside world to the services in your cluster.
  • ConfigMap: Pods consume configmaps as environment variables, command-line arguments, or configuration files.
  • Secrets: Secrets as the name suggest stores sensitive information such as password, OAuth tokens, SSH keys, etc.
  • Volumes: These are persistent storage for containers.
  • Deployment: Deployment is an additional layer that helps to define how Pod and containers should be created using yaml files.
kubernetes components

What is AWS EKS (Amazon EKS) ?

Amazon provides an AWS managed service AWS EKS that allows hosting Kubernetes without needing you to install, operate, and maintain Kubernetes control plane or nodes, services, etc. Some of the features of AWS EKS are:

  • AWS EKS runs and scales the Kubernetes control plane across multiple availability zones so that there is always high availability.
  • It automatically scales and replaces control plane instances if any instance is impacted or unhealthy.
  • It is integrated with various other AWS services, such as IAM for authentication, VPC for isolation, ECR for container images, and ELB for load distribution.
  • It is a very secure service.

How does AWS EKS service work?

Previously you learned what AWS EKS is; now let's learn how AWS EKS works. The first step is to create an EKS cluster using the AWS CLI or the AWS Management Console, specifying whether you want self-managed AWS EC2 instances or workloads deployed to AWS Fargate, which automatically manages everything.

Further, once the Kubernetes cluster is set up, connect to the cluster using kubectl commands and deploy applications.

AWS EKS cluster using EC2 or AWS Fargate

Prerequisites

  • You must have an AWS account with admin rights on AWS EKS and IAM in order to set up the cluster. If you don't have an AWS account, please create one from here: AWS account.
  • AWS CLI installed. If you don’t have it already install it from here.
  • Ubuntu 16 or plus version machine.
  • Windows 7 or plus machine.

AWS EKS Clusters components

Now that you have a basic idea of the AWS EKS cluster, it is important to know the components of AWS EKS Clusters. Let’s discuss each of them now.

AWS EKS Control Plane

The AWS EKS control plane is not shared between AWS accounts or with other EKS clusters. The control plane contains at least two API servers exposed via the Amazon EKS endpoint and three etcd instances backed by Amazon EBS volumes.

Amazon EKS automatically monitors the load on the control plane and replaces unhealthy instances when needed. Amazon EKS uses Amazon VPC network policies to restrict traffic between control plane components within a single cluster.

AWS EKS nodes

Amazon EKS nodes are registered with the control plane via the API server endpoint and a certificate file created for your cluster. Your Amazon EKS cluster can schedule pods on AWS EKS nodes which may be self-managed, Amazon EKS Managed node groups, or AWS Fargate.

Self-managed nodes

Self-managed nodes are Windows and Linux machines that are managed by you. The nodes run pods that share the node's kernel runtime environment. Also, if a pod requires more resources than requested, you are responsible for providing the additional memory or CPU, and you can assign pod IP addresses from a different CIDR block than the IP address assigned to the node.

Amazon EKS Managed node groups

Previously you learned about self-managed nodes, which are managed by you; in the case of AWS EKS managed node groups, you don't need to provision or register Amazon EC2 instances. All managed nodes are part of an Amazon EC2 Auto Scaling group.

AWS takes care of everything, from managing and scaling nodes to allocating resources such as IP addresses, CPU, and memory. Although everything is managed by AWS, you are still allowed to SSH into the nodes. Like self-managed nodes, the pods running on a node share the node's kernel.

You can add a managed node group to new or existing clusters using the Amazon EKS console, eksctl, AWS CLI, AWS API, or AWS Cloud Formation. Amazon EKS managed node groups can be launched in public and private subnets. You can create multiple managed node groups within a single cluster.

AWS Fargate

AWS Fargate is a serverless technology that you can use with Amazon EKS (and Amazon ECS) to run containers without managing servers or clusters of Amazon EC2 instances. With Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. With AWS Fargate, each pod has a dedicated kernel, and as there are no nodes that you manage, you cannot SSH into a node.

Kubernetes cluster architecture

Workload nodes

A workload is an application running on the Kubernetes cluster, and every workload controls a set of pods. There are five types of workload resources on a cluster.

  • Deployment: Ensures that a specific number of pods run and includes logic to deploy changes. Deployments can be rolled back and stopped.
  • ReplicaSet: Ensures that a specific number of pods run. Can be controlled by deployments. ReplicaSets cannot be rolled back or stopped.
  • StatefulSet: Manages the deployment of stateful applications where you need persistent storage.
  • DaemonSet: Ensures that a copy of a pod runs on all (or some) nodes in the cluster.
  • Job: Creates one or more pods and ensures that a specified number of them run to completion.

By default, Amazon EKS clusters have three workloads:

  • coredns: for name resolution for all pods in the cluster.
  • aws-node: to provide VPC networking functionality to the pods and nodes in your cluster.
  • kube-proxy: to manage network rules on nodes that enable networking communication to your pods.
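You can confirm these default workloads with a quick kubectl query; coredns runs as a Deployment, while aws-node and kube-proxy run as DaemonSets in the kube-system namespace.
kubectl get deployments,daemonsets -n kube-system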

How to create AWS EKS cluster in AWS EKS

Now that you have an idea about the AWS EKS cluster and its components, let's learn how to create an AWS EKS cluster and set up Amazon EKS using the AWS Management Console and AWS CLI commands.

  • Make a note of the VPC that you want to use for the AWS EKS cluster.
Choosing the correct AWS VPC
  • Next, on the IAM page, create an IAM policy with full EKS permissions.
Creating an IAM Policy
  • Click on Create policy and then choose EKS as the service.
Choosing the configuration on IAM Policy
  • Now provide the name to the policy and click create.
Reviewing the details and creating the IAM Policy
IAM Policy created successfully
  • Next, navigate to IAM role and create a role.
Choosing the Create role button
  • Now, in the role, choose the AWS EKS service and then select EKS Cluster as your use case:
Configure the IAM role
Selecting the use case in IAM role
  • Further, specify a name for the role and then click on Create role.
Creating the IAM role
  • Now attach the IAM policy that you created previously, along with the AmazonEKSClusterPolicy, to the IAM role.
Attaching the IAM policy on the IAM role
Adding permission on the IAM role
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*",
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Adding the Trusted entities

Now that you have the IAM role created for the AWS EKS cluster with the IAM policy attached, let's dive into the creation of the Kubernetes cluster.

  • Now navigate to AWS EKS console and click on Create cluster
creating AWS EKS Cluster
  • Next, add all the configurations related to cluster as shown below.
Configure AWS EKS Cluster
  • Further, provide networking details such as VPC, subnets, etc. You may skip subnets for now.
Configure network settings of AWS EKS Cluster
  • Keep hitting Next and finally click on Create cluster. It may take a few minutes for the cluster to come up.
AWS EKS Cluster creation is in progress
  • Let's verify that the cluster is up and active, as you can see below.
Verifying the AWS EKS Cluster

The Kubernetes cluster on AWS EKS is now successfully created. Next, let's initiate communication from the client to the Kubernetes cluster.

AWS EKS cluster setup: Additional nodes on AWS EKS cluster

As discussed previously, the Amazon EKS cluster can schedule pods on any combination of self-managed nodes, Amazon EKS managed nodes, and AWS Fargate. In this section, let's learn how you can add additional nodes using an Amazon EKS managed node group.

To create a managed node group using the AWS Management Console:

  • Navigate to the Amazon EKS page ➔ Configuration tab ➔ Compute tab ➔ Add Node Group and provide all the details such as name, node IAM role that you created previously.
Checking the AWS EKS Node groups

Further specify Instance type, Capacity type, networking details such as VPC details, subnets, SSH Keys details, and click create. As you can see below, the nodes are added successfully by creating a new group.

Verifying the new nodes in the AWS EKS Node groups
  • To find node details from your machine run the below commands.
aws eks update-kubeconfig --region us-east-2 --name "YOUR_CLUSTER_NAME"
kubectl get nodes --watch
AWS EKS nodes details

To create Fargate (Linux) nodes you need to create a Fargate profile: when a pod is deployed on Fargate, it is first matched against the desired configuration from the profile and only then deployed. The configuration contains permissions such as the pod's ability to pull the container image from ECR, etc. To create a Fargate profile, click here.

Connecting AWS EKS Cluster using aws eks update kubeconfig

You have created and set up the AWS EKS cluster successfully and learned how to add additional nodes to it, which is great. But do you know how to connect to the AWS EKS cluster from your local machine? Let's learn how to connect to the AWS EKS cluster using aws eks update-kubeconfig.

Make sure the AWS credentials configured on the local machine match the same IAM user or IAM role that you used while creating the AWS EKS cluster.

  • Open Visual studio or GIT bash or command prompt.
  • Now, configure kubeconfig to make communication from your local machine to Kubernetes cluster in AWS EKS
aws eks update-kubeconfig --region us-east-2 --name Myekscluster
aws eks update kubeconfig command
  • Finally, test the communication between the local machine and the cluster after adding the configuration. Great, you can see the connectivity from the local machine to the Kubernetes cluster!
kubectl get svc
Verifying the connectivity from local machine to AWS EKS cluster

How to Install Kubectl on Windows machines

Now that you have a basic idea of what an EKS cluster is, note that it is typically managed using the kubectl tool. Although you can manage the AWS EKS cluster manually with the AWS Management Console, running kubectl is easy and straightforward. Let's dive into how to install kubectl on a Windows machine.

  • Open PowerShell on your Windows machine and run the below curl command in any folder of your choice. The command downloads the kubectl binary to the Windows machine.
curl -o kubectl.exe https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/windows/amd64/kubectl.exe
  • Now verify in the C drive that the binary file has been downloaded successfully.
Downloading the kubectl binary
  • Next, run the kubectl binary file, i.e. kubectl.exe.
Running kubectl binary
  • Verify if Kubectl is properly installed by running kubectl version command.
kubectl version --short --client
Verifying the kubectl version

Install Kubectl on Ubuntu machine

Previously you learned how to install kubectl on a Windows machine; now let's quickly check out how to install kubectl on an Ubuntu machine.

  • Login to the Ubuntu machine using SSH client.
  • Download the kubectl binary using the curl command on the ubuntu machine under the home directory, i.e. $HOME.
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
Installing Kubectl on Ubuntu machine
  • Next, after downloading kubectl, grant execute permissions to the binary so it can run.
chmod +x ./kubectl
  • Copy the binary to a folder in your PATH so that kubectl command can run from anywhere on your machine.
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
  • Verify the kubectl version on ubuntu machine again by running kubectl version.
kubectl version --short --client
Kubectl version on Ubuntu machine

Conclusion

In this tutorial, you learned about Kubernetes and Amazon Elastic Kubernetes Service (AWS EKS), how to install the Kubernetes client kubectl on Windows and Linux machines, and finally how to create an AWS EKS cluster and connect to it using the kubectl client.

Now that you have a newly launched AWS EKS cluster setup, what do you plan to deploy on it?