The Ultimate Kubernetes Interview Questions for Kubernetes Certification (CKA)

If you are preparing for a DevOps interview, Kubernetes interview questions, or a Kubernetes certification, consider bookmarking this Ultimate Kubernetes Interview Questions for Kubernetes Certification (CKA) tutorial, which will serve you well in any Kubernetes interview.

Without further delay, let’s get into these Kubernetes interview questions for the Kubernetes Certification (CKA).



Related: Kubernetes Tutorial for Kubernetes Certification [PART-1]

Related: Kubernetes Tutorial for Kubernetes Certification [PART-2]

PAPER-1

Q1. How to create a Kubernetes namespace using the kubectl command?

Answer: A Kubernetes namespace can be created using the kubectl create command.

kubectl create namespace namespace-name

Q2. How to create a kubernetes namespace named my-namespace using a manifest file?

Answer: Create the file named namespace.yaml as shown below.

apiVersion: v1
kind: Namespace
metadata: 
    name: my-namespace
  • Now execute the below kubectl command.
kubectl create -f namespace.yaml
Creating the Kubernetes namespace (my-namespace)

Q3. How to switch from one Kubernetes namespace to another Kubernetes namespace?

Answer: To switch between two kubernetes namespaces, run the kubectl config set-context command.

kubectl config set-context $(kubectl config current-context) --namespace my-namespace2
Switching from one Kubernetes namespace to another Kubernetes namespace
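
Alternatively, newer versions of kubectl support a --current flag that avoids the subshell; a minimal sketch of the same switch:

kubectl config set-context --current --namespace=my-namespace2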

Q4. How to list the Kubernetes namespaces in a Kubernetes cluster?

Answer: Run the kubectl get command as shown below.

kubectl get namespaces

Q5. How to create the Kubernetes namespaces in a Kubernetes cluster?

Answer: Execute the below kubectl command.

kubectl create namespace namespace-name

Q6. How to delete a Kubernetes namespace using the kubectl command?

Answer: The kubectl delete command allows you to delete Kubernetes API objects.

kubectl delete namespaces namespace-name

Q7. How to create a new Kubernetes pod with nginx image?

Answer: Use the kubectl run command to launch a new Kubernetes Pod.

kubectl run nginx-pod --image=nginx
Running the kubectl run command to create a new Pod

Q8. How to create a new Kubernetes pod in a different Kubernetes namespace?

Answer: Use the kubectl run command to launch a new Kubernetes Pod, followed by the --namespace flag.

kubectl run nginx-pod --image=nginx --namespace=kube-system
Creating a new Kubernetes pod in a different Kubernetes namespace

Q9. How to check the running Kubernetes pods in the Kubernetes cluster?

Answer:

kubectl get pods
Checking the running Kubernetes pods

Q10. How to check the running Kubernetes pods in the Kubernetes cluster in a different kubernetes namespace?

Answer:

kubectl get pods --namespace=kube-system | grep nginx
Checking the running Kubernetes pods in a different kubernetes namespace

Q11. How to check the Docker image name for a running Kubernetes pod and get all the details?

Answer: Execute the kubectl describe command.

kubectl describe pod pod-name
Describing the kubernetes Pod

Q12. How to Check the name of the Kubernetes node on which Kubernetes pods are deployed?

Answer:

kubectl get pods -o wide
Checking the name of the Kubernetes node

Q13. How to check the details of docker containers in the Kubernetes pod?

Answer:

kubectl describe pod pod-name
Checking the details of docker containers

Q14. What does READY status signify in kubectl command output?

Answer: The READY status shows the number of ready containers out of the total number of containers in the Pod.

kubectl get pod -o wide
Checking the READY status
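
For example, in the illustrative output below (the pod name is hypothetical), READY 1/1 means one container is ready out of one container in the pod:

NAME        READY   STATUS    RESTARTS   AGE
nginx-pod   1/1     Running   0          2m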

Q15. How to delete the Kubernetes pod in the kubernetes cluster?

Answer: Use the kubectl delete command.

kubectl delete pod webapp
Deleting the Kubernetes pod

Q16. How to edit the Docker image of the container in the Kubernetes Pod?

Answer: Use the kubectl edit command.

kubectl edit pod webapp

Q17. How to Create a manifest file to launch a Kubernetes pod without actually creating the Kubernetes pod?

Answer: The --dry-run=client flag should be used.

kubectl run nginx --image=nginx --dry-run=client -o yaml > my-file.yaml
Launching a Kubernetes pod manifest without actually creating the Kubernetes pod

Q18. How to check the number of Kubernetes Replicasets running in the kubernetes cluster?

Answer: Run the kubectl get command.

kubectl get rs
kubectl get replicasets
Checking the Replicasets in the kubernetes cluster

Q19. How to find the correct apiVersion for a Kubernetes Replicaset or Kubernetes deployment?

Answer:

kubectl explain rs | grep VERSION
Finding the Kubernetes replicaset or kubernetes deployment version

Q20. How to delete the Kubernetes Replicasets in the Kubernetes cluster?

Answer: Run the below command.

kubectl delete rs replicaset-1 replicaset-2
Deleting the Kubernetes Replicasets

Q21. How to edit the Kubernetes Replicasets in the Kubernetes cluster?

Answer: Run the below command.

kubectl edit rs replicaset-name

Q22. How to Scale the Kubernetes Replicasets in the Kubernetes cluster?

Answer: To scale the Kubernetes Replicasets, you can use any of the three below commands.

kubectl scale  --replicas=5 rs rs_name
kubectl scale --replicas=6 -f file.yml # Doesn't change the number of replicas in the file.
kubectl replace -f file.yml
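
For reference, a minimal ReplicaSet manifest looks like the sketch below (the name, labels, and replica count are illustrative):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx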

Q23. How to Create the Kubernetes deployment in the kubernetes Cluster?

Answer: Use the kubectl create command.

kubectl create deployment nginx-deployment --image=nginx
Creating the Kubernetes deployment
kubectl create deployment my-deployment --image=httpd:2.4-alpine
Creating the Kubernetes deployment

Note: Deployment strategies are of two types:

  • Recreate strategy, where all the pods of the deployment are terminated together and new pods are created.
  • Rolling update strategy, where a few pods at a time are replaced with newly created pods.

To Update the deployment use the below commands.

  • To update the deployments
kubectl apply -f deployment-definition.yml
  • To update the deployment such as using nginx:1.16.1 instead of nginx:1.14.2
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record
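
The strategy itself is declared in the Deployment spec; below is a minimal sketch of a rolling update configuration (the maxSurge and maxUnavailable values are illustrative):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # How many extra pods may be created above the desired count
      maxUnavailable: 1     # How many pods may be unavailable during the update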

Q24. How to Scale the Kubernetes deployment in the kubernetes Cluster?

Answer:

kubectl scale deployment my-deployment --replicas=3
Scaling the Kubernetes deployment

Q25. How to Edit the Kubernetes deployment in the kubernetes Cluster?

Answer:

kubectl edit deployment my-deployment
Editing the Kubernetes deployment

Q26. How to Describe the Kubernetes deployment in the kubernetes Cluster?

Answer:

kubectl describe deployment my-deployment
Describing the Kubernetes deployment

Q27. How to pause the Kubernetes deployment in the kubernetes Cluster?

Answer: Use the kubectl rollout command.

kubectl rollout pause deployment.v1.apps/my-deployment
Pausing the kubernetes deployment
Viewing the paused kubernetes deployment
  • To check the rollout status and then view all the revisions and rollout history, use the below commands.
kubectl rollout status deployment.v1.apps/my-deployment

kubectl rollout history deployment.v1.apps/my-deployment

Q28. How to resume the Kubernetes deployment in the kubernetes Cluster?

Answer:

kubectl rollout resume deployment.v1.apps/my-deployment
Resuming the Kubernetes deployment

Q29. How to check the rollout history of a Kubernetes deployment in the kubernetes Cluster?

Answer:

For incorrect Kubernetes deployments, such as one with an incorrect image, the rollout gets stuck. Make sure to stop watching the rollout using Ctrl+C, then execute the rollout history command.

kubectl rollout history deployment.v1.apps/nginx-deployment

Q30. How to roll back to the previous stable kubernetes deployment version in the kubernetes Cluster?

Answer: Run the undo command as shown below.

kubectl rollout undo deployment.v1.apps/nginx-deployment

Q31. How to Create a manifest file to create a Kubernetes deployment without actually creating the Kubernetes deployment?

Answer: Use the --dry-run=client flag.

kubectl create deployment nginx --image=nginx --dry-run=client -o yaml
Creating the kubernetes deployment manifest file

Q32. How to Create a manifest file to create a Kubernetes deployment with Replicasets without actually creating the Kubernetes deployment?

Answer: Use the --dry-run=client flag.

kubectl create deployment nginx --image=nginx --replicas=4 --dry-run=client -o yaml
Creating the kubernetes deployment with replicasets with a manifest file

Q33. How to Create a Kubernetes service using a manifest file?

Answer: Create the manifest file and then run the kubectl create command.

kubectl create -f service-definition.yml

Q34. How to Check running Kubernetes service in the kubernetes cluster?

Answer: To check the running Kubernetes services in the kubernetes cluster run below command.

kubectl get svc
kubectl get services
Checking Kubernetes services in the kubernetes cluster

Q35. How to Check details of kubernetes service such as targetport, labels, endpoints in the kubernetes cluster?

Answer:

kubectl describe service service-name
Describing the Kubernetes service in the kubernetes cluster

Q36. How to Create a Kubernetes NodePort service in the kubernetes cluster?

Answer: Run the kubectl expose command.

kubectl expose deployment nginx-deploy --name=my-service --target-port=8080 --type=NodePort --port=8080 -o yaml -n default  # Make sure to add the nodePort separately
Kubernetes NodePort service
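
Since kubectl expose does not let you set the node port directly, you can declare it in the manifest; a minimal sketch (the selector label and the nodePort value 30080 are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: nginx-deploy
  ports:
  - port: 8080          # Port exposed inside the cluster
    targetPort: 8080    # Port the container listens on
    nodePort: 30080     # Port opened on every node (must be in the 30000-32767 range)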

Q37. How to Create a Kubernetes ClusterIP service named nginx-pod running on port 6379 in the kubernetes cluster?

Answer: Create a pod and then expose it using the kubectl expose command.

kubectl run nginx --image=nginx --namespace=kube-system
kubectl expose pod nginx --port=6379 --name=nginx-pod -o yaml --namespace=kube-system
Creating the Kubernetes Pod
Kubernetes ClusterIP service
Verifying the Kubernetes ClusterIP service

Q38. How to Create a Kubernetes ClusterIP service named redis-service in the kubernetes cluster?

Answer:

kubectl create service clusterip --tcp=6379:6379  redis-service --dry-run=client -o yaml
Creating the Kubernetes ClusterIP service

Q39. How to Create a Kubernetes NodePort service named redis-service in the kubernetes cluster?

Answer: Use the kubectl create service command.

kubectl create service nodeport --tcp=6379:6379  redis-service  -o yaml
Creating the Kubernetes NodePort service

Q40. How to save a Kubernetes manifest file while creating a Kubernetes deployment in the kubernetes cluster?

Answer: Redirect the output to a file using > nginx-deployment.yaml.

kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > nginx-deployment.yaml


Related: Kubernetes Tutorial for Kubernetes Certification [PART-1]

Related: Kubernetes Tutorial for Kubernetes Certification [PART-2]

Conclusion

In this ultimate guide to Kubernetes interview questions for the Kubernetes Certification (CKA), you had a chance to revise everything you need to crack a Kubernetes interview.

Now you have sound knowledge of Kubernetes and are ready for your upcoming interview.


Kubernetes Tutorial for Kubernetes Certification [PART-1]

If you are looking to learn Kubernetes, you are at the right place; this Kubernetes Tutorial for Kubernetes Certification will help you gain the complete knowledge you need, from the basics to becoming a Kubernetes pro.

Kubernetes is more than just management of Docker containers: it keeps the load balanced between the cluster nodes, provides a self-healing mechanism (such as replacing a failed container with a new healthy one), and offers many more features.

Let’s get started with Kubernetes Tutorial for Kubernetes Certification without further delay.


Table of Content

  1. What is kubernetes?
  2. Why Kubernetes?
  3. Docker swarm vs kubernetes
  4. kubernetes Architecture: Deep dive into Kubernetes Cluster
  5. kubernetes master components or kubernetes master node
  6. Worker node in kubernetes Cluster
  7. Highly Available Kubernetes Cluster
  8. What is kubernetes namespace?
  9. Kubernetes Objects and their Specifications
  10. Kubernetes Workloads
  11. What is a kubernetes Pod? 
  12. Deploying multi container Pod
  13. Conclusion

What is kubernetes?

Kubernetes is an open-source container orchestration engine, originally developed by Google, for automating the deployment, scaling, and management of containerized applications. It is also called k8s because there are eight letters between the “K” and the “s”.

Kubernetes is a container orchestration tool, which means it orchestrates container technologies such as Docker.

Kubernetes is portable and extensible and supports both declarative configuration and automation.

Kubernetes also helps with service discovery, such as exposing a container using a DNS name or its own IP address, and provides a container runtime, zero-downtime deployment capabilities, automatic rollbacks, and automatic storage allocation, such as local storage or public cloud providers.

Kubernetes has the ability to scale when needed, which is known as AutoScaling. You can automatically manage configurations like secrets or passwords and mount EFS or other storage when required.
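
For example, horizontal pod autoscaling can be enabled with a single command; a minimal sketch (the deployment name and thresholds are illustrative):

kubectl autoscale deployment my-deployment --min=2 --max=5 --cpu-percent=80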

Why Kubernetes?

Now that you have a basic idea of what Kubernetes is, let's see why it is needed. Earlier, applications used to run on physical servers, which had issues with resource allocation, such as CPU and memory. You would need more physical servers, which were too expensive.

To solve the resource allocation issue, virtualization was adopted, in which you could isolate applications and align the necessary resources as per the need. With virtualization, you can run multiple virtual machines on a single physical machine, allowing better utilization of resources and saving hardware costs.

Later, the containerization-based approach was adopted, first with Docker and then Kubernetes. Containers are lightweight and allow portable deployments: they share the OS, CPU, and memory of the host but have their own file systems, and can be launched anywhere, from local machines to cloud infrastructure.

Finally, Kubernetes takes care of scaling and failover for your applications and easily manages the canary deployment of your system.

Some of the key features of Kubernetes are:

  • Kubernetes exposes a container using a DNS name or using an IP address.
  • Kubernetes allows you to mount a storage system of your choice, such as local storage, public cloud providers, and more.
  • You can roll back the state anytime for your deployments.
  • Kubernetes replaces containers that fail or whose health checks fail.
  • Kubernetes allows you to store secrets and sensitive information such as passwords, OAuth tokens, and SSH keys. Also, you can update the secret information multiple times without impacting container images.

Every Kubernetes object contains two nested fields (object spec and object status), where spec describes the desired state you set and status shows the current state supplied and updated by Kubernetes.

Physical Server to Virtualization to Containerization

Docker swarm vs kubernetes

In previous sections, you learned what Kubernetes is and why there is a shift from physical to virtual machines and towards docker, the container-based technology.

Docker is a lightweight application that allows you to launch multiple containers. Still, to manage or orchestrate the containers, you need orchestration tools such as Docker Swarm or Kubernetes.

Let’s look at some of the key differences between Docker swarm vs Kubernetes.

Docker Swarm:

  • Easy to install, but the cluster doesn't have many advanced features.
  • No built-in autoscaling.
  • Users can encrypt data between nodes.
  • Uses YAML files to deploy services on the nodes.

Kubernetes:

  • Installation is difficult, but the cluster is very powerful.
  • Supports autoscaling.
  • All Pods can interact with each other without encryption by default.

kubernetes Architecture: Deep dive into Kubernetes Cluster

When you install Kubernetes, you create a Kubernetes cluster that mainly contains two kinds of components: master (controller) nodes and worker nodes. Nodes are the machines that contain their own Linux environment, which could be either a virtual or a physical machine.

The application and services are deployed in the containers within the Pods inside the worker nodes. Pods contain one or more docker containers. When a Pod runs multiple containers, all the containers are considered a single entity and share the Node resources.

Bird-eye view of a Kubernetes cluster

kubernetes master components or kubernetes master node

The Kubernetes master components, or Kubernetes master node, manage the Kubernetes cluster's state, store information about the different nodes, container placements, the data, cluster events, the scheduling of new Pods, etc.

Kubernetes master components or Kubernetes master node contains various components such as Kube-apiserver, an etcd storage, a Kube-controller-manager, and a Kube-scheduler.

Let’s learn about each Kubernetes master component or Kubernetes master node.

kube api server

The most important component in the Kubernetes master node is the kube API server (or API server), which orchestrates all the operations within the cluster. The Kubernetes cluster exposes the kube API server, which acts as a gateway and an authenticator for users.

The kube API server also connects with the worker nodes and other control plane components. It also allows you to query and manipulate the state of API objects in Kubernetes, such as Pods, Namespaces, ConfigMaps, and Events, from the etcd server.

First of all, the user request is authenticated by the kube-apiserver, which validates the request against etcd and then performs the operation, such as the creation of pods. Once the pod is created, the scheduler monitors it and assigns it to an appropriate node via the API server and the kubelet component of the worker node. Later, the API server updates the information in etcd.

The kubectl command-line interface or kubeadm uses the kube API server to execute the commands.

If you deploy the kube API server using the kubeadm tool, then the API server is installed as a Pod and its manifest file is located at the below path.

cat /etc/kubernetes/manifests/kube-apiserver.yaml
  • However, for a non-kubeadm setup, that is, if you install it manually, you download the binary using the below command.
wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-apiserver
  • To check the kube-apiserver service, look at the below path.
 cat /etc/systemd/system/kube-apiserver.service
  • To check if the kube API server is running in the kubernetes cluster, use the kubectl command. You will notice the pod is already created.
kubectl get pods --all-namespaces
Checking the kube API server in the Kubernetes cluster with the kubectl command
  • To check if kube api server is running in the kubernetes cluster with the process command.
ps -aux | grep kube-apiserver
Checking the kube API server in the Kubernetes cluster with the process command

You can also use client libraries in different languages if you want to write an application that talks to the Kubernetes API server.

etcd kubernetes

etcd is again an important component in the Kubernetes master node that allows storing the cluster data, cluster state, nodes, roles, secrets, configs, pod state, etc. in key-value pair format. etcd holds two types of state; one is desired, and the other is the current state for all resources and keeps them in sync.

When you run a kubectl get command, the request goes to the etcd server via the kube-apiserver; similarly, when you add or update anything in the Kubernetes cluster, for example with kubectl create or kubectl edit, etcd is updated.

For example, the user runs a kubectl command, then the request goes to ➜ the kube API server (authenticator) ➜ etcd (reads the value), which pushes those values back to the kube API server.

Note: When you install etcd using the kubeadm tool, it is installed as a Pod and runs on port 2379

Tabular or relational database
Key-value store

Note:

What are binary files ?

A binary file is stored in binary format; binary files are computer-readable rather than human-readable. All executable programs are stored in binary format.

What are MSI files ?

MSI files are primarily created for software installations and utilize the Windows Installer service. MSI files are database files that carry information about software installation.

What are EXE (executable) files ?

EXE files are used by Windows operating systems to launch software programs. EXE files are self-contained executable files that may perform a number of functions, including program installation, as opposed to MSI files, which are created exclusively for software installations. Executable file extensions include .BAT, .COM, .EXE, and .BIN.

To install ETCD you will need to perform the below steps:

  • Download etcd binaries
curl -L https://github.com/etcd-io/etcd/releases/download/v3.3.11/etcd-v3.3.11-linux-amd64.tar.gz -o etcd-v3.3.11-linux-amd64.tar.gz
  • Extract etcd binary

tar zxvf etcd-v3.3.11-linux-amd64.tar.gz

  • Run the etcd service. This service by default runs on port 2379.

./etcd

After etcd is installed, it comes with the etcdctl command line, and we can run the below commands. etcdctl is the CLI tool used to interact with etcd. etcdctl can interact with the etcd server using two API versions, version 2 and version 3; by default it is set to use version 2, and each version has a different set of commands.

./etcdctl --version             # To check the etcdctl version

./etcdctl set key1 value1       # To store data in etcd as a key-value pair (API version 2)

./etcdctl get key1              # To retrieve a specific key from the etcd store

export ETCDCTL_API=3            # To set the API version using the environment variable
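
With API version 3 the subcommands change; a minimal sketch assuming the same key-value pair (put replaces set in version 3):

./etcdctl put key1 value1       # To store data in etcd using API version 3
./etcdctl get key1              # To retrieve the key using API version 3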

Kube scheduler

The kube scheduler helps only in deciding or scheduling new Pods and containers onto the appropriate worker nodes according to the pod's requirements, such as CPU or memory, before allocating the pods to a worker node of the cluster.
Whenever the controller manager finds any discrepancies in the cluster, it forwards the request to the scheduler via the kube API server to fix the gap. For example, if there is any change in a node, or if a pod is created without an assigned node, then:

  • The scheduler monitors the kube API server continuously.
  • The kube API server checks with etcd, and etcd responds back to the kube API server with the required information.
  • Next, the controller manager informs the kube API server to schedule new pods using the scheduler.
  • The scheduler, via the kube API server, asks the kubelet to assign the node to the Pod.
  • The kubelet, after assigning the pod, responds back to the kube API server with the information, and the kube API server further communicates with etcd to update it.
Scheduling a Pod

Installing Scheduler

  • Deploying or installing the scheduler manually. It will be installed as a service.
wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-scheduler
  • Deploying or installing the scheduler using the kubeadm tool. Note: You will see the scheduler as a pod if you install via kubeadm. To check, you can use the below command.
kubectl get pods -n kube-system
  • When you deploy the scheduler using kubeadm, you will find the manifest file at the below path.
cat /etc/kubernetes/manifests/kube-scheduler.yaml

Kube controller manager

Kube controller manager runs the controller process. Kubernetes comes with a set of built-in controllers that run inside the kube-controller-manager. These built-in controllers provide important core behaviors.

  • Node controller: The node controller in the kube controller manager checks the status of nodes, such as when a node goes up or down. By default, it checks the status of each node every 5 seconds.
  • Replication controller: The replication controller in the kube controller manager ensures that the correct number of containers is running in the replication group.
  • Endpoint controller: Provides endpoints for pods and services.
  • Service account and token controller: Creates accounts and API access tokens.

In Kubernetes, the kube controller manager runs control loops that watch the state of your cluster, then make or request changes where needed. Each controller tries to move the current cluster state closer to the desired state, and if there are any gaps, it forwards the request to the scheduler via the kube API server to fix the gap.

Installing Kube-controller-manager

  • Deploying or installing the kube-controller-manager manually. It will be installed as a service.
wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-controller-manager
  • When you deploy it as a service, it is located at the below path.
cat /etc/systemd/system/kube-controller-manager.service

ps -aux | grep kube-controller-manager
  • Deploying or installing the kube-controller-manager using the kubeadm tool. Note: You will see kube-controller-manager as a pod if you install via kubeadm. To check, you can use the below command.
kubectl get pods -n kube-system
  • When you deploy the controller manager using kubeadm, you will find the manifest file at the below path.
cat /etc/kubernetes/manifests/kube-controller-manager.yaml

Worker node in kubernetes Cluster

A worker node is part of a Kubernetes cluster and is used to manage and run containerized applications. The worker node performs actions when the kube API server triggers a request. Each node is managed by the control plane (master node), which contains the services necessary to run Pods.

The worker node contains various components, including the kubelet, kube-proxy, and a container runtime. These node components run on every node, maintaining the details of all running pods.

kubelet in kubernetes

The kubelet in Kubernetes is an agent that runs on each worker node and manages the containers in pods after communicating with the kube API server. The kubelet listens to the kube API server and acts accordingly, such as adding or deleting containers.

The kube API server fetches information from the kubelet about the worker nodes' health and, if necessary, schedules the necessary resources with the help of the scheduler.

The main functions of the kubelet are:

  • Registering the node.
  • Creating Pods.
  • Monitoring nodes and pods.

Kubelet is not installed as a pod with the kubeadm tool; you must install it manually.
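
To verify the kubelet on a worker node, you can check the running process or, assuming a systemd-based installation, the service unit:

ps -aux | grep kubelet
systemctl status kubelet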

kube proxy in kubernetes

Kube-proxy is a networking component that runs on each worker node in the Kubernetes cluster, forwards traffic within the worker nodes, and handles network communications so that Pods are able to communicate with each other.

The job of kube-proxy is to watch for new Kubernetes services; as soon as a service is created, kube-proxy creates the appropriate rules on each worker node to forward traffic for that service to the backend pods. iptables rules are one kind of rule configured by kube-proxy.
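
For example, assuming kube-proxy runs in its default iptables mode, you can inspect the NAT rules it programs on a worker node (the KUBE-SERVICES chain is where service rules are added):

sudo iptables -t nat -L KUBE-SERVICES -n | head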

Installing kube proxy

  • Deploying or installing kube-proxy manually. It will be installed as a service.
wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-proxy

Container Runtime

The container runtime is an important component responsible for providing and maintaining the runtime environment for containers running inside the Pod. The most common container runtime is Docker, but others like containerd or CRI-O are also supported.
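
To see which container runtime each node uses, the wide output of kubectl get nodes includes a CONTAINER-RUNTIME column:

kubectl get nodes -o wide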

High Level summary of kubernetes Architecture

Initially, a request is made to the kube-apiserver, which retrieves information from, or updates it in, the etcd component. If any pod or deployment is created, the scheduler connects to the kube API server to schedule the pod. Later, the kube API server connects with the kubelet component of the worker node to run the pod accordingly.

Other than Master or Worker Node

  • Now that you know a Kubernetes cluster contains master and worker nodes, note that it also needs a DNS server, which serves DNS records for Kubernetes services.
  • Next, although optional, it is a good practice to install or set up the Kubernetes Dashboard (UI), which allows users to manage and troubleshoot applications running in the cluster.

Highly Available Kubernetes Cluster

Now that you have good knowledge of the Kubernetes cluster components, did you know that you can make the Kubernetes control plane highly available, and that there are two ways to achieve it?

  • With etcd co-located with the control plane nodes (a stacked etcd).
  • With etcd running on separate nodes from the control plane nodes (an external etcd).

etcd co-located with the control plane

When etcd is co-located with the control plane, all three components (API server, scheduler, controller manager) communicate with etcd separately on each node.

In this case, if any node goes down, both components on it are down, i.e., the API server and etcd. To mitigate this, add more nodes to make the cluster highly available. This approach requires less infrastructure.

etcd co-located with the control plane

etcd running on separate nodes from the control plane

In the second case, with etcd running on separate nodes from the control plane, all three components (kube API server, scheduler, controller manager) communicate with an external etcd.

In this case, if any control plane node goes down, your etcd is not impacted, and you have a more highly available environment than with stacked etcd, but this approach requires more infrastructure.

etcd running on separate nodes from the control plane

What is kubernetes namespace?

Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called Kubernetes namespaces.

Within a Kubernetes namespace, all resources must have unique names, but names need not be unique across namespaces. Kubernetes namespaces help different projects, teams, or customers share a Kubernetes cluster and divide cluster resources between multiple users.

Three Kubernetes namespaces exist by default when you launch a Kubernetes cluster:

  • kube-system: contains all the cluster components such as etcd, the API server, networking, the proxy server, etc.
  • default: the namespace where you launch resources and other kubernetes objects by default.
  • kube-public: this namespace is available to all users accessing the kubernetes service publicly.

Let’s look at an example related to the Kubernetes namespace.

  1. If you wish to connect to a service named db-service within the same namespace, then you can access the service directly as:
 mysql.connect("db-service")
  2. To access a service named db-service in another namespace, such as dev, you should access the service as <service-name>.<namespace-name>.svc.cluster.local, because when you create a service, a DNS entry is created:
    • svc is the subdomain for services.
    • cluster.local is the default domain name of the kubernetes cluster.
mysql.connect("db-service.dev.svc.cluster.local")

Most Kubernetes resources (e.g., pods, services, replication controllers, and others) are created in the same or a different namespace depending on the requirements.

  • To list and inspect the current namespaces in a cluster, run the kubectl commands below.
kubectl get namespaces
kubectl describe namespaces
  • To create a kubernetes namespace, run the below command.
kubectl create namespace namespace-name
  • Another way to create a kubernetes namespace is with a manifest file.
apiVersion: v1
kind: Namespace
metadata: 
    name: dev
  • To switch from one Kubernetes namespace to another Kubernetes namespace, run the kubectl config set-context command.

kubectl config set-context $(kubectl config current-context) --namespace my-namespace2
  • To delete a kubernetes namespace, run the below command.
kubectl delete namespaces namespace-name
  • To allocate a resource quota to a namespace, create a file named resource.yaml and run the kubectl command.
apiVersion: v1
kind: ResourceQuota
metadata: 
    name: compute-quota
    namespace: my-namespace2  # This is where you will define the namespace
spec:
  hard:
    pods: "10"
    requests.cpu: "1"
    requests.memory: 0.5Gi
    limits.cpu: "1"
    limits.memory: 10Gi
kubectl create -f resource.yaml
Allocating a resource quota to a namespace
  • To check the resource consumption for a particular namespace run the below command.
kubectl describe resourcequota compute-quota
Checking the resource consumption for a particular namespace
  • To check all the resources in all the namespaces run the below command.
kubectl get pods --all-namespaces

Kubernetes Objects and their Specifications

Kubernetes objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster, such as how many containers are running inside a Pod and on which node, what resources are available, and whether there are any policies on applications.

These Kubernetes objects are declared in YAML format and are used during deployments. The YAML file is consumed by the kubectl command, which parses it and converts it into JSON.

  • spec: While you create the object, you need to specify the spec field, which defines the desired characteristics of the resource you want in the kubernetes cluster.
  • Labels are key/value pairs that are attached to objects, such as pods. Labels are intended to be used to specify identifying attributes of objects that are relevant to users. Labels are used to organize and to select subsets of objects.
  • apiVersion – Which version of the Kubernetes API you’re using to create this object
  • kind – What kind of object you want to create
  • metadata – Data that helps uniquely identify the object, including a name string, UID, and optional namespace
apiVersion: apps/v1           # Which version of the Kubernetes API you're using
kind: Deployment              # What kind of object you would like to create
metadata:                     # Data that identifies the object, like name, UID and namespace
  name: tomcat-deployment
spec:                         # What you would like to achieve using this template
  replicas: 2                 # Run 2 pods matching the template
  selector:
    matchLabels:
      app: my-tomcat-app
  template:
    metadata:                 # Data that identifies the Pod, such as labels
      labels:
        app: my-tomcat-app
    spec:
      containers:
      - name: my-tomcat-container
        image: tomcat
        ports:
        - containerPort: 8080

Kubernetes Workloads

The workload is the applications running on the Kubernetes cluster. Workload resources manage the set of Kubernetes Pods in the form of Kubernetes deployments, Kubernetes Replica Set, Kubernetes StatefulSet, Kubernetes DaemonSet, Kubernetes Job, etc.

Kubernetes deployments, Kubernetes Replica Set, Kubernetes StatefulSet, Kubernetes DaemonSet, Kubernetes Job, etc.

Let’s learn about each of these Kubernetes workloads in the upcoming sections.

What is a kubernetes Pod?

The Kubernetes pod is the Kubernetes entity where your docker containers reside, hosting the applications. When traffic to the apps increases, the number of pods increases, not the number of containers inside a pod.

The Kubernetes pod contains a single container or a group of containers that work with shared storage and network. It is recommended to add Pods rather than adding containers to a Pod, because more containers mean more complex structures and interconnections.

The Kubernetes Pods are created using workload resources such as a Kubernetes deployment or a Kubernetes Job, with the help of a YAML file or by directly calling the Kubernetes API, and are assigned unique IP addresses.

To create a highly available application, you should consider deploying multiple Pods, known as replicas. Healing of Pods is done by the controller manager, as it keeps monitoring the health of each pod and later asks the scheduler to schedule a replacement Pod.

All containers in a Pod can access the shared volumes, allowing those containers to share data, and they share the same network namespace, including the IP address and network ports. Inside a Pod, the containers can communicate with one another using localhost.

Below is an example manifest.

apiVersion: v1
kind: Pod
metadata:
  name: postgres
  labels:
    tier: db-tier
spec:
  containers:
    - name: postgres
      image: postgres
      env: 
       - name: POSTGRES_PASSWORD
         value: mysecretpassword
  • To create a Kubernetes Pod create a yaml file named pod.yaml and copy/paste the below content.
# pod.yaml template file that creates pod
apiVersion: v1        # It is of type String
kind: Pod               # It is of type String
metadata:             # It is of type Dictionary and contains data about the object 
  name: nginx
  labels: 
    app: nginx
    tier: frontend
spec:                  # It is of type Dictionary; containers is a List because a Pod can have multiple containers
  containers:
  - name: nginx
    image: nginx
  • Now to create a Kubernetes Pod execute the kubectl command.
kubectl create -f pod-definition.yml
kubectl apply -f pod.yaml  # To run the above pod.yaml manifest file
Creating a Kubernetes Pod
  • You can also use below kubectl command to run a pod in kubernetes cluster.
kubectl run nginx --image nginx  # Running a pod

kubectl get pods -o wide  # To verify the Kubernetes pods.

kubectl describe pod nginx # To describe the pod in more detail

  • Linking with --link is a legacy Docker feature, not a Kubernetes concept; to link one Docker container with another, run the below command.
docker run helper --link app3
  • To create Pod using API request use the below command.
curl -X POST /api/v1/namespaces/default/pods

Deploying multi container Pod

In the previous section, you learned how to launch a Pod with a single container, but you sometimes need to run Kubernetes pods with multiple containers. Let’s learn how you can achieve this.

  • To create a multi container Kubernetes Pod create a yaml file named multi-container-demo.yaml and copy/paste the below content.
apiVersion: v1
kind: Pod
metadata:
  name: multicontainer-pod
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container-1                  # Container 1
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: ubuntu-container-2                  # Container 2
    image: ubuntu
    command: ["/bin/bash", "-c", "sleep 3600"]  # Keep the container running; two nginx containers would clash on port 80
  • Now to create multi container Kubernetes Pod execute the kubectl command.
kubectl apply -f multi-container-demo.yaml  # To run the above pod.yaml manifest file
  • To check the Kubernetes Pod run kubectl get pods command.
Creating a multi-container Kubernetes Pod
  • To describe both the containers in the kubernetes Pods run kubectl describe command as shown below.
kubectl describe pod multicontainer-pod
Describing both the containers in the Kubernetes Pod


Conclusion

In this ultimate guide, you learned what Kubernetes is and its architecture, understood the Kubernetes cluster end to end, and saw how to declare Kubernetes manifest files to launch Kubernetes Pods.

Now that you have gained a handful of knowledge on Kubernetes, continue with the PART-2 guide and become a Kubernetes pro.

Kubernetes Tutorial for Kubernetes Certification [PART-2]

How to Deploy kubernetes stateful application or kubernetes StatefulSets in AWS EKS cluster

Are you looking for permanent storage for your Kubernetes applications or Kubernetes Pods? If yes, you are at the right place to learn about Kubernetes stateful sets that manage the deployment and scaling of a set of Pods and provide guarantees about the ordering and uniqueness of these Pods.

In this tutorial, you will learn how to deploy a Kubernetes stateful sets application deployment step by step. Let’s get into it.


Table of Content

  1. Prerequisites
  2. What is kubernetes statefulsets deployment?
  3. Deploying kubernetes statefulsets deployment in Kubernetes Cluster
  4. Creating Kubernetes Namespace for kubernetes stateful sets deployment
  5. Creating a Storage class required for Persistent Volume (PV)
  6. Creating a persistent volume claim (PVC)
  7. Creating Kubernetes secrets to store passwords
  8. Creating the Stateful backend deployment in the cluster
  9. Creating the Stateful Frontend WordPress deployment
  10. Kubernetes Stateful application using AWS EBS vs Kubernetes Stateful application using AWS EFS
  11. Conclusion

Prerequisites

  • AWS EKS cluster already created.
  • AWS account

What is kubernetes statefulsets deployment?

Kubernetes stateful sets manage stateful applications, such as MySQL, MongoDB, and other databases, which need persistent storage. Kubernetes stateful sets manage the deployment and scaling of a set of Pods and provide guarantees about the ordering and uniqueness of these Pods.

With Kubernetes stateful sets with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1} and are terminated in reverse order, from {N-1..0}.
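
For example, for a StatefulSet named web with 3 replicas (the name is illustrative), the Pods get stable ordinal names and are created and removed in order:

web-0, web-1, web-2    # created in this order; terminated in reverse: web-2, web-1, web-0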

Deploying kubernetes statefulsets deployment in Kubernetes Cluster

In this article, you will deploy the Kubernetes stateful sets deployment with the following components:

  1. A frontend application: a WordPress service deployed as a Kubernetes StatefulSet deployment containing a persistent volume (AWS EBS) to store HTML pages.
  2. A backend application: a MySQL service deployed as a Kubernetes deployment containing a persistent volume (AWS EBS) to store MySQL data.
  3. A load balancer on top of the frontend application. The load balancer will route traffic to the WordPress pods, and the WordPress pods will store data in the MySQL pod by routing it via the MySQL service, as shown in the below picture.
Deploying Kubernetes stateful sets deployment in the Kubernetes Cluster

Creating Kubernetes Namespace for kubernetes stateful sets deployment

Now you know what Kubernetes stateful sets are and which components you need to deploy them in the Kubernetes cluster. Before you deploy, you should create a particular namespace to keep things simple. Let's create the Kubernetes namespace.

  • Create a Kubernetes namespace with the below command. Creating a Kubernetes namespace allows you to separate a particular project, team, or environment.
kubectl create namespace stateful-deployment
Kubernetes namespace created

Creating a Storage class required for Persistent Volume (PV)

Once you have the Kubernetes namespace created in the Kubernetes cluster, you will need to create storage for storing the website and database data.

In the AWS EKS service, the PersistentVolume (PV) is a piece of storage in the cluster implemented via an EBS volume, which has to be declared or dynamically provisioned using Storage Classes.

  • Let's begin by creating a storage class that is required for persistent volumes in the kubernetes cluster. To create the storage class, first create a file gp2-storage-class.yaml and copy/paste the below code.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
  • Now, create the Storage class by running the below command.
kubectl apply -f gp2-storage-class.yaml --namespace=stateful-deployment
Creating the Kubernetes storage class in the Kubernetes cluster

In case you receive an error, run the below command.

kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' --namespace=stateful-deployment
  • Next, verify all the storage classes present in the Kubernetes cluster.
kubectl get storageclasses --all-namespaces
Verifying the Kubernetes storage class

Creating a persistent volume claim (PVC)

Now that you have created a storage class that persistent volume will use, create a Persistent volume claim (PVC) so that a stateful app can then request a volume by specifying a persistent volume claim (PVC) and mount it in its corresponding pod.

  • Again, create a file named pvc.yaml and copy/paste the below content. The below code creates two PVCs, one for the MySQL backend service (mysql-pv-claim) and the other for the WordPress frontend application (wp-pv-claim).
apiVersion: v1
kind: PersistentVolumeClaim
# Creating persistent volume claim (PVC) for MySQL ( backend )
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
# Creating persistent volume claim (PVC) for WordPress ( frontend )
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  • Now execute the apply command to create the persistent volume claims.
kubectl apply -f pvc.yaml --namespace=stateful-deployment
Creating the persistent volume claims for the WordPress and MySQL applications
  • Verify the recently created persistent volume claims in the kubernetes cluster. These PVCs are backed by AWS EBS volumes.
kubectl get pvc --namespace=stateful-deployment
Verifying the recently created persistent volume claims in the Kubernetes cluster
  • Also verify the storage in AWS EBS; you will find the below two volumes.
Verifying the persistent volume claims in AWS EBS

Creating Kubernetes secrets to store passwords

Up to now, you created the Kubernetes namespace and persistent volumes successfully; the MySQL application password will be stored as a Kubernetes secret. So let's jump in and create the Kubernetes secret that will be used to store the password for the MySQL application.

  • Create a secret that stores the MySQL password (mysql-pw), which will be injected as an environment variable into the container.
kubectl create secret generic mysql-pass --from-literal=password=mysql-pw --namespace=stateful-deployment
Creating Kubernetes secrets to store passwords
  • Next, verify the secrets that were recently created by using kubectl get command.
kubectl get secrets --namespace=stateful-deployment
Verifying the Kubernetes secrets that were recently created using the kubectl get command

Creating the Stateful backend deployment in the cluster

Note: Kubernetes stateful deployments can use either AWS EBS or AWS EFS.

Now you have the Kubernetes namespace, persistent volumes, and secrets that you will consume in the application. Let's get into building the stateful backend deployment.

  • Create a file mysql.yaml for the deployment and copy/paste the below code. apiVersion is the Kubernetes API version used to manage the object: for Deployments/ReplicaSets it's apps/v1, and for Pods and Services it's v1.
apiVersion: v1
# Kind denotes what kind of resource/object will kubernetes will create
kind: Service
# metadata helps uniquely identify the object, including a name string, UID, and optional namespace.
metadata:
  name: wordpress-mysql
# Labels are key/value pairs to specify attributes of objects that are meaningful and relevant to users.
  labels:
    app: wordpress
# spec define what state you desire for the object
spec:
  ports:
    - port: 3306
# The selector field allows deployment to identify which Pods to manage.
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
# Creating the enviornment variable MYSQL_ROOT_PASSWORD whose value will be taken from secrets 
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
# Volumes that we created PVC will be mounted here.
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
# Defining the volumes ( PVC ).
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
  • Now create the MySQL deployment and service by running the below command.
kubectl apply -f mysql.yaml --namespace=stateful-deployment
Creating the stateful backend deployment in the cluster
  • Further, check the Pods of the MySQL backend deployment by running the below command.
kubectl get pods -o wide --namespace=stateful-deployment
Verifying the stateful backend deployment in the cluster

In the case of a Deployment with AWS EBS, all the Kubernetes Pods are created on the same AWS EC2 node with the persistent volume (EBS) attached. However, in the case of a StatefulSet with EBS, Kubernetes Pods can be created on various nodes with different EBS volumes attached.

Creating the Stateful Frontend WordPress deployment

Previously, you created the stateful backend MySQL application deployment, which is great, but you will need to create a WordPress frontend application deployment for a complete setup. Let's get into it now.

  • Create a file wordpress.yaml for the deployment and copy/paste the below code. apiVersion is the Kubernetes API version used to manage the object: for Deployments/ReplicaSets it's apps/v1, and for Pods and Services it's v1.
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1
# Creating the WordPress deployment as stateful where multiple EC2 will have multiple pods with diff EBS
kind: StatefulSet
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  replicas: 1
  serviceName: wordpress-stateful
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
# The below volumes section is valid for Deployments, not for StatefulSets
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim
# The below section is valid only for StatefulSets, not for Deployments, as volumes are created dynamically
  volumeClaimTemplates:
  - metadata:
      name: wordpress-persistent-storage
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: gp2
  • Now create the WordPress deployment and service by running the below command.
kubectl apply -f wordpress.yaml --namespace=stateful-deployment
  • Further, check the Pods of the WordPress deployment by running the below command.
kubectl get pods -o wide --namespace=stateful-deployment

Kubernetes Stateful application using AWS EBS vs Kubernetes Stateful application using AWS EFS

As discussed earlier, AWS EBS volumes are tied to a single Availability Zone, so recreated pods can only be started in the same Availability Zone as the previous AWS EBS volume.

For example, if you have a Pod running on an AWS EC2 instance in Availability Zone (a) with an AWS EBS volume attached in the same zone, then if your pod gets restarted on another AWS EC2 instance in the same zone, the Pod will be able to attach the same AWS EBS volume. However, if the pod gets restarted on an instance in a different Availability Zone (b), it won't be able to attach the previous AWS EBS volume; it will require a new AWS EBS volume in Availability Zone (b).

Kubernetes stateful application using AWS EBS

As discussed, with AWS EBS things are a little complicated, as EBS volumes are not shared volumes; they belong to a particular AZ rather than being multi-AZ. However, by using AWS EFS (Elastic File System) shared volumes across multiple AZs and Pods, sharing is possible.

AWS EFS volumes are mounted as network file systems on multiple AWS EC2 instances regardless of AZ; they work efficiently across multiple AZs and are highly available.

Kubernetes stateful application using AWS EFS

Conclusion

In this article, you learned how to create permanent storage for your Kubernetes applications and mount it. Also, you learned that there are two ways to mount permanent storage to Kubernetes applications by using AWS EBS and AWS EFS.

Now, which applications do you plan to deploy in the AWS EKS cluster with permanent storage?