Kubernetes Series (PART-2)

TABLE OF CONTENTS

  1. Kubernetes Cluster Objects and their Specifications
  2. Kubernetes command line tools
  3. Kubernetes deployment components
  4. Kubernetes Pods
  5. Kubernetes Workloads
  6. Kubernetes Deployments
  7. Kubernetes ReplicaSets
  8. Kubernetes DaemonSets
  9. Kubernetes Jobs
  10. Kubernetes Services
  11. Kubernetes Ingress
  12. Kubernetes ConfigMaps
  13. Kubernetes Secrets
  14. Kubernetes Volumes
  15. Kubernetes StatefulSets

Kubernetes Cluster Objects and their Specifications

Kubernetes objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster, such as how many containers are running inside each Pod and on which node, how many Pods are running, which resources are available, and whether any policies apply to your applications.

These objects are declared in YAML format and are used during deployments. The YAML file is passed to the kubectl command, which parses it and converts it into JSON before talking to the kube-apiserver, which in turn serves the Kubernetes APIs.

  • Spec: For objects that have a spec, you have to set this when you create the object, providing a description of the characteristics you want the resource to have, that is, its desired state.
  • Status describes the current state of the object, supplied and updated by the Kubernetes system and its components.
  • Labels are key/value pairs that are attached to objects, such as pods. Labels are intended to be used to specify identifying attributes of objects that are relevant to users. Labels are used to organize and to select subsets of objects. Labels allow for efficient queries for use in UIs and CLIs. Labels enable users to map their own organizational structures onto system objects in a loosely coupled fashion, without requiring clients to store these mappings. Example labels: “release” : “stable”, “release” : “canary” , “environment” : “dev”, “environment” : “qa”, “environment” : “production”
  • Annotation: Kubernetes annotations attach arbitrary non-identifying metadata to objects. Clients such as tools and libraries can retrieve this metadata. Labels can be used to select objects and to find collections of objects that satisfy certain conditions. In contrast, annotations are not used to identify and select objects. The metadata in an annotation can be small or large, structured or unstructured, and can include characters not permitted by labels. Examples: client library or tool information for debugging (name, version, build information); user or tool/system information, such as URLs of related objects from other ecosystem components; build, release, or image information like timestamps, release IDs, git branch, PR numbers, image hashes, and registry addresses.
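Putting labels and annotations together, a Pod's metadata might look like the following sketch (the names and values here are illustrative, not from any particular application):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    environment: production        # identifying; usable in selectors
    release: stable
  annotations:
    imageregistry: "https://hub.docker.com/"   # non-identifying metadata for tools
spec:
  containers:
  - name: demo
    image: nginx
```

You could then select this Pod by label with a command like kubectl get pods -l environment=production, while the annotation is only retrieved, never used for selection.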

Lets checkout an example here! The below .yaml file shows the required fields and object spec for a Kubernetes Deployment.

apiVersion: apps/v1              # Which API version your Kubernetes API server uses
kind: Deployment                 # What kind of object you would like to create
metadata:
  name: tomcat-deployment        # Identifies the object
spec:                            # What you would like to achieve using this template
  replicas: 2                    # Run 2 Pods matching the template
  template:
    metadata:                    # Data that identifies the object, like name, UID and namespace
      labels:
        app: my-tomcat-app
    spec:
      containers:
      - name: my-tomcat-container
        image: tomcat
        ports:
        - containerPort: 8080

  • Now create a Deployment using a .yaml file with kubectl command as shown below.
 kubectl apply -f my.yaml 

Kubernetes command line tools

kubectl is a command line tool that supports various ways to create and manage Kubernetes objects.

  • To create a deployment named nginx-deployment using kubectl command.
kubectl create deployment nginx-deployment --image=nginx
  • To edit the deployment
kubectl edit deployment nginx-deployment
  • To check the running pods in the kubernetes cluster.
kubectl get pod
  • To check the running services in the kubernetes cluster.
kubectl get services
  • To check the nodes in the kubernetes cluster.
kubectl get nodes
  • To describe the pod
kubectl describe pod <pod-name>
  • To check the logs of the Pod.
kubectl logs <pod-name>
  • kind: Lets you run Kubernetes on your local machine; install Docker first.
  • minikube: Similar to kind, but runs a single-node cluster.
  • kubeadm: Creates and manages Kubernetes clusters.

Kubernetes deployment Components

A kubernetes deployment contains various components that you need for complete application deployments. You deploy Pods and Services on the kubernetes cluster using the kubectl command.

kubernetes Pod

A Pod contains a single container or a group of containers that work with shared storage and network, along with a specification for how to run the containers. Pods are created using workload resources such as a Deployment or a Job.

To create a highly available application you should consider deploying multiple replicas of your Pods, which is also known as replication. Healing of Pods is done by the controller-manager: it keeps monitoring the health of each Pod and asks the scheduler to schedule a replacement Pod when one fails. All Pods are assigned unique IP addresses and communicate with each other.

You can use workload resources to create and manage multiple Pods for you such as deployment, DaemonSets or StatefulSet. All containers in the Pod can access the shared volumes, allowing those containers to share data.

Each Pod is assigned a unique IP address for each address family. Every container in a Pod shares the network namespace, including the IP address and network ports. Inside a Pod (and only then), the containers that belong to the Pod can communicate with one another using localhost. Outside the Pod, containers communicate using the Pod's IP address and ports or other network resources.
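To illustrate the shared network namespace, here is a sketch of a two-container Pod (the container names are illustrative) where the second container could reach the first over localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: web
    image: nginx              # listens on port 80 inside the shared network namespace
  - name: sidecar
    image: alpine
    # From this container, "wget -qO- http://localhost:80" would reach nginx,
    # because both containers share the Pod's IP address and ports.
    command: ["sleep", "3600"]
```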

  • Lets see a template file that creates a Pod when applied.
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-tomcat
spec:
  template:
    # This is the pod template
    spec:
      containers:
      - name: tomcat-container
        image: tomcat
        command: ['sh', '-c', 'echo "Hello, Tomcat!" && sleep 3600']
      restartPolicy: OnFailure
    # The pod template ends here

Kubernetes Workloads

A workload is an application running on a kubernetes cluster. A workload can be a single component or multiple components. Workload resources manage sets of Pods on your behalf. Kubernetes provides several built-in workload resources such as Deployment, ReplicaSet, StatefulSet, DaemonSet, Job and CronJob.

Kubernetes Deployments

Deployments allow you to create Pods and containers using YAML files. Using a Deployment, you define how many replicas you require (the number of Pod replicas running at the same time), add new ReplicaSets, or remove existing Deployments.

Lets checkout an example that uses a Deployment to create a ReplicaSet bringing up 3 nginx Pods.

  1. The name of the Deployment, nginx-deployment, is set using the .metadata.name field.
  2. The Deployment creates 3 replicated Pods using the .spec.replicas field.
  3. The .spec.selector field tells the Deployment which Pods to manage.
  4. The Pods to be managed are defined using the Pod template in .spec.template.
  5. Pods are labeled using the .metadata.labels field.
  6. Container specifications are set under .spec.template.spec.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment    # 1
  labels:
    app: nginx              # 5
spec:
  replicas: 3               # 2
  selector:
    matchLabels:
      app: nginx            # 3
  template:
    metadata:
      labels:
        app: nginx          # 4
    spec:                   # 6
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
  • Now, create the Deployment using the kubectl command as shown below.
kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
  • Run kubectl get deployments to check if the Deployment was created.
  • To see the ReplicaSet (rs) created by the Deployment, run kubectl get rs
  • To see the labels automatically generated for each Pod, run kubectl get pods --show-labels
  • To update the deployment, use the below command. In this case it updates the image to nginx:1.16.1 from nginx:1.14.2.
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record
  • To get the details of your deployments run the below command.
kubectl describe deployments
  • In case the deployment goes wrong, for example because an incorrect image was used, the deployment crashes. Stop the rollout using Ctrl + C and consult the rollout history. To check the history, use the below command.
kubectl rollout history deployment.v1.apps/nginx-deployment
  • Next, to rollback to the previous version which was stable run the undo command as shown below.
kubectl rollout undo deployment.v1.apps/nginx-deployment
  • To scale your deployment, use the below command.
kubectl scale deployment.v1.apps/nginx-deployment --replicas=10
  • To Pause the deployment
kubectl rollout pause deployment.v1.apps/nginx-deployment
  • To Resume the deployment.
kubectl rollout resume deployment.v1.apps/nginx-deployment

Kubernetes Replica Sets

The main job of a ReplicaSet is to maintain a stable set of replica Pods running at any given time. When a ReplicaSet needs to create a set of Pods, it uses a Pod template. The ReplicaSet then fulfills its purpose by creating and deleting Pods as needed to reach the desired number. ReplicaSets are deployed in the same way as Deployments.

For ReplicaSets, the kind is always ReplicaSet. You can scale and delete the Pods with the same kubectl commands as you did for Deployments.
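As a sketch of what this looks like, here is a minimal ReplicaSet manifest (the frontend name and guestbook label are illustrative); note that .spec.selector.matchLabels must match .spec.template.metadata.labels, just as with a Deployment:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook        # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
```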

Kubernetes DaemonSet

A Kubernetes DaemonSet ensures that each eligible node runs a copy of a Pod. As and when nodes are added to the cluster, it ensures Pods are added to them. Deleting a DaemonSet will clean up the Pods it created, which is handled by garbage collection. Some of the use cases: running a monitoring daemon on every node, and running a daemon on each node to collect logs or manage storage.

Garbage Collection

The role of the Kubernetes garbage collector is to delete objects that once had an owner, but no longer have one. For example, a ReplicaSet is the owner of a set of Pods. The owned objects are called dependents of the owner object.

A DaemonSet ensures that all eligible nodes run a copy of a Pod. Normally, the node that a Pod runs on is chosen by the Kubernetes scheduler. However, DaemonSet Pods are created and scheduled by the DaemonSet controller instead.

In a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon. A more complex setup might use multiple DaemonSets for a single type of daemon, but with different flags.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
  • You can use the same kubectl commands to work with DaemonSets, similar to Kubernetes Deployments.

Kubernetes Jobs

The main function of a Job is to create one or more Pods and track their successful completion. Jobs make sure that the specified number of Pods complete successfully. When the specified number of successful Pod runs is reached, the Job is considered complete.

Deleting a Job will clean up the Pods it created. Suspending a Job will delete its active Pods until the Job is resumed again.

  • One simple scenario: the Job object will start a new Pod if the first Pod fails or is deleted, for example due to a node hardware failure or a node reboot.
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
  • To list all the Pods that belong to a Job in a machine-readable form, use the kubectl get pods command with the job name as a selector and store the output in a variable. Next, print the Pod names using the echo command.
pods=$(kubectl get pods --selector=job-name=pi --output=jsonpath='{.items[*].metadata.name}')
echo $pods

When a Job completes, no more Pods are created, but the Pods are not deleted either. Keeping them around allows you to still view the logs of completed Pods to check for errors, warnings, etc. The Job object also remains after it is completed so that you can view its status. You can choose to delete old Jobs after noting their status using the kubectl delete job command.
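If you prefer automatic cleanup, the TTL mechanism can delete finished Jobs for you via the ttlSecondsAfterFinished field; a sketch based on the pi Job above (the 100-second TTL is an arbitrary choice):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  ttlSecondsAfterFinished: 100   # delete this Job 100 seconds after it finishes
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```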

Kubernetes Service

A Service provides you a way to expose an application running on a set of Pods as a network service. Kubernetes gives each Pod a unique IP address, and gives a set of Pods a single DNS name. Kubernetes Pods are created and destroyed to match the state of your cluster; Pods are nonpermanent resources. If you use a Deployment to run your app, it can create and destroy Pods dynamically.

Each Pod gets its own IP address; however, in a Deployment, the set of Pods running at one moment in time can be different from the set of Pods running that application a moment later, which leads to a problem. To solve this, Kubernetes Services were introduced: a Service assigns a stable, permanent IP address to a set of Pods acting as a network service.

Kubernetes offers several options when exposing your service based on a feature called Kubernetes Service-types and they are:

  • ClusterIP – This Service-type exposes the service on an internal IP, reachable only within the cluster, and possibly only within the cluster-nodes.
  • NodePort – Exposes your service outside of your cluster on a specific port, called the NodePort, on every node in the cluster. The Kubernetes control plane allocates a port from a default range (30000-32767). If you want a specific port number, you can specify a value in the nodePort field.
  • Load Balancer – Exposes the Service externally using a cloud provider’s load balancer.
  • ExternalName – Maps a Service to a DNS name, such as externalName: my.database.example.com
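For instance, an ExternalName Service could be sketched like this (the my-database name and the domain are illustrative placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-database
spec:
  type: ExternalName
  externalName: my.database.example.com   # in-cluster lookups of my-database resolve here
```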

There are two ways to create a service in Kubernetes: 1) using a manifest file, or 2) using the command line. Lets learn both ways.

Create a service using a YAML manifest file

  • To provide a service definition to the API server, lets create a service.yaml file.

In the below YAML manifest file, kind should be Service since we are creating a service. The name of the service is hostname-service. We expose the service on a static port on each node so that we can access it from outside the cluster. When a node receives a request on the static port 30163, it forwards the request to one of the pods with the label "app: echo-hostname".

  • Three types of ports for a service
    • nodePort – a static port assigned on each node
    • port – Port exposed internally in the cluster
    • targetPort – Container port or Pod Port
kind: Service
apiVersion: v1
metadata:
  name: hostname-service
spec:
  type: NodePort
  selector:
    app: echo-hostname
  ports:
    - nodePort: 30163
      port: 8080
      targetPort: 80

Create a Kubernetes service using command line.

kubectl create service nodeport NAME [--tcp=port:targetPort]

Kubernetes Ingress

An API object that helps manage external access to the services in a cluster. It provides load balancing, SSL termination and name-based virtual hosting. It exposes HTTP and HTTPS routes from the outside world to Services in your cluster, but does not expose arbitrary ports or protocols.

To deploy an Ingress you also need an Ingress controller, which is responsible for fulfilling the Ingress, usually with a load balancer. Unlike other types of controllers which run as part of the kube-controller-manager binary, Ingress controllers are not started automatically with a cluster. An example of an AWS Ingress controller is the AWS Load Balancer Controller, which helps manage Elastic Load Balancers for a Kubernetes cluster.

Here is a simple diagram where an Ingress sends all its traffic to one Service.

Below, lets see how to create a minimal Ingress resource. The apiVersion, kind, and metadata fields are required. The name of an Ingress object must be a valid DNS subdomain name, and annotations configure the Ingress controller. The Ingress spec has all the information needed to configure a load balancer or proxy server. For the rules inside the spec, if no host is configured, the rule applies to all inbound HTTP traffic through the IP address specified. /testpath is the path, which has an associated backend defined with a service name and a service port. A backend is a combination of Service and port names.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80

Kubernetes ConfigMaps

A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume them as environment variables, command-line arguments, or configuration files. ConfigMaps hold information such as database subdomain names. A ConfigMap allows you to decouple environment-specific configuration from your container images, so that your applications are easily portable.

  • There are four ways in which you can use a ConfigMap to configure containers inside a Pod:
    • In commands and arguments of containers.
    • As environment variables of containers.
    • Mounted as a file in a volume.
    • Via code or a script that uses the Kubernetes API to read the ConfigMap.

Lets take an example and create two files: one with the ConfigMap configuration, and another building a Pod that refers to the ConfigMap by name.

  • Here’s an example ConfigMap that has some keys with single values
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
data:
  # property-like keys; each key maps to a simple value
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"
  • Here’s an example Pod that uses values from game-demo to configure a Pod:
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
    - name: demo
      image: alpine
      command: ["sleep", "3600"]
      env:
        # Define the environment variable
        - name: PLAYER_INITIAL_LIVES 
          valueFrom:
            configMapKeyRef:
              name: game-demo          
              key: player_initial_lives 
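The same game-demo ConfigMap can also be mounted as files in a volume, which is another of the four approaches listed earlier; a sketch (the Pod name and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-pod
spec:
  containers:
    - name: demo
      image: alpine
      command: ["sleep", "3600"]
      volumeMounts:
      - name: config
        mountPath: "/config"     # each ConfigMap key appears as a file under /config
        readOnly: true
  volumes:
  - name: config
    configMap:
      name: game-demo
```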

Kubernetes Secrets

Kubernetes Secrets allow you to store sensitive information such as passwords, OAuth tokens and SSH keys. Note that Secret values are only base64-encoded, not encrypted, so it is important to enable encryption at rest for your cluster. Secrets can be consumed by Pods, for example as environment variables. There are three ways in which you can use Secrets with a Pod: as environment variables of a container, attached as a file in a volume, or used by the kubelet when pulling images.
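To see why base64 is an encoding rather than encryption, note that it is trivially reversible by anyone who can read the Secret:

```shell
# Base64 is reversible encoding, not encryption
echo -n 'admin' | base64              # prints: YWRtaW4=
echo -n 'YWRtaW4=' | base64 --decode  # prints: admin
```

This is why access to Secrets should be restricted with RBAC and why encryption at rest matters.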

There are some built-in types available for Secrets, such as kubernetes.io/ssh-auth for SSH authentication credentials, kubernetes.io/tls for data for a TLS client or server, kubernetes.io/basic-auth for basic authentication credentials, kubernetes.io/dockerconfigjson for docker registry credentials, etc.

apiVersion: v1
kind: Secret
metadata:
  name: secret-basic-auth
type: kubernetes.io/basic-auth
stringData:
  username: admin
  password: t0p-Secret
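The secret-basic-auth Secret above could then be consumed by a Pod as environment variables; a sketch (the Pod and variable names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: demo
    image: alpine
    command: ["sleep", "3600"]
    env:
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: secret-basic-auth   # the Secret defined above
          key: username             # exposes its username field as DB_USERNAME
```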

You can also create Kubernetes Secrets using the kubectl command.

kubectl create secret docker-registry secret-tiger-docker \
  --docker-username=user \
  --docker-password=pass \
  --docker-email=automateinfra@gmail.com

Kubernetes Volume

Volumes are used to store data for containers in a Pod. You can use local storage for each container, but that is a problem when the Pod or a container dies, because the data is deleted with it. A Volume, in contrast, remains persistent and is backed up easily.

Volumes cannot mount onto other volumes or have hard links to other volumes. Each container in the Pod's configuration must independently specify where to mount each volume.
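As an illustration of per-container mounts, here is a sketch of two containers sharing one emptyDir volume at different mount paths (the names and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                  # scratch volume that lives as long as the Pod
  containers:
  - name: writer
    image: alpine
    command: ["sh", "-c", "echo hello > /data/out.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data            # writer's view of the volume
  - name: reader
    image: alpine
    command: ["sleep", "3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /shared          # same volume, mounted at a different path
```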

  • There are different persistent volume types which Kubernetes supports, such as:
    • awsElasticBlockStore – mounts an Amazon Web Services (AWS) EBS volume into your pod. An EBS volume can be pre-populated with data, and that data can be shared between pods. This volume type requires that the nodes the pods run on are AWS EC2 instances.
    • azureDisk – mounts a Microsoft Azure Data Disk into a pod.
    • fc (fibre channel) – allows an existing fibre channel block storage volume to be mounted in a Pod.
  • Lets checkout an AWS EBS configuration example.
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  volumes:
  - name: test-volume
    # This AWS EBS volume must already exist.
    awsElasticBlockStore:
      volumeID: "<volume id>"
      fsType: ext4

Kubernetes StatefulSet

  • Kubernetes StatefulSet manages stateful applications such as MySQL and MongoDB databases. It is used for apps which need persistent storage.
  • Manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.
  • For a StatefulSet with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1}.
  • When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.
  • Before a scaling operation is applied to a Pod, all of its predecessors must be Running and Ready.
  • Before a Pod is terminated, all of its successors must be completely shut down.

Lets checkout a StatefulSet configuration example below.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
