The Ultimate Guide: Getting Started with Jenkins Pipeline

Application deployment is a daily task for developers and operations teams. Jenkins can drive your deployments, but for a long deployment process you need a way to keep things simple and deploy in a structured way.

To bring simplicity to the deployment process, Jenkins Pipelines are your best friend: they make the whole process flow smoothly from commit to release. In this tutorial we will cover the basics of CI/CD and take an in-depth look at Jenkins Pipelines and the Jenkinsfile.

Table of Contents

  1. What is CI/CD (Continuous Integration and Continuous Deployment)?
  2. What is Jenkins Pipeline?
  3. How to create a basic Jenkins Pipeline
  4. Handling Parameters in Jenkins Pipeline
  5. How to work with Input Parameters
  6. Conclusion

What is CI/CD (Continuous Integration and Continuous Deployment)?

CI/CD stands for continuous integration and continuous delivery/deployment. With CI/CD, products are delivered to clients in an efficient, repeatable way through a series of automated stages. It saves a great deal of time for both developers and the operations team, and it greatly reduces the chance of human error, automating everything from integration through deployment.

Continuous Integration

CI, also known as continuous integration, is primarily a developer practice. Successful continuous integration means that whenever there is a change in the code, the developer's work is built, tested, and then pushed to a shared repository.

Developers push code changes every day, multiple times a day. For every push to the repository, you can create a set of scripts to build and test your application automatically. These scripts help decrease the chances that you introduce errors in your application.

This practice is known as Continuous Integration. Each change submitted to an application, even to development branches, is built and tested automatically and continuously.

Continuous Delivery

Continuous delivery is a step beyond continuous integration. Not only is the application built and tested each time code is pushed, it is also kept ready to deploy at any moment. However, with continuous delivery you trigger the actual deployment manually.

Continuous delivery checks the code automatically, but it requires human intervention to deploy the changes.

Continuous Deployment

Continuous deployment is a step beyond continuous delivery: the only difference between the two is that continuous deployment automatically takes the code from the shared repository and deploys the changes to environments such as production, where customers can see them. This is the final stage of the CI/CD pipeline. With continuous deployment it hardly takes a few minutes to ship code to an environment, and it depends on extensive automated pre-deployment testing.

Examples of CI/CD platforms:

  • Spinnaker and Screwdriver are platforms built for CD
  • GitLab, Bamboo, CircleCI, Travis CI, and GoCD are platforms built for CI/CD

What is Jenkins Pipeline?

A Jenkins Pipeline is a group of plugins that lets you model a complete continuous-delivery pipeline in Jenkins. The Pipeline plugin is installed automatically when you install Jenkins with the suggested plugins. A pipeline covers everything from building the code to deploying the software right up to the customer. Jenkins Pipeline allows you to express complex operations and code deployments as code, in a DSL (domain-specific language), inside a text file called a Jenkinsfile that is checked into the repository.

  • Benefits of Jenkins Pipeline
    • The pipeline is written as code, which makes it easier to maintain and review.
    • If Jenkins stops, you can still continue working on the Jenkinsfile.
    • Because it is code, you can add waits, approvals, stops, and many other behaviors.
    • It supports various extensions and plugins.
  • A Jenkinsfile can be written in two syntaxes (DSL: domain-specific language):
    • Declarative Pipeline: the newer syntax, and much easier to write
    • Scripted Pipeline: the older syntax, and a little more complicated to write
  • Scripted pipeline syntax can be generated from
http://Jenkins-server:8080/pipeline-syntax/
  • Declarative Pipeline syntax can be generated from
http://Jenkins-server:8080/directive-generator/

  • Jenkins Pipeline supports various environment variables, such as:
    • BUILD_NUMBER: the number of the current build
    • BUILD_TAG: the tag jenkins-${JOB_NAME}-${BUILD_NUMBER}
    • BUILD_URL: the URL where the results of this build can be found
    • JAVA_HOME: the path of the Java home
    • NODE_NAME: the name of the node the build runs on; for example, it is set to master on the Jenkins controller
    • JOB_NAME: the name of the job
  • You can also set environment variables dynamically in the pipeline:
    environment {
        AWS_ACCESS_KEY_ID     = credentials('jenkins-aws-secret-key-id')
        AWS_SECRET_ACCESS_KEY = credentials('jenkins-aws-secret-access-key')
        MY_KUBECONFIG         = credentials('my-kubeconfig')
    }
  • Let's take an example Jenkinsfile and understand the basic terms one by one:
    • pipeline: the Declarative Pipeline-specific block that wraps the whole definition
    • agent: allows Jenkins to allocate an executor or a node, for example a Jenkins agent (slave)
    • stages: contains the tasks the pipeline needs to perform; it can hold a single task as well
    • stage: one single task under stages
    • steps: the steps that need to be executed in each stage
    • sh: a step that executes a shell command
pipeline {
    agent any
    stages {
        stage('Testing the Jenkins Version') {
            steps {
                echo 'Hello, Jenkins'
                sh 'service jenkins status'
                // sh("kubectl --kubeconfig $MY_KUBECONFIG get pods")
            }
        }
    }
}
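To make the BUILD_TAG format above concrete, here is a small shell sketch; the job name and build number are hypothetical stand-ins for the values Jenkins injects into each build:

```shell
# Sketch of how Jenkins composes BUILD_TAG from JOB_NAME and BUILD_NUMBER.
# The two values below are hypothetical; Jenkins sets the real ones per build.
JOB_NAME="pipeline-demo"
BUILD_NUMBER="7"
BUILD_TAG="jenkins-${JOB_NAME}-${BUILD_NUMBER}"
echo "$BUILD_TAG"    # prints jenkins-pipeline-demo-7
```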

How to create a basic Jenkins Pipeline

  • Install Jenkins on an Ubuntu machine. You can find the steps to install Jenkins here.
  • Once your Jenkins machine is up, visit the Jenkins URL and navigate to New Item.
  • Choose Pipeline from the options, give it a name such as pipeline-demo, and click OK.
  • Now add a description such as "my demo pipeline" and add a pipeline script as below:
pipeline {
    agent any
    stages {
        stage('Testing the Jenkins Version') {
            steps {
                echo 'Hello, Jenkins'
                sh 'service jenkins status'
            }
        }
    }
}
  • Click Save, and finally click Build Now.
  • Let's verify the execution from the job's console output: click the build number, then open Console Output.

Handling Parameters in Jenkins Pipeline

If you use Build with Parameters, those parameters are accessible through the params keyword in the pipeline.

Let's see a quick example. In the code below, Profile is a parameter, and it can be accessed as ${params.Profile}. Paste the code into the pipeline script as we did earlier.

pipeline {
    agent any
    parameters {
        string(name: 'Profile', defaultValue: 'devops-engineer', description: 'I am devops guy')
    }
    stages {
        stage('Testing DEVOPS') {
            steps {
                echo "${params.Profile} is a cloud profile"
            }
        }
    }
}
  • Let's build the Jenkins pipeline now.
  • Next, verify the console output.
  • Similarly, we can use different parameter types, such as:
pipeline {
    agent any
    parameters {
        string(name: 'PERSON', defaultValue: 'AutomateInfra', description: 'PERSON')
        text(name: 'BIOGRAPHY', defaultValue: '', description: 'BIOGRAPHY')
        booleanParam(name: 'TOGGLE', defaultValue: true, description: 'TOGGLE')
        choice(name: 'CHOICE', choices: ['One', 'Two', 'Three'], description: 'CHOICE')
        password(name: 'PASSWORD', defaultValue: 'SECRET', description: 'PASSWORD')
    }
    stages {
        stage('All-Parameters') {
            steps {
                echo "I am ${params.PERSON}"
                echo "Biography: ${params.BIOGRAPHY}"
                echo "Toggle: ${params.TOGGLE}"
                echo "Choice: ${params.CHOICE}"
                echo "Password: ${params.PASSWORD}"
            }
        }
    }
}

How to work with Input Parameters

The input directive lets the pipeline prompt for input using an input step; the pipeline pauses until the input is provided. Let's see a quick example in which the Jenkins job prompts with a "Should we continue?" message: the stage waits until we approve it, and if we reject the prompt the build aborts.

pipeline {
    agent any
    stages {
        stage('Testing input condition') {
            input {
                message "Should we continue?"
                ok "Yes, we should."
                submitter "automateinfra"
                parameters {
                    string(name: 'PERSON', defaultValue: 'Automate', description: 'Person')
                }
            }
            steps {
                echo "Hello, ${PERSON}, nice to meet you."
            }
        }
    }
}
  • Let's paste the content into the Jenkins pipeline script and click Build Now.
  • Verify the run from the console output, where the approval prompt appears.
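A common refinement, not used in the example above, is to bound how long the pipeline waits for approval by wrapping the input step in a timeout step, so an unattended build eventually aborts instead of pausing forever. A minimal sketch:

```groovy
pipeline {
    agent any
    stages {
        stage('Testing input condition') {
            steps {
                // Abort the build automatically if nobody responds within 15 minutes
                timeout(time: 15, unit: 'MINUTES') {
                    input message: 'Should we continue?', ok: 'Yes, we should.'
                }
                echo 'Approved, moving on.'
            }
        }
    }
}
```

If the timeout expires, the input step fails and the build is marked aborted rather than hanging indefinitely.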

Conclusion

In this tutorial we learned what CI/CD is and explored Jenkins, an open source CI/CD tool. We covered how to write a pipeline and the syntax of Jenkins Pipeline using its DSL (domain-specific language). We also took an in-depth look at Jenkins Pipelines, then created a basic pipeline and executed it.

Hopefully this tutorial gives you a kick start on working with Jenkins Pipelines and executing them. If you found it useful, please share it.

Brilliant Guide to All the Ways to Check Disk Usage on an Ubuntu Machine

Monitoring application and system disk utilization has always been a crucial responsibility of any IT engineer. In an IT world full of software, automation, and tools, it is very important to keep track of disk utilization regularly.

In this tutorial we will show you the best commands and tools for working with disk utilization. Follow along to see these commands and their usage.

Table of Contents

  1. Check Disk Space using disk free or disk filesystems command (df)
  2. Check Disk Space using disk usage command (du)
  3. Check Disk Usage using ls command
  4. Check Disk Usage using pydf command
  5. Check Disk Usage using Ncdu command (Ncurses Disk Usage)
  6. Check Disk Usage using duc command
  7. Conclusion

Check Disk Space using disk free or disk filesystems command (df)

df stands for disk free. This command reports the available and used space on file systems, and multiple flags can be passed to produce additional output. Let's look at some commands from this utility.

  • To see the disk space available on all mounted file systems on an Ubuntu machine:
df
  • To see the same information in human-readable format:
    • You will notice a difference between this command's output and the previous one: instead of 1K-blocks you will see Size, which is human readable.
df -h
  • To check disk usage along with the type of each filesystem:
df -T
  • To check the disk usage of a particular filesystem:
df /dev/xvda1
  • To check the disk usage of multiple directories:
df -h  /opt /var /etc /lib
  • To show only the percentage of used disk space:
df -h --output=source,pcent
  • To filter the report by filesystem type:
df -h -t ext4
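The df flags above combine nicely with standard text tools. As a sketch (not part of the original command list), the following pipes df --output=source,pcent-style output through awk to flag filesystems above a usage threshold; the printf lines fake the df output so the pipeline is reproducible:

```shell
# Flag filesystems above 90% usage by parsing "source pcent" columns.
# The printf below stands in for real `df --output=source,pcent` output.
printf 'Filesystem Use%%\n/dev/xvda1 95%%\n/dev/xvdb1 40%%\n' \
  | awk 'NR>1 { gsub(/%/,"",$2); if ($2+0 > 90) print $1 }'
```

On a real machine you would replace the printf with `df --output=source,pcent`.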

Check Disk Space using disk usage command ( du )

The du command provides disk usage information, reporting the space used by files and directories. Let's see some examples.

  • To check the disk usage of a directory:
du /lib # Here we are taking the lib directory
  • To check the disk usage of a directory with a different block size:
    • M for MB
    • G for GB
    • T for TB
du -BM /var
  • To list disk usage sorted by size:
    • s summarizes each argument
    • k reports sizes in KB; you can use the -B flag for M, G, T, and so on
    • sort sorts the output
    • n sorts in numerical order
    • r reverses the order, so the largest entries come first
du -sk /opt/* | sort -nr
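To see the sort order in action on predictable data, here is a sketch that builds a small throwaway tree under /tmp/du_demo (a hypothetical path chosen for this illustration) and ranks its entries by size:

```shell
# Build a small throwaway tree, then rank its entries largest-first.
# /tmp/du_demo is a hypothetical path used only for this illustration.
mkdir -p /tmp/du_demo/big /tmp/du_demo/small
dd if=/dev/zero of=/tmp/du_demo/big/file bs=1024 count=200 2>/dev/null
dd if=/dev/zero of=/tmp/du_demo/small/file bs=1024 count=8 2>/dev/null
du -sk /tmp/du_demo/* | sort -nr    # the "big" directory is listed first
```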

Check Disk Usage using ls command

The ls command is used for listing files, but it also provides information about the disk space used by files and directories. Let's see some of these commands.

  • To list files in human-readable format:
ls -lh
  • To list files in descending order of size (the -S flag sorts by file size, largest first):
ls -lhS
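As a quick sanity check of the size-sorted listing, the sketch below creates two files of known size in a hypothetical /tmp/ls_demo directory and lists them; ls -S prints the larger file first:

```shell
# Create two files of known size, then list them largest-first with -S.
# /tmp/ls_demo is a hypothetical path used only for this illustration.
mkdir -p /tmp/ls_demo
dd if=/dev/zero of=/tmp/ls_demo/large bs=1024 count=64 2>/dev/null
dd if=/dev/zero of=/tmp/ls_demo/tiny bs=1024 count=1 2>/dev/null
ls -S /tmp/ls_demo    # lists "large" before "tiny"
```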

Check Disk Usage using pydf command

pydf is a Python-based command-line tool that displays disk usage in different colors. Let's dive into the command now.

  • To check the disk usage with pydf
pydf -h 

Check Disk Usage using Ncdu command (Ncurses Disk Usage)

Ncdu is a disk utility for Unix systems that provides a text-based user interface built on the ncurses programming library. Let us see a command from ncdu:

ncdu

Check Disk Usage using duc command

Duc is a collection of command-line utilities for creating, maintaining, and querying a disk usage database.

  • Before running any duc command, be sure to install the duc package:
sudo apt install duc
  • Once duc is installed successfully, index a directory to populate the database:
duc index /usr
  • To browse the indexed disk usage with duc's console user interface:
duc ui /usr

Conclusion

There are various ways to identify and view disk usage on Linux and Ubuntu operating systems. In this tutorial we demonstrated the best commands and disk utilities to work with. You are now ready to troubleshoot disk usage issues and keep an eye on what your files and applications consume.

Hopefully this tutorial gave you an in-depth understanding of the best commands for working with disk usage, and may you never face disk issues in your organization. Please share it if you liked it.

How to Build Docker Images, Containers, and Services with Terraform Using the Docker Provider

Docker has become a vital tool for deploying web applications securely, and because of its lightweight technology and the way it works it has captured the market very well. Although some of the steps are manual, after deployment Docker makes things look very simple.

But can we automate things even before the deployment takes place? This is where Terraform comes into play: an infrastructure-as-code tool that automates Docker-related work, such as the creation of images, containers, and services, with a few commands.

In this tutorial we will see what Docker and Terraform are, and how to use Terraform's Docker provider to automate Docker images and containers.

Table of Contents

  1. What is Docker?
  2. What is Terraform?
  3. How to install Terraform on an Ubuntu machine
  4. What is the Docker provider?
  5. Create a Docker image, container, and service using the Docker provider on AWS with Terraform
  6. Conclusion

What is Docker?

Docker is an open source tool for developing, shipping, and running applications. It runs applications in loosely isolated environments called containers and manages those containers in a smooth, effective way. A container is quite similar to a virtual machine, but it is lightweight and easily portable.

Containers are lightweight because they do not carry the load and configuration of a hypervisor; they talk directly to the host machine's kernel.

Prerequisites

  • An Ubuntu machine, preferably version 18.04 or later; if you don't have one, you can create an EC2 instance in an AWS account
  • 4 GB of RAM is recommended
  • At least 5 GB of drive space
  • The Ubuntu machine should have an IAM role attached with full EC2 access; administrator permissions are even better for working through the Terraform demo

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

What is Terraform?

Terraform is a tool for building, versioning, and changing infrastructure. Terraform is written in Go, and its configuration files are written in HCL (HashiCorp Configuration Language), which is much easier to work with than YAML or JSON.

Terraform has been in use for quite a while now, and I would say it is an amazing tool for building and changing infrastructure in an effective, simple way. It works with a variety of cloud providers such as Amazon AWS, Oracle, Microsoft Azure, Google Cloud, and many more. I hope you will love learning and using it.

How to Install Terraform on Ubuntu 18.04 LTS

  • Update your existing system packages:
sudo apt update
  • Download Terraform (version 0.13.0 here) into the /opt directory:
wget https://releases.hashicorp.com/terraform/0.13.0/terraform_0.13.0_linux_amd64.zip
  • Install the zip package, which will be required to unzip the download:
sudo apt-get install zip -y
  • Unzip the downloaded Terraform archive:
unzip terraform*.zip
  • Move the executable into a directory on your PATH:
sudo mv terraform /usr/local/bin
  • Verify the installation by checking the terraform command and its version:
terraform               # To check if terraform is installed 

terraform -version      # To check the terraform version  
  • This confirms that Terraform has been successfully installed on the Ubuntu 18.04 machine.

What is the Docker provider in Terraform?

The Docker provider lets Terraform talk to Docker images and containers through the Docker API. So in the case of Terraform, we need to configure the Docker provider so that Terraform can work with Docker images and containers.

The Docker provider can be configured in different ways. Let's see some of them now.

  • Using the Docker host's hostname:
provider "docker" {
  host = "tcp://localhost:2376/"
}
  • Using the Docker host's IP address:
provider "docker" {
  host = "tcp://127.0.0.1:2376/"
}
  • When your Docker host is a remote machine:
provider "docker" {
  host = "ssh://user@remote-host:22"
}
  • Using the Docker socket

unix:///var/run/docker.sock is the Unix socket the Docker daemon listens on. Through this socket you can work with the daemon's images and containers.

This Unix socket is also used when containers need to communicate with the Docker daemon, such as when it is bind-mounted into a container.

provider "docker" {
  host = "unix:///var/run/docker.sock"
}

Create a Docker image, container, and service using the Docker provider on AWS with Terraform

Let us first understand the Terraform configuration files before we start creating the files for our demo.

  • main.tf: contains the actual Terraform code to create a service or a particular resource
  • vars.tf: defines variable types and optionally sets default values
  • output.tf: declares the resource attributes we wish to display after a run
  • terraform.tfvars: contains the actual values of the variables declared in vars.tf
  • provider.tf: very important, as it tells Terraform which provider (for example, which cloud) the code should run against
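The demo below keeps everything in main.tf, but as an illustration of the file split above, the container name could be factored out as in this hypothetical sketch (the variable name container_name is invented for the example):

```hcl
# vars.tf: declare the variable (hypothetical example)
variable "container_name" {
  type        = string
  description = "Name to give the Docker container"
  default     = "my_container"
}

# terraform.tfvars: set or override its value
container_name = "my_container"

# output.tf: display an attribute once terraform apply finishes
output "container_id" {
  value = docker_container.my_container.id
}
```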

In the demo below we will create a Docker image, container, and service using the Docker provider. Let's configure the Terraform files needed for this demo; to start with, we only need one file, main.tf.

main.tf

provider "docker" {                            # Configure the Docker provider
  host = "unix:///var/run/docker.sock"
}

resource "docker_image" "ubuntu" {             # Pull a Docker image
  name = "ubuntu:latest"
}

resource "docker_container" "my_container" {   # Create a Docker container
  image = docker_image.ubuntu.latest           # Using the image we pulled above
  name  = "my_container"
}

resource "docker_service" "my_service" {       # Create a Docker service
  name = "myservice"
  task_spec {
    container_spec {
      image = docker_image.ubuntu.latest       # Using the same image we pulled earlier
    }
  }
  endpoint_spec {
    ports {
      target_port = "8080"
    }
  }
}
  • Now your files and code are ready for execution. Initialize Terraform:
terraform init
  • Terraform initialized successfully; now it's time to run the terraform plan command.
  • terraform plan produces a sort of blueprint before deployment, so you can confirm that the correct resources will be provisioned or deleted.
terraform plan

NOTE:

If you intend to create the Docker service on a single machine without multiple nodes, run the command below first so that the Docker service can be created successfully with Terraform:

docker swarm init
  • After verification, it's time to actually deploy the code using apply:
terraform apply
  • Now let's verify, one by one, that all three components were created successfully by Terraform with the Docker provider.
  • Verify the Docker image:
docker images
  • Verify the Docker containers:
docker ps -a
  • Verify the Docker service:
docker service ps myservice

Conclusion

In this tutorial we saw what Docker and Terraform are, and how to use Terraform's Docker provider to automate Docker images and containers.

Hopefully this tutorial helps you understand Terraform and provision Docker components with it. Please share it with your friends if you find it useful.

Fixing an Ubuntu System that Will Not Boot

How can you fix an Ubuntu machine that is causing issues? There can be various reasons, and sometimes the machine won't even boot, which is a huge problem! Why not learn how to fix it for good? Let's begin!

Steps to perform when the system will not boot

  1. Checking BIOS
  • If your system is unable to boot even from a live DVD or USB drive, there are two possibilities:
    • You accidentally deleted the boot device, or
    • It is a hardware issue.

  2. Checking GRUB

  • If you are able to turn the computer on and get past the initial BIOS startup, bring up the GRUB menu. A few things may have been overwritten, which can be fixed by using recovery mode. Let's learn how to recover everything.
    • Hold Shift after the BIOS finishes to access the GRUB menu.
    • Select Advanced Options for Ubuntu.
    • From the new menu, select an entry with the words recovery mode. This boots into a recovery menu with options to automatically fix several possible problems, or at least it lets you boot into a minimal recovery-mode version of Ubuntu with only the most necessary processes loaded. From here, you may be able to fix disks, check file systems, drop to a root prompt to fix file permissions, and so on.
  • If you cannot bring up the GRUB menu, reinstalling GRUB is the only option. Let's learn how to reinstall GRUB.
    • Boot Ubuntu from a live DVD or bootable USB drive that has the same Ubuntu
      release as your system, such as 16.04.
    • Determine the boot drive on your system:
      a. Open a terminal and use sudo fdisk -l to list the drives attached to the system.
      b. Look for an entry in the output with an * in the Boot column. This is your boot
      device. It will look something like /dev/sda1.
    • Mount the Ubuntu partition at /mnt by using this command, replacing /dev/sda1
      with the information you just found:
      sudo mount /dev/sda1 /mnt
    • Reinstall GRUB with this command, replacing /dev/sda with the disk that
      contains the boot partition you found earlier (note that GRUB is installed
      to the whole disk, such as /dev/sda, not to a partition such as /dev/sda1):
      sudo grub-install --boot-directory=/mnt/boot /dev/sda
    • Restart the computer, and Ubuntu should boot properly.

Conclusion

In this small but very important post, you learned how to bring back an Ubuntu machine that doesn't boot, and you learned ways to recover it in the future!