Ultimate Ansible interview questions and answers

If you are preparing for a DevOps or Ansible administrator interview, consider this guide your companion to practice Ansible interview questions and answers and help you pass the exam or interview.

Without further delay, let's get into this Ultimate Ansible interview questions and answers guide, which gives you three papers to practice, each containing 20 Ansible questions.

PAPER-1

Q1. What is Ansible?

Answer: Ansible is an open-source configuration management tool written in Python. Ansible is used to deploy or configure software, tools, or files on remote machines quickly over the SSH protocol.

Q2. What are the advantages of Ansible?

Answer: Ansible is simple to manage and agentless, offers great performance as it is quick to deploy, doesn't require much effort to set up, and is reliable.

Q3. What are the things which Ansible can do?

Answer: Deployment of apps such as Apache Tomcat or AWS EC2 instances, configuration management such as configuring multiple files on different remote nodes, automating tasks, and IT orchestration.

Q4. Is it possible to have the Ansible control node on Windows?

Answer: No, you can have the Ansible controller host or node only on a Linux-based operating system; however, you can configure Windows machines as your remote hosts.

Q5. What are the requirements when the remote host is a Windows machine?

Answer: Ansible needs PowerShell 3.0 and at least .NET 4.0 to be installed on the Windows host, and a WinRM listener should be created and activated before you actually deploy to or configure the remote Windows node.

Q6. What are the different components of Ansible?

Answer: APIs, modules, hosts, playbooks, cloud, networking, and inventories.

Q7. What are Ansible ad hoc commands?

Answer: Ansible ad hoc commands are single-line commands that are generally used for testing purposes, or when you need to take an action that is not repeatable and rarely used, such as restarting a service on a machine. Below is an example of an Ansible ad hoc command.

The below command starts the apache2 service on the remote node.

ansible all -m ansible.builtin.service -a "name=apache2 state=started"

Q8. What is the Ansible command to check the uptime of all servers?

Answer: Below is the Ansible command to check the uptime of the servers. This command will give you output stating how long each remote node has been up.

ansible all -a /usr/bin/uptime 
Ansible ad hoc command to check the server's uptime, which is 33 days.

Q9. How to install the Apache service using an Ansible command?

Answer: To install the Apache service you can use an Ansible ad hoc command as shown below. In the below command, the -b flag means "become", i.e. run with root privileges.

ansible all -m apt -a  "name=apache2 state=latest" -b  

Q10. What are the steps or commands to install Ansible on an Ubuntu machine?

Answer: Execute the below commands to install Ansible on an Ubuntu machine.

# Update your system packages using apt update command
sudo apt update 
# Install below prerequisites package to work with PPA repository.
sudo apt install software-properties-common 
# Install Ansible PPA repository (Personal Package repository) 
sudo apt-add-repository --yes --update ppa:ansible/ansible
# Finally Install ansible
sudo apt install ansible

Q11. What are Ansible facts?

Answer: Ansible facts allow you to fetch or access data or values, such as the hostname or IP address, from the remote hosts; the gathered data is stored as variables.

Below is an example showing how you can print Ansible facts using an ansible-playbook named main.yml.

# main.yml 
---
- name: Ansible demo
  hosts: web
  remote_user: ubuntu
  tasks:
    - name: Print all available facts
      ansible.builtin.debug:
        var: ansible_facts
Run the playbook:

ansible-playbook main.yml
The output of the Ansible facts using ansible-playbook

Q12. What are Ansible tasks?

Answer: Ansible tasks are the units of work that an Ansible playbook performs, such as copying files, installing packages, editing configurations on a remote node, or restarting services on a remote node.

Let's look at a basic Ansible task. The below task makes sure the Apache service is running on the remote node.

tasks:
  - name: make sure apache is running
    service:
      name: httpd
      state: started

Q13. What are Ansible Roles?

Answer: Ansible roles are a way to structure your playbooks so that you can easily understand and work on them. A role splits content into different folders for simplicity: files are loaded from the files folder, variables from the vars folder, and similarly for handlers, tasks, and so on.

You can create different Ansible roles and reuse them as many times as you need.
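As an illustration, a hypothetical role named apache could be laid out and applied like this (the role name and file names are assumptions, not something fixed by Ansible):

```yaml
# Typical role layout (directory names are the conventions Ansible looks for):
#   roles/apache/tasks/main.yml      - tasks the role runs
#   roles/apache/handlers/main.yml   - handlers, e.g. restart apache
#   roles/apache/vars/main.yml       - role variables
#   roles/apache/files/              - static files to copy
#   roles/apache/templates/          - Jinja2 templates

# site.yml - applying the role to the 'web' group
---
- name: Apply the apache role
  hosts: web
  become: true
  roles:
    - apache
```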

Q14. What is the command to create a user on a Linux machine using Ansible?

Answer: To create a user on a Linux machine you can use an Ansible ad hoc command as shown below.

ansible all -m ansible.builtin.user -a "name=name password=password" -b

Q15. What is Ansible Tower?

Answer: Ansible Tower is a web-based solution that makes Ansible even easier to use for IT teams. Ansible Tower is free to use for up to 10 nodes. It captures all recent activity, such as the status of hosts, integrates notifications about all necessary updates, and schedules Ansible jobs very well.

Q16. How to connect with remote machines in Ansible?

Answer: After installing Ansible, configure the Ansible inventory with the list of hosts, grouping them accordingly, and connect to them using the SSH protocol. After you configure the inventory, you can test the connectivity between the Ansible controller and the remote nodes by using the ping module to ping all the nodes in your inventory.

ansible all -m ping

You should see output for each host in your inventory, similar to this:

aserver.example.org | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}

Q17. Does Ansible support AWS?

Answer: Yes. There are lots of AWS modules in Ansible that can be used to manage AWS resources. Refer to the Ansible collections in the Amazon namespace.

Q18. Which Ansible module allows you to copy files from the remote machine to the control machine?

Answer: The Ansible fetch module. The fetch module is used for fetching files from remote machines and storing them locally in a file tree, organized by hostname.

- name: Store file from remote node directory to host directory 
  ansible.builtin.fetch:
    src: /tmp/remote_node_file
    dest: /tmp/fetched_host_file

Q19. Where you can find Ansible inventory by default ?

Answer: The default location of Ansible inventory is /etc/ansible/hosts.

Q20. How can you check the Ansible version?

Answer: To check the Ansible version, run the ansible --version command below.

ansible --version
Checking the Ansible version by using the ansible --version command.


PAPER-2

Q1. Is Ansible agentless?

Answer: Yes, Ansible is an open-source tool that is agentless. Agentless means that when you install Ansible on the controller host and use it to deploy or configure changes on remote nodes, the remote nodes don't require any agent or software to be installed.

Q2. What is the primary use of Ansible?

Answer: Ansible is used in IT infrastructure to manage and deploy software applications on remote machines.

Q3. What are Ansible hosts or remote nodes?

Answer: Ansible hosts are the machines or nodes on which the Ansible controller deploys software. An Ansible host could run Ubuntu, Red Hat, Windows, etc.

Q4. What is CI (Continuous Integration)?

Answer: CI, also known as continuous integration, is primarily used by developers. Successful continuous integration means that whenever there is a change in code, the developers' code is built, tested, and then pushed to a shared repository.

Q5. What is the main purpose of a role in Ansible?

Answer: The main purpose of an Ansible role is to reuse content by following the proper Ansible directory structure. These directories contain configuration files and content that would otherwise need to be declared in various places and various modules; roles are used to minimize that duplicated work.

Q6. What is the control node?

Answer: The control node is the node on which Ansible is installed. Before you set up the control node, make sure Python is already installed on the machine.

Q7. Can you have a Windows machine as the controller node?

Answer: No.

Q8. What is the other name for Ansible hosts?

Answer: Ansible hosts can also be called managed nodes. Ansible is not installed on managed nodes.

Q9. What is the hosts file in Ansible?

Answer: The inventory file is also known as the hosts file in Ansible; by default it is stored at /etc/ansible/hosts.

Q10. What are collections in Ansible?

Answer: Ansible collections are a distribution format that can include playbooks, roles, modules, and plugins.
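For example, collections are commonly installed from a requirements.yml file; the collection names and version constraint below are only illustrative:

```yaml
# requirements.yml - install with: ansible-galaxy collection install -r requirements.yml
collections:
  - name: amazon.aws          # AWS modules and plugins
  - name: community.general   # a broad set of community modules
    version: ">=5.0.0"        # optional version constraint
```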

Q11. What is an Ansible module?

Answer: Ansible contains various modules, each with a specific purpose, such as copying data, adding a user, and many more. You can invoke a single module within a task defined in a playbook, or several different modules across a playbook.

Q12. What is a task in Ansible?

Answer: To perform any action you need a task; similarly, in Ansible you need a task to run a module. With an Ansible ad hoc command you can execute a task only once.

Q13. What is an Ansible playbook?

Answer: An Ansible playbook is an ordered list of tasks that you run; playbooks are designed to be human-readable and are developed in a basic text language. For example, the below playbook has two tasks: the first creates a user named adam and the second creates a user named shanky on the remote node.

---
- name: Ansible Create user functionality module demo
  hosts: web # Defining the remote server
  tasks:

    - name: Add the user 'Adam' with a specific uid and a primary group of 'sudo'
      ansible.builtin.user:
        name: adam
        comment: Adam
        uid: 1095
        group: sudo
        createhome: yes        # Defaults to yes
        home: /home/adam   # Defaults to /home/<username>

    - name: Add the user 'shanky' with a specific uid and a primary group of 'sudo'
      ansible.builtin.user:
        name: shanky
        comment: shanky
        uid: 1089
        group: sudo
        createhome: yes        # Defaults to yes
        home: /home/shanky  # Defaults to /home/<username>



Creating two users using ansible-playbook

Q14. Where do you create basic inventory in Ansible?

Answer: /etc/ansible/hosts

Q15. What is Ansible Tower?

Answer: Ansible Tower is a web-based solution that makes Ansible even easier to use for IT teams. Ansible Tower is free to use for up to 10 nodes. It captures all recent activity, such as the status of hosts, integrates notifications about all necessary updates, and schedules Ansible jobs very well.

Q16. What is the command for running the Ansible playbook?

Answer: Below is the command to run or execute an Ansible playbook.

ansible-playbook my_playbook

Q17. On which protocol does Ansible communicate to remote node?

Answer: SSH

Q18. How to use ping module to ping all the nodes?

Answer: Below is the command you can use to ping all the remote nodes.

ansible all -m ping

Q19. Provide an example of running a live command on all of your nodes.

Answer:

ansible all -a "/bin/echo hello"
Printing hello on the remote node using an Ansible command.

Q20. How to run an Ansible command with privilege escalation (sudo and similar)?

Answer: The below command executes an Ansible command with root access by using the --become flag.

ansible all -m ping -u adam --become

PAPER-3

Q1. Which module allows you to create a directory?

Answer: The Ansible file module allows you to create a directory.
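A minimal sketch of such a task; the path, owner, and mode are example values:

```yaml
- name: Create a directory on the remote node
  ansible.builtin.file:
    path: /opt/app          # example path
    state: directory        # create it as a directory
    owner: ubuntu           # example owner
    mode: "0755"            # quoted so YAML keeps the leading zero
```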

Q2. How to define the number of parallel processes while communicating with hosts?

Answer: By setting the forks value in Ansible; to set forks, edit the ansible.cfg file.
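For instance, an ansible.cfg raising forks from the default of 5 to 10 (the value is just an example):

```ini
# ansible.cfg
[defaults]
forks = 10
```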

Q3. Is Ansible an agentless configuration management tool?

Answer: Yes

Q4. What is Ansible inventory?

Answer: Ansible works against managed nodes or hosts to create or manage the infrastructure. We list these hosts or nodes in a file known as the inventory. An inventory can be written in one of two formats: INI or YAML.

Q5. How to create an Ansible inventory in the INI format?

Answer:

automate2.mylabserver.com
[httpd]
automate3.mylabserver.com
automate4.mylabserver.com
[labserver]
automate[2:6].mylabserver.com

Q6. How to create an Ansible inventory in the YAML format?

Answer:

all:
  hosts:
    automate2.mylabserver.com:
  children:
    httpd:
      hosts:
        automate3.mylabserver.com:
        automate4.mylabserver.com:
    labserver:
      hosts:
        automate[2:6].mylabserver.com:

Q7. What are Ansible tags?

Answer: When you need to run or skip specific parts of a playbook, you can use Ansible tags. You can apply tags at the block level, playbook level, individual task level, or role level.

tasks:
- name: Install the servers
  ansible.builtin.yum:
    name:
    - httpd
    - memcached
    state: present
  tags:
  - packages
  - webservers

Q8. What are the key things required for a playbook?

Answer: Hosts should be configured in the inventory, tasks should be declared in the playbook, and Ansible should already be installed.

Q9. How to reuse existing Ansible tasks?

Answer: By importing the tasks with import_tasks. Ansible import_tasks imports a list of tasks to be added to the current playbook for subsequent execution.
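A short sketch; the file name users.yml is an assumed example containing a plain list of tasks:

```yaml
# playbook.yml
---
- name: Reuse an existing task file
  hosts: web
  tasks:
    - name: Import common user-creation tasks
      ansible.builtin.import_tasks: users.yml   # statically imported at parse time
```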

Q10. How can you secure the data in an Ansible playbook?

Answer: You can secure the data using Ansible Vault to encrypt it and later decrypt it. Ansible Vault is a feature of Ansible that lets you keep sensitive data such as passwords or keys in encrypted files, rather than as plaintext in playbooks or roles.
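As an illustration, a playbook can load an encrypted variables file; secrets.yml and db_password are assumed names, with the file created via ansible-vault create secrets.yml:

```yaml
---
- name: Use vault-encrypted variables
  hosts: web
  vars_files:
    - secrets.yml             # encrypted with Ansible Vault
  tasks:
    - name: Use a secret without printing it
      ansible.builtin.debug:
        msg: "db password length is {{ db_password | length }}"
```

Run the playbook with --ask-vault-pass (or --vault-password-file) so Ansible can decrypt the file at runtime.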

Q11. What is Ansible Galaxy?

Answer: Ansible Galaxy is a repository for Ansible Roles that are available to drop directly into your Playbooks to streamline your automation projects.

Q12. How can you download roles from Ansible Galaxy ?

Answer: Below code allows you to download roles from Ansible Galaxy.

ansible-galaxy install username.role_name

Q13. What are variables in Ansible?

Answer: Ansible variables are assigned values which are then used elsewhere in your automation. You can create variables by defining them in a file, passing them at the command line, or registering the return value or values of a task as a new variable; once created, they can be used in tasks, templates, and conditionals.
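A brief sketch of two of the ways mentioned above (the variable and host names are illustrative):

```yaml
---
- name: Variable demo
  hosts: web
  vars:
    app_port: 8080                      # defined directly in the play
  tasks:
    - name: Use a play variable
      ansible.builtin.debug:
        msg: "The app listens on port {{ app_port }}"

    - name: Register a task result as a new variable
      ansible.builtin.command: whoami
      register: who                     # who.stdout now holds the output

    - name: Use the registered variable
      ansible.builtin.debug:
        msg: "Connected as {{ who.stdout }}"
```

Variables can also be passed at the command line with -e, e.g. ansible-playbook demo.yml -e app_port=9090.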

Q14. Command to generate a SSH key-pair for connecting with remote machines?

Answer: ssh-keygen

Q15. What is Ansible Tower?

Answer: Ansible Tower is a web-based solution that makes Ansible even easier to use for IT teams. Ansible Tower is free to use for up to 10 nodes. It captures all recent activity, such as the status of hosts, integrates notifications about all necessary updates, and schedules Ansible jobs very well.

Q16. What is the command for running a playbook?

Answer: ansible-playbook my_playbook

Q17. Does Ansible support AWS?

Answer: Yes. There are lots of AWS modules present in Ansible.

Q18. How to create encrypted files using Ansible?

Answer: By using the below ansible-vault command.

ansible-vault create file.yml 

Q19. What are some key features of Ansible Tower?

Answer:

With Ansible Tower, you can view dashboards and see what is going on in real time, including job updates and who ran a playbook or Ansible command; it also provides integrated notifications, lets you schedule Ansible jobs, and lets you run Ansible remote commands.

Q20. What is the first-line syntax of any Ansible playbook?

Answer: The first line of an Ansible playbook is three dashes:

---   # The first Line Syntax of ansible playbook


Conclusion

In this ultimate guide, you had a chance to revise everything you need to pass an interview, with Ansible interview questions and answers.

Now you have sound knowledge of Ansible and its various components, modules, and features, and you are ready for your upcoming interview.

How to Install Apache Tomcat using Ansible

If you are looking to install Apache Tomcat instances, Ansible is a great way to do it.

Ansible is an agentless automation tool that manages machines over the SSH protocol by default. Once installed, Ansible does not add a database, and there will be no daemons to start or keep running.

With Ansible, you can create an Ansible playbook and use it to deploy dozens of Tomcat instances in one go. In this tutorial, you will learn how to install Apache Tomcat using Ansible. Let's get started.

Prerequisites

This post will be a step-by-step tutorial. If you’d like to follow along, be sure you have:

  • An Ansible controller host. This tutorial will be using Ansible v2.9.18.
  • A remote Linux computer to test out the tomcat installation. This tutorial uses Ubuntu 20.04.3 LTS as the remote node.
  • An inventory file and one or more hosts are configured to run Ansible commands and playbooks. The remote Linux computer is called webserver, and this tutorial uses an inventory group called web.

Ensure your remote machine's IP address is inside /etc/ansible/hosts (either as a single remote machine or defined as a group).

Building the Tomcat Ansible playbook on the Ansible controller

Ansible is an automation tool used for deploying applications and systems easily, whether cloud, services, orchestration, etc. Ansible uses the YAML language to build playbooks, which are then used to deploy or configure the required change. To deploy Tomcat, let's move ahead and create the Ansible playbook.

  • SSH or log in to any of your Linux machines.
  • Create a file named my_playbook3.yml inside the /etc/ansible folder and paste in the below code.

The below playbook contains all the tasks to install Tomcat on the remote node. The first task updates the system packages using the apt command, then the tomcat user and group are created. The next tasks install Java, install Tomcat, and create the necessary folders and permissions for the Tomcat directory.

---
- name: Install Apache Tomcat10 using ansible
  hosts: webserver
  remote_user: ubuntu
  become: true
  tasks:
    - name: Update the System Packages
      apt:
        upgrade: yes
        update_cache: yes

    - name: Create a Tomcat User
      user:
        name: tomcat

    - name: Create a Tomcat Group
      group:
        name: tomcat

    - name: Install JAVA
      apt:
        name: default-jdk
        state: present


    - name: Create a Tomcat Directory
      file:
        path: /opt/tomcat10
        owner: tomcat
        group: tomcat
        mode: "0755"
        recurse: yes

    - name: download & unarchive tomcat10 
      unarchive:
        src: https://mirrors.estointernet.in/apache/tomcat/tomcat-10/v10.0.4/bin/apache-tomcat-10.0.4.tar.gz
        dest: /opt/tomcat10
        remote_src: yes
        extra_opts: [--strip-components=1]

    - name: Change ownership of tomcat directory
      file:
        path: /opt/tomcat10
        owner: tomcat
        group: tomcat
        mode: "u+rwx,g+rx,o=rx"
        recurse: yes
        state: directory

    - name: Copy Tomcat service from local to remote
      copy:
        src: /etc/tomcat.service
        dest: /etc/systemd/system/
        mode: 0755

    - name: Start and Enable Tomcat 10 on server
      systemd:
        name: tomcat
        state: started
        enabled: true
        daemon_reload: true

Running the Ansible playbook on the Ansible controller

You created the playbook in the previous section, which is great, but it doesn't do much unless you deploy it. Deploy the playbook using the ansible-playbook command.

Assuming you are logged into Ansible controller:

  • Now run the playbook using the below ansible-playbook command.
ansible-playbook my_playbook3.yml

As you can see below, all the tasks completed successfully. If a TASK's status shows ok, the task was already complete; a status of changed means Ansible performed the task on the remote node.

Running the ansible-playbook on the Ansible controller host
  • Next, verify on the remote machine that Apache Tomcat is installed successfully and started, using the below commands.
systemctl status tomcat 
service tomcat status
Verifying the tomcat service on the remote node
  • You can also verify by running the process commands below.
ps -ef | grep tomcat
ps -aux | grep tomcat
Checking the tomcat process


Tomcat files and Tomcat directories on a remote node

Now that you have successfully installed Tomcat on the remote node and verified the tomcat service, it is equally important to check the Tomcat files created and the purpose of each of them.

  • Firstly, all the Tomcat files and directories are stored under <tomcat-installation-directory>/*.

Your installation directory is represented by the environment variable $CATALINA_HOME.

  • The Tomcat directories and files should be owned by the tomcat user.
  • The tomcat user should be a member of the tomcat group.
Verify all files of tomcat
  • <tomcat-installation-directory>/bin: This directory contains the startup and shutdown scripts (startup.sh and shutdown.sh) to start or stop Tomcat directly without using the configured tomcat service.
Verify installation directory of tomcat
  • <tomcat-installation-directory>/conf: This is a very crucial directory where Tomcat keeps all its configuration files.
Verify Tomcat configuration directory
  • <tomcat-installation-directory>/logs: In case you get any errors while running Tomcat, you can look at your safeguard, i.e. the logs; Tomcat creates its own logs under this directory.
Verify Tomcat logs directory
  • <tomcat-installation-directory>/webapps: This is the directory where you place your code, such as a .war file, to run your applications. It is highly recommended to stop Tomcat, deploy your application inside this directory, and then start Tomcat.
Verify Tomcat Code directory

Conclusion

In this tutorial, we covered in depth how to install Apache Tomcat 10 on Ubuntu using an Ansible controller, and finally discussed the files and directories that are most important for Apache Tomcat admins and developers. If you wish to run your applications in a lightweight and easy way, Apache Tomcat is your friend.

How to Work with Ansible When and Other Conditionals

If you need to execute Ansible tasks based on different conditions, then you're in for a treat. Ansible when and other conditionals let you evaluate conditions, such as the operating system, or whether one task depends on the previous task.

In this tutorial, you’re going to learn how to work with Ansible when and other conditionals so you can execute tasks without messing things up.


How to Deploy a stateful application in an AWS EKS cluster

Stateful deployment means a deployment that contains persistent storage. In this tutorial you will learn how to deploy a stateful application.

To perform the deployment you will need a load balancer that routes traffic to the WordPress pods, and the WordPress site pods will store data in the MySQL pod by routing it via the MySQL service, as shown in the picture below.

WordPress or Nginx – these are known as web layers. In the case below, WordPress is used as the frontend and has a persistent volume (EBS) to store HTML pages.

MongoDB or MySQL – these are known as data layers. In the case below, MySQL is used as the backend and has a persistent volume (EBS) to store MySQL data.

Prerequisites

  • EKS cluster
  • AWS account

Creating a Namespace

  1. Create a namespace with the below command. Creating a namespace allows you to separate a particular project, team, or environment.
kubectl create namespace stateful-deployment

Creating a Storage class required for Persistent Volume (PV)

  1. Create a storage class, which is required for persistent volumes in the Kubernetes cluster.
    • In AWS EKS a persistent volume (PV) is implemented via an EBS volume, which has to be declared as a storage class first. To declare a storage class, create a file gp2-storage-class.yaml and copy/paste the below code.

A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
  2. Create the storage class by running the below command.
kubectl apply -f gp2-storage-class.yaml --namespace=stateful-deployment
  3. In case you receive any error, run the below command.
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' --namespace=stateful-deployment
  4. Verify all the storage classes in the cluster.
kubectl get storageclasses --all-namespaces

Creating a persistent volume claim (PVC).

  1. Next, create a persistent volume claim (PVC). A stateful app can then request a volume by specifying a PVC and mounting it in its corresponding pod. Create a file named pvc.yaml and copy/paste the below content.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

  • Verify the claims with kubectl get pvc --namespace=stateful-deployment

Creating secrets to store passwords

  1. Create a secret which stores the MySQL password, to be injected as an environment variable into the container.
kubectl create secret generic mysql-pass --from-literal=password=mysql-pw --namespace=stateful-deployment
  2. Now, get or fetch the secret that was recently created.
kubectl get secrets --namespace=stateful-deployment

Creating the Stateful backend deployment in the cluster

  1. Create a file mysql.yaml for the deployment and copy/paste the below code.
    • apiVersion is the Kubernetes API version used to manage the object. Here the first object is a service.
    • For Deployments and ReplicaSets: apps/v1
    • For Pods and Services: v1
  2. kind denotes what kind of resource/object Kubernetes will create (in the below case: a deployment).
    • metadata: data that helps uniquely identify the object, including a name string, UID, and optional namespace.
    • Labels are key/value pairs attached to objects, such as pods. Labels are intended to specify identifying attributes of objects that are meaningful and relevant to users. They can be attached to objects at creation time and added or modified at any time, and each key must be unique for a given object.
    • spec: the state you desire for the object.
    • Define the label for the pod in the deployment under template.
    • The selector field defines how the Deployment finds which pods to manage; here, you simply select a label that is defined in the pod template.
    • With labels, you can mark different types of resources in your cluster with the same key/value pair, then specify a selector to match the label so that you can build upon these other resources. If you plan to expose your app publicly, you must use a label that matches the selector you specify in the service.
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
  • Launch the MySQL deployment and service by running the below command.
kubectl apply -f mysql.yaml --namespace=stateful-deployment

Stateless vs Stateful Deployments in the Kubernetes Cluster

Deployment with EBS v/s StatefulSet with EBS

Deployment with EBS: all the pods are created on the same node and the PV is attached there.

StatefulSet with EBS: pods are created on various nodes with different EBS volumes attached.

Deploy application using EBS vs Deploy applications using EFS

EBS

  • EBS volumes are tied to one AZ, so recreated pods can only be started in the AZ of the previous EBS volume.
  • For example, if you have an EC2 instance in AZ (a) with a pod running inside it attached to an EBS volume in AZ (a), then if your pod gets restarted on another instance in the same AZ (a), it will still be able to attach to the EBS volume in AZ (a); but if the pod gets restarted on an instance in a different AZ (b), it won't be able to attach to the EBS volume in AZ (a) and will instead need a new EBS volume in AZ (b).
Stateful app deployment using EBS

EFS

  • To deploy the application properly you need a volume shared between all pods, with pods deployed across multiple AZs.
  • EBS volumes are not shared volumes; they belong to a particular AZ rather than spanning multiple AZs.
  • So instead use EFS (Elastic File System). EFS is mounted as a network file system on multiple EC2 instances regardless of AZ.
  • EFS works with EC2 instances across multiple AZs and is highly available, but it is roughly three times more expensive than gp2.
  • Using EFS, pods can be launched on any node in any AZ.
Stateful app deployment using EFS

The Ultimate Guide on API Testing with Complete Automation

API Automation with Rest Assured library

What is an API ?

An API is an interface that allows communication between a client and a server to simplify building client-server software.

An API is software that allows two applications to talk to each other. Each time you use an app like Facebook, send an instant message, or check the weather on your phone, you're using an API.

When you use an application on your mobile phone, the application connects to the Internet and sends data to a server. The server retrieves that data, interprets it, performs the necessary actions, and sends it back to your phone. The application then interprets that data and presents you with the information you wanted in a readable way. This is all possible with an API.

Difference between types of APIs [SOAP v/s REST]

REST: Representational State Transfer. It is a lightweight and scalable service built on the REST architecture. It uses the HTTP protocol and is based on an architectural pattern.

Elements of REST API:

  • Method: GET, POST, PUT, DELETE
    • POST – used to send data to the server, such as customer information, or to upload a file using the RESTful web service. To send the data, use form parameters and a body payload.
    • GET – used to retrieve data from the server using the RESTful web service. It only extracts data; there is no change to the data, and no payload or body is required. To filter the data, use query parameters.
    • PUT – used to update existing resources using the RESTful web service.
    • DELETE – used to delete resources using the RESTful web service.
  • Request Headers: additional instructions that are sent along with the request.
  • Request Body: data sent along with a POST request when the client wants to add a resource to the server.
  • Response status code: returned along with the response, such as 200, 500, etc.
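The request elements above (method, headers, body) can be sketched with Python's standard library; the endpoint URL here is a placeholder for illustration, not a real API:

```python
import urllib.request

# Build (but do not send) a POST request carrying a JSON body,
# showing the method, request header, and request body elements.
body = b'{"customer": "alice"}'
req = urllib.request.Request(
    url="https://example.com/api/customers",       # placeholder endpoint
    data=body,                                     # request body (payload)
    headers={"Content-Type": "application/json"},  # request header
    method="POST",                                 # HTTP method
)

print(req.get_method())                # POST
print(req.get_header("Content-type"))  # application/json
```

Calling `urllib.request.urlopen(req)` would actually send it; composing the request first makes the individual elements easy to see.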

Characteristics of REST

  • REST is an architectural style in which a web service can only be treated as a RESTful service if it follows the constraints of being: 1. Client-Server 2. Stateless 3. Cacheable 4. Layered System 5. Uniform Interface.
  • Stateless means that the state of the application is not maintained in REST. For example, if you delete a resource from a server using the DELETE command, you cannot expect that delete information to be passed to the next request; each request must carry everything the server needs to process it.
  • The cache concept helps with the problem created by statelessness described in the last point. Since each client request is independent, the client might ask the server for the same request again, and a cache can serve that repeat request.
  • REST uses Uniform Resource Locators to access the components hosted on the server. For example, if there is an object that represents the data of an employee hosted at automateinfra.com, a URI such as automateinfra.com/blog can exist to access it.
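The locator from the example above can be decomposed into its components with Python's standard library (the `https://` scheme and query string here are invented for illustration):

```python
from urllib.parse import urlparse

# Split the article's example resource locator into its parts.
uri = "https://automateinfra.com/blog?page=2"
parts = urlparse(uri)

print(parts.scheme)  # https
print(parts.netloc)  # automateinfra.com
print(parts.path)    # /blog
print(parts.query)   # page=2
```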

SOAP: Simple Object Access Protocol.

  • SOAP follows strict rules for communication between client and server; unlike REST, it does not follow the constraints of Uniform Interface, Client-Server, Stateless, Cacheable, and Layered System.
  • SOAP was designed with a specification. It includes a WSDL file which has the required information on what the web service does in addition to the location of the web service.
  • The other key challenge is the size of the SOAP messages which get transferred from the client to the server. Because of the large messages, using SOAP in places where bandwidth is a constraint can be a big issue.
  • SOAP uses service interfaces to expose its functionality to client applications. In SOAP, the WSDL file provides the client with the necessary information which can be used to understand what services the web service can offer.
  • SOAP uses only XML to transfer or exchange information, whereas REST can use plain text, HTML, JSON, XML, and more.
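The XML-versus-JSON contrast can be shown with a short sketch using only Python's standard library; the employee record is a made-up example, and the XML shown is a plain rendering, not a full SOAP envelope:

```python
import json
import xml.etree.ElementTree as ET

record = {"id": 42, "name": "alice"}

# REST commonly exchanges JSON (though plain text, HTML, or XML also work).
as_json = json.dumps(record)

# SOAP messages are always XML; a simplified (non-SOAP) XML rendering:
root = ET.Element("employee")
for key, value in record.items():
    ET.SubElement(root, key).text = str(value)
as_xml = ET.tostring(root, encoding="unicode")

print(as_json)  # {"id": 42, "name": "alice"}
print(as_xml)   # <employee><id>42</id><name>alice</name></employee>
```

The XML carries the same data but is noticeably more verbose, which is one reason SOAP message size becomes an issue where bandwidth is constrained.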

Application Programming Interface theory (API-theory)

  • When a website is owned by a single owner such as Google: the frontend and backend may be written in different languages, which can cause a lot of compatibility issues (for example, the frontend uses Angular and the backend uses Java), so you need an API to bridge them.
  • When your client needs to access data from your website, you expose an API rather than exposing your entire code and packages.
  • When a client connects to another client or server using an API, the data is transmitted as either XML or JSON, both of which are language independent.

Ultimate Jenkins tutorial for DevOps Engineers

Jenkins is an open source automated CI/CD tool, where CI stands for continuous integration and CD stands for continuous delivery. Jenkins ships with its own built-in Java servlet container, Jetty. Jenkins can also run in other servlet containers such as Apache Tomcat or GlassFish.

  • Jenkins is used to perform smooth and quick deployments. It can be deployed on a local machine, in an on-premises data center, or on any cloud.
  • Jenkins takes your code, whether Python, Java, Go, JavaScript, etc., builds it using tools such as Maven (one of the most widely used build tools), and packages it in WAR or ZIP format, or sometimes as a Docker image. Finally, once everything is built properly, it deploys the artifact as and when required. It also integrates very well with lots of third-party tools.

JAVA_HOME and PATH are variables to enable your operating system to find required Java programs and utilities.

JAVA_HOME: JAVA_HOME is an (OS) environment variable that can optionally be set after either the (JDK) or (JRE) is installed. The JAVA_HOME environment variable points to the file system location where the JDK or JRE was installed. This variable should be configured on all OS’s that have a Java installation, including Windows, Ubuntu, Linux, Mac, and Android. 

The JAVA_HOME environment variable is not actually used by the locally installed Java runtime. Instead, other programs installed on a desktop computer that requires a Java runtime will query the OS for the JAVA_HOME variable to find out where the runtime is installed. After the location of the JDK or JRE installation is found, those programs can initiate Java-based processes, start Java virtual machines and use command-line utilities such as the Java archive utility or the Java compiler, both of which are packaged inside the Java installation’s \bin directory.

  • JAVA_HOME if you installed the JDK (Java Development Kit)
    or
  • JRE_HOME if you installed the JRE (Java Runtime Environment) 

PATH: Set the PATH environment variable if you want to be able to conveniently run the executables (javac.exe, java.exe, javadoc.exe, and so on) from any directory without having to type the full path of the command. If you do not set the PATH variable, you need to specify the full path to the executable every time you run it, such as:

C:\Java\jdk1.8.0\bin\javac Myprogram.java
# The following is an example of a PATH environment variable:

C:\Java\jdk1.7.0\bin;C:\Windows\System32\;C:\Windows\;C:\Windows\System32\Wbem
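What the OS does with PATH can be sketched in a few lines of Python: walk each directory on the search path and take the first one containing an executable with the requested name. The `find_on_path` helper and the throwaway directory are hypothetical, standing in for a real `C:\Java\jdk1.8.0\bin`:

```python
import os
import stat
import tempfile

def find_on_path(name, path_dirs):
    """Return the first executable called `name` found in `path_dirs`,
    mimicking what the OS does when you type a bare command like `javac`."""
    for directory in path_dirs:
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

# Demonstrate with a temporary directory standing in for the JDK bin folder.
with tempfile.TemporaryDirectory() as fake_bin:
    fake_javac = os.path.join(fake_bin, "javac")
    with open(fake_javac, "w") as f:
        f.write("#!/bin/sh\n")
    os.chmod(fake_javac, os.stat(fake_javac).st_mode | stat.S_IXUSR)

    search_path = [fake_bin] + os.environ.get("PATH", "").split(os.pathsep)
    print(find_on_path("javac", search_path))  # prints the path inside fake_bin
```

If the directory is not on the search path, the lookup fails and you are back to typing the full path, exactly as the text describes.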

Installing Jenkins using msi installer on Windows Machine

MSI is an installer file that installs your program on the executing system. Setup.exe is an application (executable file) that can contain MSI file(s) as one of its resources. .msi is the file extension of Windows Installer packages. An MSI file is a compressed package of installer files. It contains all the information pertaining to adding, modifying, storing, or removing the respective software, including the data, instructions, processes, and add-ons that are necessary for the application to work normally.

EXE is short for executable. This is any kind of binary file that can be executed; all Windows programs are EXE files, and prior to MSI files, all installers were EXE files. .exe is the file extension of an executable file. An executable file executes a set of instructions or code when opened. It is compiled from source code to binary code, is understandable by the machine, and can be directly executed by the Windows OS.

MSI is a file extension of windows installer which is a software component of Microsoft Windows used for the installation, maintenance, and removal of software. Whereas, exe is a file extension of an executable file that performs indicated tasks according to the encoded instructions. 

  1. Navigate to https://www.jenkins.io/download/ and select the Windows option; your download of the Jenkins MSI will begin.
  2. Once downloaded, click on jenkins.msi.
  3. Continue the Jenkins setup.
  4. Select port 8080, click on Test Port, and then hit Next.
  5. Provide the admin password from the path shown in red.
  6. Further, install the plugins required for Jenkins.
  7. Next, it will prompt for the first admin user. Fill in the required information and keep it safe with you, as you will use it to log in.
  8. Now the Jenkins URL configuration screen will appear; keep it as it is for now.
  9. Click on Save and Finish.
  10. Now your Jenkins is ready; click on Start using Jenkins. Soon you will see the Jenkins Dashboard. You can create new jobs by clicking on New Item.

Installing Jenkins using jenkins exe on Windows Machine

  1. Similarly, now download jenkins.war from the Jenkins URL by clicking on Generic Java package (.war).
  2. Next, run the command below.
java -jar jenkins.war --httpPort=8181
  3. Next, copy the Jenkins password from the log output and paste it in as you did earlier in the Windows MSI section, point (5), and follow the rest of the points.

Installing jenkins on Apache Tomcat server on Windows Machine

  1. Install Apache Tomcat on the Windows machine from https://tomcat.apache.org/download-90.cgi and click on the Tomcat installer matching your system. This tutorial is performed on a 64-bit Windows machine.
  2. Next, unzip the Tomcat installation folder and copy the jenkins.war file into the webapps folder.
  3. Next, go inside the bin folder and start Tomcat by clicking on the startup batch script.
  4. Finally, you will notice that Apache Tomcat has started, and Jenkins as well.
  5. Now navigate to the localhost:8080 URL and you should see the Tomcat page.
  6. Further, navigate to localhost:8080/jenkins to be redirected to the Jenkins page.

Configuring the Jenkins UI

  1. First click on Manage Jenkins and then navigate to Configure System.
  2. Next, add the system message and save it; Jenkins will display this message on the dashboard every time.
  3. To constrain the names of jobs, add a name pattern.
  4. Next, try creating a new Jenkins job with a random name: it will not allow you and will display the error message.

Managing User’s and Permission’s in Jenkins UI

  • Go to Manage Jenkins and navigate to Manage Users in the Jenkins UI.
  • Then create three users: admin, dev, and qa.
  • Next, navigate to Manage Jenkins and choose Configure Global Security.
  • Next, select Project-based Matrix Authorization Strategy and define the permissions for all users as you want.

Role Based Strategy

  • In the previous section you noticed that adding all users and granting all permissions one by one is a tough job. So, instead, create a role and add users to it. To do that, the first step is to install the Role-based Authorization Strategy plugin.
  • Next, select Role-Based Strategy and define the permissions for all users as you want.
  • Next, navigate to Manage Jenkins, then to Manage and Assign Roles, and click on Manage Roles.
  • Add 3 global roles named DEV Team, QA Team, and admin.
  • Add 2 item roles, developers and testers, with defined patterns so that job names are declared accordingly.
  • Next, click on Assign Roles.
  • Assign the roles as required.

Conclusion

In this tutorial you learned how to install Jenkins on Windows in various ways, how to configure the Jenkins dashboard UI, and how to manage users and permissions.

How does Python work Internally with a computer or operating system

Are you a Python developer trying to understand how the Python language works? This article is for you, walking through every bit and piece of the Python language. Let’s dive in!

Python

Python is a high-level language used in designing, deploying, and testing in lots of places. It is consistently ranked among today’s most popular programming languages. It is a dynamic, object-oriented language that also supports procedural styles, and it runs on all major hardware platforms. Python is an interpreted language.

High Level v/s Low Level Languages

High-Level Language: A high-level language is easier to understand as it is human readable. It is either compiled or interpreted. It consumes more memory and is slower in execution. It is portable. It requires a compiler or interpreter for translation.

The fastest translator for a high-level language is a compiler, since it translates the whole program to machine code ahead of time.

Low-Level Language: Low-level languages are machine-friendly, meaning machines can read the code but humans cannot easily. They consume less memory and are fast to execute. They cannot be ported. They require an assembler for translation.

Interpreted v/s Compiled Language

Compiled Language: A compiled language is first compiled into the instructions of the target machine, that is, machine code. For example: C, C++, C#, COBOL.

Interpreted Language: An interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them to have been previously compiled into a machine-language program; such languages are known as interpreted languages. For example: JavaScript, Perl, Python, BASIC.

Python vs C++/C Language Compilation Process

C++ or C Language: These languages need compilation, which means human-readable code has to be translated into machine-readable code. The machine code is executed by the CPU. Below is the sequence in which code execution takes place.

  1. Human-readable code is written.
  2. Compilation takes place.
  3. Compilation generates an executable file in machine-code format (understood by the hardware).
  4. The executable file is executed by the CPU.

Python Language:

Python is a high-level language

Bytecode, also termed p-code, is a form of instruction set designed for efficient execution by a software interpreter

  1. Python code is written in .py format, such as test.py.
  2. The Python interpreter compiles the code into a .pyc or .pyo file, which contains byte code, not machine code (not understood by the machine).
  3. Once your program has been compiled to byte code (or the byte code has been loaded from existing .pyc files), it is shipped off for execution to something generally known as the Python Virtual Machine (PVM).
  4. The PVM converts the byte code, e.g. test.pyc, into machine code such as (10101010100010101010).
  5. Finally, the program is executed and the output is displayed.
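You can peek at the byte code the PVM executes using Python's built-in `dis` module; the exact opcode names vary between Python versions, so the sketch below only inspects them rather than assuming a fixed listing:

```python
import dis

# A tiny function to compile; CPython turns its body into byte code,
# and the Python Virtual Machine executes those instructions one by one.
def add(a, b):
    return a + b

# List the opcode names of the compiled byte code.
ops = [instr.opname for instr in dis.get_instructions(add)]
print(ops)  # opcode names differ across Python versions
```

`dis.dis(add)` would print a full human-readable disassembly, which is a handy way to see step 2 of the list above in practice.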

Conclusion

In this tutorial, you learned how the Python language works and interacts with the operating system and hardware. So, which application are you planning to build using Python?

Windows Boot Process Step by Step

If you are looking to find how exactly windows booting happens then you are at the right place. In this tutorial, you will learn step by step how windows boot processing works. Let’s dive in.

Technical Terms:

Firmware

Firmware is software embedded in an electronic component, containing components such as the BIOS. These instructions tell electronic components how to operate. A kernel is not to be confused with a basic input/output system, which is an independent program stored on a chip within a computer’s circuit board.

Firmware is stored in non-volatile memory devices such as ROM, EPROM, or flash memory

CMOS

A complementary metal-oxide-semiconductor (CMOS) is a type of integrated circuit technology. The term is often used to refer to a battery-powered chip found in many personal computers that holds some basic information, including the date and time and system configuration settings, needed by the basic input/output system (BIOS) to start the computer.

The CMOS (Complementary Metal-Oxide Semiconductor) chip stores the settings that you make with the BIOS configuration program.

Flash Memory

Flash memory is long-lived, non-volatile storage that retains information even when the system is powered off. Flash memory is widely used in car radios, cell phones, digital cameras, PDAs, solid-state drives, tablets, and printers.

Step by Step Windows boot Processing

Basic Input Output System (BIOS) – [STEP 1]

  • BIOS is the very first software to run when a computer is started and is stored on a small memory chip on the motherboard
  • BIOS provides steps to the computer on how to perform basic functions such as booting.
  • A computer’s basic input/output system (BIOS) is a program that’s stored in nonvolatile memory such as read-only memory (ROM) or flash memory, making it firmware
  • BIOS is also used to identify and configure the hardware in a computer such as the hard drive, floppy drive, optical drive, CPU, memory, and related equipment.
  • BIOS performs a POST (Power-On Self-Test). POST checks all the hardware devices connected to the computer, like RAM, the hard disk, etc., and makes sure that the system can run smoothly with those hardware devices. If the POST fails, the system halts with a beep sound.
  • The other task of the BIOS is to read the MBR. MBR stands for Master Boot Record, and it is the first sector on a hard disk. The MBR contains the partition table and the boot loader.

Power On Self Test (POST) – [STEP 2]

POST checks all the hardware devices connected to a computer like RAM, hard disk, etc, and makes sure that the system can run smoothly with those hardware devices. If the POST is a failure the system halts with a beep sound.

The first set of startup instructions is the POST, which is responsible for the following system and diagnostic functions:

  • Performs initial hardware checks, such as determining the amount of memory present
  • Verifies that the devices needed to start an operating system, such as a hard disk, are present
  • Retrieves system configuration settings from nonvolatile memory, which is located on the motherboard
  • If a single beep is sounded from the PC, then there are no hardware issues present in the system. However, an alternative beep sequence indicates that the PC has detected a hardware issue that needs to be resolved before moving on to the next stages of the process

MBR (Master Boot Record) – [STEP 3]

The BIOS reads the MBR. The MBR is the first sector on the hard disk and contains the boot loader.

Windows Boot Manager – [STEP 4]

Windows Boot Manager enables you to choose from multiple operating systems, select a kernel, or start Windows Memory Diagnostics. Windows Boot Manager then starts the Windows Boot Loader. It is located at %SystemDrive%\bootmgr.

Windows Boot Loader [STEP 5]

The boot loader is a small program that loads the kernel into the computer’s memory, that is, RAM. There are three boot files in a legacy Windows operating system: NTLDR, NTDETECT.COM, and Boot.ini.

  • The path of NTLDR (NT Loader) is C:\Windows\i386\NTLDR.
  • C:\boot.ini contains the configuration for NTLDR.
  • NTDETECT.COM detects hardware and passes the information to NTLDR.

Kernel Loading [STEP 6]

The Windows Boot Loader is responsible for loading the Windows kernel (Ntoskrnl.exe) and the Hardware Abstraction Layer (Hal.dll), which helps the kernel interact with hardware. The Windows executive then processes the configuration information stored in the registry under HKLM\SYSTEM\CurrentControlSet and starts services and drivers.

Winlogon.exe starts the login procedures of the Windows machine.

A High Level Summary of Boot Process:

  1. The computer loads the basic input/output system (BIOS) from ROM. The BIOS provides the most basic information about storage devices, boot sequence, security, Plug and Play (auto device recognition) capability and a few other items.
  2. The BIOS triggers a test called a power-on self-test (POST) to make sure all the major components are functioning properly. You may hear your drives spin and see some LEDs flash, but the screen, at first, remains black.
  3. The BIOS has the CPU send signals over the system bus to be sure all of the basic components are functioning. The bus includes the electrical circuits printed on and into the motherboard, connecting all the components with each other.
  4. The POST tests the memory contained on the display adapter and the video signals that control the display. This is the first point you’ll see something appear on your PC’s monitor.
  5. During a cold boot the memory controller checks all of the memory addresses with a quick read/write operation to ensure that there are no errors in the memory chips. Read/write means that data is written to a bit and then read back from that bit. You should see some output to your screen – on some PCs you may see a running account of the amount of memory being checked.
  6. The computer loads the operating system (OS) from the hard drive into the system’s RAM. That ends the POST and the BIOS transfers control to the operating system. Generally, the critical parts of the operating system – the kernel – are maintained in RAM as long as the computer is on. This allows the CPU to have immediate access to the operating system, which enhances the performance and functionality of the overall system

High Level Summary 2

  1. BIOS is the first software to run when a computer is started and stored on a small memory chip on the motherboard. BIOS is also used to identify hardware issues using POST and configure the hardware in a computer such as the hard drive, floppy drive, optical drive, CPU, memory, and related equipment.
  2. The other task of the BIOS is to read the MBR, which hands control over to the Windows Boot Manager.
  3. Further, the Windows Boot Manager enables you to choose from multiple operating systems, select a kernel, or start Windows Memory Diagnostics. The Windows Boot Manager starts the Windows Boot Loader and is located at %SystemDrive%\bootmgr.
  4. The boot loader is a small program that loads the kernel into the computer’s memory, that is, RAM. (GRUB and LILO play the equivalent role on Linux.) There are three boot files in a legacy Windows operating system: NTLDR, NTDETECT.COM, and Boot.ini.
    • The path of NTLDR (NT Loader) is C:\Windows\i386\NTLDR.
    • C:\boot.ini contains the configuration for NTLDR.
    • NTDETECT.COM detects hardware and passes the information to NTLDR.
  5. Next, the Windows Boot Loader is responsible for loading the Windows kernel (Ntoskrnl.exe) and the Hardware Abstraction Layer (Hal.dll), which helps the kernel interact with hardware.
  6. Now, the Windows executive processes the configuration information stored in the registry in HKLM\SYSTEM\CurrentControlSet and starts services and drivers.
  7. Finally, winlogon.exe starts the login procedures of the Windows machine.

Conclusion

In this tutorial, you learned step by step how a Windows machine boots. So, which Windows machine do you plan to reboot?

How to Work with Ansible When and Other Conditionals

If you need to execute Ansible tasks based on different conditions, then you’re in for a treat. Ansible when and other conditionals lets you evaluate conditions, such as based on OS, or if one task is dependent on the previous task.

In this tutorial, you’re going to learn how to work with Ansible when and other conditionals so you can execute tasks without messing things up.

This Blog has been Written by Author of Automateinfra.com (Shanky) on adamtheautomator.com [ATA]

Click here and Continue reading

How to Install Terraform on Linux and Windows

Are you overwhelmed with the number of cloud services and resources you have to manage? Do you wonder what tool can help with these chores? Wonder no more and dive right in! This tutorial will teach how to install Terraform!

Terraform is the most popular automation tool to build, change and manage your cloud infrastructure effectively and quickly. So let’s get started!

This Blog has been Written by Author of Automateinfra.com (Shanky) on adamtheautomator.com [ATA]

Click here and Continue reading

How to run Python flask applications on Docker Engine

Can we isolate our apps so that they are independent of each other and run perfectly? The answer is absolutely yes: that’s very much possible with Docker and containers. They provide you an isolated environment and are your friend for deploying many applications, each in its own container. You can run as many containers as you like in Docker, independent of each other, all sharing the same host kernel.

In this tutorial we will go through a simple demonstration of a Python application running on the Docker engine.

Table of content

  1. What is Python ?
  2. What is docker ?
  3. Prerequisites
  4. Create a Python flask application
  5. Create a Docker file
  6. Build Docker Image
  7. Run the Python flask application Container
  8. Conclusion

What is Python ?

Python is a language with which you create web applications and system scripts. It is used vastly across organizations and is very easy to learn. Python apps require an isolated environment to run well. This is quite possible with Docker and containers, which we will use in this tutorial.

If you wish to know more about python please visit our Python’s Page to learn all about Python.

What is docker ?

Docker is an open-source tool for developing, shipping, and running applications. It has the ability to run applications in a loosely isolated environment using containers, in which you can isolate your applications, and it helps manage those containers in a very smooth and effective way. Docker is quite similar to a virtual machine, but it is lightweight and can be ported easily.

Containers are lightweight as they don’t carry a hypervisor’s load and configuration. They connect directly with the host machine’s kernel.

Prerequisites

You may incur a small charge for creating an EC2 instance on Amazon Managed Web Service.

Create a Python flask application

  • Before we create our first program using Python Flask, we need to install Flask and a Python virtual environment for Flask to run in.

pip install virtualenv # virtual python environment 
  • Create and activate a virtual environment named venv:
virtualenv venv
source venv/bin/activate
 
  • Finally install Flask

pip install flask # Install Flask from pip
  • Now create a text file and name it app.py, where we will write our first Python Flask code as below.
from flask import Flask # Importing the class flask

app = Flask(__name__)   # Creating the Flask class object.

@app.route('/')         # app.route informs flask about the URL to be used by function
def func():             # Creating a function
      return("I am from Automateinfra.com")  

if __name__ ==  "__main__":    # Programs starts from here.
    app.run(debug=True)
  • Create one more file in the same directory and name it requirements.txt, where we will define the dependencies of the Flask application.
Flask==1.1.1
  • Now our Python code app.py and requirements.txt are ready for execution. Let's execute the code using the command below.
python app.py
  • Great, our Python Flask application ran successfully on the local machine. Now we need to execute the same code on Docker, so let's move to the Docker part.

Create a docker file

A Dockerfile is used to create customized Docker images on top of a base image. It is a text file that contains all the commands to build or assemble a new Docker image. Using the docker build command we can create new customized Docker images; each instruction basically adds another layer on top of the base image. Using the newly built Docker image we can run containers in the usual way.

  • Create a file named Dockerfile. Keep this file in the same directory as app.py and requirements.txt. Note that Docker only treats # as a comment at the start of a line, so the comments below sit on their own lines:
# Sets the base image
FROM python:3.8
# Sets the working directory in the container
WORKDIR /code
# Copy the dependencies file to the working directory
COPY requirements.txt .
# Install dependencies
RUN pip install -r requirements.txt
# Copy the application code to the working directory
COPY app.py .
# Command to run on container start
CMD [ "python", "./app.py" ]

Build docker Image

  • Now we are ready to build our new image, so let's build it.
docker build -t myimage .
  • You should now see the Docker image listed.
docker images

Run the Python flask application Container

  • Now run our first container using the newly built Docker image (myimage).
docker run -d -p 5000:5000 myimage
  • Verify that the container was successfully created.
docker ps -a

Conclusion

In this tutorial we covered what Docker is, what Python is, and, using a Python Flask application, ran an application on the Docker engine in a container.

Hope this tutorial helps you in understanding and setting up Python Flask applications on the Docker engine on an Ubuntu machine.

Please share with your friends.