How to Create a New Docker Image Using a Dockerfile: Dockerfile Example

Are you looking to create your own Docker image? Docker images package a software application or an operating system, but when you need an image with functionality of your own choosing, consider building a new Docker image with a Dockerfile.

In this tutorial, you will learn how to create your own Docker image using a Dockerfile, which contains a set of instructions and the arguments for each instruction. Let’s get started.


Table of Contents

  1. Prerequisites
  2. What is Dockerfile?
  3. Dockerfile instructions or Dockerfile Arguments
  4. Dockerfile Example
  5. Conclusion

Prerequisites

If you’d like to follow along step-by-step, you will need the following installed:

  • Ubuntu machine with Docker installed. This tutorial uses Ubuntu 21.10 machine.
  • Docker v19.03.8 installed.

What is Dockerfile?

If you are new to Dockerfiles, start with what a Dockerfile is. A Dockerfile is a text file that contains all the instructions a user could call on the command line to assemble an image from a base image. The instructions may include using the base image, updating the repository, installing dependencies, copying source code, and so on.

Docker can build images automatically by reading the instructions from a Dockerfile. Each instruction in a Dockerfile creates another layer (each instruction takes its own storage). While building a new image with the docker build command (which is executed by the Docker daemon), if any instruction fails and you rebuild the image, the previous instructions, which are cached, are reused for the build.

A new Docker image can be built by simply executing the docker build command, or, if your Dockerfile lives at a different path, by using the -f flag.

docker build . 
docker build -f /path/to/a/Dockerfile .
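In practice a build is usually tagged so the image can be referenced later; this is a sketch, and the image name my-app is a hypothetical placeholder:

```shell
# Build from the Dockerfile in the current directory and tag the image
docker build -t my-app:1.0 .

# Build from a Dockerfile kept elsewhere, still using the current directory as context
docker build -f /path/to/a/Dockerfile -t my-app:1.0 .
```

The trailing dot is the build context: the directory whose files are available to instructions such as ADD and COPY.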

Dockerfile instructions or Dockerfile Arguments

Now that you have a basic idea of what Docker and a Dockerfile are, let’s understand some of the most important Dockerfile instructions, also called Dockerfile arguments.

  • FROM: The FROM instruction initializes a new build stage and sets the base image for subsequent instructions. FROM may appear multiple times in a Dockerfile (multi-stage builds).
  • ARG: ARG is the only instruction that may come before FROM. The ARG instruction defines a variable that users can pass at build time to the docker build command using the --build-arg <varname>=<value> flag.
  • EXPOSE: The EXPOSE instruction informs Docker about the ports the container listens on. EXPOSE does not actually publish the port; it serves as documentation so that admins know which ports are intended to be published.
  • ENV: The ENV instruction sets the environment variable in the form of key-value pair.
  • ADD: The ADD instruction copies new files, directories, or remote file URLs into the filesystem of the image.
  • VOLUME: The VOLUME instruction creates a mount point and marks it as holding externally mounted volumes from the Docker host or other containers.
  • RUN: The RUN instruction executes any command in a new layer on top of the current image and commits the result. RUN can be declared in two ways: shell form or exec form.
  • Shell form: the command is run in a shell, i.e., /bin/sh -c. If you need to run multiple commands, chain them and use a backslash to continue lines.
  • Exec form: RUN ["executable", "param1", "param2"]. If you need any shell other than /bin/sh, consider using the exec form.
RUN /bin/bash -c 'source $HOME/.bashrc; echo $HOME' # Shell way
RUN ["/bin/bash", "-c", "echo HOME"] # Executable way (other than /bin/sh)
  • CMD: The CMD instruction provides the default command to execute when the container starts, just as if you had passed it to docker run. There can be only one CMD instruction in a Dockerfile; if you list more than one, only the last takes effect. CMD has three forms, as shown below.
    • CMD ["executable","param1","param2"] (exec form, this is the preferred form)
    • CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
    • CMD command param1 param2 (shell form)
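Putting several of these instructions together, a minimal Dockerfile might look like the following sketch; the package, port, and paths are assumptions for illustration:

```dockerfile
# ARG may appear before FROM to parameterize the base image tag
ARG UBUNTU_TAG=20.04
FROM ubuntu:${UBUNTU_TAG}

# Set an environment variable as a key-value pair
ENV APP_HOME=/opt/app

# Copy local sources from the build context into the image
ADD . ${APP_HOME}

# Execute a build-time command in a new layer
RUN apt-get update && apt-get install -y curl

# Document the port the container listens on (not published automatically)
EXPOSE 8080

# Declare a mount point for externally mounted volumes
VOLUME ["/opt/app/data"]
```

Each of these instructions adds a layer (or metadata) to the resulting image when you run docker build.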

Let’s take an example with the Dockerfile below; if you need your container to sleep for 5 seconds and then exit, use the instructions below.

FROM ubuntu
CMD sleep 5
  • Run the docker run command below to create a container; it sleeps for 5 seconds, and then the container exits.
docker run <new-image>
  • But if you wish to modify the sleep time, such as to 10 seconds, you will need to either change the value manually in the Dockerfile or pass the value on the command line.
FROM ubuntu
CMD sleep 10    # Manually changing the sleep time in the Dockerfile
  • Now execute the docker run command as shown below.
docker run <new-image> sleep 10  # Overriding the sleep time on the command line

You can also use an entrypoint, as shown below, so that the sleep command is automatically prepended when you run docker run; you then only need to provide the value of your choice.

FROM ubuntu
ENTRYPOINT ["sleep"]

With ENTRYPOINT, when you execute the docker run command with the new image, sleep is automatically prepended to whatever arguments you pass. The only thing you need to specify is the number of seconds it should sleep.

docker run <new-image> <add-value-of-your-choice>

So in the case of CMD, command-line parameters completely replace the defaults, whereas in the case of ENTRYPOINT, parameters are appended to the entrypoint.

If you don’t provide the <add-value-of-your-choice>, this will result in an error. To avoid the error, consider using both CMD and ENTRYPOINT, but make sure to define both in the JSON (exec) form.

FROM ubuntu
ENTRYPOINT ["sleep"]   # This executable always runs
CMD ["5"]              # Any parameter passed on the command line overrides this default of 5
  • ENTRYPOINT: An ENTRYPOINT allows you to configure a container to run as an executable. ENTRYPOINT is preferred when defining a container with a specific executable. You cannot override an ENTRYPOINT when starting a container unless you add the --entrypoint flag.
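As a sketch of the difference, assuming a hypothetical image named demo built with ENTRYPOINT ["sleep"] and CMD ["5"]:

```shell
docker run demo                       # runs "sleep 5"  (CMD supplies the default argument)
docker run demo 10                    # runs "sleep 10" (the argument replaces CMD)
docker run --entrypoint echo demo hi  # replaces the entrypoint itself, runs "echo hi"
```

CMD alone is fully replaced by command-line arguments; ENTRYPOINT stays fixed unless explicitly overridden with --entrypoint.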

Dockerfile Example

Up to now, you learned how to declare Dockerfile instructions and the forms of each instruction, but unless you create a Dockerfile and build a new image from it, they are not doing much. So let’s learn by creating a new Dockerfile. Let’s begin.

  • Login to the ubuntu machine using your favorite SSH client.
  • Create a folder under home directory named dockerfile-demo and switch to this directory.
mkdir ~/dockerfile-demo
cd dockerfile-demo/
  • Create a file inside the ~/dockerfile-demo directory named dockerfile and copy/paste the code below. The code contains the FROM instruction, which sets the base image to Ubuntu, runs the update and nginx installation commands, and builds the new image. Once you run the Docker container, “Image created” is printed on the container’s terminal using the echo command.
FROM ubuntu:20.04
MAINTAINER shanky@automateinfra.com
RUN apt-get update
RUN apt-get install -y nginx
CMD ["echo", "Image created"]
docker build -t docker-image:tag1 .
Building the docker image and tagging successfully

As you can see below, once the Docker container is started, “Image created” is printed on the container’s screen.

Running the container


Conclusion

In this tutorial, you learned what a Dockerfile is, a number of Dockerfile instructions and their forms, and finally, how to create your own Docker image using a Dockerfile.

So which application are you planning to run using the newly created docker image?

Ultimate docker interview questions for DevOps

If you are looking to crack your DevOps engineer interview, Docker is one of the important topics you should prepare. In this guide, you will go through the Docker interview questions for DevOps that you should know.

Let’s go!


Q1. What is Docker ?

Answer: Docker is a lightweight containerization technology. It allows you to automate the deployment of applications in portable containers that are built from Docker images.

Q2. What is Docker Engine?

Answer: Docker Engine is the client-server application that runs where Docker is installed. The Docker client and server can remain on the same host, or the client can talk to a remote host; clients connect to the server using the CLI or RESTful APIs.

Q3. What is the use of a Dockerfile, and what are common instructions used in a Dockerfile?

Answer: You can either pull a Docker image and use it directly to build your apps, or you can create one more layer on top of it according to your needs; that’s where the Dockerfile comes into play. With a Dockerfile, you can design the image accordingly. Some common instructions are FROM, LABEL, RUN, and CMD.

Q4. What are States of Docker Containers ?

Answer: Created, Running, Paused, Restarting, Exited, and Dead.

Q5. What is DockerHUB ?

Answer: Docker Hub is a cloud-based registry for Docker images. You can either pull images from or push your images to Docker Hub.

Q6. Where are Docker Volumes stored ?

Answer: Docker volumes are stored on the Docker host in /var/lib/docker/volumes.

Q7. Write a Dockerfile that sets a working directory and copies a directory into an image built from the Python base image.

Answer:

FROM python:3
WORKDIR /app
COPY . /app
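A slightly fuller version of this answer might also install dependencies and define a default command; this is a sketch, and the requirements.txt and app.py names are assumptions for illustration:

```dockerfile
FROM python:3
WORKDIR /app
COPY . /app
# Hypothetical: install dependencies listed in the copied directory
RUN pip install -r requirements.txt
# Hypothetical default command for the container
CMD ["python", "app.py"]
```

WORKDIR sets the directory that subsequent COPY, RUN, and CMD instructions operate in.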

Q8. What is the medium of communication between docker client and server?

Answer: Communication between the Docker client and server is handled by a REST API, over UNIX sockets or the TCP protocol.

Q9. How to start Docker container and create it ?

Answer: The command below will create the container as well as run it. To only create a container, you would use docker container create --name "container_name" <image>.

docker run -i -t centos:6

Q10.What is difference between EXPOSE PORT and PUBLISH Port ?

Answer: Exposing a port means the port is available locally, i.e., to the container only. Publishing a port means you are allowing access from the outside world.

Q11. How can you publish Port i.e Map Host to container port ? Provide an example with command

Answer:

Here -p is the mapping between the host port and the container port, and -d is detached mode, i.e., the container runs in the background and you just see the container ID displayed on the screen.

docker container run -d -p 80:80 nginx

Q12. How do you mount a volume in docker ?

Answer:

docker container run -d --name my_container --mount source=vol1,target=/app nginx

Q13. How Can you run multiple containers in single service ?

Answer: We can achieve this by using Docker Swarm or Docker Compose. Docker Compose uses YAML-formatted files.
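As a sketch, a minimal docker-compose.yml running two containers as one service stack might look like this; the service names web and db are hypothetical:

```yaml
version: "3"
services:
  web:
    image: nginx
    ports:
      - "80:80"
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: example
```

Running docker-compose up then starts both containers together on a shared network.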

Q14. Where do you configure logging driver in docker?

Answer: You can configure it in the daemon.json file (on Linux, /etc/docker/daemon.json).
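For example, a daemon.json that sets the default logging driver to json-file with log rotation might look like this; the size and file limits are illustrative values:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

The Docker daemon must be restarted for changes to daemon.json to take effect.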

Q15. How can we go inside the container ?

Answer:

docker exec -it "Container_ID"  /bin/bash

Q16. How can you scale your Docker containers?

Answer: By using the Docker Compose scale command (newer versions of Compose use docker-compose up --scale).

docker-compose --file scale.yml scale myservice=5

Q17. Describe the Workflow from Docker file to Container execution ?

Answer: Dockerfile ➤ docker build ➤ Docker image (or pull from a registry) ➤ docker run -it ➤ Docker container ➤ docker exec -it ➤ Bash

Q18. How to monitor your docker in production ?

Answer:

docker stats: Get information about CPU and memory usage, etc.

docker events: Check activities of containers such as attach, detach, die, rename, commit, etc.

Q19. Is Docker swarm an approach to orchestrate containers ?

Answer: Yes, Docker Swarm is one approach; another is Kubernetes.

Q20. How can you check docker version?

Answer: The docker version command, which gives you client and server information together.

Q21. How can you tag your Docker Image ?

Answer: Using docker tag command.

docker tag "ImageID" "Repository":tag

Conclusion

In this guide, you learned some of the basic Docker interview questions for DevOps that you should know.

There are some more interview guides published on automateinfra.com; which one did you like the most?

The Ultimate Docker tutorial for Beginners [Easiest Way]

Are you new to Docker? If yes, then you are here for the treat. Docker allows you to deploy, run and test your applications easily in any operating system in no time.

In this Ultimate Docker tutorial for Beginners, you will learn about Docker, Docker containers, and everything around Docker. Still interested?

Let’s get started.


Table of Contents

  1. Prerequisites
  2. What is docker?
  3. Why Docker (docker vs vm or docker containers vs virtual machines)?
  4. What is a Docker image?
  5. What is a Docker container?
  6. Docker vs Hypervisor
  7. Docker Architecture
  8. Docker commands for devops engineer
  9. Conclusion

Prerequisites

If you’d like to follow along step-by-step, you will need the following installed:

  • Ubuntu machine with Docker installed. This tutorial uses Ubuntu 21.10 machine.
  •  Docker v19.03.8 installed.

What is docker?

Docker is an open-source platform that allows you to deploy, run, and ship applications. With Docker, it takes no time to deploy, run, and test your applications on different operating systems. Containers provide a lightweight, loosely isolated environment and are a more cost-effective solution than hypervisor virtual machines, as you can use compute resources more efficiently.

Docker is written in the Go programming language and takes advantage of several features of the Linux kernel to deliver its functionality.

Docker provides the ability to package and run an application in a loosely isolated environment called a container. Containers are lightweight and contain everything needed to run the application, so you do not need to rely on the host.

Why Docker (docker vs vm or docker containers vs virtual machines)?

In this section, you will learn why you need docker and the difference between docker vs. VM by understanding docker vs. VM performance.

Virtual machine without Docker: The operating system is installed directly on the hardware. When you host applications directly on the operating system, applications depend on different libraries and run into compatibility issues. So, when you have two or more applications, it becomes difficult to manage so many microservices, and it takes a long time to set them up.

Applications running without Docker

Virtual machine with Docker engine: Now, this time, you are hosting your applications on Docker rather than directly on the operating system, so applications have their own libraries and dependencies. So, when you have two or more applications, it’s easier to manage as it has its own isolated environment and doesn’t break anything due to compatibility issues.

Containers have extra benefits such as:

  • Easy to create container image rather than VM image.
  • Container images are portable and can run on Ubuntu, RHEL, on-premises, public clouds, etc.
  • Predictable application performance.

Applications running with Docker Engine

What is a Docker image?

Docker images basically contain a set of instructions for creating Docker containers. Docker images are available in Docker Hub’s public registry, and you can also customize an image by adding more instructions in a file known as the Dockerfile.

Docker Images are binary data or package that contains the application and all its software dependencies and runtimes.

To find Docker images on Docker Hub, type the image name in the search box; for example, search for the nginx image.

Finding the docker image nginx on docker hub.

What is a Docker container?

Docker applications run inside a lightweight environment known as a Docker container, and you can run multiple containers on a host such as Windows, Linux, or macOS. Docker containers are also known as runnable instances of a Docker image. You can create, delete, stop, and start Docker containers at any time.

Each container has its own services, network interfaces, processes, mounts, etc., but they all share the same OS kernel, i.e., Docker containers share the host’s kernel.

For example, Ubuntu, Fedora, and CentOS all have the same OS kernel, i.e., Linux, but different software (UI, drivers, file managers, compilers). That is why Linux has so many flavors available, differing in the GUI or the command-line tooling.

Docker containers sharing the same kernel means: if you have Docker installed on an Ubuntu machine, you are allowed to run containers based on Ubuntu, Fedora, or CentOS images, as the underlying operating system kernel is the same.

Virtual machines don’t depend on the underlying operating system or kernel, so you can run any guest OS on the same hypervisor.

OS kernel: The OS kernel manages memory and CPU time. It is the core component that acts as a bridge between applications and the data processing performed at the hardware level via system calls.

  • You cannot run Windows-based containers on a Linux OS kernel. For that, you will need Docker on Windows, such as Docker Desktop.
  • You can run Linux-based containers on a Windows OS kernel, but under the hood you are actually using a Linux virtual machine on top of Windows and running the Linux-based containers on that virtual machine.
Ubuntu, Fedora, and CentOS all share the same OS kernel, i.e., Linux

In the example below, Docker can run any container based on Ubuntu, Fedora, or CentOS, as the underlying OS kernel is Linux. The containers share the networking and other OS facilities of the kernel; only the software is installed in the container.

Docker’s capability of running multiple flavors of Linux, as all of them share the same kernel.

Docker vs Hypervisor

As you read earlier, Docker containers share one underlying operating system, which helps with cost optimization, utilization of resources, disk space, and boot time.

With a hypervisor, you have many operating systems (and kernels) to work with, which increases the overall load, disk size, and resource utilization, and also takes more time to boot.

Normal OS vs Virtualization vs Containerization

There are two Docker editions: the community edition and the enterprise edition. The community edition is free and available on Windows, Linux, macOS, and cloud platforms (AWS or Azure). The enterprise edition comes with more features, such as image security and image management.

Docker Architecture

Docker uses a client-server architecture where the docker client connects to the Docker daemon, which performs all the functions such as building images, running containers, and distributing containers.

  1. Docker client: connects to the Docker daemon using a REST API over UNIX sockets. For example, when you run the docker run command, the Docker client first connects to the Docker daemon, which performs the task. Docker daemons can also communicate with other daemons to manage containers.
  2. Docker daemon: dockerd is the Docker daemon, which listens for API requests and manages Docker objects such as images, containers, networks, and volumes.
Docker Architecture
  3. Docker registries: store Docker images. Docker Hub is a public registry that anybody can use. Using the docker pull or docker run command, images are pulled from Docker Hub, and using docker push they are pushed to it.

Docker commands for devops engineer

In this section, you will be introduced to all the important Docker commands that you should know. Knowing the Docker commands is important if you need to work with the Docker Engine and Docker containers. Let’s dive into it.

Docker run command

The docker run command is used to start a container and run a command in a new container.

  • To run a container from the nginx image, run the docker run command as shown below.
    • If the image is already present on the system, Docker uses it to run the container; otherwise, it pulls the image from Docker Hub, and subsequent executions reuse the downloaded image.
    • By default, a container runs in the foreground, or attached mode, which means the container is attached to the console or standard output of the Docker container, and you will see the output of the web service on your screen.

Also, you won’t be able to do anything else on that console; keystrokes such as Ctrl+C may not take effect immediately.

docker run name_of_the_image
docker run nginx
  • To run a container from hello-world image, run the following command.
docker run hello-world
Creating the container using hello-world image
  • To run a container from ubuntu image.

In this case, the container starts and exits immediately. Unlike virtual machines, Docker containers are not meant to host an operating system; instead, they are meant to run a web application, a web server, or a specific task.

docker run ubuntu 
Creating the container using ubuntu image
  • To run a container from the ubuntu image and keep it running, you can let the container sleep for a particular time.
docker run ubuntu sleep 5000
Creating the container using Ubuntu image which sleeps for 5000 seconds
  • To run a container from an ubuntu or centos image interactively, run a command on the container directly, such as cat /etc/*release*; but the container will exit after you log out, as it is not running any service.

In general, if an application (a Python script, for example) prompts for input, the Dockerized application won’t wait for the prompt; it just prints to standard output (STDOUT) and exits, because by default a container doesn’t listen on standard input even though you are attached to the console.

The -i flag keeps STDIN open so you can interact with the container. The -t flag allocates a pseudo-TTY, which must be used to run commands interactively.

-it ( attaches input and terminal to the container)

docker run -it centos bash 
cat /etc/*release*
Creating the container using CentOS image which displays the release version
  • To run a container in detached mode, i.e., in the background, use the -d flag; you get the prompt back immediately as the container starts, and the container continues to run in the background.
docker run -d nginx
Creating the container using nginx image in detached mode, that is, in the background

If you want to bring the container back into attached mode, consider running docker attach <name_of_the_container or container_id>.

  • To run a container from a specific version, run the command below with the correct tag; if you don’t specify a tag, Docker will pick the latest tag.
docker run image:tag

Docker ps command

The docker ps command is used to list the running containers. After you execute the docker ps command, you will see that it provides details such as the Docker image used, the Docker command if any, the container creation time, the container status, any port mappings with the host, and the name of the container.

docker ps
Checking the docker container

To list all the containers irrespective of their state, such as exited or still running, use the -a flag.

docker ps -a

Docker stop command

If you need to stop your Docker container, run the docker stop command. By default, Docker waits 10 seconds before killing the container; to change that grace period, use the -t flag along with the docker stop command.

Stopping the container is fine, but it doesn’t free the space the container consumes.

docker stop -t 5 container-name_or_container-id
Stopping the docker container

Docker pull command

If you need to pull or download images directly from the Docker public registry, use the docker pull command; for example, to pull the fedora image.

docker pull name_of_the_image
Pulling the image from Docker Hub

Docker rm command

If you need to remove containers from the docker engine, you should use the docker rm command.

  • To remove the container and get rid of space consumption.
docker rm name_of_the_container
  • To remove multiple containers in the docker.
docker rm container_ID_1 container_ID_2 container_ID_3

Docker rmi command

If you need to remove the docker images, you should use the docker rmi command below.

docker rmi name_of_the_image

Make sure you first get rid of all associated containers, that is, remove all the containers created from the image.

Removing the Python image

Docker exec command

One of the most useful and important commands in docker is the docker exec command. The docker exec command allows you to execute a command inside the running container. Let’s see below how to run a command inside the running container.

docker exec name_of_the_container cat /etc/hosts

As you can see below, there is already a running container with container ID 3d….. on which you tried executing the cat command. After running the docker exec command, it displays the contents of the container’s /etc/hosts file.

docker exec -it 3d55ce5bfc50 cat /etc/hosts
Executing a command inside the container to display the /etc/hosts file

Docker attach command

By default, a Docker container runs in attached mode, but if you run your container in detached mode, you can attach it back by running the docker attach command. The docker attach command attaches local standard input, output, and error streams to a running container.

As shown below, the topdemo container was run in detached mode in the background using the docker run command and then attached back to the console.

docker run -d --name topdemo ubuntu /usr/bin/top -b
docker attach topdemo
Attaching the docker container to the console.

Docker Port Mapping

Do you know what happens when you create or run a container using the docker create or docker run command? The container is not exposed to the outside world on its own.

To expose a container to the outside world, you need to publish its ports via the --publish or -p flag. With the --publish or -p flag, a firewall rule is created that maps a container port to a port on the Docker host, opening it to the outside world.

  • To access a container running on port 5000 from a web browser over port 80, run the docker run command with the -p flag. The -p flag maps port 80 on the Docker host to port 5000 in the container, as shown in case 1 below.
  • Similarly, in case 2 below, the Docker host listens for the application on port 8000 externally and internally on port 5000 on the container’s IP address.
  • Again, in the final case 3, the Docker host listens on port 8001 externally and internally on port 5000 on the container’s IP address.

You can run as many applications as you wish with different Docker host ports, but you cannot use the same Docker host port twice.

docker run -p 80:5000 nginx    # Case 1
docker run -p 8000:5000 nginx  # Case 2
docker run -p 8001:5000 nginx  # Case 3

Docker Volume Mapping

Do you know Docker has its own isolated filesystem? If your container is deleted by mistake, or you need to delete the container, then your data is gone. For example, when running the MySQL container, the data lives in the /var/lib/mysql directory inside the container and is blown away as soon as you remove the container.

To solve the issue and make the data persist beyond the container, consider mounting a volume from the Docker host into the container using the command below.

  • /opt/datadir is the directory on the Docker host.
  • /var/lib/mysql is the directory inside the container the volume is mapped to.
  • mysql is the name of the Docker image.
  • The -v flag mounts the volume from the host into the container.
docker run -v /opt/datadir:/var/lib/mysql mysql
  • To get a detailed view of a container, use the docker inspect command.
docker inspect name_of_the_container_or_container_id
  • To find the logs of a container, use the docker logs command.
docker logs container_name_or_container_id
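Alongside the bind mount shown earlier, you can use a named volume managed by Docker instead; this is a sketch, and the volume name datavol is an assumption:

```shell
# Create a named volume managed by Docker (stored under /var/lib/docker/volumes)
docker volume create datavol

# Mount it into the container; the equivalent --mount syntax is shown as well
docker run -v datavol:/var/lib/mysql mysql
docker run --mount source=datavol,target=/var/lib/mysql mysql
```

Named volumes survive container removal, so the MySQL data persists across container lifecycles.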


Conclusion

In this Ultimate Guide on Docker for beginners, you learned what Docker is, what a Docker container is, and various Docker commands that you should know.

Now that you have sound knowledge of Docker and containers, which applications do you plan to host on it?

How to Create Dockerfile step by step and Build Docker Images using Dockerfile

If you want to create your own Docker images rather than using ready-made ones, consider using a Dockerfile, the layer-based Docker image build file.

A Dockerfile is used to create customized Docker images on top of base Docker images using various instructions such as FROM, ADD, CMD, etc.

In this tutorial, you will learn everything about Dockerfile, how to create Dockerfile, and create a customized docker image.

Let’s jump in to understand each bit of it.


Table of Contents

  1. What is Dockerfile?
  2. Prerequisites
  3. How to Create Dockerfile ( Dockerfile commands )
  4. Dockerfile instructions or Dockerfile Arguments
  5. Docker cmd vs entrypoint
  6. How to Create Docker Image and run a container using Dockerfile
  7. Dockerfile example using multiple Docker Arguments
  8. Conclusion

What is Dockerfile?

A Dockerfile is a text file that contains all the commands needed to assemble a new Docker image on top of a base image. Using the docker build command, you can then create new customized Docker images.

Once a new Docker image is created, you can use it to launch and run Docker containers.

Launching a new Docker container using a new Docker image created using Dockerfile

Prerequisites

  • You must have an Ubuntu machine, preferably version 18.04 or later; if you don’t have one, you can create an AWS EC2 instance in your AWS account.
  • Docker must be installed on the Ubuntu machine. To install Docker on an Ubuntu machine, follow here.

How to Create Dockerfile ( Dockerfile commands )

Now you have a basic idea of what a Dockerfile is and how it helps you build a new Docker image of your choice and, further, a Docker container from it. Let’s discuss some of the important aspects of the Dockerfile and how to create one.

There are two forms in which Dockerfile instructions can be written: shell form and exec form.

  • The Syntax of Shell form command is:
Shell form <instruction> command
# Shell form
ENV name John Dow
ENTRYPOINT echo "Hello, $name"
  • The Syntax of exec form command is:
Exec form <instruction> ["executable", "param1", "param2"]
# exec form
RUN ["apt-get", "install", "python3"]
CMD ["/bin/echo", "Hello world"]
ENTRYPOINT ["/bin/echo", "Hello world"]
  • Environmental variables inside Dockerfile can be declared as $var_name or ${var_name}
WORKDIR ${HOME}  # This is equivalent to WORKDIR ~
ADD . $HOME      # This is equivalent to ADD . ~

Dockerfile instructions or Dockerfile Arguments

Now that you have a basic idea about what is docker and dockerfile, let’s understand some of the most important Dockerfile instructions or Dockerfile Arguments that are used in the Dockerfile.

  • The FROM instruction sets the base Docker image. For example, in the second command below, ubuntu:14.04 is set as the base image; a build argument (declared with ARG before FROM) can also parameterize the tag, as in the first command.
FROM base:${CODE_VERSION}

FROM ubuntu:14.04
  • RUN command executes commands when the Docker container starts.

RUN lets you execute commands inside your Docker image at build time, whereas CMD lets you define the default command to run when your container starts.

RUN echo $VERSION
# RUN <command> (shell form)
# RUN ["executable", "param1", "param2"] (exec form)
  • The ADD command copies files from the host (the build context) into the Docker image. The command below adds file.txt from the folder directory on the host to the container’s /etc directory.
ADD folder/file.txt /etc/

The ADD directive is more powerful than COPY in two ways: it can handle remote URLs and it can auto-extract tar files.

  • The CMD command sets the default command to run if you don’t specify any command while starting a container.
    • It can be overridden by passing an argument while running the container.
    • If you specify multiple CMD instructions, only the last one takes effect.
CMD ["bash"]
  • MAINTAINER allows you to add author details (it is deprecated in favor of a LABEL).
MAINTAINER support@automateinfra.com
  • The EXPOSE command informs Docker about the port the container should listen on.
    • Below we are setting a container to listen on port 8080
EXPOSE 8080
  • The ENV command sets an environment variable in the new container.
    • Below we are setting the HOME environment variable to /root
ENV HOME /root
  • The USER command sets the default user within the container.
USER ansible
  • The VOLUME command creates a mount point that can be shared among containers or with the host machine.
VOLUME ["/var/www", "/var/log/apache2", "/etc/apache2"]
  • The WORKDIR command sets the default working directory for the container.
WORKDIR app/
  • The ARG command defines a variable that users can pass at build time with the docker build command.
    • Syntax: --build-arg <varname>=<value>
ARG username
  • Passing the ARG value at build time:
docker build --build-arg username=automateinfra .
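ARG can also declare a default value that is used when no --build-arg is supplied. A small illustrative sketch (the variable name and default below are hypothetical):

```dockerfile
FROM ubuntu:20.04

# The default applies when docker build is run without --build-arg username=...
ARG username=automateinfra

# ARG values are available to later instructions during the build only,
# not inside the running container
RUN echo "Building for user: $username"
```

Unlike ENV, an ARG value does not persist into the running container unless you explicitly copy it into an ENV variable.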
  • The LABEL instruction adds metadata to an image using key-value pairs.
LABEL version="1.0" description="My custom image"
  • The SHELL command overrides the default shell used for the shell form of instructions.
    • On Linux, it overrides the default of ["/bin/sh", "-c"]
    • On Windows, it overrides the default of ["cmd", "/S", "/C"]
# Executed as cmd /S /C powershell -command Write-Host default
RUN powershell -command Write-Host default

# Executed as PowerShell  -command Write-Host hello
SHELL ["PowerShell", "-command"]
RUN Write-Host hello
  • ENTRYPOINT is also used for running a command, but with a difference from the CMD command:
    • Command-line arguments passed to docker run are appended to the ENTRYPOINT rather than overriding it.
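ENTRYPOINT and CMD are often combined: the ENTRYPOINT fixes the executable, while CMD supplies default arguments that docker run arguments replace. A minimal sketch of the pattern (the base image and message are placeholders):

```dockerfile
FROM ubuntu:20.04

# The fixed executable; docker run arguments are appended after it
ENTRYPOINT ["/bin/echo"]

# Default argument, replaced by anything passed to docker run
CMD ["Hello World"]
```

Running `docker run <image>` prints Hello World, while `docker run <image> hostname` prints the literal word hostname, because the run argument replaces the CMD default but not the ENTRYPOINT.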

Docker cmd vs entrypoint

In this section, learn the difference between the CMD and ENTRYPOINT Docker arguments, as these are among the most widely used and important instructions one must know.

Create a Dockerfile containing the following code, which includes the CMD argument.

FROM nginx
CMD ["echo", "Hello World"]
  • Next, create a Docker image, then run a Docker container first without and then with a command-line argument.
docker build . 
# Run a container normally without CMD argument
sudo docker run [image_name]
# Run a container using CMD argument.
sudo docker run [image_name] hostname

You will notice that the CMD argument is completely overridden by the argument that you pass on the command line.

running a docker container using CMD argument

Now, update the Dockerfile with the following code, using the ENTRYPOINT argument.

FROM nginx
ENTRYPOINT ["echo", "Hello World"]
  • Next, create a Docker image, then run a Docker container first without and then with a command-line argument.
docker build . 
# Run a container normally without ENTRYPOINT  argument
sudo docker run [image_name]
# Run a container using ENTRYPOINT argument.
sudo docker run [image_name] hostname

You will notice that the command-line argument is appended to the ENTRYPOINT instead of overriding it.

running a docker container using ENTRYPOINT argument

How to Create Docker Image and run a container using Dockerfile

Now that you have sound knowledge of how a Dockerfile is created using different commands, let’s dive in and walk through some basic Dockerfile examples to get you started.

  • Log in to the Ubuntu machine using your favorite SSH client.
  • Create a folder named dockerfile-demo1 under the /opt directory.
cd /opt
mkdir dockerfile-demo1
cd dockerfile-demo1
  • Create a file named Dockerfile inside the /opt/dockerfile-demo1 directory and copy/paste the code below. The instructions this Dockerfile uses are:
    • FROM: sets the base image as ubuntu
    • RUN: runs the following commands in the image
    • ADD: adds a file from a host folder into the image
    • WORKDIR: sets the working directory
    • ENV: sets an environment variable
    • CMD: runs a command when the container starts
FROM ubuntu:14.04
RUN \
    apt-get -y update && \
    apt-get -y upgrade && \
    apt-get -y install git curl unzip man wget telnet
ADD folder/.bashrc /root/.bashrc
WORKDIR /root
ENV HOME /root
CMD ["bash"]

The syntax to build a Docker image from a Dockerfile is shown below.

docker build .        # Command 1
# Provide the exact path where Dockerfile is located
docker build -f /path-of-Dockerfile .       # Command 2
  • Now, build a Docker Image using the docker build command.
 docker build -t image1 .
Building a docker image
  • Let’s verify the Docker image by running the following command.
docker images
Verifying the docker images
  • Now, it’s time to check that the Docker image works by running a container and verifying all the Dockerfile commands inside the container.
docker run -i -t 5d983653b8f4

Great, you will notice in the snap below that the commands declared in the Dockerfile were executed. The new Docker image was created successfully, and you tested it by launching a Docker container.

Running the docker container

Dockerfile example using multiple Docker Arguments

In this section, let’s look at another Dockerfile example using multiple Docker arguments. This assumes you are still logged into the Ubuntu machine.

  • Create another folder named dockerfile-demo2 under the /opt directory.
mkdir dockerfile-demo2
cd dockerfile-demo2
  • Create a Dockerfile with your favorite editor inside the /opt/dockerfile-demo2 directory and copy/paste the below content into it.
FROM ubuntu:14.04
ARG LABEL_NAME
LABEL org.label-schema.name="$LABEL_NAME"
SHELL ["/bin/sh", "-c"]
RUN apt-get update && \
    apt-get install -y sudo curl git gcc make openssl libssl-dev libbz2-dev libreadline-dev libsqlite3-dev zlib1g-dev libffi-dev
USER ubuntu
WORKDIR /home/ubuntu
ENV LANG en_US.UTF-8
CMD ["echo", "Hello World"]
  • Now, build a Docker Image using the following command
 docker build --build-arg LABEL_NAME=mylabel  -t imagetwo .
Building a Docker image
  • Let’s verify the Docker image by running the following command.
docker images
Verifying the Docker Image
  • Also verify the new Docker Image by running a container and verifying all the Dockerfile commands inside the container.
docker run -i -t 2716c9e6c4af
Verifying the Docker Image by running a container
docker run -i -t 2716c9e6c4af /bin/bash
  • Let’s navigate inside the container and check other details as well.
    • User is ubuntu
    • Working directory is /home/ubuntu
    • Curl is installed
    • ENV LANG is also set.
Checking commands in docker container
  • Finally, if you want to check the LABEL, use the docker inspect command, specifying the Docker image.
docker inspect 2716c9e6c4af
Checking LABEL in docker container using docker inspect command

Conclusion

This tutorial taught you the commands used inside a Dockerfile to build a Docker image, covering several of them with examples in the demonstration.

Also, you learned how to create Docker images, run containers, and verify that those commands executed successfully. A Dockerfile is essential for building new Docker images on top of a base Docker image.

Which docker image do you plan to create using a customized Dockerfile and run containers?

How to create Node.js Docker Image and Push to DockerHub using Jenkins Pipeline

If you plan to run your node.js application, nothing could be better than running it on Docker and safely storing the image on Dockerhub.

Running an application on Docker is a huge benefit because of its lightweight technology and security, and Docker images are stored safely on Dockerhub.

In this tutorial, you will learn how to create a docker image for node.js applications and push it to dockerhub using the Jenkins pipeline.

Let’s get started.


Table of Content

  1. What is Jenkins Pipeline?
  2. What is Jenkinsfile?
  3. What is node.js server-side javascript?
  4. Prerequisites
  5. How to Install node.js and node.js framework on ubuntu machine
  6. Creating Node.js Application
  7. How to create dockerfile for Node.js application
  8. Pushing all the code to GIT Repository: Git Push
  9. Creating Jenkinsfile to run docker image of Node.js application
  10. Configure Jenkins to Deploy Node.js Docker Image and Push to Dockerhub
  11. Conclusion

What is Jenkins Pipeline?

Jenkins Pipeline uses a group of plugins that help deliver a complete continuous delivery pipeline, from building the code through deploying the software to the customer.

The Jenkins Pipeline plugin is installed automatically when you install Jenkins with the suggested plugins, and it allows you to define complex operations and code deployments as code using a DSL (domain-specific language).

Related: the-ultimate-guide-getting-started-with-Jenkins-pipeline

What is Jenkinsfile?

Jenkins Pipeline uses a text file called a Jenkinsfile, which is checked into the repository. In it, you define each stage and the steps that need to be executed, such as building and compiling code and deploying to the respective environments.

A Jenkinsfile allows you to define pipeline steps in code format, which is easier to write and review; because it is checked into source control, the pipeline definition survives even if Jenkins stops. It can hold, wait for approval, stop a Jenkins job, and provide many other functionalities.
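As an illustrative sketch, a minimal Jenkinsfile in the declarative syntax looks like this (the stage names and echo steps are placeholders; this tutorial’s own pipeline uses the scripted node { } syntax):

```groovy
// A minimal declarative Jenkinsfile; stage names and steps are placeholders
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'echo building'
            }
        }
        stage('Deploy') {
            steps {
                sh 'echo deploying'
            }
        }
    }
}
```

Each stage shows up as a separate column in the Jenkins UI, which makes it easy to see where a build failed.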

What is node.js server-side javascript?

JavaScript is a language used alongside other languages to create web pages and add dynamic features such as rollovers or graphics, and Node.js is an open-source JavaScript runtime environment.

Node.js performs well because it runs as a single process without wasting much memory or CPU, never blocks threads or processes, and allows many simultaneous connections.

Node.js allows JavaScript developers to create apps on both the front end and the back end.

Prerequisites

  • You must have an Ubuntu machine, preferably version 18.04+. If you don’t have one, you can create an EC2 instance in your AWS account.
  • Docker and Jenkins must be installed on the Ubuntu machine.
  • Make sure you have a GitHub account and a repository created. If you don’t, follow here.

How to Install node.js and node.js framework on ubuntu machine

Now that you know what node.js is and what its benefits are, let’s dive into how to install node.js and the node.js framework on an Ubuntu machine.

  • Log in to the Ubuntu machine using your favorite SSH client.
  • Now, create a folder named nodejs-jenkins under the /opt directory on the Ubuntu machine.
cd /opt
mkdir nodejs-jenkins
cd nodejs-jenkins
  • Now, install node.js on the Ubuntu machine using the below command.
sudo apt install nodejs
  • Next, install the node.js package manager (npm), which installs modules into a node_modules directory inside the project.
sudo apt install npm
  • Next, initialize the project. This command generates a package.json file containing the project metadata and dependency details that will be used to build the application.
npm init
  • Further, add the Express web framework dependency, which is required by the node.js application.
npm install express --save
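After these steps, the generated package.json looks roughly like the sketch below (the name, version, and dependency version are illustrative; yours will reflect your npm init answers):

```json
{
  "name": "nodejs-jenkins",
  "version": "1.0.0",
  "main": "main.js",
  "dependencies": {
    "express": "^4.17.1"
  }
}
```

The dependencies section is what `npm install` reads later inside the Docker build to recreate node_modules.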

Creating Node.js Application

Now that you have successfully installed node.js and the framework required to create a node.js application, let’s create one.

Assuming you are still logged into the ubuntu machine.

  • Firstly, create a file named main.js in the /opt/nodejs-jenkins directory and copy/paste the below code.
var express = require('express')    //Load express module with `require` directive
var app = express() 

//Define request response in root URL (/)
app.get('/', function (req, res) {
  res.send('Hello Welcome to Automateinfra.com')
})
app.listen(8081, function () {
  console.log('app listening on port 8081!')
})

How to create dockerfile for Node.js application

Previously you created a node.js application, which is great, but to deploy node.js on Docker, you will need to create a Dockerfile. A Dockerfile builds customized Docker images on top of base Docker images.

After creating the dockerfile, you can use the docker build command to create a new customized docker image.

Running docker containers using dockerfile

Let’s jump in and create a dockerfile for the node.js application.

  • Create a file named Dockerfile in the same /opt/nodejs-jenkins directory and copy/paste the below content.
# Sets the base image
FROM node:7
RUN mkdir -p /app
# Sets the working directory in the container
WORKDIR /app
# Copy the dependencies file to the working directory
COPY package.json /app
# Install dependencies
RUN npm install
# Copy the content of the local source directory to the working directory
COPY . /app
EXPOSE 8081
CMD ["node", "main.js"]
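One optional refinement: a .dockerignore file in the same directory keeps the local node_modules and Git metadata out of the build context, so COPY . /app does not overwrite the freshly installed dependencies. A typical (illustrative) .dockerignore:

```
node_modules
npm-debug.log
.git
```

This also makes builds faster, since less context is sent to the Docker daemon.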
  • Next, to create a node.js Docker image using the above Dockerfile, run the below command on the Ubuntu machine.
docker build .
Building the docker image using dockerfile

Pushing all the code to GIT Repository: Git Push

Earlier, you created the node.js application and manually built the Docker image to confirm that your code works. Let’s push all the files and code to the Git repository using the commands below so that Jenkins can later use them to build the application.

  • By now, your directory /opt/nodejs-jenkins should look as below.
checking the content of the directory
  • Initialize a new repository in the same directory /opt/nodejs-jenkins by running git init command.
git init
  • Add all the files to the Git repository using the below command in the same directory /opt/nodejs-jenkins.
git add .
  • Now, check the status of the Git repository with the below command. Any errors will appear here; otherwise, you are good to go to the next step.
git status
  • Commit your changes in git repository using the git commit command in the same directory /opt/nodejs-jenkins
 git commit -m "MY FIRST COMMIT"
  • Add the remote repository that you already have as the origin by running the git remote add command.
git remote add origin https://github.com/Engineercloud/nodejs-jenkins.git
Adding the remote origin
  • Finally, push the changes to the remote branch. Specify your credentials when prompted.
git push -u origin master
Pushing the code in git repository
  • Now, verify the code on GitHub by navigating to the repository link.
Verifying the git repository

Creating Jenkinsfile to run docker image of Node.js application

Let’s quickly jump into creating the Jenkinsfile that builds and runs the Docker image of the Node.js application and deploys it using Jenkins.

  • Create a file named Jenkinsfile in the same /opt/nodejs-jenkins directory and copy/paste the below content. Make sure to change sXXXXXXX410/dockerdemo to your own Dockerhub username and repository name.
node {
    def app
    stage('clone repository') {
        checkout scm
    }
    stage('Build docker Image') {
        app = docker.build("sXXXXX410/dockerdemo")
    }
    stage('Test Image') {
        app.inside {
            sh 'echo "TEST PASSED"'
        }
    }
    stage('Push Image') {
        docker.withRegistry('https://registry.hub.docker.com', 'git') {
            app.push("${env.BUILD_NUMBER}")
            app.push("latest")
        }
    }
}
  • Now push the Jenkinsfile to GitHub in the same repository that you used earlier. Your repository should now look something like below.
Pushing the Jenkinsfile in the git repository

Configure Jenkins to Deploy Node.js Docker Image and Push to Dockerhub

Now that all the code, including the Jenkinsfile, is in the GitHub repository, you need to create a Jenkins job to deploy the node.js application on Docker. Let’s configure Jenkins by creating a new multibranch pipeline Jenkins job.

  • Create a new multibranch pipeline Jenkins Job named nodejs-image-dockerhub by clicking on new item and selecting multibranch pipeline on the Dashboard.
Creating a new multibranch pipeline Jenkins Job
  • Now click on the nodejs-image-dockerhub job, configure it with the Git URL, and then hit Save.
Configuring the Git URL in the Jenkins
  • To connect to dockerhub you would need to add dockerhub credentials by clicking on the Dashboard ➔ Manage Jenkins ➔ Manage credentials ➔ click on global ➔ Add credentials.
Adding the dockerhub credentials in the Jenkins
  • Next, on the Jenkins server, add the jenkins user to the docker group so that the jenkins user has access to run docker commands.
sudo groupadd docker
sudo usermod -a -G docker jenkins
service docker restart
  • Further, make sure the jenkins user has sudo permissions by adding the below line to the sudoers file.
sudo vi  /etc/sudoers
jenkins ALL=(ALL) NOPASSWD: ALL
  • Now you are ready to run your first Jenkins job. Click on Scan Multibranch Pipeline Now, which will scan the branches in your repository; then click on the branch and click Build Now.
Running the Jenkins Job
  • The Jenkins job completed successfully. Let’s verify that the Docker image was pushed by visiting the Dockerhub repository.
Checking the docker image in dockerhub

Conclusion

In this tutorial, you learned what a Jenkins pipeline is, what node.js is, and how to create a Dockerfile, a Jenkinsfile, and a node.js application. Finally, you pushed the Docker image to Dockerhub using Jenkins.

So now you know how to deploy the node.js application on docker; which application do you plan to deploy next?

How to run Node.js applications on Docker Engine

Table of content

  1. What is Node.js ?
  2. What is docker ?
  3. Prerequisites
  4. How to Install Node.js on ubuntu machine
  5. Install Node.js Express Web Framework
  6. Create a Node.js Application
  7. Create a Docker file for Nodejs application
  8. Build Docker Image
  9. Run the Nodejs application on a Container
  10. Conclusion

What is Node.js ?

Node.js is an open-source JavaScript runtime environment. Now, what is JavaScript? Basically, JavaScript is a language used with other languages to create web pages and add dynamic features such as rollovers and graphics.

Node.js runs as a single process without wasting much memory or CPU and never blocks any threads or processes, which is why its performance is very efficient. Node.js also allows multiple connections at the same time.

Node.js has become a big advantage for JavaScript developers, as they can now create apps utilizing it on both the frontend and the backend.

Building an application that runs in the browser is a completely different story from creating a Node.js application, although both use the JavaScript language.

What is docker ?

Docker is an open-source tool for developing, shipping, and running applications. It can run applications in a loosely isolated environment using containers, and it helps manage those containers in a very smooth and effective way. In containers, you can isolate your applications. Docker is quite similar to a virtual machine, but it is lightweight and can be ported easily.

Containers are lightweight as they are independent of the hypervisor’s load and configuration; they connect directly with the host machine’s kernel.

Prerequisites

You may incur a small charge for creating an EC2 instance on Amazon Web Services (AWS).

How to Install node.js on ubuntu machine

  • Update your system packages.
sudo apt update
  • Let’s change the directory and install the Node.js package.
cd /opt

sudo apt install nodejs
  • Install the Node.js package manager (npm).
sudo apt install npm
  • Verify the Node.js installation.
nodejs -v

Install Nodejs Express Web Framework

  • Initialize the Node.js Express web framework project.
npm init
  • The package.json file created after initializing lists all the dependencies required to run the project. Let’s add one dependency that is highly recommended.
npm install express --save

Create a Node.js Application

Create a file named main.js with the following content:

var express = require('express')    //Load express module with `require` directive
var app = express() 

//Define request response in root URL (/)
app.get('/', function (req, res) {
  res.send('Hello Welcome to Automateinfra.com')
})


app.listen(8081, function () {
  console.log('app listening on port 8081!')
})
  • Now run the Node.js application locally on the Ubuntu machine to verify it.
node main.js

Create a docker file for Node.js application

A Dockerfile is used to create customized Docker images on top of a base Docker image. It is a text file that contains all the commands to build or assemble a new Docker image. Using the docker build command, we can create new customized Docker images; each instruction is basically another layer that sits on top of the base image. Using the newly built Docker image, we can run containers in the usual way.

  • Create a Dockerfile (name the file Dockerfile) and keep it in the same directory as main.js.
# Sets the base image
FROM node:7
# Sets the working directory in the container
WORKDIR /app
# Copy the dependencies file to the working directory
COPY package.json /app
# Install dependencies
RUN npm install
# Copy the content of the local source directory to the working directory
COPY . /app
# Command to run on container start
CMD node main.js
EXPOSE 8081

Build a Docker Image

  • Now we are ready to build our new image, so let’s build it.
docker build -t nodejs-image .
  • You should be able to see the new Docker image by now.
docker images

Run the Nodejs application on a Container

  • Now run our first container using the new Docker image (nodejs-image).
docker run -d -p 8081:8081 nodejs-image
  • Verify if container is successfully created.
docker ps -a

Great, you have dockerized the Node.js application in a container.


Conclusion

In this tutorial, we covered what Docker is and what Node.js is, and we ran a Node.js application on Docker Engine in a container.

I hope this tutorial helps you understand and set up Node.js applications on Docker Engine on an Ubuntu machine.

Please share with your friends.

How to Install Docker ubuntu step by step

If you want to deploy multiple applications in an isolated environment, consider using Docker and containers where applications have their own container.

In this tutorial, you’ll install Docker on an Ubuntu machine. You’ll then work with Docker containers and Docker images and push an image to a Docker repository.

Let’s dive in.


Table of Content

  1. What is Docker?
  2. Prerequisites
  3. How to Install Docker on Ubuntu 18.04 LTS
  4. How to run Docker command using Non-Root user (run docker commands without sudo)
  5. Working with docker command ( docker pull command, docker ps, docker image)
  6. Tag Docker image using docker tag and push docker image to docker hub
  7. Conclusion

What is Docker?

Docker is an open-source tool for developing, shipping, and running applications. It has the ability to run applications in a loosely isolated environment using containers. Docker is an application that helps manage containers in a very smooth and effective way. In containers, you can isolate your applications. Docker is quite similar to a virtual machine, but it is lightweight and easily ported.

Containers are light weighted as they are independent of hypervisors load and configuration. They directly connect with machines, i.e., the host’s kernel.

What is Docker?

Prerequisites

  • An Ubuntu machine, preferably version 18.04+. If you don’t have one, you can create an EC2 instance in your AWS account.
  • Recommended to have 4GB RAM and at least 5GB of drive space.
  • An account on Docker Hub if you wish to create your own images and push them to Docker Hub.

You may incur a small charge for creating an EC2 instance on Amazon Web Services (AWS).

How to Install Docker on Ubuntu 18.04 LTS

Let’s kick off this tutorial by installing Docker on Ubuntu, which is a straightforward task. The Docker installation package available in the official Ubuntu repository may not be the latest version.

This tutorial installs Docker from the official Docker repository to ensure you get the latest version. To do that, you add a new package source, add the GPG key from Docker to ensure the downloads are valid, and then install the package.

  • Login to your Ubuntu machine using your favorite SSH client.
  • First, update your existing list of packages by running the below command
sudo apt update
  • Install the prerequisite software below so that apt can fetch packages over the HTTPS protocol. The apt-transport-https package allows your machine to connect with external repositories over HTTPS.
sudo apt install apt-transport-https ca-certificates curl software-properties-common

  • Next, add the GPG key for the official Docker repository to your system. This key establishes your machine’s trust in the official Docker repository.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

  • Now, add the Docker repository to APT sources so that you can install the docker installation package.
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"

  • Next, update the package database with the Docker packages from the newly added repo using the following command.
sudo apt update
  • Finally, install Docker on the Ubuntu machine using the apt install command.
sudo apt install docker-ce

  • Docker should be started by now; verify the status and version of Docker using the service docker status command.
Checking the docker service using service docker status command

How to run Docker command using Non-Root user (run docker commands without sudo)

The docker command can only be run by the root user or by a user in the docker group, which is created automatically during Docker’s installation. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you will get a permission denied error message.

  • To let users other than root run docker commands without sudo, first create the docker group using the groupadd command.
groupadd docker
  • Next, add to the docker group the users that should be able to run docker commands.
sudo usermod -aG docker jenkins   # To add Jenkins user to docker group
sudo usermod -aG docker user1     # To add user1 to docker group
sudo usermod -aG docker ubuntu    # To add ubuntu user  to docker group
  • Restart the docker service by service docker restart command.
service docker restart 
  • Check the docker version by running the below command.
docker version

Your docker commands should now run successfully without the "permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock" error. If you continue to get this message, make sure to log out of and back into your machine.

Verifying the docker version using docker version command

Working with docker command ( docker pull command, docker ps, docker image)

The most important part of Docker is running docker commands, such as docker pull, which fetches Docker images that contain your code, the environment to run it, and several parameters. Docker images are stored in a Docker registry known as Docker Hub, though you can also store them on your local machine.

Docker architecture
Docker architecture
  • Use the docker pull command to pull the docker image on your ubuntu machine.
docker pull ubuntu
Pulling an image from docker repository using docker pull command on ubuntu machine
  • Now check the downloaded image on your Ubuntu machine by running the docker images command.
docker images
Checking all docker images in the system
  • Now, run the first docker container by using the below command.
docker run -it ubuntu # -i and -t give you interactive shell access; ubuntu is the image name
Running docker container using docker run command
  • To check all container details ( Exited, running, etc.), run the docker ps -a command.
 docker ps -a
Checking the details of all containers

Tag Docker image using docker tag and push docker image to docker hub

Now, let’s learn how to push a Docker image to Docker Hub. You need a Docker Hub account to do this.

Assuming you are still logged into the Ubuntu machine using the SSH client.

  • Log in to Docker Hub with your credentials using the docker login command.
docker login -u docker-registry-user
Login to docker hub using docker login -u command
  • Before you push your Docker image to Docker Hub, it’s highly recommended to tag your image with your Docker Hub username using the docker tag command. The syntax is docker tag <source-image> <target-image>.
docker tag ubuntu:latest  <dockerhub-username>/ubuntu:latest
  • After successful login in docker hub, now push your docker image using the following command.
docker push <dockerhub-username>/ubuntu:latest
Running docker push command

As you can see below, the Docker image was successfully pushed from the Ubuntu machine to Docker Hub.

successfully pushed docker image into docker hub


Conclusion

In this tutorial, you learned how to install Docker and several docker commands, such as docker images, docker tag, and docker pull, and how to push a Docker image to Docker Hub.

Docker is a great way to host your applications separately and efficiently. So which applications are you planning to run in docker?