Ultimate Jenkins tutorial for DevOps Engineers

Jenkins is an open-source automation tool for CI/CD, where CI stands for Continuous Integration and CD stands for Continuous Delivery. Jenkins ships with its own built-in Java servlet container, Jetty. Jenkins can also be run in other servlet containers such as Apache Tomcat or GlassFish.

  • Jenkins enables smooth and quick deployments. It can deploy to a local machine, an on-premises data center, or any cloud.
  • Jenkins takes your code, whatever the language such as Python, Java, Go, or JavaScript, and builds it using tools such as Maven (one of the most widely used build tools), packaging it as a WAR or ZIP file and sometimes as a Docker image. Once everything is built properly, it deploys it as and when required. It also integrates very well with lots of third-party tools.

JAVA_HOME and PATH are environment variables that enable your operating system to find the required Java programs and utilities.

JAVA_HOME: JAVA_HOME is an operating system (OS) environment variable that can optionally be set after either the JDK or the JRE is installed. The JAVA_HOME environment variable points to the file system location where the JDK or JRE was installed. This variable should be configured on any OS that has a Java installation, including Windows, Linux distributions such as Ubuntu, and macOS.

The JAVA_HOME environment variable is not actually used by the locally installed Java runtime. Instead, other programs installed on the computer that require a Java runtime will query the OS for the JAVA_HOME variable to find out where the runtime is installed. After the location of the JDK or JRE installation is found, those programs can initiate Java-based processes, start Java virtual machines, and use command-line utilities such as the Java archive utility or the Java compiler, both of which are packaged inside the Java installation’s \bin directory.

  • JAVA_HOME if you installed the JDK (Java Development Kit)
    or
  • JRE_HOME if you installed the JRE (Java Runtime Environment) 
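For example, on Windows you can set JAVA_HOME system-wide from an elevated command prompt (the JDK path below is only an illustration; point it at the folder where your JDK is actually installed):

setx JAVA_HOME "C:\Program Files\Java\jdk1.8.0" /M

The /M switch writes a machine-wide variable; open a new command prompt afterwards so the change is picked up.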

PATH: Set the PATH environment variable if you want to be able to conveniently run the executables (javac.exe, java.exe, javadoc.exe, and so on) from any directory without having to type the full path of the command. If you do not set the PATH variable, you need to specify the full path to the executable every time you run it, such as:

C:\Java\jdk1.8.0\bin\javac Myprogram.java
# The following is an example of a PATH environment variable:

C:\Java\jdk1.7.0\bin;C:\Windows\System32\;C:\Windows\;C:\Windows\System32\Wbem
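After updating PATH (on Windows this is typically done under System Properties > Environment Variables), you can confirm from any directory that the variables are set and that the JDK tools resolve; this is just a quick sanity check:

echo %JAVA_HOME%
javac -version
java -version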

Installing Jenkins using msi installer on Windows Machine

MSI is an installer file format that installs a program on the executing system. Setup.exe is an application (executable file) that can carry MSI file(s) as one of its resources. .msi is the file extension of Windows Installer packages. An MSI file is a compressed package of installer files. It contains all the information pertaining to adding, modifying, storing, or removing the respective software, including the data, instructions, processes, and add-ons that are necessary for the application to work normally.

EXE is short for executable. This is any kind of binary file that can be executed. Most Windows programs are EXE files, and prior to MSI files, all installers were EXE files. .exe is the file extension of an executable file. An executable file executes a set of instructions, or code, when it is opened. An executable file is compiled from source code into binary code, which the machine understands and the Windows OS can execute directly.

MSI is the file extension of Windows Installer, a software component of Microsoft Windows used for the installation, maintenance, and removal of software, whereas EXE is the file extension of an executable file that performs the indicated tasks according to its encoded instructions.

  1. Navigate to https://www.jenkins.io/download/ and select the Windows option; the download of the Jenkins MSI will begin.
  1. Once downloaded, click on jenkins.msi.
  1. Continue the Jenkins setup.
  1. Select port 8080, click on Test Port, and then hit Next.
  1. Provide the admin password from the path shown in red on the screen.
  1. Further, install the plugins required for Jenkins.
  1. Next, it will prompt for the first admin user. Fill in the required information and keep it safe with you, as you will use it to log in.
  1. Now the Jenkins URL configuration screen will appear; keep it as it is for now.
  1. Click on Save and Finish.
  1. Now your Jenkins is ready; click on Start using Jenkins. Soon you will see the Jenkins Dashboard. You can create new jobs by clicking on New Item.

Installing Jenkins using the jenkins.war file on Windows Machine

  1. Similarly, now download jenkins.war from the Jenkins download URL by clicking on Generic Java package (.war).
  2. Next, run the command below.
java -jar jenkins.war --httpPort=8181
  1. Next, copy the Jenkins password from the log output and paste it as you did earlier in the Windows MSI section, point (5), and follow the rest of the points.
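If you no longer have the console output, the same initial password is also written to a file under the Jenkins home directory; a minimal example, assuming the default JENKINS_HOME of the user running the WAR:

type %USERPROFILE%\.jenkins\secrets\initialAdminPassword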

Installing Jenkins on Apache Tomcat server on Windows Machine

  1. Install Apache Tomcat on the Windows machine from https://tomcat.apache.org/download-90.cgi and pick the Tomcat package that matches your system. This tutorial is performed on a 64-bit Windows machine.
  1. Next, unzip the Tomcat installation folder and copy the jenkins.war file into the webapps folder (see the example commands after this list).
  1. Next, go inside the bin folder and start Tomcat by running the startup batch script.
  1. Finally, you will notice that Apache Tomcat has started, and Jenkins along with it.
  1. Now navigate to the localhost:8080 URL and you should see the Tomcat page as shown below.
  1. Further, navigate to localhost:8080/jenkins to be redirected to the Jenkins page.
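The copy-and-start steps above look roughly like this from a command prompt; the paths are only examples, so adjust them to wherever you downloaded jenkins.war and extracted Tomcat:

copy C:\Users\you\Downloads\jenkins.war C:\apache-tomcat-9.0\webapps\
cd C:\apache-tomcat-9.0\bin
startup.bat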

Configuring the Jenkins UI

  1. First click on Manage Jenkins and then navigate to Configure System.
  1. Next, add the system message and save it; this message should then be displayed on Jenkins every time, as below.
  1. To restrict the names of jobs, add the name pattern as below.
  1. Next, try creating a new Jenkins job with a random name; Jenkins will not allow it and will display the error message.

Managing Users and Permissions in the Jenkins UI

  • Go to Manage Jenkins and Navigate to Manage users in the Jenkins UI.
  • Then create three users as shown below: admin, dev, and qa.
  • Next, Navigate to Manage Jenkins and choose Configure Global Security.
  • Next select Project-based Matrix Authorization Strategy and define the permissions for all users as you want.

Role Based Strategy

  • In the previous section you noticed that adding every user and granting permissions individually is a tedious job. So, instead, create a role and add users to it. To do that, the first step is to install the plugin as shown below.
  • Next, select Role-Based Strategy as shown below and define the permissions for all users as you want.
  • Next, navigate to Manage Jenkins, then to Manage and Assign Roles, and then click on Manage Roles.
  • Add 3 global roles named DEV Team, QA Team, and admin.
  • Add 2 item roles, developers and testers, with defined patterns so that job names are declared accordingly.
  • Next, Click on Assign Role
  • Assigning the roles as shown below.

Conclusion

In this tutorial you learnt how to install Jenkins on Windows in various ways, how to configure the Jenkins Dashboard UI, and how to manage users and permissions.

The Ultimate Guide on the Hardware and Software/components of the computer

Knowing the hardware of the computer is very important as an IT engineer, and this tutorial covers it in detail. In this tutorial, learn everything about the hardware and software components of the computer.

What is Computer System?

The computer contains mainly two parts: hardware and software. A computer is a programmable electronic device that can be programmed to accept input and then provide output. Computer hardware can only understand binary numbers, that is 0 or 1, and transfers data one byte at a time.

Computers store all data on the hard disk as binary numbers (0 or 1). Text is encoded using schemes such as ASCII (American Standard Code for Information Interchange).

What is Computer Hardware?

Hardware is made up of various electronic circuits and components such as I/O devices, CPU, disk, and the motherboard.

  • Input Devices: The input device is used to provide the input ( data, instructions) into the RAM of the computer such as
    • Keyboard
    • Trackball ( the upper part of the mouse)
    • light pen
    • OBR (Optical Bar Code reader) – This is used to scan the vertical bars, read tags.
    • OCR (Optical Character Reader) – This is used to detect alphanumeric characters, for example reading passenger tickets or computer-printed credit card bills.
  • Output Devices: The output devices present the results of the input the user provided, for example the monitor.
    • The monitor is also known as a VDU (Visual Display Unit). It traditionally contains a CRT (Cathode Ray Tube) which displays the characters as output.
    • There are many different types of monitors available in the market, such as CGA (Color Graphics Adapter), EGA (Enhanced Graphics Adapter), VGA (Video Graphics Adapter), and SVGA (Super VGA), which is the best in the market.
  • CPU (Central Processing Unit): The CPU is the most important hardware part of the computer; it performs all the functions and execution of input data. It executes the instructions stored in the main memory. The CPU has a set of electronic circuits that execute the program instructions, and it contains its own memory, the cache, to process data immediately.
  • Memory or storage: This is the storage place where all the data resides. There are two categories of memory, i.e. primary and secondary memory.
    • Primary memory: These memories are directly connected to the CPU and are extremely fast, such as RAM (Random Access Memory), which is volatile in nature, and ROM (Read Only Memory), which is non-volatile in nature. The CPU works directly with this memory only.
    • Secondary memory: These memories are not directly connected to the CPU, such as floppy disks, CD-ROMs, and hard disks.
  • Motherboard: The motherboard is the most important hardware component; it is the main printed circuit board (PCB) found in computers. The CPU is installed in one of the sockets of the motherboard or directly soldered onto it. There are also slots in which memory is installed.
  • Buses: The data is stored in 0 or 1 binary format in registers as a unit. When the data needs to travel or move from one register to another, you need separate wires, and these wires are known as buses.
    • The data bus is used to move data.
    • The address bus is used to move addresses or memory locations.
    • The control bus is used to send control signals between the various components of a computer.
  • Types of Buses
    • The system bus transfers information between different parts inside the computer system.
    • The control bus has two wires, set and enable. When the CPU wants to read from RAM, the enable wire is asserted; when the CPU wants to save information to RAM, the CPU asserts the set wire.
    • The data bus is a two-way bus that carries data between the CPU and RAM.
    • The address bus is a one-way bus that carries addresses from the CPU to RAM.
  • Clock: The clock is an important component of the CPU; it measures and allocates a fixed time slot for processing each micro-operation.
    • The clock speed measures the number of cycles your CPU executes per second, measured in GHz (gigahertz).
    • A CPU with a clock speed of 3.2 GHz executes 3.2 billion cycles per second.
    • The CPU is allocated one or more clock cycles to complete each micro-operation.
    • The processor base frequency refers to the CPU’s regular operating point, while the Max Turbo Frequency refers to the maximum speed the processor can reach.
    • The CPU executes the instructions in synchronization with the clock pulse.
    • Historically, operations were performed at clock speeds measured in MHz, in a range of roughly 4.77 MHz to 266 MHz.
    • The speed of a CPU is measured in terms of MIPS (millions of instructions per second) or cycles per second.
    • Each central processing unit has an internal clock that produces pulses at a fixed rate to synchronize all computer operations.

Chipsets: The chipset handles an incredible amount of data. It is the glue that connects the microprocessor with the motherboard. It contains two basic parts: the northbridge, which connects directly to the processor via the FSB (front-side bus), and the southbridge, which primarily handles the routing of traffic between the various input/output (I/O) devices on the system for which speed is not vital to the total performance, such as the disk drives (including RAID drive arrays) and optical drives.

Video (Graphics) Card:


A dedicated video card (or video adapter) is an expansion card installed inside your system unit to translate binary data received from the CPU or GPU into the images you view on your monitor. It is an alternative to the integrated graphics chip.
Modern video cards include ports allowing you to connect to different video equipment; also they contain their own RAM, called video memory. Video cards also come with their own processors or GPUs


Sound Cards

  1. Sound cards attach to the motherboard and enable your computer to record and reproduce sounds.
  2. Most computers ship with a basic sound card, most often a 3D sound card. 3D sound is better than stereo sound.

Ethernet Card/Network Cards

An Ethernet network requires that you install or attach network adapters to each computer or peripheral you want to connect to the network. Most computers come with Ethernet adapters preinstalled as network interface cards (NICs).


CPU (Central Processing Unit)

CPU is the most important hardware part of the Computer which performs all the functions and execution of input data. It executes the instructions stored in the main memory. CPU has a set of electronic circuits that executes the program instructions.

An example of a CPU is Intel 8085 which was an 8-bit microprocessor.


Computer interacts with primary storage, that is the main memory, for processing data and instructions. The CPU contains mainly two components: the Arithmetic Logic Unit and the Control Unit.

  • Arithmetic Logic Unit (ALU): The ALU is a digital circuit that performs all the calculations, such as bitwise and mathematical operations on binary numbers.
  • Control Unit: The CU controls all the activities, such as the transfer of data and instructions. It obtains the instructions from memory, decodes them, and then forwards them for execution or calculation. The control unit sends the control signals along the control bus.
  • Registers: These are small, high-speed memories built into the CPU chip circuits to access or store data immediately for the calculations or instructions performed by the ALU. They act as high-speed temporary memory and can hold a couple of words at a time until overwritten. The CPU needs to process very fast, so for the CPU to work on instructions or data from RAM you need fast memory in between, which is what registers provide.
    • Registers work under the direction of the control unit to accept, hold, and transfer instructions or data and perform arithmetic or logical comparisons at high speed.
  • Types of Registers
    • Program Counter: Stores the address of the next instruction to be executed.
    • Accumulator: This register temporarily stores data coming from the ALU.
    • Memory Address Register: Stores the address of the memory location currently being read from or written to.
    • Memory Data Register (also called Memory Buffer Register): Holds the data that has been copied from RAM and is ready for the CPU to process.
  • Below is the Image snapshot of various registers that are used in the CPU.
  • Cache (L2 or L3): A processor uses memory installed in the chip itself to store data and speed up operations before utilizing external system RAM. This on-board memory is held in one or more caches, identified as L2 or L3. More powerful processors are equipped with larger caches.
  • Socket: The socket is the slot on the motherboard in which the CPU is installed.

Computer Architecture and its Working

The working of a computer system comprises input operations, storage operations, data processing, and output operations.

  1. When you press keys on your keyboard, let's say ABC, the keyboard's PCB behind the keys converts the characters ABC into binary numbers and sends them to the CPU.
  2. Another scenario could be the execution of a single calculation like 35 + 49.
  3. The CPU's control unit fetches (gets) the instruction describing how to handle ABC (basically an opcode and operand) from RAM using the data bus, and in the meantime asks RAM to hold the data in memory until the calculation is executed. (At times the CPU fetches from the hard disk instead of RAM, as your OS lies on the hard disk.)
  4. The data bus brings the data and instructions into the CPU's internal memory, that is the registers, for processing.
  5. The control unit decodes the instruction (decides what it means) into machine binary code and directs that the necessary data be moved from memory to the arithmetic/logic unit. Steps 2, 3, and 4 together are called instruction time, or I-time.
  6. The arithmetic/logic unit executes the arithmetic or logical instruction. That is, the ALU is given control and performs the actual operation on the data.
  7. The arithmetic/logic unit stores the result of this operation in memory or in a register. (Steps 5 and 6 are execution time, or E-time.)
  8. The control unit eventually directs memory to release the result to an output device or a secondary storage device. The combination of I-time and E-time is called the machine cycle, and modern CPUs can perform millions (even billions) of machine cycles per second.
  9. All of this happens on the circuit board known as the motherboard.
Step by Step function of CPU along with Memory

Another example of how a computer works

  • Suppose your hard disk has 500 processes.
  • Suppose RAM can hold a maximum of 50 processes.
  • Let's say you ran a program, which is executable code (low-level machine code), that needs the 500 processes stored on the hard disk.
  • The CPU will request RAM to provide the instructions of up to 50 processes to execute. If RAM does not have them, it asks the hard disk to provide the instructions.
  • The hard disk copies the instructions to RAM, and then the CPU fetches them from RAM.
  • Which 50 processes the hard disk copies to RAM, and when the CPU fetches them, is decided by the operating system using different schedulers, such as the short-term and long-term schedulers.

Data flow from CPU to Memory and Vice Versa

Step by Step function of CPU
  1. The MAR stands for Memory Address Register, which is connected to the address bus. It stores the memory address of an instruction or data item; the sole function of the MAR is to contain the RAM address the CPU wants to access.
  2. The MDR stands for Memory Data Register, which is connected to the data bus. It holds the data that will be written to RAM or that has been read from RAM. Even when the ALU performs operations, the data passes through high-speed registers such as the MBR or MDR.
  3. The relationship between the MAR and the MDR is that the MAR gives the address from which the data in the MDR will be read, or to which it will be written.

Single Core CPU v/s Multi Core CPU

A single-core CPU can only process one program at a time. However, when you run multiple programs simultaneously, a single-core processor divides the programs into small pieces and executes them concurrently using time slicing.

For EX:

P1 initiated——————————————————– P1 Ends

P2 initiated ——————————— P2 Ends

P3 Initiated —————- P3 Ends

Unlike single-core processing, multi-core processing divides computing tasks into sub-parts, and a multicore processor (multiple CPU cores) executes the sub-tasks simultaneously. A dual-core CPU literally has two central processing units on the CPU chip. A quad-core CPU has four central processing units, an octa-core CPU has eight central processing units, and so on.

P1 initiated—————— P1 Ends

P2 initiated —————– P2 Ends

P3 Initiated —————– P3 Ends

Hyper Threading or Logical Processor or Threads of CPU

Threads are the virtual components, or code paths, that divide a physical core of a CPU into multiple virtual cores. A single CPU core can have up to 2 threads per core. So a dual-core CPU (2 cores) can have 4 threads, an octa-core CPU (8 cores) can have 16 threads, and so on.

Windows’ Task Manager shows this fairly well. Here, for example, you can see that this system has one actual CPU (socket) and four cores. Hyperthreading makes each core look like two CPUs to the operating system, so it shows 8 logical processors.
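As a quick check of this on your own machine (a minimal sketch; the output naturally depends on your CPU), you can compare physical cores with logical processors from a Windows command prompt:

wmic cpu get NumberOfCores,NumberOfLogicalProcessors

On a hyperthreaded CPU the second number will be twice the first.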

Threads of Processes

A thread is created by a process. Every time you open an application, it creates a thread that will handle all the tasks of that specific application. Likewise, the more applications you open, the more threads will be created.

The threads are always created by the operating system for performing a task of a specific application.

Batch Processing vs Multiprogramming vs Multiprocessing vs Multitasking vs Multithreading Operating Systems

Batch processing is the grouping of several same processing jobs to be executed one after another by a computer without any user interaction.

Multiprogramming is the ability of an OS to execute multiple programs at the same time on a single processor machine.

Multiprocessing system: A system in which more than one processor is connected, and the processors work collectively on the completion of a task.

Multithreading is a conceptual programming paradigm where a process is divided into a number of sub-processes called threads. Each thread is independent and has its own path of execution with enabled inter-thread communication.

Magnetic Storage Device

Magnetic storage devices are devices that have a layer of magnetic material on their surface. These devices have a read-write assembly that converts data and instructions in the form of 0s and 1s into magnetic signals.

Floppy: The floppy disk stores data in the form of magnetic signals; while data is being stored, the 0s and 1s are converted into magnetic signals. Floppies were introduced by IBM and later became known as diskettes. They have a small sliding switch called the write-protect notch, which prevents data on the floppy from being overwritten or deleted.

Hard Disk: A hard disk can store a huge amount of data and has rigid platters that hold the magnetic medium, unlike floppy disks and tapes, which use plastic film. The information remains intact even after the computer is switched off, which is why the operating system is installed and stored on the hard disk: as the hard disk is non-volatile memory, the OS is not lost on power-off.

Magnetic tapes: They are similar to the tapes that you see in audio or video cassettes and are divided into tracks. One of the tracks is used to detect errors. They can store around 10 GB of data. They allow only sequential access, which is a disadvantage.

Zip disks: They are similar to floppy disks but have a thinner magnetic coating, which allows more tracks per inch on the disk surface.

Optical Storage Device

In the case of Optical storage devices, the signals are stored in the form of light. So 0’s and 1’s are converted into light information. Let’s learn about some of the optical storage devices.

CD-ROM: It stands for compact disk read-only memory. When you add or write any data in the CD it is known as burning the CD. It is basically ROM where data can be read but once written cannot be rewritten or erased.

DVD-ROM: Used for high-quality video and offers larger storage, from about 4 GB to 18 GB.

Conclusion

In this tutorial, you learned everything about computer hardware and how computers work. With this knowledge, you are a computer hardware pro and can easily diagnose your systems!

How does Python work Internally with a computer or operating system

Are you a Python developer trying to understand how the Python language works? This article is for you; you will learn every bit and piece of the Python language. Let's dive in!

Python

Python is a high-level language that is used for designing, deploying, and testing in lots of places. It is consistently ranked among today's most popular programming languages. It is a dynamic, object-oriented language that also supports procedural styles, and it runs on all major hardware platforms. Python is an interpreted language.

High Level v/s Low Level Languages

High-Level Language: A high-level language is easier to understand because it is human readable. It is either compiled or interpreted. It consumes more memory and is slower in execution, but it is portable. It requires a compiler or an interpreter for translation.

Of the translators that handle high-level languages, the compiler generally produces the fastest-running programs.

Low-Level Language: Low-level languages are machine-friendly, that is, machines can read the code but humans cannot easily do so. They consume less memory and are fast to execute. They are not portable. They require an assembler for translation.

Interpreted v/s Compiled Language

Compiled Language: Compiled language is first compiled and then expressed in the instruction of the target machine that is machine code. For example – C, C++, C# , COBOL

Interpreted Language: An interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them previously to have been compiled into a machine language program and these kinds of languages are known as interpreter languages. For example JavaScript, Perl, Python, BASIC

Python vs C++/C Language Compilation Process

C++ or C Language: These languages need compilation, meaning human-readable code has to be translated into machine-readable code. The machine code is then executed by the CPU. Below is the sequence in which code execution takes place.

  1. Human-readable code is written.
  2. Compilation takes place.
  3. The compiled code results in an executable file in machine code format (understood by the hardware).
  4. The executable file is executed by the CPU.

Python Language:

Python is a high-level language

Bytecode, also termed p-code, is a form of instruction set designed for efficient execution by a software interpreter

  1. Python code is written in .py format, such as test.py.
  2. The Python interpreter compiles the code into a .pyc (or .pyo) file, which is byte code, not machine code (it is not understood directly by the machine).
  3. Once your program has been compiled to byte code (or the byte code has been loaded from existing .pyc files), it is shipped off for execution to something generally known as the Python Virtual Machine (PVM).
  4. The PVM translates the byte code into machine-level operations; that is, the byte code in test.pyc is ultimately executed as machine instructions (such as 10101010100010101010).
  5. Finally, the program is executed and the output is displayed. You can reproduce the byte-code step yourself with the commands shown just after this list.
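A minimal, hands-on sketch of that step (the file name test.py is just an example): ask Python to compile a script without running it, then inspect the byte code.

python3 -m py_compile test.py
python3 -m dis test.py

The first command writes the compiled byte code into the __pycache__ folder as a .pyc file; the second disassembles the script so you can see the individual byte-code instructions the PVM will execute.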

Conclusion

In this tutorial, you learnt how the python language works and interacts with Operating systems and Hardware. So, which application are you planning to build using Python?

Windows Boot Process Step by Step

If you are looking to find how exactly windows booting happens then you are at the right place. In this tutorial, you will learn step by step how windows boot processing works. Let’s dive in.

Technical Terms:

Firmware

Firmware is low-level software stored on an electronic component; it contains software components such as the BIOS. These instructions inform electronic components how to operate. A kernel is not to be confused with a basic input/output system, which is an independent program stored on a chip within a computer's circuit board.

Firmware is stored in non-volatile memory devices such as ROM, EPROM, or flash memory

CMOS

A complementary metal-oxide-semiconductor (CMOS) is a type of integrated circuit technology.
The term is often used to refer to a battery-powered chip found in many personal computers that
holds some basic information, including the date and time and system configuration settings,
needed by the basic input/output system (BIOS) to start the computer.

The CMOS (Complementary Metal-Oxide Semiconductor) chip stores the settings that you make
with the BIOS configuration program.

Flash Memory

Flash memory is long-lived, non-volatile storage that retains information even when the system is powered off. Flash memory is widely used in car radios, cell phones, digital cameras, PDAs, solid-state drives, tablets, and printers.

Step by Step Windows boot Processing

Basic Input Output System (BIOS) – [STEP 1]

  • BIOS is the very first software to run when a computer is started and is stored on a small memory chip on the motherboard
  • BIOS provides steps to the computer on how to perform basic functions such as booting.
  • A computer’s basic input/output system (BIOS) is a program that’s stored in nonvolatile memory such as read-only memory (ROM) or flash memory, making it firmware
  • BIOS is also used to identify and configure the hardware in a computer such as the hard drive, floppy drive, optical drive, CPU, memory, and related equipment.
  • BIOS performs a POST (Power-On Self-Test). POST checks all the hardware devices connected to the computer, like RAM, the hard disk, etc., and makes sure that the system can run smoothly with those hardware devices. If the POST fails, the system halts with a beep sound.
  • The other task of the BIOS is to read the MBR. MBR stands for Master Boot Record, and it is the first sector on the hard disk. The MBR contains the partition table and the boot loader.

Power On Self Test (POST) – [STEP 2]

POST checks all the hardware devices connected to a computer like RAM, hard disk, etc, and makes sure that the system can run smoothly with those hardware devices. If the POST is a failure the system halts with a beep sound.

The first set of startup instructions is the POST, which is responsible for the following system and diagnostic functions:

  • Performs initial hardware checks, such as determining the amount of memory present
  • Verifies that the devices needed to start an operating system, such as a hard disk, are present
  • Retrieves system configuration settings from nonvolatile memory, which is located on the motherboard
  • If a single beep is sounded from the PC, then there are no hardware issues present in the system. However, an alternative beep sequence indicates that the PC has detected a hardware issue that needs to be resolved before moving on to the next stages of the process

MBR (Master Boot Record) – [STEP 3]

The BIOS reads the MBR. The MBR is the first sector on the hard disk and contains the boot loader.

Windows Boot Manager – [STEP 4]

Windows Boot Manager enables you to choose from multiple operating systems or select the kernels or helps to start Windows Memory Diagnostics. Windows Boot Manager starts the Windows Boot Loader. Located at %SystemDrive%\bootmgr.
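If you want to look at the Boot Manager configuration on a running system (a quick, optional check; run it from an elevated command prompt), Windows ships the bcdedit tool for this:

bcdedit /enum {bootmgr}

This prints the boot manager entry, including its device, path, and the default boot entry it will hand off to.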

Windows Boot Loader [STEP 5]

The boot loader is a small program that loads the kernel into the computer's memory, that is RAM. In older Windows versions there are three boot files: NTLDR, NTDETECT.COM, and Boot.ini.

  • The path of NTLDR (NT Loader) is C:\Windows\i386\NTLDR.
  • C:\boot.ini contains the configuration used by NTLDR.
  • NTDETECT.COM detects the hardware and passes that information to NTLDR.

Kernel Loading [STEP 6]

The Windows Boot Loader is responsible for loading the Windows kernel (Ntoskrnl.exe) and the Hardware Abstraction Layer (Hal.dll), which helps the kernel interact with the hardware. The Windows executive then processes the configuration information stored in the registry under HKLM\SYSTEM\CurrentControlSet and starts services and drivers.

Winlogon.exe then starts the login procedure of the Windows machine.

A High Level Summary of Boot Process:

  1. The computer loads the basic input/output system (BIOS) from ROM. The BIOS provides the most basic information about storage devices, boot sequence, security, Plug and Play (auto device recognition) capability and a few other items.
  2. The BIOS triggers a test called a power-on self-test (POST) to make sure all the major components are functioning properly. You may hear your drives spin and see some LEDs flash, but the screen, at first, remains black.
  3. The BIOS has the CPU send signals over the system bus to be sure all of the basic components are functioning. The bus includes the electrical circuits printed on and into the motherboard, connecting all the components with each other.
  4. The POST tests the memory contained on the display adapter and the video signals that control the display. This is the first point you’ll see something appear on your PC’s monitor.
  5. During a cold boot the memory controller checks all of the memory addresses with a quick read/write operation to ensure that there are no errors in the memory chips. Read/write means that data is written to a bit and then read back from that bit. You should see some output to your screen – on some PCs you may see a running account of the amount of memory being checked.
  6. The computer loads the operating system (OS) from the hard drive into the system’s RAM. That ends the POST and the BIOS transfers control to the operating system. Generally, the critical parts of the operating system – the kernel – are maintained in RAM as long as the computer is on. This allows the CPU to have immediate access to the operating system, which enhances the performance and functionality of the overall system

Conclusion

In this tutorial, you learned step by step how a Windows machine boots. So, which Windows machine do you plan to reboot?

The Ultimate Guide on AWS EKS for Beginners [Easiest Way]

In this ultimate guide, as a beginner you will learn everything you should know about AWS EKS and how to manage your AWS EKS cluster.

Come on, let's begin!

Table of Content

  1. What is AWS EKS ?
  2. Why do you need AWS EKS than Kubernetes?
  3. Installing tools to work with AWS EKS Cluster
  4. Creating AWS EKS using EKSCTL command line tool
  5. Adding one more Node group in the AWS EKS Cluster
  6. Cluster Autoscaler
  7. Creating and Deploying Cluster Autoscaler
  8. Nginx Deployment on the EKS cluster when Autoscaler is enabled.
  9. EKS Cluster Monitoring and Cloud watch Logging
  10. What is Helm?
  11. Creating AWS EKS Cluster Admin user
  12. Creating Read only user for the dedicated namespace
  13. EKS Networking
  14. IAM and RBAC Integration in AWS EKS
  15. Worker nodes join the cluster
  16. How to Scale Up and Down Kubernetes Pods
  17. Conclusion

What is AWS EKS ?

Amazon provides its own service AWS EKS where you can host kubernetes without worrying about infrastructure like kubernetes nodes, installation of kubernetes etc. It gives you a platform to host kubernetes.

Some features of Amazon EKS ( Elastic kubernetes service)

  1. It expands and scales across many Availability Zones so that there is always high availability.
  2. It automatically scales and replaces any impacted or unhealthy node.
  3. It integrates with various other AWS services such as IAM, VPC, ECR, and ELB.
  4. It is a very secure service.

How does AWS EKS service work?

  • The first step in EKS is to create an EKS cluster using the AWS CLI, the AWS Management Console, or the eksctl command line tool.
  • Next, you can either use your own EC2 machines to run your workloads or deploy to AWS Fargate, which manages the compute for you.
  • Now connect to the Kubernetes cluster with kubectl or eksctl commands.
  • Finally, deploy and run applications on the EKS cluster.

Why do you need AWS EKS than Kubernetes?

If you are working with plain Kubernetes, you are required to handle all of the below yourself:

  1. Create and Operate K8s clusters.
  2. Deploy Master Nodes
  3. Deploy Etcd
  4. Setup CA for TLS encryption.
  5. Setup Monitoring, AutoScaling and Auto healing.
  6. Setup Worker Nodes.

But with AWS EKS you only need to manage the worker nodes; all the rest, the master nodes, etcd in high availability, the API server, KubeDNS, the Scheduler, the Controller Manager, and the Cloud Controller Manager, is taken care of by Amazon EKS.

At the time this was written, you pay 0.20 US dollars per hour for your AWS EKS cluster, which comes to about 144 US dollars per month.

Installing tools to work with AWS EKS Cluster

  1. AWS CLI: Required as a dependency of eksctl to obtain the authentication token. To install AWS cli run the below command.
pip3 install --user awscli
After you install the AWS CLI, make sure to set the access key ID and secret access key in the AWS CLI so that it can create the EKS cluster, as shown below.
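The quickest way to do that is with the interactive aws configure command; it prompts for the values shown below (the key values and region are placeholders, not real credentials):

aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]: us-east-1
# Default output format [None]: json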
  1. eksctl: To setup and operate EKS cluster. To install eksctl run the below commands. Below command will download the eksctl binary in the tmp directory.
curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/v0.69.0/eksctl_Linux_amd64.tar.gz" | tar xz -C /tmp
  • Next, move the eksctl directory in the executable directory.
sudo mv /tmp/eksctl /usr/local/bin
  • To check the version of eksctl and see whether it is properly installed, run the below command.
eksctl version
  1. kubectl: Interaction with k8s API server. To install the kubectl tool run the below first command that updates the system and installs the https package.
sudo apt-get update && sudo apt-get install -y apt-transport-https
  • Next, run the curl command that will add the gpg key in the system to verify the authentication with the kubernetes site.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  • Next, add the kubernetes repository
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
  • Again update the system so that it takes the effect after addition of new repository.
sudo apt-get update
  • Next install kubectl tool.
sudo apt-get install -y kubectl
  • Next, check the version of the kubectl tool by running below command.
kubectl version --short --client
  1. IAM user and IAM role:
  • Create an IAM user with administrator access and use that IAM user to explore AWS resources on the console. This is also the user whose credentials you will configure in the AWS CLI on the EC2 instance that you will use to manage the AWS EKS cluster.
  • Also make sure to create an IAM role that you will attach to the EC2 instance from which you will manage AWS EKS and other AWS resources.

Creating AWS EKS using EKSCTL command line tool

Up to now you have installed and set up the tools that are required for creating an AWS EKS cluster. To see how to create a cluster using the eksctl command, run the help command, which lists the flags you can use while creating an AWS EKS cluster.

eksctl create cluster --help 
  1. Let's begin by creating an EKS cluster. To do that, create a file named eks.yaml and copy and paste the below content.
    • apiVersion is the version of the configuration schema used for the deployment.
    • kind denotes what kind of resource/object will be created. In the below case, as you need to provision a cluster, you should use ClusterConfig.
    • metadata: Data that helps uniquely identify the object, including a name string and, here, the AWS region.
    • nodeGroups: Provide the name of the node group and the other details required for the node group that will be used in your EKS cluster.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-course-cluster
  region: us-east-1

nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 3
    ssh: # use existing EC2 key
      publicKeyName: eks-course
  1. Now, execute the command below to create the cluster.
eksctl create cluster -f eks.yaml
  1. Once cluster is successfully created run the below command to know the details of the cluster.
eksctl get cluster
  1. Next, Verify the AWS EKS cluster on AWS console.
  1. Also verify the nodes of the nodegroups that were created along with the cluster by running the below commands.
kubectl get nodes
  1. Also, verify the Nodes on AWS console. To check the nodes navigate to EC2 instances.
  1. Verify the nodegroups in the EKS Cluster by running the eksctl command.
eksctl get nodegroup --cluster EKS-cluster
  1. Finally, verify the Pods running in the EKS cluster by running the below kubectl command.
kubectl get pods --all-namespaces

Adding one more Node group in the AWS EKS Cluster

To add another node group in EKS Cluster follow the below steps:

  1. Create a yaml file named node_group.yaml as shown below and copy/paste the below content. In the below file you will notice that the previous node group is also included; if you ran this file without it, it would override the previous configuration and remove the ng-1 node group from the cluster.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-cluster
  region: us-east-1

nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 3
    ssh: # use existing EC2 key
      publicKeyName: testing
# Adding another node group, nodegroup2, with min/max capacity of 2 and 3 respectively
  - name: nodegroup2
    minSize: 2
    maxSize: 3
    instancesDistribution:
      maxPrice: 0.2
      instanceTypes: ["t2.small", "t3.small"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
    ssh:
      publicKeyName: testing
  1. Next run the below command that will help you to create a nodegroups.
eksctl create nodegroup --config-file=node_group.yaml --include='nodegroup2'
  1. If you wish to delete the node group in the EKS cluster, run any one of the below commands.
eksctl delete nodegroup --cluster=EKS-cluster --name=nodegroup2
eksctl delete nodegroup --config-file=eks.yaml --include='nodegroup2' --approve
  • To Scale the node group in EKS Cluster
eksctl scale nodegroup --cluster=name_of_the_cluster --nodes=5 --name=node_grp_2

Cluster Autoscaler

The Cluster Autoscaler automatically launches additional worker nodes if more resources are needed and shuts down worker nodes if they are underutilized. The autoscaling works within a node group, so you should create a node group with the Autoscaler feature enabled.

Cluster Autoscaler has the following features:

  • Cluster Autoscaler is used to scale up and down the nodes within the node group.
  • It runs as a deployment based on CPU and Memory utilization.
  • It can contain on demand and spot instances.
  • There are two types of scaling
    • Multi AZ Scaling: Node group with Multi AZ ( Stateless workload )
    • Single AZ Scaling: Node group with Single AZ ( Stateful workload)

Creating and Deploying Cluster Autoscaler

The main function of the Autoscaler is to dynamically add or remove nodes within a node group on the fly. The Autoscaler runs as a deployment and reacts to CPU/Memory requests.

There are two types of scaling available: Multi AZ versus Single AZ (for stateful workloads), because an EBS volume cannot be spread across multiple Availability Zones.

To use the Cluster Autoscaler you can add multiple node groups to the cluster as needed. In this example, let's deploy one on-demand node group pinned to a single AZ and one spot-instance node group spread across AZs, both with the Autoscaler enabled.

  1. Create a file and name it autoscaler.yaml, then copy/paste the below content.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-cluster
  region: us-east-1

nodeGroups:
  - name: scale-east1c
    instanceType: t2.small
    desiredCapacity: 1
    maxSize: 10
    availabilityZones: ["us-east-1c"]
# iam holds all IAM attributes of a NodeGroup
# enables IAM policy for cluster-autoscaler
    iam:
      withAddonPolicies:
        autoScaler: true
    labels:
      nodegroup-type: stateful-east1c
      instance-type: onDemand
    ssh: # use existing EC2 key
      publicKeyName: eks-ssh-key
  - name: scale-spot
    desiredCapacity: 1
    maxSize: 10
    instancesDistribution:
      instanceTypes: ["t2.small", "t3.small"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 0
    availabilityZones: ["us-east-1c", "us-east-1d"]
    iam:
      withAddonPolicies:
        autoScaler: true
    labels:
      nodegroup-type: stateless-workload
      instance-type: spot
    ssh: 
      publicKeyName: eks-ssh-key

availabilityZones: ["us-east-1c", "us-east-1d"]
  1. Run the below command to create the node groups defined in the file.
eksctl create nodegroup --config-file=autoscaler.yaml
  1. To list the node groups in the cluster, run the below command.
eksctl get nodegroup --cluster=EKS-cluster
  1. Next, to deploy the Autoscaler run the below kubectl command.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
kubectl -n kube-system annotate deployment.apps/cluster-autoscaler cluster-autoscaler.kubernetes.io/safe-to-evict="false"
  1. To edit the deployment and set your AWS EKS cluster name run the below kubectl command.
kubectl -n kube-system edit deployment.apps/cluster-autoscaler
  1. Next, describe the deployment of the Autoscaler by running the below kubectl command.
kubectl -n kube-system describe deployment cluster-autoscaler
  1. Finally view the cluster Autoscaler logs by running the kubectl command on kube-system namespace.
kubectl -n kube-system logs deployment.apps/cluster-autoscaler
  1. Verify the Pods. You should notice below that the first pod belongs to node group 1, the second to node group 2, and the third is the Autoscaler pod itself.

Nginx Deployment on the EKS cluster when Autoscaler is enabled.

  1. To deploy the nginx application on the EKS cluster that you just created, create a yaml file, name it something convenient such as nginx-deployment.yaml, and copy/paste the below content into it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        service: nginx
        app: nginx
    spec:
      containers:
      - image: nginx
        name: test-autoscaler
        resources:
          limits:
            cpu: 300m
            memory: 512Mi
          requests:
            cpu: 300m
            memory: 512Mi
      nodeSelector:
        instance-type: spot


  1. Now to apply the nginx deployment, run the below command.
kubectl apply -f nginx-deployment.yaml
  1. After successful deployment , check the number of Pods.
kubectl get pods
  1. Checking the number of nodes and type of node.
kubectl get nodes -l instance-type=spot
  • Scale the deployment to 3 replicas ( that is 3 pods will be scaled)
kubectl scale --replicas=3 deployment/test-autoscaler
  • Checking the logs and filtering the events.
kubectl -n kube-system logs deployment.apps/cluster-autoscaler | grep -A5 "Expanding Node Group"

EKS Cluster Monitoring and Cloud watch Logging

By now you have already set up the EKS cluster, but it is also important to monitor it. To monitor your cluster, follow the below steps:

  1. Create the below eks.yaml file and copy/paste the below code into the file.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKS-cluster
  region: us-east-1

nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 3
    ssh: # use existing EC2 key
      publicKeyName: eks-ssh-key
cloudWatch:
  clusterLogging:
    enableTypes: ["api", "audit", "authenticator"] # To select only few log_types
    # enableTypes: ["*"]  # If you need to enable all the log_types
  1. Now apply the cluster logging by running the command.
eksctl utils update-cluster-logging --config-file eks.yaml --approve 
  1. To Disable all the configuration types
eksctl utils update-cluster-logging --name=EKS-cluster --disable-types all

To get container metrics into CloudWatch: first add the IAM policy (CloudWatchAgentServerPolicy) to the role of all your node group(s), and then deploy the CloudWatch agent. After you deploy it, it will run in its own namespace (amazon-cloudwatch).

  1. Now run the below command.
curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/master/k8s-yaml-templates/quickstart/cwagent-fluentd-quickstart.yaml | sed "s/{{cluster_name}}/EKS-course-cluster/;s/{{region_name}}/us-east-1/" | kubectl apply -f -
  1. To check everything that has been created in the amazon-cloudwatch namespace, run the below command.
kubectl get all -n amazon-cloudwatch

To generate some sample load and watch the metrics change, create a php-apache deployment and a busybox load generator:

kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --limits=cpu=500m --expose --port=80
kubectl run --generator=run-pod/v1 -it --rm load-generator --image=busybox /bin/sh
# Hit enter for the command prompt, then run:
while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done

What is Helm?

Helm is a package manager, similar to what you have in Ubuntu or Python with apt or pip. Helm has mainly three components.

  • Chart: All the dependency files and application files.
  • Config: Any configuration that you would like to deploy.
  • Release: A running instance of a chart.

Helm Components

  • Helm client: Manages repository, Managing releases, Communicates with Helm library.
  • Helm library: It interacts with Kubernetes API server.

Installing Helm

  • To install helm make sure to create the directory with below commands and then change the directory
mkdir helm && cd helm
  • Next, download and run the official Helm 3 install script, then verify the installed version.
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
helm version
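The stable chart used later in this section assumes the legacy stable repository has been added; a hedged example, assuming the archived stable chart location:

helm repo add stable https://charts.helm.sh/stable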
  • To find all the lists of the repo
helm repo list
  • To Update the repository
helm repo update
  • To check all the charts in the helm repository.
helm search repo
  • To install one of the charts. After running the below command then make sure to check the number of Pods running by using kubectl get pods command.
helm install name_of_the_chart stable/redis
  • To check the deployed charts
helm ls
  • To uninstall helm deployments.
helm uninstall <<name-of-release-from-previous-output>>

Creating AWS EKS Cluster Admin user

To manage resources in the EKS cluster you need dedicated users (either admin or read-only) that perform tasks accordingly. Let's begin by creating an admin user first.

  1. Create IAM user in AWS console (k8s-cluster-admin) and store the access key and secret key for this user locally on your machine.
  2. Next, add the user to the mapUsers section of the aws-auth ConfigMap. But before you add the user, let's list the ConfigMaps in the kube-system namespace, because all the users are stored in aws-auth.
kubectl -n kube-system get cm
  1. Save the aws-auth ConfigMap to a yaml file by running the below command.
kubectl -n kube-system get cm aws-auth -o yaml > aws-auth-configmap.yaml
  1. Next, edit the aws-auth-configmap.yaml and add the mapUsers with the following information:
    • userarn
    • username
    • groups as system:masters, which has admin (all) permissions; it is basically a role
  1. Run the below command to apply the changes of newly added user.
kubectl apply -f aws-auth-configmap.yaml -n kube-system

After you apply the changes, you will notice that the AWS EKS console no longer shows warnings such as Kubernetes objects cannot be accessed.

  1. Now check if user has been properly created by running the describe command.
kubectl -n kube-system describe cm aws-auth
  1. Next, add the user to the AWS credentials file in a dedicated section (profile), and then either export that profile using the export command or pass it to the AWS CLI.
export AWS_PROFILE="profile_name"
  1. Finally check which user is currently running the aws cli commands
aws sts get-caller-identity

Creating a read only user for the dedicated namespace

Similarly, now create a read only user for AWS-EKS service. Lets follow the below steps to create a read only user and map it in configmap with IAM.

  1. Create a namespace using the below command.
kubectl create namespace production
  1. Create a IAM user on AWS Console
  1. Create a file rolebinding.yaml and add both the role and role bindings that includes the permissions that a kubernetes user will have.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: production
  name: prod-viewer-role
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]  # can be further limited, e.g. ["deployments", "replicasets", "pods"]
  verbs: ["get", "list", "watch"] 
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: prod-viewer-binding
  namespace: production
subjects:
- kind: User
  name: prod-viewer
  apiGroup: ""
roleRef:
  kind: Role
  name: prod-viewer-role
  apiGroup: ""
  1. Now apply the role and role bindings using the below command.
kubectl apply -f rolebinding.yaml
  1. Next, pull the aws-auth ConfigMap again, add the new user's userarn, username, and groups under mapUsers as you did previously, and apply the changes.
kubectl -n kube-system get cm aws-auth -o yaml > aws-auth-configmap.yaml
kubectl apply -f aws-auth-configmap.yaml -n kube-system
  1. Finally test the user and setup

EKS Networking

  • The Amazon VPC CNI plugin assigns each Pod an IP address that is linked to an ENI (Elastic Network Interface).
  • Pods have the same IP address inside and outside the EKS cluster within the VPC.
  • Make sure you have plenty of IP addresses available by using a large CIDR block such as a /18.
  • Each EC2 instance supports only a limited number of ENIs/IP addresses, which means each EC2 instance can run only a limited number of Pods (around 36 or so, depending on the instance type), as the calculation below shows.
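As a rough sketch of where that per-instance Pod limit comes from (the ENI and per-ENI IP counts below are illustrative; check the AWS documentation for your exact instance type), the commonly used formula is ENIs x (IPv4 addresses per ENI - 1) + 2:

# Example: an instance type that supports 3 ENIs with 12 IPv4 addresses each
echo $(( 3 * (12 - 1) + 2 ))   # prints 35, the approximate maximum number of Pods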

IAM and RBAC Integration in AWS EKS

  • Authentication is done by IAM
  • Authorization is done by kubernetes RBAC
  • You can assign RBAC directly to IAM entities.

kubectl ( USER SENDS AWS IDENTITY) >>> Connects with EKS >>> Verify AWS IDENTITY ( By Authorizing AWS Identity with Kubernetes RBAC )
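In practice, the kubeconfig entry that makes kubectl send your AWS identity is usually generated with the AWS CLI (the cluster name and region below are the ones used earlier in this guide; adjust them to yours):

aws eks update-kubeconfig --name EKS-cluster --region us-east-1
kubectl get nodes

After the first command, kubectl authenticates to the cluster with your current IAM identity, and RBAC decides what that identity is allowed to do.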

Worker nodes join the cluster

  1. When you create a worker node, you assign it an IAM role, and that IAM role needs to be authorized in RBAC in order for the node to join the cluster. Add the system:bootstrappers and system:nodes groups in your aws-auth ConfigMap, set the value of rolearn to the node's NodeInstanceRole ARN, and then run the below command.
kubectl apply -f aws-auth.yaml
  1. Check current state of cluster services and nodes
kubectl get svc,nodes -o wide

How to Scale Up and Down Kubernetes Pods

There are three ways of scaling Kubernetes Pods up or down; let's look at all three.

  1. Scale the deployment to 3 replicas ( that is 3 pods will be scaled) using kubectl scale command.
kubectl scale --replicas=3 deployment/nginx-deployment
  1. Next, update the yaml file with 3 replicas and run the below kubectl apply command. ( Lets say you have abc.yaml file)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        service: nginx
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx 
        resources:
          limits:
            cpu: 300m
            memory: 512Mi
          requests:
            cpu: 300m
            memory: 512Mi
      nodeSelector:
        instance-type: spot
kubectl apply -f abc.yaml
  1. You can scale the Pods using the kubernetes Dashboard.
  1. Apply the manifest file that you created earlier by running below command.
kubectl apply -f nginx.yaml
  1. Next, verify that the deployment has been done successfully.
kubectl get deployment --all-namespaces

Conclusion

In this tutorial you learned about AWS EKS from beginner to advanced level.

Now that you have a strong understanding of AWS EKS, which applications do you plan to manage on it?

How to Work with Ansible When and Other Conditionals

If you need to execute Ansible tasks based on different conditions, then you're in for a treat. Ansible when and other conditionals let you evaluate conditions, such as the target OS, or whether one task depends on the previous task.

In this tutorial, you’re going to learn how to work with Ansible when and other conditionals so you can execute tasks without messing things up.

This Blog has been Written by Author of Automateinfra.com (Shanky) on adamtheautomator.com [ATA]

Click here and Continue reading

How to Install Terraform on Linux and Windows

Are you overwhelmed with the number of cloud services and resources you have to manage? Do you wonder what tool can help with these chores? Wonder no more and dive right in! This tutorial will teach how to install Terraform!

Terraform is the most popular automation tool to build, change and manage your cloud infrastructure effectively and quickly. So let’s get started!

This Blog has been Written by Author of Automateinfra.com (Shanky) on adamtheautomator.com [ATA]

Click here and Continue reading

The Ultimate Guide on Docker for Beginners [Easiest Way]

Table of Content

  • Introduction to docker
  • Why do you need Docker?
  • Docker Images
  • Docker Containers
  • Why Docker Containers not Hypervisor
  • Docker Client-Server Architecture

Introduction to docker

Docker is an open-source platform that allows you to deploy, run, and ship applications. With Docker, it takes little time to deploy, run, and test your applications anywhere, across different operating systems. Containers provide a lightweight, loosely isolated environment, and they are more cost-effective than hypervisor-based virtual machines because you can make better use of your compute resources.

Docker is written in the Go programming language and takes advantage of several features of the Linux kernel to deliver its functionality.

Docker provides the ability to package and run an application in a loosely isolated environment called a container. Containers are lightweight and contain everything needed to run the application, so you do not need to rely on what is currently installed on the host.

Docker manages the lifecycle of containers starting from developing the applications using containers, which you can use or distribute later and deploy in any environment such as a local data center, a cloud provider, or a hybrid of the two.

Why do you need Docker?

Without Docker: as usual, your operating system is installed directly on the hardware. When you host your applications directly on the operating system, the applications depend on different libraries and run into compatibility issues. So when you have two or more applications it becomes difficult to manage so many microservices, and it takes a long time to set them up.

Without Docker

With Docker: your operating system is still installed on the hardware, but this time you host your applications on Docker rather than directly on the operating system, so each application carries its own libraries and dependencies. When you have two or more applications, they are easier to manage because each one runs in its own isolated environment.

With Docker

Docker Images

Docker images contain the instructions for creating Docker containers. Images are available in the public registry, Docker Hub, and you can also customize an image by adding more instructions in a file known as a Dockerfile.

A Docker image is a binary package that contains an application and all of its software dependencies and runtimes.

Docker images can be found on Docker Hub.

Docker Containers

Docker applications run inside lightweight environments known as containers, and you can run multiple containers on your host, whether it is Windows, Linux, or macOS. Containers are runnable instances of an image. You can create, delete, stop, and start containers at any point in time.

Containers are isolated environments. A container can have its own services, network interfaces, processes, mounts, and so on, but all containers share the same OS kernel. For example, Ubuntu, Fedora, and CentOS all have the same OS kernel, Linux, but ship different software; that is why there are so many flavors of Linux, differing in areas such as the GUI or the command-line tooling.

OS kernel: the OS kernel manages memory and CPU time. It is the core component that acts as a bridge between applications and the data processing performed at the hardware level, via system calls.

Containers sharing the same kernel means that Docker can run any container that is based on, and compatible with, the underlying operating system kernel.

  • You cannot run a Windows-based container on a Linux OS kernel. For that you will require Docker on Windows.
  • You can run a Linux-based container on a Windows OS, but under the hood Docker runs a Linux virtual machine on top of Windows and runs the Linux-based container inside that virtual machine.
Different operating systems sharing the OS kernel (Linux)

In the example below, Docker can run any container based on Ubuntu, Fedora, or CentOS because the underlying host is Ubuntu, which uses the Linux kernel. The containers share networking and other kernel-level facilities; only the software is installed inside each container.

Why Docker Containers not Hypervisor

Docker containers share a single underlying operating system, which helps with cost optimization, resource utilization, and disk space, and also means faster boot times.

With a hypervisor, you run many operating systems, which increases the overhead (disk size) and resource utilization and also takes more time to boot.

Normal OS – Virtualization – Containerization

There are two Docker editions: the Community Edition and the Enterprise Edition. The Community Edition is free and available on Windows, Linux, macOS, and in the cloud (AWS or Azure). The Enterprise Edition comes with more features, such as image security and image management.

Docker Client-Server Architecture

Docker uses a client-server architecture where the docker client connects to the Docker daemon which performs all the functions such as building images, running containers, and distributing containers.

The Docker client connects to the Docker daemon using a REST API over UNIX sockets. For example, when you run the docker run command, your Docker client first connects to the Docker daemon, which performs the task. Docker daemons can also communicate with other daemons to manage containers.

Docker daemon: dockerd is the Docker daemon; it listens for API requests and manages Docker objects such as images, containers, networks, and volumes.

Docker registries store Docker images. Docker Hub is the public registry that anybody can use. Images are pulled from Docker Hub with the docker pull or docker run commands, and pushed to it with docker push.
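To see the client-daemon split for yourself, the following standard Docker CLI commands show both sides:

docker version   # prints the Client section and the Server (daemon) section separately
docker info      # prints daemon-level details such as the storage driver, container counts, and the default registry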

Docker commands

  • To run a container from the nginx image.
    • If the image is already present on the system, Docker uses it to run the container; otherwise Docker pulls it from Docker Hub, and subsequent runs reuse the downloaded image.
    • By default a container runs in the foreground, or attached mode, which means it is attached to the console (the standard output of the container). You will see the output of the web service on your screen and you won't be able to do anything else on that console. Press Ctrl + C to exit.
docker run name_of_the_image
docker run nginx
  • To run a container from the ubuntu image. In this case the container starts and exits immediately because, unlike virtual machines, containers are not supposed to host an operating system; they are supposed to run a web application, a web server, or some other specific task.
docker run ubuntu 
  • To run a container from the ubuntu image and keep it running, you can tell the container to sleep for a particular time.
docker run ubuntu sleep 5000
  • To run a container from the ubuntu or centos image and run a command in it directly, pass the command as shown below, such as cat /etc/*release*; the container exits once the command finishes or you log out, because it is not running any service.
    • If an application needs you to provide input, dockerizing it as-is wouldn't wait for a prompt; it just prints on standard output (STDOUT), because by default a container doesn't listen to standard input even when you are attached to the console. It doesn't have a terminal to read input from.
    • You must map the standard input of your host to the Docker container using the -i parameter.
    • With the -i parameter you get interactive mode.
    • With the -t (pseudo terminal) parameter you are attached to the container's terminal, which is needed because the application prompts on a terminal.
docker run -it centos bash
cat /etc/*release*    # run this inside the container to print the OS release
  • To run a container in detached mode (in the background), use the -d flag. You get your prompt back immediately as the container starts, and the container continues to run in the background.
docker run -d nginx
  • To re-attach to a container that is running in detached mode.
docker attach name_of_the_container  or container_id
  • To list all running containers along with some information about them.
docker ps
  • To list all containers (running, stopped, exited, and so on).
docker ps -a
  • To stop a container. This stops the container but does not free the disk space it consumes.
docker stop name_of_the_container
  • To remove a container and free the space it consumes.
docker rm name_of_the_container
  • To remove multiple containers at once.
docker rm container_ID_1 container_ID_2 container_ID_3
  • To list all the docker images
docker images
  • To download or pull the docker image
docker pull name_of_the_image
  • To remove a Docker image. Make sure you first remove all containers associated with the image.
docker rmi name_of_the_image
  • To execute a command inside a container
docker exec name_of_the_container cat /etc/hosts
  • To run a container from a specific version.
docker run image:tag
  • To access the application from a web browser you need to run the docker run command with the -p flag.
    • The -p flag maps a port on the Docker host to a port inside the container; in Case 1, host port 80 maps to container port 5000.
    • In Case 1 the Docker host listens for the application on port 80 and forwards traffic internally to port 5000 on the container's IP address.
    • In Case 2 the Docker host listens on port 8000 and forwards internally to port 5000 on the container's IP address.
    • In Case 3 the Docker host listens on port 8001 and forwards internally to port 5000 on the container's IP address.
    • You can run as many applications as you wish on different Docker host ports, but you cannot reuse the same host port.
docker run -p 80:5000 nginx    # Case 1
docker run -p 8000:5000 nginx  # Case 2
docker run -p 8001:5000 nginx  # Case 3
  • To keep data persistent beyond the life of a container, mount a volume from the Docker host into the container. For example, when you run the MySQL container its data lives in the /var/lib/mysql directory inside the container and is blown away as soon as you remove the container.
    • /opt/datadir is the directory on the Docker host.
    • /var/lib/mysql is the directory inside the container that it is mapped to.
    • mysql is the name of the Docker image.
    • The -v flag mounts the volume from the host into the container.
docker run -v /opt/datadir:/var/lib/mysql mysql
  • To get a detailed view of a container, use the inspect command.
docker inspect name_of_the_container  or container_id
  • To view the logs of a container, use the docker logs command.
docker logs container_name or container_id
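Putting a few of these commands together, here is a small end-to-end sketch (the container name web is arbitrary):

docker run -d --name web -p 8080:80 nginx   # run nginx detached, mapping host port 8080 to container port 80
docker ps                                   # confirm the container is running
docker logs web                             # view the access and error logs
docker stop web                             # stop the container
docker rm web                               # remove it and free the space it consumed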

Creating Your Own Docker Image

Why do you need to create your own Docker image? As you might have seen, stock Docker images contain just a base application or operating system, but most of the time you need to add more software on top of the base image to build your application. Let's check it out.

To create your own Docker image you need a file known as a Dockerfile, in which you define the instructions to execute, such as the base image, updating repositories, installing dependencies, copying source code, running the web application, and so on. Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.

Each instruction in a Dockerfile creates another layer (each instruction takes up its own space). If an instruction fails during a build and you rebuild the image, the layers from previous instructions that were cached are reused.

The build is run by the Docker daemon, not by the CLI. The first thing a build process does is send the entire context (recursively) to the daemon, and the daemon creates the new image.

docker build . 

To build the Docker image using a Dockerfile at a different path, use the -f flag.

docker build -f /path/to/a/Dockerfile .
  1. FROM: the FROM instruction initializes a new build stage and sets the base image for subsequent instructions. FROM may appear multiple times in a Dockerfile.
  2. ARG: ARG is the only instruction that may come before FROM, and FROM can use the arguments it declares.
  3. RUN: the RUN instruction executes commands in a new layer on top of the current image and commits the results. RUN can be declared in two forms, the shell form or the exec (executable) form.
    • Shell form: the command is run in a shell, i.e. /bin/sh -c. If you need to split a long command across lines, use a backslash.
    • Exec form: RUN ["executable", "param1", "param2"]. If you need to use a shell other than /bin/sh, consider using the exec form.
RUN /bin/bash -c 'source $HOME/.bashrc; \
echo $HOME'

RUN /bin/bash -c 'source $HOME/.bashrc; echo $HOME'
  1. CMD: the main purpose of CMD is to provide the default command that runs inside the container, much like the command you pass to docker run. CMD has three forms, shown below. There can be only one CMD instruction in a Dockerfile; if you list more than one, only the last takes effect. If you don't specify the executable in CMD, you can use it together with ENTRYPOINT, but make sure to define both CMD and ENTRYPOINT in JSON (exec) format.
    • CMD ["executable","param1","param2"]
    • CMD ["param1","param2"]
    • CMD command param1 param2

CMD defines default commands and/or parameters for a container. CMD is an instruction that is best to use if you need a default command which users can easily override

  1. ENTRYPOINT: an ENTRYPOINT allows you to configure a container to run as an executable. When you provide command-line arguments to docker run, they are appended after all elements of an exec-form ENTRYPOINT and override all elements specified using CMD. Let's look at an example.

ENTRYPOINT is preferred when you want to define a container with a specific executable. You cannot override an ENTRYPOINT when starting a container unless you add the --entrypoint flag.

Combine ENTRYPOINT with CMD if you need a container with a specific executable and a default parameter that can be modified easily, as in the sketch below.
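As a small illustration of how ENTRYPOINT and CMD work together (a hypothetical example, not from the original article; the image name curl-demo is arbitrary):

# Create a minimal Dockerfile that combines FROM, RUN, ENTRYPOINT, and CMD
cat > Dockerfile <<'EOF'
FROM ubuntu:22.04
RUN apt-get update && \
    apt-get install -y curl
ENTRYPOINT ["curl"]
CMD ["--help"]
EOF

docker build -t curl-demo .
docker run curl-demo                          # runs "curl --help" (the default CMD)
docker run curl-demo -I https://example.com   # arguments replace CMD but keep the ENTRYPOINT, so this runs "curl -I https://example.com"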

The Ultimate Guide on AWS Cloud Practitioner- PART-1

What does a server contain?

  • Compute: computation is done by the CPU.
  • Memory: RAM is the working memory of the machine.
  • Storage: where data is stored.
  • Database: stores data in a structured way.

Why Cloud instead of Datacenter ?

  • You need to pay rent for the data center.
  • You need to pay for maintenance, power supply, and cooling, and take care of the data center yourself.
  • Adding hardware and infrastructure takes more time.
  • Scaling is limited.
  • In case of a disaster, everything can be lost.

Benefits of Cloud over Datacenter

  • Cloud computing is the on-demand delivery of compute power, database storage, applications, and other IT resources.
  • You pay only for what you use.
  • You provision exactly the right type and size of compute resources and access them instantly.
  • It gives you a simple way to access all services such as databases, servers, and storage.

Features of Cloud Computing

1. Cloud computing is flexible: if you need more services or more servers you can easily scale up your cloud capacity, and you can scale down again when demand drops.

2. Disaster recovery: the cloud helps organizations here. By automating and creating infrastructure with different automation tools you can redeploy and rebuild your services quickly, and the backup and recovery options in the cloud are strong.

3. Never miss an update: since the service itself is not managed by your organization, the provider takes care of infrastructure and server management, including updates.

4. Cloud services minimize expenditure: cloud computing cuts hardware costs. You simply pay as you go and enjoy the services.

5. Workstations in the cloud: you can work from anywhere in the world, at any time.

6. Cloud computing offers security: lost laptops are a billion-dollar business problem, and the loss of the sensitive data on them is an even bigger one. When your data resides in the cloud with strong security and fault tolerance, that is a great advantage in my opinion.

7. Mobility: Cloud computing allows mobile access to corporate data via smartphones and devices, which is a great way to ensure that no one is ever left out of the loop.

Cloud computing adoption is on the rise every year, and it doesn’t take long to see why enterprises recognize cloud computing benefits.

Different Types of Cloud Computing

SAAS (Software as a Service): With SaaS, an organization accesses a specific software application hosted on a remote server and managed by a third-party provider

PAAS (Platform as a Service): With PaaS, an organization accesses a pre-defined environment for software development that can be used to build, test, and run applications. This means that developers don’t need to start from scratch when creating apps.

IAAS (Infrastructure as a Service): With IaaS, an organization migrates its hardware—renting servers and data storage in the cloud rather than purchasing and maintaining its own infrastructure. IaaS provides an organization with the same technologies and capabilities as a traditional data center, including full control over server instances.

System administrators within the business are responsible for managing aspects such as databases, applications, runtime, security, etc., while the cloud provider manages the servers, hard drives, networking, storage, etc.

How to Set up a PostgreSQL Database on Amazon RDS

Table of Content

  1. What is Database?
  2. What is AWS RDS?
    • Database Created Manually vs Database Created using AWS RDS
  3. What is PostgreSQL?
  4. Creating Amazon RDS with PostgreSQL Manually
    • Prerequisites
    • Important Parameters for setting up Postgres
    • Security Required in Setting up Postgres
    • Connecting to DB instance using Postgres engine with Master Password from pgadmin4
  5. Troubleshooting the PostgreSQL connectivity issues.

What is Database?

You might already be well aware of what a database is used for, but let me share an example. Suppose you have a company and you want to store all the information about your employees, such as name, employee ID, address, joining date, and benefits. Where do you store it? The answer is a database, where you keep all your data securely and efficiently.


What is AWS RDS?

Amazon Relational Database Service (RDS) is a web service that helps in setting up and maintaining relational databases in AWS. By maintaining I mean that we can configure RDS so that it scales up or down whenever required; it has resizable capacity, i.e. you can choose different instance sizes, and it can be load balanced and fault tolerant, depending on how we configure it. The service removes many tedious management tasks compared to a manual setup and saves a lot of time. RDS supports six database engines: Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server.

Database Created Manually vs Database Created using AWS RDS:

  • You need to buy a server; with AWS RDS there is no need to buy any hardware.
  • You need to configure backups manually; AWS RDS takes care of backups automatically.
  • To make the database highly available you need to configure many things yourself; with AWS RDS you just choose the highly available option.
  • You cannot use IAM if your database is not built on RDS or AWS; with AWS RDS you can configure access using IAM.
  • It is less secure, as you get shell access; AWS RDS is more secure, as you don't have access to the shell.




The AWS RDS service contains RDS instances; instances contain RDS databases and database users, and finally you connect to them using clients such as pgAdmin 4.

What is PostgreSQL?

PostgreSQL is an open-source relational database system that can handle heavy workloads and scale very easily. It runs on most operating systems. It is open source yet highly extensible; for example, you can define your own data types and functions.

Some of the features of PostgreSQL are:

  • Security
  • Extensibility
  • Text Search
  • Reliable
  • Data Integrity
  • Good Performance

Creating Amazon RDS with PostgreSQL Manually


Prerequisites: Make sure you have two things

a) Amazon AWS account , you can create your account via https://signin.aws.amazon.com if you don’t have it already.

b) The pgAdmin utility to connect to the PostgreSQL database instance, available via https://www.pgadmin.org/download, which will be used later in this tutorial once we are done with the creation part.

So, Lets go and create our AWS RDS Postgres now !!

Step 1) Sign into your AWS account and on the top of the Page you will see “Search for services, features, marketplace products and docs” , here please type AWS RDS and choose RDS.


Step 2) On Left Panel click on Databases and then click on Create database.


Step 3) After you click Create database, choose the Standard create method, the PostgreSQL engine, the latest version (PostgreSQL 12.5-R1), and select Free tier from Templates.


Step 4) Provide the database name, master username, and master password, keeping all the storage values at their defaults.

*You can adjust the storage according to your needs and setup.


Step 5) Let's configure the connectivity now: "Availability & durability" and "Connectivity".

Important note: a security group is used to allow your inbound and outbound traffic. In order to connect to the RDS instance we will modify the inbound and outbound rules of the default security group that we are using to create the RDS instance.

Path to reach the security group: at the top of the page you will see "Search for services, features, marketplace products and docs"; look for EC2, press Enter, and you will see Security Groups under Network & Security. Choose the default security group which is linked to your VPC.


Step 6) Make sure the default settings for “Database authentication” and “Additional configuration” are selected.


Step 7) We are now ready: click on Create database.

It usually takes a few minutes for your RDS instance to launch.

Here we go! Our database instance is now created successfully.


Step 8) Let's open pgAdmin and connect to our "myrds" database instance.

After you open pgAdmin you will see Servers on the left side. Right-click Servers and create a new server.

Step 9) In the General tab set the name to "myrds", and in the Connection tab provide the host, i.e. the endpoint URL of your database instance (you can find this under Connectivity & security for your database in AWS RDS).


Step 10) Click “SAVE”




Step 11) Now, let’s go ahead and create a database within the server. Right click on “Databases” and select “Create” and then “Database…” Give the database a name and save when done.


Step 12) Alright, now our new database “testing” is also created successfully. 


Troubleshooting the PostgreSQL connectivity issues.

Error: the database name doesn't exist while connecting.
Solution: try connecting to the default database, i.e. postgres.

Error: couldn't connect to server, connection timed out.
Solution:
1. Check that you entered the correct host name.
2. Make sure the DB is publicly accessible.
3. Check that you are using the correct port, i.e. 5432.
4. Check the inbound and outbound rules of the default security group.
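If you prefer to test from the command line instead of pgAdmin, a quick connectivity check with the psql client looks roughly like this (the endpoint and username are placeholders for your own values):

psql --host=<your-rds-endpoint> --port=5432 --username=<master-username> --dbname=postgres

You will be prompted for the master password; reaching the password prompt already confirms that the security group and public accessibility are set up correctly.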


How to Install AWS CLI Version 2 and Setup AWS credentials

The AWS CLI, that is the AWS Command Line Interface, enables you to interact with AWS services in various AWS accounts using commands in your command-line shell, from your local environment or remotely. The AWS CLI provides direct access to the public APIs of AWS services.

You can control multiple AWS services from the command line and automate them through scripts. You can run AWS CLI commands from a Linux shell such as bash, zsh, or tcsh, and on a Windows machine you can use Command Prompt or PowerShell to execute AWS CLI commands.

The AWS CLI is available in two versions; let's learn how to install AWS CLI version 2.

Table of Contents

  1. Installing AWS CLI Version 2 on windows machine
  2. Creating an IAM user in AWS account with programmatic access
  3. Configure AWS credentials using aws configure
  4. Verify aws configure from AWS CLI by running a simple commands
  5. Configuring AWS credentials using Named profile.
  6. Verify Named profile from AWS CLI by running a simple commands.
  7. Configuring AWS credentials using environment variable
  8. Conclusion

Installing AWS CLI Version 2 on windows machine

  • Download the installer for the AWS CLI on a Windows machine from here.
  • Select "I accept the terms" and then click the Next button.
  • Do a custom setup if needed, such as the installation location, and then click the Next button.
  • Now you are ready to install AWS CLI 2.
  • Click Finish and then verify the AWS CLI.
  • Verify the AWS CLI version by going to the command prompt and typing:
aws --version

Now AWS CLI version 2 is successfully installed on the Windows machine.

Creating an IAM user in AWS account with programmatic access

There are two ways to connect to an AWS account, the first is providing a username and password on the AWS login page using browser and the other way is to configure Access key ID and secret keys of IAM user on your machine and then use command-line tools such as AWS CLI to connect programmatically.

For applications to connect from AWS CLI to AWS Service, you should already have Access key ID and secret keys with you that you will configure on your local machine to connect to AWS account.

Lets learn how to create a IAM user and Access key ID and secret keys !!

  1. Open your favorite web browser and navigate to the AWS Management Console and log in.
  2. While in the Console, click on the search bar at the top, search for ‘IAM’, and click on the IAM menu item.
  1. To Create a user click on Users→ Add user and provide the name of the user myuser and make sure to tick the Programmatic access checkbox in Access type which enables an access key ID and secret access key and then hit the Permissions button.
  1. Now select the “Attach existing policies directly” option in the set permissions and look for the “Administrator” policy using filter policies in the search box. This policy will allow myuser to have full access to AWS services.
  1. Finally click on Create user.
  2. Now, the user is created successfully and you will see an option to download a .csv file. Download this file which contains IAM users i.e. myuser Access key ID and Secret access key which you will use later in the tutorial to connect to AWS service from your local machine.

Configure AWS credentials using aws configure

Now you have an IAM user with an Access key ID and secret key, but the AWS CLI cannot do anything until you configure the AWS credentials. Once you configure the credentials, the AWS CLI can connect to the AWS account and execute commands.

  • Configure AWS Credentials by running the aws configure command on command prompt
aws configure
  • Enter the details such as the AWS Access key ID, Secret Access Key, and region. You can leave the output format at its default, or set it to text or json.
  • Once AWS is configured successfully, verify by navigating to C:\Users\YOUR_USER\.aws and checking that the two files, credentials and config, are present.
  • Now open both files and verify their contents.
  • Now your AWS credentials are configured successfully using aws configure.

Verify aws configure from AWS CLI by running a simple commands

Now you can test whether the AWS Access key ID, Secret Access Key, and region you configured in the AWS CLI are working by going to the command prompt and running the following command.

aws ec2 describe-instances

Configuring AWS credentials using Named profile.

A named profile is a collection of settings and credentials that you can apply to an AWS CLI command. When you specify a profile for a command, its settings and credentials are used to run that command.

Earlier you created an IAM user and configured AWS credentials using aws configure; now let's learn how to store named profiles.

  1. Open the credentials file that was created earlier by aws configure, or create a file at C:\Users\your_profile\.aws\credentials on your Windows machine.
  2. Now you can provide multiple Access key IDs and Secret access keys in the credentials file in the format below, and save the file.
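A credentials file with two profiles typically looks like this (the key values are placeholders):

[default]
aws_access_key_id = <access-key-id>
aws_secret_access_key = <secret-access-key>

[sandbox]
aws_access_key_id = <sandbox-access-key-id>
aws_secret_access_key = <sandbox-secret-access-key>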

The credentials file is where you set up your profiles. This way, you can create multiple profiles and avoid confusion when connecting to specific AWS accounts.

  1. Similarly, create another file, C:\Users\your_profile\.aws\config, in the same directory.
  2. Next, add the region to the config file, make sure to use the same profile name that you provided in the credentials file, and save the file. This file lets you work with a specific region.
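The config file then references the same profile names (the region shown here is just an example):

[default]
region = us-east-1

[profile sandbox]
region = us-east-1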

~/.aws/credentials (Linux & Mac) or %USERPROFILE%\.aws\credentials (Windows)

~/.aws/config (Linux & Mac) or %USERPROFILE%\.aws\config (Windows)

Verify Named profile from AWS CLI by running a simple commands

Let's open the command prompt and run the below command to verify the sandbox profile which you created earlier in the two files (%USERPROFILE%\.aws\credentials and %USERPROFILE%\.aws\config).

aws ec2 describe-instances --profile sandbox

If you get a response, it shows that you configured the named profile successfully.

Configuring AWS credentials using environment variable

Let's open the command prompt and set the AWS access key and secret key using environment variables. Using set changes the value only for the current command prompt session, or until you set the variable to a different value; an example with placeholder values is shown below.
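For example, in the Windows command prompt the variables might be set like this (the values are placeholders):

set AWS_ACCESS_KEY_ID=<access-key-id>
set AWS_SECRET_ACCESS_KEY=<secret-access-key>
set AWS_DEFAULT_REGION=us-east-1
aws ec2 describe-instances    # uses the credentials from the environment variables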

Conclusion

In this tutorial, you learned how to install the AWS CLI and configure it using an AWS Access key ID, Secret Access Key, and region. You also learned how to generate the Access key ID and Secret Access Key by creating an IAM user.

Python Compilation and Working !!

Table of Content

  1. Understanding the difference between high level and low level language.
  2. Interpreted v/s Compiled Language
  3. Introduction to Python
  4. How Python Works ?
  5. Python Interpreter
  6. Python Standard Library
  7. Python Implementations
  8. Python Installation
    • Python Installation on Linux Machine
    • Python Installation on Windows Machine
    • Python Installation on MacOS
  9. Conclusion

Understanding the difference between High & Low-level Languages

High-Level Language: a high-level language is easier to understand because it is human readable. It is either compiled or interpreted. It consumes more memory and is slower in execution, but it is portable. It requires a compiler or an interpreter for translation.

The compiler is generally the fastest of these translators, since it converts the whole program ahead of execution.

Low-Level Language: low-level languages are machine-friendly, that is, machines can read the code but humans cannot easily. They consume less memory and are fast to execute, but they cannot be ported. They require an assembler for translation.

Interpreted v/s Compiled Language

Compiled Language: a compiled language is first compiled into the instructions of the target machine, that is, machine code. Examples: C, C++, C#, COBOL.

Interpreted Language: an interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them to have been compiled into a machine-language program beforehand; these kinds of languages are known as interpreted languages. Examples: JavaScript, Perl, Python, BASIC.

Introduction to Python

Python is a high-level language used for designing, deploying, and testing in many places. It is consistently ranked among today's most popular programming languages. It is a dynamic, object-oriented language that also supports procedural styles, and it runs on all major hardware platforms. Python is an interpreted language.

How does Python Work?

Bytecode, also termed p-code, is a form of instruction set designed for efficient execution by a software interpreter

  • The first step is to write a Python program, such as test.py.
  • The Python interpreter then compiles the program internally and converts it into byte code, i.e. test.pyc.
  • Python saves byte code like this as a startup-speed optimization. The next time you run your program, Python will load the .pyc file and skip the compilation step, as long as you haven't changed your source code since the byte code was last saved.
  • Once your program has been compiled to byte code (or the byte code has been loaded from an existing .pyc file), it is shipped off for execution to something generally known as the Python Virtual Machine.
  • The byte code in test.pyc is then executed by the virtual machine, which drives the underlying machine code (e.g. 10101010100010101010).
  • Finally the program is executed and the output is displayed.
How Python runs? – Indian Pythonista

Python Interpreter

Python includes both an interpreter and a compiler, and the compiler is invoked implicitly.

  • In Python version 2, the Python interpreter compiles a source file such as file.py and keeps the result in the same directory with the extension .pyc, e.g. file.pyc.
  • In Python version 3, the Python interpreter compiles the source file such as file.py and keeps it in the __pycache__ subdirectory.
  • Python does not save the compiled bytecode when you run a script directly; rather, Python recompiles the script each time you run it.
  • Python saves bytecode files only for modules you import; however, running the python command with the -B flag avoids saving compiled bytecode to disk.
  • You can also execute a Python script directly on a Unix operating system if you add a shebang line to your script; a couple of byte-code related commands are sketched after it.
#! /usr/bin/env python
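To see the byte-code behaviour described above, you can try something like the following on a script named test.py (the file name is just an example):

python3 -m py_compile test.py   # explicitly compiles test.py and writes the .pyc file under __pycache__/
python3 -B test.py              # -B prevents Python from writing .pyc files to disk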

Python Standard Library

The Python standard library contains many well-designed Python modules for convenient reuse, covering tasks like representing data, processing text and data, interacting with operating systems and filesystems, and web programming. Python modules are basically Python programs in a file (such as abc.py) that you import.

There are also extension modules that allow Python code to access functionality supplied by the underlying OS or by other software components such as GUIs, databases, networking, and XML parsing. You can also wrap existing C/C++ libraries into Python extension modules.

Python Implementations

Python is more than a language, you can utilize the implementation of Python in many ways such as :

  • CPython: CPython is an interpreter, a compiler, and a set of built-in and optional extension modules, all coded in C. Python code is converted into bytecode before being interpreted.
  • IronPython: Python implementation for the Microsoft-designed Common Language Runtime (CLR), most commonly known as .NET, which is now open source and ported to Linux and macOS
  • PyPy: PyPy is a fast and flexible implementation of Python, coded in a subset of Python itself, able to target several lower-level languages and virtual machines using advanced techniques such as type inferencing
  • Jython: Python implementation for any Java Virtual Machine (JVM) compliant with Java 7 or better. With Jython, you can use all Java libraries and frameworks; it supports only Python 2 as of now.
  • IPython: enhances standard CPython to make it more powerful and convenient for interactive use. IPython extends the interpreter's capabilities by allowing abbreviated function-call syntax, using a question mark to query an object's documentation, and so on.

Python Installation

Python Installation on Linux Machine

If you are working on recent platforms you will usually find Python already installed. At times Python is not installed but packages are available which you can install using the RPM tool or APT on Linux machines; for Windows, use the MSI (Microsoft Installer). A minimal example for an Ubuntu server is sketched below.

Ubuntu 16 server
Ubuntu 18 server
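Assuming an Ubuntu server (exact package names can vary slightly by release):

sudo apt update
sudo apt install -y python3
python3 --version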

Python Installation on Windows Machine

Python can be installed on Windows in a few steps; the installation steps can be found here.

Python Installation on macOS

Python V2 comes installed on macOS but you should install the latest Python version always. The popular third-party macOS open-source package manager Homebrew offers, among many other packages, excellent versions of Python, both v2 and v3

  • To install Homebrew, open Terminal or your favorite OS X terminal emulator and run
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
  • Add the Homebrew Python directory to the top of your PATH environment variable.
export PATH="/usr/local/opt/python/libexec/bin:$PATH"
  • Now install Python3 using the following commands.
brew install python3
  • Verify the installation of Python using the command below.
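For example (the binary may be exposed as python3 or python, depending on how Homebrew linked it):

python3 --version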

Conclusion

In this tutorial, you got a basic introduction to Python and learned why it is an interpreted, high-level language, how Python works, and how to install it. I hope this tutorial helps you, and if you like it please share it.

How to Launch AWS Elasticsearch using Terraform in Amazon Account

It is important to have a search engine for your website or applications. When it comes to automating great features such as load balancing and scalability for websites, Amazon provides its own managed service known as Amazon Elasticsearch.

In this tutorial you will learn what Amazon Elasticsearch is and how to create an Amazon Elasticsearch domain using Terraform.

Table of Contents

  1. What Is Amazon Elasticsearch Service?
  2. Prerequisites:
  3. Terraform Configuration Files and Structure
  4. Configure Terraform files for AWS Elasticsearch
  5. Verify AWS Elasticsearch in Amazon Account
  6. Conclusion

What Is Amazon Elasticsearch Service?

Amazon Elasticsearch Service is a managed service that deploys and scales Elasticsearch clusters in the cloud. Elasticsearch is an open-source analytics and search engine used for real-time application monitoring and log analytics.

The Amazon Elasticsearch service provisions all resources for Elasticsearch clusters, launches them, and automatically replaces failed Elasticsearch nodes in the cluster.

Features of Amazon Elasticsearch Service

  • It can scale up to 3 PB of attached storage
  • It works with various instance types.
  • It easily integrates with other services, such as IAM for security, VPC, AWS S3 for loading data, AWS CloudWatch for monitoring, and AWS SNS for alert notifications.

Prerequisites:

Terraform Configuration Files and Structure

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf: This file contains the code that creates or imports AWS resources.
  • vars.tf: This file defines variable types and optionally sets their values.
  • output.tf: This file declares outputs for the AWS resources; the output values are shown after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. You provide the details of the provider, such as AWS, Oracle, or Google, so that Terraform can communicate with that provider and work with its resources.

Configure Terraform files for AWS Elasticsearch

In this demonstration we will create a simple Amazon Elasticsearch domain using Terraform from a Windows machine.

  • Create a folder on your desktop on windows Machine and name it as Terraform-Elasticsearch
  • Now create a file main.tf inside the folder you’re in and paste the below content
rresource "aws_elasticsearch_domain" "es" {
  domain_name           = var.domain
  elasticsearch_version = "7.10"

  cluster_config {
    instance_type = var.instance_type
  }
  snapshot_options {
    automated_snapshot_start_hour = 23
  }
  vpc_options {
    subnet_ids = ["subnet-0d8c53ffee6d4c59e"]
  }
  ebs_options {
    ebs_enabled = var.ebs_volume_size > 0 ? true : false
    volume_size = var.ebs_volume_size
    volume_type = var.volume_type
  }
  tags = {
    Domain = var.tag_domain
  }
}


resource "aws_elasticsearch_domain_policy" "main" {
  domain_name = aws_elasticsearch_domain.es.domain_name
  access_policies = <<POLICIES
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "es:*",
            "Principal": "*",
            "Effect": "Allow",
            "Resource": "${aws_elasticsearch_domain.es.arn}/*"
        }
    ]
}
POLICIES
}
  • Create one more file vars.tf inside the same folder and paste the below content
variable "domain" {
    type = string
}
variable "instance_type" {
    type = string
}
variable "tag_domain" {
    type = string
}
variable "volume_type" {
    type = string
}
variable "ebs_volume_size" {}
  • Create one more file output.tf inside the same folder and paste the below content
output "arn" {
    value = aws_elasticsearch_domain.es.arn
} 
output "domain_id" {
    value = aws_elasticsearch_domain.es.domain_id
} 
output "domain_name" {
    value = aws_elasticsearch_domain.es.domain_name
} 
output "endpoint" {
    value = aws_elasticsearch_domain.es.endpoint
} 
output "kibana_endpoint" {
    value = aws_elasticsearch_domain.es.kibana_endpoint
}
  • Create one more file provider.tf inside the same folder and paste the below content:
provider "aws" {      # Defining the Provider Amazon  as we need to run this on AWS   
  region = "us-east-1"
}
  • Create one more file terraform.tfvars inside the same folder and paste the below content
domain = "newdomain" 
instance_type = "r4.large.elasticsearch"
tag_domain = "NewDomain"
volume_type = "gp2"
ebs_volume_size = 10
  • Now your files and code are ready for execution.
  • Initialize Terraform using the command below.
terraform init
  • Once Terraform is initialized successfully, it's time to see the plan, which is a kind of blueprint before deployment. We generally use plan to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, it's time to actually deploy the code using apply.
terraform apply
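Once apply finishes, you can read back the values declared in output.tf; for example, the domain and Kibana endpoints:

terraform output endpoint
terraform output kibana_endpoint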

Verify AWS Elasticsearch in Amazon Account

The Terraform commands (init, plan, and apply) all ran successfully. Now let's verify in the AWS Management Console that everything was created properly.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘Elasticsearch’, and click on the Elasticsearch menu item.
  • Now you will see that the newdomain was created successfully.
  • Click on newdomain to see all the details.

Conclusion

In this tutorial you learned what Amazon Elasticsearch is and how to create an Amazon Elasticsearch domain using Terraform.

Now that you have a strong fundamental understanding of AWS Elasticsearch, which website are you going to run on Elasticsearch provisioned with Terraform?

Getting Started with Amazon Elasticsearch Service and Kibana

It is important to have a search engine for your website or applications. When it comes to automating great features such as load balancing and scalability for websites, Amazon provides its own managed service known as Amazon Elasticsearch.

In this tutorial you will learn what Amazon Elasticsearch is, how to create an Amazon Elasticsearch domain using the AWS Management Console, and then how to search the data using Kibana.

Table of contents

  1. What Is Amazon Elasticsearch Service?
  2. Creating the Amazon Elasticsearch Service domain
  3. Upload data to Amazon Elasticsearch for indexing
  4. Search documents using Kibana in Amazon Elasticsearch
  5. Conclusion

What Is Amazon Elasticsearch Service?

Amazon Elasticsearch Service is a managed service that deploys and scales Elasticsearch clusters in the cloud. Elasticsearch is an open-source analytics and search engine used for real-time application monitoring and log analytics.

The Amazon Elasticsearch service provisions all resources for Elasticsearch clusters, launches them, and automatically replaces failed Elasticsearch nodes in the cluster.

Features of Amazon Elasticsearch Service

  • It can scale up to 3 PB of attached storage
  • It works with various instance types.
  • It easily integrates with other services, such as IAM for security, VPC, AWS S3 for loading data, AWS CloudWatch for monitoring, and AWS SNS for alert notifications.

Creating the Amazon Elasticsearch Service domain

In this tutorial you will see how to create Elasticsearch cluster using Amazon Management console. Lets start.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘Elasticsearch’, and click on the Elasticsearch menu item.
  • Now, one thing to note here is that an Amazon Elasticsearch domain is synonymous with an Elasticsearch cluster; domains are clusters with the settings, instance types, instance counts, and storage resources that you specify.
  • Now, click on Create a new domain.
  • Select the deployment type Development and testing.
  • Under Configure domain, provide the Elasticsearch domain name "firstdomain". A domain is the collection of resources needed to run Elasticsearch, and the domain name will be part of your domain endpoint.
  • Under Data nodes, choose t3.small.elasticsearch, leave the rest of the settings at their defaults, and click Next.
  • Under Network configuration, choose Public access. For fine-grained access control, choose Create master user and provide the user name user and the password Admin@123. Fine-grained access control keeps your data safe.
  • For the domain access policy, choose Allow open access to the domain. Access policies control whether a request is accepted or rejected when it reaches the Amazon Elasticsearch Service domain.
  • Now click Next until the end and create the domain. It takes a few minutes for the domain to launch.
  • Click on the firstdomain Elasticsearch domain

Upload data to Amazon Elasticsearch for indexing

  • You can load streaming data into your Amazon Elasticsearch Service (Amazon ES) domain from many different sources. Some sources, like Amazon Kinesis Data Firehose and Amazon CloudWatch Logs, have built-in support for Amazon ES. Others, like Amazon S3, Amazon Kinesis Data Streams, and Amazon DynamoDB, use AWS Lambda functions as event handlers.
  • In this tutorial we will simply use sample data.
  • Click on the Kibana link shown in the snapshot above, log in with the username user and the password Admin@123, and then click Add data.
  • As this is just a demonstration, let's use the sample data and add the e-commerce orders. (If you prefer the API, a sketch of indexing a single document follows this list.)
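If you would rather index a document through the Elasticsearch API instead of the Kibana sample data, a single document can be posted roughly like this (the domain endpoint is a placeholder, and the credentials are the master user created above, assuming fine-grained access control with HTTP basic authentication):

curl -u 'user:Admin@123' -H 'Content-Type: application/json' \
  -X POST 'https://<your-domain-endpoint>/orders/_doc' \
  -d '{"item": "notebook", "quantity": 2}'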

Search documents using Kibana in Amazon Elasticsearch

Kibana is a popular open-source visualization tool that works with the AWS Elasticsearch service. It provides an interface to monitor and search the indexes. Let's use Kibana to search the sample data you just uploaded into AWS ES.

  • Click on Discover option from the main menu to search the data.
  • Now you will notice that Kibana searches the data and populates the view for you. You can adjust the time range and many other fields accordingly.

Kibana returned the data when we searched the dashboard against the sample data you uploaded.

Conclusion

In this tutorial you learned what Amazon Elasticsearch is and how to create an Amazon Elasticsearch domain using the AWS Management Console. You also learned how to upload sample data to AWS ES, although this can be done in various other ways, such as from S3 or DynamoDB.

Now that you have a strong fundamental understanding of AWS Elasticsearch, which site are you going to implement it on?

How to Setup AWS WAF and Web ACL using Terraform on Amazon Cloud

It is always good practice to monitor your applications and make sure your website is fully protected. AWS provides a service known as AWS WAF that protects your web applications from common web exploits.

Let's learn everything about AWS WAF (Web Application Firewall) and use Terraform to create it.

Table of Contents

  1. What is AWS WAF ?
  2. Prerequisites
  3. Terraform Configuration Files and Structure
  4. Configure Terraform files for AWS WAF
  5. Deploy AWS WAF using Terraform commands
  6. Conclusion

What is AWS WAF ?

AWS WAF stands for Amazon Web Services Web Application Firewall. Using AWS WAF you can monitor all the HTTP or HTTPS requests from users that are forwarded to Amazon CloudFront, an Amazon load balancer, Amazon API Gateway REST APIs, and so on. It also controls who can access the content or data, based on specific conditions such as the source IP address.

AWS WAF protects your web applications from common web exploits. For a more detailed view of AWS WAF, please see the other blog post, What is AWS Web Application Firewall?

Prerequisites:

Terraform Configuration Files and Structure

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf: This file contains the code that creates or imports AWS resources.
  • vars.tf: This file defines variable types and optionally sets their values.
  • output.tf: This file declares outputs for the AWS resources; the output values are shown after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. You provide the details of the provider, such as AWS, Oracle, or Google, so that Terraform can communicate with that provider and work with its resources.

Configure Terraform files for AWS WAF

In this demonstration we will create a simple AWS WAF setup using Terraform on a Windows machine.

  • Create a folder on your desktop or any location on the Windows machine (I prefer the desktop). Now create a file main.tf inside the folder you're in and paste the content below.
# Creating the IP Set

resource "aws_waf_ipset" "ipset" {
   name = "MyFirstipset"
   ip_set_descriptors {
     type = "IPV4"
     value = "10.111.0.0/20"
   }
}

# Creating the Rule which will be applied on Web ACL component

resource "aws_waf_rule" "waf_rule" { 
  depends_on = [aws_waf_ipset.ipset]
  name        = var.waf_rule_name
  metric_name = var.waf_rule_metrics
  predicates {
    data_id = aws_waf_ipset.ipset.id
    negated = false
    type    = "IPMatch"
  }
}

# Creating the Rule Group which will be applied on Web ACL component

resource "aws_waf_rule_group" "rule_group" {  
  name        = var.waf_rule_group_name
  metric_name = var.waf_rule_metrics

  activated_rule {
    action {
      type = "COUNT"
    }
    priority = 50
    rule_id  = aws_waf_rule.waf_rule.id
  }
}

# Creating the Web ACL component in AWS WAF

resource "aws_waf_web_acl" "waf_acl" {
  depends_on = [ 
     aws_waf_rule.waf_rule,
     aws_waf_ipset.ipset,
      ]
  name        = var.web_acl_name
  metric_name = var.web_acl_metics

  default_action {
    type = "ALLOW"
  }
  rules {
    action {
      type = "BLOCK"
    }
    priority = 1
    rule_id  = aws_waf_rule.waf_rule.id
    type     = "REGULAR"
 }
}
  • Create one more file vars.tf inside the same folder and paste the below content
variable "web_acl_name" {
  type = string
}
variable "web_acl_metics" {
  type = string
}
variable "waf_rule_name" {
  type = string
}
variable "waf_rule_metrics" {
  type = string
}
variable "waf_rule_group_name" {
  type = string
}
variable "waf_rule_group_metrics" {
  type = string
}
  • Create one more file output.tf inside the same folder and paste the below content
output "aws_waf_rule_arn" {
   value = aws_waf_rule.waf_rule.arn
}

output "aws_waf_rule_id" {
   value = aws_waf_rule.waf_rule.id
}

output "aws_waf_web_acl_arn" {
   value = aws_waf_web_acl.waf_acl.arn
}

output "aws_waf_web_acl_id" {
   value = aws_waf_web_acl.waf_acl.id
}

output "aws_waf_rule_group_arn" {
   value = aws_waf_rule_group.rule_group.arn
}

output "aws_waf_rule_group_id" {
   value = aws_waf_rule_group.rule_group.id
}
  • Create one more file provider.tf inside the same folder and paste the below content
provider "aws" {      
  region = "us-east-1"
}
  • Again, Create one more file terraform.tfvars inside the same folder and paste the below content
web_acl_name = "myFirstwebacl"
web_acl_metics = "myFirstwebaclmetics"
waf_rule_name = "myFirstwafrulename"
waf_rule_metrics = "myFirstwafrulemetrics"
waf_rule_group_name = "myFirstwaf_rule_group_name"
waf_rule_group_metrics = "myFirstwafrulgroupmetrics"
  • Now your files and code are all set and your directory should look something like below.

Deploy AWS WAF using Terraform commands

  • Now, let's initialize Terraform by running the following init command.
terraform init
  • Once Terraform is initialized successfully, it's time to see the plan, which is a kind of blueprint before deployment. We generally use plan to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, it's time to actually deploy the code using the apply command.
terraform apply
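After apply completes, the IDs and ARNs declared in output.tf are available; for example:

terraform output aws_waf_web_acl_id
terraform output aws_waf_rule_arn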

By now, you should have created the Web ACL and other components of AWS WAF with Terraform. Let’s verify by manually checking in the AWS Management Console.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘WAF’, and click on the WAF menu item.
  • Now you should be on the AWS WAF page. Let's verify each component, starting with the Web ACL.
  • Now verify the IP set.
  • Now verify the rules in the Web ACL.
  • Next, let's verify the Web ACL rule groups.

Conclusion

In this tutorial you learned about AWS WAF, the Web Application Firewall, and how to set it up in the Amazon cloud using Terraform.

It is very important to protect your website from attacks. So which website do you plan to protect?

Hope this tutorial helped you and if so please comment and share it with your friends.

How to Install and Setup Terraform on Windows Machine step by step

There are lots of automation tools and scripts available for this, and one of the finest tools to automate your infrastructure is Terraform, also known as infrastructure as code.

Learn how to Install and Setup Terraform on Windows Machine step by step.

Table of Content

  1. What is Terraform ?
  2. Prerequisites
  3. How to Install Terraform on Windows 10 machine
  4. Creating an IAM user in AWS account with programmatic access
  5. Configuring the IAM user Credentials on Windows Machine
  6. Run Terraform commands from Windows machine
  7. Launch an EC2 instance using Terraform
  8. Conclusion

What is Terraform ?

Terraform is a tool for building, versioning, and changing cloud infrastructure. Terraform is written in Go, and its configuration files are written in HCL (HashiCorp Configuration Language), which many find easier to read than YAML or JSON.

Terraform has been in use for quite a while now. It is an amazing tool for building and changing infrastructure in a very effective and simple way. It works with a variety of cloud providers such as Amazon AWS, Oracle, Microsoft Azure, Google Cloud, and many more. I hope you will love to learn it and use it.

Prerequisites

How to Install Terraform on Windows machine

  • Open your favorite browser and download the appropriate version of Terraform from HashiCorp’s download page. This tutorial uses Terraform version 0.13.0.
  • Make a folder on your C:\ drive where you can put the Terraform executable, something like C:\tools where you keep binaries.
  • Extract the zip file to the folder C:\tools
  • Now open your Start Menu and type in “environment”; the first thing that comes up should be the Edit the System Environment Variables option. Click on that and you should see this window.
  • Now, under System Variables, look for Path and edit it.
  • Click New and add the folder path where terraform.exe is located to the bottom of the list.
  • Click OK on each of the menus.
  • Now, open Command Prompt or PowerShell and check if terraform has been properly added to PATH by running the command terraform from any location.
On Windows Machine command Prompt
On Windows Machine PowerShell
  • Verify the installation was successful by entering terraform --version. If it returns a version, you’re good to go.

Creating an IAM user in AWS account with programmatic access

For Terraform to connect to AWS services, you need an IAM user with an access key ID and secret access key in the AWS account; you will configure these credentials on your local machine so it can connect to the AWS account.

There are two ways to connect to an AWS account, the first is providing a username and password on the AWS login page on the browser and the other way is to configure Access key ID and secret keys on your machine and then use command-line tools to connect programmatically.

  1. Open your favorite web browser and navigate to the AWS Management Console and log in.
  2. While in the Console, click on the search bar at the top, search for ‘IAM’, and click on the IAM menu item.
  3. To create a user, click on Users → Add user, provide the user name myuser, and make sure to tick the Programmatic access checkbox in Access type, which enables an access key ID and secret access key. Then hit the Permissions button.
  4. Now select the “Attach existing policies directly” option in the set permissions step and look for the “Administrator” policy using the filter policies search box. This policy will allow myuser to have full access to AWS services.
  5. Finally, click on Create user.
  6. The user is now created successfully and you will see an option to download a .csv file. Download this file; it contains the IAM user’s (myuser) access key ID and secret access key, which you will use later in the tutorial to connect to AWS services from your local machine.

Configuring the IAM user Credentials on Windows Machine

Now you have the IAM user myuser created. The next step is to set up the downloaded myuser credentials on the local machine, which you will use to connect to AWS services via API calls.

  1. Create a new file, C:\Users\your_profile\.aws\credentials on your local machine.
  2. Next, Enter the Access key ID and Secret access key from the downloaded csv file into the credentials file in the same format and save the file.
[default]     # Profile Name
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = vIaGXXXXXXXXXXXXXXXXXXXX

The credentials file is where you set up your profiles. This way, you can create multiple profiles and avoid confusion while connecting to specific AWS accounts.
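For illustration, a credentials file can hold more than one profile; the extra profile name below (dev) and its keys are purely hypothetical placeholders:

[default]     # Profile Name
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = vIaGXXXXXXXXXXXXXXXXXXXX

[dev]         # A second, hypothetical profile
aws_access_key_id = AKIAYYYYYYYYYYYYYYYY
aws_secret_access_key = vIaGYYYYYYYYYYYYYYYYYYYY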

  1. Similarly, create another file C:\Users\your_profile\.aws\config in the same directory
  2. Next, add the “region” into the config file and make sure to add the name of the profile which you provided in the credentials file, and save the file. This file allows you to work with a specific region.
[default]   # Profile Name
region = us-east-2

Run Terraform commands from Windows machine

By now, you have installed Terraform on your Windows machine and configured the IAM user (myuser) credentials so that Terraform can use them to connect to AWS services in your Amazon account.

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf : This file contains the code that creates or imports AWS resources.
  • vars.tf : This file declares variables and their types, and optionally sets default values.
  • output.tf: This file defines the outputs of the AWS resources. The outputs are displayed after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. You provide the details of the provider, such as AWS, Oracle, or Google, so that Terraform can communicate with that provider and manage its resources.

Launch an EC2 Instance Using Terraform

In this demonstration we will create a simple Amazon Web Services (AWS) EC2 instance by running Terraform commands on a Windows machine.

  • Create a folder on your desktop or any other location on the Windows machine (I prefer the Desktop).
  • Now create a file main.tf inside the folder you’re in and paste the below content
resource "aws_instance" "my-machine" {  # Resource block to define what to create
  ami = var.ami         # ami is required as we need ami in order to create an instance
  instance_type = var.instance_type             # Similarly we need instance_type
}
  • Create one more file vars.tf inside the same folder and paste the below content
variable "ami" {         # Declare the variable ami which you used in main.tf
  type = string      
}

variable "instance_type" {        # Declare the variable instance_type used in main.tf
  type = string 
}

Next, selecting the instance type is important. Click here to see a list of different instance types. To find the image ID (AMI), navigate to the Launch Instance Wizard and search for ubuntu in the search box to get all the Ubuntu image IDs. This tutorial uses the Ubuntu Server 18.04 LTS image.

  • Create one more file output.tf inside the same folder and paste the below content
output "ec2_arn" {
  value = aws_instance.my-machine.arn     # Value depends on resource name and type ( same as that of main.tf)
}  
  • Create one more file provider.tf inside the same folder and paste the below content:
provider "aws" {      # Defining the Provider Amazon  as we need to run this on AWS   
  region = "us-east-1"
}
  • Create one more file terraform.tfvars inside the same folder and paste the below content
ami = "ami-013f17f36f8b1fefb" 
instance_type = "t2.micro"
  • Now your files and code are ready for execution.
  • Initialize Terraform using the below command.
terraform init
  • Once Terraform is initialized successfully, it's time to see the plan, which is a kind of blueprint before deployment. We generally use plan to confirm that the correct resources are going to be provisioned or destroyed.
terraform plan
  • After verification, it's time to actually deploy the code using apply.
terraform apply

Great job, the Terraform commands executed successfully. You should now have an EC2 instance launched in AWS.

It generally takes a minute or so to launch an instance, and we can see that the instance is now successfully launched in the us-east-1 region as expected. If you prefer the command line, you can also confirm this with the AWS CLI check sketched below.
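The check below lists running instances in us-east-1 with the AWS CLI (assuming the CLI is installed and configured with the myuser credentials from earlier); the filter and query shown are just one way to do it:

# List running instances in us-east-1 with their ID, type, and state
aws ec2 describe-instances --region us-east-1 \
  --filters "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].[InstanceId,InstanceType,State.Name]" \
  --output table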

Conclusion

In this tutorial you learned what Terraform is, how to install and set up Terraform on a Windows machine, and how to launch an EC2 instance in an AWS account using Terraform.

Keep Terraforming !!

Hope this tutorial helps you understand and set up Terraform on a Windows machine. Please share it with your friends.

What is AWS WAF (Web Application Firewall) and how to Setup WAF in AWS account.

It is always a good practice to monitor your applications or websites and make sure they are fully protected. The AWS cloud provides a service known as AWS WAF that protects your web applications from common web exploits.

Let's learn everything about AWS WAF (Web Application Firewall).

Table of Content

  1. What is AWS WAF( Web Application Firewall) ?
  2. Components of AWS WAF ( Web Application Firewall)
  3. Prerequisites
  4. Getting started with AWS WAF ( Web Application Firewall)
  5. Conclusion

What is AWS WAF ?

AWS WAF stands for Amazon Web Services Web Application Firewall. Using AWS WAF you can monitor all the HTTP or HTTPS requests from users that are forwarded to Amazon CloudFront, an Amazon Load Balancer, an Amazon API Gateway REST API, and so on. It also controls who can access the protected content based on specific conditions, such as the source IP address.

AWS WAF protects your web applications from common web exploits.

Benefits of AWS WAF

  • It is helpful when you want Amazon CloudFront, an Amazon Load Balancer, or Amazon API Gateway REST APIs to serve content to particular users or block particular users.
  • You can configure AWS WAF to count the requests that match specific properties without allowing or blocking those requests.
  • It protects you from web attacks using conditions you specify.
  • It provides real-time metrics and details of web requests.

Components of AWS WAF

The AWS WAF service contains some important components; let's discuss each of them now.

Web ACL (Web Access Control List): It is used to protect a set of AWS resources. After you create a web ACL you add rules inside it. Rules define specific conditions that are applied to web requests coming from users and how those requests should be handled. You also set a default action in the web ACL, whether to allow or block requests that don't match any rules.

Rules: Rules contain statements that define the matching criteria. If the criteria are matched, the rule's action (allow, block, or count) is applied to the request. Rules are based on criteria such as IP addresses or address ranges, country or geographical location, strings that appear in the request, and so on.

Rule groups: You can use rules individually or in reusable rule groups. There are two types of rule groups: AWS Managed rule groups and your own rule groups.

IP sets and regex pattern sets: AWS WAF stores some more complex information in sets that you use by referencing them in your rules.

  • An IP set is a group of IP addresses and IP address ranges that you want to use together in a rule statement. IP sets are AWS resources.
  • A regex pattern set provides a collection of regular expressions that you want to use together in a rule statement. Regex pattern sets are AWS resources.

Prerequisites

  • You must have an AWS account in order to set up AWS WAF. If you don’t have one, please create an AWS account from here: AWS account.
  • You must have an IAM user with administrator rights and credentials set up using the AWS CLI or an AWS profile.

Getting started with AWS WAF

In order to set up and work with AWS WAF, the most important step is to create a Web ACL. In AWS WAF there is no resource literally called a "WAF"; it is just the name of the service, which works with CloudFront, load balancers, and many more services. Let's get started.

Creating a Web ACL

You use a Web Access Control List (web ACL) to protect a set of AWS resources. You create a web ACL and define its behavior by adding rules that specify what to block or allow and to what extent. You can use individual rules or groups of rules. To create a Web ACL:

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘WAF’, and click on the WAF menu item.
  • Now click on Create web ACL.
  • Provide the Name, the CloudWatch metric name, and choose CloudFront distributions as the Resource type.
  • Next, click on Add AWS resources, select the CloudFront distribution you already have, and then hit Next.
  • Now, in Add rules and rule groups, select Add my own rules and rule groups, which means you will add the values yourself.
    • Provide the name as myrule123
    • In Type choose Regular Rule
    • Inspect as Header
    • Header field as User-Agent
  • Select If a request matches the statement for this tutorial; however, you can also use other available options such as a string match condition, a geo match condition, or an IP match condition.
  • While building the rules, there are 3 types of rule actions available:
    • Count: AWS WAF counts the request but doesn’t determine whether to allow it or block it.
    • Allow: AWS WAF allows the request to be forwarded to the protected AWS resource.
    • Block: AWS WAF blocks the request and sends a block response back to the client.
  • You can instruct AWS WAF to insert custom headers into the original HTTP request for rule actions or web ACL default actions that are set to allow or count. You can only add to the request; you can’t modify or replace any part of the original request.
  • Hit the Next button till the end and then click Create web ACL.
  • The rules above were ones you added manually, but at times you will need AWS Managed rules; to do that, choose the AWS Managed rule groups option and select the rule groups you need.
  • Your Web ACL is now ready and should look like the screenshot below.

The most important component of the AWS WAF service is the Web ACL, which you created; inside it you created rules and applied them. Once a Web ACL is created with rules, you associate it with CloudFront, a load balancer, and so on, to protect them from being exploited by attacks.

Conclusion

In this tutorial you learned about AWS WAF (Web Application Firewall) and how to set it up in the Amazon cloud. It is very important to protect your website from attacks. So, which website do you plan to protect?

What is CloudFront and how to Setup CloudFront with AWS S3 and ALB Distributions

Internet users are always impressed by high-speed, fast-loading websites. Why not have a website that loads and delivers its content within seconds?

In this tutorial you will learn what CloudFront is and how to set up CloudFront distributions in the Amazon cloud. CloudFront helps users retrieve their content quickly by utilizing the concept of caching.

Table of Content

  1. What is Cloud Front?
  2. Prerequisites
  3. Creating an IAM user in AWS account with programmatic access
  4. Configuring the IAM user Credentials on local Machine
  5. Setting up Amazon CloudFront
  6. How to Use Custom URLs in CloudFront by Adding Alternate Domain Names (CNAMEs)
  7. Using Amazon EC2 or Other Custom Origins
  8. Conclusion

What is Cloud Front

CloudFront is an Amazon web service that speeds up the distribution of static or dynamic content such as .html, .css, .js files, images, and more to users. CloudFront serves content from edge locations when users make requests.

By utilizing CloudFront, content is delivered to users very quickly from edge locations. In case the content is not available at an edge location, CloudFront requests it from the configured origin. Origins can be an AWS S3 bucket, an HTTP server, a load balancer, and so on.

Use cases of Cloud Front

  • It accelerates the delivery of your static website content such as images, style sheets, JavaScript, and so on.
  • Live streaming of video.
  • Using Lambda@Edge with CloudFront adds many more ways to customize content delivery.

How CloudFront delivers content to your users

  • A user makes a request to a website or application, let's say an HTML page such as http://www.example.com/mypage.html.
  • The DNS server routes the request to the nearest CloudFront edge location.
  • CloudFront checks if the request can be fulfilled from the edge location.
  • If the edge location has the files, CloudFront sends them back to the user; otherwise,
  • CloudFront queries the origin server.
  • The origin server sends the files back to the edge location, and then CloudFront sends them back to the user.

How CloudFront works with regional edge caches

This kind of cache brings content closer to the users to help performance. Regional edge caches help with all types of content, particularly content that becomes less popular over time, such as user-generated content (video, photos, or artwork) and e-commerce assets such as product photos and videos.

This cache sits between the origin server and the edge locations. An edge location stores and caches content, but when content gets too old it removes it from its cache. That is where the regional edge cache comes in; it has a larger cache and can store lots of content.

Prerequisites

  • You must have an AWS account in order to set up AWS CloudFront. If you don’t have one, please create an AWS account from here: AWS account.
  • You must have an IAM user with administrator rights and credentials set up using the AWS CLI or an AWS profile. The steps below show how to create the IAM user and configure the credentials.
  • An AWS S3 bucket.

Creating an IAM user in AWS account with programmatic access

In order to connect to an AWS service, you need an IAM user with an access key ID and secret access key in the AWS account; you will configure these credentials on your local machine so it can connect to the AWS account.

There are two ways to connect to an AWS account, the first is providing a username and password on the AWS login page on the browser and the other way is to configure Access key ID and secret keys on your machine and then use command-line tools to connect programmatically.

  1. Open your favorite web browser and navigate to the AWS Management Console and log in.
  2. While in the Console, click on the search bar at the top, search for ‘IAM’, and click on the IAM menu item.
  3. To create a user, click on Users → Add user, provide the user name myuser, and make sure to tick the Programmatic access checkbox in Access type, which enables an access key ID and secret access key. Then hit the Permissions button.
  4. Now select the “Attach existing policies directly” option in the set permissions step and look for the “Administrator” policy using the filter policies search box. This policy will allow myuser to have full access to AWS services.
  5. Finally, click on Create user.
  6. The user is now created successfully and you will see an option to download a .csv file. Download this file; it contains the IAM user’s (myuser) access key ID and secret access key, which you will use later in the tutorial to connect to AWS services from your local machine.

Configuring the IAM user Credentials on local Machine

Now you have the IAM user myuser created. The next step is to set up the downloaded myuser credentials on the local machine, which you will use to connect to AWS services via API calls.

  1. Create a new file, C:\Users\your_profile\.aws\credentials on your local machine.
  2. Next, Enter the Access key ID and Secret access key from the downloaded csv file into the credentials file in the same format and save the file.
[default]     # Profile Name
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = vIaGXXXXXXXXXXXXXXXXXXXX

The credentials file is where you set up your profiles. This way, you can create multiple profiles and avoid confusion while connecting to specific AWS accounts.

  1. Similarly, create another file C:\Users\your_profile\.aws\config in the same directory
  2. Next, add the “region” into the config file and make sure to add the name of the profile which you provided in the credentials file, and save the file. This file allows you to work with a specific region.
[default]   # Profile Name
region = us-east-2

Setting up Amazon CloudFront

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘CloudFront’, and click on the CloudFront menu item.
  • Click on Create distribution and then Get Started.
  • In the Origin settings, provide the S3 bucket name and keep the other values as default.
  • For the settings under Default Cache Behavior Settings and Distribution Settings, accept the default values and then click on Create distribution.
  • The AWS S3 bucket was already created before we started this tutorial. Let's upload an index.html (containing the text "hello") to the bucket and set its permission to public access as shown below.
  • Now check the Amazon S3 URL to verify that your content is publicly accessible.
  • Check the CloudFront URL by hitting <Domain Name>/index.html; it should show the same result as your index.html file contains.
domainname/index.html

How to Use Custom URLs in CloudFront by Adding Alternate Domain Names (CNAMEs)

As seen above, the CloudFront URL is generated with a *.cloudfront.net domain name by default. If you wish to use your own domain name (a CNAME such as abc.com) in the URL, you can assign it yourself.

  • In our case, the default URL is:
http://dsx78lsseoju7.cloudfront.net/index.html
  • If you wish to use an alternate domain such as the one below, follow these steps:
http://abc.com/index.html
  • Go back to the CloudFront page, look for the distribution whose domain you need to change, and click on Edit.
  • Provide the domain name; you must already have an SSL certificate in place for it.
  • Finally, create an alias resource record set in Route 53 by visiting the Route 53 page.
  • Go to the Route 53 page by searching at the top of the AWS console.
  • Click on the hosted zone and then click on Create record.
  • Now provide the name of the record (which can be anything), the record type, and set Route traffic to as the CloudFront distribution.

After successfully creating the Route 53 record, you can verify that the index page (http://mydomain.abc.com/index.html) works fine.

Using Amazon EC2 or Other Custom Origins

A custom origin can be an Amazon Elastic Compute Cloud (AWS EC2) instance, for example an HTTP server. You need to provide the DNS name of the server as the custom origin.

Below are some key points to keep in mind while setting the custom origin as AWS EC2.

  • Host and serve the same content on all servers in the same way.
  • Restrict access requests to the HTTP and HTTPS ports that your custom origin (the AWS EC2 instance) listens on.
  • Synchronize the clocks of all servers in your implementation.
  • Use an Elastic Load Balancing load balancer to handle traffic across multiple Amazon EC2 instances.
  • When you create your CloudFront distribution, specify the URL of the load balancer as the domain name of your origin server.

Conclusion

In this tutorial you learned what CloudFront is and how to set up CloudFront distributions in the Amazon cloud. CloudFront helps users retrieve their content quickly by utilizing the concept of caching.

By now, you know what CloudFront is and how to set it up. What are you going to manage with CloudFront next?

The Ultimate Guide: Getting Started with Groovy and Groovy Scripts

Groovy is a powerful, dynamic language with static-typing and static-compilation capabilities for the Java platform, aimed at improving developer productivity. Groovy syntax is simple and easy. It saves a lot of code and effort compared to doing the same thing in Java, thus increasing developer productivity.

In this tutorial you will learn what Groovy is and how to install Groovy on Windows and Linux machines. Later you will see two examples that help you kickstart writing Groovy scripts.

Table of Content

  1. What is Groovy?
  2. Prerequisites
  3. How to Install Groovy on Windows Machine
  4. How to Install Groovy on Ubuntu Machine
  5. Groovy Syntax
  6. Groovy Examples
  7. Conclusion

What is Groovy?

Groovy is a powerful language that supports both static and dynamic typing and is very close to Java, with a few differences. The Groovy language is widely used in Jenkins pipelines. It integrates very well with Java libraries to deliver powerful enhancements and features, including domain-specific language authoring and scripting capabilities.

Basic Features of Groovy

  • Groovy supports all Java libraries and it has its own libraries as well.
  • It has a syntax similar to Java, but in a simpler form.
  • It has both static and dynamic natures.
  • It has great extensibility for the language and tooling.
  • Last but not least, it is a free, open-source language used by a lot of developers.

Prerequisites

  • Ubuntu 18 Machine or Windows machine
  • Make sure to have Java 8 plus installed on machines. To check Java version run the following command.
java --version
On Ubuntu Machine
On Windows Machine

How to Install Groovy on Ubuntu Machine

Installing Groovy on an Ubuntu machine is pretty straightforward. Let's install Groovy on an Ubuntu 18 machine.

  • First, update the official Ubuntu repositories by running the apt command.
sudo apt update
  • Now, download and run the SDKMAN installation script using curl (SDKMAN is the tool we will use to install Groovy).
curl -s get.sdkman.io | bash
  • Now install Groovy using the sdk command, as sketched below.
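The exact command is not shown above, so here is a minimal sketch of the usual SDKMAN workflow (assuming the installer from the previous step finished successfully):

source "$HOME/.sdkman/bin/sdkman-init.sh"   # load SDKMAN into the current shell
sdk install groovy                          # install the latest stable Groovy
groovy --version                            # verify the installation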

How to install Groovy on Windows machine

  • On the Groovy download page you will see a Windows installer package; once you click on it, the file will download automatically.
  • Now click on the downloaded Windows installer package and the installation will begin.
  • Accept the license agreement.
  • Make sure you select Typical for the Setup Type and click on Install.
  • Groovy is now successfully installed on the Windows machine. Open the Groovy Console from the Start menu and run a simple command to test it.

Groovy Syntax

Shebang line

  • The shebang line allows you to run Groovy scripts directly from the command line, provided you have Groovy installed and the groovy command is available on the PATH.
#!/usr/bin/env groovy
println "Hello from the shebang line"

Strings

  • Strings are basically chains of characters. Groovy strings can be written with single quotes ('...'), double quotes ("..."), or even triple quotes ('''...''').
'This is an example of single line'

"This is an example of double line"

def threequotes = '''
line1
line2
line3
'''

String interpolation

Groovy expressions can be interpolated, which is just like replacing a placeholder with its value. Placeholders in Groovy are surrounded by ${} or prefixed with $. Also, if you pass a GString to a method that requires a String, you should convert it by calling toString() on it.

def name  = "automate"
def greet =  "Hello $name"

Groovy Examples

Lets now see two examples of Groovy

  1. JsonSlurper : JsonSlurper is a class that parses JSON text or reader content into Groovy data
    • creating instance of the JsonSlurper class
    • Using the parseText function of the JsonSlurper class to parse some JSON text
    • access the values in the JSON string via the key.
import groovy.json.JsonSlurper 

class Example {
   static void main(String[] args) {
      def jsonSlurper = new JsonSlurper() // creating an instance of the JsonSlurper class
      def object = jsonSlurper.parseText('{ "name":  "John", "ID" : "1"}') 
 	
      println(object.name);
      println(object.ID);
   } 
}
  1. Catching Exceptions
    • Accessing an array with an index value which is greater than the size of the array
class Example {
   static void main(String[] args) {
      try {
         def arr = new int[3];
         arr[5] = 5;
      }catch(ArrayIndexOutOfBoundsException ex) {
         println("Catching the Array out of Bounds exception");
      }catch(Exception ex) {
         println("Catching the exception");
      }
		
      println("Let's move on after the exception");
   } 
}

Conclusion

This tutorial is pretty straightforward and is meant to get you started with Groovy. In this tutorial you learned what Groovy is and how to install Groovy on Windows and Linux machines. Later you saw two examples that help you kickstart writing Groovy scripts.

Well, Groovy is used in various places such as Jenkins pipelines. What do you plan to code with Groovy next?

The Ultimate Guide: Getting Started with GitLab

With lots of software development and testing happening around different applications and products, you certainly need an effective way to deploy them. With so many microservices and so much code, it becomes crucial for developers and system engineers to collaborate and get a successful product ready.

Managing code is now very well taken care of by Git, which is a distributed code repository; on top of it, deployment becomes very effective and easy to manage with the help of GitLab.

In this tutorial you will learn all about GitLab, managing pipelines, projects, and much more that a DevOps engineer should know to get started.

Table of Content

  1. What is GitLab?
  2. Prerequisites
  3. Creating Projects on GitLab
  4. Creating a Repository on GitLab
  5. Creating a Branch on GitLab
  6. Get started with GitLab CI/CD Pipelines
  7. Pipeline Architecture
  8. Conclusion

What is GitLab?

Git is a distributed version control system designed to handle small to large projects with speed and efficiency. On top of Git, GitLab is a fully integrated platform to manage the DevOps lifecycle and software development.

It is a single application to manage the entire DevOps lifecycle.

Prerequisites

  • You should have a GitLab account handy. If you don't have one, create it from here.

Creating Projects on GitLab

GitLab projects hold all the files, folders, code, and documents you need to build your applications.

  • To create a project in GitLab click on Projects on the top and then click on Create a Project
  • Now click on Create blank project
  • On the Blank project tab, provide the project name; as this is a demo, we will keep this repository private.
  • Now Project is successfully created.
  • You are ready to upload files, either by creating/uploading them manually on GitLab,
  • or by pushing files from the command line by cloning the repository and adding the files as shown below.
git clone https://gitlab.com/XXXXXXXXX/XXXXX.git
cd firstgitlab
touch README.md
git add README.md
git commit -m "add README"
git push -u origin master

Creating a Repository on GitLab

A repository is the place where you store all your code and related files. It is part of a project, and each project has its own repository.

To create a new repository, all you need to do is create a new project or fork an existing project. Once you create a new project, you can add new files via UI or via command line.

Creating a Branch on GitLab

  • By now, you have seen GitLab project creation. By default, if you add any file it will be checked in to the master branch.
  • Click on New file, select Dockerfile, add its content, and then commit the file with a commit message.
  • You will see that the Dockerfile is now added to the master branch under the FirstGitLab project.
  • So far we created a file that by default gets added to the master branch. But if you need a separate branch, click on Branches and then hit New branch.
  • Provide a name for the new branch.

Get started with GitLab CI/CD Pipelines

Before you start the CI/CD part on GitLab, make sure you have the following:

  • Runners: runners are agents that run your CI/CD jobs. To check the available runners, go to Settings > CI/CD and expand Runners. As long as you have at least one active, available runner, you will be able to run jobs.
  • A .gitlab-ci.yml file: in this file you define your CI/CD jobs, the decisions the runner should take under specific conditions, and the structure and order of jobs. Go to the Project overview, click on New file, and name it .gitlab-ci.yml.
  • Now Paste the below content
build-job: 
    stage: build 
    script:
       - echo "Hello, $GITLAB_USER_LOGIN"
test-job:
    stage: test
    script: 
       - echo "Testing CI/CD Pipeline"
deploy-job:
    stage: deploy
    script:
       - echo "Deploy from the $CI_COMMIT_BRANCH branch" 
  • Now a pipeline should trigger automatically for this pipeline configuration. Click on Pipelines to validate it and view the pipeline status.
  • To view the details of a job, click the job name, for example build-job.
  • Pipelines can be scheduled to run automatically as and when required.

Pipeline Architecture

Pipelines are the fundamental building blocks for CI/CD in GitLab. There are three main ways to structure your pipelines, each with its own advantages. These methods can be mixed and matched if needed:

  • Basic: Good for straightforward projects where all the configuration is stored in one place. This is the simplest pipeline in GitLab. It runs everything in the build stage at the same time and, once all of those finish, it runs everything in the test stage the same way, and so on.

If Build A completes, it waits for Build B, and once both are completed the pipeline moves to the test stage. Similarly, if Test B completes it will wait for Test A, and once both are completed they move to the deploy stage.

  • Directed Acyclic Graph (DAG): Good for large, complex projects that need efficient execution, where you want everything to run as quickly as possible.

If Build A and Test A are both completed, that chain moves on to the deploy stage even if Test B is still running; see the sketch below.
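As a minimal sketch (the job names build_a, test_a, and deploy_a are illustrative, not from the project above), a DAG-style pipeline uses the needs keyword so a job starts as soon as the jobs it depends on finish:

build_a:
  stage: build
  script:
    - echo "Building A"

test_a:
  stage: test
  needs: ["build_a"]          # start as soon as build_a finishes
  script:
    - echo "Testing A"

deploy_a:
  stage: deploy
  needs: ["test_a"]           # does not wait for unrelated jobs such as a test_b
  script:
    - echo "Deploying A"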

  • Child/Parent Pipelines: Good for monorepos and projects with lots of independently defined components. Child pipelines are mostly started using the trigger keyword; see the sketch below.
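A minimal sketch of a parent job that triggers a child pipeline (the path microservice_a/.gitlab-ci.yml is a hypothetical example, not a file from this project):

trigger-microservice-a:
  stage: deploy
  trigger:
    include: microservice_a/.gitlab-ci.yml   # run this file as a child pipeline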

Conclusion

GitLab positions itself as a single application for software development, security, and operations that enables continuous DevOps. GitLab makes the software lifecycle faster and improves the speed of business.

GitLab provides solutions for each of the stages of the DevOps lifecycle. So, which application are you going to build?

Hope you learned a lot from this guide and that it helped you. If you liked it, please share.

Helm Charts: A Simple way to deploy application on Kubernetes

Kubernetes deployments can be done manually, but it may take a lot of effort and many hours to build and organize YAML files in a structured way. Helm charts are one of the best practices for deploying applications efficiently on Kubernetes.

In this tutorial you will learn, step by step, how to create a Helm chart, set it up, and deploy it to serve a web server. Helm charts simplify application deployment on a Kubernetes cluster.

Table of content

  1. What is Helm and Helm charts
  2. Prerequisites
  3. Installing Helm on windows machine
  4. Installing Helm on ubuntu machine
  5. Installing Minikube on Ubuntu machine
  6. Creating Helm charts
  7. Configure Helm Chart
  8. Deploy Helm chart
  9. View the Deployed Application
  10. Verify the Pods which we created using Helm chart
  11. Conclusion

What is Helm and Helm charts

Helm is a package manager for Kubernetes that makes application deployment and management easier. Helm is a command-line tool that allows you to create Helm charts.

A Helm chart is a collection of templates and settings that defines a set of Kubernetes resources. In a Helm chart we define all the resources that are needed as part of the application. Helm communicates with the Kubernetes cluster through the Kubernetes REST API.

Working with Helm charts makes the job of deployment and management easier. Helm also supports versioning.

Prerequisites

  • Docker should be installed on ubuntu machine.
  • kubectl should be installed on ubuntu machine.

Installing Helm on windows machine

To install Helm on Windows machine

  • Download the Windows release (a windows-amd64 zip) from the Helm releases page and extract it to your preferred location.
  • Now open Command Prompt, navigate to that location, and type helm.exe.
  • Now, check the version of Helm.

Installing Helm on ubuntu machine

To install Helm on ubuntu machine

  • Download the  latest version of Helm package
 wget https://get.helm.sh/helm-v3.4.1-linux-amd64.tar.gz
  •  Unpack the helm package manager
tar xvf helm-v3.4.1-linux-amd64.tar.gz
  • Now move linux-amd64/helm to /usr/local/bin
sudo mv linux-amd64/helm /usr/local/bin
  • Check the version of helm
helm version

Installing Minikube on Ubuntu machine

minikube is a local Kubernetes, focused on making it easy to learn and develop for Kubernetes. Let's install it.

  • Download and Install the minikube package on ubuntu machine.
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb

sudo dpkg -i minikube_latest_amd64.deb
  • Now start minikube as a normal user, not as the root user.
minikube start
  • Now verify that minikube is installed properly by running the following command.
minikube status

Creating Helm charts

Before we create a Helm chart, make sure Helm is installed properly. To check, run the below command.

which helm
  • Starting a new Helm chart requires one simple command
helm create automate
  • As soon as the chart is created, Helm creates a folder with the same name (automate) containing a number of files, roughly as shown below.
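For reference, on Helm 3 the generated folder looks roughly like this (the exact file list can vary slightly between Helm versions):

automate/
├── Chart.yaml
├── charts/
├── templates/
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests/
└── values.yaml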

Configure Helm Chart

By now the Helm chart has been created with just a single command. But to deploy with the Helm chart we need to configure a few of the files that were generated by the helm create command.

  1. Chart.yaml: contains details of the chart such as its name, description, the API version to be used, the chart version to be deployed, and so on. You don’t need to update this file.
  2. templates directory: the most important part of the chart is the templates directory, which holds all the configuration for your application that will be deployed into the cluster, such as ingress.yaml, service.yaml, etc. You don’t need any modifications in this directory either.
  3. charts: this folder is empty initially. Other dependent charts are added here if required (optional). Skip this as well.
  4. values.yaml: values.yaml is the main file that contains all the configuration related to the deployment. Customize the values.yaml file according to your deployment:
    • replicaCount: is set to 1, which means only 1 pod will come up (no change required).
    • pullPolicy: update it to Always.
    • nameOverride: automate-app
    • fullnameOverride: automate-chart
    • There are two service types available here: a) ClusterIP, which exposes the service on a cluster-internal IP, and b) NodePort, which exposes the service on each Kubernetes node's IP address. We will use NodePort here.

Your values.yaml should look something like below.

replicaCount: 1

image:
  repository: nginx
  pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: "automate-app"
fullnameOverride: "automate-chart"

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: "automateinfra"

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
service:
  type: NodePort
  port: 80

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}

Deploy Helm chart

Now that you’ve made the necessary modifications, you can deploy the Helm chart using the helm install command: give the release a name, point it at the chart directory, and pass a values file (you can also target a specific namespace):

helm install automate-chart automate/ --values automate/values.yaml
  • The helm install command deploys the app. Now run both export commands as shown in the helm install command’s output.
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services automate-chart)

export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")

View the Deployed Application

  • Run the echo command as shown in the output of the helm install command, then open the resulting URL in a browser (or curl it, as sketched below).
echo http://$NODE_IP:$NODE_PORT
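If you are on a machine without a browser (for example an SSH session to the Ubuntu host), a quick check is to curl the same URL; it should return the default Nginx welcome page HTML:

curl http://$NODE_IP:$NODE_PORT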

Verify the Pods which we created using Helm chart

You already saw that the application was deployed successfully and that the Nginx page loaded. But to verify from the Kubernetes side, let's run the following commands.

kubectl get nodes

kubectl get pods

Conclusion

After following the outlined step-by-step instructions, you have a Helm chart created, set up, and deployed on a web server. Helm charts simplify application deployment on a Kubernetes cluster.

Hope you liked this tutorial and it helped you. Please share with your friends.

Complete Python Course ( Python for beginners)

Python’s standard library is very extensive, offering a wide range of facilities. The library contains built-in modules (written in C) that provide access to system functionality such as file I/O that would otherwise be inaccessible to Python programmers, as well as modules written in Python that provide standardized solutions for many problems that occur in everyday programming

In this tutorial, we will learn everything which a beginner and a DevOps engineer should know in Python. We will cover the basic definition of python and some brilliant examples which will be enough to get you started with Python and for sure you will love it.

Table of content

  1. What is Python?
  2. Prerequisites
  3. Python Keywords
    • Python Numbers
    • Python Strings
    • Python Tuple
    • Python Lists
    • Python Dictionary
    • Python Sets
  4. Python variables
  5. Python Built-in functions
  6. Python Handling Exceptions
  7. Python Functions
  8. Python Searching
  9. Conclusion

What is Python?

Python is a high-level, object-oriented, interactive, and general-purpose scripting programming language. Python is used as a backend language as well as for general-purpose scripting, and it focuses on objects over functions.

Python is an interpreted language: its code is compiled to bytecode and executed by the interpreter at runtime rather than being compiled to machine code ahead of time. It works with a variety of protocols such as HTTPS, FTP, SMTP, and many more. At the time of writing, the latest version was 3.9.2. Python works very well with most editors, such as Atom, Notepad++, and Vim.

Python works on Windows, Linux, macOS, and more. On all of these you can run Python interactively from a terminal or shell without needing to save the program to a file every time.

Prerequisites

  • Python doesn’t come installed on Windows, so make sure you have Python installed on your Windows machine. To see how to install Python on Windows, click here.
  • On macOS and Linux, Python usually comes installed by default, but it could be an older version such as Python 2. To check the Python version, run the command below.
python --version    # checking an installed older version (Python 2)
python3 --version   # checking the installed Python 3
  • If your system only has Python 2, or if you receive an error message such as “Python not found”, run the following command to install Python 3 (shown here for Ubuntu/Debian Linux).
sudo apt install python3

Python Keywords

Python reserves certain words for special purposes known as keywords such as

  • continue
  • def
  • del
  • break
  • class
  • if
  • return

Python Data types

Whatever program you write in Python works with data, and this data consists of values, also known as objects. Each object has a type, known as its data type. These data types are either mutable (modifiable) or immutable (unmodifiable).

Python contains numerous built-in data types such as:

Python Numbers

Numbers are either integers or floating-point.

  • Decimal integer: 1 , 2
  • Binary integer: 0b010101
  • Octal integer: 0o1
  • Hexadecimal integer: 0x1
  • Floating point: 0.0 , 2.e0

Boolean

Booleans are represented by either True or False

Python String

Python strings are a collection of characters surrounded by quotes " ". There are different ways in which strings are declared, such as:

  1. str() – the built-in str() constructor converts a value into a string, for example str(10) gives "10".
  2. Directly writing it in quotes – “Hello, this is method 2 to display a string”
  3. Using format() – the format method uses curly brackets {} as placeholders to be replaced by values.
    • In the example below, you will notice that the first curly bracket is replaced by the first value a and the second by b.
    • If you provide a numerical value inside the curly braces, it is treated as an index and the corresponding positional value is used, as shown in the second example below.
    • If you provide key=value pairs, values are substituted according to the key, as shown in the third example below.
'{} {}'.format('a','b')
'{0} {0}'.format('a','b')
'{a} {b}'.format(a='apple', b='ball')
  4. Using f-strings – f-strings are prefixed with either f or F before the first quotation mark. The placeholders are substituted with the values of the variables referenced.
a=1 # Declaring a 
f"a is {a}" 
  5. Template strings – Template strings are designed to offer a simple string substitution mechanism. This built-in class works well for tasks where simple word substitutions are all that is necessary.
from string import Template
new_value = Template("$a b c d")       #  a will be substituted here
x = new_value.substitute(a = "Automation")
y = new_value.substitute(a = "Automate")
print(x,y)

Some Tricky Examples of declaring string

Input String

print('This is my string 1')   # Correct String
print("This is my string 2")   # Correct String
# print('This is not a string ") # InCorrect String as you cannot used mixed quotes
# print("This is not a string')  # InCorrect String as you cannot used mixed quotes

# Examples of Special characters inside the String such as quotes

# print('Hello's Everyone')  # Incorrect Statement
print('Hello\'s Everyone')   # Correct Statement after using escape (To insert characters that are illegal in a string, use an escape character. )
print("Hello's Everyone")    # Correct Statement enclose within double quotes
print('Hello "shanky')       # COrrect STatement 
print('Hello "shanky"')      # Correct STatment
# print("Hello "S"shanky") # Incorrect Statement
print("Hello ""shanky")  

# No need to Escape if using triple quotes but proper use of triple quotes
print(''''This is not a string "''')
print('''Hello" how' are"" u " I am " f'ine'r''')
print('''''Hello" how' are"" u " I am " f'ine'r''')
print("""'''''Hello" how' are"" u " I am " f'ine'r""") 

Output String

This is my string 1
This is my string 2
Hello's Everyone
Hello's Everyone
Hello "shanky
Hello "shanky"
Hello shanky
'This is not a string "
Hello" how' are"" u " I am " f'ine'r
''Hello" how' are"" u " I am " f'ine'r
'''''Hello" how' are"" u " I am " f'ine'r

Python Tuple

Tuples: Tuples are immutable, ordered sequences of items, meaning they cannot be modified. The items of a tuple are arbitrary objects, may be of different types, and may contain duplicate values. For example:

# 10,20,30,30 are fixed at respective index 0,1,2,3 positions 
(10,20,30,30) or (3.14,5.14,6.14)

Python Lists

Lists: The list is a mutable ordered sequence of items. The items of a list are arbitrary objects and may be of different types. List items are ordered, changeable, and allow duplicate values.

[2,3,"automate","2"]

Python Dictionaries

Dictionaries are written as key:value pairs, where the key is an expression giving the item's key and the value is an expression giving the item's value. A dictionary is a collection which is ordered (as of Python 3.7), changeable, and does not allow duplicate keys.

# Dictionary with three items where x,y and z are keys.
# where x,y and z have 42, 3.14 and 7 as the values.
{'x':42, 'y':3.14, 'z':7} 

Python Sets

Sets: Set stores multiple items in a single variable. It contains unordered and unindexed data. Sets cannot have two items with the same value.

{"apple", "banana", "cherry"}
Data type        Mutable or Immutable
String           Immutable (cannot be modified)
Tuples           Immutable (cannot be modified)
Integers         Immutable (cannot be modified)
List             Mutable (can be modified)
Sets             Mutable (can be modified)
Floating point   Immutable (cannot be modified)
Dictionaries     Mutable (can be modified)

Python variables

Variables store information, which could be a number, a symbol, a name, etc., so that it can be referenced later. Let's see some examples of Python variables.

  • There are a few points to remember when naming variables:
    • Variable names cannot start with digits.
    • Spaces are not allowed in variable names.
    • Avoid using Python keywords.

Example 1:

  • In below example var is a variable and value of var is this is a variable
var="this is a variable" # Defining the variable
print(var)    # Printing the value of variable

Example 2:

  • In below example we are declaring three variable.
    • first_word and second_word are storing the values
    • add_words is substituting the variables with values
first_word="hello"
second_word="devops"
add_words=f"{first_word}{second_word}"
print(add_words)
  • If you wish to print the words on separate lines, use "\n" as below.
first_word="hello"
second_word="devops"
add_words=f"{first_word}\n{second_word}"
print(add_words)

Dictionary

In simple words, dictionaries are key-value pairs where keys can be numbers, strings, or custom objects. Dictionaries are represented as key-value pairs separated by commas within curly braces.

map = {'key-1': 'value-1', 'key-2': 'value-2'}
  • You can access the particular key using following way
map['key-1']

Lets see an example to access values using get() method

my_dictionary = {'key-1': 'value-1', 'key-2': 'value-2'}
my_dictionary.get('key-1')    # It will print value of key-1 which is value-1
print(my_dictionary.values()) # It will print values of each key
print(my_dictionary.keys())   # It will print keys of each value
my_dictionary.get('key-3')    # It will not print anything as key-3 is missing

Lists

Lists are ordered collections of items. Lists are represented using square brackets containing an ordered list of items.

[0, 1, 2, 3, 4, 5, 6, 7, 8, 9] # Example of a list
  • We can add or remove items from a list using built-in methods such as pop(), insert(), append(), and many more. Let's see an example.

The contents of one list can be added to another using the extend method:

list1 =['a', 'b', 'c', 'd']
print(list1)                        # Printing only List 1
list2 = ['e', 'f']
list2.extend(list1)
print(list2)                        # Printing List 2 and also 1
  • Use insert() to add one new guest to the beginning of your list.
  • Use insert() to add one new guest to the middle of your list.
  • Use append() to add one new guest to the end of your list (see the sketch after this list).
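Here is a small sketch of those list methods in action (the guest names are made up for illustration):

guests = ['bob', 'charlie', 'dave']   # starting list
guests.insert(0, 'alice')             # insert() at index 0 adds to the beginning
guests.insert(2, 'carol')             # insert() at a middle index adds to the middle
guests.append('eve')                  # append() always adds to the end
print(guests)                         # ['alice', 'bob', 'carol', 'charlie', 'dave', 'eve']
guests.pop()                          # pop() removes and returns the last item ('eve')
print(guests)                         # ['alice', 'bob', 'carol', 'charlie', 'dave']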

Python Built-in functions

There are various single-line commands already embedded in the Python library; these are known as built-in functions. You invoke a function by typing the function name, followed by parentheses.

  • To check the Python version on windows or Linux machine run the following command.
python3 --version
  • To print the output of a program , use the print command.
print("Hello Devops")
  • To generate a list of number through a range built-in function run the following command.
list(range(0,10))

Handling Exceptions

Exceptions are errors that cause a program to stop if not handled properly. There are many built-in exceptions, such as IOError, KeyError, and ImportError. Let's see a simple example below.

  • Here we defined a list of characters and stored it in a variable devops
  • Now, while True means that as long as the condition is true, it will keep executing the try block.
  • .pop() is built in method to remove each item one by one.
  • Now in our case as soon as all the characters are removed then except block catches the IndexError and prints the message.
devops = ['d','e','v','o','p','s']
 
while True:
    try:
        devop = devops.pop()
        print(devop)
    except IndexError as e:
        print("I think I did lot of pop ")
        print(e)
        break
 
Output:
 
s
p
o
v
e
d
I think I did lot of pop
pop from empty list

Python Functions

Earlier in this tutorial we saw that there are numerous built-in functions, and you used some of them above. But you can also define and create your own functions. Let's see the syntax of a function.

def <FUNCTION NAME>(<PARAMETERS>):
    <CODE BLOCK>
<FUNCTION NAME>(<ARGUMENTS>)

Lets look at some of the Python functions examples

EXAMPLE 1

  • Here each argument uses the order of the arguments to assign values; these are also known as positional arguments.
  • a and b variables are parameters which are required to run the function
  • 1 and 2 are arguments which are used to pass the value to the function ( arguments are piece of information that’s passed from a function call to a function)
def my_function(a,b):
  print(f" value of a is {a}")
  print(f" value of b is {b}")
my_function(1, 2)

EXAMPLE 2:

  • With default arguments, each parameter is assigned a default value that is used when no argument is passed:
def my_function(a=3,b=4):
  print(f" value of a is {a}")
  print(f" value of b is {b}")
my_function()

EXAMPLE 3

Passing an arbitrary number of arguments: when you are not sure how many arguments will be passed, we call them arbitrary arguments. Let's look at an example.

  • Find the even numbers among the arguments

mylist = []
def myfunc(*args):      #  args is to take any number of arguments together in myfunc
    for item in args:
        if int(item)%2 == 0:
            mylist.append(item)
    print(mylist)
myfunc(5,6,7,8,9)

EXAMPLE 4

  • IF condition: find the smaller of two numbers if both numbers are even, else print the greater of the two numbers

def two_of_less(a,b):    # Defining the Function where a and b variables are parameters
    if a%2==0 and b%2==0:
      print(min(a,b))       # using built in function min()
    if a%2==1 or b%2==1:
      print(max(a,b))       # using built in function max()
two_of_less(2,4)

EXAMPLE 5

  • Write a function that takes a two-word string and reports whether both words begin with the same letter

def check(a):
    m = a.split()
    if m[0][0] == m[1][0] :
     print("Both the Words in the string starts with same letter")
    else:
     print("Both the Words in the string don't start with same letter")    
check('devops Engineer')

Python Searching

The need to match patterns in strings comes up again and again. You could be looking for an identifier in a log file or checking user input for keywords or a myriad of other cases.

Regular expressions use a string of characters to define search patterns. The Python re package offers regular expression operations similar to those found in Perl.

Let's look at an example that gives you an overall picture of the built-in functions we can use with the re module.

  • You can use the re.search function, which returns a re.Match object only if there is a match.
import re
import datetime
 
name_list = '''Ezra Sharma <esharma@automateinfra.com>,
   ...: Rostam Bat   <rostam@automateinfra.com>,
   ...: Chris Taylor <ctaylor@automateinfra.com,
   ...: Bobbi Baio <bbaio@automateinfra.com'''
 
# Some commonly used ones are \w, which is equivalent to [a-zA-Z0-9_] and \d, which is equivalent to [0-9]. 
# You can use the + modifier to match for multiple characters:
 
print(re.search(r'Rostam', name_list))
print(re.search('[RB]obb[yi]',  name_list))
print(re.search(r'Chr[a-z][a-z]', name_list))
print(re.search(r'[A-Za-z]+', name_list))
print(re.search(r'[A-Za-z]{5}', name_list))
print(re.search(r'[A-Za-z]{7}', name_list))
print(re.search(r'[A-Za-z]+@[a-z]+\.[a-z]+', name_list))
print(re.search(r'\w+', name_list))
print(re.search(r'\w+\@\w+\.\w+', name_list))
print(re.search(r'(\w+)\@(\w+)\.(\w+)', name_list))
 

OUTPUT

<re.Match object; span=(49, 55), match='Rostam'>
<re.Match object; span=(147, 152), match='Bobbi'>
<re.Match object; span=(98, 103), match='Chris'>
<re.Match object; span=(0, 4), match='Ezra'>
<re.Match object; span=(5, 10), match='Sharm'>
<re.Match object; span=(13, 20), match='esharma'>
<re.Match object; span=(13, 38), match='esharma@automateinfra.com'>
<re.Match object; span=(0, 4), match='Ezra'>
<re.Match object; span=(13, 38), match='esharma@automateinfra.com'>
<re.Match object; span=(13, 38), match='esharma@automateinfra.com'>

Conclusion

In this tutorial you learnt the Python basics that a beginner and a DevOps engineer should know. It covered the definition of Python and a set of examples that are enough to get you started with Python, and for sure you will love it.

By now, you are ready to build some exciting Python programs. Hope you liked this tutorial, and please share it with your friends.

How to Launch an AWS Redshift Cluster using the AWS Management Console in an Amazon account

Although there are plenty of storage services that can hold huge amounts of data, performance has always remained a challenge when it comes to analyzing that data. Typical issues are queries that cannot retrieve data in time, storage leakage, and so on.

To solve these issues Amazon provides its own managed service, AWS Redshift, for storing gigabytes to terabytes of data and then analyzing it.

In this tutorial you will learn about Amazon's data warehouse and analytics service AWS Redshift, what an AWS Redshift cluster is, and how to create one using the AWS Management Console.

Table of Content

  1. What is Amazon Redshift?
  2. What is Amazon Redshift Cluster?
  3. Amazon Redshift Cluster overview
  4. Prerequisites
  5. How to Create a basic Redshift Cluster using AWS Management console
  6. Conclusion

What is Amazon Redshift?

Amazon Redshift is an AWS analytics service used to analyze data. Amazon Redshift allows us to store massive amounts of data and analyze it by running queries against the database. It is a fully managed service, which means you don't need to worry about scalability or infrastructure.

The first step is to create a set of nodes, known as an Amazon Redshift cluster. A cluster contains a group of nodes. Once the cluster is created you can upload large amounts of data (gigabytes and more) and then start analyzing it.

Amazon Redshift manages everything required at the infrastructure end for you, such as monitoring, scaling, applying patches, upgrades and capacity.

What is Amazon Redshift Cluster?

An Amazon Redshift cluster can contain a single node or more than one node, depending on your requirements. The set of nodes is known as a cluster; a multi-node AWS Redshift cluster contains one leader node, and the other nodes are known as compute nodes.

You can create AWS Redshift cluster using various ways such as:

  • AWS Command Line interface ( AWS CLI )
  • AWS Management console
  • AWS SDK (Software Development Kit) libraries, for example boto3 for Python (see the sketch below).
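As an illustration of the SDK route, here is a minimal sketch using boto3; the cluster identifier, node type and password below are placeholders, and it assumes boto3 is installed and AWS credentials plus a default region are already configured:

import boto3

redshift = boto3.client("redshift")

# Placeholder values for illustration only
response = redshift.create_cluster(
    ClusterIdentifier="demo-cluster",
    NodeType="dc2.large",
    MasterUsername="awsuser",
    MasterUserPassword="ChangeMe1234",
    ClusterType="single-node",
)
print(response["Cluster"]["ClusterStatus"])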

Amazon Redshift Cluster overview

Let's look at some of the concepts behind an Amazon Redshift cluster.

  • Redshift cluster snapshots can be created either manually or automatically and are stored in an AWS S3 bucket.
  • Administrators assign IAM permissions on the Redshift cluster for any users who need to access it.
  • Amazon CloudWatch is primarily used to capture the health and performance of the Amazon Redshift cluster.
  • As soon as you create an Amazon Redshift cluster, one database is also created. This database is used to query and analyze the data. While provisioning the cluster you need to provide a master user, which is the superuser for the database and has all rights.
  • When a client queries the Redshift cluster, all requests are received by the leader node, which parses them and develops query execution plans. The leader node coordinates with the compute nodes and then provides the final results to the client.

Prerequisites

  • You must have an AWS account in order to set up an AWS Redshift cluster. If you don't have one, please create an account from here: AWS account.
  • You must have access to create an IAM role and an AWS Redshift cluster.
  • (Optional): AWS administrator rights will be helpful.

How to Create a basic Redshift Cluster using AWS Management console

Before we start creating a Redshift cluster we need an IAM role which Redshift will assume to work with other services such as AWS S3. So let's get started.

  • Open your browser, go to the AWS Management Console and search for IAM at the top, then click on Roles.
  • Next, click on Create Role.
  • Next, select the service as Redshift.
  • Now scroll down to the bottom and you will see “Select your use case”; here choose Redshift – Customizable, then choose Next: Permissions.
  • Now attach the AmazonS3ReadOnlyAccess policy and click Next.
  • Next, skip tagging for now by clicking Next: Tags, then Review, and finally hit Create Role.
  • The IAM role is created successfully; keep the IAM role ARN handy:
  • Now on the AWS Management Console search for Redshift at the top of the page.
  • Now click on Create Cluster and provide the name of the cluster. As this is a demo, we will use the free trial cluster.
  • Now provide the database details and save them for later. Also associate the IAM role which we created earlier.
  • Finally click on Create cluster.
  • By now the AWS Redshift cluster is created successfully and available for use.
  • Let's validate our database connection by running a simple query. Click on Query data.
  • Now enter the database credentials to make the connection to the AWS Redshift cluster (the dev database was created by default).
  • Now run a query as below.
    • Some of the tables inside the database, like events and date, were created by default.
select * from date

This confirms that the AWS Redshift cluster was created successfully and we are able to run queries against it.
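If you prefer to verify connectivity from code rather than the Query editor, here is a minimal sketch using the psycopg2 driver; the endpoint, user and password are placeholders, psycopg2 must be installed, and the cluster's security group has to allow your IP:

import psycopg2

# Placeholder connection details; replace with your cluster endpoint and credentials
conn = psycopg2.connect(
    host="demo-cluster.xxxxxxxx.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="awsuser",
    password="ChangeMe1234",
)
with conn.cursor() as cur:
    cur.execute("select * from date limit 5;")   # 'date' is one of the default sample tables
    for row in cur.fetchall():
        print(row)
conn.close()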

Conclusion

In this tutorial we learnt about Amazon's data warehouse and analytics service AWS Redshift, what an AWS Redshift cluster is, and how to create an AWS Redshift cluster using the AWS Management Console.

Having learnt this service, you are now ready to work with gigabytes and terabytes of data and analyze it with great performance.

Ultimate Guide on how to add apt-repository and PPA repositories and work with Ubuntu repositories

As a Linux administrator it is very important to know how you manage your applications and software. Every command and every package installation requires careful attention before you execute it.

So in this ultimate guide we will learn everything you should know about Ubuntu repositories: how to add apt and PPA repositories, and how to work with Ubuntu repositories and apt commands.

Table of Content

  1. What is ubuntu repository?
  2. How to add a ubuntu repository?
  3. Manually Adding apt-repository in ubuntu
  4. Adding PPA Repositories
  5. Working with Ubuntu repositories
  6. How apt or apt-get command work with Ubuntu Repository
  7. Conclusion

What is ubuntu repository?

An APT repository is a network server or a local directory containing deb packages and metadata files that are readable by the APT tools. When installing packages using the Ubuntu Software Center or command line utilities such as apt or apt-get, the packages are downloaded from one or more apt software repositories.

On Ubuntu and all other Debian based distributions, the apt software repositories are defined in the /etc/apt/sources.list file or in separate files under the /etc/apt/sources.list.d/ directory.

The names of the repository files inside the /etc/apt/sources.list.d/ directory must end with .list.

How to add apt-repository in Ubuntu?

add-apt-repository is a Python script that helps you add repositories on Ubuntu.

Let's take an example of adding a MongoDB repository to an Ubuntu machine.

  • The add-apt-repository utility is included in the software-properties-common package.
sudo apt update
sudo apt install software-properties-common
  • Import the repository public key by running the apt-key command.
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4
  • Add the MongoDB repository using the command below.
sudo add-apt-repository 'deb [arch=amd64] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse'
  • Verify in /etc/apt/sources.list that the repository has been added successfully.

Manually Adding apt-repository in ubuntu

To add repositories manually in Ubuntu, edit the /etc/apt/sources.list file and append the apt repository line to it.

Open the sources.list file with your favorite editor:

sudo vi /etc/apt/sources.list

Add the repository line to the end of the file:

deb [arch=amd64] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse
  • If required, add the repository's public key manually, for which you can use the wget or curl command.

Adding PPA Repositories

Personal Package Archives (PPA) allow you to upload Ubuntu source packages that are built and published by Launchpad as an apt repository.

When you add a PPA repository the add-apt-repository command creates a new file under the /etc/apt/sources.list.d/ directory.

Let's take an example of adding the Ansible PPA repository to an Ubuntu machine.

  • The PPA tooling is provided by the same software-properties-common package as add-apt-repository.
sudo apt update
sudo apt install software-properties-common
  • Add the Ansible PPA repository to the system.
sudo apt-add-repository --yes --update ppa:ansible/ansible
#  PPA is Personal Package Archive
  • Let's check that the /etc/apt/sources.list.d/ directory now contains the Ansible PPA repository file.

Working with Ubuntu repositories

Repositories on an Ubuntu machine are basically file servers or network shares that hold lots of packages, either .deb packages or files readable by the apt or apt-get command. They are defined in:

/etc/apt/sources.list or 

/etc/apt/sources.list.d

What do sources.list and sources.list.d contain?

  • Software in Ubuntu's repositories is divided into four categories or components – main, restricted, universe and multiverse.
    • main: contains free software that is fully supported by Ubuntu.
    • multiverse: contains software that is not free and may require a license.
    • restricted: contains proprietary software (such as device drivers); the Ubuntu team cannot fix it themselves and has to pass issues back to the original author.
    • universe: contains all possible software that is free and open source, but Ubuntu does not guarantee regular patches for it.
deb http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ bionic main restricted
deb-src http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ bionic main restricted
deb http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ bionic-updates main restricted
deb-src http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ bionic-updates main restricted
deb http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ bionic universe
deb-src http://security.ubuntu.com/ubuntu bionic-security multiverse
  • deb or deb-src indicates whether the entry provides .deb packages or source code.
  • http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ is the repository URL.
  • bionic, bionic-updates and bionic-security are the distribution (suite) names.
  • main, restricted, universe and multiverse are the repository components (see the sketch below).
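To make this anatomy concrete, here is a small illustrative Python sketch that splits one of the repository lines above into its parts:

# Split an apt source line into its components (illustrative example)
line = "deb http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ bionic main restricted"

parts = line.split()
entry = {
    "type": parts[0],         # deb or deb-src
    "url": parts[1],          # repository URL
    "suite": parts[2],        # distribution/suite name, e.g. bionic
    "components": parts[3:],  # main, restricted, universe, multiverse
}
print(entry)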

How apt or apt-get command work with Ubuntu Repository

APT stands for Advanced Package Tool. It performs functions such as installing new software packages, upgrading existing software packages, updating the package list index, and even upgrading the entire Ubuntu system, by connecting to the repositories defined under /etc/apt/sources.list or /etc/apt/sources.list.d/.

Let us see an example of how the apt command works with Ubuntu repositories.

  • Install the three packages below.
apt install curl

apt install wget

apt install telnet
  • You will notice that all the above packages are already installed and up to date.
  • Now run the apt update command to refresh the package lists from the repositories. The apt update output contains three types of lines.
    • Hit: there is no change in the package version since the previous check.
    • Ign: the package is being ignored.
    • Get: a new version is available. apt downloads the information about the version (not the package itself); you can see the download size (in kB) on the 'Get' lines.
apt update
  • After the command completes it reports whether any packages need to be upgraded. In our case it shows 37 packages can be upgraded. Let's see the list of packages which can be upgraded by running the following command.
apt list --upgradable

You can either upgrade a single package or upgrade all packages together.

To upgrade a single package use : apt install <package-name>

To upgrade all packages use : apt upgrade

  • Let's just update the curl package by running the apt install command and verify.
apt install curl
  • You will notice that updating curl upgraded the 2 packages related to curl, while the remaining 35 are still not upgraded.
  • Now let's upgrade the remaining 35 packages together by running the apt upgrade command.
apt upgrade
  • Let's run the apt update command again to verify whether Ubuntu still has any software to upgrade. The command output should look like “All packages are up to date”. If you ever need to script this kind of check, see the sketch below.
apt update
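A minimal Python sketch for scripting that check (for example from a monitoring or CI job); it simply shells out to apt and assumes the standard apt list --upgradable output format:

import subprocess

# Ask apt which packages can be upgraded and count them (illustrative sketch)
result = subprocess.run(
    ["apt", "list", "--upgradable"],
    capture_output=True, text=True, check=True,
)
# The first line of output is a header ("Listing..."); the rest are packages
upgradable = [line for line in result.stdout.splitlines()[1:] if line.strip()]
print(f"{len(upgradable)} packages can be upgraded")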

Conclusion

In this tutorial we learnt all about Ubuntu repositories, how to add various repositories, and how to work with them. Finally we saw how the apt command works with Ubuntu repositories.

This ultimate guide should give you a very strong understanding of package management, which is one of the most important skills for a Linux administrator. Hope you liked this tutorial and found it helpful. Please share.

How to Connect Windows to Linux and Linux to Windows using PowerShell 7 SSH Remoting ( PS Remoting Over SSH)

PowerShell Remoting has various benefits. It started on Windows, where administrators used it to work remotely with large numbers of Windows machines over the WinRM protocol. With automation and Unix distributions spreading across the world and being required by every IT engineer, PowerShell introduced PSRemoting over SSH in PowerShell 7 to connect Windows to Linux and Linux to Windows remotely.

In this tutorial we will learn how to set up PS Remoting on a Windows machine and on a Linux machine using PS Remoting over SSH (supported from PowerShell 7). Finally we will connect both Windows to Linux and Linux to Windows. Let's get started.

Table of Content

  1. What is PSRemoting or PowerShell Remoting Over WinRM?
  2. What is PSRemoting or PowerShell Remoting Over SSH?
  3. Prerequisites
  4. Step by step set up SSH remoting on Windows
  5. Step by step set up SSH remoting on Ubuntu
  6. Test the OpenSSH connectivity from Windows machine to Linux using PSRemoting
  7. Test the OpenSSH connectivity from Linux to Windows machine using PSRemoting
  8. Conclusion

What is PSRemoting or PowerShell Remoting?

PowerShell Remoting is a feature of PowerShell. With PowerShell Remoting you can connect with a single or tons of servers at a single time.

PS Remoting Over SSH (Windows to Linux and Windows to Windows)

WS-Management or Web services management or WS-Man provides a common way for systems to access and exchange management information across the IT infrastructure.

Microsoft implemented WS-Management (WS-Man) in WinRM, that is Windows Remote Management, which allows hardware and operating systems from different vendors to connect to each other. For WinRM to obtain data from remote computers, you must configure a WinRM listener. A WinRM listener can work over either the HTTP or HTTPS protocol.

PS Remoting Over WinRM (Linux to Windows)

When PowerShell Remoting takes place between two servers, that is, one server tries to run commands remotely on the other, the source server connects to the destination server over the WinRM listener. To configure PSRemoting on a local or remote machine, please visit the link.

What is PSRemoting or PowerShell Remoting Over SSH?

Microsoft introduced PowerShell 7 remoting over SSH, which allows true multiplatform PowerShell remoting between Linux, macOS, and Windows. PowerShell SSH remoting creates a PowerShell host process on the target machine as an SSH subsystem. Normally, Windows PowerShell remoting uses WinRM for connection negotiation and data transport, but WinRM is only available on Windows-based machines. That means Linux can connect to Windows, or Windows to Windows, over WinRM, but Windows cannot connect to Linux that way.

With PowerShell 7 remoting over SSH it is now possible to remote between Linux, macOS, and Windows.

PS Remoting Over SSH ( Windows to Linux , Linux to Windows)

Prerequisites

  • Microsoft Windows Server 2019 Standard. This machine should also have PowerShell 7 installed. If you don't have PowerShell 7 installed, please follow here to install it.
  • Make sure you have a local account set up on the Windows Server 2019 machine. We will be using the “automate” user.
  • Make sure a password is set for the ubuntu user on the Ubuntu machine; if it is already set, ignore this step.
  • An Ubuntu machine with PowerShell 7 installed.

Step by step set up SSH remoting on Windows

Here we will discuss about how to setup SSH remoting on Windows Machine and run the PSRemoting commands.

  • We assume you are on a Windows Server 2019 Standard machine with PowerShell 7 installed. Let's verify it once.
  • Before SSH is set up on the Windows machine, if you try to open an SSH session to the Linux machine you will receive an error message like this.
  • The next step is to install the OpenSSH client and server on the Windows Server 2019 Standard machine. Let's use the PowerShell cmdlet Add-WindowsCapability and run the commands.
Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0
 
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
  • Once OpenSSH is installed successfully, we need to start the OpenSSH service and set it to start automatically.
Start-Service sshd
Set-Service sshd -StartupType Automatic
  • Now edit the OpenSSH configuration file sshd_config, located in C:\Windows\System32\OpenSSH or in C:\ProgramData\ssh\sshd_config, and add a Subsystem entry for PowerShell.
Subsystem powershell c:/progra~1/powershell/7/pwsh.exe -sshs -NoLogo -NoProfile
  • Also make sure the OpenSSH configuration file sshd_config has PasswordAuthentication set to yes.
  • Restart the service
Restart-Service sshd
  • SSH remoting is now properly set up on the Windows machine.

Step by step set up SSH remoting on Ubuntu

Previously we configured SSH remoting on the Windows machine; now we need to perform similar steps on the Ubuntu machine with a few commands.

  • PowerShell 7 must be installed on the Ubuntu machine.
  • Install the OpenSSH client and server on the Ubuntu machine.
sudo apt install openssh-client
sudo apt install openssh-server
  • Similarly, edit the sshd_config file on the Ubuntu machine.
sudo vi /etc/ssh/sshd_config
  • Paste the content below (add the Subsystem entry for PowerShell) and make sure PasswordAuthentication is set to yes.
Subsystem powershell /usr/bin/pwsh -sshs -NoLogo -NoProfile
  • Restart the service
sudo service sshd restart

Test the OpenSSH connectivity from Windows machine to Linux using PSRemoting

Now that the SSH remoting steps are done on both Windows and Ubuntu, let's verify the SSH connectivity from the Windows machine to the Ubuntu machine.

Verification Method 1

  • Create a session, then enter the session and run commands from Windows PowerShell against the Linux PowerShell host.
New-PSSession -Hostname  54.221.35.44 -UserName ubuntu # Windows to Linux Create Session

Enter-PSSession -Hostname 54.221.35.44 -UserName ubuntu # Windows to Linux Enter Session

Verification Method 2

  • Create the session and test the connectivity from the Windows machine to Linux using the Invoke-Command cmdlet.
$SessionParams = @{
     HostName = "54.221.35.44"
     UserName = "ubuntu"
     SSHTransport = $true
 }
Invoke-Command @SessionParams -ScriptBlock {Get-Process}

Test the OpenSSH connectivity from Linux to Windows machine using PSRemoting

Let's verify the SSH connectivity from the Ubuntu machine to the Windows machine.

  • Open PowerShell on the Ubuntu machine with the following command.
pwsh
  • Although you are on the Ubuntu machine, let's verify the Ubuntu version [optional step].
  • Now SSH into the Windows machine using the following command.
ssh automate@3.143.233.234
  • Here we go. You can clearly see that we have SSHed into the Windows machine successfully.

Conclusion

PowerShell Remoting has various benefits. It started on Windows, where administrators used it to work remotely with large numbers of Windows machines over the WinRM protocol. With automation and Unix distributions spreading across the world and being required by every IT engineer, and to solve the problem of connecting Windows to Linux and Linux to Windows, PowerShell introduced PSRemoting over SSH, which connects Windows to Linux and Linux to Windows remotely with an easy setup.

Hope you find this tutorial helpful. If you like please share it with your friends.

What is PSRemoting or PowerShell Remoting and how to Enable PS Remoting

PSRemoting, or PowerShell Remoting, is PowerShell-based remoting which allows you to connect to one or thousands of remote computers and execute commands on them. PSRemoting lets you sit in one place and execute commands on remote machines as if you were executing them physically on the servers.

In this tutorial you will learn what PS Remoting (PowerShell Remoting) is and how to enable PowerShell Remoting locally and on remote machines.

Table of Content

  1. What is PSRemoting or PowerShell Remoting?
  2. Prerequisites
  3. How to Enable PS Remoting Locally on system?
  4. How to Enable PS Remoting on remote system?
  5. Conclusion

What is PSRemoting or PowerShell Remoting?

PowerShell Remoting is a feature of PowerShell. With PowerShell Remoting you can connect with a single or tons of servers at a single time.

WS-Management or Web services management or WS-Man provides a common way for systems to access and exchange management information across the IT infrastructure.

Microsoft implemented WS-Management (WS-Man) in WinRM, that is Windows Remote Management, which allows hardware and operating systems from different vendors to connect to each other. For WinRM to obtain data from remote computers, you must configure a WinRM listener. A WinRM listener can work over either the HTTP or HTTPS protocol.

When PowerShell Remoting takes place between two servers, that is, one server tries to run commands remotely on the other, the source server connects to the destination server over the WinRM listener.

How to check WinRM listeners on Windows Host?

To check the WinRM listeners on a Windows host, use the following command:

 winrm e winrm/config/listener

Prerequisites

  • Make sure you have a Windows machine with PowerShell 7 installed. If you don't have it, install it from here.

How to Enable PS Remoting Locally on system?

There are two ways in which you can enable PSRemoting on the local machine.

Use Enable-PSRemoting to Enable PS Remoting Locally on system

  • Invoke the command Enable-PSRemoting; it performs the following functions:
    • Starts the WinRM service
    • Creates a listener on port 5985 for HTTP
    • Registers and enables PowerShell session configurations
    • Sets the PowerShell session configurations to allow remote sessions
    • Restarts the WinRM service

Enable-PSRemoting  # Enabled by default on Windows Server editions
  • On a server OS, like Windows Server 2019, the firewall rule for Public networks allows remote connections from other devices on the same network. On a client OS, like Windows 10, you will receive an error stating that you are on a public network.
Command run on a Windows Server 2019 machine
Command run on a Windows 10 machine
  • If you want to ignore the error message caused by the network profile on a client such as Windows 10, use the following command.
Enable-PSRemoting -SkipNetworkProfileCheck

Use WinRM to Enable PS Remoting Locally on system

  • We can also use the winrm quickconfig command to enable PS Remoting on the local machine.
winrm quickconfig

How to Enable PS Remoting on remote system?

There are two ways in which you can enable PSRemoting on the remote machine.

Use PsExec to Enable PS Remoting on remote system

  • Using PsExec you can run a command on a remote machine. When you run the PsExec command, it initializes a PowerShell session on the remote machine and then runs the command.
.\psexec.exe \\3.143.113.23 -h -s powershell.exe Enable-PSRemoting -Force # 3.143.113.23 is remote machine's IP address

Use WMI to Enable PS Remoting on remote system

You can also use PowerShell and the Invoke-CimMethod cmdlet. With Invoke-CimMethod, you can instruct PowerShell to connect to the remote computer over DCOM and invoke WMI methods.

$SessionArgs = @{
     ComputerName  = 'WIN-U22NTASS3O7'
     Credential    = Get-Credential
     SessionOption = New-CimSessionOption -Protocol Dcom
 }
 $MethodArgs = @{
     ClassName     = 'Win32_Process'
     MethodName    = 'Create'
     CimSession    = New-CimSession @SessionArgs
     Arguments     = @{
         CommandLine = "powershell Start-Process powershell -ArgumentList 'Enable-PSRemoting -Force'"
     }
 }
 Invoke-CimMethod @MethodArgs

Conclusion

In this tutorial, you have learned what PSRemoting is and how to enable PSRemoting with various methods, locally on the machine as well as remotely. This gives you a great opportunity to automate against many remote machines at once.

Getting Started with PowerShell Commands which Every DevOps Engineer Should Know.

PowerShell is a strong tool with rich command utilities that can make life easier for developers and DevOps engineers. In this tutorial we will learn about important PowerShell commands, with practical examples to get you started.

Table of Content

  1. What is PowerShell ?
  2. Prerequisites
  3. Getting Started with PowerShell commands
  4. Wrapping Up

What is PowerShell ?

PowerShell is a command line shell which helps in automating various tasks, allows you to run scripts and helps you manage a variety of configurations. PowerShell runs on Windows, Linux and macOS.

PowerShell is built on the .NET Common Language Runtime (CLR). It currently uses .NET 5.0 as its runtime.

Features of PowerShell

  • It provides tab completion
  • It works with .NET Framework objects
  • It allows pipelines of commands
  • It has built-in support for various file formats such as JSON, CSV and XML

Prerequisites

Getting Started with PowerShell commands

PowerShell is a command line shell and utility. There are tons of commands already built into PowerShell, and these commands are known as cmdlets.

  • There are mainly three types of commands in PowerShell:
    • Alias
    • cmdlets
    • Function
  • To check the current version of PowerShell
$PSVersionTable
  • To check the execution policy of PowerShell.
    • Restricted indicates that users are not allowed to run scripts until the restriction is removed.
Get-ExecutionPolicy
  • To update the execution policy of PowerShell.
    • The RemoteSigned policy allows users to run scripts.
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned    # Run as Administrator
  • To Check all the commands on PowerShell
Get-Command
  • To get help with command execution and about the command on Powershell
Get-Help
  • To check the status of Windows32 time service
Get-Service -Name w32time
  • To check the short form (alias) of PowerShell commands, use Get-Alias.
Get-Alias -Name gcm
Get-Alias -Name gm
  • To check the Folder structure and files under the folder.
 Get-ChildItem -Path C:\
  • To open system logs using PowerShell command
Show-EventLog
  • To check specific details of process such as chrome browser
 Get-Process chrome
  • To get content of a particular file
Get-Content .\.gitignore
  • To get drives in the current session
Get-PSDrive
  • To remove a particular file or folder using the following command.
Remove-Item .\date.txt

Wrapping up

This was a pretty straightforward tutorial covering basic PowerShell commands. We mainly covered Get-Command, Get-Service and other cmdlets that can be used with PowerShell. Hope this was a useful tutorial to get you started with running commands on PowerShell.

How to Install PowerShell 7.1.3 on Ubuntu and Windows Machine Step by Step.

With so many Windows and Linux administrators in the world, automation has always been a top requirement. PowerShell is one of the most widely used command line shells and gives you a strong ability to perform tasks on remote operating systems very easily.

In this tutorial we will go through the basic definition of PowerShell, its benefits and features, and finally how to install the latest PowerShell on both Windows and Ubuntu machines.

Table of content

  1. What is PowerShell?
  2. Working with PowerShell
  3. Install PowerShell 7.1.3 on Windows Machine
  4. How to Install PowerShell 7.1.3 on Ubuntu Machine
  5. Conclusion

What is PowerShell?

PowerShell is a command line shell which helps in automating various tasks, allows you to run scripts and helps you manage a variety of configurations. PowerShell runs on Windows, Linux and macOS.

PowerShell is built on the .NET Common Language Runtime (CLR). It currently uses .NET 5.0 as its runtime.

Features of PowerShell

  • It provides tab completion
  • It works with .NET Framework objects
  • It allows pipelines of commands
  • It has built-in support for various file formats such as JSON, CSV and XML

Working with PowerShell

PowerShell is a command line shell which was originally meant for Windows automation, but it has grown widely and gained lots of features and benefits. Let's check out some of the key ones.

  • PowerShell can be used for cloud management, such as retrieving or deploying resources.
  • PowerShell can be used with continuous integration and continuous deployment (CI/CD) pipelines.
  • PowerShell is now widely used by DevOps and SysOps engineers.
  • PowerShell comes with hundreds of preinstalled commands.
  • PowerShell commands are called cmdlets.

To check the version of PowerShell (there are various commands for this), let's run the following:

$PSVersionTable.PSVersion

Install PowerShell 7.1.3 on Windows Machine

By default, Windows PowerShell is already present on a Windows machine. To verify, click on the Start menu and look for PowerShell.

  • Verify the current version of PowerShell by running the following command.
Get-Host | Select-Object Version
  • Download the PowerShell 7.1.3 binary (zip) release and extract it on the desktop.
  • Execute pwsh.exe.
  • Now you should see PowerShell version 7.1.3 when you check the version again.
  • Let's verify PowerShell by invoking Get-Command.

How to Install PowerShell 7.1.3 on Ubuntu Machine

We will install PowerShell on Ubuntu 18.04 via the package repository. So let's dive in and start.

  • Update the list of packages
sudo apt-get update
  • Install pre-requisite packages.
sudo apt-get install -y wget apt-transport-https software-properties-common
  • Download the Microsoft repository GPG keys
wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb

The apt software repositories are defined in the /etc/apt/sources.list file or in separate files under the /etc/apt/sources.list.d/ directory.
  • Register the Microsoft repository GPG keys. You will notice that as soon as we run the command below, a repository file is added inside the /etc/apt/sources.list.d directory.
sudo dpkg -i packages-microsoft-prod.deb
  • Update the Repository again
sudo apt-get update
  • Enable the “universe” repositories
sudo add-apt-repository universe
  • Install PowerShell
sudo apt-get install -y powershell
  • Start PowerShell
pwsh
  • Lets verify PowerShell by invoking the Get-Command

Conclusion

This tutorial was pretty straightforward and meant to get you started with PowerShell. In it we defined what PowerShell is and what its benefits are, and then installed the latest PowerShell 7.1.3 on both Ubuntu and Windows machines. Hope this tutorial helps you with your PowerShell setup, and please share it if you liked it.

How to Create Dockerfile step by step and Build Docker Images using Dockerfile

There were days when an organization would procure a physical server and a system administrator was asked to make the system ready: installing the OS, adding software, configuring the network, and finally deploying the applications, all of which took months.

Now the same work can be done in literally five minutes, by launching Docker containers built from a Dockerfile, the layer-based Docker image build file. If you would like to know more, follow along.

In this tutorial we will learn everything about Dockerfiles: how to create a Dockerfile and the commands used inside it, also known as Docker instructions. Such a Dockerfile can then be used to create a customized Docker image. Let's jump in and understand each bit of it.

Table of content

  1. What is Dockerfile?
  2. Prerequisites
  3. How to Create Dockerfile ( Dockerfile commands or Dockerfile Instructions)
  4. How to build a Docker Image and run a container using Dockerfile
  5. Conclusion

What is Dockerfile?

A Dockerfile is used to create a customized Docker image on top of a base Docker image. It is a text file that contains all the commands needed to build or assemble a new Docker image. Using the docker build command we can create new customized Docker images; each instruction adds a layer on top of the base image. Using the newly built Docker image we can run containers in the usual way.


Prerequisites

  • You must have an Ubuntu machine, preferably version 18.04 or later; if you don't have one you can create an EC2 instance in an AWS account.
  • Docker must be installed on the Ubuntu machine. If you don't have it, follow here.

How to Create Dockerfile ( Dockerfile commands)

  • There are two forms in which Dockerfile instructions can be written:
    • Shell form: <instruction> command
    • Exec form: <instruction> ["executable", "param1", "param2"]
# Shell form
ENV name John Dow
ENTRYPOINT echo "Hello, $name"
# exec form
RUN ["apt-get", "install", "python3"]
CMD ["/bin/echo", "Hello world"]
ENTRYPOINT ["/bin/echo", "Hello world"]
  • To build a Docker image from a Dockerfile:
docker build .
# or point to a specific Dockerfile
docker build -f /path-of-Docker-file .
  • Environmental variables inside Docker file can be written as $var_name or ${var_name}
WORKDIR ${HOME}  # This is equivalent to WORKDIR ~
ADD . $HOME      # This is equivalent to ADD . ~
  • The FROM command is used when we need to build a new Docker image on top of a base image.
    • The commands below set a base image; the first form uses a build argument (which must be declared with ARG before FROM), the second pins ubuntu:14.04.
ARG CODE_VERSION
FROM base:${CODE_VERSION}

FROM ubuntu:14.04
  • The RUN command is executed while building the image; it runs on top of the current image and creates a new layer. You can have multiple RUN commands in a Dockerfile.
RUN echo $VERSION
# RUN <command> (shell form)
# RUN ["executable", "param1", "param2"] (exec form)
  • The ADD command copies files from the host (build context) into the image.
    • The command below adds a file from the folder directory on the host to the container's /etc directory.
ADD folder/file.txt /etc/
  • The CMD command sets the default command to run if you don't specify any command when starting a container.
    • It can be overridden by the user passing an argument while running the container.
    • If you specify multiple CMD commands, only the last one takes effect.
CMD ["Bash"]

EXAMPLE

  • Let's assume a minimal Dockerfile containing the following code (a FROM line is required as the base image):
FROM ubuntu:14.04
CMD ["echo", "Hello World"]
  • Lets create a docker Image
docker build . 
  • Run a container to see CMD command actions
sudo docker run [image_name]
  • Check the Output of the command
O/p:  Hello World
  • Run a container with an argument to see CMD command actions
sudo docker run [image_name] hostname
  • Check the Output of the command
O/P: 067687387283 # Which is containers hostname
  • MAINTAINER allows you to add author details (it is deprecated in newer Docker versions in favour of a maintainer LABEL).
MAINTAINER support@automateinfra.com
  • EXPOSE informs Docker about the port the container listens on.
    • Below we are documenting that the container listens on port 8080.
EXPOSE 8080
  • The ENV command sets an environment variable in the new container.
    • Below we are setting the HOME environment variable to /root.
ENV HOME /root
  • The USER command sets the default user within the container.
USER ansible
  • The VOLUME command creates a mount point for a volume that can be shared among containers or with the host machine.
VOLUME ["/var/www", "/var/log/apache2", "/etc/apache2"]
  • The WORKDIR command sets the default working directory for the container.
WORKDIR app/
  • The ARG command defines a variable that users can pass at build time with the docker build command.
    • Syntax: --build-arg <varname>=<value>
# In the Dockerfile:
ARG username
# On the command line:
docker build --build-arg username=automateinfra .
  • The LABEL instruction adds metadata to an image using key-value pairs.
LABEL maintainer="support@automateinfra.com"
  • The SHELL command overrides the default shell used for the shell form of instructions.
    • On Linux the default shell is ["/bin/sh", "-c"]; on Windows it is ["cmd", "/S", "/C"].

# Without SHELL, this is executed as: cmd /S /C powershell -command Write-Host default
RUN powershell -command Write-Host default

# After overriding the shell, this is executed as: powershell -command Write-Host hello
SHELL ["powershell", "-command"]
RUN Write-Host hello
  • ENTRYPOINT is also used to set the command to run, but with a difference from CMD.
    • With ENTRYPOINT, a command line argument does not override the command; it is appended to it as a parameter.

EXAMPLE

  • Let's assume a minimal Dockerfile containing the following code (again with a FROM line as the base image):
FROM ubuntu:14.04
ENTRYPOINT ["echo", "Hello World"]
  • Lets create a docker Image
docker build . 
  • Run a container to see ENTRYPOINTcommand actions
sudo docker run [image_name]
  • Check the Output of the command
O/p:  Hello World
  • Run a container with an argument to see ENTRYPOINT command actions
sudo docker run [image_name] parameter
  • Check the Output of the command
O/P: Hello World parameter

How to Create Docker Image and run a container using Dockerfile

Now we should be comfortable with how a Dockerfile is created using the different instructions. Let's dive into some examples to get you started.

EXAMPLE 1

  • Create a folder under opt directory and name it as dockerfile-demo1
cd /opt
mkdir dockerfile-demo1
cd dockerfile-demo1
  • Create a Dockerfile with your favorite editor
vi Dockerfile
  • Commands which we will use in the Dockerfile:
    • FROM: sets the base image as ubuntu
    • RUN: runs the following commands in the image
    • ADD: adds a file from a folder
    • WORKDIR: sets the working directory
    • ENV: sets an environment variable
    • CMD: runs a command when the container starts
  • Paste the below content
FROM ubuntu:14.04
RUN \
    apt-get -y update && \
    apt-get -y upgrade && \
    apt-get -y install git curl unzip man wget telnet
ADD folder/.bashrc /root/.bashrc
WORKDIR /root
ENV HOME /root
CMD ["bash"]
  • Now, build a Docker Image using the following command
 docker build -t image1 .
  • Lets verify the Docker Image by running the following command.
docker images
  • Now, its time to check if Docker Image is successfully working . So lets run a container and then verify all the Dockerfile commands inside the container.
docker run -i -t 5d983653b8f4
Looks great: we can see that all the commands we used in the Dockerfile were executed and the Docker image was created. We tested this on a container built from that image.
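As an aside, if you ever want to drive the same build-and-run flow from a script instead of the CLI, the Docker SDK for Python is one way to do it. A minimal sketch, assuming the docker Python package is installed, the Docker daemon is running, and the Dockerfile above is in the current directory (the tag is a placeholder):

import docker

client = docker.from_env()  # talks to the local Docker daemon

# Build an image from the Dockerfile in the current directory (placeholder tag)
image, build_logs = client.images.build(path=".", tag="dockerfile-demo1")

# Run a container from the image and print its output
output = client.containers.run(image.id, command="echo hello from the container", remove=True)
print(output.decode())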

EXAMPLE 2

  • Create a folder under opt directory and name it as dockerfile-demo2
cd /opt
mkdir dockerfile-demo2
cd dockerfile-demo2
  • Create a Dockerfile with your favorite editor
vi Dockerfile
  • Paste the below content
FROM ubuntu:14.04


ARG LABEL_NAME
LABEL org.label-schema.name="$LABEL_NAME"
SHELL ["/bin/sh", "-c"]


RUN apt-get update && \
    apt-get install -y sudo curl git gcc make openssl libssl-dev libbz2-dev libreadline-dev libsqlite3-dev zlib1g-dev libffi-dev


USER ubuntu
WORKDIR /home/ubuntu


ENV LANG en_US.UTF-8
CMD ["echo", "Hello World"]


  • Now, build a Docker Image using the following command
 docker build --build-arg LABEL_NAME=mylabel  -t imagetwo .
  • Lets verify the Docker Image by running the following command.
docker images
  • Now, its time to check if Docker Image is successfully working . So lets run a container and then verify all the Dockerfile commands inside the container.
docker run -i -t 2716c9e6c4af
  • The CMD command ran successfully.
  • As we defined a custom CMD, the container prints the echo output and exits. Let's go inside the container and check the other details:
    • The user is ubuntu
    • The working directory is /home/ubuntu
    • curl is installed
    • ENV LANG is also set
docker run -i -t 2716c9e6c4af /bin/bash
  • Finally, if you want to check the LABEL command, you can see it on the host by inspecting the Docker image.
docker inspect 2716c9e6c4af

Looks great: we can see that all the commands we used in the Dockerfile were executed and the Docker image was created. We tested this on a container built from that image.

Conclusion

In this tutorial we learnt in depth the commands used inside a Dockerfile to build a Docker image, covering several of them with examples in the demonstration.

We also learnt how to build a Docker image, run containers, and verify that those commands were executed successfully. The Dockerfile is a very important concept for building new Docker images on top of a base image.

Now you're ready to create a Dockerfile, build an image from it and run containers. Hope this tutorial helps you with Docker concepts. Please share if you like it.

How to create Node.js Docker Image and Push to Docker Hub using Jenkins Pipeline

Creating an application on Docker is a huge benefit because of its lightweight technology and security. Docker images are stored safely on Docker Hub. But how can we create Docker images and push them to Docker Hub automatically? It's possible with none other than Jenkins.

In this tutorial we will learn how to create a Docker image for a Node.js application and push it to Docker Hub using Jenkins.

Table of content

  1. What is Jenkins Pipeline?
  2. What is Node.js?
  3. Prerequisites
  4. How to Install node.js and node.js framework on ubuntu machine
  5. Create Node.js Application
  6. Create Docker file for Node.js application
  7. Push the code and configuration to GIT Repository
  8. Create Jenkins file for creating a docker image of Node.js application and pushing to Docker hub
  9. Configure Jenkins to Deploy Docker Image and Push to Docker Hub
  10. Conclusion

What is Jenkins Pipeline?

Jenkins Pipeline is a group of plugins which helps you deliver a complete continuous delivery pipeline in Jenkins. The Jenkins Pipeline plugin is automatically installed when installing Jenkins with the suggested plugins. A pipeline covers everything from building the code to deploying the software right up to the customer. Jenkins Pipeline lets you write complex operations and code deployments as code with a DSL (domain specific language), in a text file called a Jenkinsfile which is checked into the repository.

  • Benefits of Jenkins Pipeline
    • The pipeline is written as code, which is easier to maintain and review.
    • If Jenkins stops, you can still continue working on the Jenkinsfile.
    • With code capabilities you can add waiting, approvals, stops and many other functionalities.
    • It supports various extensions and plugins.

Related: the-ultimate-guide-getting-started-with-Jenkins-pipeline

What is Node.js?

Node.js is an open source JavaScript runtime environment. Now, what is JavaScript? Basically, JavaScript is a language used together with other languages to create web pages and add dynamic features such as rollovers and graphics.

Node.js runs as a single process without wasting much memory or CPU and never blocks any threads or processes, which is why its performance is very efficient. Node.js also allows multiple connections at the same time.

With Node.js, JavaScript developers gain a big advantage, as they can now build apps using it on both the frontend and the backend.

Building applications that run in the browser is a completely different story from creating a Node.js application, although both use the JavaScript language.

Prerequisites

  • You must have an Ubuntu machine, preferably version 18.04 or later; if you don't have one you can create an EC2 instance in an AWS account.
  • Docker must be installed on the Ubuntu machine.
  • Make sure you have a GitHub account and a repository created. If you don't have one, follow here.

How to Install node.js and node.js framework on ubuntu machine

  • Create a folder under opt directory
cd /opt
mkdir nodejs-jenkins
cd nodejs-jenkins
  • Install node.js on ubuntu machine
sudo apt install nodejs
  • Install the Node.js package manager (npm). Packages you install later will be placed in a node_modules directory inside the same folder.
sudo apt install npm
  • Initialize the Node.js project. This command will generate a package.json file containing the project metadata.
npm init

The package.json created after initializing the project will list all the dependencies required to run the application. Let us add one highly recommended dependency, the Express web framework:

npm install express --save

Create Node.js Application

  • Create the Node.js application. Let's create a file named main.js in the same folder, /opt/nodejs-jenkins.
var express = require('express')    //Load express module with `require` directive
var app = express() 

//Define request response in root URL (/)
app.get('/', function (req, res) {
  res.send('Hello Welcome to Automateinfra.com')
})


app.listen(8081, function () {
  console.log('app listening on port 8081!')
})

Create Dockerfile for Node.js application

A Dockerfile is used to create a customized Docker image on top of a base Docker image. It is a text file that contains all the commands needed to build or assemble a new Docker image. Using the docker build command we can create new customized Docker images; each instruction adds a layer on top of the base image. Using the newly built Docker image we can run containers in the usual way.

  • Create the Dockerfile under the same folder, /opt/nodejs-jenkins.
# Sets the base image
FROM node:7

RUN mkdir -p /app
# Set the working directory in the container
WORKDIR /app
# Copy the dependency manifest to the working directory
COPY package.json /app
# Install dependencies
RUN npm install
# Copy the content of the local source directory to the working directory
COPY . /app
# main.js listens on port 8081
EXPOSE 8081
# Note: requires a "start" script (for example "node main.js") in package.json
CMD ["npm", "run", "start"]
  • Verify this Docker file on ubuntu machine by running the following command.
docker build .

Push the code and configuration to GIT Repository

  • Now we are ready with our code and configurations as below.
    • Dockerfile
    • main.js
    • package.json
    • node_modules

Now push all the code into the Git repository by performing the steps below.

  • Initialize your new repository in the same directory /opt/nodejs-jenkins
git init
  • Add the file in git repository using the command in the same directory /opt/nodejs-jenkins
git add .
  • Check the status of the git repository using the following command in the same directory /opt/nodejs-jenkins.
git status
  • Commit your changes in git repository using the command in the same directory /opt/nodejs-jenkins
 git commit -m "MY FIRST COMMIT"
  • Add the remote repository which we created earlier as a origin in the same directory /opt/nodejs-jenkins
git remote add origin https://github.com/Engineercloud/nodejs-jenkins.git
  • Push the changes in the remote branch ( Enter your credentials when prompted)
git push -u origin master
  • Verify the code on GIT HUB by visiting the repository link

Create Jenkins file for creating a docker image of Node.js application and pushing to dockerhub

  • Create a file named Jenkinsfile with your favorite editor and paste the content below.
    • Make sure to change sXXXXXXX410/dockerdemo to your own Docker Hub username and repository name.
node {
    def app
    stage('clone repository') {
        checkout scm
    }
    stage('Build docker Image') {
        app = docker.build("sXXXXX410/dockerdemo")
    }
    stage('Test Image') {
        app.inside {
            sh 'echo "TEST PASSED"'
        }
    }
    stage('Push Image') {
        docker.withRegistry('https://registry.hub.docker.com', 'git') {
            app.push("${env.BUILD_NUMBER}")
            app.push("latest")
        }
    }
}
  • Now push this file to GitHub as well using git commands, or simply create the file directly in the repository. Finally the repository should look something like this.

Configure Jenkins to Deploy Docker Image and Push to Docker Hub

  • Assuming you have Jenkins installed.
  • Now, Create a multibranch pipeline Jenkins Job and provide it a name as nodejs-image-dockerhub by clicking on new item and selecting multibranch pipeline on the Dashboard.
  • Now click on the nodejs-image-dockerhub job, click on Configure, provide the Git URL and then hit Save.
  • As we will connect to Docker Hub, we need to add Docker Hub credentials. Click on Dashboard >> Manage Jenkins >> Manage Credentials >> click on global >> Add credentials.
  • Now go to the Jenkins server and make sure the jenkins user is added to the docker group.
sudo groupadd docker
sudo usermod -a -G docker jenkins
service docker restart
  • Make sure the jenkins user has sudo permissions.
sudo vi  /etc/sudoers

jenkins ALL=(ALL) NOPASSWD: ALL

Now we are all set to run our first Jenkins pipeline. Click on Scan Multibranch Pipeline Now and you will see your branch name appear. Then click on the branch and click on Build Now.

  • Now verify that the Docker image has been successfully pushed to Docker Hub by visiting the Docker Hub repository.

Conclusion

In this tutorial we covered what a Jenkins pipeline is and what Node.js is. We also demonstrated how to create a Dockerfile, a Jenkinsfile and a Node.js application, pushed them to a repository, and finally used Jenkins to build the Docker image and push it to Docker Hub.

This is an in-depth and very useful post if you want to work with Docker and Jenkins together for automation. Hope you liked it, and if so please share it.

The Ultimate Guide : Getting Started with Jenkins Pipeline

Application deployment is a daily task for developers and operations teams. You can handle deployments with plain Jenkins jobs, but for long deployment processes you need a way to make things simple and deploy in a structured way.

To bring simplicity to the deployment process, Jenkins Pipelines are your best friend; they make the process flow smoothly from one stage to the next. Having said that, in this tutorial we will cover the basics of CI/CD and in-depth knowledge of Jenkins Pipelines and the Jenkinsfile.

Table of content

  1. What is CI/CD ( Continuous Integration and Continuous deployments)?
  2. What is Jenkins Pipeline?
  3. How to create a basic Jenkins Pipeline
  4. Handling Parameters in Jenkins Pipeline
  5. How to work with Input Parameters
  6. Conclusion

What is CI/CD ( Continuous Integration and Continuous deployments)

With CI/CD, products are delivered to clients in a smart and effective way using different automated stages. CI/CD saves tons of time for both developers and operations teams, and there is much less chance of human error. CI/CD stands for continuous integration and continuous deployment; it automates everything from integration to deployment.

Continuous Integration

CI, also known as continuous integration, is primarily used by developers. Successful continuous integration means the developer's code is built, tested and then pushed to a shared repository whenever there is a change in the code.

Developers push code changes every day, multiple times a day. For every push to the repository, you can create a set of scripts to build and test your application automatically. These scripts help decrease the chances that you introduce errors in your application.

This practice is known as Continuous Integration. Each change submitted to an application, even to development branches, is built and tested automatically and continuously.

Continuous Delivery

Continuous delivery is a step beyond continuous integration. In this case the application is not only continuously built and tested each time the code is pushed, it is also packaged so that it can be deployed at any time. However, with continuous delivery you trigger the deployments manually.

Continuous delivery checks the code automatically, but it requires human intervention to deploy the changes.

Continuous Deployment

Continuous deployment is again a step beyond continuous delivery; the only difference between deployment and delivery is that deployment automatically takes the code from the shared repository and deploys the changes to environments such as production, where customers can see them. This is the final stage of the CI/CD pipeline. With continuous deployment it takes hardly a few minutes to deploy the code to an environment, and it depends on heavy pre-deployment automated testing.

Examples of CI/CD Platform:

  • Spinnaker and Screwdriver are platforms built for CD
  • GitLab, Bamboo, CircleCI, Travis CI and GoCD are platforms built for CI/CD

What is Jenkins Pipeline?

Jenkins Pipeline is a group of plugins which helps you deliver a complete continuous delivery pipeline in Jenkins. The Jenkins Pipeline plugin is automatically installed when installing Jenkins with the suggested plugins. A pipeline covers everything from building the code to deploying the software right up to the customer. Jenkins Pipeline lets you write complex operations and code deployments as code with a DSL (domain specific language), in a text file called a Jenkinsfile which is checked into the repository.

  • Benefits of Jenkins Pipeline
    • The pipeline is written as code, which is easier to maintain and review.
    • If Jenkins stops, you can still continue working on the Jenkinsfile.
    • With code capabilities you can add waiting, approvals, stops and many other functionalities.
    • It supports various extensions and plugins.
  • A Jenkinsfile can be written with two syntaxes (DSL: domain specific language):
    • Declarative Pipeline: newer, and writing code with it is much easier
    • Scripted Pipeline: older, and writing code with it is a little more complicated
  • Scripted pipeline syntax can be generated from
http://Jenkins-server:8080/pipeline-syntax/
  • Declarative Pipeline syntax can be generated from
http://Jenkins-server:8080/directive-generator/

  • Jenkins Pipeline supports various environment variables, such as:
    • BUILD_NUMBER: the build number
    • BUILD_TAG: the tag, which is jenkins-${JOB_NAME}-${BUILD_NUMBER}
    • BUILD_URL: the URL of the build result
    • JAVA_HOME: the path of the Java home
    • NODE_NAME: the name of the node the build runs on; for example, it is master for the Jenkins controller
    • JOB_NAME: the name of the job
  • You can set the environmental variables dynamically in pipeline as well
    environment {
        AWS_ACCESS_KEY_ID     = credentials('jenkins-aws-secret-key-id')
        AWS_SECRET_ACCESS_KEY = credentials('jenkins-aws-secret-access-key')
        MY_KUBECONFIG = credentials('my-kubeconfig')
   }
  • Let's take an example Jenkinsfile and understand the basic terms one by one:
    • pipeline: the Declarative Pipeline-specific block that wraps the whole definition
    • agent: tells Jenkins where to allocate an executor or node, for example a Jenkins agent (slave)
    • stages: contains the multiple tasks the pipeline needs to perform (it can also hold a single task)
    • stage: one single task under stages
    • steps: the steps which need to be executed in every stage
    • sh: one of the steps, which executes a shell command
pipeline {
   agent any 
    stages {
        stage('Testing the Jenkins Version') {
            steps {
                echo 'Hello, Jenkins'
                sh 'service jenkins status'
               //  sh("kubectl --kubeconfig $MY_KUBECONFIG get pods")
            }
        }
    }
}

How to create a basic Jenkins Pipeline

  • Install Jenkins on the ubuntu machine. Please find the steps to install Jenkins from here
  • Once you have Jenkins Machine , visit Jenkins URL and Navigate to New Item
  • Choose Pipeline from the option and provide it a name such as pipeline-demo and click OK
  • Now add a Description such as my demo pipeline and add a Pipeline script as below
pipeline {
   agent any 
    stages {
        stage('Testing the Jenkins Version') {
            steps {
                echo 'Hello, Jenkins'
                sh 'service jenkins status'
            }
        }
    }
}
  • Click on Save and finally click on Build Now.