Monitoring and Alerting with Prometheus

What is Prometheus?

Prometheus is a powerful, open-source monitoring system that collects metrics from services and stores them in a time-series database. Written in Go, it records real-time metrics, evaluates alerting rules, and supports powerful queries and visualization. Prometheus works very well with Grafana for dashboards and alerting notifications.

Prometheus includes a flexible query language (PromQL). Every time series is identified by a metric name and a set of key-value pairs called labels.

# Notation of time series
<metric name> {<label name>=<label value>,.....} 
# Example
node_boot_time {instance="localhost:9000",job="node_exporter"}
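
As a quick illustration of how such a labeled time series can be produced from your own code, below is a minimal sketch using the third-party prometheus_client package (pip install prometheus_client); the metric name, port, and label values simply mirror the example above and are not part of any standard exporter.

# A minimal sketch using the prometheus_client package (pip install prometheus_client).
# It exposes a gauge with instance/job-style labels so that Prometheus can scrape
# a time series shaped like the notation above from http://localhost:9000/metrics.
import time
from prometheus_client import Gauge, start_http_server

# The metric name plus its labels (key-value pairs) identify one time series.
boot_time = Gauge('node_boot_time', 'Node boot time in seconds since the epoch',
                  ['instance', 'job'])

if __name__ == '__main__':
    start_http_server(9000)   # serve /metrics on port 9000
    boot_time.labels(instance='localhost:9000', job='node_exporter').set(time.time())
    while True:
        time.sleep(5)         # keep the exporter running so Prometheus can scrape it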

Prometheus Architecture


How to Install Maven and Setup Maven in Jenkins

What is Apache Maven

Maven is a build tool primarily used for Java projects. It can also be used to build and manage projects written in other languages such as C#, Ruby, and Scala. The Maven project is hosted by the Apache Software Foundation, where it was formerly part of the Jakarta Project.

The most powerful feature of Maven is that it automatically downloads the project dependency libraries defined in the pom.xml. It also lets you configure the project build lifecycle, for example invoking JUnit for tests or SonarQube for static analysis.

Prerequisites

  • Java Development Kit (JDK) and Eclipse: Maven 3.3+ requires JDK 1.7 or above to execute.
  • Memory: No minimum requirement.
  • Disk: Approximately 10MB is required for the Maven installation itself. In addition, disk space will be used for your local Maven repository; its size varies with usage, but expect at least 500MB.
  • Operating System: No minimum requirement. Start-up scripts are included as shell scripts and Windows batch files.

Install Apache Maven

  • First, unzip the Apache Maven archive that you downloaded on your machine.
  • Next, set the environment variables for Apache Maven by adding M2_HOME and MAVEN_HOME variables pointing to the Maven installation directory.
  • Next, append the Maven bin directory to the PATH variable.
  • Next, check the Maven version.
  • Next, configure the local Maven repository: create a maven_repo folder, then open the settings.xml file inside the conf folder and update the localRepository path as shown below.

Setup Maven Project

  • The first step is to open Eclipse, navigate to New Project, and then look for Maven as shown below.
  • Select Create a simple project.
  • Provide the group ID and artifact ID as shown below.
  • It will create the new folder in your Eclipse workspace as shown below.
  • Next, create a Java package and name it mycalculator_package.
  • Create a new class mycalculator inside it.
  • After clicking on Finish you will see the screen below.
  • Next, add the methods in the class.
  • Next, add the methods in the class.
package mycalculator_package;

public class mycalculator {

    // Method to add two numbers
    public int add(int a, int b) {
        return a + b;
    }

    // Method to multiply two numbers
    public int multiple(int a, int b) {
        return a * b;
    }

    // Method to subtract two numbers
    public int subtract(int a, int b) {
        return a - b;
    }

    // Method to divide two numbers
    public int divide(int a, int b) {
        return a / b;
    }
}
  • Similarly, create a JUnit test class.
  • Next, add the JUnit 4 dependency to the pom.xml so the test class can use it, as shown below.
<dependencies>
<!-- https://mvnrepository.com/artifact/junit/junit -->
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.13.2</version>
    <scope>test</scope>
</dependency>
</dependencies>  
  • Now, add the code below to mycalculatortest.java.
package mycalculatortest_package;
import static org.junit.Assert.*;
import org.junit.Test;
import mycalculator_package.mycalculator;
public class mycalculatortest {

	@Test
	public void addtest() {
		mycalculator calc = new mycalculator();
		assertEquals(100, calc.add(80, 20));
	}
	@Test
	public void subtracttest() {
		mycalculator calc = new mycalculator();
		assertEquals(60, calc.subtract(80, 20));
	}
	@Test
	public void multipletest() {
		mycalculator calc = new mycalculator();
		assertEquals(100, calc.multiple(10, 10));
	}

}
  • Next, add the Maven compiler version in the POM file as we are using JDK 1.8, and then go to Project > Maven > Update Project as shown below.
<properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
</properties>  
  • Next, run the Maven project as shown below by navigating to Run As and then Maven Build. All your Java source code remains in the src folder and all the compiled classes are present in the target folder.
  • Further, run mvn clean and mvn clean test: mvn clean removes all files generated by the previous build (the target folder), and mvn clean test cleans and then runs the tests.
  • Next, test some of the other commands locally on your machine, such as mvn compile, which compiles the source code of the project.
  • mvn test-compile: Compiles the test source code.
  • mvn test: Runs tests for the project.
  • mvn package: Creates a JAR or WAR file for the project to convert it into a distributable format.
  • mvn install: Installs the packaged JAR/WAR file into the local repository.
  • mvn deploy: Copies the packaged JAR/WAR file to the remote repository after compiling, running tests, and building the project.

Setting up Maven in Jenkins WAY 1

  • Create a new Job in Jenkins and call it Maven-JOB

  • Select top level Maven targets in the Build Step
  • Next, add clean test package in the Goals field and provide the location of the pom.xml file.
  • Next, trigger the Jenkins job and you should see the project compile successfully.

Setting up Maven in Jenkins WAY 2

  • Install Maven Plugin using Manage Jenkins
  • Now create another job where you will notice the Maven Project option as shown below. Select Maven Project and enter an item name as maven-job2
  • Inside the Build TAB add the workspace path, which is the location where your pom file is located.
  • Make sure to add: Resolve Dependencies during Pom parsing in the Build step.
  • Next, navigate to Manage Jenkins, then Global Tool Configuration, and add Maven and JDK as shown below.
  • Next, run the Jenkins job and it should build successfully.

How to Automate XML, YML, and CSV files using Python

Reading and Writing a YML file using python

yaml and yml files use YAML, which is a superset of JSON. Automation tools such as Ansible use YAML-based files, referred to as playbooks, to define the actions you want to automate.

Working with YAML files in Python is fun, so let's get started. Python’s standard library does not include a YAML parser, so you need to install the PyYAML library. PyYAML is a YAML parser and emitter for Python.

  • Run the following command to install the PyYAML library in your favorite code editor terminal, such as Visual Studio Code.
pip install PyYAML
  • Next, create a folder named Python and under it create a simple YML file named apache.yml, paste the content below, and save it.
---
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum:
      name: httpd
      state: latest
  • Next, create another file in the same Python folder, name it read_write_yaml.py, and paste the Python code below.

The Python script below imports the yaml module to work with YAML files and the pprint module to produce nicely formatted output. Using the open() function it opens the apache.yml file and reads the data with yaml.safe_load(). Then yaml.dump() writes the data back to a file; when dump() is given a stream it returns None, which is why the second pprint() prints None.

import yaml
from pprint import pprint

with open('apache.yml', 'r') as new_file:
     verify_apache = yaml.safe_load(new_file)
pprint(verify_apache)

with open('apache.yml', 'w') as new_file2:
     verify_apache2 = yaml.dump(verify_apache, new_file2)
pprint(verify_apache2)

  • Execute the above python script using python command and you should see the below output.
[{'hosts': 'webservers',
  'remote_user': 'root',
  'tasks': [{'name': 'ensure apache is at the latest version',
             'yum': {'name': 'httpd', 'state': 'latest'}}],
  'vars': {'http_port': 80, 'max_clients': 200}}]
None

Reading and Writing a XML file using python

XML files are mostly used for structured data. Many web systems use XML to transfer data; one example is RSS (Really Simple Syndication) feeds, which help you find the latest updates on websites from various sources. Python offers an XML library.

  • Next, in the same Python folder create a simple XML file named book.xml, paste the content below, and save it. XML has a tree-like structure: the top element is known as the root and the rest are elements.
<?xml version="1.0"?>
<catalog>
   <book id="bk109">
      <author>Author1</author>
      <title>Automate Infra Part 2</title>
      <genre>Science Fiction</genre>
      <price>6.95</price>
      <publish_date>2000-11-02</publish_date>
      <description>book1</description>
   </book>
   <book id="bk112">
      <author>Author2</author>
      <title>Automate Infra Part 1</title>
      <genre>Computer</genre>
      <price>49.95</price>
      <publish_date>2001-04-16</publish_date>
      <description>book2</description>
   </book>
</catalog>

  • Next, create another file in the same Python folder, name it read_write_xml.py, and paste the Python code below.
  • In the script below, the xml.etree.ElementTree module is imported to work with XML files; it implements a simple and efficient API for parsing and creating XML data. The entire tree of book.xml is parsed and then the content inside it is printed.

    import xml.etree.ElementTree as ET
    tree = ET.parse('book.xml')    # parse the XML file into a tree
    root = tree.getroot()          # get the root element
    print(root.tag, root.attrib)

    for child in root:             # each child element and its attributes
        print(child.tag, child.attrib)

  • Execute the above python script using python command and you should see the below output.
  • O/P:
    
    catalog {}
    book {'id': 'bk109'}
    book {'id': 'bk112'}
    

    Reading and Writing a comma-separated values (CSV) file using python

    CSV is one of the most widely used formats for spreadsheet data. To work with these files in Python you need to import the csv module. Let's learn how to read and write data in CSV format.

  • Next, in the same Python folder create a CSV file named devops.csv, add content similar to what is shown below, and save it.
    • Next, create another file in the same Python folder, name it read_write_csv.py, and paste the Python code below.

    The script below uses the csv module to work with CSV files. When the script is executed, the open() function opens the CSV file, csv.reader() reads it, and the rows are printed according to the defined range.

    import csv
    
    with open('devops.csv' , 'r') as csv_file:
        read = csv.reader(csv_file,  delimiter=',')
        for _ in range(5):
            print(next(read))
    print(read)
    
    • Execute the above python script using python command and you should see the below output.
    OUTPUT 
    
    
    ['Date', ' PreviousUserCount', ' UserCountTotal', ' sitepage']
    ['02-01-2021', '61', '5336', ' automateinfra.com/blog']
    ['03-01-2021', '42', '5378', ' automateinfra.com/blog1']
    ['04-01-2021', '26', '5404', ' automateinfra.com/blog2']
    ['05-01-2021', '65', '5469', ' automateinfra.com/blog3']
    <_csv.reader object at 0x0336A370>
    

    Python – Pandas (Data Analysis 3rd Party Library)

    pandas provides the DataFrame, which acts like a data table, similar to a very powerful spreadsheet. If you want to work with rows and columns as in a spreadsheet, the DataFrame is the tool for you. Let's get started by installing it with pip install pandas.

    import pandas as pd
    df = pd.read_csv('devops.csv')
    
    print(type(df))
    print(df.head(4)) # See the top 4 rows of the devops.csv file
    print(df.describe()) # Statistical summary
    
    O/p:
    
    <class 'pandas.core.frame.DataFrame'>
             Date   PreviousUserCount   UserCountTotal                  sitepage
    0  02-01-2021                  61             5336    automateinfra.com/blog
    1  03-01-2021                  42             5378   automateinfra.com/blog1
    2  04-01-2021                  26             5404   automateinfra.com/blog2
    3  05-01-2021                  65             5469   automateinfra.com/blog3
            PreviousUserCount   UserCountTotal
    count            4.000000         4.000000
    mean            48.500000      5396.750000
    std             18.046237        55.721779
    min             26.000000      5336.000000
    25%             38.000000      5367.500000
    50%             51.500000      5391.000000
    75%             62.000000      5420.250000
    max             65.000000      5469.000000
    
    

    Python: Regular Expressions to Search Text (widely used and important)

    Two good examples of searching text (useful for many teams, such as analysis, HR, and sales):

    import re

    name_list = '''Ezra Sharma <esharma@automateinfra.com>,
       ...: Rostam Bat   <rostam@automateinfra.com>,
       ...: Chris Taylor <ctaylor@automateinfra.com,
       ...: Bobbi Baio <bbaio@automateinfra.com'''
    
    # Some commonly used ones are \w, which is equivalent to [a-zA-Z0-9_] and \d, which is equivalent to [0-9]. 
    # You can use the + modifier to match for multiple characters:
    
    print(re.search(r'Rostam', name_list))
    print(re.search('[RB]obb[yi]',  name_list))
    print(re.search(r'Chr[a-z][a-z]', name_list))
    print(re.search(r'[A-Za-z]+', name_list))
    print(re.search(r'[A-Za-z]{5}', name_list))
    print(re.search(r'[A-Za-z]{7}', name_list))
    print(re.search(r'[A-Za-z]+@[a-z]+\.[a-z]+', name_list))
    print(re.search(r'\w+', name_list))
    print(re.search(r'\w+\@\w+\.\w+', name_list))
    print(re.search(r'(\w+)\@(\w+)\.(\w+)', name_list))
    
    O/P:
    
    <re.Match object; span=(49, 55), match='Rostam'>
    <re.Match object; span=(147, 152), match='Bobbi'>
    <re.Match object; span=(98, 103), match='Chris'>
    <re.Match object; span=(0, 4), match='Ezra'>
    <re.Match object; span=(5, 10), match='Sharm'>
    <re.Match object; span=(13, 20), match='esharma'>
    <re.Match object; span=(13, 38), match='esharma@automateinfra.com'>
    <re.Match object; span=(0, 4), match='Ezra'>
    <re.Match object; span=(13, 38), match='esharma@automateinfra.com'>
    <re.Match object; span=(13, 38), match='esharma@automateinfra.com'>
    
    
    
    
    # <IP Address> <Client Id> <User Id> <Time> <Request> <Status> <Size>
    
    
    import re

    Line1 = '127.0.0.1 - Automateinfra1 [13/Nov/2021:14:43:30 -0800] "GET /assets/234 HTTP/1.0" 200 2326'
    access_log = '''
    127.0.0.1 - Automateinfra1 [13/Nov/2021:14:43:30 -0800] "GET /assets/234 HTTP/1.0" 200 2326
    127.0.0.2 - Automateinfra2 [13/Nov/2021:14:43:30 -0800] "GET /assets/235 HTTP/1.0" 200 2324
    127.0.0.3 - Automateinfra3 [13/Nov/2021:14:43:30 -0800] "GET /assets/236 HTTP/1.0" 200 2325
    '''
    
    count_ip = r'(?P<IP>\d+\.\d+\.\d+\.\d+)'
    count_time = r'(?P<Time>\d\d/\w{3}/\d{4}:\d{2}:\d{2}:\d{2})'
    count_clientid = r'(?P<User>".+")'
    count_request = r'(?P<Request>".+")'
    
    sol = re.search(r'(?P<IP>\d+\.\d+\.\d+\.\d+)', Line1 )
    print(sol.group('IP'))
    print(re.search(count_request , Line1))
    print(re.search(count_time , Line1))
    
    value = re.finditer(count_ip, access_log)
    for sol in value:
      print(sol.group('IP'))
    
    O/P:
    
    127.0.0.1
    <re.Match object; span=(56, 82), match='"GET /assets/234 HTTP/1.0"'>
    <re.Match object; span=(28, 48), match='13/Nov/2021:14:43:30'>
    127.0.0.1
    127.0.0.2
    127.0.0.3
    

    Python: Dealing with Large Files (file breaker and line breaker)

    Rather than loading the whole file into memory as you have done up until now, you can read one line at a time, process the line, and then move to the next. The lines are removed from memory automatically by Python’s garbage collector, freeing up memory.

    # LINE BREAKER 
    
    with open("devops.txt", mode="r") as mynewfile:       # for binary files such as PDFs use "rb"/"wb" modes instead
        with open("devops-corrected.txt", "w") as target_file:
            for line in mynewfile:
                target_file.write(line)        # copy the line into the target file
                print(line, end="")            # and echo it so you can see the progress
    
    o/p:
    
    Automateinfra.com
    automateinfra.com/blog
    automateinfra.com/solutions
    
    
    # FILE BREAKER with chunk of data with number of bytes 
    
    
    with open('book.xml' , 'rb') as sourcefile:
        while True:
            chunk = sourcefile.read(1024)  # break down in 1024 bytes
            if chunk:
                print(chunk)
            else:
                break
    O/P:
    
    b'<?xml version="1.0"?>\r\n<catalog>\r\n   <book id="bk109">\r\n      <author>Author1</author>\r\n      <title>Automate Infra Part 2</title>\r\n      <genre>Science Fiction</genre>\r\n      <price>6.95</price>\r\n    
      <publish_date>2000-11-02</publish_date>\r\n      <description>book1</description>\r\n   </book>\r\n   <book id="bk112">\r\n      <author>Author2</author>\r\n      <title>Automate Infra Part 1</title>\r\n      <genre>Computer</genre>\r\n      <price>49.95</price>\r\n      <publish_date>2001-04-16</publish_date>\r\n      <description>book2</description>\r\n   </book>\r\n</catalog>'
    

    Python Encryption: an important topic for working with files and data

    There are many times you need to encrypt text to ensure security. In addition to Python’s built-in package hashlib, there is a widely used third-party package called cryptography.

    HASHLIB: Uses hash functions based on the SHA1, SHA224, SHA384, SHA512, and RSA’s MD5 algorithms.

    CRYPTOGRAPHY:

    Symmetric key encryption: Based on shared keys. These algorithms include the Advanced Encryption Standard (AES), Blowfish, Data Encryption Standard (DES), Serpent, and Twofish.

    Asymmetric key encryption: Based on public keys (which are widely shared) and private keys (which are kept secret).

    # Encryption using HashLib
    
    import hashlib                  # Python built-in package
    line = "I like editing automateinfra.com"
    bline = line.encode()           # Convert the string into bytes
    print(bline)                    # Print the byte string
    
    algo = hashlib.md5()            # Create a hash object using the MD5 algorithm
    algo.update(bline)              # Feed the bytes into the hash
    print("Encrypted  text Message")
    print(algo.digest())            # Print the resulting digest
    
    
    # Encryption using Cryptography (Symmetric key encryption)
    
    
    from cryptography.fernet import Fernet  # Third-party package, so you need: pip install cryptography
    key = Fernet.generate_key()             # Generate the key
    print("Generating the keys ")
    print(key)                              # Print the key
     
    algo = Fernet(key)                      # Fernet uses AES under the hood with the generated key
    message = b"I definetely like Editing AutomateInfra.com"
    encrypted = algo.encrypt(message)
    print("Encrypted  text Message ")
    print(encrypted)
    print(algo.decrypt(encrypted))
    
    
    # Encryption using Cryptography (ASymmetric key encryption)
    
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.asymmetric import padding ,rsa
    from cryptography.hazmat.primitives import hashes
    
    private_key = rsa.generate_private_key(public_exponent=65537,key_size=4096,backend=default_backend())  # Generating the Private Key
    
    print(private_key)   # Printing  the Private Key
    public_key = private_key.public_key()   # Generating the Public Key
    print(public_key)    # Printing  the Public  Key
    message = b"I am equally liking Editing AutomateInfra.com"
    
    encrypted = public_key.encrypt(message,padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()), algorithm=hashes.SHA256() , label=None))
    print(encrypted)
    decrypted = private_key.decrypt(encrypted,padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()), algorithm=hashes.SHA256(), label=None))
    print(decrypted)
    
    
    O/P:
    
    b'I like editing automateinfra.com'
    Encrypted  text Message
    b'v\x84*\xe55\x01\xa4z\x05\xa2\xa2\xdb\xd1y\xa9\x07'
    Generating the keys
    b'7trCiXpGuCfEnXoIcsFfCGOw-u_Qkas0tv1lBM8xmQo='
    Encrypted  text Message
    b'gAAAAABgAJcPFj-aGttg8MRJQfRYGWyOWy44u-cLWGuDhqoyyvYP1uG4oQYms8BQMr4eExpv74LIZESGvpIUY88fE0_YQCQ32JH0DZsabLOAtc00QCwV8L51WktRjzUab0Fp3jnbOeb2'
    b'I definetely like Editing AutomateInfra.com'
    <cryptography.hazmat.backends.openssl.rsa._RSAPrivateKey object at 0x036491D8>
    <cryptography.hazmat.backends.openssl.rsa._RSAPublicKey object at 0x03850E38>
    b"\x8b\xec\xb0\x91\xec\xe7\x8d;\x11\xbclch\xbdVD@c\xd3J\x07'\xe9\x07\x15\x1c@=^\xd2h\xcaDL\x95\xea[\x0fv\x012\xed\xd5\xed\x0e\x9b\x93V2\x00\xba\x9c\x07\xba\x8b\xf3\xcb\x03M\xa8\xb1\x12ro\xae\xc0\xfb$\xf9\xcc\x85\xe8s\xfc`{\xfe{\x88\xd2\xc3\xffI\x90\xe3\xd2\x1e\x82\x95\xdfe<\xd5\r\x0b\xc4z\xc4\xf7\x00\xcfr\x07npm0\xd4\xc4\xa4>w\x9d]\xcf\xae7F\x91&\x93\xd5\xda\xcaR\x13A\x8ewB\xf6\xd9\xae\xce\xca\x8f\xd6\x91\x06&:\x00\xa0\x84\x05#,\x7fdA\x87\xb2\xe7\x1d\x8b*\xa15\xf8\xb0\x07\xa0n\x1e\xeaI\x02\xbaA\x88ut\x8e\x82<\xfe\xbfM\xe6F\xa3\xcc\xd4\x8b\x80PY\xb5\xd3\x14}C\xe2\x83j\xaf\x85\xa6\x9e\x19\xb2\xd9\xb8\xac\xa4\xfb\x1f\x0c\xce\x9d4\x82\x1e\xfd5\xb49\xa5\xbbL\x01~\x8fA\xee\r\xc7\x84\x9e\x0c\t\x15z\r\xfd]\x0b\xcfW\x01\xd2\x16\x17btc\xeaSl\xf5\xb0\x8a\xe2X\xe7\xa7a\xa7\xf7M\x01\xa2\x0b8\xd6\xf2\xc5c\xbf\xea\xe0\x80\x15\xde-\x98\xa1\xc8ud*\xbel2\xb5\xc8:\x92\xd5\r(_8\xbd\xcb\x80\xf1\x93\x83\xe2\x9f\xed\x82f\xd0\xb2\x8f\x1b\x9eMC\x07\xf9\x08\xb0\x00QA\xea\x93\xc7@&\x84\xff<\xde\x80@\xc8\xc6\x83O&%\x91r-\xb0\xef}\x18tU{C\xa6\x17\x97\x1b\x95g\xc5\x0e>{\xb0\x94a)\xbc)*Sq\x98\xad\xf3>\x04\x9b+x\x95&\xa6\xe6,\xb4~\xf2Y\x06,\xab'uq \x9f0\x7f\xb5\xd50\xbdp\xbb\xdf\x1c\xe9\xb1\xc4\x88y\nq\\\x85\x1e\xd8\x18M\x87\x1aU.\x918;\xcd\x10 \x9b\x11\xf9R\xd3\x8fz\xe8\xf6|C\xfb\x1f\xfd1\x19\x10:>\x1c\x06\x8e\xda\x98\xb2\xf3aa^\xa54\x03\xf8\x03\xc4\xe6\xd9mw\r\x8b\x96\xa2rJ\x03\xe7\xda\x0f\rJ-iPo!^\x8a\xdcg\x8c!L\xa4\xedY\xe5\x12\xdf\xe8\xe7\x0cE\xcd\xa2\xa2Gr\xc0\xe1\xa6\xc5\x9a\x9f\x07\x89\x84\x8b\xb7"
    b'I am equally liking Editing AutomateInfra.com'
    

    PYTHON OS MODULE:

    This module helps to connect with many low-level operating system calls and offers portability across operating systems such as Unix and Windows.

    import os   # Python Built in Package
    
    print(os.listdir('.'))   # List the entries in the current directory
    os.rename('automateinfra.txt', 'automateinfra_backup.txt')  # Rename the file
    os.chmod('automateinfra_backup.txt', 0o777)  # Change the file permissions (octal notation)
    os.mkdir('/tmp/automateinfra.pdf')  # Make a directory
    os.rmdir('/tmp/automateinfra.pdf')  # Remove the directory
    os.stat('b.txt')  # These stats include st_mode (the file type and permissions) and st_atime (the time the item was last accessed).
    
    
    cur_dir = os.getcwd()  # Get the current working directory.
    print(os.path.dirname(cur_dir))   # Returns the Parent Directory Path
    print(os.path.split(cur_dir))     # Gives structure from Parent Directory
    print(os.path.basename(cur_dir))  # Returns Base Directory 
    
    while os.path.basename(cur_dir):        # Keep going until the base name is empty (the drive root)
        cur_dir = os.path.dirname(cur_dir)  # Move up to the parent directory and print it
        print(cur_dir)
    
    O/P:
    
    
    C:\Users\AutomateInfra\Desktop\GIT\Python-Desktop
    ('C:\\Users\\AutomateInfra\\Desktop\\GIT\\Python-Desktop', 'Basics')
    Basics
    C:\Users\AutomateInfra\Desktop\GIT\Python-Desktop
    C:\Users\AutomateInfra\Desktop\GIT
    C:\Users\AutomateInfra\Desktop
    C:\Users\AutomateInfra
    C:\Users
    C:\
    
    import os
    
    
    # Check the current working directory
    
    file_name = "automateinfra.txt"
    file_path = os.path.join(os.getcwd(), file_name)  
    print(f"Checking {file_path}")
    if os.path.exists(file_path):
       print(file_path)
    
    # Check user home directory
    
    home_dir = os.path.expanduser("~/") #expanduser function to get the path to the user’s home directory.
    file_path = os.path.join(home_dir,file_name)
    print(f"Checking {file_path}")
    if os.path.exists(file_path):
       print(file_path)
    
    
    o/p:
    
    Checking C:\Users\Automateinfra\Desktop\GIT\Python-Desktop\Basics\automateinfra.txt
    C:\Users\Automateinfra\Desktop\GIT\Python-Desktop\Basics\automateinfra.txt
    Checking C:\Users\Automateinfra/automateinfra.txt
    
    

    The Ultimate Guide on API Testing with Complete Automation

    API Automation with Rest Assured library

    What is an API ?

    An API is an interface that allows communication between a client and a server, simplifying the building of client-server software.

    An API is software that allows two applications to talk to each other. Each time you use an app like Facebook, send an instant message, or check the weather on your phone, you’re using an API.

    When you use an application on your mobile phone, the application connects to the Internet and sends data to a server. The server then retrieves that data, interprets it, performs the necessary actions and sends it back to your phone. The application then interprets that data and presents you with the information you wanted in a readable way. This is all possible with an API.

    Difference between Types of API’s [ SOAP v/s REST ]

    REST: Representational State Transfer. It is a lightweight and scalable service style built on the REST architecture. It uses the HTTP protocol and is based on an architectural pattern.

    Elements of REST API:

    • Method: GET, POST, PUT, DELETE (see the Python requests sketch after this list)
      • POST – Used to send data to the server, such as customer information, or to upload a file using the RESTful web service. To send the data, use form parameters and a body payload.
      • GET – Used to retrieve data from the server using the RESTful web service. It only extracts data; there is no change to the data. No payload or body is required. To filter the data, use query parameters.
      • PUT – Used to update resources using the RESTful web service.
      • DELETE – Used to delete a resource using the RESTful web service.
    • Request Headers: Additional instructions that are sent along with the request.
    • Request Body: Data sent along with a POST request, i.e. when the client wants to add a resource to the server.
    • Response status code: Returned along with the response, such as 500, 200, etc.
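
    Rest Assured (mentioned earlier in this guide) is a Java library; purely as an illustration of the request elements listed above, here is a small sketch using Python's requests package (pip install requests). The base URL, paths, and payload below are placeholders, not a real API.

    # Illustrative sketch of the request elements above using Python's requests
    # package (pip install requests). The URL and payload are placeholders.
    import requests

    BASE = "https://api.example.com/v1"        # hypothetical endpoint

    # GET: retrieve data; parameters go in the query string, no body needed.
    resp = requests.get(f"{BASE}/customers", params={"page": 1},
                        headers={"Accept": "application/json"})
    print(resp.status_code)                    # response status code, e.g. 200

    # POST: send data to the server in the request body.
    resp = requests.post(f"{BASE}/customers",
                         json={"name": "Automateinfra", "plan": "blog"},
                         headers={"Content-Type": "application/json"})
    print(resp.status_code, resp.json() if resp.ok else resp.text)

    # PUT updates a resource, DELETE removes it.
    requests.put(f"{BASE}/customers/42", json={"plan": "blog1"})
    requests.delete(f"{BASE}/customers/42")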

    Characteristics of REST

    • REST is an architectural style in which a web service can only be treated as a RESTful service if it follows the constraints of being 1. Client-Server 2. Stateless 3. Cacheable 4. Layered System 5. Uniform Interface.
    • Stateless means that the state of the application is not maintained in REST. For example, if you delete a resource from a server using the DELETE command, you cannot expect that delete information to be carried over to the next request. This is required so that the server can process each request appropriately.
    • The cache concept helps with the statelessness described in the last point. Since each client request is independent in nature, sometimes the client might ask the server for the same request again, and caching avoids repeating the work.
    • REST uses uniform resource locators to access the components hosted on the server. For example, if there is an object which represents the data of an employee hosted on a URL such as automateinfra.com, one URI that can exist to access it is automateinfra.com/blog.

    SOAP: Simple Object Access Protocol.

    • Follows strict rules for communication between client and server; it does not follow the REST constraints of Uniform Interface, Client-Server, Stateless, Cacheable, Layered System, and Code on Demand.
    • SOAP was designed with a specification. It includes a WSDL file which has the required information on what the web service does in addition to the location of the web service.
    • The other key challenge is the size of the SOAP messages which get transferred from the client to the server. Because of the large messages, using SOAP in places where bandwidth is a constraint can be a big issue.
    • SOAP uses service interfaces to expose its functionality to client applications. In SOAP, the WSDL file provides the client with the necessary information which can be used to understand what services the web service can offer.
    • SOAP uses only XML to exchange information, whereas REST can use plain text, HTML, JSON, XML, and more.

    Application Programming Interface theory (API-theory)

    • When a website is owned by a single owner such as Google: the frontend and the backend may be written in different languages (for example, Angular on the frontend and Java on the backend), which can cause a lot of compatibility issues, so you need an API to connect them.
    • When your client needs to access data from your website, you expose an API rather than exposing your entire code and packages.
    • When a client connects to another client or server using an API, the data is transmitted using either XML or JSON, which are language independent.

    Ultimate Jenkins tutorial for DevOps Engineers

    Jenkins is an open-source automation CI/CD tool, where CI stands for continuous integration and CD stands for continuous delivery. Jenkins has its own built-in Java servlet container server, which is Jetty. Jenkins can also be run in different servlet containers such as Apache Tomcat or GlassFish.

    • Jenkins is used to perform smooth and quick deployments. It can deploy to a local machine, an on-premises data center, or any cloud.
    • Jenkins takes your code of any sort, such as Python, Java, Go, or JavaScript, builds it using tools such as Maven (one of the most widely used build tools), packages it as a WAR or ZIP file and sometimes as a Docker image, and finally deploys it as and when required. It integrates very well with lots of third-party tools.

    JAVA_HOME and PATH are variables to enable your operating system to find required Java programs and utilities.

    JAVA_HOME: JAVA_HOME is an (OS) environment variable that can optionally be set after either the (JDK) or (JRE) is installed. The JAVA_HOME environment variable points to the file system location where the JDK or JRE was installed. This variable should be configured on all OS’s that have a Java installation, including Windows, Ubuntu, Linux, Mac, and Android. 

    The JAVA_HOME environment variable is not actually used by the locally installed Java runtime. Instead, other programs installed on a desktop computer that requires a Java runtime will query the OS for the JAVA_HOME variable to find out where the runtime is installed. After the location of the JDK or JRE installation is found, those programs can initiate Java-based processes, start Java virtual machines and use command-line utilities such as the Java archive utility or the Java compiler, both of which are packaged inside the Java installation’s \bin directory.

    • JAVA_HOME if you installed the JDK (Java Development Kit)
      or
    • JRE_HOME if you installed the JRE (Java Runtime Environment) 

    PATH: Set the PATH environment variable if you want to be able to conveniently run the executables (javac.exe, java.exe, javadoc.exe, and so on) from any directory without having to type the full path of the command. If you do not set the PATH variable, you need to specify the full path to the executable every time you run it, such as:

    C:\Java\jdk1.8.0\bin\javac Myprogram.java
    # The following is an example of a PATH environment variable:
    
    C:\Java\jdk1.7.0\bin;C:\Windows\System32\;C:\Windows\;C:\Windows\System32\Wbem
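
    As a small illustration of how a program can look up the Java runtime, the sketch below (Python standard library only) first checks the JAVA_HOME environment variable and then searches the PATH; the file names are the usual ones, but the result obviously depends on your machine.

    # Sketch: how a tool might locate the java executable, first via JAVA_HOME,
    # then by searching the directories listed in PATH (standard library only).
    import os
    import shutil

    java_home = os.environ.get("JAVA_HOME")
    if java_home:
        candidate = os.path.join(java_home, "bin", "java")
        # On Windows the executable ends in .exe, so check both forms.
        print("From JAVA_HOME:", candidate,
              os.path.exists(candidate) or os.path.exists(candidate + ".exe"))

    # shutil.which() searches the directories listed in PATH, just like the shell
    # does when you type "java" on the command line.
    print("From PATH:", shutil.which("java"))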

    Installing Jenkins using msi installer on Windows Machine

    MSI is an installer file format that installs your program on the executing system. Setup.exe is an application (executable file) that can contain MSI file(s) as one of its resources. .msi is the file extension of Windows Installer packages. An MSI file is a compressed package of installer files; it consists of all the information pertaining to adding, modifying, storing, or removing the respective software, and it includes the data, instructions, processes, and add-ons that are necessary for the application to work normally.

    EXE is short for executable. This is any kind of binary file that can be executed; all Windows programs are .exe files, and prior to MSI files all installers were EXE files. An executable file executes a set of instructions or code when it is opened. It is compiled from source code into binary code, which the machine understands and the Windows OS can run directly.

    In short, MSI is the file extension of Windows Installer packages, a software component of Microsoft Windows used for the installation, maintenance, and removal of software, whereas .exe is the file extension of an executable file that performs the tasks encoded in its instructions.

    1. Navigate to https://www.jenkins.io/download/ and select the Windows option; your download of the Jenkins MSI will begin.
    2. Once downloaded, click on the jenkins.msi.
    3. Continue the Jenkins setup.
    4. Select the port 8080, click on Test Port, and then hit Next.
    5. Provide the admin password from the path highlighted in red.
    6. Further, install the plugins required for Jenkins.
    7. Next, it will prompt for the first admin user. Fill in the required information and keep it safe with you, as you will use this to log in.
    8. Now the Jenkins URL configuration screen will appear; keep it as it is for now.
    9. Click on Save and Finish.
    10. Now your Jenkins is ready; click on Start using Jenkins. Soon, you will see the Jenkins dashboard. You can create new jobs by clicking on New Item.

    Installing Jenkins using the jenkins.war file on a Windows Machine

    1. Similarly, now download jenkins.war from the Jenkins URL by clicking on Generic Java package (.war).
    2. Next, run the command below.
    java -jar jenkins.war --httpPort=8181
    3. Next, copy the Jenkins password from the log output and paste it as you did earlier in the Windows MSI section, point (5), and follow the rest of the points.

    Installing jenkins on Apache Tomcat server on Windows Machine

    1. Install Apache Tomcat on a Windows machine from https://tomcat.apache.org/download-90.cgi and click on the Tomcat installer as per your system. This tutorial is performed on a 64-bit Windows machine.
    2. Next, unzip the Tomcat installation folder and copy the jenkins.war file into the webapps folder.
    3. Next, go inside the bin folder and run Tomcat by clicking on the startup batch script.
    4. Finally, you will notice that Apache Tomcat has started, and Jenkins as well.
    5. Now, navigate to the localhost:8080 URL and you should see the Tomcat page as shown below.
    6. Further, navigate to localhost:8080/jenkins to be redirected to the Jenkins page.

    Configuring the Jenkins UI

    1. First click on Manage Jenkins and then navigate to Configure System.
    2. Next, add the system message and save it; this message should then be displayed on Jenkins every time, as below.
    3. To configure the names of the jobs, add the name pattern as below.
    4. Next, try creating a new Jenkins job with a random name; it will not allow you and will display the error message.

    Managing Users and Permissions in the Jenkins UI

    • Go to Manage Jenkins and Navigate to Manage users in the Jenkins UI.
    • Then create three users as shown below: admin, dev, and qa.
    • Next, Navigate to Manage Jenkins and choose Configure Global Security.
    • Next select Project-based Matrix Authorization Strategy and define the permissions for all users as you want.

    Role Based Strategy

    • In the previous section you noticed that adding all users and granting all permissions individually is a tedious job. So, instead, create a role and add users to it. To do that, the first step is to install the plugin as shown below.
    • Next, select Role-Based Strategy as shown below and define the permissions for all users as you want.
    • Next, navigate to Manage Jenkins, then to Manage and Assign Roles, and then click on Manage Roles.
    • Add 3 global roles named DEV Team, QA Team, and admin.
    • Add 2 item roles, developers and testers, with defined patterns so that job names are declared accordingly.
    • Next, click on Assign Roles.
    • Assign the roles as shown below.

    Conclusion

    In this tutorial you learnt how to install Jenkins on Windows in various ways, how to configure the Jenkins dashboard UI, and how to manage users and permissions.

    The Ultimate Guide on the Hardware and Software Components of the Computer

    Knowing the hardware of a computer is very important as an IT engineer, and this tutorial gives you all the information about it. In this tutorial, learn everything about the hardware and software components of the computer.

    What is Computer System?

    A computer contains mainly two parts: hardware and software. The computer is a programmable electronic device that can be programmed to accept input and then provide output. Computer hardware can only understand binary numbers, that is, 0 or 1, and the computer transfers data one byte at a time.

    The computer stores all data on the hard disk as binary numbers (0s and 1s). ASCII (American Standard Code for Information Interchange) is a common encoding that maps characters to such numbers.
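
    As a quick illustration of this, the short Python snippet below prints the ASCII code and the binary form of a few characters; it only uses the standard library.

    # Each character is stored as a number (its ASCII/Unicode code point),
    # and that number is ultimately held as binary digits (bits).
    for ch in "ABC":
        print(ch, ord(ch), format(ord(ch), "08b"))   # e.g. A 65 01000001

    print("ABC".encode("ascii"))                     # the same text as raw bytes: b'ABC'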

    What is Computer Hardware?

    Hardware is made up of various electronic circuits and components such as I/O devices, CPU, disk, and the motherboard.

    • Input Devices: The input device is used to provide the input ( data, instructions) into the RAM of the computer such as
      • Keyboard
      • Trackball ( the upper part of the mouse)
      • light pen
      • OBR (Optical Bar Code Reader) – This is used to scan vertical bars and read tags.
      • OCR (Optical Character Reader) – This is used to detect alphanumeric characters, for example reading passenger tickets or computer-printed credit card bills.
    • Output Devices: The output devices present the end result of the input the user provided, for example the monitor.
      • The monitor is also known as a VDU (Visual Display Unit). It contains a CRT (Cathode Ray Tube) which displays the characters as output.
      • There are many different types of monitors available in the market such as CGA (Color Graphics Adapter), EGA (Enhanced Graphics Adapter), VGA (Video Graphics Adapter), and SVGA (Super VGA), which is the best in the market.
    • CPU (Central Processing Unit): The CPU is the most important hardware part of the computer; it performs all the functions and execution of input data. It executes the instructions stored in the main memory. The CPU has a set of electronic circuits that execute program instructions, and it contains its own memory, the cache, to process data immediately.
    • Memory or storage: This is the storage place where all the data resides. There are two categories of memory, i.e. primary and secondary memory.
      • Primary memory: This memory is directly connected to the CPU and is extremely fast, such as RAM (Random Access Memory), which is volatile in nature, and ROM (Read Only Memory), which is non-volatile in nature. The CPU works with this memory only.
      • Secondary memory: This memory is not directly connected to the CPU, for example floppy disks, CD-ROMs, and hard disks.
    • Motherboard: The motherboard is the most important hardware component; it is the main printed circuit board (PCB) found in computers. The CPU is installed in one of the sockets of the motherboard or directly soldered onto it. There are slots in which memory is installed.
    • Buses: The data is stored in 0 or 1 binary format in registers as a unit. When the data needs to travel or move from one register to another, you need separate wires, and these wires are known as buses.
      • The data bus is used to move data,
      • Address bus to move address or memory location
      • Control bus to send control signals between various components of a computer.
    • Types of Buses
      • The system bus transfers information between different parts inside the computer system.
      • The control bus has two wires, set and enable. When the CPU wants to read from RAM, the enable wire is opened; when the CPU wants to save information to RAM, it enables the set wire.
      • The data bus is a two-way bus that carries data between the CPU and RAM.
      • The address bus is a one-way bus that carries addresses from the CPU to RAM.
    • Clock: Clock is an important component of CPU which measures and allocates fixed slot for processing each and every micro operations.
      • The clock speed measures the number of cycles your CPU executes per second, measured in GHz (gigahertz).
      • A CPU with a clock speed of 3.2 GHz executes 3.2 billion cycles per second
      • The CPU is allocated one or more clock cycles to complete each micro operation.
      • The processor base frequency refers to the CPU’s regular operating point, while the Max Turbo Frequency refers to the maximum speed the processor can reach.
      • The CPU executes the instructions in synchronization with the clock pulse.
      • The operations are performed at a speed measured in clock cycles per second; early processors ranged from 4.77 MHz to 266 MHz.
      • The speed of CPU is measured in terms of MIPS( Millions of instructions per second) or cycles per second
      • Each central processing unit has an internal clock that produces pulses at a fixed rate to synchronize all computer operations

    Chipsets: The chipset handles an incredible amount of data. It is the glue that connects the microprocessor with the motherboard. It contains two basic parts: the northbridge (which connects directly to the processor via the FSB, i.e. the front-side bus) and the southbridge, which primarily handles the routing of traffic between the various input/output (I/O) devices on the system for which speed is not vital to the total performance, such as disk drives (including RAID drive arrays) and optical drives.

    Video (Graphics) Card:


    A dedicated video card (or video adapter) is an expansion card installed inside your system unit to translate binary data received from the CPU or GPU into the images you view on your monitor. It is an alternative to the integrated graphics chip.
    Modern video cards include ports allowing you to connect to different video equipment; they also contain their own RAM, called video memory. Video cards come with their own processors, or GPUs.


    Sound Cards

    1. Sound cards attach to the motherboard and enable your computer to record and reproduce sounds.
    2. Most computers ship with a basic sound card, most often a 3D sound card. 3D sound is better than stereo sound.

    Ethernet Card/Network Cards

    An Ethernet network requires that you install or attach network adapters to each computer or peripheral you want to connect to the network. Most computers come with Ethernet adapters preinstalled as network interface cards (NICs).


    CPU (Central Processing Unit)

    CPU is the most important hardware part of the Computer which performs all the functions and execution of input data. It executes the instructions stored in the main memory. CPU has a set of electronic circuits that executes the program instructions.

    An example of a CPU is Intel 8085 which was an 8-bit microprocessor.


    The computer interacts with primary storage, that is, the main memory, for processing data and instructions. The CPU contains mainly two components: the Arithmetic Logic Unit and the Control Unit.

    • The Arithmetic Logic Unit (ALU) is a digital circuit that performs all the calculations, such as bitwise and mathematical operations on binary numbers.
    • Control Unit: The CU controls all the activities such as the transfer of data and instructions. It obtains the instructions from memory, interprets them, and then forwards them for execution or calculation. The control unit sends the control signals along the control bus.
    • Registers: These are high-speed memory built into the CPU chip circuits to access or store the data coming immediately from the calculations or instructions performed by the ALU. They act as high-speed temporary memory. Registers can store two words at a time until overwritten. The CPU needs to process very fast, so for the CPU to work on instructions or data from the RAM, it needs this small, high-speed memory in between, which is the registers.
      • Registers work under the direction of the control unit to accept, hold, and transfer instructions or data and perform arithmetic or logical comparisons at high speed.
    • Types of Registers
      • Program Counter: Stores the address of next instruction to be executed
      • Accumulator: This register temporarily stores data from the ALU.
      • Memory Address Register: Stores the address of the current instruction being executed.
      • Memory Data Register (or Memory Buffer Register): Holds the data that is copied from the RAM, ready for the CPU to process.
    • Below is the Image snapshot of various registers that are used in the CPU.
    • Cache (L2 or L3). A processor uses memory installed in the chip itself to store and speed up operations before utilizing external system RAM. This on-board memory is stored in one or more caches, which are identified L2 or L3. More powerful processors will be equipped with larger caches.
    • Socket Unit: On which CPU is installed on the motherboard.

    Computer Architecture and its Working

    The working of a computer system comprises input operations, storage operations, data processing, and output operations.

    1. When you press a key on your keyboard, let's say ABC, the keyboard has a PCB behind it which converts the characters ABC into binary numbers and sends them to the CPU.
    2. Another scenario could be the execution of a simple program like 35 + 49.
    3. The control unit of the CPU fetches (gets) the instruction for how to handle ABC (basically an opcode and operand) from RAM using the data bus, and in the meantime asks RAM to hold the data in memory until the calculation is executed. (At times the CPU fetches from the hard disk instead of RAM, as your OS lies on the hard disk.)
    4. The data bus brings the data and instructions into the CPU's internal memory, that is, the registers, for processing.
    5. The control unit decodes the instruction (decides what it means) into machine binary code and directs that the necessary data be moved from memory to the arithmetic/logic unit. Steps 2, 3, and 4 together are called instruction time, or I-time.
    6. The arithmetic/logic unit executes the arithmetic or logical instruction. That is, the ALU is given control and performs the actual operation on the data.
    7. The arithmetic/logic unit stores the result of this operation in memory or in a register. (Steps 5 and 6 are execution time, or E-time.)
    8. The control unit eventually directs memory to release the result to an output device or a secondary storage device. The combination of I-time and E-time is called the machine cycle; modern CPUs can perform millions of machine cycles per second.
    9. All of this happens on the circuit board known as the motherboard.
    Step by Step function of CPU along with Memory

    Another example of how a computer works

    • Suppose your Hard disk has 500 processes.
    • Suppose RAM can hold a maximum of 50 processes.
    • Let's say you ran a program, which is executable code (low-level machine code), to run the 500 processes that are stored on the hard disk.
    • Then the CPU will request RAM to provide the 50 process instructions to execute. If RAM does not have them, it asks the hard disk to provide the instructions.
    • The hard disk copies the instructions to RAM and then the CPU fetches them from RAM.
    • How the hard disk copies 50 processes to RAM and when the CPU fetches them from RAM is decided by the operating system using different schedulers, such as the short-term scheduler or the long-term scheduler.

    Data flow from CPU to Memory and Vice Versa

    Step by Step function of CPU
    1. The MAR stands for Memory address register which is connected to the Address Bus. It stores the memory address of an instruction. The sole function of MAR is to contain the RAM address of the instruction the CPU wants.
    2. The MDR stands for Memory Data Register, which is connected to the Data Bus. It holds the data that will be written to the RAM or read from the RAM. Even when the ALU performs operations, the data is held in high-speed registers such as the MBR or MDR.
    3. The relationship between MAR and MDR is that the MAR gives the address the data of the MDR will be read from or written to.

    Single Core CPU v/s Multi Core CPU

    A single-core CPU can only process one program at a time. However, when you run multiple programs simultaneously, a single-core processor divides the programs into small pieces and executes them concurrently using time slicing.

    For example:

    P1 initiated——————————————————– P1 Ends

    P2 initiated ——————————— P2 Ends

    P3 Initiated —————- P3 Ends

    Unlike single-core processing, multi-core processing divides computing tasks into sub-parts, and a multicore processor (multiple CPU cores) executes each sub-task simultaneously. A dual-core CPU literally has two central processing units on the CPU chip. A quad-core CPU has four central processing units, an octa-core CPU has eight, and so on.

    P1 initiated—————— P1 Ends

    P2 initiated —————– P2 Ends

    P3 Initiated —————– P3 Ends

    Hyper Threading or Logical Processor or Threads of CPU

    Threads are the virtual components or codes that divide the physical core of a CPU into multiple virtual cores. A single CPU core can have up to 2 threads per core. A dual-core CPU (2 cores) will therefore have 4 threads, an octa-core CPU (8 cores) will have 16 threads, and so on.

    Windows’ Task Manager shows this fairly well. Here, for example, you can see that this system has one actual CPU (socket) and four cores. Hyperthreading makes each core look like two CPUs to the operating system, so it shows 8 logical processors.
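
    You can see the same split from code; the sketch below assumes the third-party psutil package (pip install psutil) for the physical-core count, while os.cpu_count() from the standard library reports logical processors.

    # Physical cores vs. logical processors (hyper-threads) as seen from Python.
    import os
    import psutil   # third-party: pip install psutil

    print("Logical processors:", os.cpu_count())
    print("Physical cores    :", psutil.cpu_count(logical=False))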

    Threads of Processes

    The thread is created by a process. Every time you open an application, it creates a thread that will handle all the tasks of that specific application. Likewise, the more applications you open, the more threads will be created.

    The threads are always created by the operating system for performing a task of a specific application.

    Batch Processing vs Multiprogramming vs Multiprocessing vs Multitasking vs Multithreading Operating Systems

    Batch processing is the grouping of several same processing jobs to be executed one after another by a computer without any user interaction.

    Multiprogramming is the ability of an OS to execute multiple programs at the same time on a single processor machine.

    Multiprocessing system: When one system is connected to more than one processor which collectively works for the completion of the task.

    Multithreading is a conceptual programming paradigm where a process is divided into a number of sub-processes called threads. Each thread is independent and has its own path of execution with enabled inter-thread communication.
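
    As a tiny illustration of a single process creating multiple threads, here is a sketch using Python's standard threading module; the task function and names are just placeholders.

    # One process creating two threads that run concurrently.
    import threading
    import time

    def task(name):
        for i in range(3):
            print(f"{name} step {i}")
            time.sleep(0.1)                 # sleep so the two threads interleave

    t1 = threading.Thread(target=task, args=("thread-1",))
    t2 = threading.Thread(target=task, args=("thread-2",))
    t1.start(); t2.start()
    t1.join(); t2.join()                    # wait for both threads to finish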

    Magnetic Storage Device

    There are devices known as magnetic storage devices because they have a layer of magnetic material on their surface. These devices have a read-write assembly that converts data and instructions in the form of 0s and 1s into some form of magnetic signal.

    Floppy: The floppy disk stores data in the form of magnetic signals; while data is being stored it is converted from 0s and 1s into magnetic signals. Floppies were introduced by IBM and later became known as diskettes. They have a small sliding switch called the write-protect notch, which prevents the data on the floppy from being overwritten or deleted.

    Hard Disk: Can store a huge amount of data and has hard platters that hold the magnetic medium, unlike floppy disks and tapes, which use plastic film. Information remains intact even after switching off the computer, so the operating system is installed and stored on the hard disk. Because the hard disk is non-volatile memory, the OS is not lost when the machine is turned off.

    Magnetic tapes: They are similar to the tapes that you see in cassettes or video cassettes and are divided into tracks. One of the tracks is used to detect errors. They can store as much as 10GB of data. They allow only sequential access, which is a disadvantage.

    Zip disks: They are similar to floppy disks but have a thinner magnetic coating, which allows more tracks per inch on the disk surface.

    Optical Storage Device

    In the case of Optical storage devices, the signals are stored in the form of light. So 0’s and 1’s are converted into light information. Let’s learn about some of the optical storage devices.

    CD-ROM: It stands for compact disk read-only memory. When you add or write any data in the CD it is known as burning the CD. It is basically ROM where data can be read but once written cannot be rewritten or erased.

    DVD-ROM: Used for high-quality video and has better storage such as 4GB to 18GB.

    Conclusion

    In this tutorial, you learned everything about hardware and how computers work. With this knowledge, you are a computer hardware pro and you can easily diagnose your systems!

    How does Python work Internally with a computer or operating system

    Are you a Python developer trying to understand how the Python language works? This article is for you; you will learn every bit and piece of the Python language. Let’s dive in!

    Python

    Python is a high-level language used for designing, deploying, and testing in lots of places. It is consistently ranked among today’s most popular programming languages. It is a dynamic, object-oriented language that also supports procedural styles, and it runs on all major hardware platforms. Python is an interpreted language.

    High Level v/s Low Level Languages

    High-Level Language: A high-level language is easier to understand as it is human readable. It is either compiled or interpreted. It consumes more memory and is slower in execution, but it is portable. It requires a compiler or interpreter for translation.

    The fastest translator that converts a high-level language into machine code is the compiler.

    Low-Level Language: Low-level languages are machine-friendly, that is, machines can read the code but humans cannot easily. They consume less memory and are fast to execute. They cannot be ported. They require an assembler for translation.

    Interpreted v/s Compiled Language

    Compiled Language: A compiled language is first compiled into the instructions of the target machine, that is, machine code. For example: C, C++, C#, COBOL.

    Interpreted Language: An interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them to have previously been compiled into a machine-language program; such languages are known as interpreted languages. For example: JavaScript, Perl, Python, BASIC.

    Python vs C++/C Language Compilation Process

    C++ or C Language: These Languages need compilation that means human-readable code has to be translated into Machine-readable code. The Machine code is executed by the CPU. Below is the sequence in which code execution takes place.

    1. Human-readable source code is written.
    2. The code is compiled.
    3. Compilation generates an executable file in machine-code format (understood by the hardware).
    4. The executable file is executed by the CPU.

    Python Language:

    Python is a high-level language

    Bytecode, also termed p-code, is a form of instruction set designed for efficient execution by a software interpreter

    1. Python code is written in a .py file, such as test.py.
    2. The Python interpreter compiles the code into .pyc or .pyo format, which is byte code, not machine code (not understood by the hardware).
    3. Once your program has been compiled to byte code (or the byte code has been loaded from existing .pyc files), it is shipped off for execution to something generally known as the Python Virtual Machine (PVM).
    4. The PVM converts the byte code, for example test.pyc, into machine code such as (10101010100010101010).
    5. Finally, the program is executed and the output is displayed. You can inspect this byte code yourself, as shown in the sketch after this list.
    How Python runs? – Indian Pythonista
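    If you want to see this byte code for yourself, the standard library's py_compile and dis modules expose it. The snippet below is a minimal sketch and assumes a file named test.py exists alongside it.
    import dis
    import py_compile

    # Byte-compile test.py; in Python 3 the .pyc file is written under __pycache__
    pyc_path = py_compile.compile("test.py")
    print("Byte code written to:", pyc_path)

    # Disassemble a small function to see the byte-code instructions the PVM executes
    def add(a, b):
        return a + b

    dis.dis(add)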

    Conclusion

    In this tutorial, you learnt how the python language works and interacts with Operating systems and Hardware. So, which application are you planning to build using Python?

    The Ultimate Guide on AWS EKS for Beginners [Easiest Way]

    In this ultimate guide, you will learn, as a beginner, everything you should know about AWS EKS and how to manage your AWS EKS cluster.

    Come on, let's begin!

    Table of Content

    1. What is AWS EKS ?
    2. Why do you need AWS EKS than Kubernetes?
    3. Installing tools to work with AWS EKS Cluster
    4. Creating AWS EKS using EKSCTL command line tool
    5. Adding one more Node group in the AWS EKS Cluster
    6. Cluster Autoscaler
    7. Creating and Deploying Cluster Autoscaler
    8. Nginx Deployment on the EKS cluster when Autoscaler is enabled.
    9. EKS Cluster Monitoring and Cloud watch Logging
    10. What is Helm?
    11. Creating AWS EKS Cluster Admin user
    12. Creating Read only user for the dedicated namespace
    13. EKS Networking
    14. IAM and RBAC Integration in AWS EKS
    15. Worker nodes join the cluster
    16. How to Scale Up and Down Kubernetes Pods
    17. Conclusion

    What is AWS EKS ?

    Amazon provides its own service AWS EKS where you can host kubernetes without worrying about infrastructure like kubernetes nodes, installation of kubernetes etc. It gives you a platform to host kubernetes.

    Some features of Amazon EKS ( Elastic kubernetes service)

    1. It expands and scales across multiple availability zones so that there is always high availability.
    2. It automatically scales and replaces any impacted or unhealthy nodes.
    3. It integrates with various other AWS services such as IAM, VPC, ECR, and ELB.
    4. It is a very secure service.

    How does AWS EKS service work?

    • The first step in EKS is to create an EKS cluster using the AWS CLI, the AWS Management Console, or the eksctl command line tool.
    • Next, you can use your own EC2 machines to deploy applications, or deploy to AWS Fargate, which manages the compute for you.
    • Then connect to the Kubernetes cluster with kubectl or eksctl commands.
    • Finally, deploy and run applications on the EKS cluster.

    Why do you need AWS EKS than Kubernetes?

    If you run Kubernetes yourself, you are required to handle all of the below things on your own:

    1. Create and Operate K8s clusters.
    2. Deploy Master Nodes
    3. Deploy Etcd
    4. Setup CA for TLS encryption.
    5. Setup Monitoring, AutoScaling and Auto healing.
    6. Setup Worker Nodes.

    But with AWS EKS you only need to manage the worker nodes; everything else, including the master nodes, etcd in high availability, the API server, KubeDNS, the scheduler, the controller manager, and the cloud controller, is taken care of by Amazon EKS.

    You pay 0.20 US dollars per hour for your AWS EKS cluster, which comes to roughly 144 US dollars per month (0.20 x 24 hours x 30 days).

    Installing tools to work with AWS EKS Cluster

    1. AWS CLI: Required as a dependency of eksctl to obtain the authentication token. To install the AWS CLI, run the below command.
    pip3 install --user awscli
    After you install the AWS CLI, make sure to configure the access key and secret access key in the AWS CLI so that it can create the EKS cluster.
    2. eksctl: Used to set up and operate the EKS cluster. To install eksctl, run the below commands. The first command downloads the eksctl binary into the tmp directory.
    curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/v0.69.0/eksctl_Linux_amd64.tar.gz" | tar xz -C /tmp
    • Next, move the eksctl binary into the executable directory /usr/local/bin.
    sudo mv /tmp/eksctl /usr/local/bin
    • To check the version of eksctl and see whether it is properly installed, run the below command.
    eksctl version
    3. kubectl: Used to interact with the k8s API server. To install the kubectl tool, first run the below command, which updates the system and installs the apt-transport-https package.
    sudo apt-get update && sudo apt-get install -y apt-transport-https
    • Next, run the curl command that will add the gpg key in the system to verify the authentication with the kubernetes site.
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    • Next, add the kubernetes repository
    echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
    • Again update the system so that it takes the effect after addition of new repository.
    sudo apt-get update
    • Next install kubectl tool.
    sudo apt-get install -y kubectl
    • Next, check the version of the kubectl tool by running below command.
    kubectl version --short --client
    4. IAM user and IAM role:
    • Create an IAM user with administrator access and use that IAM user to explore AWS resources in the console. This is also the user whose credentials you will pass to the AWS CLI on the EC2 instance from which you manage the AWS EKS cluster.
    • Also make sure to create an IAM role that you will attach to the EC2 instance from which you will manage AWS EKS and other AWS resources.

    Creating AWS EKS using EKSCTL command line tool

    So far you have installed and set up the tools that are required for creating an AWS EKS cluster. To see how to create a cluster using the eksctl command, run the help command, which lists the flags you need while creating an AWS EKS cluster.

    eksctl create cluster --help 
    1. Let's begin creating the EKS cluster. To do that, create a file named eks.yaml and copy and paste the below content.
      • apiVersion is the Kubernetes API version that will manage the deployment.
      • kind denotes what kind of resource/object will be created. In the below case, as you need to provision a cluster, you should give ClusterConfig.
      • metadata: Data that helps uniquely identify the object, including a name string, UID, and optional namespace.
      • nodeGroups: Provides the name of the node group and other details required for the node groups that will be used in your EKS cluster.
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    
    metadata:
      name: EKS-course-cluster
      region: us-east-1
    
    nodeGroups:
      - name: ng-1
        instanceType: t2.small
        desiredCapacity: 3
        ssh: # use existing EC2 key
          publicKeyName: eks-course
    2. Now, execute the command below to create the cluster.
    eksctl create cluster -f eks.yaml
    3. Once the cluster is successfully created, run the below command to see the details of the cluster.
    eksctl get cluster
    4. Next, verify the AWS EKS cluster on the AWS console.
    5. Also verify the nodes of the node groups that were created along with the cluster by running the below command.
    kubectl get nodes
    6. Also, verify the nodes on the AWS console. To check the nodes, navigate to EC2 instances.
    7. Verify the node groups in the EKS cluster by running the below eksctl command.
    eksctl get nodegroup --cluster EKS-course-cluster
    8. Finally, verify the Pods in the EKS cluster by running the below kubectl command.
    kubectl get pods --all-namespaces

    Adding one more Node group in the AWS EKS Cluster

    To add another node group in EKS Cluster follow the below steps:

    1. Create a YAML file (for example node_group.yaml) as shown below and copy/paste the below content. In the below file you will notice that the previous node group ng-1 is still listed; if you ran this file without it, it would override the previous configuration and remove the ng-1 node group from the cluster.
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    
    metadata:
      name: EKS-cluster
      region: us-east-1
    
    nodeGroups:
      - name: ng-1
        instanceType: t2.small
        desiredCapacity: 3
        ssh: # use existing EC2 key
          publicKeyName: testing
    # Adding the another Node group nodegroup2 with min/max capacity as 3 and 5 resp.
      - name: nodegroup2
        minSize: 2
        maxSize: 3
        instancesDistribution:
          maxPrice: 0.2
          instanceTypes: ["t2.small", "t3.small"]
          onDemandBaseCapacity: 0
          onDemandPercentageAboveBaseCapacity: 50
        ssh:
          publicKeyName: testing
    2. Next, run the below command to create the node group.
    eksctl create nodegroup --config-file=node_group.yaml --include='nodegroup2'
    3. If you wish to delete the node group from the EKS cluster, run either of the below commands.
    eksctl delete nodegroup --cluster=EKS-cluster --name=nodegroup2
    eksctl delete nodegroup --config-file=node_group.yaml --include='nodegroup2' --approve
    • To scale the node group in the EKS cluster:
    eksctl scale nodegroup --cluster=name_of_the_cluster --nodes=5 --name=node_grp_2

    Cluster Autoscaler

    The cluster Autoscaler automatically launches additional worker nodes if more resources are needed and shuts down worker nodes if they are underutilized. Autoscaling works within a node group, so you should create node groups with the Autoscaler feature enabled.

    Cluster Autoscaler has the following features:

    • Cluster Autoscaler is used to scale up and down the nodes within the node group.
    • It runs as a deployment based on CPU and Memory utilization.
    • It can contain on demand and spot instances.
    • There are two types of scaling
      • Multi AZ Scaling: Node group with Multi AZ ( Stateless workload )
      • Single AZ Scaling: Node group with Single AZ ( Stateful workload)

    Creating and Deploying Cluster Autoscaler

    The main function of the Autoscaler is that it dynamically, on the fly, adds or removes nodes within the node group. The Autoscaler runs as a deployment and bases its decisions on CPU/memory requests.

    There are two types of scaling available: Multi AZ (stateless workloads) v/s Single AZ (stateful workloads), because an EBS volume cannot be spread across multiple availability zones.

    To use the cluster Autoscaler, you can add multiple node groups to the cluster as needed. In this example, let's deploy one on-demand node group in a single AZ and one spot-instance node group spanning two AZs, with the Autoscaler enabled for both.

    1. Create a file, name it autoscaler.yaml, and copy/paste the below content into it.
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    
    metadata:
      name: EKS-cluster
      region: us-east-1
    
    nodeGroups:
      - name: scale-east1c
        instanceType: t2.small
        desiredCapacity: 1
        maxSize: 10
        availabilityZones: ["us-east-1c"]
    # iam holds all IAM attributes of a NodeGroup
    # enables IAM policy for cluster-autoscaler
        iam:
          withAddonPolicies:
            autoScaler: true
        labels:
          nodegroup-type: stateful-east1c
          instance-type: onDemand
        ssh: # use existing EC2 key
          publicKeyName: eks-ssh-key
      - name: scale-spot
        desiredCapacity: 1
        maxSize: 10
        instancesDistribution:
          instanceTypes: ["t2.small", "t3.small"]
          onDemandBaseCapacity: 0
          onDemandPercentageAboveBaseCapacity: 0
        availabilityZones: ["us-east-1c", "us-east-1d"]
        iam:
          withAddonPolicies:
            autoScaler: true
        labels:
          nodegroup-type: stateless-workload
          instance-type: spot
        ssh: 
          publicKeyName: eks-ssh-key
    
    availabilityZones: ["us-east-1c", "us-east-1d"]
    2. Run the below command to create the node groups defined in the file (the same config file can also be used to delete them later).
    eksctl create nodegroup --config-file=autoscaler.yaml
    3. List the node groups in the cluster.
    eksctl get nodegroup --cluster=EKS-cluster
    4. Next, to deploy the Autoscaler, run the below kubectl commands.
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
    kubectl -n kube-system annotate deployment.apps/cluster-autoscaler cluster-autoscaler.kubernetes.io/safe-to-evict="false"
    5. To edit the deployment and set your AWS EKS cluster name, run the below kubectl command.
    kubectl -n kube-system edit deployment.apps/cluster-autoscaler
    6. Next, describe the deployment of the Autoscaler by running the below kubectl command.
    kubectl -n kube-system describe deployment cluster-autoscaler
    7. Finally, view the cluster Autoscaler logs by running the kubectl command on the kube-system namespace.
    kubectl -n kube-system logs deployment.apps/cluster-autoscaler
    8. Verify the Pods. You should notice that the first Pod belongs to node group 1, the second to node group 2, and the third is the Autoscaler Pod itself.

    Nginx Deployment on the EKS cluster when Autoscaler is enabled.

    1. To deploy the nginx application on the EKS cluster that you just created, create a YAML file, name it nginx-deployment.yaml (or anything you find convenient), and copy/paste the below content into it.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test-autoscaler
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            service: nginx
            app: nginx
        spec:
          containers:
          - image: nginx
            name: test-autoscaler
            resources:
              limits:
                cpu: 300m
                memory: 512Mi
              requests:
                cpu: 300m
                memory: 512Mi
          nodeSelector:
            instance-type: spot
    
    
    
    2. Now, to apply the nginx deployment, run the below command.
    kubectl apply -f nginx-deployment.yaml
    3. After a successful deployment, check the number of Pods.
    kubectl get pods
    4. Check the number and type of nodes; the deployment's nodeSelector targets the spot node group via its instance-type label.
    kubectl get nodes -l instance-type=spot
    • Scale the deployment to 3 replicas (that is, 3 Pods will run).
    kubectl scale --replicas=3 deployment/test-autoscaler
    • Check the Autoscaler logs and filter for the scaling events.
    kubectl -n kube-system logs deployment.apps/cluster-autoscaler | grep -A5 "Expanding Node Group"

    EKS Cluster Monitoring and Cloud watch Logging

    By now you have already set up the EKS cluster, but it is also important to monitor it. To monitor your cluster, follow the below steps:

    1. Create a below eks.yaml file and copy /paste below code into the file.
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    
    metadata:
      name: EKS-cluster
      region: us-east-1
    
    nodeGroups:
      - name: ng-1
        instanceType: t2.small
        desiredCapacity: 3
        ssh: # use existing EC2 key
          publicKeyName: eks-ssh-key
    cloudWatch:
      clusterLogging:
        enableTypes: ["api", "audit", "authenticator"] # To select only few log_types
        # enableTypes: ["*"]  # If you need to enable all the log_types
    2. Now apply the cluster logging configuration by running the below command.
    eksctl utils update-cluster-logging --config-file eks.yaml --approve
    3. To disable all the logging types, run:
    eksctl utils update-cluster-logging --name=EKS-cluster --disable-types all

    To get container metrics using CloudWatch: first add the IAM policy (CloudWatchAgentServerPolicy) to the role of all your node group(s), then deploy the CloudWatch agent. After you deploy it, it runs in its own namespace (amazon-cloudwatch).

    1. Now run the below command, which deploys the CloudWatch agent and Fluentd for Container Insights.
    curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/master/k8s-yaml-templates/quickstart/cwagent-fluentd-quickstart.yaml | sed "s/{{cluster_name}}/EKS-course-cluster/;s/{{region_name}}/us-east-1/" | kubectl apply -f -
    2. To check everything that has been created in the amazon-cloudwatch namespace, run:
    kubectl get all -n amazon-cloudwatch

    3. Optionally, generate some load so that you can see the metrics change, for example with a sample php-apache deployment and a busybox load generator:
    kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --limits=cpu=500m --expose --port=80
    kubectl run --generator=run-pod/v1 -it --rm load-generator --image=busybox /bin/sh
    # Hit enter for the command prompt, then run:
    while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
    

    What is Helm?

    Helm is a package manager for Kubernetes, similar to what you have in Ubuntu or Python with apt or pip. Helm has mainly three components.

    • Chart: All the dependency files and application files.
    • Config: Any configuration that you would like to deploy.
    • Release: A running instance of a chart.

    Helm Components

    • Helm client: Manages repository, Managing releases, Communicates with Helm library.
    • Helm library: It interacts with Kubernetes API server.

    Installing Helm

    • To install helm make sure to create the directory with below commands and then change the directory
    mkdir helm && cd helm
    • Next, download and run the official Helm 3 installation script, and then check the installed version.
    curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
    helm version
    • To find all the lists of the repo
    helm repo list
    • To Update the repository
    helm repo update
    • To check all the charts in the helm repository.
    helm search repo
    • To install one of the charts. After running the below command, check the number of Pods running by using the kubectl get pods command.
    helm install name_of_the_release stable/redis
    • To check the deployed releases:
    helm ls
    • To uninstall helm deployments.
    helm uninstall <<name-of-release-from-previous-output>>

    Creating AWS EKS Cluster Admin user

    To manage resources in the EKS cluster you need dedicated users (either admin or read-only) to perform tasks accordingly. Let's begin by creating an admin user first.

    1. Create IAM user in AWS console (k8s-cluster-admin) and store the access key and secret key for this user locally on your machine.
    2. Next, add the user to the mapUsers section of the aws-auth ConfigMap. But before you add a user, let's find all the ConfigMaps in the kube-system namespace, because the users are stored in aws-auth.
    kubectl -n kube-system get cm
    3. Save the aws-auth ConfigMap to a YAML-formatted file.
    kubectl -n kube-system get cm aws-auth -o yaml > aws-auth-configmap.yaml
    4. Next, edit aws-auth-configmap.yaml and add a mapUsers section with the following information (see the sketch after this list):
      • userarn
      • username
      • groups as (system:masters), which has admin/all permissions; it is basically a role
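    As a rough illustration (the account ID and ARN below are placeholders, not values from this tutorial), the edited aws-auth-configmap.yaml would contain a mapUsers block along these lines:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapRoles: |
        # existing mapRoles entries for your worker nodes stay here unchanged
      mapUsers: |
        - userarn: arn:aws:iam::111122223333:user/k8s-cluster-admin   # placeholder ARN
          username: k8s-cluster-admin
          groups:
            - system:masters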
    5. Run the below command to apply the changes for the newly added user.
    kubectl apply -f aws-auth-configmap.yaml -n kube-system

    After you apply the changes, you will notice that in the AWS EKS console you no longer see warnings such as "Kubernetes objects cannot be accessed".

    6. Now check that the user has been properly added by running the describe command.
    kubectl -n kube-system describe cm aws-auth
    7. Next, add the user to the AWS credentials file in a dedicated section (profile) and then select that profile using the export command, or configure it directly in the AWS CLI.
    export AWS_PROFILE="profile_name"
    8. Finally, check which user is currently running the AWS CLI commands.
    aws sts get-caller-identity

    Creating a read only user for the dedicated namespace

    Similarly, now create a read-only user for the AWS EKS service. Let's follow the below steps to create a read-only user and map it to IAM in the ConfigMap.

    1. Create a namespace using the below command.
    kubectl create namespace production
    2. Create an IAM user (for example prod-viewer) in the AWS console.
    3. Create a file rolebinding.yaml and add both the Role and the RoleBinding that define the permissions the Kubernetes user will have (note the --- separator between the two objects).
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      namespace: production
      name: prod-viewer-role
    rules:
    - apiGroups: ["", "extensions", "apps"]
      resources: ["*"]  # can be further limited, e.g. ["deployments", "replicasets", "pods"]
      verbs: ["get", "list", "watch"] 
    kind: RoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: prod-viewer-binding
      namespace: production
    subjects:
    - kind: User
      name: prod-viewer
      apiGroup: ""
    roleRef:
      kind: Role
      name: prod-viewer-role
      apiGroup: ""
    4. Now apply the Role and RoleBinding using the below command.
    kubectl apply -f rolebinding.yaml
    5. Next, edit the aws-auth ConfigMap YAML and add the new user's userarn, username, and group as you did previously, then apply the changes.
    kubectl -n kube-system get cm aws-auth -o yaml > aws-auth-configmap.yaml
    kubectl apply -f aws-auth-configmap.yaml -n kube-system
    6. Finally, test the user and the setup.

    EKS Networking

    • The Amazon VPC CNI plugin assigns each Pod an IP address that is linked to an ENI on the worker node.
    • Pods keep the same IP address inside and outside the EKS cluster within the VPC.
    • Make sure plenty of IP addresses are available, for example by using a /18 CIDR, which provides more IP addresses.
    • Each EC2 instance type supports only a limited number of ENIs and IP addresses per ENI, so each EC2 instance can run only a limited number of Pods (roughly ENIs x (IPs per ENI - 1) + 2, depending on the instance type).

    IAM and RBAC Integration in AWS EKS

    • Authentication is done by IAM
    • Authorization is done by kubernetes RBAC
    • You can assign RBAC directly to IAM entities.

    kubectl (user sends AWS identity) >>> connects to EKS >>> EKS verifies the AWS identity >>> the AWS identity is authorized against Kubernetes RBAC
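    In practice, kubectl picks up that AWS identity from your kubeconfig. One common way to wire this up (using the cluster name and region from earlier in this guide) is:
    aws eks update-kubeconfig --name EKS-cluster --region us-east-1
    kubectl get nodes   # this request is now authenticated with your current AWS identity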

    Worker nodes join the cluster

    1. When you create a worker node, you assign it an IAM role, and that IAM role needs to be authorized in RBAC in order for the node to join the cluster. Add the system:bootstrappers and system:nodes groups to the mapRoles section of your aws-auth ConfigMap, set the value of rolearn to the node's NodeInstanceRole ARN (see the sketch below the command), and then run the below command.
    kubectl apply -f aws-auth.yaml
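    For reference, a minimal aws-auth.yaml for worker nodes might look like the sketch below; the role ARN is a placeholder that you would replace with your own NodeInstanceRole ARN.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapRoles: |
        - rolearn: arn:aws:iam::111122223333:role/EKS-worker-NodeInstanceRole   # placeholder ARN
          username: system:node:{{EC2PrivateDNSName}}
          groups:
            - system:bootstrappers
            - system:nodes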
    2. Check the current state of the cluster services and nodes.
    kubectl get svc,nodes -o wide

    How to Scale Up and Down Kubernetes Pods

    There are three ways of scaling Kubernetes Pods up or down. Let's look at all three.

    1. Scale the deployment to 3 replicas (that is, 3 Pods will run) using the kubectl scale command.
    kubectl scale --replicas=3 deployment/nginx-deployment
    2. Alternatively, update the YAML file with 3 replicas and run the kubectl apply command shown after it (let's say you have an abc.yaml file).
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            service: nginx
            app: nginx
        spec:
          containers:
          - image: nginx
            name: nginx 
            resources:
              limits:
                cpu: 300m
                memory: 512Mi
              requests:
                cpu: 300m
                memory: 512Mi
          nodeSelector:
            instance-type: spot
    kubectl apply -f abc.yaml
    3. You can also scale the Pods using the Kubernetes Dashboard.
    4. Apply the manifest file that you created earlier by running the below command.
    kubectl apply -f nginx.yaml
    5. Next, verify that the deployment has been created successfully.
    kubectl get deployment --all-namespaces

    Conclusion

    In this tutorial, you learned about AWS EKS from beginner to advanced level.

    Now that you have a strong understanding of AWS EKS, which applications do you plan to manage on it?

    Python Compilation and Working !!

    Table of Content

    1. Understanding the difference between high level and low level language.
    2. Interpreted v/s Compiled Language
    3. Introduction to Python
    4. How Python Works ?
    5. Python Interpreter
    6. Python Standard Library
    7. Python Implementations
    8. Python Installation
      • Python Installation on Linux Machine
      • Python Installation on Windows Machine
      • Python Installation on MacOS
    9. Conclusion

    Understanding the difference between High & Low-level Languages

    High-Level Language: A high-level language is easier to understand because it is human-readable. It is either compiled or interpreted. It consumes more memory and is slower in execution, but it is portable. It requires a compiler or an interpreter for translation.

    The fastest translator for converting a high-level language into machine code is the compiler.

    Low-Level Language: Low-level languages are machine-friendly; machines can read the code, but humans cannot easily. They consume less memory and execute quickly, but they are not portable. They require an assembler for translation.

    Interpreted v/s Compiled Language

    Compiled Language: Compiled language is first compiled and then expressed in the instruction of the target machine that is machine code. For example – C, C++, C# , COBOL

    Interpreted Language: An interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them previously to have been compiled into a machine language program and these kinds of languages are known as interpreter languages. For example JavaScript, Perl, Python, BASIC

    Introduction to Python

    Python is a high-level language used for designing, deploying, and testing software in lots of places. It is consistently ranked among today's most popular programming languages. It is a dynamic, object-oriented language that also supports a procedural style, and it runs on all major hardware platforms. Python is an interpreted language.

    How does Python Work?

    Bytecode, also termed p-code, is a form of instruction set designed for efficient execution by a software interpreter

    • The first step is to write a Python program, such as test.py.
    • Then, using the Python interpreter, the program is implicitly compiled and converted into byte code, that is, test.pyc.
    • Python saves byte code like this as a startup speed optimization. The next time you run your program, Python will load the .pyc files and skip the compilation step, as long as you haven't changed your source code since the byte code was last saved.
    • Once your program has been compiled to byte code (or the byte code has been loaded from existing .pyc files), it is shipped off for execution to something generally known as the Python Virtual Machine (PVM).
    • The byte code, test.pyc, is then converted by the virtual machine into machine code such as (10101010100010101010).
    • Finally, the program is executed and the output is displayed.
    How Python runs? – Indian Pythonista

    Python Interpreter

    Python includes both an interpreter and a compiler, and the compiler is invoked implicitly.

    • In Python version 2, the Python interpreter compiles a source file such as file.py and keeps the result in the same directory with the extension file.pyc.
    • In Python version 3, the Python interpreter compiles a source file such as file.py and keeps the result in the subdirectory __pycache__.
    • Python does not save the compiled byte code when you run a script directly; rather, Python recompiles the script each time you run it.
    • Python saves byte-code files only for modules you import; however, running the python command with the -B flag avoids saving compiled byte code to disk. A small example follows the shebang line below.
    • You can also execute a Python script directly on a Unix operating system if you add a shebang line inside your script.
    #! /usr/bin/env python
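    As a small illustration of the byte-code caching described above (module.py is a hypothetical file name used only for this sketch), you can trigger and locate the cached byte code yourself:
    import importlib
    import py_compile

    # Explicitly byte-compile a source file; in Python 3 the .pyc lands in __pycache__
    print(py_compile.compile("module.py"))    # e.g. __pycache__/module.cpython-310.pyc

    # Importing a module also writes its byte code into __pycache__
    module = importlib.import_module("module")
    print(module.__cached__)                  # path of the cached .pyc used for the import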

    Python Standard Library

    The Python standard library contains many well-designed Python modules for convenient reuse, covering tasks like representing data, processing text and data, interacting with the operating system and filesystem, and web programming. Python modules are basically Python programs in a file (such as abc.py) that you import.

    There are also extension modules that allow Python code to access functionality supplied by the underlying OS or other software components, such as GUIs, databases, networking, and XML parsing. You can also wrap existing C/C++ libraries into Python extension modules.
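    As a quick taste of the standard library (the values used are arbitrary examples), the os, json, and textwrap modules cover OS interaction, data representation, and text processing without installing anything extra:
    import json
    import os
    import textwrap

    # Interact with the operating system and filesystem
    print("Current directory:", os.getcwd())

    # Represent data and serialize it as JSON text
    config = {"name": "automate", "retries": 3}
    print(json.dumps(config, indent=2))

    # Process text
    print(textwrap.shorten("The Python standard library comes with batteries included.", width=30))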

    Python Implementations

    Python is more than a language, you can utilize the implementation of Python in many ways such as :

    • CPython: CPython is an interpreter, compiler, set of built in and optional extension modules all coded in C language. Python code is converted into bytecode before interpreting it.
    • IronPython: Python implementation for the Microsoft-designed Common Language Runtime (CLR), most commonly known as .NET, which is now open source and ported to Linux and macOS
    • PyPy: PyPy is a fast and flexible implementation of Python, coded in a subset of Python itself, able to target several lower-level languages and virtual machines using advanced techniques such as type inferencing
    • Jython: Python implementation for any Java Virtual Machine (JVM) compliant with Java 7 or better. With Jython, you can use all Java libraries and framework and it supports only v2 as of now.
    • IPython: Enhances standard CPython to make it more powerful and convenient for interactive use. IPython extends the interpreter’s capabilities by allowing abbreviated function call syntax, using question mark to query an objects documentation etc.

    Python Installation

    Python Installation on Linux Machine

    If you are working on the latest platforms, you will find Python already installed on the system. At times Python is not installed, but binaries are available that you can install using the RPM tool or APT on Linux machines; for Windows, use the MSI (Microsoft Installer).

    Ubuntu 16 server
    Ubuntu 18 server

    Python Installation on Windows Machine

    Python can be installed in Windows with a few steps, and to install Python steps can be found here.

    Python Installation on macOS

    Python V2 comes installed on macOS but you should install the latest Python version always. The popular third-party macOS open-source package manager Homebrew offers, among many other packages, excellent versions of Python, both v2 and v3

    • To install Homebrew, open Terminal or your favorite OS X terminal emulator and run
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
    • Add homebrew directory at the top of your PATH environment variable.
    export PATH="/usr/local/opt/python/libexec/bin:$PATH"
    • Now install Python3 using the following commands.
    brew install python3
    • Verify the installation of Python by checking its version, for example with the python3 --version command.

    Conclusion

    In this tutorial, you got a basic introduction to Python and learned why it is an interpreted, high-level language. You also learned how Python code is compiled and executed and how to install Python on different platforms. Hope this tutorial helps you, and if you like it, please share it.

    Learn ELK Stack from Scratch: Elasticsearch, Logstash, Kibana dashboard, and AWS Elasticsearch

    If you want to analyze data for your website or applications, consider learning ELK Stack or Elastic Stack that contains Elasticsearch, logstash, and Kibana dashboard.

    Elasticsearch is a powerful analytics search engine that allows you to store, index, and search the documents of all types of data in real-time. But if you need your search engine to automatically scale, load-balanced then AWS Elasticsearch (Amazon OpenSearch) is for you.

    In this tutorial, you will learn what is Elastic Stack, Elasticsearch, Logstash, kibana dashboard, and finally AWS Elasticsearch from Scratch, and believe me, this tutorial will be helpful for you.

    Let’s get into it.

    Related: Install ELK Stack on Ubuntu: Elasticsearch, Logstash, and Kibana Dashboard.


    Table of Content

    1. What is ELK Stack or Elastic Stack?
    2. What is Elasticsearch ?
    3. QuickStart Kibana Dashboard
    4. What is Logstash?
    5. Features of Logstash
    6. What is AWS Elasticsearch or Amazon OpenSearch Service?
    7. Creating the Amazon Elasticsearch Service domain or OpenSearch Service domain
    8. Uploading data in AWS Elasticsearch
    9. Search documents in Kibana Dashboard
    10. Conclusion

    What is ELK Stack or Elastic Stack?

    The ELK stack or Elastic Stack is used to describe a stack that contains: Elasticsearch, Logstash, and Kibana. The ELK stack allows you to aggregate logs from all your systems and applications, analyze these logs, and create visualizations for application and infrastructure monitoring, faster troubleshooting, security analytics, and more.

    • E = Elasticsearch: Elasticsearch is a distributed search and analytics engine built on Apache Lucene
    • L = Logstash: Logstash is an open-source data ingestion tool that allows you to collect data from various sources, transform it, and send it to your desired destination.
    • K = Kibana: Kibana is a data visualization and exploration tool for reviewing logs and events.
    ELK Stack architecture

    What is Elasticsearch ?

    Elasticsearch is an analytics and full-text search engine built on the Apache Lucene search engine library where the indexing, search, and analysis operations occur. Elasticsearch is a powerful analytics search engine that allows you to store, index, and search the documents of all types of data in real-time.

    Even if you have structured or unstructured text numerical data, Elasticsearch can efficiently store and index it in a way that supports fast searches. Some of the features of Elasticsearch are:

    • Provides the search box on websites, web pages, or applications.
    • Stores and analyzes data and metrics.
    • Logstash and Beats help with collecting and aggregating the data and storing it in Elasticsearch.
    • Elasticsearch is also used in machine learning.
    • Elasticsearch stores complex data structures that have been serialized as JSON documents.
    • If you have multiple Elasticsearch nodes in Elasticsearch cluster then documents are distributed across the cluster and can be accessed immediately from any node.
    • Elasticsearch also has the ability to be schema-less, which means that documents can be indexed without explicitly specifying how to handle each of the different fields.
    • The Elasticsearch REST APIs support structured queries, full-text queries, and complex queries that combine the two. You can access all of these search capabilities using Elasticsearch's comprehensive JSON-style query language (Query DSL); a sample request follows this list.
    • Elasticsearch index can be thought of as an optimized collection of documents and each document is a collection of fields, which are the key-value pairs that contain your data.
    • Elasticsearch index is really just a logical grouping of one or more physical shards, where each shard is actually a self-contained index.
    • There are two types of shards: primaries and replicas. Each document in an index belongs to one primary shard. The number of primary shards in an index is fixed at the time that an index is created, but the number of replica shards can be changed at any time.
    • Sharding splits an index into smaller pieces. It is used so that more documents can be stored at the index level, large indices fit onto nodes more easily, and query throughput improves. By default an index has one shard, and you can add more shards.
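    For instance, you can index a document and then run a simple Query DSL search from the Kibana console; the index name my-logs and the fields used here are assumptions for illustration only.
    PUT my-logs/_doc/1
    {
      "message": "disk error on host-1",
      "level": "error"
    }

    GET my-logs/_search
    {
      "query": {
        "match": {
          "message": "error"
        }
      }
    }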
    Elasticsearch Cluster

    Elasticsearch provides REST API for managing your cluster and indexing and searching your data. For testing purposes, you can easily submit requests directly from the command line or through the Kibana dashboard by running the GET request in the Kibana console under dev tools, as shown below.

    <IP-address-of-elasticsearch>/app/dev_tools#/console
    
    Kibana console with Dev tools
    • You can find the Elasticsearch cluster health by running the below command where _cluster is API and health is the command.
    GET _cluster/health
    
    Checking the health of the Elasticsearch cluster
    • To check the Elasticsearch node details using below command.
    GET _cat/nodes?v
    
    Checking the health of the elasticsearch node
    • To check the Elasticsearch indices configured, run the below command. You will notice kibana is also listed as indices because kibana data is also stored in elasticsearch.
    GET _cat/indices
    
    Checking the Elasticsearch indices on the elasticsearch cluster
    • To check the Primary and replica shards from a kibana console run the below request.
    GET _cat/shards
    
    Checking all the primary shards and replica shards in the elasticsearch cluster

    QuickStart Kibana Dashboard

    Kibana allows you to search the documents, observe the data and analyze the data, visualize in charts, maps, graphs, and more for the Elastic Stack in the form of a dashboard. Your data can be structured or unstructured text, numerical data, time-series data, geospatial data, logs, metrics, security events.

    Kibana also manages your data, monitors the health of your Elastic Stack cluster, and controls which users have access to the Kibana dashboard.

    Kibana also allows you to upload the data into the ELK stack by uploading your file and optionally importing the data into an Elasticsearch index. Let’s learn how to import the data in the kibana dashboard.

    • Create a file named shanky.txt and copy/paste the below content.
    [    6.487046] kernel: emc: device handler registered
    [    6.489024] kernel: rdac: device handler registered
    [    6.596669] kernel: loop0: detected capacity change from 0 to 51152
    [    6.620482] kernel: loop1: detected capacity change from 0 to 113640
    [    6.636498] kernel: loop2: detected capacity change from 0 to 137712
    [    6.668493] kernel: loop3: detected capacity change from 0 to 126632
    [    6.696335] kernel: loop4: detected capacity change from 0 to 86368
    [    6.960766] kernel: audit: type=1400 audit(1643177832.640:2): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lsb_release" pid=394 comm="apparmor_parser"
    [    6.965983] kernel: audit: type=1400 audit(1643177832.644:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe" pid=396 comm="apparmor_parser"
    
    • Next, upload the file in the Kibana dashboard. Once the file is uploaded successfully, you will see the details of the data that you uploaded.
    Data uploaded in the kibana
    Details of the data uploaded in the kibana
    • Next create the elasticsearch index and click on import.
    Creating the elasticsearch index on the Elasticsearch cluster
    • After import is successful you will see the status of your elasticsearch index as below.
    Status of file upload in kibana
    • Next, click View index in Discover as shown in the previous image. Now you should be able to see the logs within the Elasticsearch index (shankyindex).
    Checking the logs in kibana with newly created index

    Kibana allows you to perform the below actions such as:

    • Refresh, flush, and clear the cache of your indices or index.
    • Define the lifecycle of an index as it ages.
    • Define a policy for taking snapshots of your Elasticsearch cluster.
    • Roll up data from one or more indices into a new, compact index.
    • Replicate indices on a remote cluster and copy them to a local cluster.
    • Alerting allows you to detect conditions in different Kibana apps and trigger actions when those conditions are met.

    What is Logstash?

    Logstash allows you to collect data with real-time pipelining capabilities. Logstash can collect data from various sources, such as Beats, and push it to the Elasticsearch cluster. With Logstash, any type of event is transformed using an array of input, filter, and output plugins, further simplifying the ingestion process; a minimal pipeline sketch follows the diagram below.

    Working of Logstash
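    To make the input/filter/output idea concrete, a minimal Logstash pipeline configuration might look like the sketch below; the Beats port, grok pattern, and Elasticsearch host are illustrative assumptions, not values from this tutorial.
    # logstash.conf - a minimal input -> filter -> output pipeline
    input {
      beats {
        port => 5044                                          # events shipped by Filebeat
      }
    }
    filter {
      grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }      # parse Apache-style web logs
      }
    }
    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "weblogs-%{+YYYY.MM.dd}"
      }
    }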

    Features of Logstash

    Now that you have a basic idea about Logstash, let’s look at some of the benefits of Logstash, such as:

    • Logstash handles all types of logging data and easily ingests web logs like Apache logs and application logs like log4j logs from Java.
    • Logstash captures other log formats like syslog, networking and firewall logs.
    • One of the main benefits of Logstash is to securely ingest logs with Filebeat.

    What is AWS Elasticsearch or Amazon OpenSearch Service?

    Amazon Elasticsearch Service or OpenSearch is a managed service that deploys and scales the Elasticsearch clusters in the cloud. Elasticsearch is an open-source analytical and search engine that performs real-time application monitoring and log analytics.

    Amazon Elasticsearch service provisions all resources for Elasticsearch clusters and launches it. It also replaces the failed Elasticsearch nodes in the cluster automatically. Let’s look at some of the key features of the Amazon Elasticsearch Service.

    • AWS Elasticsearch or Amazon OpenSearch can scale up to 3 PB of attached storage and works with various instance types.
    • AWS Elasticsearch or Amazon OpenSearch easily integrates with other services such as IAM for security, VPC, AWS S3 for loading data, AWS Cloud Watch for monitoring and AWS SNS for alerts notifications.

    Creating the Amazon Elasticsearch Service domain or OpenSearch Service domain

    Now that you have a basic idea about the Amazon Elasticsearch Service domain or OpenSearch Service let’s create the Amazon Elasticsearch Service domain or OpenSearch Service domain using the Amazon Management console.

    • While in the Console, click on the search bar at the top, search for ‘Elasticsearch’, and click on the Elasticsearch menu item.

    Note that the Elasticsearch service has now been renamed to the OpenSearch service.

    Searching for Elasticsearch service
    • Creating an Amazon Elasticsearch domain is the same as creating an Elasticsearch cluster; that is, domains are clusters with the settings, instance types, instance counts, and storage resources that you specify. Click on Create a new domain.
    Creating an Amazon Elasticsearch domain
    • Next, select the deployment type as Development and testing.
    Choosing the deployment type.

    Next, select the below settings as defined below:

    • For Configure domain provide the Elasticsearch domain name as “firstdomain”. A domain is the collection of resources needed to run Elasticsearch. The domain name will be part of your domain endpoint.
    • For Data nodes, choose the t3.small.elasticsearch and ignore rest of the settings and click on NEXT.
    • For Network configuration, choose Public access.
    • For Fine-grained access control, choose Create master user and provide username as user and password as Admin@123. Fine-grained access control keeps your data safe.
    • For Domain access policy, choose Allow open access to the domain. Access policies control whether a request is accepted or rejected when it reaches the Amazon Elasticsearch Service domain.
    • Keep clicking on the NEXT button and create the domain; it takes a few minutes for the domain to launch.
    Viewing the Elasticsearch domain or Elasticsearch cluster endpoint
    • After the successful creation of the Elasticsearch domain, click on the firstdomain Elasticsearch domain.
    firstdomain Elasticsearch domain

    Uploading data in AWS Elasticsearch

    You can load streaming data into your Amazon Elasticsearch Service (Amazon ES) domain from many different sources like Amazon Kinesis Data Firehose, Amazon Cloud Watch Logs, Amazon S3, Amazon Kinesis Data Streams, Amazon DynamoDB, AWS Lambda functions as event handlers.

    • In this tutorial you will use sample data for the upload. To upload the sample data, go to the Elasticsearch domain URL, log in with the username user and password Admin@123, and then click on Add data.
    Adding data in Elasticsearch
    • Now use sample data and add e-commerce orders.
    Sample data to add e-commerce orders in Elasticsearch cluster

    Search documents in Kibana Dashboard

    Kibana is a popular open-source visualization tool that works with the AWS Elasticsearch service. It provides an interface to monitor and search the indexes. Let’s use Kibana to search the sample data you just uploaded in AWS ES.

    • Now in the Elasticsearch domain URL itself, Click on Discover option on the left side to search the data.
    Click on the Discover option.
    • Now you will notice that Kibana has the data that got uploaded. You can modify the timelines and many other fields accordingly.
    Viewing the data in the Kibana dashboard


    Kibana provided the data when we searched in the dashboard using the sample data you uploaded.

    Conclusion

    In this tutorial, you learned about the Elastic Stack, Elasticsearch, Logstash, the Kibana dashboard, and AWS Elasticsearch from scratch using the Amazon Management Console. Also, you learned how to upload sample data into AWS ES.

    Now that you have a strong understanding of ELK Stack, Elasticsearch, kibana, and AWS Elasticsearch, which site are you planning to monitor using ELK Stack and components?

    The Ultimate Guide: Getting Started with Groovy and Groovy Scripts

    Groovy is a powerful, dynamic language with static-typing and static-compilation capabilities for the Java platform, aimed at improving developer productivity. Groovy syntax is simple and easy. It saves a lot of code and effort compared with doing the same thing in Java, thus increasing developer productivity.

    In this tutorial you will learn what Groovy is and how to install Groovy on Windows and Linux machines. Later you will see two examples that help you kickstart writing Groovy scripts.

    Table of Content

    1. What is Groovy?
    2. Prerequisites
    3. How to Install Groovy on Windows Machine
    4. How to Install Groovy on Ubuntu Machine
    5. Groovy Syntax
    6. Groovy Examples
    7. Conclusion

    What is Groovy?

    Groovy is a powerful static as well as dynamic language that is almost the same as the Java language, with a few differences. The Groovy language is widely used in Jenkins pipelines. It integrates very well with Java libraries to deliver powerful enhancements and features, including domain-specific language authoring and scripting capabilities.

    Basic Features of Groovy

    • Groovy supports all Java libraries, and it has its own libraries as well.
    • It has a syntax similar to Java, but simpler.
    • It has both a static and a dynamic nature.
    • It has great extensibility for the language and tooling.
    • Last but not least, it is a free, open-source language that is used by lots of developers.

    Prerequisites

    • Ubuntu 18 Machine or Windows machine
    • Make sure to have Java 8 plus installed on machines. To check Java version run the following command.
    java --version
    On Ubuntu Machine
    On Windows Machine

    How to Install Groovy on Ubuntu Machine

    Installing Groovy on an Ubuntu machine is pretty straightforward. Let's install Groovy on an Ubuntu 18 machine.

    • First Update the ubuntu official repository by running the apt command.
    sudo apt update
    • Now, download and run the SDKMAN installation script (SDKMAN is used to install Groovy) by running the curl command.
    curl -s get.sdkman.io | bash
    • Now install Groovy using the sdk command, for example sdk install groovy.

    How to install Groovy on Windows machine

    • Download the Windows installer package from the Groovy downloads page; once you click on it, the file will download automatically.
    • Now click on the downloaded Windows installer package and the installation will begin.
    • Accept the license Agreement
    • Make sure you select Typical for Setup Type and click on Install
    • Now Groovy is successfully installed on windows machine. Open Groovy console from the Start menu & run a simple command to test.

    Groovy Syntax

    Shebang line

    • The shebang line allows you to run Groovy scripts directly from the command line, provided you have Groovy installed and the groovy command is available on the PATH.
    #!/usr/bin/env groovy
    println "Hello from the shebang line"

    Strings

    • Strings are basically chains of characters. Groovy strings are written with single quotes ', double quotes ", or even triple quotes '''.
    'This is an example of single line'
    
    "This is an example of double line"
    
    def threequotes = '''
    line1
    line2
    line3
    '''

    String interpolation

    Groovy expressions can be interpolated, which is just like replacing a placeholder with its value. Placeholders in Groovy are surrounded by ${} or prefixed with $. Also, if you pass a GString to any method where a String is required, you should convert it by calling toString() on it.

    def name  = "automate"
    def greet =  "Hello $name"

    Groovy Examples

    Lets now see two examples of Groovy

    1. JsonSlurper : JsonSlurper is a class that parses JSON text or reader content into Groovy data
      • creating instance of the JsonSlurper class
      • Using the parseText function of the JsonSlurper class to parse some JSON text
      • access the values in the JSON string via the key.
    import groovy.json.JsonSlurper 
    
    class Example {
       static void main(String[] args) {
          def jsonSlurper = new JsonSlurper() // creating an instance of the JsonSlurper class
          def object = jsonSlurper.parseText('{ "name":  "John", "ID" : "1"}') 
     	
          println(object.name);
          println(object.ID);
       } 
    }
    2. Catching Exceptions
      • Accessing an array with an index value that is greater than the size of the array.
    class Example {
       static void main(String[] args) {
          try {
             def arr = new int[3];
             arr[5] = 5;
          }catch(ArrayIndexOutOfBoundsException ex) {
             println("Catching the Array out of Bounds exception");
          }catch(Exception ex) {
             println("Catching the exception");
          }
    		
          println("Let's move on after the exception");
       } 
    }

    Conclusion

    This tutorial was pretty straightforward and meant to get you started with Groovy. In this tutorial you learned what Groovy is and how to install Groovy on Windows and Linux machines. Later you worked through two examples that help you kickstart writing Groovy scripts.

    Well, Groovy is used in various places such as Jenkins pipelines. What do you plan to code with Groovy next?

    The Ultimate Guide: Getting Started with GitLab

    With so much software development and testing happening around different applications and products, you certainly need an effective way to deploy them. With so many microservices and so much code, it becomes very crucial for developers and system engineers to collaborate and get a successful product ready.

    Managing the code is now very well taken care of by Git, which is a distributed code repository, but on top of it, deployment has become very effective and easily managed with the help of GitLab.

    In this tutorial you will learn all about GitLab, managing pipelines, projects, and much more that a DevOps engineer should know to get started.

    Table of Content

    1. What is GitLab?
    2. Prerequisites
    3. Creating Projects on GitLab
    4. Creating a Repository on GitLab
    5. Creating a Branch on GitLab
    6. Get started with GitLab CI/CD Pipelines
    7. Pipeline Architecture
    8. Conclusion

    What is GitLab?

    Git is a distributed version control system designed to handle small to large projects with speed and efficiency. On top of Git, GitLab is a fully integrated platform to manage the DevOps lifecycle and software development.

    It is a single application to manage the entire DevOps lifecycle.

    Prerequisites

    • You should have a GitLab account handy. If you don't have one, create it from here.

    Creating Projects on GitLab

    GitLab projects hold all the files , folders , code and all the documents you need to build your applications.

    • To create a project in GitLab, click on Projects at the top and then click on Create a project.
    • Now click on Create blank project.
    • On the Blank project tab, provide the project name; as this is a demo, we will keep this repository private.
    • Now the project is successfully created.
    • You are ready to add files, either by creating/uploading them manually on GitLab,
    • or by pushing the files from the command line, cloning the repository and adding the files as shown below.
    git clone https://gitlab.com/XXXXXXXXX/XXXXX.git
    cd firstgitlab
    touch README.md
    git add README.md
    git commit -m "add README"
    git push -u origin master

    Creating a Repository on GitLab

    A repository is a place where you store all your code and related files. It is part of a Project. You can create multiple repositories in a single project.

    To create a new repository, all you need to do is create a new project or fork an existing project. Once you create a new project, you can add new files via UI or via command line.

    Creating a Branch on GitLab

    • By now, you have seen GitLab project creation. By default, if you add any file it will be checked in to the master branch.
    • Click on New file, select Dockerfile, add the content, and then commit the file along with a commit message.
    • You will see that the Dockerfile is now added to the master branch under the FirstGitLab project.
    • So far we created a file which by default gets added to the master branch. But if you need a separate branch, click on Branches and then hit New branch.
    • Provide a name for the new branch.

    Get started with GitLab CI/CD Pipelines

    Before you start the CI/CD part on GitLab, make sure you have the following:

    • Runners: runners are agents that run your CI/CD jobs. To check the available runners, go to Settings > CI/CD and expand Runners. As long as you have at least one active, available runner, you will be able to run the job.
    • .gitlab-ci.yml file: in this file you define your CI/CD jobs, the decisions the runner should take under specific conditions, and the structure and order of the jobs. Go to Project overview, click on New file, and name it .gitlab-ci.yml.
    • Now paste the below content.
    build-job: 
        stage: build 
        script:
           - echo "Hello, $GITLAB_USER_LOGIN"
    test-job:
        stage: test
        script: 
           - echo "Testing CI/CD Pipeline"
    deploy-job:
        stage: deploy
        script:
           - echo "Deploy from the $CI_COMMIT_BRANCH branch" 
    
    • Now the pipeline should trigger automatically for this pipeline configuration. Click on Pipelines to validate it and view the status of the pipeline.
    • To view the details of a job, click the job name, for example build-job.
    • Pipelines can be scheduled to run automatically as and when required.

    Pipeline Architecture

Pipelines are the fundamental building blocks for CI/CD in GitLab. There are three main ways to structure your pipelines, each with its own advantages. These methods can be mixed and matched if needed:

• Basic: Good for straightforward projects where all the configuration is stored in one place. This is the simplest pipeline in GitLab. It runs everything in the build stage at the same time and, once all of those jobs finish, it runs everything in the test stage the same way, and so on.

If Build A completes, it waits for Build B, and once both are complete the pipeline moves to the test stage. Similarly, if Test B completes it waits for Test A, and once both are complete they move to the deploy stage.

• Directed Acyclic Graph (DAG): Good for large, complex projects that need efficient execution and where you want everything to run as quickly as possible.

If Build A and Test A are both complete, the pipeline moves on to the deploy stage even if Test B is still running.

• Child/Parent Pipelines: Good for monorepos and projects with lots of independently defined components. These pipelines are mostly driven by the trigger keyword (see the sketch below).
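To make the DAG and child/parent structures concrete, here is a minimal, illustrative .gitlab-ci.yml sketch. The needs and trigger keywords are standard GitLab CI features; the job names and the child pipeline path are placeholder examples, not taken from this guide.

stages:
  - build
  - test
  - deploy

build-a:
  stage: build
  script:
    - echo "Building component A"

test-a:
  stage: test
  needs: ["build-a"]            # DAG: start as soon as build-a finishes
  script:
    - echo "Testing component A"

deploy-a:
  stage: deploy
  needs: ["test-a"]             # does not wait for unrelated jobs
  script:
    - echo "Deploying component A"

trigger-component-b:            # parent job that launches a child pipeline
  trigger:
    include: component-b/.gitlab-ci.yml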

    Conclusion

    GitLab is the first single application for software development, security, and operations that enables continuous DevOps. GitLab makes the software lifecycle faster and improves the speed of business.

GitLab provides solutions for each of the stages of the DevOps lifecycle. So which application are you going to build?

Hope you learnt a lot from this guide and found it helpful. If you liked it, please share.

    Complete Python Course ( Python for beginners)

    Python’s standard library is very extensive, offering a wide range of facilities. The library contains built-in modules (written in C) that provide access to system functionality such as file I/O that would otherwise be inaccessible to Python programmers, as well as modules written in Python that provide standardized solutions for many problems that occur in everyday programming

In this tutorial, we will learn everything a beginner and a DevOps engineer should know in Python. We will cover the basic definition of Python and some useful examples, which will be enough to get you started with Python, and you will surely love it.

    Table of content

    1. What is Python?
    2. Prerequisites
3. Python Data Types
      • Python Numbers
      • Python Strings
      • Python Tuple
      • Python Lists
      • Python Dictionary
      • Python Sets
    4. Python variables
    5. Python Built-in functions
    6. Python Handling Exceptions
    7. Python Functions
    8. Python Searching
    9. Conclusion

    Python String

Python strings are a collection of characters surrounded by quotes " ". There are different ways in which strings can be declared, such as:

1. str() – this built-in function converts numbers, characters or other data into a string (see the short example after the template-string snippet below).
2. Directly declaring it in quotes – "Hello, this is method 2 to display string"
3. Template strings – Template strings are designed to offer a simple string substitution mechanism. These built-in methods work for tasks where simple word substitutions are necessary.
    from string import Template
    new_value = Template("$a b c d")       #  a will be substituted here
    x = new_value.substitute(a = "Automation")
    y = new_value.substitute(a = "Automate")
    print(x,y)
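For completeness, here is a small illustrative snippet of the first two declaration methods listed above (the values used are arbitrary examples):

# Method 1: str() converts other data into a string
number_as_string = str(42)
print(number_as_string)     # prints 42 as a string

# Method 2: declare the string directly in quotes
greeting = "Hello, this is method 2 to display string"
print(greeting)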

    Some Tricky Examples of declaring string

    Input String

    print('This is my string 1')   # Correct String
    print("This is my string 2")   # Correct String
    
    
    # Examples of Special characters inside the String such as quotes
    
    # print('Hello's Everyone')  # Incorrect Statement
    print('Hello\'s Everyone')   # Correct Statement after using escape (To insert characters that are illegal in a string, use an escape character. )
    print("Hello's Everyone")    # Correct Statement enclose within double quotes
    print('Hello "shanky')       # Correct Statement 
    print('Hello "shanky"')      # Correct Statement
    # print("Hello "S"shanky") # Incorrect Statement
    print("Hello ""shanky")  
    
    # No need to Escape if using triple quotes but proper use of triple quotes
    print(''''This is not a string "''')
    print('''Hello" how' are"" u " I am " f'ine'r''')
    print('''''Hello" how' are"" u " I am " f'ine'r''')
    print("""'''''Hello" how' are"" u " I am " f'ine'r""") 
    
    

    Output String

    This is my string 1
    This is my string 2
    Hello's Everyone
    Hello's Everyone
    Hello "shanky
    Hello "shanky"
    Hello shanky
    'This is not a string "
    Hello" how' are"" u " I am " f'ine'r
    ''Hello" how' are"" u " I am " f'ine'r
    '''''Hello" how' are"" u " I am " f'ine'r
    

    Python Tuple

Tuples: Tuples are an immutable, ordered sequence of items that cannot be modified. The items of a tuple are arbitrary objects, may be of different types, and duplicate values are allowed. For example:

    # 10,20,30,30 are fixed at respective index 0,1,2,3 positions 
    (10,20,30,30) or (3.14,5.14,6.14)
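As a quick illustration of immutability (a minimal sketch, not part of the original text), reading an item by index works, but assigning to an index raises a TypeError:

my_tuple = (10, 20, 30, 30)
print(my_tuple[1])          # 20 - items can be read by index
try:
    my_tuple[1] = 50        # tuples are immutable, so this fails
except TypeError as err:
    print("Cannot modify a tuple:", err)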

    Python Dictionaries

Dictionaries are written as key:value pairs, where the key is an expression giving the item’s key and the value is an expression giving the item’s value. A dictionary is a collection which is ordered (insertion-ordered as of Python 3.7), changeable and does not allow duplicate keys.

    # Dictionary with three items where x,y and z are keys.
    # where x,y and z have 42, 3.14 and 7 as the values.
    {'x':42, 'y':3.14, 'z':7} 

    Python Sets

Sets: A set stores multiple items in a single variable. It contains unordered and unindexed data. Sets cannot have two items with the same value.

    {"apple", "banana", "cherry"}
Data type        Mutable or Immutable
String           Immutable (cannot be modified)
Tuples           Immutable (cannot be modified)
Integers         Immutable (cannot be modified)
List             Mutable (can be modified)
Sets             Mutable (can be modified)
Floating point   Immutable (cannot be modified)
Dictionaries     Mutable (can be modified)

    Python variables

Variables store information that can be referenced later; a variable's value could be a number, a symbol, a name and so on. Let's see some examples of Python variables.

• There are a few points one must remember when using variables:
  • Variables cannot start with digits
  • Spaces are not allowed in variable names.
  • Avoid using Python keywords

    Example 1:

• In the below example, var is a variable and the value of var is "this is a variable"
    var="this is a variable" # Defining the variable
    print(var)    # Printing the value of variable

    Example 2:

• In the below example we are declaring three variables.
  • first_word and second_word store the values
  • add_words substitutes the variables with their values
    first_word="hello"
    second_word="devops"
    add_words=f"{first_word}{second_word}"
    print(add_words)
• If you wish to print the words on different lines, use "\n" as below
    first_word="hello"
    second_word="devops"
    add_words=f"{first_word}\n{second_word}"
    print(add_words)

    Dictionary

In simple words, dictionaries are key-value pairs where keys can be a number, a string or a custom object. A dictionary is represented as key-value pairs separated by commas within curly braces.

    map = {'key-1': 'value-1', 'key-2': 'value-2'}
• You can access a particular key in the following way
    map['key-1']

Let's see an example of accessing values using the get() method

    my_dictionary = {'key-1': 'value-1', 'key-2': 'value-2'}
    my_dictionary.get('key-1')    # It will print value of key-1 which is value-1
    print(my_dictionary.values()) # It will print values of each key
    print(my_dictionary.keys())   # It will print keys of each value
    my_dictionary.get('key-3')    # It will not print anything as key-3 is missing
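Note that get() returns None when the key is missing instead of raising a KeyError; you can also pass a default value as the second argument. Continuing with the my_dictionary defined above (an illustrative addition):

print(my_dictionary.get('key-3'))                   # prints None, no exception raised
print(my_dictionary.get('key-3', 'default-value'))  # prints the supplied default instead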

    Python Built-in functions

There are various single-line commands which are already embedded in the Python library; these are known as built-in functions. You invoke a function by typing the function name, followed by parentheses.

    • To check the Python version on windows or Linux machine run the following command.
    python3 --version
    • To print the output of a program , use the print command.
    print("Hello Devops")
• To generate a list of numbers through the range built-in function, run the following command.
    list(range(0,10))

    Handling Exceptions

Exceptions are errors which cause a program to stop if they are not handled properly. There are many built-in exceptions, such as IOError, KeyError, and ImportError. Let's see a simple example below.

• Here we defined a list of characters and stored it in a variable devops
• Now, while True means that as long as the condition is true, the try block keeps executing.
• .pop() is a built-in method that removes the items one by one.
• In our case, as soon as all the characters have been removed, the except block catches the IndexError and prints the message.
    devops = ['d','e','v','o','p','s']
     
    while True:
        try:
            devop = devops.pop()
            print(devop)
        except IndexError as e:
            print("I think I did lot of pop ")
            print(e)
            break
     
    Output:
     
    s
    p
    o
    v
    e
    d
    I think I did lot of pop
    pop from empty list
    

    Python Functions

Earlier in this tutorial we saw that there are numerous built-in functions, and you used some of them above. But you can also define and create your own functions. Let's see the syntax of a function.

    def <FUNCTION NAME>(<PARAMETERS>):
        <CODE BLOCK>
    <FUNCTION NAME>(<ARGUMENTS>)

Let's look at some Python function examples

    EXAMPLE 1

• Here each argument uses the order of the arguments to assign values; these are known as positional arguments.
• a and b are parameters which are required to run the function
• 1 and 2 are arguments which pass values to the function (arguments are pieces of information that are passed from a function call to a function)
    def my_function(a,b):
      print(f" value of a is {a}")
      print(f" value of b is {b}")
    my_function(1, 2)

    EXAMPLE 2:

• With default arguments, you can assign each parameter a default value that is used when no argument is passed:
    def my_function(a=3,b=4):
      print(f" value of a is {a}")
      print(f" value of b is {b}")
    my_function()
    

    EXAMPLE 3

Passing an arbitrary number of arguments. When you are not sure about the number of parameters to be passed, we call it arbitrary. Let's look at an example.

• Find the even numbers among the arguments
    
    mylist = []
    def myfunc(*args):      #  args is to take any number of arguments together in myfunc
        for item in args:
            if int(item)%2 == 0:
                mylist.append(item)
        print(mylist)
    myfunc(5,6,7,8,9)

    EXAMPLE 4

• IF statement: print the lesser of the two numbers if both numbers are even, else print the greater of the two numbers
    
    def two_of_less(a,b):    # Defining the Function where a and b variables are parameters
        if a%2==0 and b%2==0:
          print(min(a,b))       # using built in function min()
        if a%2==1 or b%2==1:
          print(max(a,b))       # using built in function max()
    two_of_less(2,4)
    

    EXAMPLE 5

• Write a function that takes a two-word string and checks whether both words begin with the same letter
    
    def check(a):
        m = a.split()
        if m[0][0] == m[1][0] :
         print("Both the Words in the string starts with same letter")
        else:
         print("Both the Words in the string don't start with same letter")    
    check('devops Engineer')
    

    Python Searching

    The need to match patterns in strings comes up again and again. You could be looking for an identifier in a log file or checking user input for keywords or a myriad of other cases.

    Regular expressions use a string of characters to define search patterns. The Python re package offers regular expression operations similar to those found in Perl.

Let's look at an example which will give you an overall picture of the built-in functions we can use with the re module.

    • You can use the re.search function, which returns a re.Match object only if there is a match.
    import re
    import datetime
     
    name_list = '''Ezra Sharma <esharma@automateinfra.com>,
       ...: Rostam Bat   <rostam@automateinfra.com>,
       ...: Chris Taylor <ctaylor@automateinfra.com,
       ...: Bobbi Baio <bbaio@automateinfra.com'''
     
    # Some commonly used ones are \w, which is equivalent to [a-zA-Z0-9_] and \d, which is equivalent to [0-9]. 
    # You can use the + modifier to match for multiple characters:
     
    print(re.search(r'Rostam', name_list))
    print(re.search('[RB]obb[yi]',  name_list))
    print(re.search(r'Chr[a-z][a-z]', name_list))
    print(re.search(r'[A-Za-z]+', name_list))
    print(re.search(r'[A-Za-z]{5}', name_list))
    print(re.search(r'[A-Za-z]{7}', name_list))
    print(re.search(r'[A-Za-z]+@[a-z]+\.[a-z]+', name_list))
    print(re.search(r'\w+', name_list))
    print(re.search(r'\w+\@\w+\.\w+', name_list))
    print(re.search(r'(\w+)\@(\w+)\.(\w+)', name_list))
     

    OUTPUT

    <re.Match object; span=(49, 55), match='Rostam'>
    <re.Match object; span=(147, 152), match='Bobbi'>
    <re.Match object; span=(98, 103), match='Chris'>
    <re.Match object; span=(0, 4), match='Ezra'>
    <re.Match object; span=(5, 10), match='Sharm'>
    <re.Match object; span=(13, 20), match='esharma'>
    <re.Match object; span=(13, 38), match='esharma@automateinfra.com'>
    <re.Match object; span=(0, 4), match='Ezra'>
    <re.Match object; span=(13, 38), match='esharma@automateinfra.com'>
    <re.Match object; span=(13, 38), match='esharma@automateinfra.com'>
    

    Ultimate Guide on how to add apt-repository and PPA repositories and working with ubuntu repository

As a Linux administrator it is very important to know how you manage your applications and software. Every command and every package installation requires careful attention before executing it.

So in this ultimate guide we will learn everything you should know about Ubuntu repositories: how to add apt repositories and PPA repositories, and how to work with Ubuntu repositories and apt commands.

    Table of Content

    1. What is ubuntu repository?
    2. How to add a ubuntu repository?
    3. Manually Adding apt-repository in ubuntu
    4. Adding PPA Repositories
    5. Working with Ubuntu repositories
    6. How apt or apt-get command work with Ubuntu Repository
    7. Conclusion

    What is ubuntu repository?

An APT repository is a network server or a local directory containing deb packages and metadata files that are readable by the APT tools. When installing packages using the Ubuntu Software Center or command line utilities such as apt or apt-get, the packages are downloaded from one or more apt software repositories.

    On Ubuntu and all other Debian based distributions, the apt software repositories are defined in the /etc/apt/sources.list file or in separate files under the /etc/apt/sources.list.d/ directory.

    The names of the repository files inside the /etc/apt/sources.list.d/ directory must end with .list.
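For example, a repository definition file under /etc/apt/sources.list.d/ typically looks like the following (the file name shown is an illustrative placeholder; the deb line matches the MongoDB repository used later in this guide):

# /etc/apt/sources.list.d/mongodb-org-4.0.list  (example file name)
deb [arch=amd64] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse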

    How to add apt-repository in ubuntu ?

add-apt-repository is basically a Python script that helps in adding repositories to Ubuntu.

Let's take an example and add the MongoDB repository to an Ubuntu machine.

• The add-apt-repository utility is included in the software-properties-common package.
    sudo apt update
    sudo apt install software-properties-common
    • Import the repository public key by running apt-key command
    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4
    • Add the MongoDB repository using the command below.
    sudo add-apt-repository 'deb [arch=amd64] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse'
• Verify in /etc/apt/sources.list that the repository has been added successfully

    Manually Adding apt-repository in ubuntu

To add repositories manually in Ubuntu, edit the /etc/apt/sources.list file and add the apt repository line to the file.

    To add the repository open the sources.list file with your favorite editor

    sudo vi /etc/apt/sources.list
    

    Add the repository line to the end of the file:

deb [arch=amd64] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse
• If you also need to add the public key manually, you can use the wget or curl command, as shown below
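A minimal sketch of importing a key manually (the key URL below is a placeholder; substitute the signing key published by the repository vendor):

# Download the repository signing key and register it with apt (placeholder URL)
wget -qO - https://example.com/repo-signing-key.asc | sudo apt-key add -

# or, equivalently, with curl
curl -fsSL https://example.com/repo-signing-key.asc | sudo apt-key add -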

    Adding PPA Repositories

Personal Package Archives (PPA) allow you to upload Ubuntu source packages that are built and published with Launchpad as an apt repository.

    When you add a PPA repository the add-apt-repository command creates a new file under the /etc/apt/sources.list.d/ directory.

Let's take an example and add the Ansible PPA repository to an Ubuntu machine.

• The PPA utility is included in the software-properties-common package, similar to add-apt-repository
    sudo apt update
    sudo apt install software-properties-common
    • Add PPA ansible Repository in the system.
    sudo apt-add-repository --yes --update ppa:ansible/ansible 
    #  PPA is Personal Package Archive 
    
• Let's check that the directory /etc/apt/sources.list.d/ now has the Ansible PPA repository

    Working with Ubuntu repositories

Repositories on an Ubuntu machine are basically file servers or network shares containing lots of packages; these can be .deb packages or other files readable by the apt or apt-get command. They are defined in:

/etc/apt/sources.list or

/etc/apt/sources.list.d

What do sources.list and sources.list.d contain?

• Software in Ubuntu’s repository is divided into four categories or components – main, restricted, universe and multiverse.
  • main: contains applications that are free software and fully supported by Ubuntu.
  • multiverse: contains software that is not free and requires a license.
  • restricted: contains proprietary software and drivers; the Ubuntu team cannot fix such software itself, and any issues found have to be passed back to the author.
  • universe: contains all the other free and open-source software, but Ubuntu does not guarantee regular patches for it.
    deb http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ bionic main restricted
    deb-src http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ bionic main restricted
    deb http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ bionic-updates main restricted
    deb-src http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ bionic-updates main restricted
    deb http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ bionic universe
    deb-src http://security.ubuntu.com/ubuntu bionic-security multiverse
• deb or deb-src indicates either .deb packages or source code
• http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ is the repository URL
• bionic, bionic-updates and bionic-security are the distribution code names
• main, restricted, universe and multiverse are the repository categories.

    How apt or apt-get command work with Ubuntu Repository

APT stands for Advanced Package Tool. It performs functions such as installing new software packages, upgrading existing software packages, updating the package list index, and even upgrading the entire Ubuntu system, by connecting to the repositories defined under /etc/apt/sources.list or /etc/apt/sources.list.d/

    Let us see an example of how apt command works with ubuntu repositories.

    • Install below three packages
    apt install curl
    
    apt install wget
    
    apt install telnet
• You will notice that all the above packages are already installed, up to date and at the latest version
• Now run the apt update command to update the repositories. The apt update output contains three types of lines:
  • Hit: there is no change in the package version from the previous version
  • Ign: the package is being ignored.
  • Get: a new version is available. apt downloads the information about the version (not the package itself); you can see the download size (in kB) on the ‘Get’ lines.
    apt update
• After the command completes it reports whether any packages can be upgraded. In our case it shows that 37 packages can be upgraded. Let's see the list of packages which can be upgraded by running the following command.
    apt list --upgradable

    You can either upgrade a single package or upgrade all packages together.

    To upgrade a single package use : apt install <package-name>

    To upgrade all packages use : apt upgrade

• Let's just update the curl package by running the apt install command and verify
     apt install curl
• You will notice that updating curl upgraded 2 packages which were related to curl, while the remaining 35 are still not upgraded.
• Now, let's upgrade the remaining 35 packages together by running the apt upgrade command.
    apt upgrade
• Let's run the apt update command again to verify whether Ubuntu still requires any software to be upgraded. The command output should now say “All packages are up to date”.
    apt update

    Conclusion

In this tutorial we learnt everything about Ubuntu repositories, how to add various repositories and how to work with them. Finally, we saw how the apt command works with Ubuntu repositories.

This ultimate guide should give you a strong understanding of package management, which is one of the most important skills for a Linux administrator. Hope you liked this tutorial and found it helpful. Please share.

    How to Connect Windows to Linux and Linux to Windows using PowerShell 7 SSH Remoting ( PS Remoting Over SSH)

PowerShell Remoting has various benefits. It started on Windows, where administrators used it to work remotely with large numbers of Windows machines over the WinRM protocol. With automation and Unix distributions spreading across the world and being required by every IT engineer, PowerShell introduced PSRemoting over SSH in PowerShell 7 to connect Windows to Linux and Linux to Windows remotely.

In this tutorial we will learn how to set up PS Remoting on a Windows machine and on a Linux machine using PS Remoting over SSH (supported in PowerShell 7). Finally we will connect both Windows to Linux and Linux to Windows. Let's get started.

    Table of Content

    1. What is PSRemoting or PowerShell Remoting Over WinRM?
    2. What is PSRemoting or PowerShell Remoting Over SSH?
    3. Prerequisites
    4. Step by step set up SSH remoting on Windows
    5. Step by step set up SSH remoting on Ubuntu
    6. Test the OpenSSH connectivity from Windows machine to Linux using PSRemoting
    7. Test the OpenSSH connectivity from Linux to Windows machine using PSRemoting
    8. Conclusion

    What is PSRemoting or PowerShell Remoting?

PowerShell Remoting is a feature of PowerShell. With PowerShell Remoting you can connect to a single server or to many servers at the same time.

    PS Remoting Over SSH (Windows to Linux and Windows to Windows)

    WS-Management or Web services management or WS-Man provides a common way for systems to access and exchange management information across the IT infrastructure.

Microsoft implemented WS-Management (WS-Man) in WinRM, that is, Windows Remote Management, which allows hardware and operating systems from different vendors to connect to each other. For WinRM to obtain data from remote computers, you must configure a WinRM listener. A WinRM listener can work on both the HTTP and HTTPS protocols.

    PS Remoting Over WinRM (Linux to Windows)

When PowerShell Remoting takes place between two servers, that is, when one server tries to run commands remotely on the other server, the source server connects to the destination server on the WinRM listener. To configure PSRemoting on a local machine or remote machine please visit the link

    What is PSRemoting or PowerShell Remoting Over SSH?

Microsoft introduced PowerShell 7 remoting over SSH, which allows true multiplatform PowerShell remoting between Linux, macOS and Windows. PowerShell SSH remoting creates a PowerShell host process on the target machine as an SSH subsystem. Normally, Windows PowerShell remoting uses WinRM for connection negotiation and data transport. However, WinRM is only available on Windows-based machines. That means Linux machines can connect to Windows, or Windows can connect to Windows over WinRM, but Windows cannot connect to Linux.

With PowerShell 7 remoting over SSH it is now possible to remote between Linux, macOS and Windows.

    PS Remoting Over SSH ( Windows to Linux , Linux to Windows)

    Prerequisites

• Microsoft Windows Server 2019 Standard. This machine should also have PowerShell 7 installed. If you don’t have PowerShell installed, please follow here to install it.
• Make sure you have a local account set up on the Windows Server 2019 machine. We will be using the “automate” user.
• Make sure you have set a password for the ubuntu user on the Ubuntu machine; if you already have one, ignore this step.
• Ubuntu machine with PowerShell 7 installed.

    Step by step set up SSH remoting on Windows

Here we will discuss how to set up SSH remoting on a Windows machine and run the PSRemoting commands.

• Assuming you are on a Windows 2019 Standard machine with PowerShell 7 installed, let's verify it once.
• Before SSH is set up on the Windows machine, if you try to open an SSH session with a Linux machine you will receive an error message like this.
• The next step is to install the OpenSSH client and server on the Windows 2019 Standard server. Let's use the PowerShell cmdlet Add-WindowsCapability and run the commands.
    Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0
     
    Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
• Once OpenSSH is installed successfully, we need to start the OpenSSH service.
    Start-Service sshd
    Set-Service sshd -StartupType Automatic
• Now, edit the OpenSSH configuration file sshd_config, located in C:\Windows\System32\OpenSSH or in C:\ProgramData\ssh\sshd_config, by adding a Subsystem entry for PowerShell.
    Subsystem powershell c:/progra~1/powershell/7/pwsh.exe -sshs -NoLogo -NoProfile
    • Also make sure OpenSSH configuration file sshd_config has PasswordAuthentication set to yes
    • Restart the service
    Restart-Service sshd
    • SSH remoting is now properly set on Windows Machine

    Step by step set up SSH remoting on Ubuntu

Previously we configured SSH remoting on the Windows machine; now we need to perform similar steps on the Ubuntu machine with a few commands.

    • PowerShell 7 must be installed on ubuntu machine
    • Install OpenSSH client and server on ubuntu machine
    sudo apt install openssh-client
    sudo apt install openssh-server
• Similarly, edit the sshd_config file on the Ubuntu machine
    vi /etc/ssh/sshd_config
• Paste the below content (add the Subsystem entry for PowerShell) and make sure PasswordAuthentication is set to yes
    Subsystem powershell /usr/bin/pwsh -sshs -NoLogo -NoProfile
    • Restart the service
    sudo service sshd restart

    Test the OpenSSH connectivity from Windows machine to Linux using PSRemoting

Now that the Windows and Ubuntu SSH remoting steps are done, let's verify the SSH connectivity from the Windows machine to the Ubuntu machine.

    Verification Method 1

• Create a session, then enter the session and run commands from Windows PowerShell to Linux PowerShell
    New-PSSession -Hostname  54.221.35.44 -UserName ubuntu # Windows to Linux Create Session
    
    Enter-PSSession -Hostname 54.221.35.44 -UserName ubuntu # Windows to Linux Enter Session
    

    Verification Method 2

• Create the session and then test the connectivity from the Windows machine to Linux using the Invoke-Command cmdlet
    $SessionParams = @{
         HostName = "54.221.35.44"
         UserName = "ubuntu"
         SSHTransport = $true
     }
    Invoke-Command @SessionParams -ScriptBlock {Get-Process}

    Test the OpenSSH connectivity from Linux to Windows machine using PSRemoting

Let's verify the SSH connectivity from the Ubuntu machine to the Windows machine.

    • Open PowerShell on ubuntu machine with following command
    pwsh
    • Although you are on ubuntu machine lets verify the ubuntu version [Optional Step]
    • Now SSH into Windows machine using following command
    ssh automate@3.143.233.234
• Here we go. You can clearly see that we have SSH'd into the Windows machine successfully

    Conclusion

PowerShell Remoting has various benefits. It started on Windows, where administrators used it to work remotely with large numbers of Windows machines over the WinRM protocol. With automation and Unix distributions spreading across the world and being required by every IT engineer, and to solve the problem of connecting Windows to Linux and Linux to Windows, PowerShell introduced PSRemoting over SSH to connect Windows to Linux and Linux to Windows remotely with an easy setup.

    Hope you find this tutorial helpful. If you like please share it with your friends.

    What is PSRemoting or PowerShell Remoting and how to Enable PS Remoting

PSRemoting, or PowerShell Remoting, is PowerShell-based remoting which allows you to connect to one or thousands of remote computers and execute commands on them. PSRemoting lets you sit in one place and execute commands on remote machines as if you were physically working on those servers.

In this tutorial you will learn what PS Remoting, that is PowerShell Remoting, is and how to enable PowerShell Remoting locally and on remote machines.

    Table of Content

    1. What is PSRemoting or PowerShell Remoting?
    2. Prerequisites
    3. How to Enable PS Remoting Locally on system?
    4. How to Enable PS Remoting on remote system?
    5. Conclusion

    What is PSRemoting or PowerShell Remoting?

PowerShell Remoting is a feature of PowerShell. With PowerShell Remoting you can connect to a single server or to many servers at the same time.

    WS-Management or Web services management or WS-Man provides a common way for systems to access and exchange management information across the IT infrastructure.

Microsoft implemented WS-Management (WS-Man) in WinRM, that is, Windows Remote Management, which allows hardware and operating systems from different vendors to connect to each other. For WinRM to obtain data from remote computers, you must configure a WinRM listener. A WinRM listener can work on both the HTTP and HTTPS protocols.

When PowerShell Remoting takes place between two servers, that is, when one server tries to run commands remotely on the other server, the source server connects to the destination server on the WinRM listener.

    How to check WinRM listeners on Windows Host?

    To check the WinRM listeners on windows host use the following command

     winrm e winrm/config/listener

    Prerequisites

• Make sure you have a Windows machine with PowerShell 7 installed. If you don’t have it, install it from here.

    How to Enable PS Remoting Locally on system?

    There are two ways in which you can enable PSRemoting on the local machine.

    Use Enable-PSRemoting to Enable PS Remoting Locally on system

• Invoke the command Enable-PSRemoting; this performs the following functions
  • Starts the WinRM service
  • Creates a listener on port 5985 for HTTP
  • Registers and enables the PowerShell session configurations
  • Sets PowerShell sessions to allow remote sessions.
  • Restarts the WinRM service
    
    Enable-PSRemoting  # By Default its enabled in Windows
• On a server OS, like Windows Server 2019, the firewall rule for Public networks allows remote connections from other devices on the same network. On a client OS, like Windows 10, you will receive an error stating that you are on a public network.
    Command Ran on Windows 2019 server
    Command Ran on Windows 10 Machine
• If you want to ignore the error caused by the Public network profile on a client OS like Windows 10, use the following command
    Enable-PSRemoting -SkipNetworkProfileCheck

    Use WinRM to Enable PS Remoting Locally on system

• We can use the winrm quickconfig command as well to enable PS Remoting on the local machine
    winrm quickconfig

    How to Enable PS Remoting on remote system?

    There are two ways in which you can enable PSRemoting on the remote machine.

    Use PS exec to Enable PS Remoting on remote system

• Using PsExec you can run a command on a remote machine after connecting to it. When you run the PsExec command, it initializes a PowerShell session on the remote machine and then runs the command.
    .\psexec.exe \\3.143.113.23 -h -s powershell.exe Enable-PSRemoting -Force # 3.143.113.23 is remote machine's IP address

    Use WMI to Enable PS Remoting on remote system

You can also use PowerShell and the Invoke-CimMethod cmdlet. Using Invoke-CimMethod, you can instruct PowerShell to connect to the remote computer over DCOM and invoke methods on it.

    $SessionArgs = @{
         ComputerName  = 'WIN-U22NTASS3O7'
         Credential    = Get-Credential
         SessionOption = New-CimSessionOption -Protocol Dcom
     }
     $MethodArgs = @{
         ClassName     = 'Win32_Process'
         MethodName    = 'Create'
         CimSession    = New-CimSession @SessionArgs
         Arguments     = @{
             CommandLine = "powershell Start-Process powershell -ArgumentList 'Enable-PSRemoting -Force'"
         }
     }
     Invoke-CimMethod @MethodArgs

    Conclusion

In this tutorial, you have learned what PSRemoting is and how to enable PSRemoting with various methods, both locally and on remote machines. This gives you a great opportunity to automate work across many remote machines.

    How to Install PowerShell 7.1.3 on Ubuntu and Windows Machine Step by Step.

With so many Windows and Linux administrators in the world, automation has always been a top requirement. PowerShell is one of the most widely used command-line shells and gives you a strong ability to perform tasks on any remote operating system very easily.

In this tutorial we will go through the basic definition of PowerShell, the benefits and features of PowerShell, and finally how to install the latest PowerShell on both Windows and Ubuntu machines.

    Table of content

    1. What is PowerShell?
    2. Working with PowerShell
    3. Install PowerShell 7.1.3 on Windows Machine
    4. How to Install PowerShell 7.1.3 on Ubuntu Machine
    5. Conclusion

    What is PowerShell?

PowerShell is a command-line tool, or command-line shell, which helps in automating various tasks, allows you to run scripts and helps you manage a variety of configurations. PowerShell runs on Windows, Linux and macOS.

PowerShell is built on the .NET Common Language Runtime (CLR). It currently uses .NET 5.0 as its runtime.

    Features of PowerShell

• It provides tab completion
• It works with all .NET Framework objects
• It allows pipelines of commands.
• It has built-in support for various file formats such as JSON, CSV and XML (see the short example below)
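As a quick, illustrative sketch of the last two features, the following pipeline chains a few standard cmdlets and converts the result to JSON (the process properties selected here are arbitrary):

# Take the three busiest processes and emit them as JSON
Get-Process |
    Sort-Object CPU -Descending |
    Select-Object -Property Name, Id, CPU -First 3 |
    ConvertTo-Json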

    Working with PowerShell

PowerShell is a command-line tool, or command-line shell, which was originally meant for Windows automation, but it has grown widely and been upgraded with lots of features and benefits. Let's check out some of the key benefits.

• PowerShell can be used for cloud management, such as retrieving or deploying new resources.
• PowerShell can be used with continuous integration and continuous deployment pipelines, i.e. CI/CD
• PowerShell is now widely used by DevOps and SysOps engineers.
• PowerShell comes with hundreds of preinstalled commands
• PowerShell commands are called cmdlets

To check the version of PowerShell there are various commands, but let's run the following

    $PSVersionTable.PSVersion

    Install PowerShell on Windows Machine

    By default PowerShell is already present on the windows machine. To verify click on start bar and look for PowerShell.

    • Verify the current version of PowerShell by running the following command.
    Get-Host | Select-Object Version
• Download the PowerShell 7.1.3 release package for Windows (if you have not already) and extract the downloaded binary on the desktop
• Execute the pwsh.exe
• Now you should see PowerShell version 7.1.3 when you run the following command.
• Let's verify PowerShell by invoking Get-Command

    How to Install PowerShell 7.1.3 on Ubuntu Machine

We will install PowerShell on Ubuntu 18.04 via the package repository. So let's dive in and start.

    • Update the list of packages
    sudo apt-get update
    • Install pre-requisite packages.
    sudo apt-get install -y wget apt-transport-https software-properties-common
    • Download the Microsoft repository GPG keys
    wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb

The apt software repositories are defined in the /etc/apt/sources.list file or in separate files under the /etc/apt/sources.list.d/ directory.
• Register the Microsoft repository GPG keys. You will notice that as soon as we run the below command, a repository is added inside the /etc/apt/sources.list.d directory
    sudo dpkg -i packages-microsoft-prod.deb
    • Update the Repository again
    sudo apt-get update
    • Enable the “universe” repositories
    sudo add-apt-repository universe
    • Install PowerShell
    sudo apt-get install -y powershell
    • Start PowerShell
    pwsh
    • Lets verify PowerShell by invoking the Get-Command

    Conclusion

This tutorial is pretty straightforward and should get you started with PowerShell. In this tutorial we defined what PowerShell is and what its benefits are. Later we installed the latest PowerShell 7.1.3 on both Ubuntu and Windows machines. Hope this tutorial helps you with the PowerShell setup, and please share it if you like.

    Brilliant Guide to Check all possible ways to view Disk usage on Ubuntu Machine

Monitoring application or system disk utilization has always been a top and crucial responsibility of any IT engineer. In the IT world, with its various software, automation and tools, it is very important to keep track of disk utilization regularly.

Having said that, in this tutorial we will show you the best commands and tools to work with disk utilization. Please follow along to read and see these commands and their usage.

    Table of content

    1. Check Disk Space using disk free or disk filesystems command ( df )
    2. Check Disk Space using disk usage command ( du )
    3. Check Disk Usage using ls command
    4. Check Disk Usage using pydf command
    5. Check Disk Usage using Ncdu command( Ncurses Disk Usage )
    6. Check Disk Usage using duc command
7. Conclusion

    Check Disk Space using disk free or disk filesystems command (df)

It stands for disk free. This command gives us information about the available and used space on a file system. There are multiple parameters which can be passed along with this utility to provide additional output. Let's look at some of the commands from this utility.

    • To see all disk space available on all the mounted file systems on ubuntu machine.
    df
• To see the disk space available on all the mounted file systems on an Ubuntu machine in human readable format.
  • You will notice a difference between this command's output and the previous one: instead of 1K-blocks you will see Size, which is human readable.
    df -h
    • To check the disk usage along with type of filesystem
    df -T
    • To check disk usage of particular Filesystem
    df /dev/xvda1
    • To check disk usage of multiple directories.
    df -h  /opt /var /etc /lib
    • To check only Percent of used disk space
    df -h --output=source,pcent
• To check disk usage filtered by filesystem type
    df -h -t ext4

    Check Disk Space using disk usage command ( du )

The du command provides disk usage information for files and directories. Let's see some examples.

    • To check disk usage of directory
    du /lib # Here we are taking lib directory
    • To check disk usage of directory with different block size type .
      • M for MB
      • G for GB
      • T for TB
    du -BM /var
• To check disk usage sorted by size
  • Here s stands for summarize
  • Here k reports sizes in KB; you can use M, G or T and so on
  • Here sort sorts the output
  • Here n sorts in numerical order
  • Here r sorts in reverse (descending) order
    du -sk /opt/* | sort -nr

    Check Disk Usage using ls command

The ls command is used for listing files, but it also provides information about the disk space used by directories and files. Let's see some of these commands.

    • To list the files in human readable format.
    ls -lh
• To list the files in descending order of file size.
ls -lS

    Check Disk Usage using pydf command

pydf is a Python-based command-line tool which is used to display disk usage with different colors. Let's dive into the command now.

    • To check the disk usage with pydf
    pydf -h 

    Check Disk Usage using Ncdu command (Ncurses Disk Usage)

Ncdu is a disk utility for Unix systems. It provides a text-based user interface built on the ncurses programming library. Let us see a command from Ncdu.

    ncdu

    Check Disk Usage using duc command

Duc is a collection of command-line utilities that index disk usage into a database and can create, maintain and query that database.

    • Before we run a command using duc be sure to install duc package.
    sudo apt install duc
• duc is successfully installed; now let's run a command
    duc index /usr
    • To list the disk usage using duc command with user interface
    duc ui /usr

    Conclusion

There are various ways to identify and view disk usage on a Linux or Ubuntu operating system. In this tutorial we learnt and showed the best commands and disk utilities to work with. Now you are ready to troubleshoot disk usage issues, work with your files or applications, and identify disk utilization.

Hope this tutorial gave you an in-depth understanding of the best commands to work with disk usage, and hoping you never face any disk issues in your organization. Please share if you like.

    How to Delete EBS Snapshots from AWS account using Shell script

AWS EBS, that is Elastic Block Store, is a very important and useful service provided by AWS. It is persistent storage that is used with various applications deployed on AWS EC2 instances. Automation plays a vital role in provisioning and managing all the infrastructure and related components.

Having said that, in this tutorial we will learn what AWS EBS and AWS EBS snapshots are, many useful things about storage types, and how to delete EBS snapshots using a shell script on AWS, step by step.

    AWS EBS is your pendrive for instances, always use it when necessary and share with other instances.

    Table of Content

    1. What is Shell script ?
    2. What is AWS EBS ?
    3. What are EBS Snapshots in AWS ?
    4. Prerequisites
    5. Install AWS CLI Version 2 on windows machine
    6. How to Delete EBS Snapshots from AWS account using shell script
    7. Conclusion

    What is Shell Scripting or Bash Scripting?

A shell script is simply a text file with a list of commands that could also be executed one by one on a terminal or shell. To make things a little easier and run them together as a group, and quickly, we write them in a single file and run that file.

The main tasks performed by shell scripts are file manipulation, printing text and program execution. We can include environment variables in a script so they can be used in multiple places; scripts that mainly set up an environment, run other programs and perform such supporting activities are known as wrapper scripts.

A good shell script will have comments, preceded by a pound sign or hash mark (#), describing the steps. We can also include conditions or pipe commands together to make more creative scripts (see the short sketch below).

When we execute a shell script or function, a command interpreter goes through the ASCII text line by line, loop by loop, test by test, and executes each statement as each line is reached, from top to bottom.
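Here is a minimal, illustrative wrapper-style script showing a comment, a variable, a condition and a pipe (the threshold and the script's purpose are arbitrary examples):

#!/usr/bin/env bash
# Simple wrapper script: report how full the root filesystem is

THRESHOLD=80    # percentage considered "almost full"

# df prints the usage column for /; tail and tr strip the header and the % sign
usage=$(df --output=pcent / | tail -1 | tr -d ' %')

if [ "$usage" -ge "$THRESHOLD" ]; then
    echo "Warning: root filesystem is ${usage}% full"
else
    echo "Root filesystem usage is ${usage}% - all good"
fi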

    What is EBS ?

EBS stands for Amazon Elastic Block Store, which is persistent storage just like your pendrive or hard disk. You can mount EBS volumes on AWS EC2 instances. It is very much possible to create your own file system on top of these EBS volumes.

EBS volumes are mounted on AWS EC2 instances but are not dependent on the AWS EC2 instance's life; they remain persistent.

    Amazon Elastic block store (EBS)

    Key features of EBS

• EBS volumes can be created in any Availability Zone
• An EBS volume cannot be directly attached to an instance in a different Availability Zone. We would need to create a snapshot, which is like a backup copy, restore that snapshot to a new volume, and then finally use that volume in the other Availability Zone.

    What are HDD and SSD storage ?

    HDD ( Hard disk drive )

A hard disk drive is an older technology. It depends on spinning platters to read and write data. A motor spins the platter whenever a request comes in to read or write data. A platter contains tracks, and each track contains several sectors. These drives run slowly and they are less costly.

    SSD ( Solid State drive )

A solid state drive is a newer technology. It uses flash memory, so it consumes less energy and runs much faster compared to an HDD, and it is highly durable. It depends on electronic rather than mechanical parts, so it is easier to maintain and more efficient. SSDs are more costly than HDDs.

• EBS volumes are further classified into 4 types
  • General purpose SSD: used for general work such as booting a machine or test labs.
  • Provisioned IOPS SSD: used for scalable and high-IOPS applications.
  • Throughput Optimized HDD: low-cost magnetic storage whose performance depends on throughput rather than IOPS, e.g. EMR or data warehouses.
  • Cold HDD: also low-cost magnetic storage which depends on throughput rather than IOPS.

    How to create AWS EBS manually in AWS account?

• You must have an AWS account to create AWS EBS volumes. If you don’t have one, please create an AWS account first.
• Go to the AWS console and at the top search for the AWS EC2 service
• Click on Create volume
• Now fill in all the details such as type of volume, size, IOPS, tags etc.
• Now click on Create volume and verify

    What are EBS Snapshots in AWS ?

We just discussed EBS, that is, the storage itself. There is a high chance that you will require backups to keep yourself in a safe position. EBS snapshots are basically backups of EBS volumes. There is also an option to back up your EBS with point-in-time snapshots, which are incremental backups stored in AWS S3. This saves a lot of time, because each snapshot stores only the difference from the previous backup.

    How to create EBS snapshots?

    • Go to AWS EBS console
    • Choose the AWS EBS volume for which you wish to create Snapshot
    • Add the description and Tag and then click on Create Snapshot.
    • Verify the Snapshot
• If you wish to create AWS EBS snapshots using the AWS CLI, run the command below (make sure you have the AWS CLI installed; if not, the installation is explained below in this tutorial).
    aws ec2 create-snapshot --volume-id <vol-1234567890> --description "My volume snapshot"

    Prerequisites

1. An AWS account. If you don’t have one, please create an AWS account first.
    2. Windows 7 or plus edition where you will execute the shell script.
    3. Python must be installed on windows machine which will be required by AWS cli. If you want to install python on windows machine follow here
    4. You must have Git bash already installed on your windows machine. If you don’t have install from here
    5. Code editor for writing the shell script on windows machine. I would recommend to use visual studio code on windows machine. If you wish to install visual studio on windows machine please find steps here

In this demo, we will use a shell script to delete AWS EBS snapshots. In order to run shell scripts from your local Windows machine, you will require the AWS CLI to be installed and configured. So first let's install the AWS CLI and then configure it.

    Install AWS CLI Version 2 on windows machine

• Download the installer for the AWS CLI on the Windows machine from here
• Select I accept the terms and then click the Next button
• Do the custom setup, such as the location of the installation, and then click the Next button
• Now you are ready to install the AWS CLI v2
• Click Finish and now verify the AWS CLI
• Verify the AWS CLI version by going to the command prompt and typing
    aws --version

Now that AWS CLI version 2 is successfully installed on the Windows machine, it's time to configure the AWS credentials so that our shell script can connect to the AWS account and execute commands.

    • Configure AWS Credentials by running the command on command prompt
    aws configure
• Enter the details such as the AWS access key ID, secret access key and region. You can leave the output format as the default.
• Check the C:\Users\YOUR_USER\.aws folder on your system to confirm the AWS credentials
• Now your AWS credentials are configured successfully.

    How to Delete EBS Snapshots from AWS account using shell script

Now that we have configured the AWS CLI on the Windows machine, it's time to create our shell script to delete EBS snapshots. In this demo we will delete two AWS EBS snapshots which already exist in the AWS account. Let's get started.

    • Create a folder on your desktop and under that create file delete-ebs-snapshots.sh
    #!/usr/bin/env bash
    
    # To check if access key is setup in your system 
    
    if ! grep -q aws_access_key_id ~/.aws/config; then
      if ! grep -q aws_access_key_id ~/.aws/credentials; then
        echo "AWS config not found or CLI not installed. Please run \"aws configure\"."
        exit 1
      fi
    fi
    
    # To Fetch all the SNAPSHOT_ID with Tag Name=myEBSvolumesnapshot
    
    SNAPSHOTS_ID=$(aws ec2 describe-snapshots --filters Name=tag:Name,Values="myEBSvolumesnapshot" --output text | cut -f 6)
    echo $SNAPSHOTS_ID
    
    # Using For Loop Delete all Snapshots with Tag Name=myEBSvolumesnapshot
    
    for id in $SNAPSHOTS_ID; do
        aws ec2 delete-snapshot --snapshot-id "$id"
        echo "Successfully deleted snapshot $id"
    done
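If the script is not yet marked executable, you may need to make it executable before running it (a standard step not shown in the original walkthrough):

chmod +x delete-ebs-snapshots.sh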
• Now open Visual Studio Code, open the location of the file delete-ebs-snapshots.sh and choose Bash as the terminal
    • Now run the script
    ./delete-ebs-snapshots.sh
• The script ran successfully; now let's verify in the AWS account that the AWS EBS snapshots with the tag Name=myEBSvolumesnapshot were successfully deleted.

    Conclusion

In this tutorial, we demonstrated what AWS EBS and AWS EBS snapshots are, learnt a few things about storage types, and learnt how to delete EBS snapshots using a shell script on AWS, step by step. AWS EBS is your pendrive for instances: always use it when necessary and share it with other instances.

Hope this tutorial helps you in understanding shell scripts and working with AWS EBS on the Amazon cloud. Please share it with your friends.

    How to build docker images , containers and docker services with Terraform using docker provider.

Docker has been a vital and very important tool for deploying web applications securely, and because it is lightweight and because of the way it works, it has captured the market very well. Although some of the steps are manual, after deployment Docker makes things look very simple.

But can we automate things even before the deployment takes place? This is where Terraform comes into play: an infrastructure-as-code tool which automates Docker-related work such as the creation of images, containers and services with a few commands.

In this tutorial we will see what Docker and Terraform are and how we can use the Docker provider in Terraform to automate Docker images and containers.

    Table of content

    1. What is Docker?
    2. What is Terraform?
    3. How to Install terraform on ubuntu machine.
    4. What is Docker provider?
    5. Create Docker Image, containers and docker service using docker provider on AWS using terraform
    6. Conclusion

    What is docker ?

Docker is an open source tool for developing, shipping and running applications. It has the ability to run applications in a loosely isolated environment using containers. Docker is an application which helps in the management of containers in a very smooth and effective way. In containers you can isolate your applications. Docker is quite similar to a virtual machine, but it is lightweight and can be ported easily.

Containers are lightweight as they are independent of a hypervisor's load and configuration. They talk directly to the host machine's kernel.

    Prerequisites

• Ubuntu machine, preferably version 18.04 or later; if you don’t have any machine you can create an EC2 instance in your AWS account
• Recommended to have 4GB RAM
• At least 5GB of drive space
• The Ubuntu machine should have an IAM role attached with full access to EC2, or, even better, administrator permissions, to work with this Terraform demo.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

    What is Terraform?

Terraform is a tool for building, versioning and changing infrastructure. Terraform is written in Go, and the syntax language of its configuration files is HCL, which stands for HashiCorp Configuration Language and is much easier than YAML or JSON.

Terraform has been in use for quite a while now. I would say it's an amazing tool to build and change infrastructure in a very effective and simple way. It's used with a variety of cloud providers such as Amazon AWS, Oracle, Microsoft Azure, Google Cloud and many more. I hope you will love to learn it and utilize it.

    How to Install Terraform on Ubuntu 18.04 LTS

    • Update your already existing system packages.
    sudo apt update
• Download Terraform (version 0.13.0 is used in this demo) in the opt directory
    wget https://releases.hashicorp.com/terraform/0.13.0/terraform_0.13.0_linux_amd64.zip
    • Install zip package which will be required to unzip
    sudo apt-get install zip -y
    • unzip the Terraform download zip file
    unzip terraform*.zip
• Move the executable to a directory on your PATH
    sudo mv terraform /usr/local/bin
    • Verify the terraform by checking terraform command and version of terraform
    terraform               # To check if terraform is installed 
    
    terraform -version      # To check the terraform version  
    • This confirms that terraform has been successfully installed on ubuntu 18.04 machine.

    What is Docker provider in terraform?

The Docker provider lets Terraform manage Docker images and containers through the Docker API. So, in the case of Terraform, we need to configure the Docker provider so that Terraform can work with Docker images and containers.

There are different ways in which the Docker provider can be configured. Let's look at some of them now.

    • Using docker host’s hostname
    provider "docker" {
      host = "tcp://localhost:2376/"
    }
• Using the Docker host's IP address
    provider "docker" {
      host = "tcp://127.0.0.1:2376/"
    }
    • In case your docker host is remote machine
    provider "docker" {
      host = "ssh://user@remote-host:22"
    }
    • Using docker socket

unix:///var/run/docker.sock is a Unix socket that the Docker daemon listens on. Using this Unix socket you can connect to the Docker daemon to work with images and containers.

This Unix socket is also used when containers need to communicate with the Docker daemon, such as during mount binding.

    provider "docker" {                             
    host = "unix:///var/run/docker.sock"
        }

    Create a Docker Image, container and service using docker provider on AWS using terraform

    Let us first understand terraform configuration files before we start creating files for our demo.

• main.tf: this file contains the actual Terraform code to create a service or a particular resource
• vars.tf: this file is used to define variable types and optionally set default values
• output.tf: this file declares the outputs of the resources we wish to capture; the outputs are displayed after the code is applied
• terraform.tfvars: this file contains the actual values of the variables which we declared in vars.tf (see the short sketch after this list)
• provider.tf: this file is very important, as it tells Terraform which provider (cloud or otherwise) the code needs to be executed against

In the demo below we will create a Docker image, a container, and a service using the Docker provider. Let's configure the Terraform files needed for this demo; to start with, we only need one file, main.tf.

    main.tf

    provider "docker" {                                 # Create a Docker Provider
    host = "unix:///var/run/docker.sock"
     }
    
    resource "docker_image" "ubuntu" {                  # Create a Docker Image
        name = "ubuntu:latest"
    }
    
    resource "docker_container" "my_container" {        # Creates a Docker Container
      image = docker_image.ubuntu.latest         # Using same image which we created earlier
      name = "my_container"
    }
    
    resource "docker_service" "my_service" {            # Create a Docker Service
      name = "myservice"
      task_spec {
       container_spec {
     image = docker_image.ubuntu.latest     # Using the same image which we created earlier
        }
       }
      endpoint_spec {
        ports {
         target_port = "8080"
           }
        }
    }
    
• Now your files and code are ready for execution. Initialize Terraform.
    terraform init
• Terraform initialized successfully; now it's time to run the terraform plan command.
• terraform plan acts as a blueprint before deployment, confirming which resources will be provisioned or destroyed.
    terraform plan
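Optionally, you can save the reviewed plan to a file and later apply exactly that plan; the file name tfplan below is just an example.

terraform plan -out=tfplan     # write the reviewed plan to a file
terraform apply tfplan         # apply exactly the saved plan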

    NOTE:

If you intend to create the Docker service on a single machine without multiple nodes, please run the command below first so that the Docker service gets created successfully using terraform.

    docker swarm init
• After verification, it is time to actually deploy the code using apply.
    terraform apply
• Now let's verify one by one that all three components were created successfully by Terraform with the Docker provider.
    • Verify docker image
    docker images
    • Verify docker containers
    docker ps -a
    • Verify docker service
docker service ps myservice     # myservice is the service we defined in main.tf

    Conclusion:

In this tutorial we learned what Docker and Terraform are and how to use the Docker provider in Terraform to automate Docker images and containers.

Hope this tutorial helps you understand Terraform and provision Docker components using it. Please share it with your friends if you find it useful.

    How to Launch an Amazon DynamoDB tables in AWS Account

With the rise in the number of databases it has become a big challenge to make the right selection. As data grows, our database should also scale and perform equally well.

Organizations have started to move toward big data and real-time applications, and for those we certainly need a non-relational database with good performance. For these kinds of challenges AWS has always been at the top, offering various services that solve such problems; one of them is AWS DynamoDB, which manages non-relational databases for you, can store virtually unlimited data, and performs very well.

    Table of content

    1. What is Relational database management system ?
    2. What is SQL and NO SQL database?
    3. What is Amazon DynamoDB ?
    4. Prerequisites
    5. How to Create tables in DynamoDB in AWS Account
    6. Conclusion

    What is Relational database management system ?

• A relational database is based on tables and structured data.
• The tables have relationships and are logically connected.
• Oracle Database, MySQL, Microsoft SQL Server, IBM Db2, PostgreSQL, and SQLite (for mobile) are a few examples of RDBMS.

    Figure shows Relational Database Management System based on relational model

    What is SQL and NO SQL database?

    SQL:

• The full form of SQL is Structured Query Language, which is used to manage data in a relational database management system, i.e. RDBMS.
• SQL databases belong to the relational database management system family.
• SQL databases follow a structured pattern, which is why they are suitable for static or predefined schemas.
• They are good at solving complex queries and are highly scalable, but in the vertical direction.
• SQL databases follow a table-based methodology, which is why they are good for applications such as accounting systems.

    NoSQL:

• The full form of NoSQL is non-SQL or non-relational.
• This type of database is used for dynamic storage, or for cases where the data is not fixed or static.
• This database is not tabular in nature; rather it stores key-value pairs.
• They are good for big data and real-time web applications, and are scalable in the horizontal direction.
• Some of the NoSQL databases are DynamoDB, FoundationDB, InfinityDB, MemcacheDB, Oracle NoSQL Database, Redis, MongoDB, Cassandra, Scylla, and HBase.

    What is Amazon DynamoDB ?

DynamoDB is a NoSQL database service, which means it is different from a relational database that consists of tables in tabular form. DynamoDB is very fast and very scalable. It is one of the AWS managed services, so you don't need to worry about capacity, workload, setup, configuration, software patches, replication, or even cluster scaling.

With the DynamoDB service you just need to create tables where you can add or retrieve data; DynamoDB takes care of everything else. If you wish to monitor your resources you can do it from the AWS console.

Whenever there is a spike in traffic or incoming requests, DynamoDB scales up while maintaining performance.

    Basic components of Amazon DynamoDB

• Tables: A table stores data.
  • In the example below we use a single table.
• Items: Items are present in a table. You can store as many items as you wish in a table.
  • In the example below, the different Employee IDs are items.
• Attributes: Each item contains one or more attributes.
  • In the example below, office, designation, and phone are attributes of an EmployeeID.
    
{
  "EmployeeID": "1",
  "office": "USA",
  "Designation": "Devops engineer",
  "Phone": "1234567890"
}


{
  "EmployeeID": "2",
  "office": "UK",
  "Designation": "Senior Devops Engineer",
  "Phone": "0123456789"
}
    
    

To work with Amazon DynamoDB, applications need APIs to communicate.

• Control plane: It allows you to create and manage DynamoDB tables.
• Data plane: It allows you to perform actions on the data stored in DynamoDB tables.

    Prerequisites

• You should have an AWS account with full access permissions on DynamoDB. If you don't have an AWS account, please create one from here: AWS account.

    How to Create tables in DynamoDB in AWS Account

• Go to your AWS account and search for DynamoDB at the top of the page.
• Click on Create Table, then enter the name of the table and the primary key.
• Now click on the table name, which in this example is Organisation.
• Now click on Items.
• Add items such as address, designation, and phone number.
• Verify that the table has the required details.

So this was the first way: using the AWS-provided web console to start creating DynamoDB tables. Another way is to download DynamoDB manually on your machine, set it up, and then create your tables; you can find the steps here.
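Besides the console and a local download, a third option is the AWS CLI. A minimal sketch, assuming the CLI is configured with credentials and using the Employee example from above (the table name, key, and values are only illustrative):

# Control plane: create a table with EmployeeID as the partition (hash) key
aws dynamodb create-table \
    --table-name Employee \
    --attribute-definitions AttributeName=EmployeeID,AttributeType=S \
    --key-schema AttributeName=EmployeeID,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST

# Data plane: put one item into the table
aws dynamodb put-item \
    --table-name Employee \
    --item '{"EmployeeID": {"S": "1"}, "office": {"S": "USA"}, "Designation": {"S": "Devops engineer"}}'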

    Conclusion

You should now have basic knowledge of relational and non-relational database management systems. We also learned about Amazon DynamoDB, which is a NoSQL database, and covered how to create tables on the Amazon DynamoDB service and store data in them.

    This tutorial consists of all the practical’s which were done on our lab server with lots of hard work and efforts. Please share the word if you like it and hoping you get benefit out of this tutorial.

    The Ultimate Guide for Beginners on Bash Scripting / Shell Scripting step by step

    Table of content

    1. What is Shell ?
    2. What is Bash ?
    3. What is Shell Scripting or Bash Scripting?
    4. How to create Shell scripts and execute it ?
    5. Basic fundamentals of Shell Scripting?
6. Run bash scripts on Visual Studio Code
    7. Conclusion

    What is Shell ?

Shell is a command line interpreter and a programming language; basically, whatever you execute on the terminal of your Linux machine is a shell command. There are thousands of commands which are already built in, such as cat, cd, ls, kill, history, or pwd. The shell provides variables, flow control constructs, scripts, and functions. It also allows you to pipe commands, substitute commands, do conditional testing, iterations, etc. Whatever scripts you run and commands you execute run on the shell, commonly known as the Unix shell.

    • There are different types of Unix shell available:
      • Bourne shell (sh) which is present in /bin/sh or /usr/bin/sh
      • Korn shell (ksh) which is present in /bin/ksh or /usr/bin/ksh
      • Bourne Again shell (bash) which is present in /bin/bash or /usr/bin/bash
      • POSIX shell (sh)
      • C shell (csh)
      • TENEX/TOPS C shell (tcsh)

• To check which shell you are currently using
     echo $SHELL

    What is Bash?

Bash is a Unix shell and also a command line interpreter. It is also known as the Bourne Again shell, an improved version of the Bourne shell (sh). It is present in almost all operating systems and is the default login shell in most Linux distributions, as well as in Apple macOS and Solaris. Bash processes shell commands; you write your commands in text form and execute them, and when Bash executes commands from a file, that file is called a shell script.

It also contains keywords, variables, functions, etc., just like the sh shell, and is very similar to the Bourne shell (sh). The latest version is bash-5.1, which was released on 2020-12-07.
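You can check which Bash version is installed on your machine with:

bash --version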

To check the location of Bash, you can use the command below.

    echo $BASH

    What is Shell Scripting or Bash Scripting?

A shell script is simply a text file with a list of commands that could also be executed on the terminal or shell one by one. To make things a little easier and to run them together as a group in quick time, we write them in a single file and run it.

The main tasks performed by shell scripts are file manipulation, printing text, and program execution. We can include environment variables in a script so that they can be used at multiple places; scripts that mainly set up an environment and then run other programs are known as wrapper scripts (a tiny example follows below).

A good shell script will have comments, preceded by a pound sign or hash mark, #, describing the steps. We can also include conditions or pipe commands together to make more creative scripts.

When we execute a shell script or function, a command interpreter goes through the ASCII text line by line, loop by loop, test by test, and executes each statement as each line is reached from top to bottom.
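As a tiny illustration of a wrapper script (the variable name, value, and wrapped command below are only examples), such a script typically exports environment variables and then hands over to another program:

#!/bin/bash
# wrapper.sh : set up the environment, then run the real program
export APP_ENV=staging            # example environment variable used by the wrapped program
echo "Launching with APP_ENV=$APP_ENV"
exec uname -a                     # stand-in for the actual program this script wraps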

    How to create Shell scripts and execute it ?

    Now we will create a very simple script and execute it.

    • Create a directory under /opt directory
    mkdir script-demo
    • Create a file myscript.sh
    touch myscript.sh
    • Edit the file
    vi myscript.sh
    • Paste the code as shown in code snippet
    #!/bin/bash
    # This is a comment 
    echo Hello World, its automateinfra.com here!
    echo I am using $SHELL which is a default shell. 
    
    • Let us go through the code
• #! is known as a shebang, which is the standard first line of a bash script. You can omit it if you run your script by prefixing it with bash, for example bash myscript.sh
• Here #!/bin/bash or #!/usr/bin/bash declares a Bourne-Again (Bash) shell
• Similarly, for the Bourne shell we can use #!/bin/sh or #!/usr/bin/sh
• # starts a comment
• echo is a command
    • Grant the execution permissions
chmod +x myscript.sh
    • Execute the script
    ./myscript.sh
    • Script has been executed successfully.

    Basic fundamentals of Shell Scripting

    • Shell Scripts are case sensitive
• To define a function
    function function-name 
    {
      Commands
    }
    • You can run your scripts with specific shells as
      • ksh myscript.sh
      • bash myscript.sh
      • csh myscript.sh
• If you are running a script from another location you should provide the absolute path, and if you are running it from the same directory then use "./"
/home/ubuntu/myscript.sh  # Complete path

./myscript.sh             # Run from the same directory
    • Use of if loops
    if [condition]
    then 
       command
    else
       command
    fi
    • Use of for loops
    for condition
    do
       commands
    done
• To create a variable, assign a value with no spaces around the = sign; to read the variable, prefix its name with the "$" symbol, which substitutes the variable with its value.
a=5
echo $a
    • The command-line arguments $1, $2, $3,…$9 are positional parameters, with $0 pointing to the actual command, program, shell script, or function and $1, $2, $3, …$9 as the arguments to the command
• Let us now see the special variables
$0 : The filename of the current script.
$n : These variables correspond to the arguments with which a script was invoked ($1, $2, and so on).
$# : The number of arguments supplied to a script.
$* : All the arguments, double quoted as a whole. If a script receives two arguments, $* is equivalent to $1 $2.
$@ : All the arguments, individually double quoted. If a script receives two arguments, $@ is equivalent to $1 $2.
$? : The exit status of the last command executed.
$$ : The process number / process ID of the current shell.
$! : The process number of the last background command.
Table 1.1
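Putting these fundamentals together, here is a small illustrative script; the function name, values, and messages below are just examples.

#!/bin/bash
# greet: a simple function that uses a positional parameter
greet()
{
  echo "Hello $1, welcome to automateinfra.com"
}

greet "$1"                        # pass the script's first argument to the function

count=3                           # assign a variable (no spaces around =)
if [ "$count" -gt 2 ]
then
   echo "count ($count) is greater than 2"
fi

for i in 1 2 3
do
   echo "iteration number $i"
done

echo "Script name: $0, number of arguments: $#, exit status of last command: $?"

You can run it, for example, as bash myscript.sh automateinfra to pass a first argument.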

Run bash scripts on Visual Studio Code

• From the dropdown menu of terminals, select the default shell
• Then you will see Git Bash; click on it.
• Now type a command to test that the Bash terminal works
echo "hello automateinfra.com"

    Conclusion

You should now have a sound knowledge of what a shell is, what the Bash shell is, and what shell scripting or Bash scripting is. We also discussed how to create a shell script and the basic fundamentals one should know to get started with bash scripting. Finally, we ran a bash script on a Windows machine in Microsoft's Visual Studio Code.

This tutorial consists of practicals which were done on our lab server with lots of hard work and effort.

Please share the tutorial with everyone if you like it; hoping you benefit from it.

    The Ultimate Guide for beginners: Getting started with Git commands

Git is one of the best version control systems I have used so far for managing multiple repositories and files in a smooth and distributed way. Please follow along to learn how to work with Git, set up Git on Ubuntu and Windows machines, and get a guide to the most useful commands and setup steps.

    This tutorial is going to add a lot of valuable knowledge on Git in your pockets, Stay tuned !!

    Table of Content

    1. Getting Started with Git
    2. Create a new git repository using graphical interface on GitHub
    3. Creating a new git repository using command line on ubuntu machine
    4. Getting Started with Git Commands for beginners
    5. Summary

    Getting Started with Git

    What is Repository ?

A repository is a place where you keep all your source code; it could be per project, for a particular technology, or even for a single file. It all depends on you and your requirement. It is always a good idea to name the repository according to the work you are going to do. There are cloud-based source code tools such as GitHub, GitLab, and Bitbucket, and version control systems such as Subversion (SVN) and Mercurial, where you can create and host your own repositories.

    Create a new git repository using graphical interface on GitHub

    • Before we create the first repository make sure you have GitHub account created . If you don’t have please click here and Sign up.
    • If you have access to it then open your browser and go to GitHub website from here and click on Sign in.
    • Now click on New button to create a new repository
• Now, provide a suitable name for the repository. You may keep it public (open to the world) if you wish people to see your code, else keep it private. We are keeping it public as this is a demo tutorial; finally, select Create repository.
• Now your repository is ready to be used. Please make a note of the steps below, which we will use later in this tutorial.
      • create a new repository on the command line and
      • push an existing repository from the command line

    Creating a new git repository using command line on ubuntu machine

• You must have an Ubuntu machine, preferably version 18.04 or later; if you don't have one, you can create an EC2 instance in your AWS account
    • Recommended to have 4GB RAM
    • At least 5GB of drive space
    • SSH into your ubuntu machine
    • Now create a folder under /opt directory
    cd /opt
    mkdir git-demo
    cd git-demo/
    • Initialize your new repository
    git init
    • Create a file inside the same directory using the command
    echo "My first change" > adding-new-file.txt
    • Now check the status of git repository using the command
    git status
    • Add the file in git repository using the command
    git add .
    • Again check the status of git repository using the command
    git status
    • Commit your changes in git repository using the command
     git commit -m "MY FIRST COMMIT"
    • Add the remote repository which we created earlier as a origin.
    git remote add origin https://github.com/Engineercloud/git-demo.git
    • Push the changes in the remote branch ( Enter your credentials when prompted)
    git push -u origin master
    • Now Verify if the text file which we created is present in repository

    Getting Started with Git Commands for beginners

    Here we will work with most important commands of Git. Let us work with same directory which we created earlier.

    cd /opt
    mkdir git-demo
    cd git-demo/
    • Check the status of the repository
    git status
    • Add a file in the directory and run the command
    echo "Adding new file again" > second_file.txt
• Now check the status of the repository again; it should show untracked files, as they are not added to the repo yet
    • Add files in repository by using command
    git add .
    • Check the status of git again
    • To check the status of git with short status
    git status -s
• Now edit the file which we created and check the status.
  • You will notice two things: one is changes to be committed, because we already added the file using git add.
  • You will also notice changes not staged for commit, because we recently edited the file but didn't add the new change.
    echo "I am editing my second file " >> second_file.txt
    • Add the file in repository which we modified.
    git add .
    
    git status
• Although we know that we modified a line and checked the status, if you want to see the actual difference you can use the command below.
git diff        # Shows changes that are not yet staged
• You can also check what is already staged for the next commit by using the command below.
git diff --cached      # Shows changes that are staged but not yet committed
    • Commit all your changes
    git commit -m "Committing all my changes" .       # m here means message
• To properly delete a staged file, that is, to remove a file which has already been committed
    git rm second_file.txt  # This command will also remove the file from the directory
• If you wish to remove the file from the repository but keep it in the working directory for future use
    git rm --cached second_file.txt  # This command will not remove the file from the directory
• To check the history of your commits in a git repository use the command
git log  # Lists the commits in reverse chronological order (newest first)
    • To check history of your commits with the difference between each commit we use -p flag
     git log -p -2    # -p stands for patch that is difference and 2 here is last 2 commits
    
• There are some more commands to view the commits in a more presentable way; let's check them out.
git log --pretty=oneline
    
    git log --pretty=format:"%h %s" --graph

    Summary

You should now have a sound knowledge of what Git is and how to create a Git repository using the graphical interface as well as the command line tool. We also discussed how to work with Git repositories using different commands. This tutorial consists of practicals which were done on our lab server with lots of hard work and effort.

    Please share the word if you like it and hoping you get benefit out of this tutorial.

    You can also visit : Introduction-to-git-a-version-control-getting-started-with-git-step-by-step

    Introduction to Git Version control and Getting Started with Git step by step

Are you tired of taking backups of multiple files and facing syncing errors? Do you have a central place where you can store your files with proper versions and history? Most of us have faced file sync issues or wondered how to manage the same files with incremental updates. Git is a version control system which solves all these issues and has grown in popularity for the way it works.

In this tutorial we will discuss what version control is, what the types of version control are, and how to install Git, a version control system, on a Windows machine and then set it up to start working with it.

    Table of Content

    1. What is Git ?
    2. How to Install git version 2.31.0 on Windows machine?
3. Three types of version control
    4. Setting up Git Bash on windows machine
    5. Summary

    What is Git ?

Git is a version control system, which means that if you have lots of files and each of these files is updated with a few changes from the previous one, you need some way to handle it, and this is taken care of very well by Git. Sometimes you need to revert changes back to a previous state or check something from a month back; all those things are maintained very well by version control.

Three types of version control

• Local version control – In this case you copy files from the version control database to your local machine; it is just like copying from one folder to another. In this case no team member knows what you are doing.
• Centralized version control – In this case you check out files from a centralized location. This is better than local version control, as the team is aware of the project and what team members are doing, but if the server goes down nobody has a full backup of the complete repository data, which can cause significant loss. Example: Subversion (SVN).
• Distributed version control – This manages everything in a distributed way; when you clone any repository it copies all of the repository's data to your local machine. Git is among the distributed version control systems.

    How to Install git version 2.31.0 on Windows machine

    • Open your browser and click on the link here so that installation starts automatically.
    • Now Click on the downloaded Git-2.31.0-64-bit.exe and select YES
    • Follow along with me and click the Next button and in most of the case select default values which are already selected
    • Select the location on your system where you like to place GIT installation files.
    • It is better to add it on Desktop
    • Please select the recommended
    • Here you will be asked to choose SSH executable and select OpenSSH. This will allow us to work with SSH connections such as logging into Linux machine.
    • Use MinTTY as default
    • Finally Git is launched on your machine and GIT bash will be available to use.

    Setting up Git Bash on windows machine

• Let us verify that OpenSSH is working from Git Bash.
    • Now, you can check the config file where you can modify some configurations if required and to check that run the command below.
    git config --list --show-origin
• You can modify the global name and email that Git uses, with the commands shown below
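For example (the name and email below are placeholders; use your own):

git config --global user.name "Your Name"
git config --global user.email "you@example.com"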
    • Check all your settings by using command
    git config --list

    Summary

    You should have a basic knowledge of what Git is and how it’s different from any centralized version control systems you may have been using previously. You should also now have a working version of Git on your system that’s set up with your personal identity.

    How to run Node.js applications on Docker Engine

    Table of content

    1. What is Node.js ?
    2. What is docker ?
    3. Prerequisites
    4. How to Install Node.js on ubuntu machine
    5. Install Node.js Express Web Framework
    6. Create a Node.js Application
    7. Create a Docker file for Nodejs application
    8. Build Docker Image
    9. Run the Nodejs application on a Container
    10. Conclusion

    What is Node.js ?

Node.js is an open source JavaScript runtime environment. Now, what is JavaScript? Basically, JavaScript is a language which is used along with other languages to create web pages and add dynamic features such as rollovers and graphics.

Node.js runs as a single process without wasting much memory or CPU and never blocks any threads or processes, which is why its performance is very efficient. Node.js also allows multiple connections at the same time.

With Node.js, JavaScript developers have gained a big advantage: they can now create apps using it for both the frontend and the backend.

Building applications that run in the browser is a completely different story than creating a Node.js application, although both use the JavaScript language.

    What is docker ?

Docker is an open source tool for developing, shipping, and running applications. It has the ability to run applications in a loosely isolated environment using containers. Docker is an application which helps in the management of containers in a very smooth and effective way. In containers you can isolate your applications. Docker is quite similar to a virtual machine, but it is lightweight and can be ported easily.

Containers are lightweight as they are independent of hypervisor load and configuration. They connect directly with the machine, i.e., the host's kernel.

    Prerequisites

You may incur a small charge for creating an EC2 instance on Amazon Web Services (AWS).

How to Install Node.js on ubuntu machine

    • Update your system packages.
    sudo apt update
• Let's change the directory and install the Node.js package
    cd /opt
    
    sudo apt install nodejs
    • Install node js package manager
    sudo apt install npm
    • Verify the node js package installation
    nodejs -v

    Install Nodejs Express Web Framework

• Initialize the Node.js project; this creates package.json
npm init
• package.json, which gets created after initializing the project, holds all the dependencies required to run the application. Let us add the Express web framework dependency, which this demo needs.
    npm install express --save

Create a Node.js Application

    main.js

    var express = require('express')    //Load express module with `require` directive
    var app = express() 
    
    //Define request response in root URL (/)
    app.get('/', function (req, res) {
      res.send('Hello Welcome to Automateinfra.com')
    })
    
    
    app.listen(8081, function () {
      console.log('app listening on port 8081!')
    })
    • Now Run the Node.js application locally on ubuntu machine to verify
    node main.js

    Create a docker file for Node.js application

A Dockerfile is used to create customized docker images on top of a base docker image. It is a text file that contains all the commands needed to build or assemble a new docker image. Using the docker build command we can create a new customized docker image; it is basically another layer which sits on top of the base image. Using the newly built docker image we can run containers in the usual way.

• Create a docker file and name it Dockerfile. Keep this file in the same directory as main.js
# Sets the base image
FROM node:7
# Sets the working directory in the container
WORKDIR /app
# Copy the dependencies file to the working directory
COPY package.json /app
# Install dependencies
RUN npm install
# Copy the content of the local source directory to the working directory
COPY . /app
# Port the application listens on
EXPOSE 8081
# Command to run on container start
CMD node main.js

    Build a Docker Image

    • Now we are ready to build our new image . So lets build our image
    docker build -t nodejs-image .
    • You should see the docker images by now.
    docker images

    Run the Nodejs application on a Container

    • Now run our first container using same docker image ( nodejs-image)
    docker run -d -p 8081:8081 nodejs-image
    • Verify if container is successfully created.
    docker ps -a
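To quickly confirm that the application inside the container responds, you can hit the mapped port from the docker host (a minimal check):

curl http://localhost:8081     # should return: Hello Welcome to Automateinfra.com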

Great, you have dockerized the Node.js application in a single container.


    Conclusion:

In this tutorial we covered what Docker is, what Node.js is, and how to run a Node.js application on the Docker engine inside a container.

Hope this tutorial helps you in understanding and setting up Node.js and Node.js applications on the Docker engine on an ubuntu machine.

    Please share with your friends.

    How to Setup Python on Windows machine

You might already be aware of the difference between low-level and high-level languages, but let's quickly see the main differences between the two. High-level languages such as C, C++, and Python are portable and human friendly, that is, they are easier to debug and understand while writing programs. They need either a compiler or an interpreter to run.

On the other hand, low-level languages such as machine language or assembly language are difficult to understand and are written in assembly, which is converted using an assembler.

Linux distributions ship with the Python programming language by default, but Windows does not come with it. In this tutorial, we will learn about Python, which is widely used in artificial intelligence, by astronauts, and by developers for creating apps. Learning and setting up Python on Windows is the need of the hour, so let's get started.

    Table of content

    1. What is Python?
    2. Prerequisites
    3. How to install Python on windows machine?
    4. A simple python code to start your day
    5. Conclusion

    What is Python?

Python is a high-level, object-oriented, interactive, and general purpose scripting programming language. Python is very easy to understand and readable, and it focuses on objects over functions. Python is an interpreted language: its code is compiled to bytecode and executed by the Python interpreter at run time. It works with a variety of protocols such as HTTPS, FTP, SMTP, and many more. The latest version at the time of writing is 3.9.2. Python works very well with most text editors such as Atom, Notepad++, and Vim.

    Prerequisites

    • Windows 7 + operating system with admin rights
    • Command Prompt which is already present in windows operating system

    How to Install Python on windows machine?

• Download the Python installer from python.org and run the executable installer that gets downloaded.
• The next step will prompt you to select whether to disable the path length limit; you can ignore it or disable it.
    • Now python should be installed on your windows machine.

     Verify Python Was Installed On Windows

• Now navigate to the command prompt and type python
• Pip is basically the Python package manager, which is used to fetch lots of additional modules. Pip comes bundled with most Python installations. To check if it is installed, use
    pip -V
• To verify that the Python environment variables were added successfully, search for sysdm.cpl and click on Edit environment variables.

    A simple python code to start your day

    • Create a simple file main.py on your desktop
    mystring = " This is Author of Automateinfra and its time to start learning python"
mylist   = [letter for letter in mystring] # A list comprehension that iterates over each character in mystring
print(mylist)       # This will print the contents of mylist
    • Navigate to your desktop location on command prompt and use below command to run your first program in python
    python main.py
• Great work! Python was installed successfully and we could quickly run our first Python program.

    Conclusion

In this tutorial we covered the basic difference between high-level and low-level languages. We also discussed what Python is and how to install it on a Windows machine to get you started with scripting and automation work in your organization.

Hope this tutorial helps you in understanding and setting up Python on a Windows machine.

    Please share with your friends.

    How to Setup Apache Solr on ubuntu machine step by step

You might have heard of solar energy, that very powerful source of energy, but here we will discuss Solr, one of the most widely used and best search tools in the industry. It is a very fast and popular open source tool based on Java. With this tool we can fetch content or perform search activities in seconds.

In this tutorial you'll install Solr on Ubuntu 18.04. You'll then work with Solr to index a page and later retrieve it using the search engine.

    Table of Content

    1. What is Solr
    2. Prerequisites
    3. How to Install Solr 8.2.0 on Ubuntu 18.04 LTS
    4. Conclusion

    What is Solr ?

Solr is a very efficient tool when it comes to search and real-time indexing. It is optimized for high volumes of internet traffic, is highly scalable and fault tolerant, and has internal monitoring of its own instances, as it publishes its own data via JMX. It provides lots of extensible plugins which are used for search as well as for indexing.

Solr can be installed on both Windows and Unix-based distributions. You index your data in Solr, i.e., upload or put it using JSON, XML, or CSV over HTTP, and later you can retrieve it using HTTP GET.

    Prerequisites

• An Ubuntu machine, preferably version 18.04 or later; if you don't have one, you can create an EC2 instance in your AWS account
    • Recommended to have 4GB RAM
    • At least 5GB of drive space
    • Java version 8 or 8+ ( If Java is not installed please follow me to next step else skip it )

You may incur a small charge for creating an EC2 instance on Amazon Web Services (AWS).

• Install Java version 11 on the ubuntu 18.04 machine
sudo apt install default-jdk  # Here we are installing Java SE 11 (LTS)
java -version                 # To check the installed Java version
which java                    # It will locate the executable file location, which is /usr/bin/java

whereis java                  # It will give the location of all the files related to Java

The Java installation directory lives under /usr/lib/jvm/, and this confirms that Java is successfully installed on our ubuntu 18.04 machine. Now, let us install the Solr engine.

    How to Install Solr 8.2.0 on Ubuntu 18.04 LTS

    • Update your system packages.
    sudo apt update
    • Lets change the directory and Download the Solr package
    cd /opt
    
    sudo wget https://archive.apache.org/dist/lucene/solr/8.2.0/solr-8.2.0.tgz
• The downloaded package is a tarball; let's extract the service installer script from it.
    sudo tar xzf solr-8.2.0.tgz solr-8.2.0/bin/install_solr_service.sh --strip-components=2
• Execute the installer
     sudo ./install_solr_service.sh solr-8.2.0.tgz
• Now Solr should be up and running; you may verify by restarting the Solr service and checking its status.
    sudo service solr stop
    sudo service solr start
    sudo service solr status
• Solr should be running now; let's verify by opening a browser and entering: <ip-address>:8983/solr
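You can also check Solr from the command line on the machine itself; a minimal sketch, where the core name demo_core is only an example:

# Check that the Solr CoreAdmin API responds
curl "http://localhost:8983/solr/admin/cores?action=STATUS"

# Optionally create a first core to index documents into
sudo su - solr -c "/opt/solr/bin/solr create -c demo_core"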

    Conclusion

You should now have an Apache Solr 8.2.0 instance running, ready to begin helping you manage your organization's search.

    Hope this tutorial will help you in understanding and setting up Apache Solr on ubuntu machine. Please share with your friends.

    How to run Python flask applications on Docker Engine

Can we isolate our apps so that they are independent of each other and still run perfectly? The answer is absolutely "YES"; that's very much possible with Docker and containers. They provide you an isolated environment and are your friend for deploying many applications, with each taking its own container. You can run as many containers as you like in Docker, independent of each other, and they all share the same host kernel.

    In this tutorial we will go through a simple demonstration of a python application which will run on docker engine.

    Table of content

    1. What is Python ?
    2. What is docker ?
    3. Prerequisites
    4. Create a Python flask application
    5. Create a Docker file
    6. Build Docker Image
    7. Run the Python flask application Container
    8. Conclusion

    What is Python ?

Python is a language with which you can create web applications and system scripts. It is used widely across organizations and is very easy to learn. Python apps require an isolated environment to run well, and this is quite possible with Docker and containers, which we will use in this tutorial.

    If you wish to know more about python please visit our Python’s Page to learn all about Python.

    What is docker ?

Docker is an open source tool for developing, shipping, and running applications. It has the ability to run applications in a loosely isolated environment using containers. Docker is an application which helps in the management of containers in a very smooth and effective way. In containers you can isolate your applications. Docker is quite similar to a virtual machine, but it is lightweight and can be ported easily.

Containers are lightweight as they are independent of hypervisor load and configuration. They connect directly with the machine, i.e., the host's kernel.

    Prerequisites

You may incur a small charge for creating an EC2 instance on Amazon Web Services (AWS).

    Create a Python flask application

• Before we create our first program using Python Flask, we need to install virtualenv so that Flask can run inside an isolated Python environment.

pip install virtualenv # virtual python environment

• Create and activate a virtual environment named venv:
virtualenv venv
source venv/bin/activate
     
    
    • Finally install Flask
    
    pip install flask # Install Flask from pip
    • Now create a text file and name it as app.py where we will write our first python flask code as below.
    from flask import Flask # Importing the class flask
    
    app = Flask(__name__)   # Creating the Flask class object.
    
    @app.route('/')         # app.route informs flask about the URL to be used by function
    def func():             # Creating a function
          return("Iam from Automateinfra.com")  
    
    if __name__ ==  "__main__":    # Programs starts from here.
        app.run(debug=True)
• Create one more file in the same directory and name it requirements.txt, where we will define the dependencies of the Flask application
    Flask==1.1.1
    • Now our python code app.py and requirements.txt are ready for execution. Lets execute our code using below command.
    python app.py
    • Great, so our python flask application ran successfully on our local machine. Now we need to execute same code on docker . Lets now move to docker part.

    Create a docker file

A Dockerfile is used to create customized docker images on top of a base docker image. It is a text file that contains all the commands needed to build or assemble a new docker image. Using the docker build command we can create a new customized docker image; it is basically another layer which sits on top of the base image. Using the newly built docker image we can run containers in the usual way.

• Create a docker file and name it Dockerfile. Keep this file in the same directory as app.py and requirements.txt
# Sets the base image
FROM python:3.8
# Sets the working directory in the container
WORKDIR /code
# Copy the dependencies file to the working directory
COPY requirements.txt .
# Install dependencies
RUN pip install -r requirements.txt
# Copy the application code to the working directory
COPY app.py .
# Command to run on container start
CMD [ "python", "./app.py" ]

    Build docker Image

    • Now we are ready to build our new image . So lets build our image
    docker build -t myimage .
    • You should see the docker images by now.
    docker images

    Run the Python flask application Container

    • Now run our first container using same docker image ( myimage)
    docker run -d -p 5000:5000 myimage
    • Verify if container is successfully created.
    docker ps -a
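To verify that the Flask application is reachable through the mapped port, from the docker host you can run:

curl http://localhost:5000     # should return: Iam from Automateinfra.com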

    Conclusion

In this tutorial we covered what Docker is, what Python is, and how to run a Python Flask application on the Docker engine inside a container.

Hope this tutorial helps you in understanding and setting up Python Flask and Flask applications on the Docker engine on an ubuntu machine.

    Please share with your friends.