How to allow only HTTPS requests on AWS S3 buckets using AWS S3 Policy

It is important for your infrastructure to be secure. Similarly, if you wish to secure the contents of your AWS S3 bucket, you need to make sure that you allow only secure requests, that is, requests made over HTTPS.

In this quick tutorial you will learn how to allow only HTTPS requests on an AWS S3 bucket using an AWS S3 bucket policy.

Let's get started.

Prerequisites

  • AWS account
  • One AWS Bucket

Creating an AWS S3 bucket policy for the AWS S3 bucket

The policy below contains a single statement which does the following:

  • Version is the policy language version, a fixed date defined by AWS (2012-10-17).
  • The statement restricts all requests except HTTPS requests on the AWS S3 bucket ( my-bucket ).
  • Deny here means any request that is not sent over a secure (TLS/HTTPS) connection is denied, using the aws:SecureTransport condition key.
{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RestrictToTLSRequestsOnly",
        "Action": "s3:*",
        "Effect": "Deny",
        "Resource": [
            "arn:aws:s3:::my-bucket",
            "arn:aws:s3:::my-bucket/*"
        ],
        "Condition": {
            "Bool": {
                "aws:SecureTransport": "false"
            }
        },
        "Principal": "*"
    }]
}
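To attach this policy to your bucket from the command line, you can use the AWS CLI as in the minimal sketch below. It assumes the policy document is saved as policy.json and that my-bucket is a placeholder for your own bucket name.

aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json

# Verify that the policy was applied
aws s3api get-bucket-policy --bucket my-bucket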

Conclusion

This tutorial demonstrated how to allow only HTTPS requests on an AWS S3 bucket using an S3 bucket policy.

How to list an AWS S3 bucket and put objects (AWS S3 ListBucket and PutObject)

Are you struggling to list your AWS S3 bucket or unable to upload data? If yes, then don't worry, this tutorial is for you.

In this quick tutorial you will learn how you can list an AWS S3 bucket and upload objects into it by assigning an IAM policy to a user or a role.

Let's get started.

Prerequisites

  • AWS account
  • One AWS Bucket

Creating IAM policy for AWS S3 to list buckets and put objects

The below policy has two statements which perform the below actions:

  • The first statement allows you to list objects in the AWS S3 bucket named (my-bucket-name).
  • The second statement allows you to perform any object-level action, such as s3:PutObject, s3:GetObject, and s3:DeleteObject, in the AWS S3 bucket named (my-bucket-name).
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListObjectsInBucket",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::my-bucket-name"]
        },
        {
            "Sid": "AllObjectActions",
            "Effect": "Allow",
            "Action": "s3:*Object",
            "Resource": ["arn:aws:s3:::my-bucket-name/*"]
        }
    ]
}
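A minimal sketch of putting this policy to work from the AWS CLI is shown below. It assumes the policy document is saved as s3-access-policy.json, and that the user name example-user, the account ID 111122223333, and the file report.csv are placeholders you replace with your own values.

# Create the IAM policy and attach it to a user
aws iam create-policy --policy-name S3ListAndObjectAccess --policy-document file://s3-access-policy.json
aws iam attach-user-policy --user-name example-user --policy-arn arn:aws:iam::111122223333:policy/S3ListAndObjectAccess

# As that user, list the bucket and upload an object
aws s3 ls s3://my-bucket-name
aws s3 cp ./report.csv s3://my-bucket-name/report.csv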

Conclusion

This tutorial demonstrated how you can list an AWS S3 bucket and upload objects into it by assigning an IAM policy to a user or a role.

How to Deny IP addresses Access to AWS Cloud using an AWS IAM policy (with IAM policy examples)

Did you know you can restrict which IP addresses are allowed to access AWS services with a single policy?

In this quick tutorial you will learn how to deny IP addresses using an AWS IAM policy, with IAM policy examples.

Let's get started.

Prerequisites

  • AWS account
  • Permissions to create IAM Policy

Let's describe the below IAM policy in the AWS Cloud.

  • Version is the policy language version, which is a fixed date (2012-10-17).
  • Effect is Deny in the statement because we want to block requests that do not originate from the approved IP addresses.
  • Resource is the * wildcard because we want the deny to apply to all AWS services and resources.
  • This policy denies access to the AWS cloud from all IP addresses except the few listed in the NotIpAddress condition. The aws:ViaAWSService key (set to false) ensures the deny does not apply when an AWS service makes a request to another service on your behalf.
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "NotIpAddress": {
                "aws:SourceIp": [
                    "192.0.2.0/24",
                    "203.0.113.0/24"
                ]
            },
            "Bool": {"aws:ViaAWSService": "false"}
        }
    }
}
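To try this policy out, you could save the document above as deny-ip-policy.json and attach it to an IAM group, as in the hedged sketch below; the group name developers and the account ID 111122223333 are placeholders.

aws iam create-policy --policy-name DenyAccessOutsideAllowedIPs --policy-document file://deny-ip-policy.json
aws iam attach-group-policy --group-name developers --policy-arn arn:aws:iam::111122223333:policy/DenyAccessOutsideAllowedIPs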

Conclusion

This tutorial demonstrated how to deny IP addresses using an AWS IAM policy, with IAM policy examples.

How to Access AWS EC2 instance on Specific Dates using IAM Policy

Did you know you can restrict a user or a group of IAM users from accessing AWS services with a single policy?

In this quick tutorial you will learn how to access an AWS EC2 instance only on specific dates using an IAM policy.

Let's get started.

Prerequisites

  • AWS account
  • Permissions to create IAM Policy

Creating IAM Policy to Access AWS EC2 instance on Specific Dates

Let's describe the below IAM policy in the AWS Cloud.

  • Version is the policy language version, which is a fixed date (2012-10-17).
  • Effect is Allow in the statement as we want to allow users or groups to be able to describe AWS EC2 instances.
  • Resource is the * wildcard as we want the action to be allowed for all AWS EC2 instances.
  • This policy allows users or groups to describe instances only within specific dates, using the DateGreaterThan and DateLessThan condition operators on aws:CurrentTime.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            
            "Action": "ec2:DescribeInstances",
            "Resource": "*",
            "Condition": {
                "DateGreaterThan": {"aws:CurrentTime": "2023-03-11T00:00:00Z"},
                "DateLessThan": {"aws:CurrentTime": "2020-06-30T23:59:59Z"}
            }
        }
    ]
}
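Before attaching the policy, you can sanity-check it with the IAM policy simulator from the CLI, as in the sketch below. It assumes the document is saved as ec2-dates-policy.json; the output shows whether ec2:DescribeInstances would be allowed or implicitly denied under the policy.

aws iam simulate-custom-policy --policy-input-list file://ec2-dates-policy.json --action-names ec2:DescribeInstances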

Conclusion

This tutorial demonstrated how to create an IAM policy that allows access to AWS EC2 instances only on specific dates.

What is Amazon EC2 in AWS?

If you are looking to start your career in the AWS cloud, then knowing your first service, AWS EC2, can give you a good understanding of the compute resources in the AWS cloud. While learning AWS EC2 you will also understand which other services rely on AWS EC2.

Let's get started and learn AWS EC2.

Table of Content

  1. Amazon EC2 (AWS Elastic compute Cloud)
  2. Amazon EC2 key concepts and best practices
  3. Pricing of Amazon Linux 2
  4. Configure SSL/TLS on Amazon Linux 2
  5. How to add extra AWS EBS Volumes to an AWS EC2 instance
  6. AMI (Amazon Machine Image)
  7. Features of AMI
  8. AMI Lifecycle
  9. Creating an Amazon EBS Backed Linux AMI
  10. Creating an Instance Store backed Linux AMI
  11. Copying an Amazon AMI
  12. Storing and restoring an Amazon AMI
  13. Amazon Linux 2
  14. AWS Instances
  15. Stop/Start Instance EBS Backed instance
  16. Reboot AWS EC2 Instance
  17. Hibernated Instance ( EBS Backed instance)
  18. Terminated Instance EBS Backed instance
  19. AWS Instance types
  20. AWS Instance Lifecycle
  21. Monitoring AWS EC2 instance
  22. Cloud-init
  23. AWS EC2 Monitoring
  24. AWS EC2 Networking
  25. Local Zones
  26. AWS Wavelength
  27. Elastic Network Interface
  28. Configure your network interface using ec2-net-utils for Amazon Linux
  29. IP Address
  30. Assign a secondary private IPv4 address
  31. What is Elastic IP address?
  32. Associate an Elastic IP address with the secondary private IPv4 address
  33. Conclusion

Amazon EC2 (AWS Elastic compute Cloud)

Amazon EC2 stands for Amazon Elastic Compute Cloud; it allows you to launch servers or virtual machines that are scalable in the Amazon Web Services cloud. Also, with an AWS EC2 instance you don't need to invest in any hardware or electricity costs, and you just pay for what you use.

When required, you can quickly decrease or scale up the number of AWS EC2 instances.

  • Instances require an operating system, additional software, etc. to get launched, so they use templates known as Amazon Machine Images (AMIs).
  • You can work with various compute configurations, such as memory or CPU; for that you will need to select the appropriate instance type.
  • To securely log in to these instances you will need to generate a key pair, where you store the private key and AWS stores the public key.
  • An instance can have two types of storage, i.e. the instance store, which is temporary, and Amazon Elastic Block Store, also known as EBS volumes.

Amazon EC2 key concepts and best practices

  • Provides scalable computing capacity in the Amazon Web Services cloud. You don't need to invest in hardware up front, and it takes a few minutes to launch your virtual machine and deploy your applications.
  • You can use preconfigured templates known as Amazon Machine Images (AMIs) that include the OS and additional software. The launched machines are known as instances, and instances come with various compute configurations, such as CPU and memory, known as the instance type.
  • To securely log in you need key pairs, where the public key is stored with AWS and the private key is stored with the customer. A key pair uses either the RSA or ED25519 type; note that Windows instances don't support ED25519.
  • To use a key on mac or Linux computer grant the following permissions:
 chmod 400 key-pair-name.pem
  • Storage volumes for temporary data can use instance store volumes; however, when you need permanent data, consider using EBS, i.e., Elastic Block Store.
  • To secure your instance, consider using security groups.
  • If you need to allocate a static IP address to an instance, then consider using an Elastic IP address.
  • Your instance can be an EBS-backed instance or an instance store-backed instance, which means the root volume can be either EBS or the instance store. Instance store-backed instances are either running or terminated but cannot be stopped. Also, instance attributes such as RAM and CPU cannot be changed.
  • Instances launched from an Amazon EBS-backed AMI launch faster than instances launched from an instance store-backed AMI
  • When you launch an instance from an instance store-backed AMI, all the parts have to be retrieved from Amazon S3 before the instance is available. With an Amazon EBS-backed AMI, only the parts required to boot the instance need to be retrieved from the snapshot before the instance is available
  • Use Amazon Inspector to automatically discover software vulnerabilities and unintended network exposure.
  • Use Trusted advisor to inspect your environment.
  • Use separate Amazon EBS volumes for the operating system versus your data.
  • Encrypt EBS volumes and snapshots.
  • Regularly back up your EBS volumes using EBS Snapshots, create AMI’s from your instance.
  • Deploy critical applications across multiple AZ’s.
  • Set the TTL to 255 or close to it on your application side so that connections stay intact; otherwise it can cause reachability issues.
  • When you install Apache, the document root is the /var/www/html directory, and by default only the root user has access to this directory. If you want any other user to access the files under this directory, perform the steps below. Let's assume the user is ec2-user.
sudo usermod -a -G apache ec2-user  # Logout and login back
sudo chown -R ec2-user:apache /var/www
sudo chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775 {} \;  # For Future files

Pricing of Amazon Linux 2

There are different pricing plans available for EC2 instances, such as:

  • On-Demand Instances: No long-term commitments; you pay per second of usage, with a minimum of 60 seconds.
  • Savings Plans: You commit to a consistent amount of compute usage for a 1-year or 3-year term in exchange for lower prices.
  • Reserved Instances: You book a specific instance configuration for a 1-year or 3-year term.
  • Spot Instances: If you need cheap instances, you can use spare, unused EC2 capacity at a steep discount.

Configure SSL/TLS on Amazon Linux 2

  • SSL/TLS creates an encrypted channel between a web server and web client that protects data in transit from being eavesdropped on.  
  • Make sure you have an EBS-backed Amazon Linux 2 instance with Apache installed. TLS Public Key Infrastructure (PKI) relies on DNS, so also make sure to register a domain for your EC2 instance.
  • Nowadays TLS 1.2 and 1.3 are the recommended versions; make sure the underlying TLS library is supported and enabled.
  • Enable TLS on the server by installing the Apache SSL module using the command below, followed by configuring it.
sudo yum install -y mod_ssl

sudo vi /etc/httpd/conf.d/ssl.conf

  • Generate a self-signed test certificate by running make-dummy-cert from the certs directory (see the restart step after these commands).
cd /etc/pki/tls/certs
sudo ./make-dummy-cert localhost.crt
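Once ssl.conf points to your certificate, restart Apache so the TLS configuration takes effect. This is a minimal sketch; the curl check simply assumes the instance serves a test page over HTTPS.

sudo systemctl restart httpd

# Quick local check (-k skips validation for the self-signed test certificate)
curl -k https://localhost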

How to add extra AWS EBS Volumes to an AWS EC2 instance

This section explains how to add an extra volume to an instance. There are two types of volumes: the first is the root volume and the other is an extra (EBS) volume which you can add. To add an extra volume to an AWS EC2 instance, follow the steps below:

  • Launch an AWS EC2 instance and, while launching, under Configure storage choose Add new volume. Ensure that the added EBS volume size is 8 GB and the type is gp3. The AWS EC2 instance will then have two volumes: one for root and the other for the added storage.
  • Before modifying or updating the volume, make sure to take a snapshot of the current volume by navigating to the Storage tab under EC2, then Block devices, then the volume ID.
  • Now create a file system on the unmounted EBS volume and mount it by running the following commands.
sudo mkfs -t xfs /dev/nvme1n1
sudo mkdir /data
sudo mount /dev/nvme1n1 /data
lsblk -f
  • Now, again on the AWS EC2 instance, go to the volume ID and click Modify volume to change the volume size.
  • Extend the file system by first checking the size of the file system.
df -hT
  • Now to extend use the command:
sudo xfs_growfs -d /data
  • Again, check the file system size by running the (df -hT) command.

AMI (Amazon Machine Image)

  • You can launch multiple instances using the same AMI. An AMI includes EBS snapshots (for EBS-backed AMIs) or, for instance store-backed AMIs, a template of the root volume containing the OS and additional software.

To Describe the AMI you can run the below command.

aws ec2 describe-images \
    --region us-east-1 \
    --image-ids ami-1234567890EXAMPLE

Features of AMI

  • You can create an AMI using snapshot or a template.
  • You can deregister the AMI as well.
  • AMI’s are either EBS backed or instance backed.
    • With EBS backed AMI’s the Root volume is terminated and other EBS volume is not deleted.
  • When you launch an instance from an instance store-backed AMI, all the parts have to be retrieved from Amazon S3 before the instance is available.
  • With an Amazon EBS-backed AMI, only the parts required to boot the instance need to be retrieved from the snapshot before the instance is available
  • The cost of EBS-backed instances is lower because only changes are stored, whereas for instance store-backed instances the entire customized AMI is stored in AWS S3 each time.
  • AMIs use two types of virtualization: paravirtual (PV) or hardware virtual machine (HVM), of which HVM is the better performer.
  • HVM are treated like actual physical disks. The boot process is similar to bare metal operating system.
    • The most common HVM bootloader is GRUB or GRUB2.
    • HVM boots by executing master boot record of root block device of your image.
    • HVM allows you to run an OS on top of a VM as if it were bare metal hardware.
    • HVM can take advantage of hardware extensions such as enhanced networking or GPU Processing
  • PV boots with special boot loader called PV-GRUB.
    • PV runs on hardware that doesn’t have explicit support for virtualization.
    • PV cannot take advantage of hardware extensions.
    • All current Regions and current-generation instance types support HVM AMIs; the same is not true for PV.
  • The first component to load when you start a system is the firmware: Intel and AMD instance types support both Legacy BIOS and UEFI (Unified Extensible Firmware Interface), while Graviton instances use UEFI. To check the boot mode of an AMI run the below command. Note: To check the boot mode of an instance, you can run the describe-instances command.
aws ec2 --region us-east-1 describe-images --image-ids ami-0abcdef1234567890
  • To check the boot mode of Operating system, SSH into machine and then run the below command.
sudo /usr/sbin/efibootmgr
  • You can set the boot mode while registering an image, not while creating an image.
  • Shared AMI: These are created by developers and made available for others to use.
  • You can deprecate or Deregister the AMI anytime.
  • Recycle Bin is a data recovery feature that enables you to restore accidentally deleted Amazon EBS snapshots and EBS-backed AMIs, provided you have permissions such as ec2:ListImagesInRecycleBin and ec2:RestoreImageFromRecycleBin.

AMI Lifecycle

You can create two types of AMIs:

Creating an Amazon EBS Backed Linux AMI

  • Launch instance 1 from an AMI (Marketplace, your own AMI, a public AMI, or a shared AMI).
  • Customize the instance by adding software, etc.
  • Create a new image from the customized instance. When you create a new image, you create a new AMI as well. Amazon EC2 creates snapshots of your instance's root volume and any other EBS volumes attached to your instance (see the CLI sketch after this list).
  • Launch another instance, instance 2, from the new AMI.
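A minimal sketch of the create-image step with the AWS CLI is shown below; the instance ID and the AMI name are placeholders that you replace with your own.

aws ec2 create-image --instance-id i-1234567890abcdef0 --name "my-customized-ami" --description "AMI created from the customized instance"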

Creating an Instance Store backed Linux AMI

  • Launch instance 1 only from an instance store-backed AMI.
  • SSH into the instance and customize it.
  • Bundle it, which produces an image manifest and files that contain a template of the root volume. Bundling might take a few minutes.
  • Next upload the bundle to AWS S3.
  • Now, register your AMI.

Note 1: To create and manage an instance store-backed Linux AMI you will need the AMI tools. You will also need the AWS CLI and an AWS S3 bucket.

Note 2: You can’t convert an instance store-backed Windows AMI to an Amazon EBS-backed Windows AMI and you cannot convert an AMI that you do not own.

Copying an Amazon AMI

  • You can copy AMI’s within region or across regions
  • You can also copy AMI along with encrypted snapshot.
  • When you copy Ami the target AMI has its own identifier.
  • Make sure your IAM principal has the permissions to copy AMI.
  • Provide or update Bucket policy so that new AMI can be copied successfully.
  • You can copy an AMI in another region
  • You can copy an AMI in another account. For copying the AMI across accounts make sure you have all the permissions such as Bucket permission, key permissions and snapshot permissions.
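The sketch below copies an AMI from us-east-1 into us-west-2 with the AWS CLI; the source AMI ID and the new AMI name are placeholders.

aws ec2 copy-image --source-region us-east-1 --source-image-id ami-0abcdef1234567890 --region us-west-2 --name "my-copied-ami"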

Storing and restoring an Amazon AMI

  • You can store AMI’s in AWS S3 bucket by using CreatStoreImageTask  API
  • To monitor the progress of AMI use DescribeStoreImageTask
  • copy AMI to another bucket.
  • You can restore only EBS backed AMI’s using CreateRestoreImageTask.
  • To store and restore AMI the S3 bucket must be in same region.

Amazon Linux 2

  • It supports kernels 4.14 and 5.10. You can also upgrade to the 5.15 kernel, which allows greater parallelism and scalability.
  • New improvements in the EXT file system, such as easier handling of large files.
  • DAMON is better supported as the data access monitoring for better memory and performance analysis.
  • To install and verify by upgrading kernel use below command.
sudo amazon-linux-extras install kernel-5.15
  • The cloud-init package is an open-source application built by Canonical that is used to bootstrap Linux images in a cloud computing environment, such as Amazon EC2. It enables you to specify actions that should happen to your instance at boot time.
  • Amazon Linux also uses the cloud-init package to perform initial configuration of the ec2-user account, set the hostname, generate host keys, prepare repositories for package management, and add the user's public key.
  • Amazon Linux uses the cloud-init actions found in /etc/cloud/cloud.cfg.d and /etc/cloud/cloud.cfg. You can create your own cloud-init action files in /etc/cloud/cloud.cfg.d.

AWS Instances

An instance is a virtual server in the cloud. Instance type essentially determines the hardware of the host computer used for your instance. Each instance type offers different compute and memory capabilities.

The root device for your instance contains the image used to boot the instance. The root device is either an Amazon Elastic Block Store (Amazon EBS) volume or an instance store volume.

Your instance may include local storage volumes, known as instance store volumes, which you can configure at launch time with block device mapping

Stop/Start Instance EBS Backed instance:

  • All the storage and EBS volumes remain attached as they are (they are preserved, not deleted).
  • You are not charged for the instance when it is in the stopped state.
  • All the EBS volumes, including the root device, continue to be billed for storage.
  • While the instance is in the stopped state you can attach or detach EBS volumes.
  • You can create AMIs while the instance is stopped, and you can also change a few instance configurations such as the kernel, RAM disk, and instance type.
  • The Elastic IP address remains associated with the instance.
  • The instance is usually moved to a new host computer when it is started again (in some cases it stays on the same host).
  • The RAM is erased.
  • Instance store volume data is erased.
  • You stop incurring charges for an instance as soon as its state changes to stopping.

Reboot AWS EC2 Instance

  • The instance stays on the same host computer
  • The Elastic IP address remains associated with the instance
  • The RAM is erased
  • Instance store volumes data is preserved

Hibernated Instance ( EBS Backed instance)

  • The Elastic IP address remains associated with the instance
  • We move the instance to a new host computer
  • The RAM is saved to a file on the root volume
  • Instance store volumes data is erased
  • You incur charges while the instance is in the stopping state, but stop incurring charges when the instance is in the stopped state

Terminated Instance EBS Backed instance:

  • The root volume is deleted by default, but any other attached EBS volumes are preserved.
  • The instance is terminated and cannot be started again.
  • You are not charged once the instance is terminated.
  • The Elastic IP address is disassociated from the instance.

AWS Instance types

  • General Purpose: These instances provide an ideal cloud infrastructure, offering a balance of compute, memory, and networking resources for a broad range of applications that are deployed in the cloud.
  • Compute Optimized instances: Compute optimized instances are ideal for compute-bound applications that benefit from high-performance processors.
  • Memory optimized instances:  Memory optimized instances are designed to deliver fast performance for workloads that process large data sets in memory.
  • Storage optimized instances: Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.

Note:  EBS-optimized instances enable you to get consistently high performance for your EBS volumes by eliminating contention between Amazon EBS I/O and other network traffic from your instance.

You can enable enhanced networking on supported instance types to provide lower latencies, lower network jitter, and higher packet-per-second (PPS) performance

AWS Instance Lifecycle

  • Note: You cannot stop and then start an instance store-backed instance.
  • The instance lifecycle, starting from an AMI, is:
    • Launch instance → pending
    • pending → running
    • running → rebooting → running
    • running → stopping → stopped (EBS-backed instances only)
    • running or stopped → shutting-down → terminated

Amazon EC2 instances support multithreading, which enables multiple threads to run concurrently on a single CPU core. Each thread is represented as a virtual CPU (vCPU) on the instance. An instance has a default number of CPU cores, which varies according to instance type. For example, an m5.xlarge instance type has two CPU cores and two threads per core by default—four vCPUs in total.

  • Number of CPU cores: You can customize the number of CPU cores for the instance. You might do this to potentially optimize the licensing costs of your software with an instance that has sufficient amounts of RAM for memory-intensive workloads but fewer CPU cores.
  • Threads per core: You can disable multithreading by specifying a single thread per CPU core. You might do this for certain workloads, such as high performance computing (HPC) workloads (see the launch sketch after this list).
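A hedged sketch of launching an instance with custom CPU options via the AWS CLI is shown below; the AMI ID, instance type, and key pair name are placeholders.

aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type c5.2xlarge --key-name my-key-pair --cpu-options CoreCount=2,ThreadsPerCore=1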

Monitoring AWS EC2 instance

You can monitor AWS EC2 instances either manually or automatically. Let's discuss a few of the automated monitoring tools.

  • System status checks
  • Instance status checks
  • Amazon CloudWatch alarms
  • Amazon EventBridge
  • Amazon CloudWatch Logs
  • CloudWatch agent

Now, let's discuss a few of the manual tools to monitor an AWS EC2 instance.

  • Amazon EC2 Dashboard
  • Amazon CloudWatch Dashboard
  • Instance status checks on the EC2 Dashboard
  • Scheduled events on the EC2 Dashboard

Cloud-init

It is used to bootstrap Linux images in a cloud computing environment. Amazon Linux also uses cloud-init to perform initial configuration of the ec2-user account. Amazon Linux uses the cloud-init actions found in /etc/cloud/cloud.cfg.d and /etc/cloud/cloud.cfg, and you can also add your own actions in /etc/cloud/cloud.cfg.d.

The tasks that are performed by default by cloud-init are:

  • Set the default locale.
  • Set the hostname.
  • Parse and handle user data.
  • Generate host private SSH keys.
  • Add a user’s public SSH keys to .ssh/authorized_keys for easy login and administration.
  • Prepare the repositories for package management.
  • Handle package actions defined in user data.
  • Execute user scripts found in user data.

AWS EC2 Monitoring

  • By default, AWS EC2 sends metrics to CloudWatch every 5 minutes.
  • To send metric data for your instance to CloudWatch in 1-minute periods, you can enable detailed monitoring on the instance, but you are charged per metric that is sent to CloudWatch (see the CLI sketch after the list-metrics command below).
  • To list all the metrics of a particular AWS EC2 instance use the below command.
aws cloudwatch list-metrics --namespace AWS/EC2 --dimensions Name=InstanceId,Value=i-1234567890abcdef0
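Detailed monitoring can also be switched on and off from the CLI, as in the sketch below; the instance ID is a placeholder.

aws ec2 monitor-instances --instance-ids i-1234567890abcdef0

# Switch back to basic (5-minute) monitoring
aws ec2 unmonitor-instances --instance-ids i-1234567890abcdef0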

To create CloudWatch alarms, you can select the instance and choose Actions, then Monitor and troubleshoot, then Manage CloudWatch alarms.

  • You can use Amazon EventBridge to automate your AWS services and respond automatically to system events, such as application availability issues or resource changes.
  • Events from AWS services are delivered to EventBridge in near real time. For example, you can activate a Lambda function whenever an instance enters the running state: create a rule that matches the AWS EC2 instance state-change event, and once the event is generated the rule invokes the Lambda function.
  • You can use the CloudWatch agent to collect both system metrics and log files from Amazon EC2 instances and on-premises servers.
sudo yum install amazon-cloudwatch-agent

AWS EC2 Networking

If you require a persistent public IP address, you can allocate an Elastic IP address for your AWS account and associate it with an instance or a network interface.

To increase network performance and reduce latency, you can launch instances in a placement group.

Local Zones

A Local Zone is an extension of an AWS Region in geographic proximity to your users. Local Zones have their own connections to the internet and support AWS Direct Connect, so that resources created in a Local Zone can serve local users with low-latency communications.

AWS Wavelength

AWS Wavelength enables developers to build applications that deliver ultra-low latencies to mobile devices and end users. Wavelength deploys standard AWS compute and storage services to the edge of telecommunication carriers’ 5G networks. Developers can extend a virtual private cloud (VPC) to one or more Wavelength Zones, and then use AWS resources like Amazon EC2 instances to run applications that require ultra-low latency and a connection to AWS services in the Region.

Elastic Network Interface

  • An ENI (Elastic Network Interface) is basically a virtual network adapter which has the following attributes:
    • 1 primary private IPv4
    • 1 or more secondary private IPv4
    • 1 Elastic IP per private IP
    • One Public IPv4 address
    • 1 Mac address
    • You can create and configure network interfaces and attach them to instances in the same Availability Zone.
    • Each instance has a default network interface, called the primary network interface.
  • Instances with multiple network cards provide higher network performance, including bandwidth capabilities above 100 Gbps and improved packet rate performance. Most instances have a single network card, which can have multiple ENIs attached.
  • Certain instance types support multiple network cards.
  • You can attach a network interface to an instance when it’s running (hot attach), when it’s stopped (warm attach), or when the instance is being launched (cold attach).
  • You can detach secondary network interfaces when the instance is running or stopped. However, you can’t detach the primary network interface.

Configure your network interface using ec2-net-utils for Amazon Linux

There is an additional script installed by AWS called ec2-net-utils. To install this package, use the following command.

sudo yum install ec2-net-utils

The configuration files that are generated can be listed using the below command:

ls -l /etc/sysconfig/network-scripts/*-eth?

IP Address

  • You can specify multiple private IPv4 and IPv6 addresses for your instances.
  • You can assign a secondary private IPv4 address to any network interface. The network interface does not need to be attached to the instance.
  • Secondary private IPv4 addresses that are assigned to a network interface can be reassigned to another one if you explicitly allow it.
  • Although you can’t detach the primary network interface from an instance, you can reassign the secondary private IPv4 address of the primary network interface to another network interface.
  • Each private IPv4 address can be associated with a single Elastic IP address, and vice versa.
  • When a secondary private IPv4 address is reassigned to another interface, the secondary private IPv4 address retains its association with an Elastic IP address.
  • When a secondary private IPv4 address is unassigned from an interface, an associated Elastic IP address is automatically disassociated from the secondary private IPv4 address.

Assign a secondary private IPv4 address

  • In the EC2 console, choose Network Interfaces, select the interface, and choose Actions, then Manage IP addresses.
  • Assign a new secondary private IPv4 address and save your changes (see the CLI sketch after this list).
  • Verify the address again in the EC2 instance's Networking tab.
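The same assignment can be done from the CLI, as in the hedged sketch below; the network interface ID and the IP address are placeholders, and you can either let AWS pick a free address or specify one yourself.

aws ec2 assign-private-ip-addresses --network-interface-id eni-0123456789abcdef0 --secondary-private-ip-address-count 1

# Or assign a specific address from the subnet range
aws ec2 assign-private-ip-addresses --network-interface-id eni-0123456789abcdef0 --private-ip-addresses 10.0.0.82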

What is Elastic IP address?

  • It is a static public IPv4 address.
  • It is Region specific and cannot be moved to another Region.
  • You first allocate it to your account, and then associate it with an instance or network interface.
  • When you associate an Elastic IP address with an instance, it is also associated with the instance's primary network interface.

Associate an Elastic IP address with the secondary private IPv4 address

  • In the navigation pane, choose Elastic IPs, select the address, choose Actions, then Associate Elastic IP address, and pick the network interface and its secondary private IPv4 address (see the CLI sketch after this list).
  • Verify the association again in the EC2 instance's Networking tab.
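Below is a minimal CLI sketch of the same association; the allocation ID, network interface ID, and private IP address are placeholders for your own values.

aws ec2 associate-address --allocation-id eipalloc-0123456789abcdef0 --network-interface-id eni-0123456789abcdef0 --private-ip-address 10.0.0.82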

Conclusion

In this long, ultimate guide we learned everything one must know about AWS EC2 in the AWS Cloud.

AWS KMS Keys

If you need to secure the data in your AWS Cloud account, then you must know everything about AWS KMS keys.

In this tutorial we will learn everything we should know about AWS KMS keys and how to reference these AWS KMS keys in IAM policies.

Table of Content

  1. AWS KMS (Key Management Service)
  2. Symmetric Encryption KMS Keys
  3. Asymmetric KMS keys
  4. Data keys
  5. Custom key stores
  6. Key material
  7. Key policies in AWS KMS
  8. Default Key Policy
  9. Allowing user to access KMS keys with Key Policy
  10. Allowing Users and Roles to access KMS keys with Key Policy
  11. Access KMS Key by User in different account
  12. Creating KMS Keys
  13. What is Multi-region KMS Keys?
  14. Key Store and Custom Key Store
  15. How to Encrypt your AWS RDS using AWS KMS keys
  16. Encrypt AWS DB instance using AWS KMS keys
  17. Encrypting the AWS S3 bucket using AWS KMS Keys
  18. Applying Server-side Encryption on AWS S3 bucket
  19. Configure AWS S3 bucket to use S3 Bucket Key with Server-Side Encryption (SSE-KMS) for new objects
  20. Client Side Encryption on AWS S3 Bucket
  21. Conclusion

AWS KMS (Key Management Service)

KMS is a managed service that makes it easy to create and control the cryptographic keys that protect your data by encrypting and decrypting it. KMS uses hardware security modules (HSMs) to protect and validate your keys.

A KMS key contains a reference to the key material that is used when you perform cryptographic operations with the KMS key. You cannot delete this key material directly; to remove it, you must delete the KMS key. A KMS key also contains metadata, such as the key ID, key spec, key usage, creation date, description, and key state. Key identifiers act like names for your KMS keys.

Key ID: It acts like a name, for example 1234abcd-12ab-34cd-56ef-1234567890ab

Note: A cryptographic key is a string of bits used by a cryptographic algorithm to transform plain text into cipher text or vice versa. This key remains private and ensures secure communication.

  • The KMS keys that you create yourself are customer managed keys. You have control over the key policy, enabling and disabling the key, rotating key material, adding tags, and creating aliases. When you create an AWS KMS key, by default you get a KMS key for symmetric encryption.
    • Symmetric encryption keys are used in symmetric encryption, where the same key is used for encryption and decryption.
    • An asymmetric KMS key represents a mathematically related public key and private key pair.
  • The KMS keys that are created automatically by AWS are AWS managed keys. Their aliases are represented as aws/redshift, etc. AWS managed keys are now rotated automatically every year.
  • AWS owned keys are a collection of KMS keys that an AWS service owns and manages for use in multiple AWS accounts. Although AWS owned keys are not in your AWS account, an AWS service can use an AWS owned key to protect the resources in your account.
  • Alias: A user-friendly name given to a KMS key. For example: alias/ExampleAlias
  • A custom key store is an AWS KMS resource backed by a key manager outside of AWS KMS that you own and manage.
  • Cryptographic operations are API operations that use KMS keys to protect data.
  • Key material is the string of bits used in a cryptographic algorithm.
  • The key policy determines who can manage the KMS key and who can use it. The key policy is attached to the KMS key and is always defined in the AWS account and Region that owns the KMS key.
  • All IAM policies that are attached to the IAM user or role making the request. IAM policies that govern a principal’s use of a KMS key are always defined in the principal’s AWS account.

Symmetric Encryption KMS Keys

When you create an AWS KMS key, by default, you get a KMS key for symmetric encryption. Symmetric key material never leaves AWS KMS unencrypted. To use a symmetric encryption KMS key, you must call AWS KMS. Symmetric encryption keys are used in symmetric encryption, where the same key is used for encryption and decryption.

AWS services that are integrated with AWS KMS use only symmetric encryption KMS keys to encrypt your data. These services do not support encryption with asymmetric KMS keys. 

You can use a symmetric encryption KMS key in AWS KMS to encrypt, decrypt, and re-encrypt data, and generate data keys and data key pairs.

When you raise a request, for example a call to the KMS Encrypt operation, the request and response look as follows:

Request syntax:

{
   "EncryptionAlgorithm": "string",
   "EncryptionContext": {
      "string" : "string"
   },

   "GrantTokens": [ "string" ],
   "KeyId": "string",
   "Plaintext": blob
}
Response Syntax

{
   "CiphertextBlob": blob,
   "EncryptionAlgorithm": "string",
   "KeyId": "string"
}
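A minimal sketch of using a symmetric encryption KMS key from the AWS CLI is shown below; the alias alias/my-example-key and the file names are placeholders. The CLI returns the ciphertext base64-encoded, so it is decoded before being passed back to decrypt.

aws kms encrypt --key-id alias/my-example-key --plaintext fileb://secret.txt --query CiphertextBlob --output text > ciphertext.b64
base64 --decode ciphertext.b64 > ciphertext.bin

aws kms decrypt --ciphertext-blob fileb://ciphertext.bin --query Plaintext --output text | base64 --decode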

Asymmetric KMS keys

You can create asymmetric KMS keys in AWS KMS. An asymmetric KMS key represents a mathematically related public key and private key pair. The private key never leaves AWS KMS unencrypted.

Data keys

Data keys are symmetric keys you can use to encrypt data, including large amounts of data and other data encryption keys. Unlike symmetric KMS keys, which can’t be downloaded, data keys are returned to you for use outside of AWS KMS.
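For example, you can ask AWS KMS to generate a data key under a KMS key with the CLI sketch below (the alias is a placeholder); the response contains the plaintext data key for local encryption and an encrypted copy that you store alongside your data.

aws kms generate-data-key --key-id alias/my-example-key --key-spec AES_256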

Custom key stores

A custom key store is an AWS KMS resource backed by a key manager outside of AWS KMS that you own and manage. When you use a KMS key in a custom key store for a cryptographic operation, the operation is performed by the key manager that backs the store.

Key material

Key material is the string of bits used in a cryptographic algorithm. Secret key material must be kept secret to protect the cryptographic operations that use it. Public key material is designed to be shared. You can use key material that AWS KMS generates, key material that is generated in the AWS CloudHSM cluster of a custom key store, or import your own key material.

Key policies in AWS KMS

A key policy is a resource policy for an AWS KMS key. Key policies are the primary way to control access to KMS keys. Every KMS key must have exactly one key policy. The statements in the key policy determine who has permission to use the KMS key and how they can use it. You can also use IAM policies and grants to control access to the KMS key, but every KMS key must have a key policy.

Unless the key policy explicitly allows it, you cannot use IAM policies to allow access to a KMS key. Without permission from the key policy, IAM policies that allow permissions have no effect. Unlike IAM policies, which are global, key policies are Regional

Default Key Policy

As soon as you create the KMS keys, the default key policy is also created which gives the AWS account that owns the KMS key full access to the KMS key. It also allows the account to use IAM policies to allow access to the KMS key, in addition to the key policy.

{
  "Sid": "Enable IAM policies",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:root"
   },
  "Action": "kms:*",
  "Resource": "*"
}

Allowing user to access KMS keys with Key Policy

You can create and manage key policies in the AWS KMS console or by using the AWS KMS API operations. First you need to allow users, roles, or admins in the key policy to use the KMS keys. The key policy shown below allows the user Alice in account 111122223333 to use the KMS key.

Note: To access KMS, users also need separate IAM policies.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Describe the policy statement",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/Alice"
      },
      "Action": "kms:DescribeKey",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "kms:KeySpec": "SYMMETRIC_DEFAULT"
        }
      }
    }
  ]
}

Allowing Users and Roles to access KMS keys with Key Policy

First you need to allow users, roles, or admins in the key policy to use the KMS keys. For users to access KMS you also need to create separate IAM policies. For example, the below policy allows account 111122223333 (the account root) and myRole in account 111122223333 to use and administer the KMS key.

{
    "Id": "key-consolepolicy",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Enable IAM User Permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": "kms:*",
            "Resource": "*"
        },
        {
            "Sid": "Allow access for Key Administrators",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/myRole"
            },
            "Action": [
                "kms:Create*",
                "kms:Describe*",
                "kms:Enable*",
                "kms:List*",
                "kms:Put*",
                "kms:Update*",
                "kms:Revoke*",
                "kms:Disable*",
                "kms:Get*",
                "kms:Delete*",
                "kms:TagResource",
                "kms:UntagResource",
                "kms:ScheduleKeyDeletion",
                "kms:CancelKeyDeletion"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow use of the key",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/myRole"
            },
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow attachment of persistent resources",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/myRole"
            },
            "Action": [
                "kms:CreateGrant",
                "kms:ListGrants",
                "kms:RevokeGrant"
            ],
            "Resource": "*",
            "Condition": {
                "Bool": {
                    "kms:GrantIsForAWSResource": "true"
                }
            }
        }
    ]
}

Access KMS Key by User in different account

In this section we will go through an example where the AWS KMS key is present in Account 2 and the user bob from Account 1 needs to access it. [Access a KMS key in Account 2 as user bob in Account 1]

  • User bob needs to assume role (engineering) in Account 1.
{
    "Role": {
        "Arn": "arn:aws:iam::111122223333:role/Engineering",
        "CreateDate": "2019-05-16T00:09:25Z",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": {
                "Principal": {
                    "AWS": "arn:aws:iam::111122223333:user/bob"
                },
                "Effect": "Allow",
                "Action": "sts:AssumeRole"
            }
        },
        "Path": "/",
        "RoleName": "Engineering",
        "RoleId": "AROA4KJY2TU23Y7NK62MV"
    }
}
  • Attach an IAM policy to the IAM role (Engineering) in Account 1. The policy allows the role to use the KMS key in the other account.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncryptFrom",
                "kms:ReEncryptTo",
                "kms:GenerateDataKey",
                "kms:GenerateDataKeyWithoutPlaintext",
                "kms:DescribeKey"
            ],
            "Resource": [
                "arn:aws:kms:us-west-2:444455556666:key/1234abcd-12ab-34cd-56ef-1234567890ab"
            ]
        }
    ]
}
  • Now, in Account 2, create a KMS key policy that allows Account 1 to use this KMS key.
{
    "Id": "key-policy-acct-2",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Permission to use IAM policies",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::444455556666:root"
            },
            "Action": "kms:*",
            "Resource": "*"
        },
        {
            "Sid": "Allow account 1 to use this KMS key",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncryptFrom",
                "kms:ReEncryptTo",
                "kms:GenerateDataKey",
                "kms:GenerateDataKeyWithoutPlaintext",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        }
    ]
}

Creating KMS Keys

You can create KMS keys as either single-Region or multi-Region keys. By default, AWS KMS creates the key material. You need the below permissions to create KMS keys.

kms:CreateKey
kms:CreateAlias
kms:TagResource
iam:CreateServiceLinkedRole 
  • Navigate to the AWS KMS service in the AWS Management console and choose Create key.
  • Add an alias and a description for the AWS key that you are creating.
  • Next, add the permissions to the key and review the key before creation (the CLI sketch after this list shows the equivalent commands).
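The same steps can be sketched with the AWS CLI as below; the description and alias name are placeholders, and the key ID returned by create-key is what you pass to create-alias.

aws kms create-key --description "example symmetric encryption key"

# Use the KeyId value from the output above
aws kms create-alias --alias-name alias/my-example-key --target-key-id 1234abcd-12ab-34cd-56ef-1234567890ab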

What is Multi-region KMS Keys?

AWS KMS supports multi-Region keys, which are AWS KMS keys in different AWS Regions that can be used interchangeably – as though you had the same key in multiple Regions. Each set of related multi-Region keys has the same key material and key ID, so you can encrypt data in one AWS Region and decrypt it in a different AWS Region without re-encrypting or making a cross-Region call to AWS KMS.

  • You begin by creating a symmetric or asymmetric multi-Region primary key in an AWS Region that AWS KMS supports, such as US East (N. Virginia)
  • You set a key policy for the multi-Region key, and you can create grants, and add aliases and tags for categorization and authorization.
  • You then replicate the primary key into other Regions. When you do, AWS KMS creates a replica key in the specified Region with the same key ID and other shared properties as the primary key. Then it securely transports the key material across the Region boundary and associates it with the new KMS key in the destination Region, all within AWS KMS.

Key Store and Custom Key Store

A key store is a secure location for storing cryptographic keys. The default key store in AWS KMS also supports methods for generating and managing the keys that it stores.

By default, the cryptographic key material for the AWS KMS keys that you create in AWS KMS is generated in and protected by hardware security modules (HSMs). However, if you require even more control of the HSMs, you can create a custom key store.

A custom key store is a logical key store within AWS KMS that is backed by a key manager outside of AWS KMS that you own and manage.

AWS KMS – Keys – Default Key store (IN AWS KMS) – HSM

AWS KMS – Keys – Custom Key Store (OUTSIDE AWS KMS) – Key Manager Manages it

There are two Custom Key Stores:

  • An AWS CloudHSM key store is an AWS KMS custom key store backed by an AWS CloudHSM cluster. You create and manage your custom key stores in AWS KMS and create and manage your HSM clusters in AWS CloudHSM.
  • An external key store is an AWS KMS custom key store backed by an external key manager outside of AWS that you own and control

How to Encrypt your AWS RDS using AWS KMS keys

Amazon RDS supports only symmetric KMS keys. You cannot use an asymmetric KMS key to encrypt data in an Amazon RDS database.

When you use KMS with RDS, EBS, or DB instances, the service specifies an encryption context. The encryption context is additional authenticated data (AAD), and the same encryption context is used to decrypt the data. The encryption context is also written to your CloudTrail logs.

At minimum, Amazon RDS always uses the DB instance ID for the encryption context, as in the following JSON-formatted example:

{ "aws:rds:db-id": "db-CQYSMDPBRZ7BPMH7Y3RTDG5QY" }

Encrypt AWS DB instance using AWS KMS keys

  • To encrypt a new DB instance, choose Enable encryption on the Amazon RDS console.
  • When you create an encrypted DB instance, you can choose a customer managed key or the AWS managed key for Amazon RDS to encrypt your DB instance.
  • If you don't specify the key identifier for a customer managed key, Amazon RDS uses the AWS managed key for your new DB instance (see the CLI sketch after this list).
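A hedged CLI sketch of creating an encrypted MySQL DB instance with a customer managed key is shown below; the instance identifier, credentials, and key alias are placeholders that you replace with your own values.

aws rds create-db-instance --db-instance-identifier mydbinstance --db-instance-class db.t3.micro --engine mysql --master-username admin --master-user-password 'REPLACE_ME' --allocated-storage 20 --storage-encrypted --kms-key-id alias/my-example-key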

Amazon RDS builds on Amazon Elastic Block Store (Amazon EBS) encryption to provide full disk encryption for database volumes.

When you create an encrypted Amazon EBS volume, you specify an AWS KMS key. By default, Amazon EBS uses the AWS managed key for Amazon EBS in your account (aws/ebs). However, you can specify a customer managed key that you create and manage.

For each volume, Amazon EBS asks AWS KMS to generate a unique data key encrypted under the KMS key that you specify. Amazon EBS stores the encrypted data key with the volume.

Similar to DB instances Amazon EBS uses an encryption context with a name-value pair that identifies the volume or snapshot in the request. 

Encrypting the AWS S3 bucket using AWS KMS Keys

Amazon S3 integrates with AWS Key Management Service (AWS KMS) to provide server-side encryption of Amazon S3 objects. Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3.

Amazon S3 uses server-side encryption with AWS KMS (SSE-KMS) to encrypt your S3 object data.

When you configure your bucket to use an S3 Bucket Key for SSE-KMS, AWS generates a short-lived bucket-level key from AWS KMS then temporarily keeps it in S3

Applying Server-side Encryption on AWS S3 bucket

To apply server-side encryption on an AWS S3 bucket, you can create an S3 bucket policy that denies uploads which do not request SSE-KMS, and then apply it to the bucket as shown below.

{
   "Version":"2012-10-17",
   "Id":"PutObjectPolicy",
   "Statement":[{
         "Sid":"DenyUnEncryptedObjectUploads",
         "Effect":"Deny",
         "Principal":"*",
         "Action":"s3:PutObject",
         "Resource":"arn:aws:s3:::DOC-EXAMPLE-BUCKET1/*",
         "Condition":{
            "StringNotEquals":{
               "s3:x-amz-server-side-encryption":"aws:kms"
            }
         }
      }
   ]
}
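With this policy in place, uploads must request SSE-KMS explicitly. The sketch below shows one way to do that with the AWS CLI; the file name and the key alias are placeholders, and DOC-EXAMPLE-BUCKET1 matches the bucket in the policy above.

aws s3 cp ./report.csv s3://DOC-EXAMPLE-BUCKET1/report.csv --sse aws:kms --sse-kms-key-id alias/my-example-key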

Configure AWS S3 bucket to use S3 Bucket Key with Server-Side Encryption (SSE-KMS) for new objects

To enable an S3 Bucket Key when you create a new bucket follow the below steps.

  1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. Choose Create bucket.
  3. Enter your bucket name, and choose your AWS Region.
  4. Under Default encryption, choose Enable.
  5. Under Encryption type, choose AWS Key Management Service key (SSE-KMS).
  6. Choose an AWS KMS key:
    1. Choose AWS managed key (aws/s3), or
    2. Choose Customer managed key, and choose a symmetric encryption customer managed key in the same Region as your bucket.
  7. Under Bucket Key, choose Enable.
  8. Choose Create bucket.

Amazon S3 creates your bucket with an S3 Bucket Key enabled. New objects that you upload to the bucket will use the S3 Bucket Key. To disable the S3 Bucket Key, follow the previous steps and choose Disable.
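The equivalent default-encryption setting can also be applied to an existing bucket from the CLI, as sketched below; the bucket name and the key alias are placeholders.

aws s3api put-bucket-encryption --bucket DOC-EXAMPLE-BUCKET1 --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms","KMSMasterKeyID":"alias/my-example-key"},"BucketKeyEnabled":true}]}'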

Client Side Encryption on AWS S3 Bucket

Client-side encryption is the act of encrypting your data locally to ensure its security as it passes to the Amazon S3 service. The Amazon S3 service receives your encrypted data; it does not play a role in encrypting or decrypting it. For example, if you need to use KMS keys in a Java application, you can use the code below.

AWSKMS kmsClient = AWSKMSClientBuilder.standard()
                .withRegion(Regions.DEFAULT_REGION)
                .build();

        // create a KMS key for testing this example
        CreateKeyRequest createKeyRequest = new CreateKeyRequest();
        CreateKeyResult createKeyResult = kmsClient.createKey(createKeyRequest);

// --
        // specify an AWS KMS key ID
        String keyId = createKeyResult.getKeyMetadata().getKeyId();

        String bucket_name = "DOC-EXAMPLE-BUCKET1"; // placeholder bucket name used by the put/get calls below
        String s3ObjectKey = "EncryptedContent1.txt";
        String s3ObjectContent = "This is the 1st content to encrypt";
// --

        AmazonS3EncryptionV2 s3Encryption = AmazonS3EncryptionClientV2Builder.standard()
                .withRegion(Regions.US_WEST_2)
                .withCryptoConfiguration(new CryptoConfigurationV2().withCryptoMode(CryptoMode.StrictAuthenticatedEncryption))
                .withEncryptionMaterialsProvider(new KMSEncryptionMaterialsProvider(keyId))
                .build();

        s3Encryption.putObject(bucket_name, s3ObjectKey, s3ObjectContent);
        System.out.println(s3Encryption.getObjectAsString(bucket_name, s3ObjectKey));

        // schedule deletion of KMS key generated for testing
        ScheduleKeyDeletionRequest scheduleKeyDeletionRequest =
                new ScheduleKeyDeletionRequest().withKeyId(keyId).withPendingWindowInDays(7);
        kmsClient.scheduleKeyDeletion(scheduleKeyDeletionRequest);

        s3Encryption.shutdown();
        kmsClient.shutdown();

Conclusion

In this article we learned what AWS KMS (Key Management Service) is, what a key policy is, and how IAM policies grant users or roles access to KMS keys in the AWS cloud.

What is AWS RDS (Relational Database Service)?

In this post you will learn everything you must know, end to end, about AWS RDS. This tutorial will give you a glimpse of each component, starting from what a DB instance is through scaling and Multi-AZ cluster configurations and details.

Let's get started.

Table of Content

  • What is AWS RDS (Relational Database Service)?
  • Database Instance
  • Database Engines
  • Database Instance class
  • DB Instance Storage
  • Blue/Green Deployments
  • Working with Read Replicas
  • How does cross region replication works?
  • Multi AZ Deployments
  • Multi AZ DB instance deployment
  • How to convert a single DB instance to Multi AZ DB instance deployment
  • Multi-AZ DB Cluster Deployments
  • DB pricing
  • AWS RDS performance troubleshooting
  • Tagging AWS RDS Resources
  • Amazon RDS Storage
  • Monitoring Events, Logs and Streams in an Amazon RDS DB Instance.
  • How to grant Amazon RDS to publish the notifications to the SNS topic using the IAM Policy.
  • RDS logs
  • AWS RDS Proxy
  • Amazon RDS for MySQL
  • Performance improvements on MySQL RDS for Optimized reads.
  • Importing Data into MySQL with different data source.
  • Database Authentication with Amazon RDS
  • Connecting to your DB instance using IAM authentication from the command line: AWS CLI and mysql client
  • Create database user account using IAM authentication
  • Generate an IAM authentication token
  • Connecting to DB instance
  • Connecting to AWS Instance using Python boto3 (boto3 rds)
  • Final AWS RDS Troubleshooting’s

What is AWS RDS (Relational Database Service)?

  • It allows you to set up a relational database in the AWS Cloud. AWS RDS is a managed database service.
  • It is cost effective and offers resizable capacity; investing in your own hardware, memory, and CPU is time consuming and very costly.
  • With AWS RDS, AWS manages everything from scaling, availability, and backups to software patching, software installation, OS patching, OS installation, hardware lifecycle, and server maintenance.
  • You can define permissions of your database users and database with IAM.

Database Instance

A DB instance is an isolated database environment in which you create your database users and user-created databases.

  1. You can run your database instance in multiple AZs, also known as a Multi-AZ deployment. Amazon automatically provisions and maintains a secondary standby instance in a different Availability Zone. With this approach the primary DB replicates the data written to it to the standby instance located in another AZ. Note: The instance in the secondary AZ can also be configured as a read replica.
  2. You can attach security groups to your database instance to protect your instance.
  3. You can launch DB instance in Local zones as well by enabling local zone in Amazon EC2 console.
  4. You can use Amazon CloudWatch to monitor the status of your database instance. You can monitor the following metrics:
    1. IOPS (I/O operations per second)
    2. Latency (time from when an I/O request is submitted until it is completed)
    3. Throughput (number of bytes transferred per second to or from disk)
    4. Queue depth (how many I/O requests are pending in the queue)
  5. A DB instance has a unique DB instance identifier that a customer or user provides, and it must be unique for that customer within an AWS Region. If you provide the DB instance identifier as testing, then your endpoint will be formed as below.
testing.<account-identifier>.<region>.rds.amazonaws.com
  • DB instance supports various DB engines such as MySQL, MariaDB, PostgreSQL, Oracle, Microsoft SQL server, Amazon Aurora database engines.
  • A DB instance can host multiple databases with multiple schemas.
  • When you create any DB instance using AWS RDS service then by default it creates a master user account, and this user has all the permissions. Note: Make sure to change the password of this master user account.
  • You can create a backup of your Database instance by creating database snapshots.  You can also store your snapshots in AWS S3 bucket.
  • You can enable IAM database authentication on your database instance so that you don’t need any password to login to the database instance.
  • You can also enable Kerberos authentication to support external authentication of database users using Kerberos and Microsoft Active directory.
  • DB Instance are billed per hour.

Database Engines

DB engines are the specific database software that runs on your DB instance, such as MariaDB, Microsoft SQL Server, MySQL, Oracle, and PostgreSQL.

Database Instance class

The DB instance class determines the computation, memory, and storage capacity of a DB instance. AWS RDS supports three types of DB instance classes:

  • General purpose:
  • Memory optimized:
  • Burstable Performance
  1. DB instance classes support Intel Hyper-Threading Technology, which enables multiple threads to run in parallel on a single Intel Xeon CPU core. Each thread is represented as a vCPU on the DB instance. For example, the db.m4.xlarge DB instance class has 2 CPU cores and two threads per CPU core, which makes a total of 4 vCPUs. Note: You can disable Intel Hyper-Threading by specifying a single thread per CPU core when you run a high-performance computing workload.
  2. To set the Core count and Threads per core you need to edit the processor features.
  3. Quick note: To compare CPU capacity between different DB instance classes you should use ECUs (EC2 Compute Units). The amount of CPU that is allocated to a DB instance is expressed in terms of EC2 Compute Units.
  4. You can use EBS-optimized instances, which provide better performance for your DB instance by minimizing contention between Amazon EBS I/O and other traffic from your instance.

DB Instance Storage

You can attach Amazon EBS block-level storage volumes to a running instance. DB instance storage comes in the following types:

  • General Purpose SSD [gp2 and gp3]: cost-effective storage that is ideal for a broad range of workloads on medium-sized DB instances. Generally, gp2 volumes have a throughput limit of 250 MB/second.
  • For GP2
    • 3 IOPS for each GB with min 100 IOPS (I/O Operations per second)
    • 16000 IOPS for 5.34TB is max limit in gp2  
    • Throughput is max 250MB/sec where throughput is how fast the storage volume can perform read and write.
  • For GP3
    • Up to 32000 IOPS
  • Provisioned IOPS (PIOPS) [io1]: They are used when you need low I/O Latency, consistent I/O throughput. These are suited for production environments.
    • For io1 – up to 256000 (IOPS) and throughput up to 4000 MB/s
    • Note: Benefits of using provisioned IOPS are
      • Increases the number of I/O requests that the system can process.
      • Decreases latency because fewer I/O requests will be waiting in the queue.
      • Provides faster response times and higher database throughput.

Blue/Green Deployments

A Blue/Green deployment copies your database environment into a separate staging environment. You can make changes in the staging environment and then later promote those changes to the production environment. Blue/Green deployments are only available for RDS for MariaDB and RDS for MySQL.

Working with Read Replicas

  • Updates from primary DB are copied to the read replicas.
  • You can promote a read replica to be a standalone DB as well, for example when you require sharding (a shared-nothing architecture).
  • You can use or create read replica in different AWS Region as well.

You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. 

Note: With Cross region read replicas you can create read replicas in a different region from the source DB instance.
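
As a rough sketch of the idea, the boto3 call below creates a read replica; when it is run against a client configured for a different Region with the source instance's ARN, it creates a cross-Region replica. The identifiers my-db and my-db-replica are assumptions for illustration.

import boto3

rds = boto3.client("rds", region_name="us-east-1")  # destination Region

# Create a read replica of an existing source DB instance.
# For a cross-Region replica, run this against the destination Region
# and pass the source DB instance ARN as SourceDBInstanceIdentifier.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="my-db-replica",   # assumed replica name
    SourceDBInstanceIdentifier="my-db",     # assumed source instance
)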

How does cross-region replication work?

  • IAM role of Destination must have access to Source DB Instance.
    • Source DB acts as source
    • RDS creates automated DB Snapshot of source DB
    • Copy of Snapshot starts
    • Destination read replica uses copied DB Snapshot

Note: You can configure DB instance to replicate snapshots and transaction logs in another AWS region.

Multi AZ Deployments

  • You can run your database instance in multiple AZs, also known as a Multi-AZ deployment. Amazon automatically provisions and maintains a secondary standby instance in a different Availability Zone. With this approach the primary DB replicates the data written to it to the standby instance located in another AZ. Note: The secondary instance can also be configured as a read replica.
  • You can deploy either one or two standby instances.
  • When you have one standby instance it is known as a Multi-AZ DB instance deployment; the standby instance provides failover support but doesn't act as a read replica.
  • With two standby instances it is known as a Multi-AZ DB cluster deployment.
  • The failover mechanism automatically changes the Domain Name System (DNS) record of the DB instance to point to the standby DB instance.

Note: DB instances with multi-AZ DB instance deployments can have increased write and commit latency compared to single AZ deployment.

Multi AZ DB instance deployment

In a Multi-AZ DB instance deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone.  You can’t use a standby replica to serve read traffic

If a planned or unplanned outage of your DB instance results from an infrastructure defect, Amazon RDS automatically switches to a standby replica in another Availability Zone if you have turned on Multi-AZ.

How to convert a single DB instance to Multi AZ DB instance deployment

  • RDS takes a snapshot of the primary DB instance's EBS volumes.
  • It creates new volumes for the standby replica from the snapshot.
  • Next, it turns on synchronous block-level replication between the volumes of the primary and the standby replica (see the boto3 sketch below).
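
From the API side this conversion is a single modification call; the following is a minimal sketch, assuming a boto3 client and an instance named my-db (an illustrative name).

import boto3

rds = boto3.client("rds")

# Convert a single-AZ DB instance to a Multi-AZ DB instance deployment.
# ApplyImmediately=False defers the change to the next maintenance window.
rds.modify_db_instance(
    DBInstanceIdentifier="my-db",  # assumed instance name
    MultiAZ=True,
    ApplyImmediately=False,
)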

Multi-AZ DB Cluster Deployments

  • It has one writer DB instance.
  • It has two reader DB instances that allow clients to read the data.
  • AWS RDS replicates data from the writer DB instance to both reader DB instances, so the data stays in sync.
  • If a failover happens on the writer instance, the reader instances act as automatic failover targets: RDS promotes a reader DB instance to be the new writer DB instance. This happens automatically, typically within 35 seconds, and you can also trigger it manually from the Failover tab.

Cluster Endpoint

The cluster endpoint can write as well as read the data. The endpoint cannot be modified.

Reader Endpoint

Reader endpoint is used for reading the content from the DB cluster.

Instance Endpoint

Instance endpoints are used to connect to a specific DB instance directly, either to troubleshoot an issue within that instance or because your application requires fine-grained load balancing.

DB cluster parameter group

DB cluster parameter group acts as a container for engine configuration values that are applied to every DB instance in the Multi-AZ DB cluster

RDS Replica Lag

Replica lag is the difference in time between the latest transaction on the writer DB instance and the latest applied transaction on a reader instance. It can be caused by high write concurrency or heavy batch updates.

How to Solve Replica Lag

You can reduce replica lag by reducing the load on your writer DB instance. You can also use flow control to reduce replica lag. With flow control, RDS adds a delay at the end of a transaction, which decreases the write throughput on the writer instance. Flow control is controlled by the parameter below. By default it is set to 120 seconds, and you turn flow control off by setting it to 84000 seconds or to less than 120.

Flow control works by throttling writes on the writer DB instance, which ensures that replica lag doesn't continue to grow unbounded. Write throttling is accomplished by adding a delay; throttling here means queuing writes rather than letting them flow freely.

rpl_semi_sync_master_target_apply_lag

To check the status of flow control use below command.

SHOW GLOBAL STATUS like '%flow_control%';
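
To keep an eye on replica lag from outside the database, you can also read the ReplicaLag CloudWatch metric. The sketch below is illustrative only and assumes a reader instance named my-db-replica.

from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch")

# Fetch the average ReplicaLag (in seconds) for the last hour, in 5-minute buckets
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="ReplicaLag",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-db-replica"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])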

DB pricing

  • DB instances are billed per hour.
  • Storage is billed per GB per month.
  • I/O requests are billed per 1 million requests per month.
  • Data transfer is billed per GB in and out of your DB instance.

AWS RDS performance troubleshooting

  1. Setup CloudWatch monitoring
  2. Enable Automatic backups
  3. If your DB requires more I/O, migrate to a larger instance class or convert from magnetic storage to General Purpose or Provisioned IOPS storage.
  4. If you already have provisioned IOPS, consider adding more throughput capacity.
  5. If your app is caching DNS data of your instance, then make sure to set TTL value to less than 30 seconds because caching can lead to connection failures.
  6. Setup enough memory (RAM)
  7. Enable Enhanced monitoring to identify the Operating system issues
  8. Fine tune your SQL queries.
  9. Avoid letting tables in your database grow too large, as very large tables impact reads and writes.
  10. You can use option groups if you need to provide additional features or security for your database.
  11. You can use a DB parameter group, which acts as a container for engine configuration values that are applied to one or more DB instances.

Tagging AWS RDS Resources

  • Tags are very helpful and are basically key-value pairs (see the boto3 sketch after this list).
  • You can use Tags in IAM policies to manage access to AWS RDS resources.
  • Tags can be used to produce the detailed billing reports.
  • You can specify if you need tags to be applied to snapshots as well.
  • Tags are useful to determine which instance to be stopped, started, enable backups.
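
A minimal boto3 tagging sketch follows; the DB instance ARN and the tag keys are assumptions chosen for illustration.

import boto3

rds = boto3.client("rds")

# The full resource ARN of the DB instance is required for tagging (assumed here)
DB_ARN = "arn:aws:rds:us-east-1:123456789012:db:my-db"

# Add (or overwrite) tags on the DB instance
rds.add_tags_to_resource(
    ResourceName=DB_ARN,
    Tags=[
        {"Key": "environment", "Value": "production"},
        {"Key": "team", "Value": "data-platform"},
    ],
)

# List the tags back to verify
print(rds.list_tags_for_resource(ResourceName=DB_ARN)["TagList"])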

Amazon RDS Storage

Increasing DB instance storage capacity:

In the console, open Databases, choose your database, click Modify, increase the Allocated Storage value, and apply the change immediately.

Managing capacity automatically with Amazon RDS storage autoscaling

If your workload is unpredictable, enable storage autoscaling for the Amazon RDS DB instance. While creating the database, enable storage autoscaling and set the maximum storage threshold.
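
The same setting can be applied from code; below is a minimal boto3 sketch, assuming an instance named my-db and an upper limit of 1000 GiB (both illustrative values).

import boto3

rds = boto3.client("rds")

# Enable storage autoscaling by setting a maximum storage threshold (in GiB).
# RDS then scales the allocated storage automatically up to this limit.
rds.modify_db_instance(
    DBInstanceIdentifier="my-db",   # assumed instance name
    MaxAllocatedStorage=1000,       # assumed maximum storage threshold
    ApplyImmediately=True,
)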

Modifying settings for Provisioned IOPS SSD storage

You can reduce the amount of provisioned IOPS and throughput (read and write operations) for your instance; however, with Provisioned IOPS SSD storage you cannot reduce the storage size.

Monitoring Events, Logs and Streams in an Amazon RDS DB Instance.

Amazon EventBridge: a serverless event bus service that allows you to connect applications with data from various sources.

CloudTrail logs and CloudWatch Logs are also useful for auditing and monitoring your DB instances.

Database Activity Streams: AWS RDS pushes database activities to an Amazon Kinesis data stream.

How to grant Amazon RDS permission to publish notifications to an SNS topic

The policy below is attached to the SNS topic as a resource-based access policy.

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "events.rds.amazonaws.com"
      },
      "Action": [
        "sns:Publish"
      ],
      "Resource": "arn:aws:sns:us-east-1:123456789012:topic_name",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:rds:us-east-1:123456789012:db:prefix-*"
        },
        "StringEquals": {
          "aws:SourceAccount": "123456789012"
        }
      }
    }
  ]
}
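
Once the topic policy is in place, you can subscribe RDS events to the topic. The boto3 sketch below is a hedged example: the subscription name and event categories are chosen arbitrarily for illustration.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Subscribe the SNS topic (from the policy above) to DB instance events.
# The subscription name and event categories here are illustrative.
rds.create_event_subscription(
    SubscriptionName="my-db-events",
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:topic_name",
    SourceType="db-instance",
    EventCategories=["failover", "failure", "availability"],
    Enabled=True,
)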

RDS logs

  • Amazon RDS doesn’t provide host access to the database logs on the file system of your DB instance. You can Choose the Logs & events tab to view the database log files and logs on the console itself.
  • To publish SQL Server DB logs to CloudWatch Logs from the AWS Management Console. In the Log exports section, choose the logs that you want to start publishing to CloudWatch Logs.

Note: In CloudWatch Logs, a log stream is a sequence of log events that share the same source. Each separate source of logs in CloudWatch Logs makes up a separate log stream. A log group is a group of log streams that share the same retention, monitoring, and access control settings.

  • Amazon RDS provides a REST endpoint that allows access to DB instance log files; you can download a complete log file using the REST endpoint as shown below.
GET /v13/downloadCompleteLogFile/DBInstanceIdentifier/LogFileName HTTP/1.1
Content-type: application/json
host: rds.region.amazonaws.com
  • RDS for MySQL writes mysql-error.log to disk every 5 minutes. You can write the RDS for MySQL slow query log and the general log to a file or a database table. You can direct the general and slow query logs to tables on the DB instance by creating a DB parameter group and setting the log_output server parameter to TABLE
    • slow_query_log: To create the slow query log, set to 1. The default is 0.
    • general_log: To create the general log, set to 1. The default is 0.
    • long_query_time: To prevent fast-running queries from being logged in the slow query log

MySQL removes log files that are more than two weeks old. You can manually rotate the log tables with the following command line procedure.

CALL mysql.rds_rotate_slow_log;
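
Instead of calling the REST endpoint directly, the same log files can be listed and downloaded with boto3; a small sketch, assuming an instance named my-db and an illustrative log file name, is shown below.

import boto3

rds = boto3.client("rds")

# List the available log files for the DB instance
logs = rds.describe_db_log_files(DBInstanceIdentifier="my-db")

for log in logs["DescribeDBLogFiles"]:
    print(log["LogFileName"], log["Size"])

# Download the first portion of a log file (the name here is illustrative)
portion = rds.download_db_log_file_portion(
    DBInstanceIdentifier="my-db",
    LogFileName="error/mysql-error.log",
    Marker="0",
)
print(portion["LogFileData"])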

AWS RDS Proxy

  • RDS Proxy allows you to pool and share database connections to improve your application's ability to scale.
  • RDS Proxy makes applications more resilient to database failures by automatically connecting to the standby DB instance (a minimal boto3 creation sketch follows at the end of this section).
  • RDS Proxy establishes a database connection pool and reuses connections in this pool and avoids the memory and CPU overhead of opening a new database connection each time.
  • You can enable RDS Proxy for most applications with no code changes.

You can use RDS Proxy in the following scenarios.

  • Any DB instance or cluster that encounters “too many connections” errors is a good candidate for associating with a proxy.
  • For DB instances or clusters that use smaller AWS instance classes, such as T2 or T3, using a proxy can help avoid out-of-memory conditions
  • Applications that typically open and close large numbers of database connections and don’t have built-in connection pooling mechanisms are good candidates for using a proxy.
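
Creating a proxy requires a Secrets Manager secret holding the DB credentials and an IAM role that can read it. The boto3 sketch below is illustrative only; every ARN, subnet ID and name in it is an assumption.

import boto3

rds = boto3.client("rds")

# All identifiers below are placeholders for illustration only
rds.create_db_proxy(
    DBProxyName="my-db-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::123456789012:role/rds-proxy-role",
    VpcSubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    RequireTLS=True,
)

# Then register the DB instance with the proxy's default target group
rds.register_db_proxy_targets(
    DBProxyName="my-db-proxy",
    DBInstanceIdentifiers=["my-db"],
)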

Amazon RDS for MySQL

There are two major versions available for the MySQL database engine, i.e. version 8.0 and 5.7. MySQL provides the validate_password plugin for improved security. The plugin enforces password policies using parameters in the DB parameter group for your MySQL DB instance.

To find the available version in MySQL which are supported:

aws rds describe-db-engine-versions --engine mysql --query "*[].{Engine:Engine,EngineVersion:EngineVersion}" --output text

SSL/TLS on MySQL DB Instance

Amazon RDS installs an SSL/TLS certificate on the DB instance. These certificates are signed by a certificate authority (CA).

To connect to DB instance with certificate use below command.

mysql -h mysql-instance1.123456789012.us-east-1.rds.amazonaws.com --ssl-ca=global-bundle.pem --ssl-mode=REQUIRED -P 3306 -u myadmin -p

To check if applications are using SSL.

mysql> SELECT id, user, host, connection_type
       FROM performance_schema.threads pst
       INNER JOIN information_schema.processlist isp
       ON pst.processlist_id = isp.id;

Performance improvements on MySQL RDS for Optimized reads.

  • An instance store provides temporary block-level storage for your DB instance.
  • With RDS Optimized reads some temporary objects are stored on Instance store. These objects include temp files, internal on disk temp tables, memory map files, binary logs, cached files.
  • The storage is located on Non-Volatile Memory express SSD’s that are physically attached.
  • Applications that can use RDS Optimized Reads are:
    • Applications that run on-demand or dynamic reporting queries.
    • Applications that run analytical queries.
    • Database queries that perform grouping or ordering on non-indexed columns
  • Try to add retry logic for read only queries.
  • Avoid bulk changes in single transaction.
  • You can’t change the location of temporary objects to persistent storage (Amazon EBS) on the DB instance classes that support RDS Optimized Reads.
  • Transactions can fail when the instance store is full.
  • RDS Optimized Reads isn’t supported for multi-AZ DB cluster deployments.

Importing Data into MySQL with different data source.

  1. Existing MySQL database on premises or on Amazon EC2: Create a backup of your on-premises database, store it on Amazon S3, and then restore the backup file to a new Amazon RDS DB instance running MySQL.
  2. Any existing database: Use AWS Database Migration Service to migrate the database with minimal downtime
  3. Existing MySQL DB instance: Create a read replica for ongoing replication. Promote the read replica for one-time creation of a new DB instance.
  4. Data not stored in an existing database: Create flat files and import them using the mysqlimport utility.

Database Authentication with Amazon RDS

For PostgreSQL, use one of the following roles for a user of a specific database.

  • IAM database authentication: assign the rds_iam role to the user.
  • Kerberos authentication: assign the rds_ad role to the user.
  • Password authentication: don't assign either of the above roles.

Password Authentication

  • With password authentication, the database performs all the administration of user accounts. The database controls and authenticates the user accounts.

IAM Database authentication

  • IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don't need to use a password when you connect to a DB instance; you use an authentication token instead.

Kerberos Authentication

Kerberos authentication provides the benefits of single sign-on (SSO) and centralized authentication of database users.

Connecting to your DB instance using IAM authentication from the command line: AWS CLI and mysql client

  • In the Database authentication section, choose Password and IAM database authentication to enable IAM database authentication.
  • To allow an IAM user or role to connect to your DB instance, you must create an IAM policy.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
             "rds-db:connect"
         ],
         "Resource": [
             "arn:aws:rds-db:us-east-2:1234567890:dbuser:db-ABCDEFGHIJKL01234/db_user"
         ]
      }
   ]
}

Create database user account using IAM authentication

-- MySQL: create a database user that authenticates with the AWSAuthenticationPlugin
CREATE USER jane_doe IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';
-- PostgreSQL: create a user and grant it the rds_iam role
CREATE USER db_userx;
GRANT rds_iam TO db_userx;

Generate an IAM authentication token

aws rds generate-db-auth-token --hostname rdsmysql.123456789012.us-west-2.rds.amazonaws.com --port 3306 --region us-west-2  --username jane_doe

Connecting to DB instance

mysql --host=hostName --port=portNumber --ssl-ca=full_path_to_ssl_certificate --enable-cleartext-plugin --user=userName --password=authToken

Connecting to AWS Instance using Python boto3 (boto3 rds)

import pymysql
import sys
import boto3
import os

ENDPOINT="mysqldb.123456789012.us-east-1.rds.amazonaws.com"
PORT=3306  # pymysql and generate_db_auth_token expect a numeric port
USER="jane_doe"
REGION="us-east-1"
DBNAME="mydb"

os.environ['LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN'] = '1'

#gets the credentials from .aws/credentials
session = boto3.Session(profile_name='default')
client = session.client('rds')
token = client.generate_db_auth_token(DBHostname=ENDPOINT, Port=PORT, DBUsername=USER, Region=REGION)
try:
    conn =  pymysql.connect(host=ENDPOINT, user=USER, passwd=token, port=PORT, database=DBNAME, ssl_ca='SSLCERTIFICATE')

    cur = conn.cursor()
    cur.execute("""SELECT now()""")
    query_results = cur.fetchall()
    print(query_results)

except Exception as e:
    print("Database connection failed due to {}".format(e))

   

Final AWS RDS Troubleshooting

Can’t connect to Amazon RDS DB instance

  • Check Security group
  • Check Port
  • Check internet Gateway
  • Check db name

Error – Could not connect to server: Connection timed out

  • Check hostname and port
  • Check security group
  • Telnet to the DB
  • Check the username and password

Error message “failed to retrieve account attributes, certain console functions may be impaired.”

  • Account is missing permissions, or your account hasn’t been properly set up.
  • You may lack permissions in your access policies to perform certain actions, such as creating a DB instance.

Amazon RDS DB instance outage or reboot

  • You change the backup retention period for a DB instance from 0 to a nonzero value or from a nonzero value to 0. You then set Apply Immediately to true.
  • You change the DB instance class, and Apply Immediately is set to true.
  • You change the storage type from Magnetic (Standard) to General Purpose (SSD) or Provisioned IOPS (SSD), or from Provisioned IOPS (SSD) or General Purpose (SSD) to Magnetic (Standard).

Amazon RDS DB instance running out of storage

  • Add more storage to the EBS volumes attached to the DB instance.

Amazon RDS insufficient DB instance capacity

The specific DB instance class isn’t available in the requested Availability Zone. You can try one of the following to solve the problem:

  • Retry the request with a different DB instance class.
  • Retry the request with a different Availability Zone.
  • Retry the request without specifying an explicit Availability Zone.

Maximum MySQL and MariaDB connections

  • The connection limit for a DB instance is set by default to the maximum for the DB instance class. You can limit the number of concurrent connections to any value up to the maximum number of connections allowed.
  • A MariaDB or MySQL DB instance can be placed in the incompatible-parameters status for a memory limit when the DB instance is either restarted at least three times in one hour or at least five times in one day, or when the potential memory usage of the DB instance exceeds 1.2 times the memory allocated to its DB instance class. To solve the issue (see the boto3 sketch after this list):
    • Adjust the memory parameters in the DB parameter group associated with the DB instance.
    • Restart the DB instance.
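
A minimal sketch of adjusting a parameter and restarting the instance with boto3 follows; the parameter group name, parameter value and instance name are assumptions for illustration.

import boto3

rds = boto3.client("rds")

# Adjust a memory-related parameter in the custom DB parameter group attached
# to the instance. The group name and value below are assumptions.
rds.modify_db_parameter_group(
    DBParameterGroupName="my-mysql-params",
    Parameters=[
        {
            "ParameterName": "max_connections",
            "ParameterValue": "200",
            "ApplyMethod": "pending-reboot",
        },
    ],
)

# Reboot the DB instance so pending-reboot parameters take effect
rds.reboot_db_instance(DBInstanceIdentifier="my-db")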

Conclusion

This tutorial gave you a glimpse of each AWS RDS component, starting from what a DB instance is through scaling and Multi-AZ cluster configurations.

How to create IAM policy to access AWS DynamoDB table

Do you know you can allow the user or group of IAM users to access AWS DynamoDB table with a single policy.

In this quick tutorial you will learn How to create IAM policy to access AWS DynamoDB table.

Lets get started.

Prerequisites

  • AWS account
  • You should have permissions to create the IAM policy.

Creating IAM Policy to Access DynamoDB table

This section will show you the IAM policy which allows users or a group to access the DynamoDB table. Lets go through the code.

  • Version is the policy version which is fixed.
  • Effect is Allow in each statement as we want to allow the users or group to be able to work with DynamoDB.
  • There are two statements in the IAM policy:
  • The first statement allows listing and describing all the DynamoDB tables.
  • The second statement allows a specific table, MyTable, to be accessed by the user or role.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListandDescribe",
            "Effect": "Allow",
            "Action": [
                "dynamodb:List*",
                "dynamodb:DescribeReservedCapacity*",
                "dynamodb:DescribeLimits",
                "dynamodb:DescribeTimeToLive"
            ],
            "Resource": "*"
        },
        {
            "Sid": "SpecificTable",
            "Effect": "Allow",
            "Action": [
                "dynamodb:BatchGet*",
                "dynamodb:DescribeStream",
                "dynamodb:DescribeTable",
                "dynamodb:Get*",
                "dynamodb:Query",
                "dynamodb:Scan",
                "dynamodb:BatchWrite*",
                "dynamodb:CreateTable",
                "dynamodb:Delete*",
                "dynamodb:Update*",
                "dynamodb:PutItem"
            ],
            "Resource": "arn:aws:dynamodb:*:*:table/MyTable"
        }
    ]
}
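
To put the policy to use, you can create it and attach it to a user with boto3. The sketch below is an assumption-laden example: the policy document is read from a local file whose name is made up, and so is the user name.

import boto3

iam = boto3.client("iam")

# The policy document is the JSON shown above, loaded from a local file (assumed name)
with open("dynamodb-policy.json") as f:
    policy_document = f.read()

# Create the managed policy and attach it to an IAM user (user name is illustrative)
policy = iam.create_policy(
    PolicyName="DynamoDBMyTableAccess",
    PolicyDocument=policy_document,
)
iam.attach_user_policy(
    UserName="jane_doe",
    PolicyArn=policy["Policy"]["Arn"],
)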

Conclusion

This tutorial demonstrated how to create an IAM policy to access an AWS DynamoDB table.

How to create an IAM Policy to Deny AWS Resources outside Specific AWS Regions

Do you know you can restrict a user or group of IAM users from using services outside specific AWS Regions with a single policy?

In this quick tutorial you will learn how to create an IAM Policy to deny AWS resources outside specific AWS Regions.

Lets get started.

Prerequisites

  • AWS account

Creating IAM Policy to Deny access to Specific AWS regions

The below policy is useful when you want your users or groups to be explicitly denied access to AWS services outside specific AWS Regions.

  • Version is Policy version which is fixed.
  • Effect is Deny in each statement as we want to deny users or a group the ability to work outside the specified regions.
  • NotAction: the opposite of Action; the Deny effect applies to every action except the ones listed here.
  • This policy denies access to any actions outside the Regions specified (eu-central-1, eu-west-1, eu-west-2, eu-west-3), except for actions in the services listed under NotAction, such as CloudFront, IAM, Route 53 and Support.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllOutsideRequestedRegions",
            "Effect": "Deny",
            "NotAction": [
                "cloudfront:*",
                "iam:*",
                "route53:*",
                "support:*"
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": [
                        "eu-central-1",
                        "eu-west-1",
                        "eu-west-2",
                        "eu-west-3"
                    ]
                }
            }
        }
    ]
}

Conclusion

This tutorial demonstrated how to create an IAM policy to deny access to AWS resources outside specific AWS Regions.

How to Access AWS S3 bucket using S3 policy

Are you struggling to access your AWS S3 bucket? If yes, then this tutorial is for you.

In this quick tutorial you will learn how you can grant read-write access to an Amazon S3 bucket by assigning S3 policy to the role.

Lets get started.

Prerequisites

  • AWS account
  • One AWS Bucket named sagarbucket2023

Creating IAM S3 Policy

The below policy is useful when you have an application that needs to use the AWS S3 bucket, for example for reading data for a website or for storing data, i.e. writing it to the AWS S3 bucket.

The below policy contains following attributes

  • Version is Policy version which is fixed.
  • Effect is Allow in each statement as we want to allow users or group be able to work with AWS S3.
  • Actions: we use different actions such as s3:ListAllMyBuckets and s3:GetBucketLocation to list the buckets and find their Region, s3:ListBucket to list objects, and the object-level actions s3:PutObject, s3:GetObject and s3:DeleteObject.
  • Resource is my AWS S3 bucket named sagarbucket2023
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::sagarbucket2023"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::sagarbucket2023/*"]
    }
  ]
}
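
Once the policy above is attached to your IAM role or user, a boto3 session can exercise exactly the permissions it grants. The sketch below is illustrative; the object key and body are made-up examples.

import boto3

s3 = boto3.client("s3")
BUCKET = "sagarbucket2023"

# Allowed by s3:ListAllMyBuckets / s3:GetBucketLocation
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# Allowed by s3:PutObject on sagarbucket2023/*
s3.put_object(Bucket=BUCKET, Key="hello.txt", Body=b"hello from boto3")

# Allowed by s3:ListBucket on sagarbucket2023
for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
    print(obj["Key"], obj["Size"])

# Allowed by s3:GetObject and s3:DeleteObject on sagarbucket2023/*
print(s3.get_object(Bucket=BUCKET, Key="hello.txt")["Body"].read())
s3.delete_object(Bucket=BUCKET, Key="hello.txt")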

Conclusion

This tutorial demonstrated that if you need to read or write data in an AWS S3 bucket, the policy attached to your IAM user or IAM role should be defined as we showed.

Html full form( Hypertext Markup Language): Learn complete HTML

HTML introduction

HTML stands for HyperText Markup Language, a markup language that is used to create web pages; a markup language is a language that uses tags to define elements within a document.

With HTML you can create static pages; however, if you combine CSS, JavaScript and HTML together you will be able to create dynamic and more functional web pages or websites.

HTML Basic Example and HTML Syntax

Now that you have a basic idea of HTML, let’s kick off this tutorial by learning how to declare HTML Syntax. In the below HTML code:

  • <!DOCTYPE html> specifies that it is an HTML5 document.
  • <html> is the root of the HTML page.
  • <head> contains page information.
  • <title> is the title of the page.
  • <body> is the document's body.
  • <h1> is a heading.
  • <p> is a paragraph.
<!DOCTYPE html>
<html>
<head>
<title> Page title </title>
</head>
<body>
<h1> My first heading</h1>
<p> My fist para </p>
</body>
</html>
<!DOCTYPE html>  # Document Type 
<html lang="en-US">    # Language Attribute to declare the language of the Web page
<head>
# Also <meta> element is used to specify the character set, page description, author and viewport.
<meta name="viewport" content="width=device-width, initial-scale=1.0">  # Setting the viewport, which is a user's visible area of a web page; initial-scale=1.0 sets the initial zoom level.
<style>                                                     # Head element is the container of title, style, meta, link, script etc.
body{background-color: red;}                                # Internal CSS and define style information for a single HTML page
h1{ color:red; }                                            # Internal CSS
p {
  border: 2px red;                                          # Border( Border becomes more dark when pixels increased)
  padding: 30px;                                            # Padding ( Space between text and Border)
  margin: 10px;                                             # Margin (Space outside the border)
 }
a:link, a:visited, a:hover, a:active {                      # HTML links in their different states
  text-align: center;
  color: blue;
}
.city {                                                     # Declaring the CSS for class city
 background-color: tomato;
 color: white;
}
<link rel="stylesheet" href="styles.css">                   # External CSS; the link element loads an external stylesheet
<link rel="icon" type="image/x-icon" href="/images/favicon.ico">  # Adding the favicon to the HTML page
</style>
</head>
<body>
<div class="city">                                          # Creating a class named city 
  <h2> Hello this is my City </h2>
</div>                                                      # Class City ends here
<!--      -->                                               # Comments in HTML 
<p style="background: red; background-image: url('a.jpg'); background-repeat: repeat;">...........</p>                   # Paragraph with inline CSS background
<p><a href="#C4">Jump to Chapter 4</a>                      # Creating a Link to create a BookMark using the ID 
<h2 id="C4"> Chapter 4 </h2>                                # Creating a heading with id that will be tagged with a link to create a bookmark. ID's are unique and used only with one HTML element rather than class being used by multiple HTML elements
<a href="google.com"> This is a link</a>                    # Link
<a href="google.com" target="_blank"> This is a link</a>    # Opens the document in a new window or tab
<a href="google.com" target="_parent"> This is a link</a>   # Opens the document in parent frame
<a href="google.com" target="_top"> This is a link</a>      # Opens the document in full body of the window
<a href="google.com" target="_self"> This is a link</a>     # Opens the document in same window
<iframe src="a.html" name="iframe_a" height="10" width="10" title="Title Iframe"></iframe>   # Creating a Iframe ( An inline fram) 
<p><a href="google.com" target="iframe_a">Hello, the link will open when clicked on the link</a></p>   # Using Iframe in the link  
<ol>                                                        # Ordered List
  <li>Coffee</li>                                           # Lists
  <li>Tea</li>
</ol>
<img src="a.jpeg" alt="Image" width="2" height="2">         # Image
<img src="computer_table.jpeg" usemap="#workmap">           # Using an image map
<map name="workmap">
   <area shape="" coords="34,44,270,350" href="computer.htm">
   <area shape="" coords="31,41,21,35"   href="phone.htm">
</map>
</body>

<script>                                                   # Creating a Javascript inside the Html Page
  function myfunc() {
  document.getElementById("C4").innerHTML = "Have a nice DAY "
  var x = document.getElementsByClassName("city");         # Using the Class city within the Javascript within a HTML Page
  for (var i = 0; i< x.length; i ++ ) {
    x[i].style.display = "none"
  }
}

</script>
</html>


<header> - Defines a header for a document or a section
<nav> - Defines a set of navigation links
<section> - Defines a section in a document
<article> - Defines an independent, self-contained content
<aside> - Defines content aside from the content (like a sidebar)
<footer> - Defines a footer for a document or a section
<details> - Defines additional details that the user can open and close on demand
<summary> - Defines a heading for the <details> element

Everything you should know about Prometheus Grafana dashboard

Are you worried about the performance of your systems and the dozens of applications running on them? Stop worrying, as you are at the right place to learn about two of the most widely used monitoring tools, Prometheus and Grafana.

Prometheus is an open-source monitoring system that collects real-time metrics and is used as the data source in Grafana for visualization.

In this tutorial, you will learn everything you should know about the Prometheus Grafana dashboard.

Let’s get started.

Prerequisites

Prometheus installed on your Ubuntu machine

What is Prometheus?

Prometheus is a powerful, open-source monitoring system that collects real-time time-series metrics from services and stores them in memory and local disk in its own custom and efficient format [time-series database]. It is also used for alerting.

For example, node_boot_time{instance="localhost:9000"} is one time series.

The time-series database stores each series with a set of key-value pairs called labels. Prometheus is written in the Go language and executes powerful queries using a flexible query language (PromQL, which is read-only). Prometheus provides great visualization using its own built-in expression browser and works well with Grafana dashboards and alert notifications.

There are dozens of client libraries such as Java, Python, Scala, Ruby, and multiple integrations available for Prometheus.

In software, binaries are compiled code that allows a program to be installed without having to compile the source code.

A library is a collection of non-volatile resources used by computer programs, often for software development, such as configuration data, documentation, help data, message templates, pre-written code and subroutines, classes, values or type specifications.

  • The syntax of Prometheus metrics is shown below. The notation contains the metric name followed by key-value pairs, also known as labels.
# Notation of time series
<metric name> {<label name>=<label value>,.....} 
# Example
node_boot_time {instance="localhost:9000",job="node_exporter"}

How does Prometheus Work

Prometheus collects metrics from monitored targets by scraping metrics HTTP Endpoint using the Prometheus configuration file. A single Prometheus server is able to ingest up to one million samples per second.
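
Once the server is scraping targets, you can run PromQL queries against its HTTP API. The short sketch below uses the requests library and assumes Prometheus is listening on localhost:9090.

import requests

PROMETHEUS_URL = "http://localhost:9090"  # assumed local Prometheus server

# Run an instant PromQL query against the HTTP API (/api/v1/query)
response = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query",
    params={"query": "up"},
)
response.raise_for_status()

# Print each time series returned with its current value
for result in response.json()["data"]["result"]:
    print(result["metric"], result["value"])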

Using Exporters and Prometheus in Prometheus Configuration file

  • The Prometheus configuration file is stored in YAML format and by default looks like the below. The path of the Prometheus configuration file is /etc/prometheus/prometheus.yml.
# my global config
global:
  scrape_interval: 15s
  evaluation_interval: 15s

# To scrape metrics from prometheus itself add the below block of code.

scrape_configs:
  - job_name: 'prometheus'              # Prometheus will scrape 
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node_exporter_metrics'  # we are including Node Exporter
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9100']
  • After you configure or update the prometheus configuration file, you can reload it using the below command without having to restart the prometheus.
kill -SIGHUP <pid>
  • To check the configuration the server is actually running, open Status > Configuration in the Prometheus web UI.

Using EC2 as Service Discovery mechanism in Prometheus Configuration file

The Prometheus server continuously pulls metrics from jobs or apps, but at times it is not able to pull the metrics because the servers are not reachable due to NAT or a firewall; in that case it uses the Pushgateway.

  • Pushgateway is used as an intermediary service. You can also query the Prometheus server using the PromQL and can visualize the data in its own Web UI and the Grafana Dashboard.
  • Service discovery automatically detects devices and services offered on the computer network. Let's say you need to discover AWS EC2 instances automatically; then add the following configuration to Prometheus.
# my global config
global:
  scrape_interval: 15s
  evaluation_interval: 15s

# To scrape metrics from prometheus itself add the below block of code.

scrape_configs:
  - job_name: 'AWS EC2'              # Prometheus will scrape AWS EC2
    ec2_sd_configs:
      - region: us-east-1
        access_key: ACCESS_KEY
        secret_key: SECRET_KEY
        port: 9100

Using Kubernetes as Service Discovery mechanism in Prometheus Configuration file

  • Now if you need to add the Kubernetes configuration then add code in below format.
- job_name: 'kubernetes service endpoints'        
  kubernetes_sd_configs:
      -
        api_servers:
           - https://kube-master.prometheus.com
        in_cluster: true

Using Configuration file Service Discovery mechanism in Prometheus Configuration file

[
  {
     "targets": ["myslave:9104"],
     "labels": {
        "env": "prod",
        "job": "mysql_slave"
     }
  }
]
  • To add the file named targets.json for service discovery, add the below configuration.
scrape_configs:
  - job_name: 'dummy'              # Prometheus will scrape file
    file_sd_configs:
      - files:
         - targets.json
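
Since the targets file is plain JSON, it is easy to generate from a script. The minimal sketch below uses only the Python standard library; in practice the host list might come from an inventory system or a cloud API.

import json

# Hosts listed here are placeholders taken from the example above
targets = [
    {
        "targets": ["myslave:9104"],
        "labels": {"env": "prod", "job": "mysql_slave"},
    }
]

# Prometheus re-reads this file automatically when file_sd_configs points at it
with open("targets.json", "w") as f:
    json.dump(targets, f, indent=2)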

Prometheus and Python Client Library

import random, time

from flask import Flask, render_template_string, abort
from prometheus_client import generate_latest, REGISTRY , Counter, Gauge, Histogram

app = Flask(__name__)

REQUESTS = Counter('http_requests_total', 'Total HTTP Requests(Count)', ['method', 'endpoint', 'status_code'])

IN_PROGRESS = Gauge('http_requests_inprogress', 'Number of in progress HTTP requests')

TIMINGS = Histogram('http_request_duration_seconds', 'HTTP request latency (seconds)')

@app.route('/')
@TIMINGS.time()
@IN_PROGRESS.track_inprogress()
def hello_world():
    REQUESTS.labels(method='GET', endpoint="/", status_code=200).inc()
    return 'Hello World'

# Expose the collected metrics on /metrics so that Prometheus can scrape them
@app.route('/metrics')
def metrics():
    return generate_latest(REGISTRY)

if __name__ == "__main__":
    app.run(host='127.0.0.9', port=4455, debug=True)

Prometheus Alerting

Prometheus Alerting is divided into two categories Alerting Rules and AlertManager

Alerting rules allow you to define alert conditions and send the alerts to an external service.

  • Rules live in the Prometheus server configuration, and you reference the rule files in /etc/prometheus/prometheus.yml
rule_files:
- "/etc/prometheus/alert.rules"
  • Create a file named alert.rules as shown below.
groups:
- name: Important Instances
  # Alert for any instance that is unreachable for more than 5 minutes
  rules:
  - alert: InstanceDown
    expr: up == 0
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: Machine not available

Alertmanager: it handles all the alerts fired by the Prometheus servers, performing grouping, rerouting, and deduplication of alerts. It routes alerts to PagerDuty, Opsgenie, email, Slack, etc.

  • The configuration of alertmanager is stored at /etc/alertmanager/alertmanager.yml
global:
   smtp_smarthost: 'localhost:25'
   smtp_from: 'support@automateinfra.com'
   smtp_auth_username: 'shanky'
   smtp_auth_password: 'password123'

templates:
-  '/etc/alertmanager/template/*.tmpl'

route:
    repeat_interval: 1h
    receiver: operations-team

receivers:
- name: 'operations-team'
  email_configs:
  - to: 'shanky@automateinfra.com'
  slack_configs:
  - api_url: https://hooks.slack.com/services/xxxxx/xxxxxxxx/xxxxx
    channel: ''

Alert States:

  • Inactive
  • Pending
  • Firing

How to Install kubernetes on ubuntu 20.04 step by step

If you are looking to dive into the Kubernetes world, learning how to install Kubernetes is equally important.

Kubernetes is more than just management of containers as it keeps the load balanced between the cluster nodes, provides a self-healing mechanism, zero downtime deployment capabilities, automatic rollback, and many more features.

Let’s dive into this tutorial and learn how to install Kubernetes on ubuntu 20.04.



Prerequisites

  • Two Ubuntu machines, one for the Kubernetes master (control plane) node and the other for the Kubernetes worker node.
  • On both the Linux machines, make sure inbound and outbound rules are all open to the world, as this is only a demonstration.

In a production environment, the control plane node needs the following ports open (all inbound): 6443, 10250, 10251, 10252, 2379 and 2380; the worker nodes need ports 30000-32767 open.

  • Docker is installed on both the Ubuntu machines. To check if docker is running, use the below command.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

service docker status
Checking the docker status

Setup Prerequisites for Kubernetes installation on the Ubuntu 20.04 machines

Before installing Kubernetes on Ubuntu, you should first run through a few prerequisite tasks to ensure the installation goes smoothly.

To get started, open your favorite SSH client, connect to MASTER and Worker node and follow along.

  • Install the apt-transport-https and curl packages using the apt-get install command. The apt-transport-https package allows the use of repositories accessed via the HTTP Secure protocol, and curl allows you to transfer data to or from a server.
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
Installing the transport-https and curl package on each ubuntu system
  • Add the GPG key for the official Kubernetes repository to your system using curl command.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  • Add the Kubernetes repository to APT sources and update the system.
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update

You can also use sudo apt-add-repository “deb http://apt.kubernetes.io/ kubernetes-xenial main” command to add the kubernetes repository

  • Finally, rerun the sudo apt update command to read the new package repository list and ensure all of the latest packages are available for installation.

Installing Kubernetes on the Master and Worker Nodes

Now that you have the prerequisite packages installed on both MASTER and WORKER, it's time to set up Kubernetes. Kubernetes consists of three packages/tools, kubeadm, kubelet, and kubectl. Each of these packages contains all of the binaries and configurations necessary to set up a Kubernetes cluster.

Assuming you are still connected to the MASTER and Worker node via SSH:

  • Now Install Kubectl ( which manages cluster), kubeadm (which starts cluster), and kubelet ( which manages Pods and containers) on both the machines.
sudo apt-get install -y kubelet kubeadm kubectl
Installing the kubeadm kubelet kubectl package/tool on each ubuntu machine

If you don't specify the runtime, then kubeadm automatically detects the installed container runtime. For the Docker runtime the path to the Unix socket is /var/run/docker.sock, and for containerd it is /run/containerd/containerd.sock.

Initialize Kubernetes cluster

Now that you have Kubernetes installed on your controller node and worker node. But unless you initialize it, it is doing nothing. Kubernetes is initialized on the controller node; let’s do it.

  • Initialize your cluster using the kubeadm init command on the controller node, i.e., the control plane node.

The below command tells Kubernetes the IP address where its kube-apiserver is located with the --apiserver-advertise-address parameter. In this case, that IP address is the controller node itself.

The command below also defines the range of IP addresses to use for the pod network using the --pod-network-cidr parameter. The pod network allows pods to communicate with each other. Setting the pod network like this will automatically instruct the controller node to assign IP addresses for every node.

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.111.4.79
THE CONTROLLER NODE STARTS THE CLUSTER AND ASKS YOU TO JOIN YOUR WORKER NODE
  • Once your controller node, that is the control plane, is initialized, run the below commands on the controller node to run the Kubernetes cluster with a regular user.
# Run the below commands on Master Node to run Kubernetes cluster with a regular user
# Creating a directory that will hold configurations such as the admin key files, which are required to connect to the cluster, and the cluster’s API address.
   mkdir -p $HOME/.kube
   # Copy all the admin configurations into the newly created directory 
   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
   # Change the user from root to regular user that is non-root account
   sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Finally, run the join command printed by kubeadm init on the worker node
kubeadm join 10.111.4.79:6443 --token zxicp.......................................
  • After running the join command, the worker node joins the control plane successfully.
WORKER NODE JOINS THE CLUSTER
  • Now, verify the nodes on your controller node by running the kubectl command as below.
kubectl get nodes
Checking the Kubernetes nodes
  • You will notice that the status of both the nodes is NotReady because there is no networking configured between both the nodes. To check the network connectivity, run the kubectl command as shown below.
kubectl get pods --all-namespaces
  • Below, you can see that the coredns pods are in Pending status; these pods provide the network connectivity between both the nodes, and to get the networking configured they must be in Running status.
Checking the Kubernetes Pods

To fix the networking issue, you will need to Install a Pod network on the cluster so that your Pods can talk to each other. Let’s do that !!

Install a Pod network on the cluster

Earlier, you installed Kubernetes on the Controller node, and the worker node was able to join it, but to establish the network connectivity between two nodes, you need to deploy a pod network on the Controller node, and one of the most widely used pod networks is Flannel. Let’s deploy it with the kubectl apply command.

Kubernetes allows you to set up pod networks via YAML configuration files. One of the most popular pod networks is called Flannel. Flannel is responsible for allocating an IP address lease to each node.

The Flannel YAML file contains the configuration necessary for setting up the pod network.

  • Run the below kubectl apply command on the Controller node.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  • After running this command, you will see the below output.
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
  • Now re-run kubectl commands to verify if both the nodes are in ready status and the coredns pod is running.
kubectl get nodes
kubectl get pods --all-namespaces
Kubernetes network is set up.
  • To check the cluster status, run the kubectl cluster-info command.
Checking the Kubernetes cluster status
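
If you prefer to script the same checks, the official Kubernetes Python client (installed with pip install kubernetes) can read your kubeconfig and list nodes and pods. This is only an optional sketch under those assumptions, not part of the installation steps.

from kubernetes import client, config

# Load credentials from ~/.kube/config (the file copied during kubeadm setup)
config.load_kube_config()
v1 = client.CoreV1Api()

# Equivalent of "kubectl get nodes"
for node in v1.list_node().items:
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"), "Unknown"
    )
    print(node.metadata.name, "Ready:", ready)

# Equivalent of "kubectl get pods --all-namespaces"
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)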


Conclusion

You should now know how to install Kubernetes on Ubuntu. Throughout this tutorial, you walked through each step to get a Kubernetes cluster set up and deploy your first application. Good job!

Now that you have a Kubernetes cluster set up, what applications will you deploy next to it?

AWS News: Amazon Launches Online Metaverse Amazon Games (AWS Cloud Quest)

Are you already an AWS Engineer or developer looking to utilize your skills while playing Games? Does that sound interesting?

You heard it right; recently, Amazon launched an online metaverse-like role-playing game, AWS Cloud Quest, so that you can put your theoretical knowledge into practice.

Metaverse means a virtual reality space in which users can interact with a computer-generated environment and other users.

AWS Cloud Quest helps in learning and passing the AWS Cloud Practitioner program. It’s a game that helps you learn cloud computing skills and become a cloud practitioner.

Cloud Quest is a largely unexplored way to build your AWS skills, taught through hands-on practice, using a reward system that allows users to collect gems in their quest to solve challenges.

It is a 3D online role-playing game today where players move through a virtual city, helping people solve real-world technology issues and cloud use cases to build Amazon Web Services (AWS) cloud skills and prepare for the AWS Certified Cloud Practitioner exam.

  • There are tons of customizations and unique challenges, which makes Cloud Quest an ultra-friendly way to develop and learn cloud computing skills at the AWS Cloud Practitioner level.
  • You can choose the avatar of your choice.
  • You get to understand solution requirements based on real-world problems.

Cloud Quest has 12 essential assignments going from cloud computing essentials to file systems and finally towards highly available applications.

If you are interested in utilizing or understanding the AWS cloud skills, definitely try it.

The Ultimate Ansible tutorial with Ansible Playbook Examples

Are you tired of managing multiple hosts manually? It’s time to learn and automate configuration management with Ansible with this Ultimate Ansible tutorial with Ansible Playbook Examples.

Ansible is the most popular and widely used automation tool to manage configuration changes across your on-prem and cloud resources.

In this article, you’re going to learn everything you should know about Ansible and will get a jump start on running your Ansible commands and Ansible Playbooks!

Let’s get started!


Table of Contents

  1. What is Ansible?
  2. Ansible Architecture Diagram and Ansible components
  3. Ansible Inventory
  4. Ansible Adhoc command
  5. What is Ansible playbook and Ansible playbook examples!
  6. Executing Ansible when conditional using Ansible playbook
  7. Ansible Variables and Dictionary
  8. Ansible Error Handling
  9. Ansible Handlers
  10. Ansible Variables
  11. Ansible Tags
  12. Ansible Debugger
  13. What are Ansible Roles and a Ansible roles examples?
  14. Conclusion


What is Ansible?

Ansible is an IT automation tool and is most widely used for deploying applications and system configurations easily on multiple servers, either hosted on a data center or on Cloud, etc.

Ansible is an agentless automation tool that manages machines over the SSH protocol by default. Once installed, Ansible does not require any database, no daemons to start or keep running.

Ansible uses inbuilt ansible ad hoc command and Ansible Playbooks to deploy the changes. Ansible Playbooks are YAML-based configuration files that you will see later in the tutorial in depth.

Ansible Architecture Diagram and Ansible components

Now that you have a basic idea about what is Ansible? Let’s further look at Ansible Architecture Diagram and Ansible components that will give you how Ansible works and the components Ansible requires.

Ansible Control Node

Ansible Control Node, also known as the Ansible controller host, is the server where Ansible is installed. This node executes all the Ansible ad hoc commands and Ansible playbooks. You can have multiple control nodes, but a Windows machine cannot be a control node. The control node must have Python installed.

Ansible Remote Node or Ansible Managed Nodes

Ansible remote nodes or Ansible managed nodes are the servers or network devices on which you deploy applications or configurations using the Ansible ad hoc commands or playbook. These are also known as Ansible hosts.

Ansible Inventory

Ansible inventory is the file on the Ansible controller host or control node, which contains a list of all the remote hosts or managed nodes.

Ansible Core

Ansible core modules or ansible-core are the main building block and architecture for Ansible, including CLI tools such as ansible-playbook, ansible-doc, and interacting with automation.

The Ansible core modules are owned and managed by the core ansible team and will always ship with ansible itself.

Ansible Modules

Ansible modules or Ansible core modules are the code plugins or libraries plugins that can be used from the command line or a playbook task. Ansible executes each module, usually on the remote managed node, and collects return values.

Ansible Collections

Ansible Collections are a distribution format for Ansible content. Using collections, you can package and distribute playbooks, roles, modules, and plugins. A typical collection addresses a set of related use cases. You can create a collection and publish it to Ansible Galaxy or a private Automation Hub instance.

Ansible Task

Ansible Task is a unit of action executed when running the Ansible Playbook. Ansible Playbook contains one or more Ansible tasks.

Ansible Architecture Diagram and Ansible components
Ansible Architecture Diagram and Ansible components

Ansible Inventory

Ansible works against Ansible remote nodes or hosts to create or manage the infrastructure, but how does Ansible know those Ansible remote nodes? Yes, you guessed it right. Ansible Inventory is a file containing the list of all Ansible remote nodes or remote nodes grouped together, which Ansible uses while deploying or managing the resources.

The default location for the Ansible inventory is a file called /etc/ansible/hosts. You can specify a different inventory file using the -i <path> option at the command line. Ansible inventory is declared in two formats, i.e., INI and YAML.

Ansible installed on the control node communicates with remote nodes over SSH Protocol.

Related: working-with-ssh-connectivity

  • Ansible inventory in ini format is declared as shown below.
           automate2.mylabserver.com
           [httpd]
           automate3.mylabserver.com
           automate4.mylabserver.com
           [labserver]
           automate[2:6].mylabserver.com
  • Ansible inventory in YAML format is declared as shown below.
           all:
             hosts:
               automate2.mylabserver.com:
             children:
               httpd:
                 hosts:
                   automate3.mylabserver.com:
                   automate4.mylabserver.com:
               labserver:
                 hosts:
                   automate[2:6].mylabserver.com:

Ansible Adhoc command

If you plan to perform a single quick task on an Ansible remote node, such as restarting an nginx service, rebooting the machine, copying a file to a remote machine, or starting or stopping a particular service, then running ad hoc commands will suffice.

Ad hoc commands are a quick and efficient way to run a single command on Ansible remote nodes. Let’s quickly learn how to run Ansible Adhoc commands.

  • To ping all the Ansible remote nodes using Ansible Adhoc command.
ansible all -m ping            # Ping Module to Ping all the nodes
ping all the Ansible remote nodes
  • To run echo command on all the Ansible remote nodes using Ansible Adhoc command.
ansible all -a "/bin/echo Automate"  # Echo Command to Provide the output
echo command on all the Ansible remote nodes
  • To check uptime of all the Ansible remote nodes using Ansible Adhoc command.
ansible all -a /usr/bin/uptime   # Provides the Uptime of the Server
checking uptime of all the Ansible remote nodes
  • To create a user on a Ansible remote node using Ansible adhoc command.
ansible all -m ansible.builtin.user -a "name=name password=password" -b
creating Linux user on Ansible remote node
  • To install the Apache (apache2) service on all the Ansible remote nodes using an Ansible ad hoc command.
# b is to become root and gain the root privileges
ansible all -m apt -a  "name=apache2 state=latest" -b  
  • To start the Apache (apache2) service on all the Ansible remote nodes using an Ansible ad hoc command.
ansible all -m ansible.builtin.service -a "name=apache2 state=started"
start the Apache (apache2) service on all the Ansible remote nodes
  • To reboot all the remote nodes that are part of america-servers group in Ansible inventory using Ansible adhoc command.
    • america-servers is a group of hosts defined in the Ansible inventory (/etc/ansible/hosts).
    • To run a command use -a flag.
    • “/sbin/reboot” is to command to reboot the machine
    • -f 100 is used to execute the command on up to 100 servers simultaneously (forks).
    • -u is used to run this using a different username.
    • –become is used to run as a root.
ansible america-servers -a "/sbin/reboot"  -f 100 -u username --become

What is Ansible playbook and Ansible playbook examples!

Ansible playbooks are used to deploy complex applications, offer reusable and simple configuration management, offer multi-machine deployments, and perform multiple tasks multiple times. Ansible playbooks are written in YAML format containing multiple tasks and executed in sequential order.

Let’s learn how to declare the Ansible playbook to install Apache on the remote node.

# Playbook apache.yml
---
- name: Installing Apache service 
  hosts: my_app_servers                                  # Define all the hosts
  remote_user: ubuntu                                    # Remote_user is ubuntu
  # Defining the Ansible task
  tasks:                                                  
  - name: Install the Latest Apache
    apt:
      name: apache2
      state: latest
  • Before you run your first Ansible playbook using the ansible-playbook command, verify the playbook to catch syntax errors and other problems using the below command.
ansible-playbook apache.yml --syntax-check
executing ansible-playbook
  • To verify the playbook in a more detailed view, run the ansible-lint command.
ansible-lint apache.yml
verify the playbook in the detailed view
  • To dry run the Ansible playbook, run the below command with the --check flag.
ansible-playbook apache.yml --check
verify the Ansible playbook with --check flag
  • Finally execute the Ansible playbook using the below command.
ansible-playbook apache.yml 
execute the Ansible playbook
  • If you intend to install and start the Apache service using Ansible Playbook, copy/paste the below content and run the ansible-playbook command.
# Playbook of apache installation and service startup
---
- name: Install and start Apache service 
  hosts: my_app_servers                                  # Define all the hosts
  remote_user: ubuntu                                    # Remote_user is ubuntu
 # Defining the tasks
  tasks:                                                 
  - name: Install the Latest Apache
    apt:
      name: apache2
      state: latest
  - name: Ensure apache2 service is running
    service:  
      name: apache2
      state: started
    become: yes                   # if become is set to yes means it has activated privileges
    become_user: ubuntu    # Changes user to ubuntu and by default it is root
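
Once the playbook above has run, a quick way to confirm the result is an ad hoc command against the same hosts. This is only a sketch, assuming the my_app_servers group from the playbook and systemd-based hosts:

# Check that apache2 is active on every host in the group
ansible my_app_servers -m ansible.builtin.command -a "systemctl is-active apache2" -b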

Executing Ansible when conditional using Ansible playbook

The Ansible when clause takes a Jinja2 expression that evaluates a test or condition for each remote node; wherever the test passes (returns True), Ansible runs the task or playbook on that host.

  • In the below Ansible Playbook there are four tasks, i.e.,
    • Ensuring Apache is at the latest version
    • Ensuring the apache2 service is running
    • Creating users using with_items
    • Finally, editing a file only when the hostname matches, using Ansible when.
---
- name: update web servers
  hosts: webserver
  remote_user: ubuntu

  tasks:
   - name: ensure apache is at the latest version
     apt:
       name: apache2
       state: latest

   - name: Ensure apache2 service is running
     service:
        name: apache2
        state: started
     become: yes
 
   - name: Create users
     user:
         name: "{{ item }}"
     with_items:
      - bob
      - sam
      - Ashok
 
   - name: Edit the text file
     lineinfile:
       path: /tmp/file
       state: present
       line: "LogLevel debug"
     when:
       - ansible_hostname == "ip-10-111-4-18"
  • Finally execute the Ansible playbook using the below command.
ansible-playbook apache.yml 
execute the Ansible playbook

If several Ansible tasks share common directives, you don't need to declare them on every task; tasks automatically inherit directives applied at the block level.

In the below example, Ansible when, become, become_user, and ignore_errors are common for the entire block.

---
- name: update web servers
  hosts: webserver
  remote_user: ubuntu

  tasks:
   - name: Install, configure and start Apache
     block:
      - name: Install apache
        apt:
          name: apache2
          state: present
      - name: Ensure apache2 service is running
        service:
           name: apache2
           state: started
     # Below directives are common for the entire block, i.e. all tasks
     when: ansible_facts['distribution'] == "Debian"
     become: true
     become_user: root
     ignore_errors: yes
Execute the Ansible playbook using Ansible when

Ansible Variables and Dictionary

In this section, let’s quickly look into how Ansible loops iterate over a dictionary item and convert it into list items.

  • Create another playbook named abc.yaml and copy the code below.
# Playbook iterating over a dictionary
---
- name: Add several users from a dictionary
  hosts: all
  become: true
  tasks:
    - name: Add several users
      ansible.builtin.user:
        name: "{{ item.name }}"
        state: present
        groups: "{{ item.groups }}"
      loop:
        - { name: 'automate1', groups: 'root' }
        - { name: 'automate2', groups: 'root' }
  • Now execute the Ansible Playbook using the below command.
ansible-playbook abc.yaml

As you can see below, the Dictionary items have been converted into the List.

Ansible Playbook containing Ansible variables and dictionary

Ansible Error Handling

When Ansible receives a non-zero return code from a command or a failure from a module, by default, it stops executing on that host and continues on other hosts. Still, at times, a non-zero return code indicates success, or you want a failure on one host to stop execution on all hosts.

Ansible uses Error handling to work with the above conditions to handle these situations and help you get the behavior and output you want.

  • Create the below playbook named main.yml, copy/paste the below code, and execute the playbook. The playbook performs various tasks such as:
    • One of the Ansible tasks prints a message “i execute normally”
    • Fails a task
    • One task will not proceed
ansible-playbook main.yml
---
- name: update web servers
  hosts: localhost
  remote_user: ubuntu
  tasks:
  - name: Ansible block to perform error handling 
    block:

      - name: This task prints a message i execute normally
        ansible.builtin.debug:
          msg: 'I execute normally'

      - name: This task will fail
        ansible.builtin.command: /bin/false

      - name: This task will not proceed due to the previous task failing
        ansible.builtin.debug:
          msg: 'I never execute, '

    rescue:
      - name: Print when errors
        ansible.builtin.debug:
          msg: 'I caught an error, can do stuff here to fix it'

    always:
      - name: Always do this
        ansible.builtin.debug:
          msg: 'This executes always'
Ansible Playbook with Ansible Error Handling
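
Besides block/rescue/always, error handling can also be tuned per task with ignore_errors and failed_when. The sketch below is illustrative only; the /tmp/app.log path is an assumption:

---
- name: Error handling with ignore_errors and failed_when
  hosts: localhost
  tasks:
    - name: Keep going even if this command fails
      ansible.builtin.command: /bin/false
      ignore_errors: yes

    - name: Mark the task failed when the output contains ERROR (log path is illustrative)
      ansible.builtin.command: cat /tmp/app.log
      register: app_log
      failed_when: "'ERROR' in app_log.stdout"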

Ansible Handlers

Ansible handlers are used when you need to perform any task only when notified. You add all the tasks inside the handler block in Ansible Playbook, and Ansible handlers run whenever notify calls them.

For example, whenever there is an update or a change in configuration, a service restart may be required; Ansible handlers run only when they are notified.

In the below example, if there are changes in the /tmp/file.txt, the apache service is restarted. Ansible Handler will restart the Apache service only when the lineinfile task notifies it.

---
- name: update web servers
  hosts: webserver
  remote_user: ubuntu
 
  tasks:
   - name: ensure apache is at the latest version
     apt:
       name: apache2
       state: latest

   - name: Ensure apache2 service is running
     service:
       name: apache2
       state: started
     become: yes

   # Edit the text file using lineinfile module
   - name: Edit the text file
     lineinfile:
       path: /tmp/file.txt
       state: present
       line: "LogLevel debug"
     when:
       - ansible_hostname == "ip-10-111-4-18"
     notify:
       - Restart Apache

  handlers:
   - name: Restart Apache
     ansible.builtin.service:
       name: apache2
       state: restarted
Ansible Playbook with Ansible Handlers

Ansible Variables

Ansible uses Ansible variables to manage multiple configurations with various attributes. With Ansible, you can execute tasks and playbooks on multiple systems with a single command with variations among different systems.

Ansible variables are declared with standard YAML syntax, including lists and dictionaries. There are lots of ways in which you can set your Ansible variables inside an Ansible playbook; let's learn by:

  • Defining Ansible variable normally
  • Defining Ansible variables from file
  • Defining Ansible variables from roles
  • Defining Ansible variable at run time
---
- name: update web servers
  hosts: webserver
  remote_user: ubuntu
  vars:                                  # Defining variables directly in the play
     remote_install_path: /tmp/mypath    # Defining a simple variable
     favcolor: blue
  vars_files:                            # Defining variables from a file
     - /vars/external_vars.yml
 
  tasks:
   - name: Check Config
     ansible.builtin.template:
       src: my.cfg.j2
       dest: '{{ remote_install_path }}/my.cfg'
  • Defining Ansible variable at run time with key=value format
ansible-playbook myplaybook.yml --extra-vars "version=1.23.45 other_variable=auto"
  • Defining Ansible variable at run time with JSON format
ansible-playbook myplaybook.yml --extra-vars '{"version":"1.23.45","other_variable":"auto"}'
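
For reference, the /vars/external_vars.yml file loaded through vars_files above is just a plain YAML file of key-value pairs. A minimal sketch (the variable names are illustrative):

# /vars/external_vars.yml
favfruit: mango
app_port: 8080
app_user: ubuntu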

Ansible Tags

When you need to run only specific tasks in an Ansible playbook, consider using Ansible tags. Ansible tags can be applied at the Ansible block level, playbook (play) level, task level, or role level.

Let’s learn how to add Ansible Tags at different levels.

  • Adding Ansible Tags to individual Ansible tasks.
---
 - hosts: appservers
   tasks:
   - name: Deploy App Binary
     copy:
       src: /tmp/app.war
       dest: /app/
     tags:
       - apptag          # Applying Ansible Tag to task 1

 - hosts: dbserver
   tasks:
   - name: Deploy DB Binary
     copy:
       src: /tmp/db.war
       dest: /db
     tags:               # Applying Ansible Tag to task 2
       - dbtag
  • Adding Ansible Tags to Ansible block.
tasks:
- block:
    - name: Install ntp
      ansible.builtin.yum:
        name: ntp
        state: present

    - name: Enable and run ntpd
      ansible.builtin.service:
        name: ntpd
        state: started
        enabled: yes
  tags: ntp                     # Applying Ansible Tags to the whole block
  • Adding Ansible Tags to Ansible Playbook.
- hosts: all
  tags: ntp                     #  Applying Ansible Tags to Ansible Playbook
  tasks:
  - name: Install ntp
    ansible.builtin.yum:
      name: ntp
      state: present
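
Once tags are applied, you can limit a run to specific tags or skip them using the --tags and --skip-tags options of ansible-playbook (playbook.yml is a placeholder name):

ansible-playbook playbook.yml --tags "ntp"          # Run only tasks tagged ntp
ansible-playbook playbook.yml --skip-tags "apptag"  # Run everything except tasks tagged apptag
ansible-playbook playbook.yml --list-tags           # List all tags defined in the playbook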

Ansible Debugger

Ansible provides a debugger so that you can fix errors during execution instead of editing the playbook and re-running the whole thing.

  • Using the Ansible debugger on an Ansible task.
- name: Execute a command
  ansible.builtin.command: "false"
  debugger: on_failed
Ansible debugger in Ansible Playbook
  • Using the Ansible debugger on an Ansible Playbook.
- name: My play
  hosts: all
  debugger: on_skipped
  tasks:
    - name: Execute a command
      ansible.builtin.command: "true"
      when: False

  • Using the Ansible debugger on an Ansible Playbook and an Ansible task.
- name: Play
  hosts: all
  debugger: never
  tasks:
    - name: Execute a command
      ansible.builtin.command: "false"
      debugger: on_failed
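
You can also enable the debugger for a whole run without editing the playbook by setting the task debugger environment variable (playbook.yml is a placeholder name). Inside the debugger prompt, commands such as p task.args, redo, continue, and quit are available.

# Enable the task debugger for every task in this run
ANSIBLE_ENABLE_TASK_DEBUGGER=True ansible-playbook playbook.yml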

What are Ansible Roles and Ansible roles examples

Ansible Roles are a way to maintain your Ansible playbooks in a structured way, i.e., they let you load variables, files, handlers, and tasks automatically based on a known directory structure. Similar to Ansible modules, you can create different Ansible roles and reuse them as many times as you need.

  • Let’s look at what an Ansible Role directory structure looks like.
    • Tasks: This directory contains one or more files with tasks. These files can also refer to files and templates without needing to provide the exact path.
    • Handlers: Add all your handlers in this directory.
    • Files: This directory contains all your static files and scripts that might be copied to or executed on the remote machine.
    • Templates: This directory is reserved for templates that generate files on remote hosts.
    • Vars: You define variables inside this directory, which can then be referenced elsewhere in the role.
    • defaults: This directory lets you define default variables for the role.
    • meta: This directory is used for dependency management, such as dependent roles.
Ansible Role directory structure
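
Putting the directories above together, a typical layout for the apache role used later in this section would look roughly like this (a sketch, not an exhaustive listing):

roles/
└── apache/
    ├── defaults/
    │   └── main.yml
    ├── files/
    │   └── index.html
    ├── handlers/
    │   └── main.yml
    ├── meta/
    │   └── main.yml
    ├── tasks/
    │   └── main.yml
    ├── templates/
    │   └── vhost.tpl
    └── vars/
        └── main.yml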

  • Create a sample Ansible playbook as shown below, which you will then break into different files to understand the Ansible role file structure.
--- 
- hosts: all 
  become: true 
  vars: 
    doc_root: /var/www/example 
  tasks: 
    - name: Update apt 
      apt: update_cache=yes 
 
    - name: Install Apache 
      apt: name=apache2 state=latest 
 
    - name: Create custom document root 
      file: path={{ doc_root }} state=directory owner=root group=root  
 
    - name: Set up HTML file 
      copy: src=index.html dest={{ doc_root }}/index.html owner=root group=root mode=0644 
 
    - name: Set up Apache virtual host file 
      template: src=vhost.tpl dest=/etc/apache2/sites-available/000-default.conf 
      notify: restart apache 
 
  handlers: 
    - name: restart apache 
      service: name=apache2 state=restarted
  • Let’s break the playbook that you created previously into Ansible roles by:
    • Creating a directory named roles inside the home directory.
    • Creating a directory named apache inside the roles directory.
    • Creating directories named defaults, tasks, files, handlers, vars, meta, and templates inside apache.
  • Create main.yml inside ~/roles/apache/tasks directory.
---
- name: Update apt
  apt: update_cache=yes

- name: Install Apache
  apt: name=apache2 state=latest

- name: Create custom document root
  file: path={{ doc_root }} state=directory owner=www-data group=www-data

- name: Set up HTML file
  copy: src=index.html dest={{ doc_root }}/index.html owner=www-data group=www-data mode=0644

- name: Set up Apache virtual host file
  template: src=vhost.tpl dest=/etc/apache2/sites-available/000-default.conf
  notify: restart apache
  • Create main.yml inside ~/roles/apache/handlers directory.
---
- name: restart apache
  service: name=apache2 state=restarted
  • Create index.html inside ~/roles/apache/files directory.
<html>
<head><title>Configuration Management Hands On</title></head>
<h1>This server was provisioned using Ansible</h1>
</html>
  • Create vhost.tpl inside ~/roles/apache/templates
<VirtualHost *:80>
ServerAdmin webmaster@localhost
DocumentRoot {{ doc_root }}

<Directory {{ doc_root }}>
AllowOverride All
Require all granted
</Directory>
</VirtualHost>
  • Create main.yml inside ~/roles/apache/meta
---
dependencies:
  - apt
  • Create my_app.yml inside home directory
---
- hosts: all
  become: true
  roles:
    - apache
  vars:
    - doc_root: /var/www/example
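
With the role files in place, the playbook that consumes the role is executed exactly like any other playbook, assuming your inventory is already configured:

ansible-playbook my_app.yml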


Conclusion

In this Ultimate Guide, you learned what Ansible is, the Ansible architecture, Ansible roles, and how to declare Ansible playbooks.

Now that you have gained a good amount of knowledge on Ansible, what do you plan to deploy using it?

The Ultimate Kubernetes Interview questions for Kubernetes Certification (CKA)

If you are preparing for a DevOps interview, Kubernetes interview questions, or a Kubernetes certification, consider going through this Ultimate Kubernetes Interview questions for Kubernetes Certification (CKA) tutorial, which will help you in any Kubernetes interview.

Without further delay, let’s get into this Kubernetes Interview questions for Kubernetes Certification (CKA).



Related: Kubernetes Tutorial for Kubernetes Certification [PART-1]

Related: Kubernetes Tutorial for Kubernetes Certification [PART-2]

PAPER-1

Q1. How to create a Kubernetes namespace using the kubectl command?

Answer: Kubernetes namespace can be created using the kubectl create command.

kubectl create namespace namespace-name

Q2. How to create a kubernetes namespace named my-namespace using a manifest file?

Answer: Create the file named namespace.yaml as shown below.

apiVersion: v1
kind: Namespace
metadata: 
    name: my-namespace
  • Now execute the below kubectl command as shown below.
kubectl create -f namespace.yaml
Creating the Kubernetes namespace (my-namespace)

Q3. How to switch from one Kubernetes namespace to another Kubernetes namespace ?

Answer: To switch between two Kubernetes namespaces run the kubectl config set-context command.

kubectl config set-context $(kubectl config current-context) --namespace my-namespace2
switch from one Kubernetes namespace to another Kubernetes namespace

Q4. How To List the Kubernetes namespaces in a Kubernetes cluster ?

Answer: Run the kubectl get command as shown below.

kubectl get namespaces

Q5. How to create the Kubernetes namespaces in a Kubernetes cluster ?

Answer: Execute the below kubectl command.

kubectl create namespace namespace-name

Q6. How to delete a Kubernetes namespace using the kubectl command?

Answer: kubectl delete command allows you to delete the Kubernetes API objects.

kubectl delete namespaces namespace-name

Q7. How to create a new Kubernetes pod with nginx image?

Answer: Use Kubectl run command to launch a new Kubernetes Pod.

kubectl run nginx-pod --image=nginx
Running kubectl run command to create a new Pod.

Q8. How to Create a new Kubernetes pod in different Kubernetes namespace?

Answer: Use the kubectl run command to launch a new Kubernetes Pod followed by the --namespace flag.

kubectl run nginx-pod --image=nginx --namespace=kube-system
Creating a new Kubernetes pod in a different Kubernetes namespace

Q9. How to check the running Kubernetes pods in the Kubernetes cluster?

Answer:

kubectl get pods
Checking the running Kubernetes pods

Q10. How to check the running Kubernetes pods in the Kubernetes cluster in different kubernetes namespace?

Answer:

 kubectl get pods  --namespace=kube-system | grep nginx
Checking the running Kubernetes pods in a different Kubernetes namespace

Q11. How to check the Docker image name for a running Kubernetes pod and get all the details?

Answer: Execute the kubernetes describe command.

kubectl describe pod pod-name
Describing the Kubernetes Pod

Q12. How to Check the name of the Kubernetes node on which Kubernetes pods are deployed?

Answer:

kubectl get pods -o wide
Checking the name of the Kubernetes node

Q13. How to check the details of docker containers in the Kubernetes pod ?

Answer:

kubectl describe pod pod-name
Checking the details of docker containers

Q14. What does READY status signify in kubectl command output?

Answer: The READY status shows the number of running (ready) containers out of the total number of containers in the Pod.

kubectl get pod -o wide command
Checking the Ready Status

Q15. How to delete the Kubernetes pod in the kubernetes cluster?

Answer: Use the kubectl delete command.

kubectl delete pod webapp
Deleting the Kubernetes pod

Q16. How to edit the Docker image of the container in the Kubernetes Pod ?

Answer: Use the Kubernetes edit command.

kubectl edit pod webapp

Q17. How to Create a manifest file to launch a Kubernetes pod without actually creating the Kubernetes pod?

Answer: The --dry-run=client flag should be used.

kubectl run nginx --image=nginx --dry-run=client -o yaml > my-file.yaml
launch a Kubernetes pod without actually creating the Kubernetes pod

Q18. How to check the number of Kubernetes Replicasets running in the kubernetes cluster ?

Answer: Run Kubectl get command.

kubectl get rs
kubectl get replicasets
Checking the Replicasets in kubernetes cluster

Q19. How to find the correct apiVersion for a Kubernetes ReplicaSet or a Kubernetes deployment?

Answer:

kubectl explain rs | grep VERSION
Finding the Kubernetes replicaset or kubernetes deployment version

Q20. How to delete the Kubernetes Replicasets in the Kubernetes cluster?

Answer: Run the below command.

kubectl delete rs replicaset-1 replicaset-2
delete the Kubernetes Replicasets

Q21. How to edit the Kubernetes Replicasets in the Kubernetes cluster?

Answer: Run the below command.

kubectl edit rs replicaset-name

Q22. How to Scale the Kubernetes Replicasets in the Kubernetes cluster?

Answer: To scale the Kubernetes Replicasets you can use any of three below commands.

kubectl scale  --replicas=5 rs rs_name
kubectl scale --replicas=6 -f file.yml # Doesn't change the number of replicas in the file.
kubectl replace -f file.yml

Q23. How to Create the Kubernetes deployment in the kubernetes Cluster?

Answer: Use the kubernetes create command.

kubectl create deployment nginx-deployment --image=nginx
Creating the Kubernetes deployment
kubectl create deployment my-deployment --image=httpd:2.4-alpine
Creating the Kubernetes deployment

Note: Deployment strategies are of two types (a minimal strategy snippet follows this list):

  • Recreate strategy, where all the Pods of the deployment are replaced together and new Pods are created.
  • Rolling update strategy (the default), where a few Pods at a time are replaced with newly created Pods.
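
A minimal sketch of how the rolling update strategy can be tuned inside a Deployment spec (the maxSurge and maxUnavailable values are illustrative):

spec:
  replicas: 3
  strategy:
    type: RollingUpdate        # Or "Recreate" to replace all Pods at once
    rollingUpdate:
      maxSurge: 1              # At most 1 extra Pod above the desired count during the update
      maxUnavailable: 1        # At most 1 Pod may be unavailable during the update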

To Update the deployment use the below commands.

  • To update the deployments
kubectl apply -f deployment-definition.yml
  • To update the deployment such as using nginx:1.16.1 instead of nginx:1.14.2
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record

Q24. How to Scale the Kubernetes deployment in the kubernetes Cluster?

Answer:

kubectl scale deployment my-deployment --replicas=3
Scaling the Kubernetes deployment

Q25. How to Edit the Kubernetes deployment in the kubernetes Cluster?

Answer:

kubectl edit deployment my-deployment
Editing the Kubernetes deployment

Q26. How to Describe the Kubernetes deployment in the kubernetes Cluster?

Answer:

kubectl describe deployment my-deployment
Describing the Kubernetes deployment

Q27. How to pause the Kubernetes deployment in the kubernetes Cluster?

Answer: Use the Kubectl rollout command.

kubectl rollout pause deployment.v1.apps/my-deployment
Pausing the kubernetes deployment
Viewing the Paused kubernetes deployment
  • To check the status of the rollout and then list all the revisions of the rollout, you can use the below commands.
kubectl rollout status deployment.v1.apps/my-deployment

kubectl rollout history deployment.v1.apps/my-deployment

Q28. How to resume the Kubernetes deployment in the kubernetes Cluster?

Answer:

kubectl rollout resume deployment.v1.apps/my-deployment
Resuming the Kubernetes deployment

Q29. How to check the history the Kubernetes deployment in the kubernetes Cluster?

Answer:

For incorrect Kubernetes deployments, such as one with a wrong image, the rollout gets stuck. Make sure to stop watching the rollout using Ctrl + C and execute the rollout history command.

kubectl rollout history deployment.v1.apps/nginx-deployment

Q30. How to rollback to the previous kubernetes deployment version which was stable in the kubernetes Cluster?

Answer: Run the undo command as shown below.

kubectl rollout undo deployment.v1.apps/nginx-deployment

Q31. How to Create a manifest file to create a Kubernetes deployment without actually creating the Kubernetes deployment?

Answer: Use the --dry-run=client flag.

kubectl create deployment nginx --image=nginx --dry-run=client -o yaml
Creating the kubernetes deployment manifest file

Q32. How to Create a manifest file to create a Kubernetes deployment with Replicasets without actually creating the Kubernetes deployment?

Answer: Use the --dry-run=client flag.

kubectl create deployment nginx --image=nginx --replicas=4 --dry-run=client -o yaml
Creating the kubernetes deployment with replicasets with manifest file

Q33. How to Create a Kubernetes service using manifest file ?

Answer: Create the Kubernetes manifest file and then run the kubectl create command.

kubectl create -f service-definition.yml

Q34. How to Check running Kubernetes service in the kubernetes cluster?

Answer: To check the running Kubernetes services in the kubernetes cluster run below command.

kubectl get svc
kubectl get services
Checking Kubernetes service in kubernetes cluster

Q35. How to check details of a Kubernetes service such as targetPort, labels, and endpoints in the kubernetes cluster?

Answer:

kubectl describe service service-name
Describing the Kubernetes service in kubernetes cluster

Q36. How to Create a Kubernetes NodePort service in the kubernetes cluster?

Answer: Run kubectl expose command.

kubectl expose deployment nginx-deploy --name=my-service --target-port=8080 --type=NodePort --port=8080 -o yaml -n default  # Make sure to add the nodePort value separately
Kubernetes NodePort service

Q37. How to Create a Kubernetes ClusterIP service named nginx-pod running on port 6379 in the kubernetes cluster?

Answer: Create a pod then expose the Pod using kubectl expose command.

kubectl run nginx --image=nginx --namespace=kube-system
kubectl expose pod nginx --port=6379 --name nginx-pod -o yaml --namespace=kube-system
Creating the Kubernetes Pods
Kubernetes ClusterIP service
Verifying the Kubernetes ClusterIP service

Q38. How to Create a Kubernetes ClusterIP service named redis-service in the kubernetes cluster?

Answer:

kubectl create service clusterip --tcp=6379:6379  redis-service --dry-run=client -o yaml
Creating the Kubernetes ClusterIP

Q39. How to Create a Kubernetes NodePort service named redis-service in the kubernetes cluster?

Answer: Run the kubectl create service nodeport command.

kubectl create service nodeport --tcp=6379:6379  redis-service  -o yaml
Creating the Kubernetes NodePort

Q40. How to save a Kubernetes manifest file while creating a Kubernetes deployment in the kubernetes cluster?

Answer: Redirect the output to a file using > nginx-deployment.yaml.

kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > nginx-deployment.yaml


Related: Kubernetes Tutorial for Kubernetes Certification [PART-1]

Related: Kubernetes Tutorial for Kubernetes Certification [PART-2]

Conclusion

In this Ultimate guide (Kubernetes Interview questions for Kubernetes Certification (CKA), you had a chance to revise everything you needed to pass and crack the Kubernetes interview.

Now you have sound knowledge of Kubernetes and are ready for your upcoming interview.

Kubernetes Tutorial for Kubernetes Certification [PART-2]

In the previous Kubernetes Tutorial for Kubernetes Certification [PART-1], you got a jump start into the Kubernetes world; why not gain the more advanced knowledge of Kubernetes that you need to become a Kubernetes pro?

In this Kubernetes Tutorial for Kubernetes Certification [PART-2] guide, you will learn more advanced levels of Kubernetes concepts such as Kubernetes deployment, kubernetes volumes, Kubernetes ReplicaSets, and many more.

Without further delay, let’s get into it.


Table of Content

  1. kubernetes deployment
  2. Kubernetes ReplicaSets
  3. Kubernetes DaemonSet
  4. Kubernetes Jobs
  5. What is a kubernetes service
  6. Kubernetes ClusterIP
  7. Kubernetes NodePort
  8. kubernetes loadbalancer service
  9. Kubernetes Ingress
  10. kubernetes configmap or k8s configmap
  11. Kubernetes Secrets
  12. Kubernetes Volume and kubernetes volume mounts
  13. kubernetes stateful sets
  14. Conclusion

Introduction to YAML

The YAML format is easier to understand; to compare the three different types of syntax, let's check out the examples below.

The below is the XML syntax.

<servers>
      <server>
               <name>server1</name>
               <owner>sagar</owner>
               <status>active</status>
      </server>
</servers>

The below is the JSON syntax.

{
   "servers": [
      {
        "name": "server1",
        "owner": "sagar",
        "status": "active"
      }
   ]
}

The below is the YAML syntax.

servers:
  - name: server1
    owner: sagar
    status: active

The below is again an example of the YAML syntax.

Fruits:
  - Apple:
        Calories: 95
        Fat: 0.3
        Carbs: 25
  - Banana:
      Calories: 105
      Fat: 0.4
      Carbs: 27
  - Orange:
        Calories: 45
        Fat: 0.1
        Carbs: 11
Vegetables:
  - Carrot:
        Calories: 25
        Fat: 0.1
        Carbs: 6  
  - Tomato:
        Calories: 22
        Fat: 0.2
        Carbs: 4.8  
  - Cucumber:
        Calories: 8
        Fat: 0.1
        Carbs: 1.9          

kubernetes deployment

Kubernetes deployments allow you to create Kubernetes Pods and containers using YAML files. Using a Kubernetes deployment, you specify the number of Pods or replicas you need for a particular Kubernetes deployment.

Unlike kubernetes replicaset, Kubernetes deployment allows you to roll back, update the rollouts, resume or pause the deployment and never cause downtime. When you create a Kubernetes deployment by defining the replicas the kubernetes replicaset are also created.

A ReplicaSet ensures that a specified number of Pods are running simultaneously; however, a Kubernetes deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to pods along with a lot of other useful features.

Let’s check out an example to create Kubernetes deployments.

  • Create a file named deployment.yaml and copy/paste the below content into the file.
    • The name of the deployment is nginx-deployment, defined in the metadata.name field.
    • The deployment will create three Kubernetes Pods using the spec.replicas field.
    • Kubernetes Pod characteristics are defined using the spec.selector field.
    • Pods will be launched only if they match the deployment label defined using spec.selector.matchLabels.app.
    • Pods are labeled using spec.template.metadata.labels.app.
    • Container specifications are done using spec.template.spec.

When you execute the kubectl apply command to create the Kubernetes object, your YAML file, i.e., the request to the kube API server, is first converted into JSON format.

The below is an example of the Deployment YAML syntax.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment  # Name of the deployment
  labels: 
     app: nginx  # Declaring the deployment's labels.
spec:
  replicas: 3  # Declaring the number of Pods required
  selector:
    matchLabels:
      app: nginx # Pods will be launched if matches deployment Label.
  template:
    metadata:
      labels:
        app: nginx # Labels of the Pods.
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
  • Run the commands below.
kubectl create deployment --help

kubectl create -f deployment.yml

kubectl create deployment my-dep --image=busybox --replicas=3 
  • Now, run kubectl get deployments to check if the Kubernetes deployment has been created.
kubectl get deployments
Creating kubernetes deployments
  • Next, run kubectl get rs to check the Kubernetes ReplicaSets created by the Deployment.
kubectl get rs
Checking the kubernetes deployments
  • If you wish to check the labels which are automatically generated for each Pod, run the below command.
kubectl get pods --show-labels
Checking labels of Pods
  • To check the information of the deployment use the below command.
kubectl describe deployment nginx-deployment
  • To update the deployment such as using nginx:1.16.1 instead of nginx:1.14.2
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record

Kubernetes Update and Rollback

  • First check the numbers of pods and make sure no resource is present in this namespace.
kubectl get pods
  • Next create the deployment. The --record=true option records the changes made by the command.
kubectl create -f deployment.yml --record=true
  • Next check the status of the rollout by using below command.
kubectl rollout status deployment app-deployment
  • Next check the history of the deployment by using below command..
kubectl rollout history deployment app-deployment
  • Next describe the deployment by using below command.
kubectl describe deployment app-deployment
  • Next edit the deployment by using below command. For example, change the image version.
kubectl edit deployment app-deployment
  • Next, if there are any issues in the deployment, you can undo the deployment by using below command.
kubectl rollout undo deployment app-deployment
  • Next check the status of the rollout by using below command.
kubectl rollout status deployment app-deployment

Kubernetes ReplicaSets

Kubernetes ReplicaSets maintains a set of Kubernetes Pods running simultaneously and makes sure the pods are load-balanced properly; however, a Kubernetes deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to pods along with a lot of other useful features.

Even if you declare the replicas as 1, Kubernetes makes sure that this one Pod is running all the time.

Kubernetes ReplicaSets are deployed in the same way as Kubernetes deployments. For ReplicaSets, the kind is always ReplicaSet, and you can scale or delete the Pods with the same kubectl commands you used for deployments.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicasets
  labels: 
      app: nginx
spec:
  replicas: 3 
  selector:
    matchLabels:   # Replicaset Label To create replicasets only when it matches label app: nginx 
      app: nginx 
  template:
    metadata:
     labels:      # Container label app: nginx 
        app: nginx 
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
  • Next run the below command to create the kubernetes Replicaset.
kubectl apply -f replicasets.yml
kubectl create -f rc-definition.yml
  • To replace the Kubernetes ReplicaSet run the below command.
kubectl replace -f replicasets.yml
  • To scale the Kubernetes Replicasets run the below command.

Scaling with the kubectl scale command doesn't change the number of replicas in the Kubernetes manifest file.

kubectl scale --replicas=6 -f replicasets.yml 
kubectl scale  --replicas=6 replicaset name-of-the-replicaset-in-metadadata
Kubectl commands to work with kubernetes replicasets
  • To list the older ReplicationControllers (the predecessor of ReplicaSets) run the below command.
kubectl get replicationcontroller
  • Some of the important commands of replicasets are:
kubectl create -f replicaset-definition.yml
kubectl get replicaset
kubectl delete replicaset myapp-replicaset
kubectl replace -f replicaset-definition.yml
kubectl scale --replicas=6 -f replicaset-definition.yml

If a ReplicaSet is already running with a given label and you try to create new Pods with the same label, the ReplicaSet will terminate the new Pods to keep the desired number of replicas.

Kubernetes DaemonSet

Kubernetes DaemonSet ensures that each node in the Kubernetes cluster runs a copy of Pod. When any node is added to the cluster, it ensures Pods are added to that node, or when a node is removed, Pods are also removed, keeping the Kubernetes cluster clean rather than getting stored in the garbage collector.

Generally, the node that a Kubernetes Pod runs on is chosen by the Kubernetes scheduler; however, for Kubernetes, DaemonSet pods are created and scheduled by the DaemonSet controller. To deploy, replace or update the Kubernetes Daemonset, you need to use the same Kubectl command for Kubernetes deployments.

  • Create a file named daemonset.yaml and copy/paste the below code.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
  • Now, execute the kubectl apply command to create a Kubernetes daemonset.
kubectl apply -f  daemonset.yaml
creating a Kubernetes daemonset
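
To verify that the DaemonSet has scheduled a Pod on every node, list the DaemonSet and its Pods in the kube-system namespace:

kubectl get daemonset fluentd-elasticsearch -n kube-system
kubectl get pods -n kube-system -o wide | grep fluentd-elasticsearch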

Kubernetes Jobs

The main function of the Kubernetes job is to create one or more Kubernetes pods and check the successful deployment of the pods. Deleting a Kubernetes job will remove the Pods it created, and suspending a Kubernetes job will delete its active Pods until it is resumed again.

For example, while creating a new Pod, if it fails or is deleted due to a node hardware failure or a node reboot, the Kubernetes Job will recreate the Pod. A Kubernetes Job also allows you to run multiple Pods in parallel or on a particular schedule.

When a Kubernetes Job completes, no more Pods are created or deleted, allowing you to still view the logs of completed pods to check for errors, warnings, etc. The Kubernetes job remains until you delete it using the kubectl delete job command.

  • To create a Kubernetes Job create a file named job.yaml and copy/paste the below content into it.
apiVersion: batch/v1
kind: Job
metadata:
  name: tomcatjob
spec:                  # Job spec containing the Pod template
  template:
    # This is the pod template
    spec:
      containers:
      - name: tomcatcon
        image: tomcat
        command: ['sh', '-c', 'echo "Hello, Tomcat!" && sleep 3600']
      restartPolicy: OnFailure

  • To create the Kubernetes Jobs run the kubectl apply command followed by kubectl get job command to verify.
kubectl apply -f job.yaml

kubectl get jobs
creating the Kubernetes Jobs
  • To list all the Pods that belong to a Kubernetes Job use kubectl get pods command as shown below.
pods=$(kubectl get pods --selector=job-name=tomcatjob --output=jsonpath='{.items[*].metadata.name}')
echo $pods
list all the Pods that belong to a Kubernetes Job
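
Since the Pods of a completed Job are kept around, you can still read their logs. A small sketch that loops over the Pod names collected in the pods variable above:

for p in $pods; do kubectl logs "$p"; done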

What is a kubernetes service

Kubernetes service allows you to expose applications running on a set of Pods as a network service. Every Kubernetes Pod gets a unique IP address and DNS name, and sometimes Pods are deleted or added to match the state of your cluster, which leads to a problem because the IP addresses change.

To solve this, the Kubernetes Service was introduced, which assigns a static, permanent IP address to a set of Pods as a network service. There are different Kubernetes service types: ClusterIP, NodePort, LoadBalancer, and ExternalName.

Kubernetes ClusterIP

Kubernetes ClusterIP exposes the service on an internal IP and is reachable only from within the cluster. You cannot access a ClusterIP service from outside the Kubernetes cluster. When you create a Kubernetes ClusterIP service, a virtual IP is assigned.

Kubernetes ClusterIP architecture
  • Lets learn to create a ClusterIP using a file named clusterip.yaml and copy/paste the below content.
kind: Service 
apiVersion: v1 
metadata:
  name: backend-service 
spec:
  type: ClusterIP
  selector:
    app: myapp 
  ports:      
    - port: 8080     # Declaring the ClusterIP service port
# target port is the pod's port and If not set then it takes the same value as the service port
      targetPort: 80   
  • To create the ClusterIP service run the kubectl apply command followed by kubectl get service command to verify.
kubectl apply -f clusterip.yaml

kubectl get service
Creating the ClusterIP and verifying
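
Because a ClusterIP service is reachable only inside the cluster, one way to test it is from a short-lived Pod. A sketch assuming the backend-service created above:

kubectl run tmp --rm -it --image=busybox --restart=Never -- wget -qO- http://backend-service:8080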

Kubernetes NodePort

Kubernetes NodePort exposes the Kubernetes service so it is accessible outside your cluster on a specific port called the NodePort. Each node proxies the NodePort (the same port number on every node) into your Service. The Kubernetes control plane allocates a port from the default range (30000-32767); if you want a specific port number, you can specify a value in the nodePort field.

Kubernetes NodePort architecture

Let’s learn how to create a simple Kubernetes NodePort service. In the below nodeport.yaml manifest file:

  • Kind should be set to Service as you are about to launch a new service.
  • The name of the service is hostname-service.
  • Expose the service on a static port on each node so it can be accessed from outside the cluster. When the node receives a request on the static port 30162, it forwards the request to one of the Pods with the label "app: echo-hostname".
  • Three types of ports for a service are as follows:
    • nodePort – The static port assigned to each node.
    • port – The service port exposed internally in the cluster.
    • targetPort – Container port or pod Port on which application is hosted.
kind: Service 
apiVersion: v1 
metadata:
  name: hostname-service 
spec:
  type: NodePort
  selector:
    app: echo-hostname 
# Client access the Node Port which is forwarded to the service Port and to the target Port
  ports:       
    - nodePort: 30162  # Node Port
      port: 8080 # Service Port
      targetPort: 80   # Pod Port ( If not set then it takes the same service Port)

  • To create the Kubernetes NodePort service run the kubectl apply command followed by kubectl get service command to verify.
kubectl apply -f nodeport.yaml

kubectl get service
Checking Kubernetes NodePort service

Whether there is a single Pod on a single node, multiple Pods on a single node, or multiple Pods on multiple nodes, the NodePort remains the same; the client just uses a different node's URL.

https://node1:30008
https://node2:30008
https://node3:30008

kubernetes loadbalancer service

Kubernetes load balancer service exposes the service externally using a cloud provider's load balancer. If you access the service with NodePort, you need a different URL for each node; a load balancer overcomes this by providing a single endpoint.

  • Let’s learn how to create a simple kubernetes loadbalancer service. In the below lb.yaml manifest file:
kind: Service 
apiVersion: v1 
metadata:
  name: loadbalancer-service 
spec:
  type: LoadBalancer
  selector:
    app: echo-hostname 
# Client access the Load balancer which forwards to NodePort to the targetPort.
  ports:  
    - nodePort: 30163  # Node Port
      port: 8080 # Service Port
      targetPort: 80   # Pod Port ( If not set then it takes the same service Port)
  • To create the kubernetes Loadbalancer service run the kubectl apply command followed by kubectl get service command to verify.
kubectl apply -f lb.yaml

kubectl get service
Checking Kubernetes Load balancer service

Kubernetes Service commands

kubectl get service

kubectl get svc

kubectl describe svc <name-of-service>


Kubernetes Ingress

Earlier in the previous section, you learned how to enable the Kubernetes load balancer or NodePort service to access the Kubernetes service from outside the cluster. But as your environment grows, you need to expose the service on a proper link, configure multiple URL redirection, apply SSL certificates, etc. To achieve this, you need to have Kubernetes Ingress.

To deploy Kubernetes Ingress, you need a Kubernetes ingress controller and Ingress resources as they are not automatically deployed within a cluster. As you can see in the below image, Ingress sends all its traffic to Kubernetes Service and further to the Pods.

Kubernetes Ingress architecture

Let’s learn how to create a Kubernetes Ingress resource. The name of an Ingress object must be a valid DNS subdomain name, and annotations configure the Ingress controller. The Ingress spec configures a load balancer or proxy server and the rules.

  • If you don't specify any host within the spec parameter, the rule applies to all inbound HTTP traffic via the IP address.
  • /testpath is the path associated with backend service and port.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
Kubernetes Ingress architecture diagram
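
Assuming the manifest above is saved as ingress.yaml (the file name is an assumption) and an ingress controller such as ingress-nginx is already running in the cluster, the Ingress is created and inspected like any other object:

kubectl apply -f ingress.yaml
kubectl get ingress minimal-ingress
kubectl describe ingress minimal-ingress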

kubernetes configmap or k8s configmap

Kubernetes ConfigMap allows you to store non-confidential data in key-value pairs, such as environment variables, command-line arguments, or a configuration file in a volume, for example a database subdomain name.

Kubernetes ConfigMaps do not provide secrecy or encryption. If the data you want to store is confidential, use a Secret rather than a ConfigMap.

  • There are multiple ways to use a Kubernetes ConfigMap to configure containers inside a Pod, such as:
    • By using commands in the containers.
    • As environment variables on containers.
    • By attaching it as a volume.
    • By writing code that uses the Kubernetes API to read the ConfigMap.
Kubernetes Configmaps architecture diagram
  • Let’s learn how to create a k8s configmap using the below manifest file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
data:
  players: "3"
  ui_properties_file_name: "user-interface.properties"
  • Now that you have created the Kubernetes ConfigMap, let's use values from the game-demo ConfigMap to configure a Pod:
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
    - name: demo
      image: alpine
      command: ["sleep", "3600"]
      env:
        # Define the environment variable
        - name: PLAYER 
          valueFrom:
            configMapKeyRef:
              name: game-demo          
              key: players 
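
The same ConfigMap can also be created imperatively with kubectl; a sketch equivalent to the game-demo manifest above:

kubectl create configmap game-demo \
  --from-literal=players=3 \
  --from-literal=ui_properties_file_name=user-interface.properties

kubectl get configmap game-demo -o yaml   # Verify the stored key-value pairs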

Kubernetes Secrets

Kubernetes Secrets allow you to store sensitive information such as passwords, OAuth tokens, and SSH keys (note that Secret values are base64-encoded, not encrypted, unless encryption at rest is configured). There are three ways to use Kubernetes Secrets with a Pod: as environment variables on the container, attached as a file in a volume, or used by the kubelet when pulling the image.

Let’s learn how to create Kubernetes Secrets using the below manifest file.

apiVersion: v1
kind: Secret
metadata:
  name: secret-basic-auth
type: kubernetes.io/basic-auth
stringData:
  username: admin
  password: password123

You can also create Kubernetes secrets using kubectl command.

kubectl create secret docker-registry secret-tiger-docker \
  --docker-username=user \
  --docker-password=pass \
  --docker-email=automateinfra@gmail.com
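
Since Secret values are only base64-encoded by default, you can read a value back out of the secret-basic-auth Secret created above as follows:

kubectl get secret secret-basic-auth -o jsonpath='{.data.password}' | base64 --decode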

Kubernetes Volume and kubernetes volume mounts

Kubernetes volumes are used to store data for the containers in a Pod. If you store the data locally on a container, it's a risk: when the Pod or the container dies, the data is lost. Kubernetes volumes remain persistent and are backed up easily.

A Pod can use several Kubernetes volumes at once, and each container in the Pod's configuration must independently specify where to mount each volume using volume mounts.

  • There are different persistent volumes which kubernetes supports such as:
    • AWS EBS: An AWS EBS volume mounts into your Pod, provided the nodes on which the Pods are running are AWS EC2 instances.
    • Azure Disk: The Azure Disk volume type mounts a Microsoft Azure Data Disk into a Pod.
    • Fibre Channel: Allows an existing Fibre Channel block storage volume to mount to a Pod.
  • Let’s learn how to declare Kubernetes volume using AWS EBS configuration example.
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  volumes:
  - name: test-volume
    # This AWS EBS volume must already exist.
    awsElasticBlockStore:
      volumeID: "<volume id>"
      fsType: ext4

kubernetes stateful sets

Kubernetes stateful sets manage stateful applications such as MySQL, Databases, MongoDB, which need persistent storage. Kubernetes stateful sets manage the deployment and scaling of a set of Pods and provide guarantees about the ordering and uniqueness of these Pods.

With Kubernetes stateful sets with N replicas, when Pods are being deployed, they are created sequentially, in order from {0..N-1} and are terminated in reverse order, from {N-1..0}.

Let’s check out how to declare Kubernetes stateful sets configuration example below.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
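
Assuming the manifest above is saved as statefulset.yaml (the file name is an assumption), the StatefulSet is created and its ordered Pods (web-0, web-1, web-2) can be watched as follows:

kubectl apply -f statefulset.yaml
kubectl get statefulset web
kubectl get pods -l app=nginx -w   # Pods appear sequentially: web-0, web-1, web-2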

Conclusion

Now that you have learned everything you should know about Kubernetes, you are sure going to be the Kubernetes leader in your upcoming projects or team or organizations.

So with that, which applications do you plan to host on Kubernetes in your next adventure?

Kubernetes Tutorial for Kubernetes Certification [PART-1]

If you are looking to learn to Kubernetes, you are at the right place; this Kubernetes Tutorial for Kubernetes Certification tutorial will help you gain complete knowledge that you need from basics to becoming a Kubernetes pro.

Kubernetes is more than just management of Docker containers, as it keeps the load balanced between the cluster nodes, provides a self-healing mechanism such as replacing failed containers with new healthy ones, and offers many more features.

Let’s get started with Kubernetes Tutorial for Kubernetes Certification without further delay.


Table of Content

  1. What is kubernetes?
  2. Why Kubernetes?
  3. Docker swarm vs kubernetes
  4. kubernetes Architecture: Deep dive into Kubernetes Cluster
  5. kubernetes master components or kubernetes master node
  6. Worker node in kubernetes Cluster
  7. Highly Available Kubernetes Cluster
  8. What is kubernetes namespace?
  9. Kubernetes Objects and their Specifications
  10. Kubernetes Workloads
  11. What is a kubernetes Pod? 
  12. Deploying multi container Pod
  13. Conclusion

What is kubernetes?

Kubernetes is an open-source Google-based container orchestration engine for automating deployments, scaling, and managing the container’s applications. It is also called k8s because eight letters are between the “K” and the “s” alphabet.

Kubernetes is portable and extensible and supports declarative configuration as well as automation.

Kubernetes also helps in service discovery, such as exposing a container using the DNS name or using their own IP address, provides a container runtime, zero downtime deployment capabilities, automatic rollback, automatic storage allocation such as local storage, public cloud providers, etc.

Kubernetes has the ability to scale when needed, which is known as AutoScaling. You can automatically manage configurations like secrets or passwords and mount EFS or other storage when required.

Why Kubernetes?

Now that you have a basic idea about what is Kubernetes. Earlier applications used to run on the physical server that had issues related to resource allocation, such as CPU memory. You would need more physical servers, which were too expensive.

To solve the resource allocation issue, virtualization was adopted, in which you could isolate applications and align the necessary resources as per the need. With virtualization, you can run multiple virtual machines on a single physical server, allowing better utilization of resources and saving hardware costs.

Later, the containerization-based approach was followed, first with Docker and then Kubernetes. Containers are lightweight and allow portable deployments in which containers share the OS, CPU, and memory of the host but have their own file systems, and can run anywhere, from local machines to cloud infrastructure.

Finally, Kubernetes takes care of scaling and failover for your applications and easily manages the canary deployment of your system.

Some of the key features of Kubernetes are:

  • Kubernetes exposes a container using a DNS name or using an IP address.
  • Kubernetes allows you to mount storage system of your choice such as local storage, public cloud providers and more.
  • You can rollback the state anytime for your deployments.
  • Kubernetes replaces containers that fails or whose health check fails.
  • Kubernetes allows you to store secrets and sensitive information such as passwords, OAuth tokens and SSH keys. Also you can update the secret information multiple times without impacting container images.

Every Kubernetes object contains two nested fields (object spec and object status), where spec describes the desired state of the object you set and status shows the current state of the object.

Physical Server to Virtualization to Containerization

Docker swarm vs kubernetes

In previous sections, you learned what Kubernetes is and why there is a shift from physical to virtual machines and towards docker, the container-based technology.

Docker is a lightweight application that allows you to launch multiple containers. Still, to manage or orchestrate the containers, you need orchestration tools such as Docker Swarm or Kubernetes.

Let’s look at some of the key differences between Docker swarm vs Kubernetes.

  • Docker Swarm deploys services on nodes using Docker Compose YAML files, while Kubernetes uses its own YAML manifests applied with kubectl.
  • In Docker Swarm, users can encrypt data between nodes; in Kubernetes, all Pods can interact with each other without encryption by default.
  • Docker Swarm is easy to install, but the cluster doesn't have many advanced features; Kubernetes installation is more difficult, but the cluster is far more powerful.
  • Docker Swarm has no autoscaling; Kubernetes supports autoscaling.
Docker swarm vs Kubernetes

kubernetes Architecture: Deep dive into Kubernetes Cluster

When you install Kubernetes, you create a Kubernetes cluster that mainly contains two components: master (or controller) nodes and worker nodes. Nodes are the machines that contain their own Linux environment, which could be either a virtual machine or a physical machine.

The application and services are deployed in the containers within the Pods inside the worker nodes. Pods contain one or more docker containers. When a Pod runs multiple containers, all the containers are considered a single entity and share the Node resources.

Bird-eye view of Kubernetes cluster

kubernetes master components or kubernetes master node

Kubernetes master components, or the Kubernetes master node, manage the Kubernetes cluster's state, store information about the different nodes, container alignments, the data, cluster events, the scheduling of new Pods, etc.

Kubernetes master components or Kubernetes master node contains various components such as Kube-apiserver, an etcd storage, a Kube-controller-manager, and a Kube-scheduler.

Let’s learn about each Kubernetes master component or Kubernetes master node.

kube api server

The most important component in the Kubernetes master node is the kube API server, or API server, which orchestrates all the operations within the cluster. The Kubernetes cluster exposes the kube API server, which acts as a gateway and an authenticator for users.

The kube API server also connects with the worker nodes and the other control plane components. It also allows you to query and manipulate the state of API objects in Kubernetes, such as Pods, Namespaces, ConfigMaps, and Events, which are stored in the etcd server.

The kubectl command-line interface or kubeadm uses the kube API server to execute the commands.

If you deploy the cluster using the kubeadm tool, the kube API server is installed as a Pod; for a non-kubeadm setup, you will find it as a systemd unit at /etc/systemd/system/kube-apiserver.service.

  • To check if the kube API server is running in the Kubernetes cluster, use the kubectl command below.
kubectl get pods --all-namespaces
Checking the kube API server in the Kubernetes cluster with the kubectl command
  • To check if the kube API server is running in the Kubernetes cluster, use the process command below.
ps -aux | grep kube-apiserver
Checking the kube API server in the Kubernetes cluster with the process command

You can also use client libraries in different languages if you want to write an application that talks to the Kubernetes API server.
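If you just want to poke at the API server directly, a simple sketch using only standard kubectl features is to start a local proxy and query the REST endpoints with curl:

# Start a local proxy that authenticates to the kube API server on your behalf
kubectl proxy --port=8001 &

# Query the API server over the proxy; these are the same objects kubectl works with
curl http://localhost:8001/api/v1/namespaces
curl http://localhost:8001/api/v1/namespaces/default/pods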

etcd kubernetes

etcd is again an important component in the Kubernetes master node that allows storing the cluster data, cluster state, secrets, configs, pod state, etc. in key-value pair format. etcd holds two types of state; one is desired, and the other is the current state for all resources and keeps them in sync.

When you run a kubectl get command, the requested data ultimately comes from the etcd server, and when you add or update anything in the Kubernetes cluster using commands such as kubectl create or kubectl apply, etcd is updated.

For example, when a user runs a kubectl command, the request goes to ➜ the kube API server (which authenticates it) ➜ etcd (which reads the value), and the values are then pushed back to the kube API server.
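As a sketch of what etcd actually stores, on a kubeadm cluster you can list the keys under /registry with etcdctl. This assumes etcd runs as a static pod (the pod name etcd-master-node below is hypothetical) and that the certificates sit in the default kubeadm path /etc/kubernetes/pki/etcd:

# List a few of the keys Kubernetes keeps in etcd (kubeadm defaults assumed)
kubectl -n kube-system exec etcd-master-node -- sh -c \
  "ETCDCTL_API=3 etcdctl \
     --cacert=/etc/kubernetes/pki/etcd/ca.crt \
     --cert=/etc/kubernetes/pki/etcd/server.crt \
     --key=/etc/kubernetes/pki/etcd/server.key \
     get /registry --prefix --keys-only" | head -20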

Tabular or relational database
Key-value store

Kube scheduler

Kube scheduler helps schedule new Pods and containers to the appropriate worker nodes according to the pod’s requirement, such as CPU or memory, before allocating the pods to the worker node of the cluster.
Whenever the controller manager finds any discrepancy in the cluster, it forwards the request to the Scheduler via the kube API server to fix the gap. For example, if there is any change in a node or a Pod is created without an assigned node, then:

  • The Scheduler monitors the kube API server continuously.
  • The kube API server checks with etcd, and etcd responds back to the kube API server with the required information.
  • Next, the controller manager informs the kube API server to schedule the new Pods using the Scheduler.
  • The Scheduler picks a suitable node and, via the kube API server, the kubelet on that node is asked to run the Pod.
  • After the Pod is assigned and running, the kubelet reports back to the kube API server, and the kube API server further communicates with etcd to update the state.
Scheduling a Pod

Kube controller manager

Kube controller manager runs the controller process. Kubernetes comes with a set of built-in controllers that run inside the kube-controller-manager. These built-in controllers provide important core behaviors.

  • Node controller: the node controller in the kube controller manager checks the status of nodes, i.e., when a node goes up or down.
  • Replication controller: the replication controller in the kube controller manager makes sure the correct number of Pods are running in the replication group.
  • Endpoints controller: populates the Endpoints objects, joining Services and Pods.
  • Service account and token controller: creates default service accounts and API access tokens for new namespaces.

In Kubernetes, the kube controller manager runs control loops that watch the state of your cluster and then make or request changes where needed. Each controller tries to move the current cluster state closer to the desired state, and if there are any gaps, the request is forwarded to the Scheduler via the kube API server to fix them. You can verify that the controller manager is running as shown below.
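A minimal check, mirroring the earlier kube API server commands (on a kubeadm cluster the controller manager runs as a static pod; on other setups it runs as a system process):

# On a kubeadm cluster the controller manager runs as a static pod in kube-system
kubectl get pods -n kube-system | grep kube-controller-manager

# On a non-kubeadm setup it runs as an operating system process instead
ps -aux | grep kube-controller-manager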

Worker node in kubernetes Cluster

Worker nodes are the part of a Kubernetes cluster used to manage and run the containerized applications. A worker node performs actions whenever the kube API server sends it a request. Each node is managed by the control plane (master node), which contains the services necessary to run Pods.

The worker node contains various components, including the kubelet, kube-proxy, and the container runtime. These node components run on every node and maintain the details of all running Pods.

kubelet in kubernetes

kubelet in Kubernetes is an agent that runs on each worker node and manages containers in the pod after communicating with the kube API server. Kubelet command listens to the Kube API server and acts accordingly, such as adding or deleting containers.

Kube API server fetches the information from kubelet about the worker nodes’ health condition and, if necessary, schedules the necessary resources with the help of Scheduler.

Unlike the control plane components, the kubelet is not installed as a Pod by the kubeadm tool; you must install it manually on each node. You can verify it is running as shown below.
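A quick way to verify this on any node:

# The kubelet runs as a systemd service on every node, not as a Pod
sudo systemctl status kubelet

# Confirm the running kubelet process and the flags it was started with
ps -aux | grep kubelet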

kube proxy in kubernetes

Kube proxy is a networking component that runs on each worker node in the Kubernetes cluster, forwards traffic within the worker nodes, and handles network communications.
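On kubeadm-based clusters, kube-proxy is deployed as a DaemonSet, so a quick way to see one copy running per node is:

# kube-proxy runs as a DaemonSet, one pod per node, in the kube-system namespace
kubectl get daemonset kube-proxy -n kube-system
kubectl get pods -n kube-system -o wide | grep kube-proxy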

Container Runtime

Container Runtime is an important component responsible for providing and maintaining the runtime environment for the containers running inside a Pod. The most common container runtime has been Docker, but others such as containerd or CRI-O are also supported.

Other than Master or Worker Node

  • Now that you know a Kubernetes cluster contains master and worker nodes, note that it also needs a cluster DNS server, which serves DNS records for Kubernetes Services.
  • Next, it is optional but good practice to install or set up the Kubernetes Dashboard (UI), which allows users to manage and troubleshoot applications running in the cluster.

Highly Available Kubernetes Cluster

Now that you have a good idea of the Kubernetes cluster components, do you know how to make the cluster control plane highly available? There are two ways to achieve it:

  • With etcd co-located with the control plane nodes (a stacked etcd topology).
  • With etcd running on separate nodes from the control plane nodes (an external etcd topology).

etcd is co-located with the control plane

In the case where etcd is co-located with the control plane, all three components (API server, scheduler, controller manager) on a node communicate with the etcd instance on that same node.

In this case, if a node goes down, both of its components are down, i.e., the API server and etcd. To solve this, add more control plane nodes to make the cluster highly available. This approach requires less infrastructure.

etcd is co-located with the control plane

etcd running on separate nodes from the control plane

In the second case, with etcd running on separate nodes from the control plane, all three components (kube API server, scheduler, controller manager) communicate with an external etcd cluster.

In this case, if a control plane node goes down, your etcd is not impacted, so you have a more highly available environment than with stacked etcd, but this approach requires more infrastructure.

etcd running on separate nodes from the control plane

What is kubernetes namespace?

Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called Kubernetes namespaces.

Within a Kubernetes namespace, all resources must have a unique name, but names do not need to be unique across namespaces. Kubernetes namespaces help different projects, teams, or customers share a Kubernetes cluster and divide cluster resources between multiple users.

There are three types of Kubernetes namespaces when you launch a Kubernetes cluster:

  • kube-system: contains all the cluster components such as etcd, the API server, networking, the proxy server, etc.
  • default: the namespace where you launch resources and other Kubernetes objects by default.
  • kube-public: this namespace is available to all users accessing the Kubernetes cluster publicly.

Let’s look at an example related to the Kubernetes namespace.

  1. If you wish to connect to a service named db-service within the same namespace, you can access the service directly as:
 mysql.connect("db-service")
  2. To access a service named db-service in another namespace, such as dev, you should access the service as <service-name>.<namespace-name>.svc.cluster.local, because a DNS entry is created when you create a service:
    • svc is the subdomain for Services.
    • cluster.local is the default domain name of the Kubernetes cluster.
mysql.connect("db-service.dev.svc.cluster.local") 

Most Kubernetes resources (e.g., pods, services, replication controllers, and others) are created in the same namespace or different depending on the requirements.

  • To list, describe, create, or delete namespaces in a cluster, run the kubectl commands below.
kubectl get namespaces                    # List all namespaces in the cluster
kubectl describe namespaces               # Show details of every namespace
kubectl create namespace namespace-name   # Create a new namespace
kubectl delete namespaces namespace-name  # Delete a namespace
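  • To work inside a particular namespace without typing -n on every command, you can also switch the current context's namespace; the dev namespace below is just an example.
kubectl run nginx --image nginx -n dev            # Launch a pod in a specific namespace
kubectl config set-context --current --namespace=dev   # Make dev the default namespace for this context
kubectl get pods --all-namespaces                 # List pods across every namespace at once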
  • To allocate resource quota to namespace, create a file named resource.yaml and run the kubectl command.
apiVersion: v1
kind: ResourceQuota
metadata: 
    name: compute-quota
    namespace: my-namespace2
spec:
  hard:
    pods: "10"
    requests.cpu: "1"
    requests.memory: 0.5Gi
    limits.cpu: "1"
    limits.memory: 10Gi
kubectl create -f resource.yaml
allocate resource quota to namespace
  • To check the resource consumption for a particular namespace run the below command.
kubectl describe resourcequota compute-quota -n my-namespace2   # The quota lives in the my-namespace2 namespace
checking the resource consumption for a particular namespace

Kubernetes Objects and their Specifications

Kubernetes objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster, such as how many containers are running inside a Pod and on which node, what resources are available, and whether there are any policies on applications.

These Kubernetes objects are declared in YAML format and are used during deployments. The YAML file is consumed by the kubectl command, which parses it and converts it into JSON before sending it to the API server.

  • spec: while creating the object, you specify the spec field, which defines the desired characteristics of the resource you want in the Kubernetes cluster.
  • Labels are key/value pairs attached to objects, such as Pods. Labels are intended to specify identifying attributes of objects that are relevant to users, and are used to organize and select subsets of objects.
  • apiVersion – Which version of the Kubernetes API you’re using to create this object
  • kind – What kind of object you want to create
  • metadata – Data that helps uniquely identify the object, including a name string, UID, and optional namespace
apiVersion: apps/v1              # Which version of the Kubernetes API this object uses
kind: Deployment                 # What kind of object you would like to create
metadata:                        # Data that identifies the object, such as its name
  name: tomcat-deployment
spec:                            # What you would like to achieve using this template
  replicas: 2                    # Run 2 Pods matching the template
  selector:
    matchLabels:
      app: my-tomcat-app         # The Deployment manages Pods carrying this label
  template:
    metadata:                    # Data that identifies the Pod, such as its labels
      labels:
        app: my-tomcat-app
    spec:
      containers:
      - name: my-tomcat-container
        image: tomcat
        ports:
        - containerPort: 8080
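  • Assuming you save the manifest above to a file (the name deployment.yaml below is just an example), it can be applied and inspected like this.
kubectl apply -f deployment.yaml          # Create or update the Deployment from the manifest
kubectl get deployments                   # Verify the Deployment and its replica count
kubectl get pods -l app=my-tomcat-app     # List the Pods the Deployment created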

Kubernetes Workloads

The workload is the applications running on the Kubernetes cluster. Workload resources manage the set of Kubernetes Pods in the form of Kubernetes deployments, Kubernetes Replica Set, Kubernetes StatefulSet, Kubernetes DaemonSet, Kubernetes Job, etc.

Kubernetes deployments, Kubernetes Replica Set, Kubernetes StatefulSet, Kubernetes DaemonSet, Kubernetes Job, etc

Let’s learn about each of these Kubernetes workload resources in the upcoming sections.

What is a kubernetes Pod?

A Kubernetes Pod is the Kubernetes entity in which your docker containers reside, hosting the applications. When there is an increase in traffic to the apps, it is the number of Pods on the nodes that increases, not the number of containers inside a Pod.

A Kubernetes Pod contains a single container or a group of containers that work with shared storage and networking. It is recommended to add Pods rather than adding more containers to a Pod, because more containers mean a more complex structure and more interconnections.

Kubernetes Pods are created using workload resources such as a Kubernetes Deployment or a Kubernetes Job, with the help of a YAML file or by calling the Kubernetes API directly, and they are assigned unique IP addresses.

To create a highly available application, you should consider deploying multiple Pods, known as replicas. Healing of Pods is done by the controller manager, which keeps monitoring the health of each Pod and later asks the Scheduler to schedule a replacement Pod.

All containers in a Pod can access the shared volumes, allowing those containers to share data. They also share the same network namespace, including the IP address and network ports. Inside a Pod, the containers that belong to the Pod can communicate with one another using localhost.

Below is an example Pod manifest:

apiVersion: v1
kind: Pod
metadata:
  name: postgres
  labels:
    tier: db-tier
spec:
  containers:
    - name: postgres
      image: postgres
      env: 
       - name: POSTGRES_PASSWORD
         value: mysecretpassword
  • To create a Kubernetes Pod create a yaml file named pod.yaml and copy/paste the below content.
# pod.yaml template file that creates pod
apiVersion: v1        # It is of type String
kind: Pod               # It is of type String
metadata:             # It is of type Dictionary
  name: nginx
  labels: 
    app: nginx
    tier: frontend
spec:                  # It is of type Dictionary; containers below is a List
  containers:
  - name: nginx
    image: nginx
  • Now to create a Kubernetes Pod execute the kubectl command.
kubectl create -f pod.yaml   # Creates the Pod from the manifest (errors if it already exists)
kubectl apply -f pod.yaml    # Creates or updates the Pod from the pod.yaml manifest file
Creating a Kubernetes Pod
  • You can also use below kubectl command to run a pod in kubernetes cluster.
kubectl run nginx --image nginx

kubectl get pods -o wide  # To verify the Kubernetes pods.

kubectl describe pod nginx  # To show detailed information about the pod
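  • A few more everyday commands for working with the pod you just created (the pod name nginx matches the examples above).
kubectl logs nginx                    # Show the container logs of the nginx pod
kubectl exec -it nginx -- /bin/bash   # Open an interactive shell inside the pod
kubectl delete pod nginx              # Remove the pod when you are done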

Deploying multi container Pod

In the previous section, you learned how to launch a Pod with a single container, but you sometimes need to run Kubernetes pods with multiple containers. Let’s learn how you can achieve this.

  • To create a multi container Kubernetes Pod create a yaml file named multi-container-demo.yaml and copy/paste the below content.
apiVersion: v1
kind: Pod
metadata:
  name: multicontainer-pod
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container-1                  # Container 1
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: ubuntu-container-2                  # Container 2 writes into the shared volume
    image: ubuntu
    command: ["/bin/sh", "-c", "echo Hello from container 2 > /pod-data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
  • Now to create multi container Kubernetes Pod execute the kubectl command.
kubectl apply -f multi-container-demo.yaml  # To run the multi-container-demo.yaml manifest file
  • To check the Kubernetes Pod, run the kubectl get pods command.
kubectl get pods
Creating multi-container Kubernetes Pod
  • To describe both the containers in the kubernetes Pods run kubectl describe command as shown below.
kubectl describe pod multicontainer-pod
describe both the containers in the Kubernetes Pods


Conclusion

In this ultimate guide, you learned what Kubernetes is and about its architecture, understood the Kubernetes cluster end to end, and saw how to declare Kubernetes manifest files to launch Kubernetes Pods.

Now that you have gained a handful of knowledge about Kubernetes, continue with the PART-2 guide and become a Kubernetes pro.

Kubernetes Tutorial for Kubernetes Certification [PART-2]

Linux Interview Questions and Answers for experienced

If you are preparing for a DevOps or Linux administrator interview, consider this Linux Interview Questions and Answers for experienced professionals tutorial as your go-to guide, which will help you pass the certification or the Linux interview.

Let’s get into this Ultimate Linux Interview Questions and Answers for an experienced guide without further delay.


PAPER-1

Q1. In which directory are Linux logs stored in the Linux file system?

  • Linux log files are stored under the /var/log directory, which contains system-generated and application logs.
/var/log/ directory

Q2. What does the /var/log/dmesg file contain in the Linux file system?

  • When you power on a Linux machine, the BIOS runs POST and the MBR is loaded. The MBR calls GRUB, and GRUB finally loads the kernel into memory. While all of this takes place, the kernel generates lots of messages related to hardware, the BIOS, and mounting file systems, and stores them in /var/log/dmesg in the Linux file system, which can be viewed using the dmesg command.
/var/log/dmesg

Q3. What does the /var/log/apt/history.log file contain in the Linux file system?

  • The /var/log/apt/history.log file in the Linux file system provides all the details about which software or Linux packages were removed, installed, or upgraded on the Linux machine. For example, when you execute commands like sudo apt update or sudo apt install apache2, the corresponding logs are captured in /var/log/apt/history.log.
/var/log/apt/history.log

Q4. What does the /var/log/auth.log file contain in the Linux file system?

  • The /var/log/auth.log file contains logs related to system and user authentication. Whenever any user logs in to the machine or SSHes into a remote machine to perform an activity, the logs are captured in this file. /var/log/auth.log helps you troubleshoot password lockout issues or spot someone repeatedly trying to SSH into your machine.
/var/log/auth.log

Q5. How do you list all the packages installed, using the log file?

cat /var/log/apt/history.log | grep "install"
Listing all the packages in the history.log file

Q6. What does the /var/log/kern.log file contain in the Linux file system?

  • The /var/log/kern.log file provides information related to kernel warnings, kernel error messages, and kernel events on the Linux system.
/var/log/kern.log

Q7. What does the /var/log/syslog file contain in the Linux file system?

  • The /var/log/syslog file provides in-depth information about your system: application logs, kernel warnings, kernel error messages, kernel events, etc. If you are unable to find information in any other file, then syslog is your last resort.
/var/log/syslog

Q8. What does the /var/log/apport.log file contain in the Linux file system?

  • The /var/log/apport.log file provides in-depth information about system crashes, access-related issues, and OS-related failures.
/var/log/apport.log
  • If you wish, you can configure log rotation for these logs in the /etc/logrotate.d/apport file.
/etc/logrotate.d/apport

Q9. What does the /var/log/ufw.log file contain in the Linux file system?

  • The /var/log/ufw.log file provides in-depth information about the firewall and network connectivity on the Linux machine.

Q10. What does the /var/log/daemon.log file contain in the Linux file system?

  • The /var/log/daemon.log file provides information logged by the various background daemons running on the Linux machine.

Q11. What does the /var/log/apache2/access.log file contain in the Linux file system?

  • The /var/log/apache2/access.log file contains information logged by the apache2 web server. As soon as you install apache2 using the apt install apache2 command, you will see this access.log file created on the Linux machine.
/var/log/apache2/access.log

Q12. What does the /var/log/dpkg.log file contain in the Linux file system?

  • The /var/log/dpkg.log file provides information related to package installation via dpkg, similar to the /var/log/apt logs on the Linux machine.

Q13. Why are files beginning with "." not shown by the ls command in the Linux file system?

  • Files starting with a dot are hidden files, typically configuration files that are less likely to be accessed day to day, so ls hides them by default; use ls -a to include them.

Q14. Explain the File permissions of the below file in the Linux file system?

drwxr-xr-x   2     sam sam 4096 2015-05-16 17:07 Documents
  • d is the file type (directory); it can also be - which represents a regular file, c which represents a character device, or b which represents a block device.
  • rwxr-xr-x are the permissions of the user, group, and others respectively, where r is read, w is write, and x is execute permission. The user has rwx permissions, the group has r-x permissions, and others have r-x permissions.
    • r represents read permission and has the octal value 4.
    • w represents write permission and has the octal value 2.
    • x represents execute permission and has the octal value 1.
    • u represents the user, g represents the group, o represents others, and a represents all (user, group, and others).
  • 2 is the number of links to the file.
  • sam is the owner of the file, i.e., Documents is owned by the user sam. A group contains multiple users, and the user sam is part of the sam group, which is his primary group.
  • 4096 is the size of the file.
  • 2015-05-16 17:07 displays when the file was last modified.
  • Documents is the file name.

Q15. Explain the chmod command practically to change the permissions on the file in the Linux file system?

chmod changes the permissions on a file. Let’s learn how to work with the chmod command.

  • chmod a-w file.txt – Subtracting the write permission from all user, group and others.
  • chmod u+rwx file.txt – Adding the read, write and execute permission for user.
  • chmod g-r file.txt – Subtracting the read permission from group
  • chmod 666 file.txt – Granting the read and write access to all user, group and others
  • chmod 714 file.txt (demonstrated below)
    • Grants the read, write and execute permissions to the user.
    • Grants the execute permission to the group.
    • Grants the read permission to others.
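  • A quick sketch of what chmod 714 produces, using a throwaway file.
touch file.txt
chmod 714 file.txt    # user: rwx (7), group: --x (1), others: r-- (4)
ls -l file.txt        # The listing should start with -rwx--xr--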

Q16. Explain the umask command practically in the Linux file system.

The umask command works like a filter that masks your permissions.

  • A 0 digit masks nothing, i.e., the corresponding read, write, and execute bits are kept.
  • A 7 digit masks everything, i.e., it removes the read, write, and execute bits.
  • umask 037 will create new directories with rwxr----- (740) permissions and new files with rw-r----- (640), as demonstrated below.
  • umask 000 will create new directories with rwxrwxrwx (777) permissions and new files with rw-rw-rw- (666).
  • umask 222 will create new directories with r-xr-xr-x (555) permissions and new files with r--r--r-- (444).
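A small demo of the 037 case, run in any scratch directory:

umask 037                      # Mask group write and all permissions for others
mkdir demo-dir && touch demo-file
ls -ld demo-dir demo-file      # demo-dir: drwxr----- (740), demo-file: -rw-r----- (640)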

Q17. How to check the file type of a file using the file command.

file name_of_the_file
check the filetype of a file

Q18. How to check the permissions of the file using the getfacl command.

 getfacl name_of_the_file
getfacl command

Q19. How to set the permissions of the file using the setfacl command.

# Setting rwx permission for user sam on name_of_the_file
setfacl -m u:sam:rwx name_of_the_file

Q20. Explain all the below Linux commands?

  • find command in Linux: the find command searches for files and directories according to conditions. For example, find all .mp3 files larger than 10MB and delete them with one single command.
find / -type f -name "*.mp3" -size +10M -exec rm {} \;
  • gzip command in Linux: the gzip command compresses a file to reduce its size.
gzip file_name
  • chattr command in Linux: the chattr command alters file attributes (root only). For example, make a directory and its sub-files (-R for recursive) immutable (+i).
chattr -R +i  temp/
  • chown command in Linux: the chown command changes the owner of a file or directory.
chown -R tomcat:tomcat /opt
  • mkfs command in Linux: the mkfs command creates a file system.
mkfs.ext2 ~/tmp.img
  • unzip command in Linux: the unzip command extracts all files from the specified ZIP archive.
unzip myfile.zip
  • mount command in Linux: the mount command mounts a Linux file system onto a directory.
mount /dev/sda1 /media/usb
  • dd command in linux: dd command in linux copies a file, converts and then formats according to the operands.
# Backup the MBR partitioned system, where size is 512b, "if" is input source and "of" is output file.
sudo dd if=/dev/sda bs=512 count=1 of=mbr.img
  • fdisk command in linux: fdisk command in linux stands for Format disk which allows you to create, view, resize, delete and change the partitions on a hard drive.
fdisk /dev/sda
  • sort command in linux: sort command in linux helps in sorting the lines of text files or arranging them.
sort -o output.txt file.txt
  • swap space in Linux: swap space is used when the amount of physical RAM is full. When a Linux system runs out of RAM, inactive pages are moved from RAM to the swap space.
  • tar command in Linux: the tar command, which stands for Tape Archive, is used to archive (and optionally compress) files into what is known as a tarball.
tar czvf abc.tar.gz directory1
  • uniq command in Linux: the uniq command filters or removes repeated adjacent lines in a file.
# -c prefixes each line with the number of times it occurred
uniq -c file.txt


Conclusion

In this ultimate guide, you had a chance to revise everything, such as the Linux file system, Linux commands, and Linux log files that you need to know to pass and crack the Linux interview.

Now you have sound knowledge of Linux and are ready for your upcoming interview.

Pass Terraform Certification with top Terraform Interview Questions and Answers

If you are preparing for a DevOps interview or for a Terraform administrator or developer role, consider this Pass Terraform Certification with top Terraform Interview Questions and Answers tutorial as your go-to guide, which will help you pass the certification or the Terraform interview.

Without further delay, let’s get into this ultimate Pass Terraform Certification with top Terraform Interview Questions and Answers guide.


PAPER-1

Q1. What is IAC ?

Answer: IaC stands for Infrastructure as Code, which allows you to write code, check it, compile it, execute it, and, if required, update the code and redeploy. IaC makes it easy to create and destroy infrastructure quickly and efficiently.

Q2. Are there any benefits of Infrastructure as Code ?

Answer: Yes, there are many. IaC allows you to automate multiple things: with one script, using the same syntax throughout, you can update, scale up or down, and destroy resources quickly. Infrastructure as Code also lets you reuse the code and version it in version control. Terraform is an open-source Infrastructure as Code tool.

Q3. What are use cases of Terraform ?

Answer: There are multiple use cases of Terraform such as:

  • Heroku App Setup – PAAS based application
  • Multi Tier apps ( For ex: web apps + DB + API + Caching )
  • Disposable environments such as DEV and Stage for testing purpose.
  • Multi cloud deployment.
  • Resource schedulers such as Kubernetes , Borg which can schedule containers , Spark etc.

Q4. What is Terraform state file ?

Answer: The Terraform state file maintains the status of your infrastructure, i.e., which resources are provisioned or need to be provisioned. It is a JSON-structured file that starts out essentially empty, and as you deploy resources, their IDs and other details are recorded in it.

Q5. What are different format of Terraform configuration file?

Answer: The format of Terraform configuration file is .tf or .tf.json. Some of the example of Terraform configuration file are main.tf, vars.tf, output.tf, terraform.tfvars , provider.tf etc.

Q6. What are Terraform Providers ?

Answer: Terraform providers are one of the most important parts of Terraform; they allow Terraform to connect to remote systems with the help of their APIs. There are different Terraform providers such as the Google provider, the AWS provider, the Azure provider, Oracle, MySQL, Postgres, etc.

Q7. Name three Terraform provisioners that are used in Terraform ?

Answer: Terraform provisioners: Local exec , Remote exec and File.

Q8. What happens when you run Terraform init ?

Answer: terraform init initializes the working directory, downloading all the Terraform modules and providers at their latest allowed versions if there are no dependency locks.

Q9. How do you define Terraform provider version ?

Answer:

terraform {
  required_providers {
    aws = "~> 1.0"
  }
}

Q10. How to update Terraform provider version?

Answer:

terraform init --upgrade

Q11. What is the other way to define Terraform provider other than in Terraform Block?

Answer:

provider "aws" {
  version = "1.0"
}

Q12. In case you have two Terraform providers with same name but need to deploy resources in different regions, What do you do in that case?

Answer: Use alias to solve this issue.

Q13. How do you format and validate Terraform configuration files?

Answer: Use command terraform fmt and terraform validate

Q14. What is Command to Check the current status of infrastructure applied and how you can list resources from your state file?

Answer: terraform show and terraform state list

Q15. What is difference between local exec and remote exec Terraform provisioners?

Answer: local-exec runs commands locally on the machine where you run Terraform (for example, echoing output to the terminal during an apply), while remote-exec executes commands remotely on the created resources, such as an EC2 instance.

Q16. What are the two types of connections used while you use remote exec Terraform provisioner?

Answer: SSH or Winrm

Q17. When does Terraform mark the resources as tainted ?

Answer: When resources are created successfully but fails during provisioning. Terraform represents this by marking the object as “tainted” in the Terraform state, and Terraform will propose to replace it in the next plan you create.

Q18. What happens to tainted resource when you run Terraform Plan next time ?

Answer: Terraform does not ignore them; because tainted objects are considered degraded or risky, Terraform proposes to destroy and recreate (replace) them in the next plan you create and apply.

Q19. How to manually taint a resource and does taint modify your infrastructure ?

Answer: You can use the terraform taint command followed by the resource address. No, tainting does not modify your infrastructure; only the state file is modified, and the resource is replaced on the next apply.
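For reference, a couple of example invocations (the resource address aws_instance.mymachine is just an example; the -replace flag is available from Terraform 0.15.2 onwards as the recommended alternative to taint):

# Mark a resource as tainted so the next apply recreates it
terraform taint aws_instance.mymachine

# Newer Terraform versions: plan/apply a replacement directly without tainting
terraform plan  -replace="aws_instance.mymachine"
terraform apply -replace="aws_instance.mymachine"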

Q20. How to By Pass any failure in Terraform apply ?

Answer: You can use the on_failure setting. Never use continue if you think the failure could cause issues.

PAPER-2

Q1. What does the version = “~ > 1.0 ” mean ?

Answer: It means any version greater than or equal to 1.0 but less than 2.0.

Q2. What is more secure practice in terraform ? Using hard coded credentials or Instance profile ?

Answer: Instance Profile.

Q3. How can you remove resource that failed while terraform apply without affecting entire infrastructure ?

Answer: We can use terraform taint resource.id

Q4. What is Terraform workspace and what is default Terraform workspace name?

Answer: Terraform workspaces let you keep multiple state files for the same configuration and backend. By default there is only one Terraform state file, and if you would like multiple state files associated with one backend, you need workspaces. There is one workspace by default, and it is named default.

Q5. What is the command to list the Terraform workspaces and create new Terraform workspace. ?

Answer: terraform workspace list and terraform workspace new *new_workspace*

Q6. Can you delete default Terraform workspace ?

Answer: No, you cannot delete default Terraform workspace.

Q7. If you want to create one resource in default Terraform workspace and other five resource in different terraform workspace using count, then how can you achieve this?

Answer: Run the below command

resource "aws_instance" "mymachine" {
     count = "${terraform.workspace == "default"  ? 1 : 5 } "
}

Q8. How can you check a single resource attribute in state file?

Answer: terraform state show ‘resource name’.

Q9. How can you bring state file locally on machine and upload to remote location ?

Answer: terraform state pull – To bring the state file to local machine and terraform state push to manually upload the state file to remote location such as S3 bucket in AWS.

Q10. How to remove items from Terraform state file?

Answer: terraform state rm "packet_device.worker"

Q11. How to move items from Terraform state file?

Answer: To move items within Terraform state file run the below command.

terraform state mv 'module.app' 'module.parent.module.app'

Q12. Where are Terraform modules located ?

Answer: Terraform modules can be stored in repositories such as AWS S3 bucket, GIT, local filesystem or Terraform Registry.

Q13. Where are your Terraform providers located ?

Answer: Within the Terraform registry.

Q14. What is the command to check the current status of infrastructure applied and how you can list resources from your state file?

Answer: terraform show and terraform state list

Q15. How do you download Terraform modules in a file ?

Answer: Using module block containing source and version.

Q16. What are terraform Module ?

Answer: Terraform module contains set of Terraform configuration files in a single directory and allows others to reuse for simplicity and ease.

Q17. What is “${}” know as ?

Answer: “${}” is interpolation that was used with previous versions and still can be used.

Q18. What is default data type in Terraform ?

Answer: String.

Q19. What does .terraform directory contains?

Answer: .terraform directory stores downloaded packages and plugins and Terraform provider details.

Q20. What are Core Terraform commands?

Answer: terraform init ➔ terraform plan ➔ terraform apply

PAPER-3

Q1. How do you protect any Terraform provisioner to fail on terraform apply ?

Answer: By using on_failure settings as shown below.

resource "aws_instance" "web" {
  provisioner "local-exec" {
    command  = "echo The server's IP address is ${self.private_ip}"
    on_failure = "continue" # This will ignore the error and continue with creation or destruction or 
    fail       = It will Raise an Error 
  }
}

Q2. Is it Possible to skip Terraform backend ? If Yes, then how?

Answer: Yes, you can skip initializing the Terraform backend by running the below command.

terraform init -backend=false

Q3. How can you remove Plugin installation while initializing the terraform?

Answer: By running the following commands.

terraform init -get-plugins=false

Q4. What is the use of terraform plan command ?

Answer: Terraform plan command helps in creation of execution plan and determines which actions are necessary to achieve the desired state.

Q5. How can you allow terraform to self approve and deploy the infrastructure ?

Answer: Using below command.

terraform apply -auto-approve

Q6. How can you preview the behavior of terraform destroy command ?

Answer: Use the below command that will inform which resources will be destroyed.

terraform plan -destroy

Q7. How can you save the execution Plan ?

Answer: Save the execution Plan by using below command.

terraform plan -out=tf-plan
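Once saved, that plan can be applied exactly as recorded; for example:

terraform apply tf-plan       # Apply exactly the saved plan; no interactive approval is requested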

Q8. How can you see single attribute in state file?

Answer: By using below command.

terraform state show 'resource name'. 

Q9. How can you get detailed exit code while running plan in Terraform ?

Answer: By adding the -detailed-exitcode in terraform plan command.

terraform plan -detailed-exitcode. 

Q10. If you remove EC2 instance manually from AWS console which was created by terraform. What happens when you run terraform apply command next time? Does terraform recreate it ?

Answer: Yes, Terraform recreates it on the next apply, because the resource is still declared in the configuration; the refresh step detects that it is missing and plans to create it again.

Q11. What are Terraform backends?

Answer: A Terraform backend determines where the Terraform state is stored or loaded from. By default it is stored on the local machine, but you can also configure a remote backend such as an AWS S3 bucket.
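As a hedged example, assuming your configuration already declares an empty backend "s3" {} block, the backend settings can be supplied at init time (the bucket, key, and region values below are made up):

terraform init \
  -backend-config="bucket=my-terraform-state-bucket" \
  -backend-config="key=prod/terraform.tfstate" \
  -backend-config="region=us-east-1"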

Q12. What do you mean by state lock ?

Answer: A state lock is applied as soon as you start an operation that writes to the state. It helps prevent corruption of your state file by ensuring only one operation modifies it at a time.

Q13. Can you revert from remote backend to Local backend ? If yes then what next needs to be done?

Answer: Yes you can revert from remote backend to local backend by configuring in Terraform configuration file and later running terraform init command.

Q14. What is Command to Sync or reconcile your terraform state file if you modify terraform created resource manually?

Answer: Use Terraform refresh command.

Q15. Can you use output generated from one Terraform module to other Terraform module? If yes how?

Answer: Yes the output generated from one Terraform module can be used in other Terraform module. You can define in module block by specifying the source and version and then use it.

Q16. What is the correct approach for declaring a meta argument: a = "${a}" or "${}" = a ?

Answer: a = "${a}" is the correct way to use meta arguments. This interpolation syntax is now used very rarely.

Q17. Name some important Data types in Terraform ?

Answer: String , lists , set, map , tuple , bool, number and object.

Q18. How do you convert built in function from String to number ?

Answer: parseint("100", 10)

Q19. Which built in function evaluates expression and return Boolean result ?

Answer: can function.

Q20. How can you encode built in function to a string using JSON Syntax ?

Answer: jsonencode({ "hello" = "America" })


Conclusion

In this ultimate guide(Pass Terraform Certification with top Terraform Interview Questions and Answers), you had a chance to revise everything you needed to pass and crack the Terraform interview.

Now you have sound knowledge of Terraform and are ready for your upcoming interview.

Install ELK Stack on Ubuntu: Elasticsearch, Logstash, and Kibana Dashboard.

If you are looking to quickly install ELK Stack, previously known as Elastic stack, then you have come to the right place.

The ELK Stack mainly contains the components Elasticsearch, Logstash, and Kibana, plus lightweight shippers such as Filebeat and Metricbeat. Combining all these components makes it easier to store, search, analyze, and visualize logs generated from any source in any format.

In this tutorial, you will learn how to install ELK Stack, Elasticsearch, install Logstash, and install Kibana Dashboard on the Ubuntu machine.

Let’s dive in quickly.


Table of Content

  1. Prerequisites
  2. How to Install Elasticsearch on ubuntu
  3. Configuring Elasticsearch on Ubuntu Machine
  4. How to Install Kibana on ubuntu
  5. Viewing Kibana Dashboard on Ubuntu Machine
  6. Verify the Kibana Dashboard
  7. How to Install Logstash
  8. Configuring Logstash with Filebeat
  9. Installing and Configuring Filebeat
  10. Installing and Configuring Metricbeat
  11. Verifying the ELK Stack in the Kibana Dashboard
  12. Conclusion
ELK Stack architecture

Prerequisites

  • Ubuntu machine, preferably version 18.04 or later; if you don't have a machine, you can create an EC2 instance in your AWS account.
  • Recommended: at least 4 GB of RAM and 5 GB of drive space.
  • Apache installed on the Ubuntu machine, which will work as a web server and proxy server.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

How to Install Elasticsearch on ubuntu

Let’s kick off this tutorial by installing the first component of the ELK stack, Elasticsearch; but before you install Elasticsearch, you need Java installed on the machine.

  • Login to Ubuntu machine using your favorite SSH client.
  • First, update your existing list of packages by running the below command.
sudo apt update
  • Now, install java using the apt install command as shown below.
# Installing Java Version: Java SE 11 (LTS)
sudo apt install default-jdk  
Installing Java
  • Next, verify the Java version on your machine. As you can see below, Java has been successfully installed on the Ubuntu machine.
java -version               # To check the installed Java version
To check the installed Java version
  • Further, add the GPG key for the official Elastic repository to your system. This key establishes trust between your machine and the official Elastic repository and enables access to all the open-source software in the ELK stack.
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Adding the GPG key for the official Elastic repository to your system
  • Install the prerequisite software below so that apt can fetch packages over the HTTPS protocol. The apt-transport-https package allows your machine to connect to external repositories over HTTPS (HTTP over TLS).
sudo apt install apt-transport-https ca-certificates curl software-properties-common
Installing software
  • Now, add the Elastic repository to the APT sources so that you can install all the required ELK packages.
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
  • Next, update the system using the following commands.
sudo apt update
  • Now it's time to install Elasticsearch with the following command:
sudo apt-get install elasticsearch
Install Elasticsearch

Configuring Elasticsearch on Ubuntu Machine

Now that you have successfully installed Elasticsearch on your ubuntu machine, it is important to configure the hostname and the port in the Elasticsearch configuration file. Let’s do it.

  • Open the Elasticsearch configuration file with the below command and uncomment the network.host and http.port parameters.
vi /etc/elasticsearch/elasticsearch.yml
Uncomment the network.host and http.port parameters
  • In the Elasticsearch configuration file, update the discovery.type as below (for a single-machine setup this is typically set to single-node).
Update the discovery.type
  • Now, start and enable the Elasticsearch service on the ubuntu machine using below commands.
sudo systemctl enable elasticsearch.service
sudo systemctl start elasticsearch.service
start and enable the Elasticsearch service
Checking the Elasticsearch service status
  • Finally, verify the Elasticsearch installation by running the curl command on your machine against port 9200.
curl http://127.0.0.1:9200
Verify the Elasticsearch service
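  • Optionally, you can also check the overall cluster health through the standard _cluster/health API; a single-node setup usually reports a yellow or green status.
curl http://127.0.0.1:9200/_cluster/health?pretty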

How to Install Kibana on ubuntu

Now that you have successfully installed and configured Elasticsearch, the next component to install in the ELK stack is Kibana, so that you can view the Kibana dashboard. Let’s install Kibana.

  • Installing kibana is simple and you need to run a single command as shown below.
sudo apt-get install kibana
Installing kibana

  • Now Kibana is installed successfully. You will need to make changes in the Kibana configuration file, as you did earlier for Elasticsearch. To make the configuration changes, open the kibana.yml configuration file and uncomment the following lines:
server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]
uncomment the Kibana port, URL, and Elasticsearch host

Kibana works on port 5601 by default

  • Once the configuration file is updated, start and enable the Kibana service that you recently installed.
sudo systemctl start kibana
sudo systemctl enable kibana
starting and enabling the Kibana service

Viewing Kibana Dashboard on Ubuntu Machine

Great, now you have elasticsearch running on Port 9200 and Kibana running on Port 5601. Still, to view the Kibana dashboard on the Ubuntu machine, you need to use the Apache server as your proxy server, allowing the Kibana Dashboard to be viewed on Port 80.

Let’s configure apache to run as a proxy server.

  • Create the configuration file named domain.conf in /etc/apache2/sites-available directory and copy/paste the below configuration file.
vi /etc/apache2/sites-available/domain.conf
<VirtualHost *:80>
    ServerName localhost
    ProxyRequests Off
    ProxyPreserveHost On
    ProxyVia Full
    <Proxy *>
        Require all granted
    </Proxy>
    ProxyPass / http://127.0.0.1:5601/
    ProxyPassReverse / http://127.0.0.1:5601/
</VirtualHost>
  • After changing the Apache configuration file run the below commands so that apache works as proxy server.
sudo a2dissite 000-default
sudo a2enmod proxy proxy_http rewrite headers expires
sudo a2ensite domain.conf
sudo service apache2 restart

Verify the Kibana Dashboard

Earlier in the previous section, you installed kibana and configured it to run behind the apache server. Let’s verify by viewing the Kibana dashboard by navigating to the IP address of the server followed by Port 80.

As you can see below, the Kibana dashboard loads successfully.

Kibana dashboard loads successfully.

How to Install Logstash

Logstash is a lightweight, open-source, server-side data processing pipeline that allows you to collect data from various sources, transform it on the fly, and send it to your desired destination. Logstash is a tool that collects data from multiple sources, stores it in Elasticsearch, and is parsed by Kibana.

With that, let’s install the third component used in Elastic Stack. Let’s install Logstash on an Ubuntu machine.

  • Install Logstash by running the following command.
sudo apt-get install logstash
Installing Logstash
  • Now start and enable the Logstash by running the systemctl commands.
sudo systemctl start logstash
sudo systemctl enable logstash
Starting and Enabling the Logstash
  • Finally verify the Logstash by running the below command.
sudo systemctl status logstash
Verifying the Logstash

Configuring Logstash with Filebeat

Awesome, now you have Logstash installed. You will configure beats in the Logstash; although beats can send the data directly to the Elasticsearch database, it is good to use Logstash to process the data. Let’s configure beats in the Logstash with the below steps.

  • Create a file named logstash.conf in the /etc/logstash/conf.d/ directory and copy/paste the configuration below, which sets up the Filebeat input.
# Specify the incoming logs from the beats in Logstash over Port 5044

input {
  beats {
    port => 5044
  }
}

# By filter syslog messages are sent to Elasticsearch

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

# Specify output will push logstash logs to an Elastisearch instance

output {
  elasticsearch {
    hosts           => ["localhost:9200"]
    manage_template => false
    index           => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type   => "%{[@metadata][type]}"
  }
}

  • Now test your Logstash configuration with below command. If you see Configuration OK message then the setup is properly done.
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
Test your Logstash configuration
  • Finally start and enable Logstash with below command.
sudo systemctl start logstash
sudo systemctl enable logstash

Installing and Configuring Filebeat

The Elastic Stack uses lightweight data shippers called beats ( such as Filebeat, Metricbeat ) to collect data from various sources and transport them to Logstash or Elasticsearch. You will learn to install and configure Filebeat on an Ubuntu machine that will be used to push data in Logstash and further to Kibana.

  • Install Filebeat on the Ubuntu machine using the following command.
sudo apt install filebeat
Installing the Filebeat
  • Next, edit the Filebeat configuration file so that filebeat is able to connect to Logstash. Uncomment the output.logstash and hosts: [“localhost:5044”] and comment the output.elasticsearch: and hosts: [“localhost:9200”].
vi /etc/filebeat/filebeat.yml
Uncomment the output.logstash and hosts: ["localhost:5044"]
Comment out the output.elasticsearch and hosts: ["localhost:9200"]
  • Next, enable the Filebeat system module with the below commands.
sudo filebeat modules enable system
sudo filebeat setup --pipelines --modules system
Enabling the filebeat
  • Now, load the index template from Filebeat into Elasticsearch by running the below command. An index template defines the settings and mappings applied to new indices (an index being a collection of documents with similar characteristics).
sudo filebeat setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
Load the index template from Filebeat into Elasticsearch
  • Also run the below command so that Logstash can further push to Elasticsearch.
sudo filebeat setup -e -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200'] -E setup.kibana.host=localhost:5601
Logstash can further push to Elasticsearch
  • Now you can start and enable Filebeat.
sudo systemctl start filebeat
sudo systemctl enable filebeat
start and enable Filebeat

Installing and Configuring Metricbeat

Previously you learned to install and configure Filebeat, but this time you will learn to install and configure Metricbeat. Metricbeat is a lightweight shipper that you can install on your servers to periodically collect metrics from the operating system and from services running on the server.

  • To download and install Metricbeat, open a terminal window and use the commands that work with your system:
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.16.3-amd64.deb
sudo dpkg -i metricbeat-7.16.3-amd64.deb
  • From the Metricbeat install directory, enable the system module:
sudo metricbeat modules enable system
  • Set up the initial environment for Metricbeat and Start Metricbeat by running the following commands.
sudo metricbeat setup -e
sudo service metricbeat start
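  • To confirm Metricbeat came up correctly (the deb package installs a systemd unit), you can check and enable the service.
sudo systemctl status metricbeat    # Confirm the Metricbeat service is active
sudo systemctl enable metricbeat    # Start Metricbeat automatically at boot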

Verifying the ELK Stack in the Kibana Dashboard

Now you have your ELK Stack set up completely. Filebeat and Metricbeat will begin pushing the syslog and authorization logs to Logstash, which then loads that data into Elasticsearch. To verify that Elasticsearch is receiving the data, query the index with the below command.

 curl -XGET http://localhost:9200/_cat/indices?v
  • As you can see below, the request is successful, which means the data pushed by Filebeat is successfully stored in Elasticsearch.
Filebeat and Metricbeat pushing the data in Elasticsearch
Kibana Dashboard with beats configured
Logs from Metricbeat in Kibana Dashboard.


Conclusion

In this tutorial, you learned how to install ELK Stack, including installing components, i.e., Elasticsearch, Logstash, Kibana Dashboard, Filebeat, and Metricbeat on the Ubuntu machine.

Now that you have a strong understanding of ELK Stack and all the components, which application do you plan to monitor next?

Learn Terraform: The Ultimate terraform tutorial [PART-2]

In the previous guide, Learn Terraform: The Ultimate terraform tutorial [PART-1], you got a jump start into the Terraform world; now let's gain the more advanced knowledge of Terraform that you need to become a Terraform pro.

In this Learn Terraform: The Ultimate terraform tutorial [PART-2] guide, you will learn more advanced level of Terraform concepts such as terraform lifecycle, terraform function, terraform modules, terraform provisioners, terraform init, terraform plan, terraform apply commands and many more.

Without further delay, let’s get into it.


Table of Content

  1. What are Terraform modules?
  2. Terraform provisioner
  3. Terraform Lifecycle
  4. Terraform jsonencode example with Terraform json
  5. Terraform locals
  6. Terraform conditional expression
  7. Terraform dynamic block conditional
  8. Terraform functions
  9. Terraform can function
  10. Terraform try function
  11. Terraform templatefile function
  12. Terraform data source
  13. Terraform State file
  14. Terraform backend [terraform backend s3]
  15. Terraform Command Line or Terraform CLI
  16. Quick Glance of Terraform CLI Commands
  17. Terraform ec2 instance example (terraform aws ec2)

What are Terraform modules?

Terraform modules contain Terraform configuration files that manage a single resource or a group of resources. If you are managing a single resource in a single Terraform configuration file, that is already a Terraform module (the root module); if you manage multiple resources defined in different files and club them together, that is also known as a Terraform module or root module.

A Terraform root module can have multiple individual child modules, data blocks, resources blocks, and so on. To call a child module, you will need to explicitly define the location of the child module using the source argument as shown below.

  • In the below code, the EFS child module lives in the modules/EFS subdirectory relative to the current directory, so the local path is defined as ./modules/EFS.
module "efs" {                            # Module and Label is efs
  source               = "./modules/EFS"  # Define the Path of Child Module                             
  subnets              = var.subnet_ids
  efs_file_system_name = var.efs_file_system_name
  security_groups      = [module.SG.efs_sg_id]
  role_arn             = var.role_arn
}
  • In some cases the modules are stored in the Terraform Registry, GitHub, Bitbucket, a Mercurial repo, an S3 bucket, etc., and to use these repositories as your source you need to declare them as shown below.
module "mymodule1" {                              # Local Path located  Module
  source = "./consul"
}

module "mymodule2" {                              # Terraform Registry located Module
  source = ".hasicorp/consul/aws"
  version = "0.1.0"
}

module "mymodule3" {                              # GIT located  Module
  source = "github.com/automateinfra/"
}

module "mymodule4" {                              # Mercurial located  Module
  source = "hg::https://automateinfra.com/vpc.hg"
}

module "mymodule5" {                               # S3 Bucket located  Module
  source = "s3::https://s3-eu-west-1.amazonaws.com/vpc.zip"
}
The diagram displaying the root module (module1 and module2) containing the child modules such as (ec2, rds, s3, etc.)
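Whichever source you use, the modules have to be fetched into your working directory before Terraform can use them; two standard commands for that are:

terraform init          # Initializes the configuration and downloads every referenced module and provider
terraform get -update   # Re-downloads modules after you change a module's source or version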

Terraform provisioner

Did you know that Terraform allows you to perform actions on your local or remote machine, such as running a command on the local machine, copying files from the local machine to a remote machine or vice versa, passing data into virtual machines, etc.? All this can be done using Terraform provisioners.

Terraform provisioners allow you to pass data to a resource that cannot be passed when creating the resource. Multiple Terraform provisioners can be specified within a resource block, and they are executed in the order they're defined in the configuration file.

The terraform provisioners interact with remote servers over SSH or WinRM. Most cloud computing platforms provide mechanisms to pass data to instances at the time of their creation such that the data is immediately available on system boot. Still, you can pass the data with Terraform provisioners even after creating the resource.

Terraform provisioners also let you declare conditions such as when = destroy and on_failure = continue, and if you wish to run provisioners that aren't directly associated with a specific resource, use a null_resource.
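
As a minimal sketch (the command and the trigger value here are hypothetical), the following null_resource runs a local command only when the resource is destroyed and carries on even if the command fails:

resource "null_resource" "cleanup" {
  triggers = {
    build_id = "example"            # hypothetical trigger value
  }

  provisioner "local-exec" {
    when       = destroy            # run this provisioner only at destroy time
    on_failure = continue           # do not abort the destroy if the command fails
    command    = "echo 'cleaning up' >> cleanup.log"
  }
}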

Let’s look at the example below to declare multiple terraform provisioners.

  • The below code creates two resources: resource1 creates an AWS EC2 instance, and resource2 works with Terraform provisioners to act on the EC2 instance, such as copying the Apache installation instructions from the local machine to the remote machine and then using that file to install Apache on the instance. Note that in practice the file and remote-exec provisioners also need a connection block (and the instance needs an ami and instance_type) so that Terraform can reach the remote machine over SSH.

resource "aws_instance" "resource1" {
  instance_type = "t2.micro"
  ami           = "ami-9876"
  timeouts {                     # Customize your operations longevity
   create = "60m"
   delete = "2h"
   }
}

resource "aws_instance" "resource2" {

  provisioner "local-exec" {
    command = "echo 'Automateinfra.com' >text.txt"
  }
  provisioner "file" {
    source      = "text.txt"
    destination = "/tmp/text.txt"
  }
  provisioner "remote-exec" {
    inline = [
      "apt install apache2 -f /tmp/text.txt",
    ]
  }
}


Terraform Lifecycle

The Terraform lifecycle block defines how resources should be treated, such as ignoring changes to tags or preventing the infrastructure from being destroyed.

There are mainly three arguments that you can declare within the Terraform lifecycle block:

  1. create_before_destroy: By default, Terraform destroys the existing object and then creates a new replacement object, but with the create_before_destroy argument within the Terraform lifecycle the new replacement object is created first, and then the prior object is destroyed.
  2. prevent_destroy: Terraform rejects any plan that would destroy the existing object if you declare prevent_destroy within the Terraform lifecycle.
  3. ignore_changes: When you execute Terraform commands, Terraform by default reports any differences or changes required in the infrastructure; however, if you need Terraform to ignore certain changes, consider using ignore_changes inside the Terraform lifecycle.
  • In the below code aws_instance will ignore any tag changes for the instance, and for azurerm_resource_group the new resource group is created first and the old one is destroyed only once the replacement is ready.
resource "aws_instance" "automate" {
  lifecycle {
    ignore_changes = [
      tags,
    ]
  }
}

resource "azurerm_resource_group" "automate" {
  lifecycle {
    create_before_destroy = true
  }
}
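
The prevent_destroy argument mentioned above follows the same pattern; a minimal sketch (the bucket name here is hypothetical) could look like this:

resource "aws_s3_bucket" "state" {
  bucket = "my-important-bucket"    # hypothetical bucket name

  lifecycle {
    prevent_destroy = true          # any plan that would destroy this bucket is rejected
  }
}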

Terraform jsonencode example with Terraform json

If you need to render a value as a JSON string in your Terraform code, consider using the Terraform jsonencode function. This is a quick section about terraform jsonencode, so let's look at a basic Terraform jsonencode example with Terraform JSON.

  • The below code creates an IAM role policy in which you are defining the policy statement in json format.
resource "aws_iam_role_policy" "example" {
  name   = "example"
  role   = aws_iam_role.example.name
  policy = jsonencode({
    "Statement" = [{
      # This policy allows software running on the EC2 instance to access the S3 API
      "Action" = "s3:*",
      "Effect" = "Allow",
    }],
  })
}

Terraform locals

Terraform locals are values that are declared once but can be referred to multiple times in resource or module blocks without repeating them.

Terraform locals help you decrease the number of lines of code and reduce repetition.

locals {                                         # Declaring the set of related locals in a single block
  instance = "t2.micro"
  name     = "myinstance"
}

locals {                                         # Using the previously declared local values
  common_tags = {
    instance_type = local.instance
    instance_name = local.name
  }
}

resource "aws_instance" "instance1" {            # Using the newly created Local values
  tags = local.common_tags
}

resource "aws_instance" "instance2" {             # Using the newly created Local values
  tags = local.common_tags
}

Terraform conditional expression

There are multiple times when you will encounter conditional expressions in Terraform. Let's look at some important Terraform conditional expression examples below, which will come in handy whenever you use Terraform. Let's get into it.

  • Below are examples of how to retrieve outputs with different conditions.
aws_instance.myinstance.id      # Returns the EC2 instance id.
aws_instance.myinstance[0].id   # Returns the id of the first EC2 instance.
aws_instance.myinstance[1].id   # Returns the id of the second EC2 instance.
aws_instance.myinstance.*.id    # Returns the ids of all EC2 instances.
  • Now, let us see a few more complex examples where different conditions are applied to retrieve outputs.
[for value in aws_instance.myinstance : value.id]   # Returns the ids of all instances.
var.a != "auto" ? var.a : "default-a"               # If var.a is not "auto" then use var.a, else "default-a".
[for a in var.list : a.instance[0].name]            # Equivalent to var.list[*].instance[0].name
[for a in var.list : upper(a)]                      # Iterates over each item in var.list and returns it in upper case.
{for a in var.list : a => upper(a)}                 # Maps each original value to its upper-case form, e.g. {"a" = "A", "c" = "C"}
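
A common place to use a conditional expression is inside a resource argument; a minimal sketch (the variable name and AMI id below are hypothetical) that switches the instance type based on an environment variable could look like this:

variable "environment" {
  type    = string
  default = "dev"
}

resource "aws_instance" "conditional" {
  ami           = "ami-9876"                                            # hypothetical AMI id
  instance_type = var.environment == "prod" ? "t2.medium" : "t2.micro"  # conditional expression
}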


Terraform dynamic block conditional

A Terraform dynamic block is used when a resource or module block needs to contain repeatable nested blocks whose contents cannot be written statically and instead depend on separate objects, such as variables or the outputs of other blocks.

For example application = "${aws_elastic_beanstalk_application.tftest.name}" .

Also, while creating a resource, some nested blocks such as setting (which has name and value arguments) need to be repeated several times, and in that case you can use a dynamic "setting" block. Below is a basic example of a dynamic setting block.

resource "aws_elastic_beanstalk_environment" "tfenvtest" {
  name                = "tf-test-name"
  application         = "${aws_elastic_beanstalk_application.tftest.name}"
  solution_stack_name = "64bit Amazon Linux 2018.03 v2.11.4 running Go 1.12.6"

  dynamic "setting" {
    for_each = var.settings
    content {
      namespace = setting.value["namespace"]
      name = setting.value["name"]
      value = setting.value["value"]
    }
  }
}
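
The var.settings value iterated over above is not shown in the snippet; as an assumption, it could be declared as a list of objects similar to the following sketch:

variable "settings" {
  type = list(object({
    namespace = string
    name      = string
    value     = string
  }))

  default = [
    {
      namespace = "aws:autoscaling:asg"   # hypothetical Elastic Beanstalk setting
      name      = "MinSize"
      value     = "1"
    },
  ]
}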

Terraform functions

Terraform includes multiple built-in functions that you can call from within expressions to transform and combine values. The syntax for a function call is the function name followed by comma-separated arguments in parentheses: min, join, element, jsonencode, etc.

min(2, 3, 4)                                          # The output of this function is 2

join(", ", ["hello", "Automate", "infra"])            # The output of this function is "hello, Automate, infra"

element(["a", "b", "c"], length(["a", "b", "c"])-1)   # The output of this function is "c"

lookup({a="ay", b="bee"}, "c", "unknown?")            # The output of this function is "unknown?"

jsonencode({"hello"="Automate"})                      # The output of this function is {"hello":"Automate"}

jsondecode("{\"hello\": \"Automate\"}")               # The output of this function is { "hello" = "Automate" }
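
These functions are typically used inside resource arguments rather than on their own; a minimal sketch (the AMI id and names are hypothetical) might look like this:

resource "aws_instance" "functions_demo" {
  ami           = "ami-9876"                              # hypothetical AMI id
  instance_type = element(["t2.micro", "t2.medium"], 0)   # evaluates to "t2.micro"

  tags = {
    Name = join("-", ["my", "machine"])                   # evaluates to "my-machine"
  }
}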
                                                                 

Terraform can function

The Terraform can function evaluates the given expression or condition and returns a boolean value (true if the expression is valid, false if evaluating it produces any errors). This special function can catch errors produced when evaluating its argument.

locals {
  instance = {
    myinstance1 = "t2.micro"
    myinstance2 = "t2.medium"
  }
}

can(local.instance.myinstance1) #  Returns true
can(local.instance.myinstance3) #  Returns false

variable "time" {
  validation {
    condition     = can(formatdate("DD MMM YYYY", var.time))   # True only if var.time is a valid timestamp
    error_message = "The time value is not a valid timestamp."
  }
}


Terraform try function

Terraform try function evaluates all of its argument expressions in turn and returns the result of the first one that does not produce any errors.

As you can see below, the Terraform try function checks the expressions and returns the first option that succeeds: t2.micro in the first case, and the literal second-option in the second case.

locals {
  instance = {
    myinstance1 = "t2.micro"
    myinstance2 = "t2.medium"
  }
}

try(local.instance.myinstance1, "second-option") # This is available in local so output is t2.micro
try(local.instance.myinstance3, "second-option") # This is not available in local so output is second-option

Terraform templatefile function

The Terraform templatefile function reads the file at the given path and renders its content as a template using the supplied template variables.

Syntax: templatefile(path, vars)
  • Lets understand the example of the Terraform templatefile function with lists. Given below is the backend.tpl template file with the content shown. When you execute the templatefile() function it renders backend.tpl and assigns the addresses and port to the backend lines.
# backend.tpl

%{ for addr in ipaddr ~}     # Directive that loops over ipaddr
backend ${addr}:${port}      # Line printed for every address
%{ endfor ~}                 # Directive that ends the loop

templatefile("${path.module}/backend.tpl", { port = 8080, ipaddr = ["1.1.1.1", "2.2.2.2"] })

backend 1.1.1.1:8080
backend 2.2.2.2:8080
  • Lets check out another example of the Terraform templatefile function, this time with maps. When you execute the templatefile() function it renders backend.tmpl and prints a set line for each key/value pair in the config map (a = automate and i = infra).
# backend.tmpl

%{ for key,value in config }
set ${key} = ${value}
%{ endfor ~}

  • Execute the function
templatefile("${path.module}/backend.tmpl",
     { 
        config = {
              "a" = "automate"
              "i" = "infra"
           } 
      })

set a = automate
set i = infra
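
Templates rendered with templatefile are often fed into resource arguments such as user_data; a minimal sketch (the template file name and values are hypothetical) could look like this:

resource "aws_instance" "web" {
  ami           = "ami-9876"        # hypothetical AMI id
  instance_type = "t2.micro"

  # Render a hypothetical init.tpl template and pass it as the instance user data
  user_data = templatefile("${path.module}/init.tpl", {
    package_name = "apache2"
  })
}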

Terraform data source

Terraform data source allows you to fetch the data defined outside of Terraform, defined by another separate Terraform configuration, or modified by functions. After fetching the data, Terraform data source can use it as input and apply it to other resources.

Let’s learn with a basic example. In the below code, you will notice that, using a data block, Terraform fetches the details of an existing instance with the provided instance_id.

data "aws_instance" "my-machine1" {          # Fetching the instance
  instance_id = "i-0a0269cf952a02832"
  }
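
After the data source is read, its attributes can be referenced elsewhere; for example (a sketch, with the attribute choice assumed), you could expose the fetched instance's private IP as an output:

output "fetched_private_ip" {
  value = data.aws_instance.my-machine1.private_ip   # referencing the data source attributes
}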

Terraform State file

The main function of the Terraform state file is to store the Terraform state, which contains the bindings between objects in remote systems and the resources defined in your Terraform configuration files. By default, the Terraform state file is stored locally on the machine where you run the Terraform commands, with the name terraform.tfstate.

The Terraform state is stored in JSON format. When you run the terraform show or terraform output command, it fetches the data from the Terraform state file. You can also import existing infrastructure that was created by other means, such as manually or using scripts, into the Terraform state file.

When you work as an individual, it is fine to keep the Terraform state file on your local machine, but when you work in a team, consider storing it in a remote location such as AWS S3. While Terraform is writing to the state, for example during terraform apply, the state file gets locked, which prevents someone else from modifying it at the same time and avoids corruption.

You can store your remote state file in S3, Terraform Cloud, HashiCorp Consul, Google Cloud Storage, Azure Blob Storage, etc.


Terraform backend [terraform backend s3]

A Terraform backend is the location where the Terraform state file resides. The Terraform state file tracks all the resource details that were provisioned or will be provisioned with Terraform commands such as terraform plan or terraform apply.

There are two types of backend: a local backend, which resides on the machine you run Terraform from (a Linux machine, a Windows machine, or wherever you run it), and a remote backend, which could be a SaaS-based URL or a storage location such as an AWS S3 bucket.

Let’s take a look at how you can configure a local backend or a remote backend with terraform backend s3.

# Local Backend
# whenever statefile is created or updates it is stored in local machine.

terraform {
  backend "local" {
    path = "relative/path/to/terraform.tfstate"
  }
}

# Configuring Terraform to use the remote terraform backend s3.
# whenever statefile is created or updates it is stored in AWS S3 bucket. 

terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-2"
  }
}
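
Since the state file gets locked during writes, the S3 backend is commonly paired with a DynamoDB table for state locking; a minimal sketch (bucket, key, and table names are hypothetical) could look like this:

terraform {
  backend "s3" {
    bucket         = "mybucket"                # hypothetical bucket name
    key            = "path/to/my/key"          # hypothetical state file key
    region         = "us-east-2"
    dynamodb_table = "terraform-state-lock"    # hypothetical DynamoDB table used for state locking
    encrypt        = true                      # encrypt the state object at rest
  }
}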

Terraform Command Line or Terraform CLI

The Terraform command-line interface, or Terraform CLI, is used via the terraform command, which accepts a variety of subcommands such as terraform init or terraform plan. Below is a list of the most commonly used subcommands.

  • terraform init: It initializes the provider, module version requirements, and backend configurations.
  • terraform init -input=true ➔ Asks you to provide inputs on the command line if they are required; otherwise Terraform fails.
  • terraform init -lock=false ➔ Disables locking of the Terraform state file, which is not recommended.
  • terraform init -upgrade ➔ Upgrades Terraform modules and Terraform plugins
  • terraform plan: the terraform plan command determines the state of all resources and compares them with the real or existing infrastructure. It uses the Terraform state file data for the comparison and the provider API to check.
  • terraform plan -compact-warnings ➔ Provides a summary of warnings
  • terraform plan -out=path ➔ Saves the execution plan to the specified path.
  • terraform plan -var-file=abc.tfvars ➔ Uses the specific .tfvars file in the directory.
  • terraform apply: To apply the changes in a specific cloud such as AWS or Azure.
  • terraform apply -backup=path ➔ To backup the Terraform state file
  • terraform apply -lock=true ➔ Locks the state file
  • terraform apply -state=path ➔ Specifies the path used to save the state file, or to use it for later runs.
  • terraform apply -var-file=abc.tfvars ➔ Uses the specific .tfvars file which contains environment-wise variables.
  • terraform apply -auto-approve ➔ This command will not prompt to approve the apply command.
  • terraform destroy: It will destroy the Terraform-managed infrastructure, i.e., the existing environment created by Terraform.
  • terraform destroy -auto-approve ➔ This command will not prompt to approve the destroy command.
  • terraform console: Provides interactive console to evaluate the expressions such as join command or split command.
  • terraform console -state=path ➔ Path to local state file
  • terraform fmt: the terraform fmt command formats the configuration files into the proper canonical format.
  • terraform fmt -check ➔ Checks whether the input is formatted
  • terraform fmt -recursive ➔ Formats Terraform configuration files stored in subdirectories.
  • terraform fmt -diff ➔ Displays the difference between the current and the formatted version.
  • terraform validate: checks whether the configuration files are syntactically valid; terraform validate -json ➔ prints the output in JSON format
  • terraform graph: terraform graph generates a visual representation of the execution plan in graph form.
  • terraform graph -draw-cycles
  • terraform graph -type=plan
  • terraform output: terraform output command extracts the values of an output variable from the state file.
  • terraform output -json
  • terraform output -state=path
  • terraform state list: It lists all the resources present in the state file created or imported by Terraform.
  • terraform state list -id=id ➔ Searches for a particular resource by its resource id in the Terraform state file.
  • terraform state list -state=path ➔ Takes the path of the state file and then lists all resources in that Terraform state file.
  • terraform state show: It shows the attributes of a specific resource.
  • terraform state show -state=path ➔ Takes the path of the state file and then shows the attributes of a specific resource.
  • terraform import: This command imports existing resources that were not created using Terraform into the Terraform state file so that Terraform manages them from the next run.
  • terraform refresh: It reconciles the Terraform state file with the real infrastructure; if resources created by Terraform were modified manually or by any other means, refresh syncs those changes into the state file.
  • terraform state rm: This command will remove the resources from the Terraform state file without actually removing the existing resources.
  • terraform state mv: This command moves the resources within the Terraform state file from one location to another
  • terraform state pull: This command will manually download the Terraform state file from a remote state in your local machine.

Quick Glance of Terraform CLI Commands

Initialize     | Provision         | Modify Config      | Check infra          | Manipulate State
terraform init | terraform plan    | terraform fmt      | terraform graph      | terraform state list
terraform get  | terraform apply   | terraform validate | terraform output     | terraform state show
               | terraform destroy | terraform console  | terraform state show | terraform state mv/rm
               |                   |                    | terraform state list | terraform state pull/push
Terraform CLI commands

Terraform ec2 instance example (terraform aws ec2)

Let’s wrap up this ultimate guide with a basic Terraform ec2 instance example or terraform aws ec2.

  • Assuming you already have Terraform installed on your machine.
  • First create a folder of your choice in any directory and a file named main.tf inside it and copy/paste the below content.
# This is the main.tf terraform file.

resource "aws_instance" "my-machine" {
  ami = "ami-0a91cd140a1fc148a"
  for_each = {
    key1 = "t2.micro"
    key2 = "t2.medium"
  }
  instance_type = each.value
  key_name      = each.key
  tags = {
    Name = each.value
  }
}

resource "aws_iam_user" "accounts" {
  for_each = toset( ["Account11", "Account12", "Account13", "Account14"] )
  name     = each.key
}

  • Create another file vars.tf inside the same folder and copy/paste the below content.

#  This is the vars.tf terraform file.

variable "tag_ec2" {
  type = list(string)
  default = ["ec21a","ec21b"]
}
  • Finally, create another file output.tf again in the same folder and copy/paste the below content.
# This is  output.tf terraform file

output "aws_instance" {
   value = "${aws_instance.my-machine.*.id}"
}
output "aws_iam_user" {
   value = "${aws_iam_user.accounts.*.name}"
}


Make sure your machine has Terraform role attached or Terraform credentials configured properly before you run the below Terraform commands.

terraform -version  # It gives Terraform Version information
Finding Terraform version
  • Now Initialize the terraform by running the terraform init command in same working directory where you have all the above terraform configuration files.
terraform init   # To initialize the terraform 
Initializing the terraform using terraform init command
  • Next run the terraform plan command. This command provides the blueprint of what all resources will be deployed before deploying actually.
terraform plan   
Running the terraform plan command
terraform validate   # To validate all terraform configuration files.
Running the terraform validate command
  • Now run the terraform show command, which provides a human-readable output of the state or plan file that gets generated only after the terraform plan command.
terraform show   # To provide human-readable output from a state or plan file.
Running the terraform show command
  • To list all resources within terraform state file run the terraform state list command.
terraform state list 
Running the terraform state list command
terraform apply  # To Actually apply the resources 
Running the terraform apply command
  • To provide graphical view of all resources in configuration files run terraform graph command.
terraform graph  
Running the terraform graph command
  • To Destroy the resources that are provisioned using Terraform run Terraform destroy command.
terraform destroy   # Destroys all your resources or the one which you specified 
Running the terraform destroy command


Conclusion

Now that you have learned everything you need to know about Terraform, you are well on your way to becoming the Terraform leader in your upcoming projects, team, or organization.

So with that, what are you planning to automate using Terraform in your next adventure?

Learn Terraform: The Ultimate terraform tutorial [PART-1]

If you are looking to learn Terraform, then you are in the right place; this Learn Terraform: The Ultimate terraform tutorial guide will help you gain the complete knowledge you need, from the basics to becoming a Terraform pro.

Terraform is an infrastructure as code tool that lets you build and change infrastructure effectively and in a simpler way. With Terraform, you can work with various cloud providers such as Amazon AWS, Oracle, Microsoft Azure, Google Cloud, and many more.

Let’s get started with Learn Terraform: The Ultimate terraform tutorial without further delay.


Table of Content

  1. Prerequisites
  2. What is terraform?
  3. Terraform files and Terraform directory structure
  4. How to declare Terraform variables
  5. How to declare Terraform Output Variables
  6. How to declare Terraform resource block
  7. Declaring Terraform resource block in HCL format.
  8. Declaring Terraform resource block in terraform JSON format.
  9. Declaring Terraform depends_on
  10. Using Terraform count meta argument
  11. Terraform for_each module
  12. Terraform provider
  13. Defining multiple aws providers terraform
  14. Conclusion

Prerequisites

What is terraform?

Let’s kick off this tutorial with: What is Terraform? Terraform is a tool for building, versioning, and updating infrastructure. It is written in the Go language, and the syntax language of Terraform configuration files is HCL, i.e., HashiCorp Configuration Language, which is much easier to work with than YAML or JSON.

Terraform has been in use for quite a while now and has several key features that make this tool more powerful, such as:

  • Infrastructure as code: Terraform configuration files are written in a high-level, human-readable infrastructure as code language.
  • Execution Plan: Terraform provides in-depth details of the execution plan, such as what Terraform will provision before deploying the actual code and which resources it will create.
  • Resource Graph: the graph is an easier way to identify, manage, and quickly understand resources.

Terraform files and Terraform directory structure

Now that you have a basic idea of Terraform and some key features of Terraform. Let’s now dive into Terraform files and Terraform directory structure that will help you write the Terraform configuration files later in this tutorial.

Terraform code, that is, the Terraform configuration files, is organized in a tree-like structure to ease the overall understanding of the code, using the .tf, .tf.json, or .tfvars format. These configuration files are placed inside Terraform modules.

Terraform modules are at the top level of the hierarchy, where the configuration files reside. A Terraform module can in turn call child Terraform modules from local directories, from anywhere on disk, or from the Terraform Registry.

Terraform modules folder structure

A Terraform module mainly contains the following files: main.tf, vars.tf, providers.tf, output.tf, and terraform.tfvars, plus the .terraform directory.

  1. main.tf – the Terraform main.tf file contains the main code where you define which resources you need to build, update, or manage.
  2. vars.tf – the Terraform vars.tf file contains the input variables, which are customizable and are used inside the main.tf configuration file.
  3. output.tf – the Terraform output.tf file is where you declare which output parameters you wish to fetch after Terraform has been executed, that is, after the terraform apply command.
  4. .terraform – this directory contains cached provider and module plugins and the last known backend configuration. It is managed by Terraform and created after you run the terraform init command.
  5. terraform.tfvars – this file contains the values that need to be passed for the variables that are referred to in main.tf and actually declared in vars.tf (a minimal sketch is shown after this list).
  6. providers.tf – the providers.tf file is the most important file, where you define your Terraform providers, such as the Terraform AWS provider or Terraform Azure provider, to authenticate with the cloud provider.
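
As a minimal sketch (the variable names and values below are hypothetical), a terraform.tfvars file simply assigns values to variables that are declared in vars.tf:

# terraform.tfvars  (hypothetical values)
instance_type = "t2.micro"
region        = "us-east-2"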

How to declare Terraform variables

In the previous section, you learned about Terraform files and the Terraform directory structure. Moving further, it is important to learn how to declare Terraform variables in the Terraform configuration file (vars.tf).

Declaring the variables allows you to share modules across different Terraform configurations, making your module reusable. There are different types of variables used in Terraform, such as boolean, list, string, maps, etc. Let’s see how different types of terraform variables are declared.

  • Each input variable in the module must be declared using a variable block as shown below.
  • The label after the variable keyword is the name of the variable, which should be unique within the module.
  • The following arguments can be used within the variable block:
    • default – A default value allows you to declare the value in this block only and makes the variable optional.
    • type – This argument declares the value type.
    • description – You can provide a description of the input variable.
    • validation – To define validation rules, if any.
    • sensitive – If you mark the variable as sensitive, Terraform will not print its value during execution.
    • nullable – Specify whether the variable can be null if you don't need any value for it.
variable "variable1" {                        
  type        = bool
  default     = false
  description = "boolean type variable"
}

variable  "variable2" {                       
   type    = map
   default = {
      us-east-1 = "image-1"
      us-east-2 = "image2"
    }

   description = "map type  variable"
}

variable "variable3" {                   
  type    = list(string)
  default = []
  description = "list type variable"
}

variable "variable4" {
  type    = string
  default = "hello"
  description = "String type variable"
}                        

variable "variable5" {                        
 type =  list(object({
  instancetype        = string
  minsize             = number
  maxsize             = number
  private_subnets     = list(string)
  elb_private_subnets = list(string)
            }))

 description = "List(Object) type variable"
}


variable "variable6" {                      
 type = map(object({
  instancetype        = string
  minsize             = number
  maxsize             = number
  private_subnets     = list(string)
  elb_private_subnets = list(string)
  }))
 description = "Map(object) type variable"
}


variable "variable7" {
  validation {
 # Condition 1 - Checks Length upto 4 char and Later
    condition = "length(var.image_id) > 4 && substring(var.image_id,0,4) == "ami-"
    condition = can(regex("^ami-",var.image_id)    
# Condition 2 - It checks Regular Expression and if any error it prints in terraform error_message =" Wrong Value" 
  }

  type = string
  description = "string type variable containing conditions"
}

Terraform loads variables in the following order, with later sources taking precedence over earlier ones (lowest to highest priority):

  1. Specifying environment variables like export TF_VAR_id='["id1","id2"]'
  2. Specifying the variables in the terraform.tfvars file
  3. Specifying the variables in the terraform.tfvars.json file
  4. Specifying the variables in *.auto.tfvars or *.auto.tfvars.json files
  5. Specifying the variables on the command line with the -var and -var-file options

How to declare Terraform Output Variables

In the previous section, you learned how to use Terraform variables in the Terraform configuration file. As learned earlier, a module contains one more important file, output.tf, which contains the Terraform output variables.

  • In the below output.tf file you can see there are three different Terraform output variables named:
  • output1, which will store and display the ARN of the instance after running the terraform apply command.
  • output2, which will store and display the public IP address of the instance after running the terraform apply command.
  • output3, which will store but not display the private IP address of the instance after running the terraform apply command, because of the sensitive argument.
# Output variable which will store the arn of instance and display after terraform apply command.

output "output1" {
  value = aws_instance.my-machine.arn
}

# Output variable which will store instance public IP and display after terraform apply command
 
output "output2" {
  value       = aws_instance.my-machine.public_ip
  description = "The public IP address of the instance."
}

output "output3" {
  value = aws_instance.server.private_ip
# Using sensitive to prevent Terraform from showing the ouput values in terrafom plan and apply command.  
  senstive = true                             
}

How to declare Terraform resource block

You are doing great learning about the Terraform configuration files, but do you know that your module contains one more important file, main.tf, which allows you to manage, create, and update resources with Terraform, such as creating an AWS VPC? To manage a resource, you need to define it in a Terraform resource block.

# Below Code is a resource block in Terraform

resource "aws _vpc" "main" {    # <BLOCK TYPE> "<BLOCK LABEL>" "<BLOCK LABEL>" {
cidr_block = var.block          # <IDENTIFIER> =  <EXPRESSION>  #Argument (assigns value to name)
}                             

Declaring Terraform resource block in HCL format.

Now that you have an idea about the syntax of terraform resource block let’s check out an example where you will see resource creation using Terraform configuration file in HCL format.

  • The below code creates two resources: resource1 creates an AWS EC2 instance, and the other works with Terraform provisioners to install Apache on the EC2 instance. The timeouts block customizes how long certain operations are allowed to take.

There are some special arguments that can be used with resources such as depends_on, count, lifecycle, for_each and provider, and lastly provisioners.

resource "aws_instance" "resource1" {
  instance_type = "t2.micro"
  ami           = "ami-9876"
  timeouts {                          # Customize your operations longevity
   create = "60m"
   delete = "2h"
   }
}

resource "aws_instance" "resource2" {
  provisioner "local-exec" {
    command = "echo 'Automateinfra.com' >text.txt"
  }
  provisioner "file" {
    source      = "text.txt"
    destination = "/tmp/text.txt"
  }
  provisioner "remote-exec" {
    inline = [
      "apt install apache2 -f /tmp/text.txt",
    ]
  }
}

Declaring Terraform resource block in terraform JSON format.

Terraform language can also be expressed in terraform JSON syntax, which is harder for humans to read and edit but easier to generate and parse programmatically, as shown below.

  • The below example is the same one you previously created using the HCL configuration, but this time using the Terraform JSON syntax. Here, too, the code creates two resources: resource1 → an AWS EC2 instance, and the other resource works with Terraform provisioners to install Apache on the EC2 instance.
{
  "resource": {
    "aws_instance": {
      "resource1": {
        "instance_type": "t2.micro",
        "ami": "ami-9876"
      }
    }
  }
}


{
  "resource": {
    "aws_instance": {
      "resource2": {
        "provisioner": [
          {
            "local-exec": {
              "command": "echo 'Automateinfra.com' >text.txt"
            }
          },
          {
            "file": {
              "source": "example.txt",
              "destination": "/tmp/text.txt"
            }
          },
          {
            "remote-exec": {
              "inline": ["apt install apache2 -f tmp/text.txt"]
            }
          }
        ]
      }
    }
  }
}

Declaring Terraform depends_on

Now that you have learned how to declare a Terraform resource block in HCL format, recall that within the resource block you can declare special arguments such as depends_on. Let’s learn how to use the Terraform depends_on meta-argument.

Use the depends_on meta-argument to handle hidden resource or module dependencies that Terraform can’t automatically handle.

  • In the below example, while creating the aws_rds_cluster resource you need information about the aws_db_subnet_group, so aws_rds_cluster is dependent on it; to specify that dependency you declare the depends_on meta-argument within aws_rds_cluster.
resource "aws_db_subnet_group" "dbsubg" {
    name = "${var.dbsubg}" 
    subnet_ids = "${var.subnet_ids}"
    tags = "${var.tag-dbsubnetgroup}"
}

# Component 4 - DB Cluster and DB Instance

resource "aws_rds_cluster" "main" {
  depends_on                   = [aws_db_subnet_group.dbsubg]  
  # This RDS cluster is dependent on Subnet Group


Using Terraform count meta argument

Another special argument is Terraform count. By default, Terraform creates a single resource defined in a Terraform resource block. But at times, you want to manage multiple objects of the same kind, such as creating four AWS EC2 instances of the same type in the AWS cloud, without writing a separate block for each instance. Let’s learn how to use the Terraform count meta-argument.

  • In the below code Terraform will create 4 instances of type t2.micro with the AMI ami-0742a572c2ce45ebf, as shown below.
resource "aws_instance" "my-machine" {
  count = 4 
  
  ami = "ami-0742a572c2ce45ebf"
  instance_type = "t2.micro"
  tags = {
    Name = "my-machine-${count.index}"
         }
}
Using Terraform count to create four EC2 instances
  • Similarly in the below code terraform will create 4 AWS IAM users named user1, user2, user3 and user4.
resource "aws_iam_user" "users" {
  count = length(var.user_name)
  name = var.user_name[count.index]
}

variable "user_name" {
  type = list(string)
  default = ["user1","user2","user3","user4"]
}
Using Terraform count to create four IAM users

Terraform for_each module

Earlier in the previous section, you learned that Terraform count is used to create multiple resources with the same characteristics. If you need to create multiple resources in one go but with different parameters, then the Terraform for_each meta-argument is for you.

The for_each meta-argument accepts a map or a set of strings and creates an instance for each item in that map or set. Let’s look at the example below to better understand terraform for_each.

Example-1 Terraform for_each module

  • In the below example, you will notice for_each contains two keys (key1 and key2) and two values (t2.micro and t2.medium) inside the for each loop. When the code is executed then for each loop will create:
    • One instance with key as “key1” and instance type as “t2.micro”
    • Another instance with key as “key2” and instance type as “t2.medium”.
  • Also, the below code will create four IAM users named Account1, Account2, Account3 and Account4.
resource "aws_instance" "my-machine" {
  ami = "ami-0a91cd140a1fc148a"
  for_each  = {
      key1 = "t2.micro"
      key2 = "t2.medium"
   }
  instance_type    = each.value	
  key_name         = each.key
  tags =  {
   Name = each.value 
	}
}

resource "aws_iam_user" "accounts" {
  for_each = toset( ["Account1", "Account2", "Account3", "Account4"] )
  name     = each.key
}

Terraform for_each module example 1 to launch ec2 instance and IAM users

Example-2 Terraform for_each module

  • In the below example, you will notice for_each is set to a variable of type map(object) that defines all the required arguments (instance_type, key_name, associate_public_ip_address and tags). When the code is executed, each instance gets the specific values defined for it.
resource "aws_instance" "web1" {
  ami                         = "ami-0a91cd140a1fc148a"
  for_each                    = var.myinstance
  instance_type               = each.value["instance_type"]
  key_name                    = each.value["key_name"]
  associate_public_ip_address = each.value["associate_public_ip_address"]
  tags                        = each.value["tags"]
}

variable "myinstance" {
  type = map(object({
    instance_type               = string
    key_name                    = string
    associate_public_ip_address = bool
    tags                        = map(string)
  }))
}

myinstance = {
  Instance1 = {
    instance_type               = "t2.micro"
    key_name                    = "key1"
    associate_public_ip_address = true
    tags = {
      Name = "Instance1"
    }
  },
  Instance2 = {
    instance_type               = "t2.medium"
    key_name                    = "key2"
    associate_public_ip_address = true
    tags = {
      Name = "Instance2"
    }
  }
}
Terraform for_each module example 2 to launch multiple ec2 instances

Example-3 Terraform for_each module

  • In the below example, similarly, you will notice instance_type uses toset, which contains two values (t2.micro and t2.medium). When the code is executed, the instance type takes each value from the set.
locals {
  instance_type = toset([
    "t2.micro",
    "t2.medium",
  ])
}

resource "aws_instance" "server" {
  for_each      = local.instance_type

  ami           = "ami-0a91cd140a1fc148a"
  instance_type = each.key
  
  tags = {
    Name = "Ubuntu-${each.key}"
  }
}
Terraform for_each module example 3 to launch multiple ec2 instances

Terraform provider

Terraform depends on plugins to connect or interact with cloud providers or API services, and to do this you need a Terraform provider. There are several Terraform providers stored in the Terraform Registry, such as the Terraform AWS provider or the Terraform Azure provider.

Terraform configurations must declare which providers they require so that Terraform can install and use them. Some providers require configuration (like endpoint URLs or cloud regions) before use. A provider can also provide local utilities, like generating random strings or passwords. You can create a single configuration or multiple configurations for one provider, and you can have multiple providers in your code.

Providers are stored inside the Terraform Registry; some are in-house providers (companies that create their own providers). Providers are written in the Go language.

Let’s learn how to define a single provider and then define the provider’s configurations inside terraform.

# Defining the Provider requirement 

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
    postgresql = {
      source = "cyrilgdn/postgresql"
    }
  }
  required_version = ">= 0.13"   # New way to define version 
}


# Defining the Provider Configurations and names are Local here i.e aws,postgres,random

provider "aws" {
  assume_role {
  role_arn = var.role_arn
  }
  region = var.region
}

provider "random" {}

provider "postgresql" {
  host                 = aws_rds_cluster.main.endpoint
  username             = username
  password             = password
}

Defining multiple aws providers terraform

In the previous section, you learned how to use the Terraform AWS provider to connect to AWS resources, which is great, but with that you can only work in one particular AWS region. However, consider using multiple AWS provider configurations if you need to work with multiple regions.

  • To create multiple configurations for a given provider, include multiple provider blocks with the same provider name, and to use the additional non-default configuration, use the alias meta-argument as shown below.
  • In the below code there is one AWS Terraform provider that works with the us-east-1 region by default, and if you need to work with another region, you declare the same provider again but with a different region and the alias argument.
  • For creating a resource in the us-west-1 region, declare provider = aws.<alias-name> in the resource block as shown below.
# Defining Default provider block with region us-east-1

provider "aws" {      
  region = us-east-1
}

# Name of the provider is same that is aws with region us-west-1 thats why used ALIAS

provider "aws" {    
  alias = "west"
  region = us-west-1
}

# No need to define default Provider here if using Default Provider 

resource "aws_instance" "resource-us-east-1" {}  

# Define Alias Provider here to use west region  

resource "aws_instance" "resource-us-west-1" {    
  provider = aws.west
}
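
If you also use child modules (as covered earlier), an aliased provider configuration can be passed to a module explicitly; a minimal sketch (the module path is hypothetical) could look like this:

module "vpc_west" {
  source = "./modules/vpc"      # hypothetical child module path

  providers = {
    aws = aws.west              # the module's default aws provider becomes the aliased west configuration
  }
}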

Quick note on the Terraform version: in Terraform v0.12 there was no way to specify a provider source address, but from Terraform v0.13 onwards you have the option to add a source address.

# This is how you define a provider in Terraform v0.13 and onwards
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 1.0"
    }
  }
}

# This is how you define a provider in Terraform v0.12
terraform {
  required_providers {
    aws = "~> 1.0"
  }
}


Conclusion

In this Ultimate Guide, you learned what Terraform is, what a Terraform provider is, and how to declare the Terraform AWS provider and use it to interact with cloud services.

Now that you have gained a handful of knowledge on Terraform, continue with the PART-2 guide and become a Terraform pro.

Learn Terraform: The Ultimate terraform tutorial [PART-2]

Ultimate Ansible interview questions and answers

If you are preparing for a DevOps or Ansible administrator interview, consider this guide your friend to practice Ansible interview questions and answers and help you pass the exam or interview.

Without further delay, let’s get into this Ultimate Ansible interview questions and answers guide, where you will have three papers to practice, each containing 20 Ansible questions.


PAPER-1

Q1. What is Ansible ?

Answer: Ansible is an open source configuration management tool written in the Python language. Ansible is used to deploy or configure software, tools, or files on remote machines quickly using the SSH protocol.

Q2. What are the advantages of Ansible?

Answer: Ansible is simple to manage, agentless, and has great performance, as it is quick to deploy and doesn’t require much effort to set up, and it is reliable.

Q3. What are the things which Ansible can do ?

Answer: Deployment of apps such as Apache Tomcat or AWS EC2 instances, configuration management such as configuring multiple files on different remote nodes, automating tasks, and IT orchestration.

Q4. Is it possible to have the Ansible control node on Windows?

Answer: No, you can have the Ansible controller host or node only on a Linux-based operating system; however, you can configure Windows machines as your remote hosts.

Q5. What are the requirements when the remote host is a Windows machine?

Answer: Ansible needs PowerShell 3.0 and at least .NET 4.0 to be installed on the Windows host, and a WinRM listener should be created and activated before you deploy or configure a remote Windows node.

Q6. What are the different components of Ansible?

Answer: APIs, modules, hosts, playbooks, cloud, networking, and inventories.

Q7. What are Ansible ad hoc commands?

Answer: Ansible ad hoc commands are single-line commands that are generally used for testing purposes, or when you need to take an action that is not repeatable and rarely used, such as restarting a service on a machine. Below is an example of an Ansible ad hoc command.

The below command starts the apache service on the remote node.

ansible all -m ansible.builtin.service -a "name=apache2 state=started"

Q8. What is the Ansible command to check the uptime of all servers?

Answer: Below is the Ansible command to check the uptime of the servers. This command provides output stating how long each remote node has been up.

ansible all -a /usr/bin/uptime 
Ansible ad hoc command to check the server’s uptime, which is 33 days.

Q9. How to install the Apache service using an Ansible command?

Answer: To install the Apache service using an Ansible command you can use an ad hoc command as shown below. In the below command, the -b flag is used to become root.

ansible all -m apt -a  "name=apache2 state=latest" -b  

Q10. What are the steps or commands to install Ansible on an Ubuntu machine?

Answer: You will need to execute the below commands to install Ansible on an Ubuntu machine.

# Update your system packages using apt update command
sudo apt update 
# Install below prerequisites package to work with PPA repository.
sudo apt install software-properties-common 
# Install Ansible PPA repository (Personal Package repository) 
sudo apt-add-repository --yes --update ppa:ansible/ansible
# Finally Install ansible
sudo apt install ansible

Q11. What are Ansible facts?

Answer: Ansible facts allow you to fetch and store data or values, such as the hostname or IP address, from the remote hosts.

Below is an example showing how you can print Ansible facts using an Ansible playbook named main.yml.

# main.yml 
---
- name: Ansible demo
  hosts: web
  remote_user: ubuntu
  tasks:
    - name: Print all available facts
      ansible.builtin.debug:
        var: ansible_facts

 ansible-playbook main.yml
The output of the Ansible facts using ansible-playbook

Q12. What are Ansible tasks?

Answer: Ansible tasks are the units of work that an Ansible playbook performs, such as copying files, installing packages, editing configurations on a remote node, and restarting services on a remote node.

Let’s look at a basic Ansible task. In the below code the Ansible task makes sure the Apache service is running on the remote node.

tasks:
  - name: make sure apache is running
    service:
      name: httpd
      state: started

Q13. What are Ansible roles?

Answer: An Ansible role is a way to structure your playbooks so that you can easily understand and work with them. An Ansible role contains different folders for simplicity, letting you load files from the files folder, variables from the vars folder, handlers, tasks, and so on.

You can create different Ansible roles and reuse them as many times as you need.

Q14. Command to create a user on a Linux machine using Ansible?

Answer: To create a user on a Linux machine using Ansible you can use an ad hoc command as shown below.

ansible all -m ansible.builtin.user -a "name=name password=password" -b

Q15. What is Ansible Tower?

Answer: Ansible Tower is a web-based solution that makes Ansible even easier to use for IT teams. Ansible Tower can be used for free for up to 10 nodes. It captures all recent activity, like the status of hosts, integrates notifications about all necessary updates, and also schedules Ansible jobs very well.

Q16. How to connect to remote machines in Ansible?

Answer: After installing Ansible, configure the Ansible inventory with the list of hosts (grouping them accordingly) and connect to them using the SSH protocol. After you configure the Ansible inventory, you can test the connectivity between the Ansible controller and the remote nodes using the ping module to ping all the nodes in your inventory.

ansible all -m ping

You should see output for each host in your inventory, similar to this:

aserver.example.org | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}

Q17. Does Ansible support AWS?

Answer: Yes. There are lots of AWS modules in Ansible that can be used to manage AWS resources. Refer to the Ansible collections in the Amazon namespace.

Q18. Which Ansible module allows you to copy files from a remote machine to the control machine?

Answer: The Ansible fetch module. The fetch module is used for fetching files from remote machines and storing them locally in a file tree, organized by hostname.

- name: Store file from remote node directory to host directory 
  ansible.builtin.fetch:
    src: /tmp/remote_node_file
    dest: /tmp/fetched_host_file

Q19. Where can you find the Ansible inventory by default?

Answer: The default location of Ansible inventory is /etc/ansible/hosts.

Q20. How can you check the Ansible version?

Answer: To check the Ansible version, run the ansible --version command below.

ansible --version
Checking the Ansible version by using the ansible --version command.


PAPER-2

Q1. Is Ansible agentless?

Answer: Yes, Ansible is an open source tool that is agentless. Agentless means that when you install Ansible on the controller host and use it to deploy or configure changes on remote nodes, the remote nodes don’t require any agent or software to be installed.

Q2. What is the primary use of Ansible?

Answer: Ansible is used in IT infrastructure to manage and deploy software applications on remote machines.

Q3. What are Ansible hosts or remote nodes?

Answer: Ansible hosts are the machines or nodes on which the Ansible controller host deploys software. An Ansible host could be a Linux, RedHat, or Windows machine, etc.

Q4. What is CI (Continuous Integration) ?

Answer: CI, also known as continuous integration, is primarily used by developers. Successful continuous integration means the developer’s code is built, tested, and then pushed to a shared repository whenever there is a change in the code.

Q5. What is the main purpose of a role in Ansible?

Answer: The main purpose of an Ansible role is to reuse content by using the proper Ansible folder structure. These folders contain the configuration files and content that need to be declared in various places and modules; roles are used to minimize re-writing that code.

Q6. What is Control Node?

Answer: The control node is the node on which Ansible is installed. Before you install Ansible, make sure Python is already installed on the machine.

Q7. Can you have Windows machine as Controller node ?

Answer: No.

Q8. What is the other name for Ansible hosts?

Answer: Ansible hosts can also be called managed nodes. Ansible is not installed on managed nodes.

Q9. What is the host file in Ansible?

Answer: The inventory file is also known as the host file in Ansible, and it is stored by default at /etc/ansible/hosts.

Q10. What are collections in Ansible?

Answer: Ansible collections are a distribution format that can include playbooks, roles, modules, and plugins.

Q11. What is an Ansible module?

Answer: Ansible contains various modules that each have a specific purpose, such as copying data, adding a user, and many more. You can invoke a single module within a task defined in a playbook, or several different modules in a playbook.

Q12. What is a task in Ansible?

Answer: To perform any action you need a task. Similarly, in Ansible you need a task to run a module. With an Ansible ad hoc command you can execute a task only once.

Q13. What is Ansible Playbook?

Answer: An Ansible playbook is an ordered list of tasks that you run; playbooks are designed to be human-readable and are developed in a basic text language. For example, in the below Ansible playbook there are two tasks: the first creates a user named adam and the second creates a user named shanky on the remote node.

---
- name: Ansible Create user functionality module demo
  hosts: web # Defining the remote server
  tasks:

    - name: Add the user 'Adam' with a specific uid and a primary group of 'sudo'
      ansible.builtin.user:
        name: adam
        comment: Adam
        uid: 1095
        group: sudo
        createhome: yes        # Defaults to yes
        home: /home/adam   # Defaults to /home/<username>

    - name: Add the user 'shanky' with a specific uid and a primary group of 'sudo'
      ansible.builtin.user:
        name: shanky
        comment: shanky
        uid: 1089
        group: sudo
        createhome: yes        # Defaults to yes
        home: /home/shanky  # Defaults to /home/<username>

Creating two users using ansible-playbook

Q14. Where do you create basic inventory in Ansible?

Answer: /etc/ansible/hosts

Q15. What is Ansible Tower ?

Answer: Ansible Tower is a web-based solution that makes Ansible even easier to use for IT teams. Ansible Tower can be used for free for up to 10 nodes. It captures all recent activity, like the status of hosts, integrates notifications about all necessary updates, and also schedules Ansible jobs very well.

Q16. What is the command for running the Ansible playbook?

Answer: The below is the command to run or execute the ansible-playbook.

ansible-playbook my_playbook

Q17. On which protocol does Ansible communicate to remote node?

Answer: SSH

Q18. How to use ping module to ping all the nodes?

Answer: Below is the command which you can use to ping all the remote nodes.

ansible all -m ping

Q19. Provide an example to run a live command on all of your nodes?

Answer:

ansible all -a "/bin/echo hello"
Printing hello on a remote node using an Ansible command.

Q20. How to run an Ansible command with privilege escalation (sudo and similar)?

Answer: The below command executes the Ansible command with root access by using the --become flag.

ansible all -m ping -u adam --become

PAPER-3

Q1. Which module allows you to create a directory?

Answer: Ansible file module allows you to create a directory.

Q2. How to define the number of parallel processes used while communicating with hosts?

Answer: By setting the forks value in Ansible; to set forks you need to edit the ansible.cfg file.

Q3. Is Ansible agentless configuration management Tool ?

Answer: Yes

Q4. What is Ansible Inventory ?

Answer: Ansible works against managed nodes or hosts to create or manage infrastructure. We list these hosts or nodes in a file known as the inventory. The inventory can be in one of two formats: INI or YAML.

Q5. How to create an Ansible inventory in the INI format?

Answer:

automate2.mylabserver.com
[httpd]
automate3.mylabserver.com
automate4.mylabserver.com
[labserver]
automate[2:6].mylabserver.com

Q6. How to create an Ansible inventory in the YAML format?

Answer:

all:
  hosts:
    automate2.mylabserver.com:
  children:
    httpd:
      hosts:
        automate3.mylabserver.com:
        automate4.mylabserver.com:
    labserver:
      hosts:
        automate[2:6].mylabserver.com:

Q7. What is an Ansible tag?

Answer: When you want to run or skip only specific parts of a playbook, you can use Ansible tags. You can apply Ansible tags at the block level, playbook level, individual task level, or role level.

tasks:
- name: Install the servers
  ansible.builtin.yum:
    name:
    - httpd
    - memcached
    state: present
  tags:
  - packages
  - webservers

Q8. What are the key things required for a playbook?

Answer: Hosts should be configured in the inventory, tasks should be declared in the Ansible playbook, and Ansible should already be installed.

Q9. How to reuse existing Ansible tasks?

Answer: You can reuse tasks by importing them with import_tasks. Ansible import_tasks imports a list of tasks to be added to the current playbook for subsequent execution.

Q10. How can you secure the data in an Ansible playbook?

Answer: You can secure the data using ansible-vault to encrypt it and later decrypt it. Ansible Vault is a feature of Ansible that allows you to keep sensitive data such as passwords or keys in encrypted files, rather than as plaintext in playbooks or roles.

Q11. What is Ansible Galaxy?

Answer: Ansible Galaxy is a repository for Ansible Roles that are available to drop directly into your Playbooks to streamline your automation projects.

Q12. How can you download roles from Ansible Galaxy ?

Answer: Below code allows you to download roles from Ansible Galaxy.

ansible-galaxy install username.role_name

Q13. What are Variables in ansible?

Answer: Ansible variables are assigned values which are then used in tasks and templates. You can create variables by defining them in a file, passing them at the command line, or registering the return value(s) of a task as a new variable, and then reference them wherever needed.
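
A minimal sketch showing both styles (the variable name package_name and the uname -r command are only illustrative):

---
- hosts: all
  vars:
    package_name: httpd
  tasks:
    - name: Use a variable defined in the playbook
      ansible.builtin.debug:
        msg: "About to install {{ package_name }}"

    - name: Register the output of a command as a new variable
      ansible.builtin.command: uname -r
      register: kernel_version

    - name: Use the registered variable
      ansible.builtin.debug:
        msg: "Kernel version is {{ kernel_version.stdout }}"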

Q14. Command to generate a SSH key-pair for connecting with remote machines?

Answer: ssh-keygen

Q15. What is Ansible Tower ?

Answer: Ansible Tower is a web-based solution that makes Ansible even easier to use for IT teams. It can be used for up to 10 nodes. It captures all recent activity, such as the status of hosts, integrates notifications about necessary updates, and schedules Ansible jobs.

Q16. What is the command for running a playbook?

Answer:

ansible-playbook my_playbook

Q17. Does Ansible support AWS?

Answer: Yes. Ansible has many modules for provisioning and managing AWS resources.

Q18. How to create encrypted files using Ansible?

Answer: By using the below ansible-vault command.

ansible-vault create file.yml 

Q19. What are some key features of Ansible Tower?

Answer:

With Ansible Tower, you can view dashboards and see what is going on in real time: job updates, who ran a playbook or ansible command, integrated notifications, scheduled Ansible jobs, and remote command execution.

Q20. What is the first-line syntax of any Ansible playbook?

Answer: The first line of an Ansible playbook is three dashes (---), which marks the start of a YAML document.

---   # The first Line Syntax of ansible playbook


Conclusion

In this ultimate guide, you had a chance to revise everything you needed to pass the interview with Ansible interview questions and answers.

Now you have sound knowledge of Ansible and its various components, modules, and features, and are ready for your upcoming interview.

The Ultimate Python interview questions: learn python the hard way

If you are preparing for a DevOps or Python developer interview, consider this guide your friend for practicing Python interview questions and answers ("learn Python the hard way") to help you pass the exam or interview.

Without further delay, let’s get into this Ultimate Python interview questions and answers guide, where you will have 20 questions to practice.

Let’s get into it.


PAPER

Q1. What is difference between List and Tuple in Python?

Answer: Lists are mutable, i.e., they can be modified, whereas tuples are immutable. As you can see in the below code, the list can be modified; however, the tuple cannot.

List = ["a","i",20]
Tuples = ("a","i",20)
List = ["a","i",20]
Tuples = ("a","i",20)
print(List)
print(Tuples)
List[1] = 22
print(List)
Tuples[1] = "n"
print(Tuples)

After running the above code, you will notice that the list can be modified, but the tuple cannot.

Running Python code to modify list and tuple

Q2. What are key features of Python.

Answer: Python is an interpreted language, which means it does not need to be compiled. It is easy to understand and write, and it can be used in various automation areas such as AI technologies. The latest version of Python (at the time of writing) is 3.10.

Q3. Give Examples of some famous Python modules?

Answer: Some examples of Python modules are os, json, sys, math, and random.

Q4. What are local and global variables in Python ?

Answer: Local variables are declared inside a function, and their scope is limited to that function only; global variables are those whose scope covers the entire program. A sketch is shown below.
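
A minimal sketch (the variable and function names are only illustrative):

counter = 10                 # global variable: visible to the whole program

def show_message():
    message = "hello"        # local variable: exists only inside this function
    print(message, counter)  # the global variable is readable here

show_message()               # prints: hello 10
print(counter)               # the global variable is still accessible here
# print(message)             # would raise NameError: message is local to show_message()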

Q5. What are functions in Python ?

Answer: A function is a block of code that is only executed when it is called. To declare a function, use the def <function-name>(): syntax, and call it using <function-name>() as shown below.

def function(n):
    a, b = 0, 1
    while a < n:          # print the Fibonacci numbers smaller than n
        print(a)
        a, b = b, a + b
    print(n)

function(200)
python shanky.py
Running Python function

Q6. What is __init__ ?

Answer: __init__ is a method that is called automatically when an object or instance of a class is created; it initializes the object's attributes. Every class has an __init__ method (a default one is inherited if you do not define it).
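
A minimal sketch (the class and attribute names are only illustrative):

class Person:
    def __init__(self, name):
        # called automatically when Person(...) is instantiated
        self.name = name      # initialize the instance attribute

p = Person("Adam")
print(p.name)                 # Adam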

Q7. What is the Lambda function in Python?

Answer: Lambda functions are also known as the anonymous function, which takes any number of parameters but is written in a single statement.

square = lambda num: num ** 2

Q8. What is self in Python?

Answer: self refers to the instance or object of a class. It is used with __init__ and other instance methods and is explicitly included as the first parameter.

Q9. How can you Randomize the list in Python ?

Answer: To randomize a list in Python, consider importing the random module as shown below.

from random import shuffle

letters = ["a", "b", "c", "d", "e"]
shuffle(letters)      # shuffles the list in place
print(letters)

Q10. What do *args and **kwargs mean?

Answer: *args is used when you are not sure about the number of positional arguments that will be passed to a function; they are collected into a tuple. Similarly, **kwargs is used for keyword arguments whose number you are not sure about; they are collected in dictionary format. A sketch is shown below.
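
A minimal sketch (the function name and arguments are only illustrative):

def demo(*args, **kwargs):
    print(args)    # positional arguments are collected into a tuple
    print(kwargs)  # keyword arguments are collected into a dictionary

demo(1, 2, 3, user="adam", role="devops")
# (1, 2, 3)
# {'user': 'adam', 'role': 'devops'}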

Q11. Does Python Supports OOPs concept?

Answer: Yes, Python supports OOP concepts; you can create classes and objects and use features such as inheritance, encapsulation, and polymorphism.

Q12. Name some Python Libraries ?

Answer: Python libraries are collections of modules: built-in modules (written in C) that provide access to system functionality such as file I/O that would otherwise be inaccessible to Python programmers, as well as modules written in Python that provide standardized solutions for many problems that occur in everyday programming, for example Pandas and NumPy.

Q13. What are various ways to import the modules in python?

Answer:

import math                      # plain import
import math as mathes            # import with an alias name
from flask import Flask          # import a specific name from a module

Q14. What is Python Flask?

Answer: Python Flask is a web framework that makes a developer's life easy by encouraging code reuse and providing extensions to build reliable, scalable, and maintainable web apps. With the Flask web framework, you can create anything from static to dynamic applications and work with API requests.

There are other Python web frameworks apart from Flask, such as Tornado, Pyramid, and Django.
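
A minimal sketch of a Flask app that ties these pieces together (the route and message are only illustrative):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from Flask"

if __name__ == "__main__":
    app.run(debug=True)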

Related POST: Python Flask Tutorial: All about Python flask

Q15. How can we open a file in Python?

Answer:

with open("myfile.txt", "r") as newfile:

Q16. How can you see statistics of file located in directory?

Answer: To see the stats of a file located in a directory, consider using the os module.

import os

os.stat("file_name") # These stats include st_mode, the file type and permissions, and st_atime, the time the item was last accessed.

Q17. How do you define method and URL Bindings in Python Flask?

Answer:

@app.route("/login", methods = ["POST"])

Related POST: Python Flask Tutorial: All about Python flask

Q18. How does Python Flask gets executed in Python ?

Answer:

if __name__ == '__main__':
    app.run(debug=True)

Q19. Which built-in function evaluates a value or expression and returns a Boolean result?

Answer: The bool() built-in function; it evaluates the given value in a Boolean context and returns True or False.

Q20. How can you make Python Script executable in Unix ?

Answer: Below are the steps that you would need to make a Python script executable in Unix.

  • Define the path of the Python interpreter as the first line (the shebang) of the script.
#!/usr/local/bin/python
  • Next, make the script executable by using the below command.
chmod +x abc.py
  • Finally, run the script.
./abc.py


Conclusion

In this ultimate guide, you had a chance to revise everything you needed to pass the interview with Python interview questions and answers.

Now you have sound knowledge of Python and its various components, modules, and features, and are ready for your upcoming interview.

How to create a new Docker image using Dockerfile: Dockerfile Example

Are you looking to create your own Docker image? Base Docker images provide basic software applications or an operating system, but when you need software with additional functionality of your choice, consider creating a new Docker image with a Dockerfile.

In this tutorial, you will learn how to create your own Docker image using a Dockerfile, which contains a set of instructions and the arguments for each instruction. Let’s get started.


Table of Content

  1. Prerequisites
  2. What is Dockerfile?
  3. Dockerfile instructions or Dockerfile Arguments
  4. Dockerfile Example
  5. Conclusion

Prerequisites

If you’d like to follow along step-by-step, you will need the following installed:

  • Ubuntu machine with Docker installed. This tutorial uses Ubuntu 21.10 machine.
  • Docker v19.03.8 installed.

What is Dockerfile?

If you are new to dockerfile, you should know what dockerfile is. Dockerfile is a text file that contains all the instructions a user could call on the command line to assemble an image from a base image. The multiple instructions could be using the base image, updating the repository, Installing dependencies, copying source code, etc.

Docker can build images automatically by reading the instructions from a Dockerfile. Each instruction in a Dockerfile creates another layer (each instruction takes its own space). The new image is built using the docker build command (executed by the Docker daemon); if any instruction fails and you rebuild the image, the previously built layers are reused from the cache.

A new Docker image can be built by simply executing the docker build command, or, if your Dockerfile lives at a different path, by using the -f flag.

docker build . 
docker build -f /path/to/a/Dockerfile .

Dockerfile instructions or Dockerfile Arguments

Now that you have a basic idea about what is docker and dockerfile, let’s understand some of the most important Dockerfile instructions or Dockerfile Arguments.

  • FROM: The FROM instruction initializes a new build stage and sets the base image for subsequent instructions. FROM may appear multiple times in a Dockerfile (for multi-stage builds).
  • ARG: ARG is the only instruction that comes before FROM. The ARG instruction defines a variable that users can pass while building the image using the docker build command such as
 --build-arg <varname>=<value> flag
  • EXPOSE: The EXPOSE instruction informs Docker about the ports the container listens on. It does not actually publish the port; it serves as documentation so that admins know which ports are intended to be published.
  • ENV: The ENV instruction sets the environment variable in the form of key-value pair.
  • ADD: The ADD instruction copies new files, directories, or remote file URLs from your docker host and adds them to the filesystem of the image.
  • VOLUME: The VOLUME instruction creates a mount point and acts as externally mounted volumes from the docker host or other containers.
  • RUN: The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. RUN can be declared in two ways: the shell form or the exec (executable) form.
  • Shell way: the command is run in a shell i.e.,/bin/sh. If you need to run multiple commands, use the backslash.
  • Executable way: RUN [“executable”, “param1”, “param2”] . If you need to use any other shell than /bin/sh, you should consider using executable.
RUN /bin/bash -c 'source $HOME/.bashrc; echo $HOME' # Shell way
RUN ["/bin/bash", "-c", "echo HOME"] # Executable way (other than /bin/sh)
  • CMD: The CMD instruction sets the default command that is executed when the container starts, just like the command given to docker run. There can be only one CMD instruction in a Dockerfile; if you list more than one, only the last takes effect. CMD has three forms, as shown below:
    • CMD ["executable","param1","param2"] (exec form, this is the preferred form)
    • CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
    • CMD command param1 param2 (shell form)

Let’s take an example with the below Dockerfile; if you need your container to sleep for 5 seconds and then exit, use the below instructions.

FROM ubuntu
CMD sleep 5
  • Run the below docker run command to create a container; it sleeps for 5 seconds, and then the container exits.
docker run <new-image>
  • But if you wish to modify the sleep time, say to 10, then you will need to either change the value manually in the Dockerfile or pass the value on the docker run command line.
FROM ubuntu
CMD sleep 10    # Manually changing the sleep time in the Dockerfile
  • Now execute the docker run command as shown below.
docker run <new-image> sleep 10  # Manually changing the sleep time on the command line

You can also use the entry point shown below to automatically add the sleep command in the running docker run command, where you just need to provide the value of your choice.

FROM ubuntu
ENTRYPOINT ["sleep"]

With Entrypoint, when you execute the docker run command with a new image, sleep will automatically get appended in the command as shown below. Still, the only thing you need to specify is the number of seconds it should sleep.

docker run <new-image> sleep <add-value-of-your-choice>

So, in the case of the CMD instruction, command-line parameters passed to docker run completely replace the CMD, whereas in the case of ENTRYPOINT, the parameters passed are appended.

If you don’t provide the <add-value-of-your-choice>, this will result in an error. To avoid the error, you should consider using both CMD and ENTRYPOINT but make sure to define both CMD and ENTRYPOINT in json format.

FROM ubuntu
ENTRYPOINT ["sleep"]   # This command will always run the sleep command
CMD ["5"]              # If you pass a parameter on the command line it is used; otherwise 5 is the default
  • ENTRYPOINT: An ENTRYPOINT allows you to run the commands as an executable in a container. ENTRYPOINT is preferred when defining a container with a specific executable. You cannot override an ENTRYPOINT when starting a container unless you add the --entrypoint flag.

Dockerfile Example

Up to now, you learned how to declare dockerfile instructions and executable of each instruction, but unless you create a dockerfile and build a new image with these commands, they are not doing much. So let’s learn and understand by creating a new dockerfile. Let’s begin.

  • Login to the ubuntu machine using your favorite SSH client.
  • Create a folder under home directory named dockerfile-demo and switch to this directory.
mkdir ~/dockerfile-demo
cd dockerfile-demo/
  • Create a file inside the ~/dockerfile-demo directory named dockerfile and copy/paste the below code. The below code contains the FROM instruction, which sets the base image as ubuntu, and RUN instructions that update the package index and install nginx. Once you run the docker container, "Image created" is printed on the container's terminal using the echo command.
FROM ubuntu:20.04
MAINTAINER shanky@automateinfra.com
RUN apt-get update 
RUN apt-get install -y nginx 
CMD ["echo", "Image created"] 
  • Now build and tag the new image by running the below docker build command from the same directory.
docker build -t docker-image:tag1 .
Building the docker image and tagging successfully
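
Assuming the image was tagged docker-image:tag1 as in the build command above, you can start a container from it with the below command.

docker run docker-image:tag1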

As you can see below, once the docker container is started, "Image created" is printed on the container's screen.

Running the container


Conclusion

In this tutorial, you learned what a Dockerfile is, many of the Dockerfile instructions and their forms, and finally, how to create your own Docker image using a Dockerfile.

So which application are you planning to run using the newly created docker image?

Ultimate docker interview questions for DevOps

If you are looking to crack your DevOps engineer interview, docker is one of the important topics that you should prepare. In this guide, understand the docker interview questions for DevOps that you should know.

Let’s go!



Q1. What is Docker ?

Answer: Docker is a lightweight containerization technology. It allows you to automate deployment in portable containers that are built from Docker images.

Q2. What is Docker Engine.

Answer: Docker Engine is the server component where Docker is installed. The Docker client and server can remain on the same host, or the client can connect to a remote host. Clients connect to the server using the CLI or RESTful APIs.

Q3. What is use of Dockerfile and what are common instructions used in docker file?

Answer: We can either pull a Docker image and use it directly to run our apps, or we can create one more layer on top of it according to our needs; that is where a Dockerfile comes into play. With a Dockerfile, you can design the image accordingly. Some common instructions are FROM, LABEL, RUN, and CMD.

Q4. What are States of Docker Containers ?

Answer: Running , Exited , Restarting and Paused.

Q5. What is DockerHUB ?

Answer: Docker Hub is a cloud-based registry for Docker images. You can either pull images from or push your images to Docker Hub.
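
For example (the repository name myapp:v1 is hypothetical, and pushing requires running docker login first):

docker pull nginx
docker push <your-dockerhub-username>/myapp:v1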

Q6. Where are Docker Volumes stored ?

Answer: Docker Volumes are stored in /var/lib/docker/volumes.

Q7.Write a Dockerfile to Create and Copy a directory and built using Python Module ?

Answer:

FROM python:3
WORKDIR /app
COPY . /app

Q8. What is the medium of communication between docker client and server?

Answer: Communication between the Docker client and server (daemon) is handled by a REST API, carried over a UNIX socket or a TCP network interface.

Q9. How to start Docker container and create it ?

Answer: The below command will create the container as well as run it. If you just want to create a container without starting it, use docker container create <image-name>.

docker run -i -t centos:6

Q10.What is difference between EXPOSE PORT and PUBLISH Port ?

Answer: Exposing a port makes it available only locally, i.e., to the container (and linked containers). Publishing a port maps it to the Docker host so that it can be reached from the outside world.

Q11. How can you publish Port i.e Map Host to container port ? Provide an example with command

Answer:

Here -p is the mapping between the host port and the container port, and -d is detached mode, i.e., the container runs in the background and only the container ID is displayed on the screen.

docker container run -d -p 80:80 nginx

Q12. How do you mount a volume in docker ?

Answer:

docker container run -d --name mycontainer --mount source=vol1,target=/app nginx

Q13. How Can you run multiple containers in single service ?

Answer: We can achieve this by using Docker Swarm or Docker Compose. Docker Compose uses YAML-formatted files; a minimal sketch is shown below.
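
A minimal docker-compose.yml sketch with two hypothetical services, started together with docker-compose up -d:

version: "3"
services:
  web:
    image: nginx
    ports:
      - "80:80"
  cache:
    image: memcached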

Q14. Where do you configure logging driver in docker?

Answer: We can do that in the daemon.json file (by default /etc/docker/daemon.json on Linux).
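
For example, a minimal /etc/docker/daemon.json sketch that sets the json-file logging driver with log-rotation options (restart the Docker daemon after changing it):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}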

Q15. How can we go inside the container ?

Answer:

docker exec -it "Container_ID"  /bin/bash

Q16. How can you scale your Docker containers?

Answer: By using Docker compose command.

docker-compose --file scale.yml scale myservice=5

Q17. Describe the Workflow from Docker file to Container execution ?

Answer: Dockerfile ➤ docker build ➤ Docker image (or pull from a registry) ➤ docker run -it ➤ Docker container ➤ docker exec -it ➤ Bash

Q18. How to monitor your docker in production ?

Answer:

docker stats : Gets information about CPU and memory usage of running containers.

docker events : Shows activities of containers such as attach, detach, die, rename, commit, etc.

Q19. Is Docker swarm an approach to orchestrate containers ?

Answer: Yes, Docker Swarm is one approach to orchestrate containers; another is Kubernetes.

Q20. How can you check docker version?

Answer: The docker version command, which gives you the client and server version information together.

Q21. How can you tag your Docker Image ?

Answer: Using docker tag command.

docker tag "ImageID" "Repository":tag

Conclusion

In this guide, you learned some of the basic questions around the docker interview questions for DevOps that you should know.

There are more interview guides published on automateinfra.com; which one did you like the most?

How to Install Apache tomcat using Ansible.

If you are looking to install Apache Tomcat instances, Ansible is a great way to do it.

Ansible is an agentless automation tool that manages machines over the SSH protocol by default. Once installed, Ansible does not add a database, and there will be no daemons to start or keep running.

With Ansible, you can create an ansible playbook and use it to deploy dozens of tomcat in one go. In this tutorial, you will learn how to install apache tomcat using Ansible. Let’s get started.


Table of Content

  1. Prerequisites
  2. Building tomcat Ansible-playbook on the Ansible Controller
  3. Running Ansible-playbook on the Ansible Controller
  4. Tomcat files and Tomcat directories on a remote node
  5. Conclusion

Prerequisites

This post will be a step-by-step tutorial. If you’d like to follow along, be sure you have:

  • An Ansible controller host. This tutorial will be using Ansible v2.9.18.
  • A remote Linux computer to test out the tomcat installation. This tutorial uses Ubuntu 20.04.3 LTS as the remote node.
  • An inventory file and one or more hosts are configured to run Ansible commands and playbooks. The remote Linux computer is called webserver, and this tutorial uses an inventory group called web.

Ensure your remote machine's IP address is inside /etc/ansible/hosts (either as a single remote machine or defined within a group); a minimal sketch is shown below.
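
A minimal /etc/ansible/hosts sketch matching the prerequisites above (replace the placeholder with your remote node's address):

[web]
webserver ansible_host=<remote-node-ip>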

Building tomcat Ansible-playbook on the Ansible Controller

Ansible is an automation tool used for deploying applications and systems easily; it could be Cloud, Services, orchestration, etc. Ansible uses YAML Language to build playbooks which are finally used to deploy or configure the required change. To deploy tomcat, let’s move ahead and create the ansible-playbook.

  • SSH into or log in to your Ansible controller machine.
  • Create a file named my_playbook3.yml inside the /etc/ansible folder and paste the below code.

The below playbook contains all the tasks to install Tomcat on the remote node. The first task updates your system packages using the apt module, followed by tasks that create the tomcat user and group. The next tasks install Java, download and unpack Tomcat, set the necessary ownership and permissions on the Tomcat directory, copy a systemd service file to the remote node (a sketch of that file is shown after the playbook), and start the Tomcat service.

---
- name: Install Apache Tomcat10 using ansible
  hosts: webserver
  remote_user: ubuntu
  become: true
  tasks:
    - name: Update the System Packages
      apt:
        upgrade: yes
        update_cache: yes

    - name: Create a Tomcat User
      user:
        name: tomcat

    - name: Create a Tomcat Group
      group:
        name: tomcat

    - name: Install JAVA
      apt:
        name: default-jdk
        state: present


    - name: Create a Tomcat Directory
      file:
        path: /opt/tomcat10
        state: directory
        owner: tomcat
        group: tomcat
        mode: "0755"
        recurse: yes

    - name: download & unarchive tomcat10 
      unarchive:
        src: https://mirrors.estointernet.in/apache/tomcat/tomcat-10/v10.0.4/bin/apache-tomcat-10.0.4.tar.gz
        dest: /opt/tomcat10
        remote_src: yes
        extra_opts: [--strip-components=1]

    - name: Change ownership of tomcat directory
      file:
        path: /opt/tomcat10
        owner: tomcat
        group: tomcat
        mode: "u+rwx,g+rx,o=rx"
        recurse: yes
        state: directory

    - name: Copy Tomcat service from local to remote
      copy:
        src: /etc/tomcat.service
        dest: /etc/systemd/system/
        mode: 0755

    - name: Start and Enable Tomcat 10 on server
      systemd:
        name: tomcat
        state: started
        enabled: true
        daemon_reload: true
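
The playbook above copies a systemd unit file from /etc/tomcat.service on the controller to the remote node; a minimal sketch of such a unit file, assuming the /opt/tomcat10 layout used in the playbook (paths may need adjusting for your system):

[Unit]
Description=Apache Tomcat 10
After=network.target

[Service]
Type=forking
User=tomcat
Group=tomcat
Environment="CATALINA_HOME=/opt/tomcat10"
Environment="CATALINA_BASE=/opt/tomcat10"
ExecStart=/opt/tomcat10/bin/startup.sh
ExecStop=/opt/tomcat10/bin/shutdown.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target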


Running Ansible-playbook on the Ansible Controller

In the previous section, you created the Ansible playbook, which is great, but it does not do much unless you run it. To deploy it, use the ansible-playbook command.

Assuming you are logged into Ansible controller:

  • Now run the playbook using the below ansible-playbook command.
ansible-playbook my_playbook3.yml

As you can see below, all the tasks completed successfully; if a task's status shows ok, the host was already in the desired state, while a changed status means Ansible performed the task on the remote node.

Running the ansible-playbook in the Ansible controller host
  • Next, verify on the remote machine that Apache Tomcat is installed and started by using the below commands.
systemctl status tomcat 
service tomcat status
Verifying the tomcat service on the remote node
  • You can also verify by checking the running processes.
ps -ef | grep tomcat
ps -aux | grep tomcat
Checking the tomcat process


Tomcat files and Tomcat directories on a remote node

Now that you have successfully installed the tomcat on the remote node and verified the tomcat service, it is equally important to check the tomcat files created and the purpose of each of them.

  • Firstly all the tomcat files and tomcat directories are stored under <tomcat-installation-directory>/*.

Your installation directory is represented by the environment variable $CATALINA_HOME.

  • The tomcat directories and files should be owned by the tomcat user.
  • The tomcat user should be a member of the tomcat group.
Verify all files of tomcat