Everything you should know about Amazon VPC (AWS VPC)

In this theoretical tutorial, you will learn everything you need to know about Amazon VPC (AWS VPC). I am sure you will have very few questions left about AWS VPC after going through this detailed guide.

Let's dive in right now.

Table of Contents

  1. What is Amazon VPC?
  2. VPC CIDR Range
  3. What is AWS VPC Peering?
  4. What is AWS VPC Endpoint?
  5. What are VPC Flow logs?
  6. AWS VPC pricing
  7. AWS CLI commands to create VPC
  8. Defining an AWS VPC with Terraform
  9. How to Publish VPC Flow Logs to CloudWatch
  10. Create IAM trust Policy for IAM Role
  11. Creating IAM Policy to publish VPC Flow Logs to Cloud Watch Logs
  12. Create VPC flow logs using AWS CLI
  13. Conclusion

What is Amazon VPC?

Amazon Virtual Private Cloud allows you to launch AWS resources in an isolated, separate virtual network over which you have complete control.

In every AWS account, each Region comes with a default VPC. It has a default subnet in each Availability Zone in the Region, an attached internet gateway, a route in the main route table that sends all traffic to the internet gateway, and DNS settings that automatically assign public DNS hostnames to instances with public IP addresses and enable DNS resolution through the Amazon-provided DNS server.

Therefore, an EC2 instance launched in a default subnet automatically has access to the internet. A Virtual Private Cloud contains subnets, and each subnet is tied to a particular Availability Zone.

If you associate an Elastic IP address with the eth0 network interface of your instance, its current public IPv4 address (if it had one) is released to the EC2-VPC public IP address pool.
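For illustration, associating an Elastic IP address with an instance's primary (eth0) network interface via the AWS CLI might look like the sketch below; the instance and allocation IDs are hypothetical placeholders.

```shell
# Allocate a new Elastic IP address for use in a VPC (returns an AllocationId)
aws ec2 allocate-address --domain vpc

# Associate the Elastic IP with the instance's eth0 network interface;
# the IDs below are placeholders for your own resources
aws ec2 associate-address \
    --instance-id i-0123456789abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0
```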

A VPC and its subnets are each assigned an IP range, also known as a CIDR range, which defines the network range in which all resources will be created.

You also need to create route tables, which determine the network connectivity of your VPC with other networks and AWS services, such as:

  • Peering connection: a connection between two VPCs that lets you share resources between them.
  • Gateway endpoints:
    • Internet Gateway connects public subnets to the internet.
    • NAT Gateway connects private subnets to the internet. To allow an instance in your VPC to initiate outbound connections to the internet but prevent unsolicited inbound connections from the internet, you can use a network address translation (NAT) device.
    • NAT maps multiple private IPv4 addresses to a single public IPv4 address. You can configure the NAT device with an Elastic IP address and connect it to the internet through an internet gateway
    • If you want an instance in a non-default (private) subnet to be directly reachable from the internet, attach an internet gateway to its VPC (if its VPC is not a default VPC) and associate an Elastic IP address with the instance.
    • VPC Endpoints connect to AWS services privately without using NAT or IGW.
  • Transit Gateway acts as a central hub to route traffic between your VPCs, VPN connections, and AWS Direct Connect connections.
  • Connect your VPCs to your on-premises networks using AWS Virtual Private Network (AWS VPN).

VPC sharing allows you to launch AWS resources in a centrally managed Virtual Private Cloud. The account that owns the VPC shares one or more subnets with other accounts (participants) that belong to the same organization in AWS Organizations.

  • You must enable resource sharing from the management account for your organization.
  • You can share non-default subnets with other accounts within your organization.
  • VPC owners are responsible for creating, managing, and deleting the resources associated with a shared VPC. VPC owners cannot modify or delete resources created by participants, such as EC2 instances and security groups.

If the tenancy of a VPC is default, EC2 instances running in the VPC run on hardware that’s shared with other AWS accounts by default. If the tenancy of the VPC is dedicated, the instances always run as Dedicated Instances, which are instances that run on hardware that’s dedicated for your use.
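As a sketch, the tenancy can be chosen when the VPC is created with the AWS CLI; dedicated below is only an example value.

```shell
# Create a VPC whose instances run as Dedicated Instances by default
aws ec2 create-vpc --cidr-block 10.0.0.0/16 --instance-tenancy dedicated
```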

VPC CIDR Range

  • CIDR stands for Classless Inter-Domain Routing.
  • IPv4 contains 32 bits.
  • A VPC CIDR block size must be between /16 and /28
  • A subnet CIDR block size must also be between /16 and /28
  • You can assign additional private IP addresses, known as secondary private IP addresses, to instances that are running in a VPC. Unlike a primary private IP address, you can reassign a secondary private IP address from one network interface to another.
  • The allowed block size is between a /16 netmask (65,536 IP addresses) and /28 netmask (16 IP addresses)
    • 10.0.0.0 – 10.255.255.255 (10/8 prefix), e.g. 10.0.0.0/16
    • 172.16.0.0 – 172.31.255.255 (172.16/12 prefix), e.g. 172.31.0.0/16
    • 192.168.0.0 – 192.168.255.255 (192.168/16 prefix), e.g. 192.168.0.0/20
  • You can associate secondary IPv4 CIDR blocks with your VPC
  • VPCs that are associated with the Direct Connect gateway must not have overlapping CIDR blocks
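The address counts above follow directly from the prefix length: a /n block contains 2^(32 − n) IPv4 addresses. A tiny Bash sketch of that arithmetic:

```shell
#!/usr/bin/env bash
# Number of IPv4 addresses in a CIDR block = 2^(32 - prefix length)
for prefix in 16 24 28; do
    echo "/$prefix -> $((2 ** (32 - prefix))) addresses"
done
```

Running it prints 65,536 addresses for /16 and 16 for /28, matching the allowed VPC block sizes.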

What is AWS VPC Peering?

A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Resources in peered VPCs can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region. Traffic between peered VPCs never traverses the public internet.
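As a rough sketch, a peering connection between two of your own VPCs can be requested and accepted with the AWS CLI; the VPC and peering-connection IDs below are placeholders.

```shell
# Request a peering connection between two VPCs you own (IDs are placeholders)
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-11111111 \
    --peer-vpc-id vpc-22222222

# Accept the request, using the pcx- ID returned by the previous command
aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-0123456789abcdef0

# Note: each VPC's route tables still need routes to the peer's CIDR block
```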

What is AWS VPC Endpoint?

VPC Endpoints connect to AWS services privately without using NAT or IGW.
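For example, a gateway endpoint for Amazon S3 can be created with the AWS CLI as sketched below; the VPC and route table IDs are placeholders.

```shell
# Create a gateway VPC endpoint for S3 and add routes to the given route table
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-11111111 \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0
```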

What are VPC Flow logs?

VPC Flow Logs let you monitor traffic and network access in your virtual private cloud (VPC) by capturing detailed information about the traffic going to and from network interfaces in your VPCs.

AWS VPC pricing

There’s no additional charge for using a VPC. There are charges for some VPC components, such as NAT gateways, IP Address Manager, traffic mirroring, Reachability Analyzer, and Network Access Analyzer.

AWS CLI commands to create VPC

aws ec2 create-vpc --cidr-block 10.0.0.0/24 --query Vpc.VpcId --output text
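Building on that command, a follow-up sketch (with example AZ and CIDR values) captures the new VPC's ID, creates a subnet inside it, and enables DNS hostnames; adapt the values to your own setup.

```shell
# Capture the new VPC's ID, then carve a subnet out of its CIDR block
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/24 --query Vpc.VpcId --output text)

# Create a /25 subnet inside the VPC in a specific Availability Zone
aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.0.0/25 \
    --availability-zone us-east-1a

# Enable DNS hostnames so public instances get public DNS names
aws ec2 modify-vpc-attribute --vpc-id "$VPC_ID" --enable-dns-hostnames Value=true
```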

Defining an AWS VPC with Terraform

You can deploy a VPC using Terraform as well, with just a few lines of code. To understand Terraform basics, refer to the Terraform documentation.

The Terraform code below contains a resource block that creates an Amazon VPC with cidr_block "10.0.0.0/16", default tenancy, and the tag "Name" = "main".

resource "aws_vpc" "main" {
  cidr_block       = "10.0.0.0/16"
  instance_tenancy = "default"

  tags = {
    Name = "main"
  }
}

How to Publish VPC Flow Logs to CloudWatch

When publishing to CloudWatch Logs, flow log data is published to a log group, and each network interface has a unique log stream in the log group. Log streams contain flow log records. To publish the logs, you need to:

  • Create an IAM role.
  • Attach an IAM trust policy to the IAM role.
  • Create an IAM policy and attach it to the IAM role.
  • Finally, create the VPC flow logs using the AWS CLI.

Create IAM trust Policy for IAM Role

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "vpc-flow-logs.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
} 
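With the trust policy above saved to a local file (the file name trust-policy.json is just an example), the role can be created with the AWS CLI; the role name matches the one referenced later in the create-flow-logs command.

```shell
# Create the IAM role that VPC Flow Logs will assume,
# using the trust policy saved as trust-policy.json
aws iam create-role \
    --role-name publishFlowLogs \
    --assume-role-policy-document file://trust-policy.json
```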

Creating IAM Policy to publish VPC Flow Logs to CloudWatch Logs

The VPC flow logs policy below has sufficient permissions to publish flow logs to the specified log group in CloudWatch Logs.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
      ],
      "Resource": "*"
    }
  ]
}
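Assuming the permissions policy above is saved as flow-logs-policy.json (a file name chosen here for illustration), it can be attached to the role as an inline policy:

```shell
# Attach the permissions policy inline to the flow logs role
aws iam put-role-policy \
    --role-name publishFlowLogs \
    --policy-name flowLogsPermissions \
    --policy-document file://flow-logs-policy.json
```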

Create VPC flow logs using AWS CLI

aws ec2 create-flow-logs --resource-type Subnet --resource-ids subnet-1a2b3c4d --traffic-type ACCEPT --log-group-name my-flow-logs --deliver-logs-permission-arn arn:aws:iam::123456789101:role/publishFlowLogs

Conclusion

By now, you should have a sound knowledge of what AWS VPC is and how its components fit together.


How to Install AWS CLI Version 2 and Setup AWS credentials

Are you new to AWS Cloud, or tired of managing your AWS Cloud infrastructure with manual back-and-forth steps? If so, you should consider installing the AWS Command Line Interface (AWS CLI) and managing your infrastructure with it.

In this tutorial, you will learn how to install AWS CLI Version 2 and set up AWS credentials in the AWS CLI tool.

Let’s dive into it.


Table of Contents

  1. What is AWS CLI?
  2. Installing AWS CLI Version 2 on a Windows machine
  3. Creating an IAM user in AWS account with programmatic access
  4. Configure AWS credentials using aws configure
  5. Verify aws configure from AWS CLI by running a simple command
  6. Configuring AWS credentials using a Named profile
  7. Verify the Named profile from AWS CLI by running a simple command
  8. Configuring AWS credentials using environment variables
  9. Conclusion

What is AWS CLI?

The AWS CLI provides direct access to the public APIs of AWS services across your AWS accounts, letting you interact with them from command-line shells in your local environment or remotely.

You can control multiple AWS services from the AWS CLI and automate them through scripts. You can run AWS CLI commands from a Linux shell such as bash, zsh, tcsh, and from a Windows machine, you can use command prompt or PowerShell to execute AWS CLI commands.

The AWS CLI is available in two versions, and the installation is much the same for both; in this tutorial, let's learn how to install AWS CLI version 2.

Installing AWS CLI Version 2 on a Windows machine

Now that you have a basic idea of the AWS CLI and how it connects to AWS services from various command prompts and shells, let's learn how to install AWS CLI Version 2 on a Windows machine.

  • First, open your favorite browser and download the AWS CLI installer for Windows from here
Downloading AWS CLI Interface v2
  • Next, select I accept the terms in the License Agreement and then click the Next button.
Accepting the terms in the License Agreement of AWS CLI
  • Further, on the Custom Setup page, provide the installation path and then click the Next button.
Setting the download location of AWS CLI
  • Now, click the Install button to install AWS CLI version 2.
Installing the AWS CLI on a Windows machine
  • Finally, click the Finish button as shown below.
Finishing the installation of the AWS CLI on a Windows machine
  • Verify the AWS CLI version by opening the command prompt and running the below command.
aws --version

As you can see below, AWS CLI version 2 is successfully installed on the Windows machine.

Checking the AWS CLI version

Creating an IAM user in AWS account with programmatic access

There are two ways to connect to an AWS account: the first is providing a username and password on the AWS login page, and the other is configuring an IAM user's Access key ID and secret access key in the AWS CLI to connect programmatically.

Earlier, you installed AWS CLI successfully on a Windows machine, but you will need an IAM user with programmatic access to run commands from it.

Let’s learn how to create an IAM user in an AWS account with programmatic access, Access key ID, and secret keys.

  1. Open your favorite web browser and navigate to the AWS Management Console and log in.
  2. While in the Console, click on the search bar at the top, search for ‘IAM’, and click on the IAM menu item.
Checking the IAM AWS service
  3. To create a user, click Users → Add user, provide the user name myuser, make sure to tick the Programmatic access checkbox under Access type (which enables an access key ID and secret access key), and then click the Permissions button.
Adding the IAM user in AWS Cloud
  4. Now select the "Attach existing policies directly" option under Set permissions and look for the "AdministratorAccess" policy using the policy filter in the search box. This policy allows myuser full access to AWS services.
Attaching the admin rights to the IAM user in AWS Cloud
  5. Finally, click on Create user.
  6. Now the user is created successfully, and you will see an option to download a .csv file. Download this file; it contains myuser's Access key ID and Secret access key, which you will use later in the tutorial to connect to AWS services from your local machine.
Downloading the AWS credentials of IAM user

Configure AWS credentials using aws configure in AWS CLI

You have an IAM user with an Access key ID and secret access key, but the AWS CLI cannot do anything until you configure AWS credentials. Once you configure the credentials, the AWS CLI can connect to the AWS account and execute commands.

  • Configure AWS credentials by running the aws configure command at the command prompt.
aws configure
  • Enter the details such as AWS Access key ID, Secret Access Key, and region. You can leave the output format unset to use the default, or set it to text or json.
Configure AWS CLI using aws configure command
  • Once AWS is configured successfully, verify by navigating to C:\Users\YOUR_USER\.aws and checking that the two files, credentials and config, are present.
Checking the credentials file and config on your machine
  • Now open both files to verify; as you can see below, your AWS credentials are configured successfully using aws configure.
Checking the config file on your machine

Verify aws configure from AWS CLI by running a simple command

Now you can test whether the AWS Access key ID, Secret Access Key, and region you configured in the AWS CLI work, by opening the command prompt and running the following commands.

aws ec2 describe-instances
Describing the AWS EC2 instances using AWS CLI
  • You can also verify the AWS CLI by listing the buckets in your account with the below command.
aws s3 ls

Configuring AWS credentials using a Named profile

Another commonly used method of configuring AWS credentials is the Named profile. A named profile is a collection of settings and credentials that you can apply to an AWS CLI command. When you specify a profile for a command, its settings and credentials are used to run that command. Let's learn how to store named profiles.

  1. Open the credentials file created earlier by aws configure; if it doesn't exist, create a file named credentials in the C:\Users\your_profile\.aws directory of your Windows machine.
  2. Add each Access key ID and Secret access key to the credentials file in the below format and save it. Defining Named profiles lets you connect to different AWS accounts easily and avoids confusion about which account you are targeting.
Creating the Named Profile on your machine
  3. Similarly, create another file named config in the C:\Users\your_profile\.aws directory.
  4. Next, add the region to the config file, make sure to use the profile name you provided in the credentials file, and save the file. This file allows you to work with a specific region.
  • On Linux and macOS machines, the files are located at ~/.aws/credentials and ~/.aws/config.
  • On Windows machines, the files are located at %USERPROFILE%\.aws\credentials and %USERPROFILE%\.aws\config respectively.
Creating the Named Profile config file on your machine
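For illustration, the two files might look like this; the profile name sandbox and the key values are placeholders, not real credentials:

```ini
# %USERPROFILE%\.aws\credentials
[default]
aws_access_key_id = AKIAXXXXXXXXEXAMPLE
aws_secret_access_key = wJalrXUtnFEMIK7MDENGEXAMPLEKEY

[sandbox]
aws_access_key_id = AKIAYYYYYYYYEXAMPLE
aws_secret_access_key = je7MtGbClwBF2Zp9UtkEXAMPLEKEY

# %USERPROFILE%\.aws\config
[default]
region = us-east-1

[profile sandbox]
region = us-east-2
```

Note that in the config file, non-default profiles use the `[profile name]` section header, while the credentials file uses just `[name]`.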

Verifying Named profile from AWS CLI

Previously you configured the Named profile on your machine, but let’s verify the Named profile from AWS CLI by running a simple command. Let’s open the command prompt and run the below command to verify the sandbox profile that you created earlier.

aws ec2 describe-instances --profile sandbox

As you can see below, the instance is described properly by the command with the Named profile, which shows that the Named profile is configured successfully.

Verifying the Named profile in AWS CLI

Configuring AWS credentials using the environment variable

Finally, the last method, configuring AWS credentials using environment variables, also works well. Let's check it out quickly.

  • Open the command prompt and set the AWS secret key and access key as environment variables using set. The environment variables keep their values until the end of the current command prompt session, or until you set them to different values.
Configuring AWS credentials using the environment variable
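A sketch of what that looks like in the Windows command prompt; the key values shown are placeholders, and these variable names are the ones the AWS CLI reads:

```bat
REM Set AWS credentials for the current command prompt session only
set AWS_ACCESS_KEY_ID=AKIAXXXXXXXXEXAMPLE
set AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMIK7MDENGEXAMPLEKEY
set AWS_DEFAULT_REGION=us-east-1
```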

AWS CLI Error (ImportError: cannot import name ‘docevents’ from ‘botocore.docs.bcdoc’) and Solution

If you face AWS CLI issues related to Python or missing files, run the below command.

 pip3 install --upgrade awscli

Conclusion

In this tutorial, you learned What is AWS CLI, how to install AWS CLI version 2, and various methods that allow you to configure AWS credentials and then work with AWS CLI.

So which method are you going to use while using AWS CLI to connect and manage AWS infrastructure?

How to Delete EBS Snapshots from AWS account using Shell script

AWS EBS, that is, Elastic Block Store, is a very important and useful service provided by AWS. It is persistent block storage used with various applications deployed on AWS EC2 instances. Automation plays a vital role in provisioning and managing all of this infrastructure and its related components.

Having said that, in this tutorial you will learn what AWS EBS and AWS EBS snapshots are, many interesting things about storage types, and how to delete EBS snapshots using a shell script on AWS, step by step.

AWS EBS is your pendrive for instances, always use it when necessary and share with other instances.

Table of Contents

  1. What is Shell Script?
  2. What is AWS EBS?
  3. What are EBS Snapshots in AWS?
  4. Prerequisites
  5. Install AWS CLI Version 2 on a Windows machine
  6. How to Delete EBS Snapshots from AWS account using shell script
  7. Conclusion

What is Shell Scripting or Bash Scripting?

A shell script is simply a text file with a list of commands that could equally be executed one by one on a terminal or shell. To make things a little easier and quicker, we write them in a single file and run them together as a group.

The main tasks performed by shell scripts are file manipulation, printing text, and program execution. We can include environment variables in a script so that they can be used in multiple places; scripts whose main job is to run other programs are known as wrapper scripts.

A good shell script has comments, preceded by a pound sign or hash mark (#), describing the steps. We can also include conditions or pipe commands together to make more creative scripts.

When we execute a shell script, or function, a command interpreter goes through the ASCII text line-by-line, loop-by-loop, test-by-test, and executes each statement as each line is reached from the top to the bottom.

What is EBS?

EBS stands for Amazon Elastic Block Store, which is persistent storage, just like your pen drive or hard disk. You can mount EBS volumes on AWS EC2 instances, and it is entirely possible to create your own file system on top of these EBS volumes.

EBS volumes are mounted on AWS EC2 instances but are not dependent on the EC2 instance's life; they remain persistent.

Amazon Elastic block store (EBS)

Key features of EBS

  • EBS volumes can be created in any Availability Zone
  • An EBS volume cannot be directly attached to an instance in a different Availability Zone. You would need to create a snapshot (which is like a backup copy), restore that snapshot to a new volume, and then use the new volume in the other Availability Zone.
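The cross-AZ workflow in the second bullet can be sketched with the AWS CLI; the volume, snapshot, and instance IDs below are placeholders.

```shell
# 1. Snapshot the volume in its original Availability Zone
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "Backup before moving AZ"

# 2. Restore the snapshot into a new volume in a different AZ
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 \
    --availability-zone us-east-1b

# 3. Attach the new volume to an instance in that AZ
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 \
    --instance-id i-0123456789abcdef0 --device /dev/sdf
```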

What are HDD and SSD storage?

HDD ( Hard disk drive )

The hard disk drive is an old technology. It depends on spinning disks and platters to read and write data. A motor spins the platter whenever a request comes to read or write data. A platter contains tracks, and each track contains several sectors. These drives run slowly and are less costly.

SSD ( Solid State drive )

The solid state drive is a newer technology. It uses flash memory, so it consumes less energy, runs much faster than an HDD, and is highly durable. It depends on electronic rather than mechanical parts, so it is easier to maintain and more efficient. SSDs are more costly than HDDs.

  • EBS are classified further into 4 types
    • General purpose SSD : Used in case of general use such as booting a machine or test labs.
    • Provisioned IOPS SSD: Used in case of scalable and high IOPS applications.
    • Throughput Optimized HDD: These are low cost magnetic storage which depends on throughput rather than IOPS such as EMR, data warehouse.
    • Cold HDD: These are also low cost magnetic storage which depends on throughput rather than IOPS.

How to create AWS EBS manually in AWS account?

  • You must have an AWS account to create AWS EBS volumes. If you don't have one, please create an AWS account first
  • Go to the AWS console and search for the AWS EC2 service at the top
  • Click on Create volume
  • Now fill in all the details, such as volume type, size, IOPS, tags, etc.
  • Now click on Create volume and verify
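The same volume can also be created from the AWS CLI, as a sketch; the size, type, AZ, and tag below are example values.

```shell
# Create an 8 GiB General Purpose SSD volume with a Name tag
aws ec2 create-volume \
    --availability-zone us-east-1a \
    --size 8 \
    --volume-type gp2 \
    --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=myEBSvolume}]'
```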

What are EBS Snapshots in AWS?

We just discussed EBS storage. There is a high chance you will require backups to keep yourself safe, and EBS snapshots are exactly that: backups of EBS volumes. You can back up your EBS volumes with point-in-time snapshots, which are incremental backups stored in AWS S3. This saves a great deal of time and storage, because each snapshot keeps only the difference from the previous backup.

How to create EBS snapshots?

  • Go to the AWS EBS console
  • Choose the AWS EBS volume for which you wish to create a snapshot
  • Add the description and tag and then click on Create Snapshot
  • Verify the snapshot
  • If you wish to create AWS EBS snapshots using the AWS CLI, run the command below. (Make sure you have the AWS CLI installed; if not, installation is explained below in this tutorial.)
aws ec2 create-snapshot --volume-id <vol-1234567890> --description "My volume snapshot"

Prerequisites

  1. An AWS account to create an AWS IAM user. If you don't have one, please create an AWS account first
  2. Windows 7 or later, where you will execute the shell script
  3. Python must be installed on the Windows machine, as it is required by the AWS CLI. If you want to install Python on a Windows machine, follow here
  4. You must have Git Bash already installed on your Windows machine. If you don't, install it from here
  5. A code editor for writing the shell script on the Windows machine. I would recommend using Visual Studio Code. If you wish to install Visual Studio Code on a Windows machine, please find the steps here

In this demo, we will use a shell script to delete AWS EBS snapshots. In order to run shell scripts against AWS from your local Windows machine, you need the AWS CLI installed and configured. So first, let's install the AWS CLI and then configure it.

Install AWS CLI Version 2 on a Windows machine

  • Download the installer for AWS CLI on a Windows machine from here
  • Select I accept the terms and then click the Next button
  • Do the custom setup, such as the installation location, and then click the Next button
  • Now you are ready to install AWS CLI version 2
  • Click Finish and then verify the AWS CLI
  • Verify the AWS CLI version by opening the command prompt and typing
aws --version

Now that AWS CLI version 2 is successfully installed on the Windows machine, it's time to configure AWS credentials so that our shell script can connect to the AWS account and execute commands.

  • Configure AWS credentials by running the command below at the command prompt
aws configure
  • Enter the details such as AWS Access key ID, Secret Access Key, and region. You can skip the output format and keep the default.
  • Check the C:\Users\YOUR_USER\.aws directory on your system to confirm the AWS credentials
  • Now your AWS credentials are configured successfully.

How to Delete EBS Snapshots from AWS account using shell script

Now that we have configured the AWS CLI on the Windows machine, it's time to create our shell script to delete EBS snapshots. In this demo, we will delete two AWS EBS snapshots that already exist in the AWS account. Let's get started.

  • Create a folder on your desktop and under it create the file delete-ebs-snapshots.sh with the below content
#!/usr/bin/env bash

# To check if access key is setup in your system 

if ! grep -q aws_access_key_id ~/.aws/config; then
  if ! grep -q aws_access_key_id ~/.aws/credentials; then
    echo "AWS config not found or CLI not installed. Please run \"aws configure\"."
    exit 1
  fi
fi

# Fetch all snapshot IDs with tag Name=myEBSvolumesnapshot

SNAPSHOTS_ID=$(aws ec2 describe-snapshots --filters Name=tag:Name,Values="myEBSvolumesnapshot" --query "Snapshots[].SnapshotId" --output text)
echo "$SNAPSHOTS_ID"

# Using For Loop Delete all Snapshots with Tag Name=myEBSvolumesnapshot

for id in $SNAPSHOTS_ID; do
    aws ec2 delete-snapshot --snapshot-id "$id"
    echo "Successfully deleted snapshot $id"
done
  • Now open Visual Studio Code, open the location of the file delete-ebs-snapshots.sh, and choose Bash as the terminal
  • Now run the script
./delete-ebs-snapshots.sh
  • The script ran successfully; now let's go to the AWS account and verify that the AWS EBS snapshots with tag Name=myEBSvolumesnapshot were deleted.

Conclusion

In this tutorial, we demonstrated what AWS EBS and AWS EBS snapshots are, learned about storage types, and learned how to delete EBS snapshots using a shell script on AWS, step by step. AWS EBS is your pen drive for instances; use it when necessary and share it with other instances.

Hope this tutorial helps you understand shell scripts and working with AWS EBS on the Amazon cloud. Please share it with your friends.

How to Launch an AWS S3 bucket using Shell Scripting

Are you storing your data in a secure, scalable, highly available, and fault-tolerant way? If not, consider using Amazon Simple Storage Service (Amazon S3) in the AWS cloud.

This tutorial will teach you how to launch an AWS S3 bucket in an Amazon account using bash or shell scripting.

Let’s dive into it quickly.


Table of Contents

  1. What is Shell Script or Bash Script?
  2. What is the Amazon AWS S3 bucket?
  3. Prerequisites
  4. Building a shell script to create AWS S3 bucket in Amazon account
  5. Executing the Shell Script to Create AWS S3 bucket in Amazon Cloud
  6. Verifying the AWS S3 bucket in AWS account
  7. Conclusion

What is Shell Script or Bash Script?

Shell Script is a text file containing lists of commands executed on the terminal or shell in one go in sequential order. Shell Script performs various important tasks such as file manipulation, printing text, program execution.

Shell script includes various environmental variables, comments, conditions, pipe commands, functions, etc., to make it more dynamic.

When you execute a shell script or function, a command interpreter goes through the ASCII text line-by-line, loop-by-loop, test-by-test, and executes each statement as each line is reached from top to bottom.

What is the Amazon AWS S3 bucket?

Why is it called S3? The name reflects that it is three words, each starting with "S": the full form of AWS S3 is Simple Storage Service. The AWS S3 service helps you store unlimited data safely and efficiently. Everything in the AWS S3 service is an object: PDF files, zip files, text files, WAR files, anything. Some of the features of the AWS S3 bucket are below:

  • To store data in an AWS S3 bucket, you upload it as objects.
  • To keep your AWS S3 bucket secure, add the necessary permissions to an IAM role or IAM user.
  • AWS S3 bucket names are globally unique, which means a given bucket name can exist only once across all accounts and regions.
  • 100 buckets can be created in an AWS account by default; beyond that, you need to raise a limit-increase request with Amazon.
  • The owner of an AWS S3 bucket is specific to the AWS account that created it.
  • AWS S3 buckets are created in a specific region, such as us-east-1, us-east-2, us-west-1, or us-west-2.
  • AWS S3 objects are created in the AWS console or using the AWS S3 API.
  • AWS S3 buckets can be made publicly visible, meaning anybody on the internet can access them, but it is recommended to keep public access blocked for all buckets unless it is really required.

Prerequisites

  1. An AWS account. If you don't have one, please create an AWS account first
  2. Windows 7 or later, where you will execute the shell script
  3. AWS CLI installed. To install the AWS CLI, click here
  4. Git Bash. To install Git Bash, click here
  5. A code editor for writing the shell script on the Windows machine, such as Visual Studio Code. To install Visual Studio Code, click here

Building a shell script to create AWS S3 bucket in Amazon account

Now that you have a good idea about the AWS S3 bucket and shell script let’s learn how to build a shell script to create an AWS S3 bucket in an Amazon account.

  • Create a folder on your Windows machine at any location. Under that folder, create a file named create-s3.sh and copy/paste the below code.
#!/usr/bin/env bash
# This script creates an S3 bucket and tags it with an appropriate name.

# Check whether an access key is set up on your system
if ! grep -q aws_access_key_id ~/.aws/config; then
   if ! grep -q aws_access_key_id ~/.aws/credentials; then
      echo "AWS config not found or AWS CLI not installed. Please run \"aws configure\"."
      exit 1
   fi
fi

# read prompts you for the name of the bucket you wish to create
read -r -p "Enter the name of the bucket: " bucketname

# First function: create the bucket
# Note: outside us-east-1, create-bucket requires a LocationConstraint
function createbucket() {
    aws s3api create-bucket --bucket "$bucketname" --region us-east-2 \
        --create-bucket-configuration LocationConstraint=us-east-2
}

# Second function: tag the bucket
function tagbucket() {
    aws s3api put-bucket-tagging --bucket "$bucketname" \
        --tagging "TagSet=[{Key=Name,Value=$bucketname}]"
}

# echo prints to the screen
echo "Creating the AWS S3 bucket and tagging it!"
echo ""
createbucket    # Call the createbucket function
tagbucket       # Call the tagbucket function
echo "AWS S3 bucket $bucketname created successfully"
echo "AWS S3 bucket $bucketname tagged successfully"

Executing the Shell Script to Create AWS S3 bucket in Amazon Cloud

Previously you created the shell script to create an AWS S3 bucket in Amazon Cloud, which is great, but it is not doing much unless you run it. Let’s execute the shell script now.

  • Open Visual Studio Code and then open the location of the file create-s3.sh.
Opening Shell script on Visual Studio Code
  • Finally, execute the shell script.
./create-s3.sh
Executing the shell script to create AWS S3 bucket

Verifying the AWS S3 bucket in AWS account

Earlier, the shell script ran successfully; let's verify that the AWS S3 bucket has been created in the AWS account.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for 'S3', click on the S3 menu item, and you should see the list of AWS S3 buckets, including the bucket you specified in the shell script.
Viewing the AWS S3 bucket in AWS cloud
  • Also verify the tags you applied to the AWS S3 bucket by navigating to the Properties tab.
Viewing the AWS S3 bucket tags in the AWS cloud

Conclusion

In this tutorial, you learned how to set up an Amazon AWS S3 bucket using a shell script on AWS, step by step. A great deal of the data behind mobile apps and websites is stored on AWS S3.

Now that you have a newly created AWS S3 bucket, what do you plan to store in it?