What is CloudFront and How to Set Up CloudFront Distributions with AWS S3 and ALB

Internet users are always impressed by a website that loads quickly. Why not have a website that delivers its content within seconds?

In this tutorial you will learn what CloudFront is and how to set up CloudFront distributions in the Amazon cloud. CloudFront helps users retrieve content quickly by caching it close to them.

Table of Contents

  1. What is CloudFront?
  2. Prerequisites
  3. Creating an IAM user in AWS account with programmatic access
  4. Configuring the IAM user Credentials on local Machine
  5. Setting up Amazon CloudFront
  6. How to Use Custom URLs in CloudFront by Adding Alternate Domain Names (CNAMEs)
  7. Using Amazon EC2 or Other Custom Origins
  8. Conclusion

What is CloudFront?

CloudFront is an Amazon web service that speeds up the distribution of static and dynamic content such as .html, .css, and .js files, images, and much more. CloudFront delivers the content from edge locations whenever a user requests it.

With CloudFront, content is delivered to users very quickly from the nearest edge location. If the content is not available at an edge location, CloudFront requests it from the configured origin. An origin can be an AWS S3 bucket, an HTTP server, a Load Balancer, and so on.

Use cases of CloudFront

  • It accelerates the delivery of your static website content such as images, style sheets, JavaScript, and so on.
  • Live streaming of video.
  • Running Lambda@Edge with CloudFront adds many more ways to customize CloudFront.

How CloudFront delivers content to your users

  • A user requests a page from your website or application, say an HTML page http://www.example.com/mypage.html
  • The DNS server routes the request to the nearest CloudFront edge location.
  • CloudFront checks whether the request can be fulfilled from the edge location's cache.
  • If the edge location has the files, CloudFront sends them back to the user; otherwise
  • CloudFront queries the origin server.
  • The origin server sends the files back to the edge location, and CloudFront returns them to the user.
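
You can observe this flow yourself: CloudFront reports a cache hit or miss in the X-Cache response header. A quick check against your distribution's domain (the one below is just this tutorial's example):

# First request: expect "X-Cache: Miss from cloudfront" (fetched from the origin)
curl -sI http://dsx78lsseoju7.cloudfront.net/index.html | grep -i x-cache

# Second request: expect "X-Cache: Hit from cloudfront" (served from the edge cache)
curl -sI http://dsx78lsseoju7.cloudfront.net/index.html | grep -i x-cache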

How CloudFront works with regional edge caches

This kind of cache brings content closer to users to improve performance. Regional edge caches help with all types of content, particularly content that becomes less popular over time: user-generated content such as video, photos, or artwork, and e-commerce assets such as product photos and videos.

This cache sits between the origin server and the edge locations. An edge location caches content but evicts it once it gets too old; that is where the regional edge cache comes in, with a much larger capacity to store lots of content.

Prerequisites

  • You must have an AWS account in order to set up AWS CloudFront. If you don't have one, please create an AWS account first.
  • You must have an IAM user with administrator rights and credentials set up using the AWS CLI or an AWS profile. The steps below show how to create the IAM user and configure the credentials.
  • An AWS S3 bucket.

Creating an IAM user in AWS account with programmatic access

In order to connect to an AWS service, you need an IAM user with an Access key ID and Secret access key in the AWS account; you will configure these on your local machine so it can connect to the AWS account.

There are two ways to connect to an AWS account: the first is providing a username and password on the AWS login page in the browser, and the other is to configure the Access key ID and Secret access key on your machine and then use command-line tools to connect programmatically.

  1. Open your favorite web browser, navigate to the AWS Management Console, and log in.
  2. While in the Console, click on the search bar at the top, search for 'IAM', and click on the IAM menu item.
  3. To create a user, click on Users → Add user, provide the user name myuser, and make sure to tick the Programmatic access checkbox under Access type, which enables an Access key ID and Secret access key. Then hit the Permissions button.
  4. Now select the "Attach existing policies directly" option under Set permissions and look for the "Administrator" policy using the filter policies search box. This policy will allow myuser full access to AWS services.
  5. Finally, click on Create user.
  6. The user is now created successfully and you will see an option to download a .csv file. Download this file; it contains the myuser Access key ID and Secret access key, which you will use later in the tutorial to connect to AWS services from your local machine. (A CLI alternative is sketched below.)
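
If you already have administrator credentials configured somewhere, the same user can also be created from the command line. A minimal sketch, using the AWS managed AdministratorAccess policy (the one the console's "Administrator" search surfaces):

# Create the IAM user
aws iam create-user --user-name myuser

# Attach the AWS managed AdministratorAccess policy
aws iam attach-user-policy \
    --user-name myuser \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Generate the Access key ID and Secret access key (shown only once, so save them)
aws iam create-access-key --user-name myuser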

Configuring the IAM user Credentials on local Machine

Now you have the IAM user myuser created. The next step is to set the downloaded myuser credentials on the local machine, which you will use to connect to AWS services via API calls.

  1. Create a new file, C:\Users\your_profile\.aws\credentials, on your local machine.
  2. Next, enter the Access key ID and Secret access key from the downloaded .csv file into the credentials file in the format below, and save the file.
[default]     # Profile Name
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = vIaGXXXXXXXXXXXXXXXXXXXX

The credentials file is where you define your profiles. This way you can create multiple profiles and avoid confusion when connecting to specific AWS accounts.

  3. Similarly, create another file, C:\Users\your_profile\.aws\config, in the same directory.
  4. Next, add the region to the config file, make sure to use the same profile name you provided in the credentials file, and save the file. This file lets you work with a specific region.
[default]   # Profile Name
region = us-east-2
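
To confirm that the credentials and region are being picked up, you can ask AWS who you are:

aws sts get-caller-identity    # returns the account ID, user ID, and ARN of myuser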

Setting up Amazon CloudFront

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘CloudFront’, and click on the CloudFront menu item.
  • Click on Create Distribution and then Get Started.
  • In the Origin settings, provide the S3 bucket name and keep the other values at their defaults.
  • For the settings under Default Cache Behavior Settings and Distribution Settings, accept the default values and then click on Create distribution.
  • The AWS S3 bucket was already created before we started this tutorial. Let's upload an index.html (containing the text hello) to the bucket and set its permissions to public access as shown below.
  • Now check the Amazon S3 URL to verify that your content is publicly accessible.
  • Check the CloudFront URL by requesting <Domain Name>/index.html; it should show the same content as your index.html file:
domainname/index.html
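
The same distribution can also be created from the CLI. A minimal sketch, assuming your bucket is named mybucket (the shorthand flags let the CLI fill in a default distribution config):

# Create a distribution with the S3 bucket as origin and index.html as the default root object
aws cloudfront create-distribution \
    --origin-domain-name mybucket.s3.amazonaws.com \
    --default-root-object index.html

# List the generated *.cloudfront.net domain names
aws cloudfront list-distributions --query 'DistributionList.Items[].DomainName'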

How to Use Custom URLs in CloudFront by Adding Alternate Domain Names (CNAMEs)

As seen above, the CloudFront URL is generated with a default domain name under *.cloudfront.net. If you wish to use your own domain name (a CNAME such as abc.com) in the URL, you can assign it yourself.

  • In our case the default URL is:
http://dsx78lsseoju7.cloudfront.net/index.html
  • If you wish to use an alternate domain such as the one below, follow these steps:
http://abc.com/index.html
  • Go back to the CloudFront page, find the distribution where you need to change the domain, and click on Edit.
  • Provide the domain name; you must already have an SSL certificate in place for it.
  • Finally, create an alias resource record set in Route 53 by visiting the Route 53 page.
  • Go to the Route 53 page by searching at the top of the AWS page.
  • Click on the Hosted Zone and then click on Create Record.
  • Here, provide the name of the record (which can be anything), the record type, and set Route traffic to the CloudFront distribution.

After successful creation of the Route 53 record, you can verify that the index page ( http://mydomain.abc.com/index.html ) works fine.
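
The alias record can also be created from the CLI. A sketch, where Z1XXXXXXXXXXXX is a placeholder for your hosted zone ID; Z2FDTNDATAQYW2 is the fixed hosted zone ID that all CloudFront alias targets use:

aws route53 change-resource-record-sets \
    --hosted-zone-id Z1XXXXXXXXXXXX \
    --change-batch '{
      "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": "mydomain.abc.com",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "Z2FDTNDATAQYW2",
            "DNSName": "dsx78lsseoju7.cloudfront.net",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'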

Using Amazon EC2 or Other Custom Origins

A custom origin can be an Amazon Elastic Compute Cloud (AWS EC2) instance, for example an HTTP server. You need to provide the DNS name of the server as the custom origin.

Below are some key points to keep in mind while setting AWS EC2 as the custom origin.

  • Host and serve the same content on all servers in the same way.
  • Restrict access requests to the HTTP and HTTPS ports that your custom origin (the AWS EC2 instance) listens on.
  • Synchronize the clocks of all servers in your implementation.
  • Use an Elastic Load Balancing load balancer to handle traffic across multiple Amazon EC2 instances.
  • When you create your CloudFront distribution, specify the URL of the load balancer as the domain name of your origin server, as sketched below.
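
For example, with an Application Load Balancer the flow looks roughly like this (the load balancer name my-alb is a placeholder; for anything beyond the default cache behavior, pass a full JSON file to --distribution-config instead):

# Look up the DNS name of the load balancer
ALB_DNS=$(aws elbv2 describe-load-balancers \
    --names my-alb \
    --query 'LoadBalancers[0].DNSName' \
    --output text)

# Point a new distribution at the load balancer as a custom origin
aws cloudfront create-distribution --origin-domain-name "$ALB_DNS"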

Conclusion

In this tutorial you learnt what CloudFront is and how to set up CloudFront distributions in the Amazon cloud. CloudFront helps users retrieve content quickly by caching it at edge locations.

By now you know what CloudFront is and how to set it up. What are you going to manage with CloudFront next?

How to Launch AWS S3 bucket using Shell Scripting

We all need a place to store data such as deployment scripts and deployment packages, and to host a website we also require space. In earlier days there were servers where data used to take a lot of time to copy, and those servers were neither scalable nor fault tolerant. If there was an issue such as a server going down or getting corrupted, data was either lost or the application used to remain down for long hours.

To solve the storage problem with unlimited capacity, scalability, and fault tolerance, Amazon provides the AWS S3 service, which addresses all of these issues.

In this tutorial we will demo how to launch an AWS S3 bucket in an Amazon account using Bash (shell) scripting.

Table of Contents

  1. What is Shell script?
  2. What is Amazon S3 bucket?
  3. Prerequisites
  4. Install AWS CLI Version 2 on windows machine
  5. How to launch or create AWS S3 bucket in Amazon account using shell script
  6. Conclusion

What is Shell Scripting or Bash Scripting?

A shell script is simply a text file with a list of commands that could also be executed one by one on a terminal or shell. To make things a little easier and faster, we write them together in a single file and run it as a group.

The main tasks performed by shell scripts are file manipulation, printing text, and program execution. We can include environment variables in a script and use them in multiple places; scripts whose main job is to run other programs and perform various activities around them are known as wrapper scripts.

A good shell script will have comments, preceded by a pound sign or hash mark, #, describing the steps. We can also include conditions or pipe commands together to make more creative scripts, as in the small example further below.

When we execute a shell script or function, a command interpreter goes through the text line by line, loop by loop, test by test, and executes each statement as it is reached, from top to bottom.
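
As a tiny illustration of those pieces, here is a comment, a variable, a test, and a pipe in one file:

#!/bin/bash
# greet.sh - print a greeting and count the words in it

name="World"                      # a variable used further below
if [ -n "$name" ]; then           # a simple test: is the variable non-empty?
    echo "Hello, $name!" | wc -w  # pipe the greeting into a word count
fi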

What is Amazon AWS S3 bucket?

AWS S3: why is it called S3? The name comes from its three words, each starting with "S": the full form of AWS S3 is Simple Storage Service. The AWS S3 service helps you store unlimited data very safely and efficiently. The architecture of AWS S3 is very simple: everything in AWS S3 is an object, such as PDF files, zip files, text files, or WAR files, and the bucket is where all these objects reside.

AWS S3 Service  ➡️ Bucket  ➡️ Objects  ➡️ PDF , HTML DOCS, WAR , ZIP FILES etc.

Some of the features of AWS S3 bucket are:

  • In order to store data in a bucket, you need to upload it.
  • To keep your bucket permissions secure, grant only the necessary permissions to an IAM role or IAM user.
  • Bucket names are globally unique, meaning a given name can exist in only one bucket across all accounts and regions.
  • 100 buckets can be created in an AWS account by default; beyond that you need to raise a ticket with Amazon.
  • The owner of a bucket is the AWS account that created it.
  • Buckets are created in a specific region, such as us-east-1, us-east-2, us-west-1, or us-west-2.
  • Bucket objects are accessed through the AWS S3 API.
  • Buckets can be publicly visible, meaning anybody on the internet can access them, so it is always recommended to keep public access blocked on all buckets unless it is very much required (see the example after this list).
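
For that last point, blocking public access is a single CLI call (the bucket name my-bucket is a placeholder):

# Block all four forms of public access on the bucket
aws s3api put-public-access-block \
    --bucket my-bucket \
    --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true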

Prerequisites

  1. An AWS account to create the S3 bucket in. If you don't have one, please create an AWS account first.
  2. Windows 7 or a later edition, where you will execute the shell script.
  3. Python, which is required by AWS CLI version 1; version 2 (used below) bundles its own copy. If you want to install Python on the Windows machine, follow here.
  4. Git Bash already installed on your Windows machine. If you don't have it, install it from here.
  5. A code editor for writing the shell script on the Windows machine. I recommend Visual Studio Code; if you wish to install it, please find the steps here.

In this demo we will use a shell script to launch an AWS S3 bucket. To run shell scripts against AWS from your local Windows machine, you need the AWS CLI installed and configured. So first let's install the AWS CLI and then configure it.

Install AWS CLI Version 2 on windows machine

  • Download the installer for the AWS CLI on the Windows machine from here.
  • Select I accept the terms and then click the Next button.
  • Customize the setup if needed, such as the installation location, and then click the Next button.
  • Now you are ready to install AWS CLI 2.
  • Click Finish, and now verify the AWS CLI.
  • Verify the AWS CLI version by going to the command prompt and typing:
aws --version

Now AWS CLI version 2 is successfully installed on the Windows machine; it's time to configure AWS credentials so that our shell script can connect to the AWS account and execute commands.

  • Configure AWS credentials by running the following command at the command prompt:
aws configure
  • Enter the details: AWS Access Key ID, Secret Access Key, and region. You can leave the output format at its default.
  • Check C:\Users\YOUR_USER\.aws on your system to confirm the AWS credentials files are in place.
  • Now your AWS credentials are configured successfully; you can double-check them as shown below.
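
You can also confirm what the CLI resolved without opening the files:

aws configure list     # shows the active profile, masked access key, and region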

How to launch or create AWS S3 bucket in Amazon account using shell script

Now that the AWS CLI is configured on the Windows machine, it's time to write our shell script to create the AWS S3 bucket.

  • Create a folder on your desktop and under it create the file create-s3.sh
#!/bin/bash
# This script creates an S3 bucket and tags it with an appropriate name.

# Check that an access key is set up on this system
if ! grep -q aws_access_key_id ~/.aws/config; then
   if ! grep -q aws_access_key_id ~/.aws/credentials; then
      echo "AWS config not found or you don't have the AWS CLI installed"
      exit 1
   fi
fi

# read will prompt you to enter the name of the bucket you wish to create
read -r -p "Enter the name of the bucket: " bucketname

# First function: create the bucket
# (outside us-east-1, create-bucket requires a matching LocationConstraint)
function createbucket() {
    aws s3api create-bucket --bucket "$bucketname" --region us-east-2 \
        --create-bucket-configuration LocationConstraint=us-east-2
}

# Second function: tag the bucket with its own name
function tagbucket() {
    aws s3api put-bucket-tagging --bucket "$bucketname" \
        --tagging "TagSet=[{Key=Name,Value=$bucketname}]"
}

# echo prints to the screen
echo "Creating the AWS S3 bucket and tagging it!"
echo ""
createbucket    # Calling the createbucket function
tagbucket       # Calling the tagbucket function
echo "AWS S3 bucket $bucketname created successfully"
echo "AWS S3 bucket $bucketname tagged successfully"
  • Now open Visual Studio Code, open the folder containing create-s3.sh, and choose Bash as the terminal.
  • Make the script executable and run it:
chmod +x create-s3.sh
./create-s3.sh
  • The script ran successfully; now let's verify the AWS S3 bucket in the AWS account (or from the CLI, as shown after this list).
  • Click on the bucket name testing-s3buck2 and then click on Properties.

  • Great, we can see that the tagging was also done successfully.
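
If you prefer the terminal, the same verification works with two s3api calls (using the bucket name from the run above):

# Confirm the bucket exists (silent on success)
aws s3api head-bucket --bucket testing-s3buck2

# Fetch the Name tag the script applied
aws s3api get-bucket-tagging --bucket testing-s3buck2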

Conclusion

In this tutorial, we went over some benefits of Amazon AWS S3 and learnt how to set up an AWS S3 bucket using a shell script, step by step. A great deal of application and website data is stored on AWS S3, and for hosting static content it is one of the best services on the market.

Hope this tutorial helps you understand shell scripting and provisioning AWS S3 on the Amazon cloud. Please share it with your friends.

How to Launch AWS S3 bucket on Amazon using Terraform

Do you have issues with lots of log rotation? Does your system hang and behave abruptly when lots of logs are generated on the disk? Do you have too little space to keep your important deployment JARs or WARs? These are all challenges everyone has faced while working with datacenter applications or low-capacity VMs.

It is the right time to move your logs, deployment code, and scripts to storage that is unlimited, safe, secure, and quick, and this is very much possible with Amazon's AWS S3. So in this tutorial we will go through what AWS S3 is, its features, and how to launch an S3 bucket using Terraform.

Table of Contents

  1. What is AWS Amazon S3 bucket?
  2. Prerequisites
  3. How to Install Terraform on Ubuntu 18.04 LTS
  4. Terraform Configuration Files and Structure
  5. Launch AWS S3 bucket on Amazon Web Service using Terraform
  6. Upload an object to AWS S3 bucket
  7. Conclusion

What is Amazon AWS S3 bucket?

AWS S3: why is it called S3? The name comes from its three words, each starting with "S": the full form of AWS S3 is Simple Storage Service. The AWS S3 service helps you store unlimited data very safely and efficiently. The architecture of AWS S3 is very simple: everything in AWS S3 is an object, such as PDF files, zip files, text files, or WAR files, and the bucket is where all these objects reside.

AWS S3 Service ➡️ Bucket ➡️ Objects ➡️ PDF , HTML DOCS, WAR , ZIP FILES etc

Some of the features of AWS S3 bucket are:

  • In order to store data in a bucket, you need to upload it.
  • To keep your bucket permissions secure, grant only the necessary permissions to an IAM role or IAM user.
  • Bucket names are globally unique, meaning a given name can exist in only one bucket across all accounts and regions.
  • 100 buckets can be created in an AWS account by default; beyond that you need to raise a ticket with Amazon.
  • The owner of a bucket is the AWS account that created it.
  • Buckets are created in a specific region, such as us-east-1, us-east-2, us-west-1, or us-west-2.
  • Bucket objects are accessed through the AWS S3 API.
  • Buckets can be publicly visible, meaning anybody on the internet can access them, so it is always recommended to keep public access blocked on all buckets unless it is very much required.

Prerequisites

  • An Ubuntu machine to run Terraform, preferably version 18.04 or later. If you don't have a machine, you can create an EC2 instance in your AWS account.
  • 4GB of RAM is recommended.
  • At least 5GB of drive space.
  • The Ubuntu machine should have an IAM role attached with full access to create AWS S3 buckets; for demos it is always convenient to have administrator permissions.
  • If you wish to create the bucket manually, click here for the setup instructions, but we will use Terraform for this demo.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

How to Install Terraform on Ubuntu 18.04 LTS

  • Update your existing system packages.
sudo apt update
  • Download Terraform (version 0.14.8 here) into the /opt directory.
wget https://releases.hashicorp.com/terraform/0.14.8/terraform_0.14.8_linux_amd64.zip
  • Install the zip package, which is required to unzip the download.
sudo apt-get install zip -y
  • Unzip the downloaded Terraform zip file.
unzip terraform*.zip
  • Move the executable to a directory on the PATH.
sudo mv terraform /usr/local/bin
  • Verify Terraform by running the terraform command and checking its version.
terraform               # To check if terraform is installed 

terraform -version      # To check the terraform version  
  • This confirms that Terraform has been successfully installed on the Ubuntu 18.04 machine.

Terraform Configuration Files and Structure

Let us first understand terraform configuration files before running Terraform commands.

  • main.tf: This file contains the code that creates or imports AWS resources.
  • vars.tf: This file defines the variable types and optionally sets their values.
  • output.tf: This file declares the outputs of AWS resources; the output is shown after the terraform apply command is executed.
  • terraform.tfvars: This file contains the actual values of the variables declared in vars.tf.
  • provider.tf: This file is very important. It provides the details of the provider, such as AWS, Oracle, or Google, so that Terraform can communicate with that provider and work with its resources.

Launch AWS S3 bucket on AWS using Terraform

Now we will create all the configuration files required to create the S3 bucket in the AWS account.

  • Create a folder in the /opt directory and name it terraform-s3-demo.
mkdir /opt/terraform-s3-demo
cd /opt/terraform-s3-demo
  • Create a main.tf file under the terraform-s3-demo folder and paste the content below. (The inline encryption, versioning, and lifecycle blocks below target AWS provider v3; in provider v4+ they move to separate resources.)
# Bucket Access

resource "aws_s3_bucket_public_access_block" "publicaccess" {
  bucket = aws_s3_bucket.demobucket.id
  block_public_acls       = false
  block_public_policy     = false
}

# Creating the encryption key which will encrypt the bucket objects

resource "aws_kms_key" "mykey" {
  deletion_window_in_days = "20"
}

# Creating the bucket

resource "aws_s3_bucket" "demobucket" {

  bucket          = var.bucket
  force_destroy   = var.force_destroy

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = aws_kms_key.mykey.arn
        sse_algorithm     = "aws:kms"
      }
    }
  }
  versioning {
    enabled               = true
  }
  lifecycle_rule {
    prefix  = "log/"
    enabled = true
    expiration {
      date = var.date
    }
  }
}
  • Create a vars.tf file under the terraform-s3-demo folder and paste the content below.
variable "bucket" {
 type = string
}
variable "force_destroy" {
 type = string
}
variable "date" {
 type = string
}
  • Create a provider.tf file under the terraform-s3-demo folder and paste the content below.
provider "aws" {
  region = "us-east-2"
}
  • Create a terraform.tfvars file under the terraform-s3-demo folder and paste the content below.
bucket          = "terraformdemobucket"
force_destroy   = false
date = "2022-01-12"
  • Now your files and code are ready for execution. Initialize Terraform:
terraform init
  • Terraform initialized successfully; now it's time to see the plan, which is a kind of blueprint before deployment. We generally use plan to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
  • After verification, it's time to actually deploy the code using apply:
terraform apply

The Terraform commands executed successfully, and you should now have the AWS S3 bucket launched in AWS. Verify it by navigating to the AWS account, searching for the AWS S3 service, and checking that the bucket has been created.
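
You can also confirm from the CLI that versioning and encryption came out as configured:

# Both should reflect the settings from main.tf
aws s3api get-bucket-versioning --bucket terraformdemobucket
aws s3api get-bucket-encryption --bucket terraformdemobucket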

Upload an object in AWS S3 bucket

All the files and folders inside AWS S3 buckets are known as objects.

  • Now let us upload a sample text file to the bucket. Click on the terraformdemobucket.
  • Now click on Upload and then Add files.
  • Choose any file from your system; we used sample.txt. (The same upload also works from the CLI, as shown below.)
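
A quick CLI equivalent of the same upload:

# Copy a local file into the bucket and list the contents to confirm
aws s3 cp sample.txt s3://terraformdemobucket/
aws s3 ls s3://terraformdemobucket/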

Conclusion

In this tutorial, we went over some benefits of Amazon AWS S3 and learnt how to set up an AWS S3 bucket using Terraform, step by step. A great deal of application and website data is stored on AWS S3, and for hosting static content it is one of the best services on the market.

Hope this tutorial helps you understand Terraform and provisioning AWS S3 on the Amazon cloud. Please share it with your friends.