What is AWS CloudFront and How to Set Up Amazon CloudFront with AWS S3 and ALB Distributions

Internet users are always impressed by websites that load quickly. Why not build a website that delivers content fast with AWS CloudFront?

In this tutorial, you will learn what AWS CloudFront is and how to set up Amazon CloudFront with AWS S3 and ALB distributions, which enables users to retrieve content quickly through caching.

Let’s get started.


Table of Contents

  1. What is AWS CloudFront?
  2. How AWS CloudFront delivers content to your users
  3. Amazon CloudFront caching with regional edge caches
  4. Prerequisites
  5. Creating an IAM user in AWS account with programmatic access
  6. Configuring the IAM user Credentials on local Machine
  7. How to Set up AWS CloudFront
  8. How to Use Custom URLs in AWS CloudFront by Adding alternate Domain Names (CNAMEs)
  9. Using Amazon EC2 as the Origins in the AWS CloudFront
  10. Conclusion

What is AWS CloudFront?

AWS CloudFront is an Amazon web service that speeds up the distribution of static and dynamic content such as .html, .css, and .js files, images, and live video streams. CloudFront delivers content quickly from edge locations when users request it.

If the content is not available at an edge location, CloudFront fetches it from the configured origin, such as an AWS S3 bucket, an HTTP server, or a load balancer. You can also use Lambda@Edge with CloudFront to further customize how content is processed.

How AWS CloudFront delivers content to your users

Now that you have a basic idea of CloudFront, it is also important to know how AWS CloudFront delivers content to users.

Initially, when users request a website or application page such as example.com/mypage.html, the DNS server routes the request to the AWS CloudFront edge location that can best serve it.

Next, CloudFront checks whether the request can be fulfilled from the edge location's cache; if not, CloudFront queries the origin server. The origin server sends the files back to the edge location, and CloudFront then returns them to the user.
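You can observe this flow yourself with curl once a distribution exists (the domain below is the example distribution used later in this tutorial; replace it with your own). CloudFront reports cache behavior in the X-Cache response header:

curl -I https://dsx78lsseoju7.cloudfront.net/index.html
# First request:  X-Cache: Miss from cloudfront
# Repeat request: X-Cache: Hit from cloudfront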

AWS CloudFront architecture

Amazon CloudFront caching with regional edge caches

Delivering content from edge locations is good, but if you want to further improve the performance and latency of content delivery, there is an additional caching layer based on region, known as the regional edge cache.

Regional edge caches help with all types of content, particularly content that becomes less popular over time: user-generated content, videos, photos, and e-commerce assets such as product photos and videos.

The regional edge cache sits between the origin server and the edge locations. An edge location stores and caches content, but when content becomes too old, the edge location evicts it from its cache; the regional edge cache, which has a much larger capacity, keeps such content longer.

Regional edge cache

Prerequisites

  • You must have an AWS account in order to set up AWS CloudFront. If you don’t have one, create an AWS account here: AWS account.
  • An AWS S3 bucket already created.

Creating an IAM user in AWS account with programmatic access

To connect to an AWS service, you need an IAM user with an access key ID and secret access key in the AWS account; you will configure these on your local machine to connect to the AWS account from there.

There are two ways to connect to an AWS account: the first is to provide a username and password on the AWS login page in the browser, and the other is to configure the access key ID and secret access key on your machine and then connect programmatically using command-line tools.

  1. Open your favorite web browser and navigate to the AWS Management Console and log in.
  2. While in the Console, click on the search bar at the top, search for ‘IAM’, and click on the IAM menu item.
Opening the IAM service in AWS cloud
  3. To create a user, click on Users → Add user, provide the user name myuser, and make sure to tick the Programmatic access checkbox under Access type, which enables an access key ID and secret access key. Then hit the Next: Permissions button.
Adding the AWS IAM user with Programmatic access
  4. Now select the “Attach existing policies directly” option under Set permissions and search for the “AdministratorAccess” policy using the filter policies search box. This policy will give myuser full access to AWS services.
Granting the Administrator Access to the IAM user
  5. Finally, click on Create user.
  6. The user is now created successfully, and you will see an option to download a .csv file. Download this file; it contains the myuser access key ID and secret access key, which you will use later in the tutorial to connect to AWS services from your local machine.
Downloading the AWS IAM user access key and secret key
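As a side note, if you already have admin credentials configured on a machine, the same user can be created from the AWS CLI; a minimal sketch using the user name and managed policy from this tutorial:

aws iam create-user --user-name myuser
aws iam attach-user-policy --user-name myuser --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam create-access-key --user-name myuser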

Configuring the IAM user Credentials on local Machine

Now you have an IAM user, myuser. The next step is to configure the downloaded myuser credentials on the local machine, which you will use to connect to AWS services via API calls.

  1. Create a new file, C:\Users\your_profile\.aws\credentials on your local machine.
  2. Next, enter the Access key ID and Secret access key from the downloaded .csv file into the credentials file in the format below, and save the file.
[default]     # Profile Name
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = vIaGXXXXXXXXXXXXXXXXXXXX

The credentials file lets you define profiles. This way, you can create multiple profiles and avoid confusion when connecting to specific AWS accounts.

  3. Similarly, create another file, C:\Users\your_profile\.aws\config, in the same directory.
  4. Next, add the region to the config file, making sure to use the same profile name you provided in the credentials file, and save the file. This file lets you work with a specific region.
[default]   # Profile Name
region = us-east-2
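Once both files are saved, you can verify that the profile and region are picked up correctly with two AWS CLI commands (assuming the AWS CLI is installed):

aws configure list
aws sts get-caller-identity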

How to Set up AWS CloudFront

Now that you know what AWS CloudFront is and have an IAM user, you are ready to set up AWS CloudFront in the AWS cloud. Let's do it.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘CloudFront’, and click on the CloudFront menu item.
Searching for AWS CloudFront in AWS Cloud
  • Click on Create distribution and then Get Started.
Creating the AWS CloudFront distribution
  • Now, in the Origin Settings, provide the AWS S3 bucket name and keep the other values as default.
Selecting the AWS S3 bucket as the origin in AWS CloudFront
  • For the settings under Default Cache Behavior Settings and Distribution Settings, accept the default values and then click on Create distribution.
AWS S3 bucket setup in AWS CloudFront
AWS CloudFront distribution
  • Now upload an index.html file containing the text hello to the AWS S3 bucket, and set its permissions to public access as shown below.
Uploading the file in the AWS S3 bucket
Granting permissions to the file in the AWS S3 bucket
  • Now check the Amazon S3 URL to verify that your content is publicly accessible.
Checking the content of the file in the AWS S3 bucket using the AWS S3 URL
  • Finally, check the CloudFront URL by hitting domain-name/index.html; it should show the same content as your index.html file.
domainname/index.html
Checking the content of the file in the AWS S3 bucket using the CloudFront URL
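As a side note, a roughly equivalent distribution can also be created from the AWS CLI; a minimal sketch, where mybucket is a placeholder for your own bucket name:

aws cloudfront create-distribution \
    --origin-domain-name mybucket.s3.amazonaws.com \
    --default-root-object index.html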

How to Use Custom URLs in AWS CloudFront by Adding alternate Domain Names (CNAMEs)

Previously, the CloudFront URL was generated with the default *.cloudfront.net domain name, but in production it is important to configure your own domain name, that is, a CNAME such as abc.com, in the URL. Let's learn how to use custom URLs in AWS CloudFront by adding alternate domain names (CNAMEs).

Earlier, the default URL of AWS CloudFront was http://dsx78lsseoju7.cloudfront.net/index.html, but if you wish to use an alternate domain such as http://abc.com/index.html, follow the steps below:

  • Navigate back to the CloudFront page, look for the distribution whose domain you need to change, and click on Edit.
Updating the custom URL in AWS CloudFront
  • Here, provide the domain name you wish to configure, along with a valid SSL certificate.
Updating the CNAME and SSL certificate in AWS CloudFront
  • The domain name is now successfully updated in CloudFront, but for the URL to work you will need to configure a few things in the Route53 AWS service, such as an alias record set. To do that, navigate to the Route53 page by searching at the top of the AWS page.
Opening the AWS Route53 service
  • Click on the Hosted Zone and then click on Create Record.
Opening the Hosted zone to create a record
  • Now provide the record name and record type, and set Route traffic to your CloudFront distribution. After you configure Route53, verify the index page (http://mydomain.abc.com/index.html); it should work fine.
Creating the record in Route53 to route the new domain to CloudFront
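If you prefer the CLI, the alias record can also be created with aws route53 change-resource-record-sets; a hedged sketch, where ZEXAMPLE12345 is a placeholder hosted zone ID and Z2FDTNDATAQYW2 is the fixed hosted zone ID used for CloudFront alias targets:

aws route53 change-resource-record-sets --hosted-zone-id ZEXAMPLE12345 --change-batch '{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "mydomain.abc.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "dsx78lsseoju7.cloudfront.net",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'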

Using Amazon EC2 as the Origins in the AWS CloudFront

A custom origin can be an Amazon Elastic Compute Cloud (AWS EC2) instance, for example an HTTP server. You provide the DNS name of the AWS EC2 instance as the custom origin, but when setting AWS EC2 as a custom origin, make sure to follow some basic guidelines:

  • Host the same content on all servers and synchronize their clocks.
  • Restrict access requests to the HTTP and HTTPS ports that your custom origin (the AWS EC2 instance) listens on.
  • Use an Elastic Load Balancing load balancer to handle traffic across multiple Amazon EC2 instances, and when you create your CloudFront distribution, specify the URL of the load balancer as the domain name of your origin server; a CLI sketch follows below.
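For example, the following minimal sketch creates a distribution whose origin is a load balancer (the DNS name is a hypothetical ALB name; substitute your own):

aws cloudfront create-distribution \
    --origin-domain-name my-load-balancer-1234567890.us-east-2.elb.amazonaws.com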

Conclusion

This tutorial taught you what CloudFront is and how to set up CloudFront distributions in the Amazon cloud. The benefit of using CloudFront is that it allows users to retrieve content quickly by utilizing caching.

So next, what are you going to manage with CloudFront?

How to Launch AWS S3 bucket using Shell Scripting

Is your data stored securely, scalably, with high availability and fault tolerance? If not, consider using Amazon Simple Storage Service (Amazon S3) in the AWS cloud.

This tutorial will teach you how to launch an AWS S3 bucket in an Amazon account using bash or shell scripting.

Let’s dive into it quickly.


Table of Contents

  1. What is Shell Script or Bash Script?
  2. What is the Amazon AWS S3 bucket?
  3. Prerequisites
  4. Building a shell script to create AWS S3 bucket in Amazon account
  5. Executing the Shell Script to Create AWS S3 bucket in Amazon Cloud
  6. Verifying the AWS S3 bucket in AWS account
  7. Conclusion

What is Shell Script or Bash Script?

A shell script is a text file containing a list of commands that are executed on the terminal or shell in one go, in sequential order. Shell scripts perform various important tasks such as file manipulation, printing text, and program execution.

A shell script can include environment variables, comments, conditions, pipes, functions, and more to make it dynamic.

When you execute a shell script or function, a command interpreter goes through the ASCII text line by line, loop by loop, test by test, and executes each statement as it is reached, from top to bottom.
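For example, the interpreter runs this three-command script strictly from top to bottom:

#!/usr/bin/bash
greeting="Hello from a shell script"   # variable assignment runs first
echo "$greeting"                       # then this line prints the text
date                                   # finally the current date is printed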

What is the Amazon AWS S3 bucket?

Why is it called S3? The name stands for its three words beginning with “S”: Simple Storage Service. The AWS S3 service helps you store unlimited data safely and efficiently. Everything in the AWS S3 service is an object: PDF files, zip files, text files, war files, anything. Some of the features of the AWS S3 bucket are below, with a quick CLI illustration after the list:

  • To store data in an AWS S3 bucket, you upload it as objects.
  • To keep your AWS S3 bucket secure, add the necessary permissions to an IAM role or IAM user.
  • AWS S3 bucket names are globally unique, which means a given bucket name can exist only once across all accounts and regions.
  • By default, you can create up to 100 buckets in an AWS account; beyond that, you need to raise a service limit request with Amazon.
  • The owner of an AWS S3 bucket is specific to the AWS account that created it.
  • AWS S3 buckets are created in a specific region, such as us-east-1, us-east-2, us-west-1, or us-west-2.
  • AWS S3 bucket objects can be created in the AWS console or by using the AWS S3 API.
  • AWS S3 buckets can be made publicly visible, meaning anybody on the internet can access them, but it is recommended to keep public access blocked for all buckets unless it is absolutely required.
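To make a couple of these points concrete, here is a short AWS CLI sketch (the bucket name is a placeholder and must be globally unique):

# Buckets are region specific; regions other than us-east-1 need a location constraint
aws s3api create-bucket --bucket my-unique-bucket-name --region us-east-2 \
    --create-bucket-configuration LocationConstraint=us-east-2

# Recommended: block all public access unless it is truly required
aws s3api put-public-access-block --bucket my-unique-bucket-name \
    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true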

Prerequisites

  1. An AWS account. If you don’t have an AWS account, please create one here: AWS account.
  2. A Windows 7 or later machine where you will execute the shell script.
  3. AWS CLI installed. To install the AWS CLI, click here.
  4. Git Bash. To install Git Bash, click here.
  5. A code editor for writing the shell script on the Windows machine, such as Visual Studio Code. To install Visual Studio Code, click here.

Building a shell script to create AWS S3 bucket in Amazon account

Now that you have a good idea of the AWS S3 bucket and shell scripts, let's learn how to build a shell script to create an AWS S3 bucket in an Amazon account.

  • Create a folder on your Windows machine at any location. Inside that folder, create a file named create-s3.sh and copy/paste the code below.
#!/usr/bin/bash
# This script creates an S3 bucket and tags the bucket with an appropriate name.

# Check that AWS credentials are set up on this system

if ! grep -q aws_access_key_id ~/.aws/config; then
   if ! grep -q aws_access_key_id ~/.aws/credentials; then
      echo "AWS config not found or you don't have AWS CLI installed"
      exit 1
   fi
fi

# The read command prompts you to enter the name of the bucket you wish to create

read -r -p "Enter the name of the bucket: " bucketname

# First function: create the bucket
# (regions other than us-east-1 require an explicit location constraint)

function createbucket() {
   aws s3api create-bucket --bucket "$bucketname" --region us-east-2 \
       --create-bucket-configuration LocationConstraint=us-east-2
}

# Second function: tag the bucket

function tagbucket() {
   aws s3api put-bucket-tagging --bucket "$bucketname" \
       --tagging "TagSet=[{Key=Name,Value=$bucketname}]"
}

# The echo command prints to the screen

echo "Creating the AWS S3 bucket and tagging it!"
echo ""
createbucket    # Calling the createbucket function
tagbucket       # Calling the tagbucket function
echo "AWS S3 bucket $bucketname created successfully"
echo "AWS S3 bucket $bucketname tagged successfully"

Executing the Shell Script to Create AWS S3 bucket in Amazon Cloud

Previously you created the shell script to create an AWS S3 bucket in Amazon Cloud, which is great, but it is not doing much unless you run it. Let’s execute the shell script now.

  • Open Visual Studio Code and then open the folder containing create-s3.sh.
Opening the shell script in Visual Studio Code
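If Git Bash reports a permission error when you run the script, you may first need to make it executable:

chmod +x create-s3.sh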
  • Finally execute the shell script.
./create-s3.sh
Executing the shell script to create the AWS S3 bucket

Verifying the AWS S3 bucket in AWS account

Earlier, the shell script ran successfully; now let's verify that the AWS S3 bucket has been created in the AWS account.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘S3’, click on the S3 menu item, and you should see the list of AWS S3 buckets, including the bucket you specified in the shell script.
Viewing the AWS S3 bucket in AWS cloud
  • Also verify the tags you applied to the AWS S3 bucket by navigating to the Properties tab.
Viewing the AWS S3 bucket tags in the AWS cloud

Conclusion

In this tutorial, you learned how to create an Amazon AWS S3 bucket using a shell script, step by step. A great deal of phone and website data is stored on AWS S3.

Now that you have a newly created AWS S3 bucket, what do you plan to store in it?

How to Launch AWS S3 bucket on Amazon using Terraform

Do you have log rotation issues, or does your system hang and behave abruptly when lots of logs are generated on disk? Do you lack space to keep your important deployment jars or wars? Consider using Amazon Simple Storage Service (Amazon S3) to solve these issues.

Storing all your logs, deployment code, and scripts in Amazon's AWS S3 gives you unlimited, safe, secure, and fast storage.

In this tutorial, learn how to Launch an AWS S3 bucket on Amazon using Terraform. Let’s dive in.


Table of Contents

  1. What is the Amazon AWS S3 bucket?
  2. Prerequisites
  3. Terraform files and Terraform directory structure
  4. Building Terraform Configuration files to Create AWS S3 bucket using Terraform
  5. Uploading the Objects in the AWS S3 bucket
  6. Conclusion

What is the Amazon AWS S3 bucket?

Why is it called S3? The name stands for its three words beginning with “S”: Simple Storage Service. The AWS S3 service helps you store unlimited data safely and efficiently. Everything in the AWS S3 service is an object: PDF files, zip files, text files, war files, anything. Some of the features of the AWS S3 bucket are below:

  • To store data in an AWS S3 bucket, you upload it as objects.
  • To keep your AWS S3 bucket secure, add the necessary permissions to an IAM role or IAM user.
  • AWS S3 bucket names are globally unique, which means a given bucket name can exist only once across all accounts and regions.
  • By default, you can create up to 100 buckets in an AWS account; beyond that, you need to raise a service limit request with Amazon.
  • The owner of an AWS S3 bucket is specific to the AWS account that created it.
  • AWS S3 buckets are created in a specific region, such as us-east-1, us-east-2, us-west-1, or us-west-2.
  • AWS S3 bucket objects can be created in the AWS console or by using the AWS S3 API.
  • AWS S3 buckets can be made publicly visible, meaning anybody on the internet can access them, but it is recommended to keep public access blocked for all buckets unless it is absolutely required.
Recommended: Private bucket

Prerequisites

  • An Ubuntu machine to run Terraform commands. If you don’t have an Ubuntu machine, you can create an AWS EC2 instance in your AWS account with 4GB RAM and at least 5GB of drive space.
  • Terraform installed on the Ubuntu machine. If you don’t have Terraform installed, refer to Terraform on Windows Machine / Terraform on Ubuntu Machine.
  • The Ubuntu machine should have an IAM role attached with full access to create AWS S3 buckets, or administrator permissions.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

Terraform files and Terraform directory structure

Now that you know what the Amazon AWS S3 bucket is, let's dive into Terraform files and the Terraform directory structure, which will help you write the Terraform configuration files later in this tutorial.

Terraform code, that is, Terraform configuration files, is written in a tree-like structure to ease the overall understanding of the code, in .tf, .tf.json, or .tfvars format. These configuration files are placed inside Terraform modules.

Terraform modules are at the top level of the hierarchy, where configuration files reside. Terraform modules can call child modules from local directories, anywhere on disk, or the Terraform Registry.

A Terraform module mainly contains five files, main.tf, vars.tf, providers.tf, output.tf, and terraform.tfvars, plus a .terraform directory:

  1. main.tf – The Terraform main.tf file contains the main code where you define which resources to build, update, or manage.
  2. vars.tf – The Terraform vars.tf file contains the input variables, which are customizable and referenced inside the main.tf configuration file.
  3. output.tf – The Terraform output.tf file is where you declare the output parameters you wish to fetch after Terraform has run, that is, after the terraform apply command.
  4. .terraform – This directory contains cached provider and module plugins, and also the last known backend configuration. It is managed by Terraform and created after you run the terraform init command.
  5. terraform.tfvars – This file contains the values to pass for the variables that are referenced in main.tf and declared in vars.tf.
  6. providers.tf – The providers.tf file is where you define your Terraform providers, such as the Terraform AWS provider or the Terraform Azure provider, to authenticate with the cloud provider. A quick listing of the layout used in this tutorial follows this list.
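For reference, once the files from the next section are created, the module layout looks like this (as the tree command would show it; this tutorial declares no outputs, so there is no output.tf):

tree /opt/terraform-s3-demo
# /opt/terraform-s3-demo
# ├── main.tf
# ├── provider.tf
# ├── terraform.tfvars
# └── vars.tf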

Building Terraform Configuration files to Create AWS S3 bucket using Terraform

Now that you know what Terraform configuration files look like and how to declare each of them, in this section you will learn how to build Terraform configuration files to create an AWS S3 bucket in your AWS account before running the Terraform commands. Let's get into it.

  • Log in to the Ubuntu machine using your favorite SSH client.
  • Create a folder in the /opt directory named terraform-s3-demo and switch to that folder.
mkdir /opt/terraform-s3-demo
cd /opt/terraform-s3-demo
  • Create a file named main.tf inside the /opt/terraform-s3-demo directory and copy/paste the content below. This file:
    • Creates the AWS S3 bucket in the AWS account.
    • Manages public access to the AWS S3 bucket.
    • Creates the encryption key that will protect objects in the AWS S3 bucket.
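# Note: the inline versioning, lifecycle_rule, and server_side_encryption_configuration
# blocks below assume AWS provider v3.x; provider v4 and later split these into separate
# resources such as aws_s3_bucket_versioning and aws_s3_bucket_lifecycle_configuration.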
# Providing the access to the AWS S3 bucket.

resource "aws_s3_bucket_public_access_block" "publicaccess" {
  bucket = aws_s3_bucket.demobucket.id
  block_public_acls       = false
  block_public_policy     = false
}

# Creating the encryption key which will encrypt the bucket objects

resource "aws_kms_key" "mykey" {
  deletion_window_in_days = "20"
}

# Creating the AWS S3 bucket.

resource "aws_s3_bucket" "demobucket" {

  bucket          = var.bucket
  force_destroy   = var.force_destroy

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = aws_kms_key.mykey.arn
        sse_algorithm     = "aws:kms"
      }
    }
  }
  versioning {
    enabled          = true
  }
  lifecycle_rule {
    prefix  = "log/"
    enabled = true
    expiration {
      date = var.date
    }
  }
}
  • Create another file named vars.tf inside the /opt/terraform-s3-demo directory and copy/paste the content below. This file contains all the variables that are referenced in the main.tf configuration file.
variable "bucket" {
 type = string
}
variable "force_destroy" {
 type = string
}
variable "date" {
 type = string
}
  • Create another file named provider.tf inside the /opt/terraform-s3-demo directory and copy/paste the content below. The provider.tf file allows Terraform to connect to the AWS cloud.
provider "aws" {
  region = "us-east-2"
}

  • Create one more file, terraform.tfvars, inside the same folder and copy/paste the content below. This file contains the values of the variables that you declared in vars.tf and referenced in main.tf.
bucket          = "terraformdemobucket"
force_destroy   = false
date = "2022-01-12"
  • The folder structure of all the files should now look like below.
The folder structure of all the files in /opt/terraform-s3-demo
  • Now your files and code are ready for execution. Initialize Terraform using the terraform init command.
terraform init
Initializing Terraform using the terraform init command
  • Terraform initialized successfully; now it's time to run the plan command, which gives you the details of the deployment. Run the terraform plan command to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
Running the terraform plan command
  • After verification, it's time to actually deploy the code using the terraform apply command.
terraform apply
Running the terraform apply command

The Terraform commands terraform init → terraform plan → terraform apply all executed successfully. But it is important to manually verify the AWS S3 bucket in the AWS Management Console.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘S3’, and click on the S3 menu item.
Verifying the AWS S3 bucket that Terraform created
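You can also verify the bucket from the command line, assuming the AWS CLI is configured against the same account:

aws s3 ls | grep terraformdemobucket
aws s3api get-bucket-tagging --bucket terraformdemobucket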

Uploading the Objects in the AWS S3 bucket

Now that you have the AWS S3 bucket created in the AWS account, let's upload a sample text file to the bucket by clicking on the Upload button.

Navigating to the AWS S3 bucket
  • Now click on the Add files button and choose any files you wish to add to the newly created AWS S3 bucket. This tutorial uploads a sample.txt file.
Adding the files in the newly created bucket
  • As you can see, sample.txt has been uploaded successfully.
Verifying the sample.txt in the AWS bucket
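The same upload can also be done from the command line; a small sketch, assuming a local sample.txt in the current directory:

aws s3 cp sample.txt s3://terraformdemobucket/
aws s3 ls s3://terraformdemobucket/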


Conclusion

In this tutorial, you learned about Amazon AWS S3 and how to create an Amazon AWS S3 bucket using Terraform.

A great deal of phone and website data is stored on AWS S3, so what do you plan to store in this newly created AWS bucket?