What is AWS S3 Bucket?

In this quick tutorial you will learn everything you need to know about Amazon's storage service, AWS S3.


What is AWS S3 Bucket?

Amazon Simple Storage Service (Amazon S3) allows you to store objects of any size securely, with good performance, scalability, and availability. You can store a virtually unlimited amount of data in an AWS S3 bucket. Let's get into some of the important features of the AWS S3 bucket.

  • There are various S3 storage classes which can be used according to the requirements.
  • You can also configure S3 Lifecycle rules, which allow you to manage your objects efficiently and move them to different storage classes.
  • S3 Object Lock: you can add an object lock for a particular period so that objects are not deleted or overwritten by mistake.

  • S3 replication: you can replicate objects to different destinations, such as other buckets or other regions, as required.

  • S3 Batch Operations: you can manage large numbers of objects in a single request using batch operations.

  • You can block public access to S3 buckets and objects. By default, Block Public Access settings are turned on at the account and bucket level.

  • You can apply IAM policies to users or roles to access S3 buckets securely. You can also apply resource-based policies on AWS S3 buckets and objects.

  • You can also apply access control lists (ACLs) on a particular bucket or on particular objects.

  • You can disable ACLs and take ownership of every object in your bucket. As the bucket owner, you then have rights over every object in your bucket.

  • You can also use Access Analyzer for S3 to evaluate all the access policies.
  • You can have up to 100 buckets in your AWS account by default.
  • Once a bucket is created, you cannot change its name or region afterwards.
  • Every object is identified by a name (the key) and a version ID, and every object in a bucket has exactly one key.

You can access your bucket through the Amazon S3 console, and programmatically using both virtual-hosted-style and path-style URLs.

https://bucket-name.s3.region-code.amazonaws.com/key-name  (virtual-hosted-style)

https://s3.region-code.amazonaws.com/bucket-name/key-name  (path-style)

AWS S3 Bucket Access Control List

  • You can set S3 Object Ownership in the AWS S3 bucket-level settings and disable ACLs so that you own every object in the bucket.
  • When another AWS account uploads objects to a bucket in your account, that account owns those objects and has access to them; but if you disable ACLs, the bucket owner automatically owns every object in the bucket.

S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to disable access control lists (ACLs) and take ownership of every object in your bucket, simplifying access management for data stored in Amazon S3.

AWS S3 Object Encryption

Amazon S3 encryption is applied in transit and at rest. Server-side encryption encrypts the object before saving it to disk and decrypts it when you download it. There are three server-side encryption options:

  • Server-side encryption with Amazon S3 managed keys (SSE-S3)
  • Server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS)
  • Server-side encryption with customer-provided keys (SSE-C)

Client-side encryption can also be performed before sending objects to the S3 bucket.
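For example, to request SSE-KMS on a single upload from the AWS CLI (a minimal sketch; the bucket name and the KMS key ID are placeholders you would replace with your own):

aws s3 cp file.txt s3://my-bucket/file.txt --sse aws:kms --sse-kms-key-id <your-kms-key-id>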

AWS S3 Bucket Policy

An AWS S3 bucket policy is a resource-based policy that allows you to grant permissions on your bucket and its objects. Only the bucket owner's account can associate a policy with the bucket, and bucket policies use the standard access policy language.

AWS S3 bucket policy examples

In this section we will go through some examples of bucket policies. With a bucket policy you can secure access to the objects in your buckets so that only users with the appropriate permissions can access them.

S3 bucket policy to require that each object is encrypted with server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS)

To require server-side encryption of all objects in a particular Amazon S3 bucket, you can use a bucket policy such as the one below.

{
   "Version":"2012-10-17",
   "Id":"PutObjectPolicy",
   "Statement":[{
         "Sid":"DenyUnEncryptedObjectUploads",
         "Effect":"Deny",
         "Principal":"*",
         "Action":"s3:PutObject",
         "Resource":"arn:aws:s3:::DOC-EXAMPLE-BUCKET1/*",
         "Condition":{
            "StringNotEquals":{
               "s3:x-amz-server-side-encryption":"aws:kms"
            }
         }
      }
   ]
}
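To attach a policy like this to your bucket from the AWS CLI, you can save the JSON to a file and apply it (shown with the example bucket name from the policy; adjust the file name and bucket to match your setup):

aws s3api put-bucket-policy --bucket DOC-EXAMPLE-BUCKET1 --policy file://policy.json

With the policy in place, uploads that request SSE-KMS (for example, aws s3 cp file.txt s3://DOC-EXAMPLE-BUCKET1/ --sse aws:kms) succeed, while unencrypted uploads are denied.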

S3 bucket policy that requires SSE-KMS with a specific AWS KMS key for all objects written to a bucket

{
"Version": "2012-10-17",
"Id": "PutObjPolicy",
"Statement": [{
  "Sid": "DenyObjectsThatAreNotSSEKMSWithSpecificKey",
  "Principal": "*",
  "Effect": "Deny",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
  "Condition": {
    "ArnNotEqualsIfExists": {
      "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-east-2:111122223333:key/01234567-89ab-cdef-0123-456789abcdef"
    }
  }
}]
}

Grant cross-account permissions to upload objects while ensuring that the bucket owner has full control

{
   "Version":"2012-10-17",
   "Statement":[
     {
       "Sid":"PolicyForAllowUploadWithACL",
       "Effect":"Allow",
       "Principal":{"AWS":"111122223333"},
       "Action":"s3:PutObject",
       "Resource":"arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
       "Condition": {
         "StringEquals": {"s3:x-amz-acl":"bucket-owner-full-control"}
       }
     }
   ]
}
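With this policy attached, the other account (111122223333 in the example) must grant the bucket owner full control when uploading. An upload from that account would look roughly like the following (test.txt is a placeholder file name):

aws s3 cp test.txt s3://DOC-EXAMPLE-BUCKET/test.txt --acl bucket-owner-full-control

A request without the --acl bucket-owner-full-control flag does not match the condition, so it is not allowed by this statement.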

How to remove bucket content completely using aws s3 rm

To remove bucket content completely run the below command.

aws s3 rm s3://bucket-name --recursive

Deleting an AWS S3 bucket – To delete an Amazon S3 bucket, run the below command (the --force flag first deletes all remaining objects, so the bucket does not need to be empty).

aws s3 rb s3://bucket-name --force 

How to transform data with S3 object Lambda

To Transform the data with AWS S3 Object Lambda follow the below steps:

  • Prerequisites
  • Step 1: Create an S3 bucket
  • Step 2: Upload a file to the S3 bucket
  • Step 3: Create an S3 access point
  • Step 4: Create a Lambda function
  • Step 5: Configure an IAM policy for your Lambda function’s execution role
  • Step 6: Create an S3 Object Lambda Access Point
  • Step 7: View the transformed data
  • Step 8: Clean up

List S3 bucket contents using the AWS S3 CLI command ( aws s3 ls )

To list a bucket using the AWS CLI, use the below command. It lists all prefixes and objects in the bucket.

aws s3 ls s3://mybucket

AWS S3 Sync

Syncs directories and S3 prefixes. Recursively copies new and updated files from the source directory to the destination. Only creates folders in the destination if they contain one or more files.

The following sync command uploads the files in the current local directory to the specified bucket, copying any new or updated files to S3.

aws s3 sync . s3://mybucket
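The same command also works in the other direction; for example, to download new and updated objects from the bucket into the current local directory (same example bucket name as above):

aws s3 sync s3://mybucket .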

AWS S3 cp recursive

To copy all objects recursively from a local directory to a bucket (or from a bucket to a local directory), use the cp command with the --recursive flag, as shown below.
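For example (assuming a bucket named mybucket, as in the other examples in this tutorial):

aws s3 cp . s3://mybucket --recursive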

aws s3 mv

Moves a local file or S3 object to another location locally or in S3. The following mv command moves a single file to a specified bucket and key.

aws s3 mv test.txt s3://mybucket/test2.txt

Conclusion

In this tutorial we learned important AWS S3 concepts such as its uses, bucket policies, and the features of an AWS S3 bucket.


How to allow only HTTPS requests on AWS S3 buckets using AWS S3 Policy

It is important for your infrastructure to be secure. Similarly, if you wish to secure your AWS S3 bucket contents, you need to make sure that you allow only secure requests, that is, requests that work over HTTPS.

In this quick tutorial you will learn How to allow only HTTPS requests on AWS S3 buckets using AWS S3 Policy on a bucket.

Let's get started.

Prerequisites

  • AWS account
  • One AWS Bucket

Creating AWS S3 bucket Policy for AWS S3 bucket

The below policy has a single statement which performs the below actions:

  • Version is the policy language version; 2012-10-17 is the current version.
  • The statement restricts all requests except HTTPS requests on the AWS S3 bucket ( my-bucket ).
  • Deny here means it denies any request that does not use secure transport (HTTPS).
{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RestrictToTLSRequestsOnly",
        "Action": "s3:*",
        "Effect": "Deny",
        "Resource": [
            "arn:aws:s3:::my-bucket",
            "arn:aws:s3:::my-bucket/*"
        ],
        "Condition": {
            "Bool": {
                "aws:SecureTransport": "false"
            }
        },
        "Principal": "*"
    }]
}
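After attaching the policy, you can confirm it is in place from the AWS CLI (my-bucket is the example bucket name used above):

aws s3api get-bucket-policy --bucket my-bucket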

Conclusion

This tutorial demonstrated how to allow only HTTPS requests on AWS S3 buckets using AWS S3 Policy.

How to list an AWS S3 bucket and put objects using an IAM policy

Are you struggling to list your AWS S3 bucket and unable to upload data? If yes, then don't worry, this tutorial is for you.

In this quick tutorial you will learn how you can list all the AWS Amazon S3 buckets and upload objects into it by assigning IAM policy to a user or a role.

Let's get started.

Prerequisites

  • AWS account
  • One AWS Bucket

Creating IAM policy for AWS S3 to list buckets and put objects

The below policy has two statements which perform the below actions:

  • The first statement allows you to list objects in the AWS S3 bucket named (my-bucket-name).
  • The second statement allows you to perform all object-level actions, such as s3:PutObject, s3:GetObject, s3:DeleteObject, etc., on objects in the AWS S3 bucket named (my-bucket-name).
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListObjectsInBucket",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::my-bucket-name"]
        },
        {
            "Sid": "AllObjectActions",
            "Effect": "Allow",
            "Action": "s3:*Object",
            "Resource": ["arn:aws:s3:::my-bucket-name/*"]
        }
    ]
}
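To try the policy out, you could attach it inline to an IAM user and then list and upload with that user's credentials (the user name, policy name, and file names below are placeholders for this illustration):

aws iam put-user-policy --user-name myuser --policy-name S3ListAndWrite --policy-document file://policy.json
aws s3 ls s3://my-bucket-name
aws s3 cp file.txt s3://my-bucket-name/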

Conclusion

This tutorial demonstrated how you can list an Amazon S3 bucket and upload objects into it by assigning an IAM policy to a user or a role.

How to Access AWS S3 bucket using S3 policy

Are you struggling to access your AWS S3 bucket? If yes, then this tutorial is for you.

In this quick tutorial you will learn how you can grant read-write access to an Amazon S3 bucket by assigning S3 policy to the role.

Let's get started.

Prerequisites

  • AWS account
  • One AWS Bucket named sagarbucket2023

Creating IAM S3 Policy

The below policy is useful when any of your applications needs to use the AWS S3 bucket, whether for reading data (for example, serving a website) or for storing data, i.e. writing it to the AWS S3 bucket.

The below policy contains the following attributes:

  • Version is the policy language version, which is fixed at 2012-10-17.
  • Effect is Allow in each statement, as we want users or groups to be able to work with AWS S3.
  • Actions: we use actions such as s3:ListAllMyBuckets to list the buckets, s3:ListBucket to list a bucket's contents, and s3:GetObject, s3:PutObject, and s3:DeleteObject to work with objects.
  • Resource is my AWS S3 bucket named sagarbucket2023
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::sagarbucket2023"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::sagarbucket2023/*"]
    }
  ]
}
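To use this policy from an application, you would typically attach it to the IAM role the application assumes, for example as an inline policy (the role name and policy name here are placeholders):

aws iam put-role-policy --role-name MyAppRole --policy-name S3ReadWrite --policy-document file://policy.json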

Conclusion

This tutorial demonstrated that if you need to read or write data in an AWS S3 bucket, the policy attached to your IAM user or IAM role should be defined as shown above.

What is AWS CloudFront and how to Setup Amazon CloudFront with AWS S3 and ALB Distributions

Internet users are always impressed with websites’ high speed & loading capacities. Why not have a website that loads the content quickly and delivers fast with AWS Cloudfront?

In this tutorial, you will learn what AWS CloudFront is and how to set up Amazon CloudFront with AWS S3 and ALB distributions, which enables users to retrieve content quickly by utilizing the concept of caching.

Let’s get started.


Table of Content

  1. What is AWS Cloudfront?
  2. How AWS Cloudfront delivers content to your users
  3. Amazon Cloudfront caching with regional edge caches
  4. Prerequisites
  5. Creating an IAM user in AWS account with programmatic access
  6. Configuring the IAM user Credentials on local Machine
  7. How to Set up AWS CloudFront
  8. How to Use Custom URLs in AWS CloudFront by Adding alternate Domain Names (CNAMEs)
  9. Using Amazon EC2 as the Origins in the AWS CloudFront
  10. Conclusion

What is AWS Cloudfront?

AWS CloudFront is an Amazon web service that speeds up the distribution of static and dynamic content such as .html, .css, and .js files, images, and live video streams to users. CloudFront delivers the content quickly from edge locations when users request it.

If the content is not available in an edge location, CloudFront requests it from the configured origin, such as an AWS S3 bucket, an HTTP server, or a load balancer. Also, using Lambda@Edge with CloudFront adds more ways to customize CloudFront.

How AWS Cloudfront delivers content to your users

Now that you have a basic idea of CloudFront, knowing how AWS CloudFront delivers content to users is also important.

Initially, when users request a website or application such as example.com/mypage.html, the DNS server routes the request to AWS Cloudfront edge locations.

Next CloudFront checks if the request can be fulfilled with edge location; else, CloudFront queries to the origin server. The Origin server sends the files back to the edge location, and further Cloudfront sends them back to the user.

AWS Cloudfront architecture

Amazon Cloudfront caching with regional edge caches

Delivering the content from the edge location is fine. Still, if you want to further improve the performance and latency of content delivery, there is a further caching mechanism based on region, known as the regional edge cache.

Regional edge caches help with all types of content, particularly content that becomes less popular over time, such as user-generated content, videos, photos, e-commerce assets such as product photos and videos, etc.

The regional edge cache sits between the origin server and the edge locations. An edge location stores content in its cache, but when content becomes less popular it is removed from that cache and served from the regional edge cache, which has a larger capacity to store lots of content.

Regional edge cache

Prerequisites

  • You must have an AWS account in order to set up AWS CloudFront. If you don't have an AWS account, please create one from here: AWS account.
  • AWS S3 bucket created.

Creating an IAM user in AWS account with programmatic access

To connect to an AWS service, you should have an IAM user with an access key ID and secret key in the AWS account, which you will configure on your local machine so that you can connect to the AWS account from there.

There are two ways to connect to an AWS account, the first is providing a username and password on the AWS login page on the browser, and the other way is to configure Access key ID and secret keys on your machine and then use command-line tools to connect programmatically.

  1. Open your favorite web browser and navigate to the AWS Management Console and log in.
  2. While in the Console, click on the search bar at the top, search for ‘IAM’, and click on the IAM menu item.
Opening the IAM service in AWS cloud
  3. To create a user, click on Users → Add user, provide the user name myuser, and make sure to tick the Programmatic access checkbox in Access type, which enables an access key ID and secret access key, and then hit the Permissions button.
Adding the AWS IAM user with Programmatic access
  4. Now select the “Attach existing policies directly” option in the set permissions and look for the “Administrator” policy using filter policies in the search box. This policy will allow myuser to have full access to AWS services.
Granting the Administrator Access to the IAM user
  5. Finally click on Create user.
  6. Now the user is created successfully and you will see an option to download a .csv file. Download this file, which contains the IAM user's (myuser) Access key ID and Secret access key, which you will use later in the tutorial to connect to AWS services from your local machine.
Downloading the AWS IAM user with programmatic access that is access key and secret key

Configuring the IAM user Credentials on local Machine

Now you have the IAM user myuser created. The next step is to set up the downloaded myuser credentials on the local machine, which you will use to connect to AWS services via API calls.

  1. Create a new file, C:\Users\your_profile\.aws\credentials on your local machine.
  2. Next, Enter the Access key ID and Secret access key from the downloaded csv file into the credentials file in the same format and save the file.
[default]     # Profile Name
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = vIaGXXXXXXXXXXXXXXXXXXXX

The credentials file helps you set up your profile. This way, you can create multiple profiles and avoid confusion while connecting to specific AWS accounts.

  3. Similarly, create another file C:\Users\your_profile\.aws\config in the same directory.
  4. Next, add the “region” into the config file, make sure to add the name of the profile which you provided in the credentials file, and save the file. This file allows you to work with a specific region.
[default]   # Profile Name
region = us-east-2
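Optionally, you can confirm that the credentials and profile work before moving on (this check is not part of the original steps; aws sts get-caller-identity simply returns the account and user the CLI is authenticating as):

aws sts get-caller-identity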

How to Set up AWS CloudFront

Now that you know what AWS Cloudfront is, you have an IAM user that will allow you to set up the AWS Cloudfront in the AWS cloud. Let’s set up AWS Cloudfront.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘CloudFront’, and click on the CloudFront menu item.
Searching for AWS Cloudfront in AWS Cloud
  • Click on Create distributions and then Get Started
Creating the AWS Cloudfront distribution
  • Now in the Origin settings provide the AWS S3 bucket name and keep other values as default.
Aligning the AWS S3 bucket in the AWS Cloudfront in AWS Cloud
  • For the settings under Default Cache Behavior Set and Distribution Settings, accept the default values and then click on Create distribution.
AWS S3 bucket setup in AWS Cloudfront
AWS Cloudfront distribution
  • Now upload an index.html file containing the text hello to the AWS S3 bucket and set its permission to public access as shown below.
Uploading the file in AWS S3 bucket
Granting permissions to the file in AWS S3 bucket
  • Now check the Amazon S3 URL to verify that your content is publicly accessible
Checking the content of file of AWS S3 bucket using the AWS S3 URL
  • Finally, check the CloudFront URL by hitting domain-name/index.html; it should show the same content as your index.html file.
domainname/index.html
Checking the content of file of AWS S3 bucket using the Cloudfront URL
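You can also do this check from the command line; for example, with the distribution domain used later in this tutorial (your own distribution's domain name will differ):

curl https://dsx78lsseoju7.cloudfront.net/index.html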

How to Use Custom URLs in AWS CloudFront by Adding alternate Domain Names (CNAMEs)

Previously, the CloudFront URL was generated with a default domain name ending in *.cloudfront.net, but in production it is often important to configure your own domain name, that is, a CNAME such as abc.com, in the URL. Let's learn how to use custom URLs in AWS CloudFront by adding alternate domain names (CNAMEs).

Earlier, the default URL of AWS CloudFront was http://dsx78lsseoju7.cloudfront.net/index.html, but if you wish to use an alternate domain such as http://abc.com/index.html, follow the steps below:

  • Navigate back to CloudFront Page and look for the distribution where you need to change the domain and click on Edit
Updating the custom URL in AWS Cloudfront
  • Here, provide the domain name that you wish to configure, along with a valid SSL certificate.
Updating the CNAME and SSL certificate in AWS Cloudfront
  • Now the domain name is successfully updated in CloudFront, but for the URL to work you will need to configure a few things in the Route53 AWS service, such as an alias record set. To do that, navigate to the Route53 page by searching at the top of the AWS page.
Opening the AWS Route53 service
  • Click on the Hosted Zone and then click on Create Record
Opening the Hosted zone to create a record
  • Now provide the record name and record type, and set Route traffic to the CloudFront distribution. After you configure Route53, verify the index page ( http://mydomain.abc.com/index.html ) and it should work fine.
Creating the record in Route53 to route new domain to CloudFront

Using Amazon EC2 as the Origins in the AWS CloudFront

A custom origin can be an Amazon Elastic Compute Cloud (Amazon EC2) instance, for example one running an HTTP server. You need to provide the DNS name of the AWS EC2 instance as the custom origin, but while setting AWS EC2 as the custom origin, make sure to follow some basic guidelines.

  • Host the same content on all servers and synchronize their clocks.
  • Restrict access requests to the HTTP and HTTPS ports that your custom origin (the AWS EC2 instance) listens on.
  • Use an Elastic Load Balancing load balancer to handle traffic across multiple Amazon EC2 instances and when you create your CloudFront distribution, specify the URL of the load balancer for the domain name of your origin server.

Conclusion

This tutorial taught you what CloudFront is and how to set up CloudFront Distributions in the Amazon cloud. The benefit of using CloudFront is it allows users to retrieve their content quickly by utilizing the concept of caching.

So next, what are you going to manage with CloudFront?

How to Launch AWS S3 bucket using Shell Scripting.

Are you storing your data in a way that is secure, scalable, highly available, and fault tolerant? If not, consider using Amazon Simple Storage Service (Amazon S3) in the AWS cloud.

This tutorial will teach you how to launch an AWS S3 bucket in an Amazon account using bash or shell scripting.

Let’s dive into it quickly.


Table of Content

  1. What is Shell Script or Bash Script?
  2. What is the Amazon AWS S3 bucket?
  3. Prerequisites
  4. Building a shell script to create AWS S3 bucket in Amazon account
  5. Executing the Shell Script to Create AWS S3 bucket in Amazon Cloud
  6. Verifying the AWS S3 bucket in AWS account
  7. Conclusion

What is Shell Script or Bash Script?

A shell script is a text file containing a list of commands executed in the terminal or shell in one go, in sequential order. Shell scripts perform various important tasks such as file manipulation, printing text, and program execution.

Shell script includes various environmental variables, comments, conditions, pipe commands, functions, etc., to make it more dynamic.

When you execute a shell script or function, a command interpreter goes through the ASCII text line-by-line, loop-by-loop, test-by-test, and executes each statement as each line is reached from top to bottom.

What is the Amazon AWS S3 bucket?

AWS S3, why is it called S3? The name comes from the three words starting with "S": the full form of AWS S3 is Simple Storage Service. The AWS S3 service helps in storing unlimited data safely and efficiently. Everything in the AWS S3 service is an object, such as pdf files, zip files, text files, war files, anything. Some of the features of the AWS S3 bucket are below:

  • To store data in an AWS S3 bucket, you need to upload the data.
  • To keep your AWS S3 bucket secure, add the necessary permissions to an IAM role or IAM user.
  • AWS S3 bucket names are globally unique, which means a bucket name can be used only once across all accounts and regions.
  • 100 buckets can be created in an AWS account by default; beyond that, you need to raise a ticket to Amazon.
  • An AWS S3 bucket is owned by the AWS account that created it.
  • AWS S3 buckets are created region-specific, such as us-east-1, us-east-2, us-west-1 or us-west-2.
  • AWS S3 bucket objects are created in AWS S3 using the AWS console or the AWS S3 API.
  • AWS S3 buckets can be made publicly visible, which means anybody on the internet can access them, but it is recommended to keep public access blocked for all buckets unless it is very much required.

Prerequisites

  1. An AWS account. If you don't have an AWS account, please create one from here: AWS account.
  2. Windows 7 or a later edition, where you will execute the shell script.
  3. AWS CLI installed. To install AWS CLI click here.
  4. Git Bash. To install Git Bash click here.
  5. A code editor for writing the shell script on the Windows machine, such as Visual Studio Code. To install Visual Studio Code click here.

Building a shell script to create AWS S3 bucket in Amazon account

Now that you have a good idea about the AWS S3 bucket and shell script let’s learn how to build a shell script to create an AWS S3 bucket in an Amazon account.

  • Create a folder on your Windows machine at any location. Then, in the same folder, create a file named create-s3.sh and copy/paste the below code.
#! /usr/bin/bash
# This Script will create S3 bucket and tag the bucket with appropriate name.

# To check if access key is setup in your system 


if ! grep aws_access_key_id ~/.aws/config; then
   if ! grep aws_access_key_id ~/.aws/credentials; then
   echo "AWS config not found or you don't have AWS CLI installed"
   exit 1
   fi
fi

# read command will prompt you to enter the name of bucket name you wish to create 


read -r -p  "Enter the name of the bucket:" bucketname

# Creating first function to create a bucket 

function createbucket()
   {
    # For regions other than us-east-1, create-bucket needs a LocationConstraint
    aws s3api create-bucket --bucket "$bucketname" --region us-east-2 --create-bucket-configuration LocationConstraint=us-east-2
   }

# Creating Second function to tag a bucket 

function tagbucket()    {

   aws s3api put-bucket-tagging --bucket "$bucketname" --tagging 'TagSet=[{Key=Name,Value="'"$bucketname"'"}]'
}

# echo command will print on the screen 

echo "Creating the AWS S3 bucket and Tagging it !! "
echo ""
createbucket    # Calling the createbucket function  
tagbucket       # calling our tagbucket function
echo "AWS S3 bucket $bucketname created successfully"
echo "AWS S3 bucket $bucketname tagged successfully "

Executing the Shell Script to Create AWS S3 bucket in Amazon Cloud

Previously you created the shell script to create an AWS S3 bucket in Amazon Cloud, which is great, but it is not doing much unless you run it. Let’s execute the shell script now.

  • Open the visual studio code and then open the location of file create-s3.sh.
Opening Shell script on visual studio code
  • Finally execute the shell script.
./create-s3.sh
Executing the shell script to create AWS S3 bucket
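If the shell reports a permission error when you run the script, you may first need to mark it as executable (a general Git Bash/Linux step, not specific to this tutorial):

chmod +x create-s3.sh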

Verifying the AWS S3 bucket in AWS account

In the previous section, the shell script ran successfully; let's verify if the AWS S3 bucket has been created in the AWS account.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘S3’, and click on the S3 menu item and you should see the list of AWS S3 buckets and the bucket that you specified in shell script.
Viewing the AWS S3 bucket in AWS cloud
  • Also verify the tags that you applied to the AWS S3 bucket by navigating to the Properties tab.
Viewing the AWS S3 bucket tags in AWS cloud
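You can also verify the same thing from the AWS CLI instead of the console (replace <your-bucket-name> with the name you entered when the script prompted you):

aws s3 ls
aws s3api get-bucket-tagging --bucket <your-bucket-name>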

Conclusion

In this tutorial, you learned how to set up Amazon AWS S3 using shell script on AWS step by step. Most of your phone and website data are stored on AWS S3.

Now that you have a newly created AWS S3 bucket, what do you plan to store in it?

How to Launch AWS S3 bucket on Amazon using Terraform

Do you have lots of log rotation issues, or does your system hang when lots of logs are generated on the disk and it behaves abruptly? Do you have too little space to keep your important deployment jars or wars? Consider using Amazon Simple Storage Service (Amazon S3) to solve these issues.

Storing all the logs, deployment code, and scripts in Amazon's AWS S3 gives you unlimited storage that is safe, secure, and quick.

In this tutorial, learn how to Launch an AWS S3 bucket on Amazon using Terraform. Let’s dive in.


Table of Content

  1. What is the Amazon AWS S3 bucket?
  2. Prerequisites
  3. Terraform files and Terraform directory structure
  4. Building Terraform Configuration files to Create AWS S3 bucket using Terraform
  5. Uploading the Objects in the AWS S3 bucket
  6. Conclusion

What is the Amazon AWS S3 bucket?

As covered earlier in this tutorial, AWS S3 (Simple Storage Service) stores unlimited data safely and efficiently, and everything stored in it is an object, such as pdf files, zip files, text files, or war files. The key features are the same as listed before: bucket names are globally unique, a bucket belongs to the account and region it was created in and cannot be renamed, up to 100 buckets can be created per account by default, and public access is blocked by default and should stay blocked unless it is really required.
Recommended: Private bucket

Prerequisites

  • An Ubuntu machine to run the terraform commands. If you don't have an Ubuntu machine, you can create an AWS EC2 instance in your AWS account with 4GB RAM and at least 5GB of drive space.
  • Terraform installed on the Ubuntu machine. If you don't have Terraform installed, refer to Terraform on Windows Machine / Terraform on Ubuntu Machine.
  • The Ubuntu machine should have an IAM role attached with full access to create AWS S3 buckets, or with administrator permissions.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

Terraform files and Terraform directory structure

Now that you know what the Amazon AWS S3 bucket is, let's dive into the Terraform files and Terraform directory structure that will help you write the Terraform configuration files later in this tutorial.

Terraform code, that is, Terraform configuration files, are written in a tree-like structure to ease the overall understanding of code with .tf format or .tf.json or .tfvars format. These configuration files are placed inside the Terraform modules.

Terraform modules are at the top level of the hierarchy, where configuration files reside. Terraform modules can further call child modules from local directories, from anywhere on disk, or from the Terraform Registry.

Terraform contains mainly five files as main.tf , vars.tf , providers.tf , output.tf and terraform.tfvars.

  1. main.tf – Terraform main.tf file contains the main code where you define which resources you need to build, update or manage.
  2. vars.tf – Terraform vars.tf file contains the input variables which are customizable and defined inside the main.tf configuration file.
  3. output.tf – The Terraform output.tf file is where you declare which output parameters you wish to fetch after Terraform has been executed, that is, after the terraform apply command.
  4. .terraform – This directory contains cached provider and module plugins, and also contains the last known backend configuration. It is managed by Terraform and created after you run the terraform init command.
  5. terraform.tfvars – This file contains the values which are required to be passed for variables that are referred to in main.tf and actually declared in the vars.tf file.
  6. providers.tf – The providers.tf is the most important file, where you define your Terraform providers, such as the Terraform AWS provider or Terraform Azure provider, to authenticate with the cloud provider.

Building Terraform Configuration files to Create AWS S3 bucket using Terraform

Now that you know what Terraform configuration files look like and how to declare each of them, in this section you will learn how to build the Terraform configuration files to create an AWS S3 bucket in the AWS account before running the Terraform commands. Let's get into it.

  • Log in to the Ubuntu machine using your favorite SSH client.
  • Create a folder in opt directory named terraform-s3-demo and switch to that folder.
mkdir /opt/terraform-s3-demo
cd /opt/terraform-s3-demo
  • Create a file named main.tf inside the /opt/terraform-s3-demo directory and copy/paste the below content. The below file creates the following components:
    • Creates the AWS S3 bucket in the AWS account.
    • Manages the public access settings of the AWS S3 bucket.
    • Creates the encryption key that will protect the AWS S3 bucket objects.
# Providing the access to the AWS S3 bucket.

resource "aws_s3_bucket_public_access_block" "publicaccess" {
  bucket = aws_s3_bucket.demobucket.id
  block_public_acls       = false
  block_public_policy     = false
}

# Creating the encryption key which will encrypt the bucket objects

resource "aws_kms_key" "mykey" {
  deletion_window_in_days = "20"
}

# Creating the AWS S3 bucket.

resource "aws_s3_bucket" "demobucket" {

  bucket          = var.bucket
  force_destroy   = var.force_destroy

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        kms_master_key_id = aws_kms_key.mykey.arn
        sse_algorithm     = "aws:kms"
      }
    }
  }
  versioning {
    enabled          = true
  }
  lifecycle_rule {
    prefix  = "log/"
    enabled = true
    expiration {
      date = var.date
    }
  }
}
  • Create one more file named vars.tf inside the /opt/terraform-s3-demo directory and copy/paste below content. This file contains all the variables that are referred in the main.tf configuration file.
variable "bucket" {
 type = string
}
variable "force_destroy" {
 type = string
}
variable "date" {
 type = string
}
  • Create one more file named provider.tf inside the /opt/terraform-s3-demo directory and copy/paste the below content. The provider.tf file allows Terraform to connect to the AWS cloud.
provider "aws" {
  region = "us-east-2"
}

  • Create one more file named terraform.tfvars inside the same folder and copy/paste the below content. This file contains the values of the variables that you declared in the vars.tf file and referred to in the main.tf file.
bucket          = "terraformdemobucket"
force_destroy   = false
date = "2022-01-12"
  • Now the folder structure of all the files should look like below.
Folder structure of all the files in the /opt/terraform-s3-demo
  • Now your files and code are ready for execution. Initialize the terraform using the terraform init command.
terraform init
Initializing the terraform using the terraform init command.
  • Terraform initialized successfully; now it's time to run the plan command, which gives you the details of the deployment. Run the terraform plan command to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
Running the terraform plan command
  • After verification, it is time to actually deploy the code using the terraform apply command.
terraform apply
Running the terraform apply command

Terraform commands terraform init→ terraform plan→ terraform apply all executed successfully. But it is important to manually verify the AWS S3 bucket launched in the AWS Management console.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘S3’, and click on the S3 menu item.
Verifying the AWS S3 bucket that Terraform created

Uploading the Objects in the AWS S3 bucket

Now that you have the AWS S3 bucket created in the AWS account, which is great, let’s upload a sample text file in the bucket by clicking on the Upload button.

Navigating to the AWS S3 bucket
  • Now click on Add files button and choose any files that you wish to add in the newly created AWS S3 bucket. This tutorial uses sample.txt file and uploads it.
Adding the files in newly created bucket
  • As you can see the sample.txt has been uploaded successfully.
Verifying the sample.txt in the AWS bucket
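When you are done experimenting, you can remove everything this configuration created (an optional cleanup step, not part of the original walkthrough) by running the following from the /opt/terraform-s3-demo directory:

terraform destroy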


Conclusion

In this tutorial, you learned Amazon AWS S3 and how to create an Amazon AWS S3 bucket using Terraform.

Most of your phone and website data is stored on AWS S3, so what do you plan to store in this newly created AWS bucket?