AWS CloudWatch Logs

When you host applications on the AWS cloud and your Amazon infrastructure grows day by day, it becomes difficult to monitor. To solve this monitoring problem, Amazon provides its own managed service that lets you store logs and access them anytime.

AWS CloudWatch Logs can monitor many AWS services and stores their logs in the same or different log groups.

In this tutorial you will learn everything you should know about CloudWatch Logs. Let's get into it without further delay.

Table of Contents

  1. What is AWS CloudWatch Logs?
  2. AWS CloudWatch Pricing
  3. Components of AWS CloudWatch Logs
  4. Collecting EC2 instance Logs with CloudWatch Logs
  5. Unified CloudWatch Agent
  6. Older CloudWatch Agent
  7. Installing the CloudWatch agent
  8. Downloading CloudWatch Agent
  9. Creating IAM roles for CloudWatch
  10. Create and modify the CloudWatch Agent configuration file
  11. Running AWS CloudWatch agent
  12. Analyzing log data with CloudWatch Logs Insights
  13. Running a CloudWatch Logs Insights query
  14. Running a CloudWatch Logs Insights query for Lambda function
  15. Running a CloudWatch Logs Insights query for Amazon VPC Flow Logs
  16. Running a CloudWatch Logs Insights query for Route53 logs
  17. Running a CloudWatch Logs Insights query for CloudTrail logs
  18. Create a CloudWatch Log groups in CloudWatch Logs
  19. Checking Log Entries using AWS Management console.
  20. Checking Log Entries using the AWS CLI
  21. Data at Rest vs Data in Transit
  22. Encrypting Log Data in CloudWatch Logs
  23. Creating an AWS KMS customer managed key
  24. Adding permissions to AWS KMS customer managed keys
  25. Associating the customer managed key with a log group when you create it
  26. Creating metrics from log events using filters
  27. Creating metric filters from log events
  28. Creating metric filters using the AWS CLI
  29. Posting Event data into CloudWatch Log groups using the AWS CLI
  30. To list metric filters using the AWS CLI
  31. Real-time processing of log data with subscriptions
  32. Creating CloudWatch Logs Subscription filter with Kinesis Data Streams
  33. Creating CloudWatch Logs Subscription filter with AWS lambda function.
  34. Publish Logs to AWS S3, kinesis and CloudWatch Logs
  35. Publishing Logs to AWS CloudWatch Logs
  36. Publishing Logs to AWS S3
  37. Publishing Logs to Kinesis Firehose
  38. Conclusion

What is AWS CloudWatch Logs?

The AWS CloudWatch Logs service lets you monitor, store, and access log files from various other AWS services such as Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53, and other sources.

  • AWS CloudWatch also allows you to query your logs using a query language, mask sensitive information, and generate metrics using metric filters or the embedded metric format.
  • CloudWatch Logs Insights is used to query, search, and analyze your log data. You can also use CloudWatch to monitor AWS EC2 logs.
  • You can also create AWS CloudWatch alarms for various AWS services, for example to act on captured CloudTrail events.
  • Use data protection policies to mask sensitive data in your logs.
  • By default, logs are kept indefinitely and never expire; however, you can set a retention period anywhere from one day to 10 years depending on the requirement.
  • You can also archive the logs in highly durable storage.

AWS CloudWatch Pricing

AWS CloudWatch Logs is free of cost under the AWS Free Tier. However, for a standard account, logs such as VPC Flow Logs, EC2 logs, and Lambda logs are charged.

Metrics, dashboards, alarms, and various other components in AWS CloudWatch are also charged.

Components of AWS CloudWatch Logs

Log Event: A log event is a record of some activity recorded by the application or resource being monitored. CloudWatch Logs understands two things from a log event: the timestamp and the raw event message.

Log streams: A log stream is a group of log events that share the same source. It represents the sequence of events coming from the application instance or resource being monitored, such as Apache logs.

Log groups: Log groups define groups of log streams that share the same retention, monitoring, and access control settings. Each log stream has to belong to one log group.


Metric filters

You can use metric filters on ingested events to create metric data points in a CloudWatch metric. Metric filters are assigned to log groups, and all of the filters assigned to a log group are applied to their log streams.

Retention settings

Retention settings can be used to specify how long log events are kept in CloudWatch Logs.
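For example, assuming a log group named my-log-group already exists, you could set a 30-day retention period from the AWS CLI as follows.

aws logs put-retention-policy --log-group-name my-log-group --retention-in-days 30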

Collecting EC2 instance Logs with CloudWatch Logs

There are two versions of the CloudWatch Logs agent that can capture AWS EC2 instance logs:

Unified CloudWatch Agent

  • The latest and recommended agent is the unified CloudWatch agent, which supports multiple operating systems, including servers running Windows Server. This agent also provides better performance.
  • Retrieve custom metrics from your applications or services using the StatsD and collectd protocols. 
    • StatsD is supported on both Linux servers and servers running Windows Server. 
    • collectd is supported only on Linux servers.
  • The default namespace for metrics collected by the CloudWatch agent is CWAgent, although you can specify a different namespace when you configure the agent.

Older CloudWatch Agent

The older CloudWatch agent supports collection of logs only from servers running Linux.

Installing the CloudWatch agent

Below are the high-level steps that you need to perform to install the CloudWatch agent.

  • Create IAM roles or users that enable the agent to collect metrics from the server and optionally to integrate with AWS Systems Manager.
  • Download the agent package. You can download and install the CloudWatch agent manually using the command line, or you can integrate it with SSM
  • Modify the CloudWatch agent configuration file and specify the metrics that you want to collect.
  • Install and start the agent on your servers. When you install the agent on an EC2 instance, attach the IAM role that you created in step 1. When you install the agent on an on-premises server, specify a named profile that contains the credentials of the IAM user that you created in step 1.

Downloading CloudWatch Agent

Let's now get into the details of what you need to do to install and work with the CloudWatch agent.

  • Install the package on the AWS EC2 instance using the below command.
sudo yum install amazon-cloudwatch-agent
  • Create an IAM role that has the CloudWatchAgentServerPolicy attached and attach that role to the AWS EC2 instance.
  • You can also download and install it using the AWS S3 download links. For example, on Amazon Linux download and install the RPM package; on Debian or Ubuntu, download the corresponding .deb package and install it with dpkg.
wget https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm

sudo rpm -U ./amazon-cloudwatch-agent.rpm

sudo dpkg -i -E ./amazon-cloudwatch-agent.deb

Creating IAM roles for CloudWatch

Next, create the IAM role from the IAM management console and add the policy CloudWatchAgentServerPolicy. If you want the CloudWatch agent to set the retention policy for the log groups it sends log events to, then also add the below statement to the policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:PutRetentionPolicy",
      "Resource": "*"
    }
  ]
}
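If you prefer the AWS CLI over the console, a rough sketch of creating the role and attaching the managed policy would look like the following; the role name CWAgentServerRole and the EC2 trust policy file ec2-trust-policy.json are placeholders you would create yourself.

aws iam create-role --role-name CWAgentServerRole --assume-role-policy-document file://ec2-trust-policy.json

aws iam attach-role-policy --role-name CWAgentServerRole --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy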

Create and modify the CloudWatch Agent configuration file

The agent configuration file is a JSON file with three sections: agent, metrics, and logs. It specifies the metrics and logs that the agent is to collect, including custom metrics. You can create it with the agent configuration file wizard, amazon-cloudwatch-agent-config-wizard.

The wizard can autodetect the credentials and AWS Region to use if you have the AWS credentials and configuration files in place before you start the wizard.

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard

You can also create the CloudWatch agent configuration file manually, or in some cases a sample file is installed along with the agent.

  • The agent section is declared as below.
"agent": {
   "metrics_collection_interval": 60,
   "region": "us-west-1",
   "logfile": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log",
   "debug": false,
   "run_as_user": "cwagent"
  }
  • The metrics section is declared as below.
{
  "metrics": {
    "namespace": "Development/Product1Metrics",
   ......
   },
} 
  • The logs section is declared as below.
"collect_list": [ 
  {
    "file_path": "/opt/aws/amazon-cloudwatch-agent/logs/test.log", 
    "log_group_name": "test.log", 
    "log_stream_name": "test.log",
    "filters": [
      {
        "type": "exclude",
        "expression": "Firefox"
      },
      {
        "type": "include",
        "expression": "P(UT|OST)"
      }
    ]
  },
  .....
]
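Putting the pieces together, a minimal agent configuration file might look like the sketch below; the log file path, log group name, and stream name are placeholder values, and the logs section nests collect_list under logs_collected and files.

{
  "agent": {
    "metrics_collection_interval": 60,
    "run_as_user": "cwagent"
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/messages",
            "log_group_name": "ec2-system-logs",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}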

Running AWS CloudWatch agent

Finally, in this section, run the CloudWatch agent by performing the below steps.

  • Copy the agent configuration file that you want to use to the server where you’re going to run the agent. Note the pathname where you copy it to.
  • On an EC2 instance running Linux, enter the following command.
    • -a fetch-config causes the agent to load the latest version of the CloudWatch agent configuration file
    • -s starts the agent.
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:configuration-file-path
  • On an EC2 instance running Windows Server, enter the following from the PowerShell console
& "C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" -a fetch-config -m ec2 -s -c file:configuration-file-path

Analyzing log data with CloudWatch Logs Insights

If you need to analyze your log data more interactively, you can use CloudWatch Logs Insights in Amazon CloudWatch Logs.

  • CloudWatch Logs Insights automatically discovers fields in logs from AWS services such as Amazon Route 53, AWS Lambda, AWS CloudTrail, and Amazon VPC, and any application or custom log that emits log events as JSON.
  • A single request can query up to 50 log groups. Queries time out after 60 minutes, if they have not completed. Query results are available for 7 days.
  • CloudWatch Logs Insights automatically generates five system fields:
    • @message contains the raw unparsed log event.
    • @timestamp contains the event timestamp in the log event’s timestamp field.
    • @ingestionTime contains the time when CloudWatch Logs received the log event.
    • @logStream contains the name of the log stream that the log event was added to.
    • @log is a log group identifier in the form of account-id:log-group-name.
  • Let's say you have the below log in JSON format and you want to access the type field; then you use userIdentity.type.
{
    "eventVersion": "1.0",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "EX_PRINCIPAL_ID",
        "arn": "arn: aws: iam: : 123456789012: user/Alice",
        "accessKeyId": "EXAMPLE_KEY_ID",
        "accountId": "123456789012",
        "userName": "Alice"
    },
    ......
}
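For example, a small Logs Insights query over a log group holding such CloudTrail events (a sketch; pick the log group in the console) could count events per IAM user like this.

filter userIdentity.type = "IAMUser"
    | stats count(*) by userIdentity.userName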

Running a CloudWatch Logs Insights query

Below are the steps to run a CloudWatch Logs Insights query.

  • Open the CloudWatch console.
  • In the navigation pane, choose Logs, and then choose Logs Insights. On the Logs Insights page, go to the query editor.
  • In the Select log group(s) drop down, choose one or more log groups to query.
  • Choose Run to view the results.
  • To see all fields for a returned log event, choose the triangular dropdown icon left of the numbered event.
  • Some example queries are as follows.
stats count(*) by @logStream     | limit 100

stats count(*) by fieldName

stats count(*) by bin(30s)

Running a CloudWatch Logs Insights query for Lambda function

To run a CloudWatch Logs Insights query for a Lambda function that determines the amount of overprovisioned memory, run the below query.

filter @type = "REPORT"
    | stats max(@memorySize / 1000 / 1000) as provisonedMemoryMB,
        min(@maxMemoryUsed / 1000 / 1000) as smallestMemoryRequestMB,
        avg(@maxMemoryUsed / 1000 / 1000) as avgMemoryUsedMB,
        max(@maxMemoryUsed / 1000 / 1000) as maxMemoryUsedMB,
        provisonedMemoryMB - maxMemoryUsedMB as overProvisionedMB
    

Running a CloudWatch Logs Insights query for Amazon VPC Flow Logs

To run a CloudWatch Logs Insights query for Amazon VPC Flow Logs that determines the top 15 packet transfers across hosts, run the below query.

stats sum(packets) as packetsTransferred by srcAddr, dstAddr
    | sort packetsTransferred  desc
    | limit 15

Running a CloudWatch Logs Insights query for Route53 logs

To run a CloudWatch Logs Insights query for Route 53 that determines the distribution of records per hour by query type, run the below query.

stats count(*) by queryType, bin(1h)

Running a CloudWatch Logs Insights query for CloudTrail logs

  • Find the Amazon EC2 hosts that were started or stopped in a given AWS Region.
filter (eventName="StartInstances" or eventName="StopInstances") and awsRegion="us-east-2"
    

Note: After you run a query, you can add the query to a CloudWatch dashboard or copy the results to the clipboard.

Create a CloudWatch Log groups in CloudWatch Logs

A log stream is a sequence of log events that share the same source. Each separate source of logs in CloudWatch Logs makes up a separate log stream, and a log group is a group of log streams that share the same configuration.

In this section we will learn how to create a log group in the CloudWatch Logs service. Let's perform the below steps.

  • Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/

  • In the navigation pane, choose Log groups.

  • Choose Actions, and then choose Create log group.

  • Enter a name for the log group, and then choose Create log group.

Note: You may send logs to CloudWatch using the CloudWatch agent, the AWS CLI, or programmatically.
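If you prefer the AWS CLI, creating a log group (and optionally a log stream inside it) looks like the following; the names are placeholders.

aws logs create-log-group --log-group-name my-log-group

aws logs create-log-stream --log-group-name my-log-group --log-stream-name my-log-stream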

Checking Log Entries using AWS Management console.

To check the log entries using the AWS Management Console, perform the following steps.

  • Open the CloudWatch console and choose Log groups.
  • Look for the right log group, then check its log streams, and then look for the log events.

Checking Log Entries using the AWS CLI

You can run the below command to search log entries using the AWS CLI.

aws logs filter-log-events --log-group-name my-group [--log-stream-names LIST_OF_STREAMS_TO_SEARCH] [--filter-pattern VALID_METRIC_FILTER_PATTERN]

Data at Rest vs Data in Transit

This section is really important for understanding what data at rest and data in transit are. The data that resides in your cloud or is brought into your AWS account always has to be secure, so all AWS services have the ability to encrypt data at rest or in transit.

AWS services use either server-side encryption or client-side encryption. With server-side encryption, AWS manages the encryption, typically using AWS KMS keys; with client-side encryption, the client manages it using various methods, which can also include AWS KMS keys.

Data at rest means data that is kept and stored, and we can encrypt it with AWS KMS keys. For data in transit, customers typically rely on a protocol such as Transport Layer Security (TLS). All AWS service endpoints support TLS to create a secure HTTPS connection for API requests.

Using services like AWS KMS, AWS CloudHSM, and AWS ACM, customers can implement a comprehensive data at rest and data in transit encryption strategy across their AWS account.

Encrypting Log Data in CloudWatch Logs

Log data is always encrypted in CloudWatch Logs. By default, CloudWatch Logs uses server-side encryption for log data at rest. However, you can also use AWS Key Management Service with AWS KMS customer managed keys. Let's see how you can achieve this.

  • Encryption using AWS KMS is enabled at the log group level, by associating a key with a log group.
  • The encryption is done using an AWS KMS customer managed key.
  • CloudWatch Logs supports only symmetric customer managed keys. 
  • You must have kms:CreateKey, kms:GetKeyPolicy, and kms:PutKeyPolicy permissions.
  • If you revoke CloudWatch Logs access to an associated key or delete an associated customer managed key, your encrypted data in CloudWatch Logs can no longer be retrieved.

Let's follow the below steps to implement encryption in AWS CloudWatch Logs.

Creating an AWS KMS customer managed key

  • Let's run the below command to create an AWS KMS key.
aws kms create-key

Adding permissions to AWS KMS customer managed keys

  • By default, only the resource owner has permissions to encrypt or decrypt the data, so it's important to grant other users and resources permission to use the key. Your key policy should look something like the one below.
  • Note: CloudWatch Logs now supports encryption context, using kms:EncryptionContext:aws:logs:arn as the key and the ARN of the log group as the value for that key
  • Encryption context is a set of key-value pairs that are used as additional authenticated data. The encryption context enables you to use IAM policy conditions to limit access to your AWS KMS key by AWS account and log group.
{
 "Version": "2012-10-17",
    "Id": "key-default-1",
    "Statement": [
        {
            "Sid": "Enable IAM User Permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::Your_account_ID:root"
            },
            "Action": "kms:*",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "logs.region.amazonaws.com"
            },
            "Action": [
                "kms:Encrypt*",
                "kms:Decrypt*",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:Describe*"
            ],
            "Resource": "*",
            "Condition": {
                "ArnEquals": {
                    "kms:EncryptionContext:aws:logs:arn": "arn:aws:logs:region:account-id:log-group:log-group-name"
                }
            }
        }    
    ]
}
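Once the policy document is ready (saved locally, for example, as key-policy.json, a placeholder file name), apply it to the key with put-key-policy; replace the key ID with your own.

aws kms put-key-policy --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --policy-name default --policy file://key-policy.json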

Associating the customer managed key with a log group when you create it

  • Use the create-log-group command as follows.
aws logs create-log-group --log-group-name my-log-group --kms-key-id "key-arn"
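If the log group already exists, you can instead associate the key with it using associate-kms-key.

aws logs associate-kms-key --log-group-name my-log-group --kms-key-id "key-arn"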

Creating metrics from log events using filters

We can filter the log data coming into CloudWatch Logs by creating one or more metric filters. CloudWatch Logs uses these metric filters to turn log data into numerical CloudWatch metrics that you can graph or set an alarm on.

Components of Metrics

  • default value: The value reported to the metric during a period when no matching logs are ingested. If no default value is set, no value is reported for that period.
  • dimensions: Dimensions are the key-value pairs that further define a metric.
  • metric name: The name of the CloudWatch metric to which the monitored log data is published.
  • metric namespace: The destination namespace of the new CloudWatch metric.
  • metric value: The numerical value to publish to the metric each time a matching log event is found.

Creating metric filters from log events

In this section we will go through the steps for creating metric filters from log events.

  • Open the CloudWatch console.
  • In the navigation pane, choose Logs, and then choose Log groups.
  • Choose the name of the log group.
  • Choose Actions, and then choose Create metric filter.
  • For Filter pattern, enter a filter pattern. To test your filter pattern, under Test Pattern, enter one or more log events to test the pattern.

Note: You can also use the below filter pattern to find HTTP 404 errors.

For Filter Pattern, type [IP, UserInfo, User, Timestamp, RequestInfo, StatusCode=404, Bytes].
  • Choose Next, and then enter a name for your metric filter.
  • Under Metric details, for Metric namespace, enter a name for the CloudWatch namespace where the metric will be published. If the namespace doesn’t already exist, make sure that Create new is selected.
  • For Metric name, enter a name for the new metric.
  • For Metric value, if your metric filter is counting occurrences of the keywords in the filter, enter 1.
  • Finally review and create the metrics.

Creating metric filters using the AWS CLI

The other way of creating metric filters is by using the AWS CLI. Let's check out the below command to create a metric filter using the AWS CLI.

aws logs put-metric-filter \
  --log-group-name MyApp/access.log \
  --filter-name EventCount \
  --filter-pattern " " \
  --metric-transformations \
  metricName=MyAppEventCount,metricNamespace=MyNamespace,metricValue=1,defaultValue=0

Posting Event data into CloudWatch Log groups using the AWS CLI

aws logs put-log-events \
  --log-group-name MyApp/access.log --log-stream-name TestStream1 \
  --log-events \
    timestamp=1394793518000,message="Test event 1" \
    timestamp=1394793518000,message="Test event 2" \
    timestamp=1394793528000,message="This message also contains an Error"

To list metric filters using the AWS CLI

aws logs describe-metric-filters --log-group-name MyApp/access.log

Real-time processing of log data with subscriptions

You can use subscriptions to get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services such as an Amazon Kinesis stream, an Amazon Kinesis Data Firehose stream, or AWS Lambda for custom processing, analysis, or loading to other systems.

To begin subscribing to log events, create the receiving resource, such as a Kinesis Data Streams stream, where the events will be delivered. A subscription filter defines the filter pattern to use for filtering which log events get delivered to your AWS resource, as well as information about where to send matching log events to.

CloudWatch Logs also produces CloudWatch metrics about the forwarding of log events to subscriptions.

You can use a subscription filter with Kinesis Data Streams, Lambda, or Kinesis Data Firehose. Logs that are sent to a receiving service through a subscription filter are base64 encoded and compressed with the gzip format.

Creating CloudWatch Logs Subscription filter with Kinesis Data Streams

In this section we will create an AWS CloudWatch Logs subscription filter and send the logs to a Kinesis data stream.

  • Create a destination stream in the Kinesis Data Streams service using the below command.
 aws kinesis create-stream --stream-name "RootAccess" --shard-count 1
  • Check that the Kinesis stream is in the active state.
aws kinesis describe-stream --stream-name "RootAccess"
  • Create the IAM role that will grant CloudWatch Logs permission to put data into your stream. Also make sure to add the trust policy in the role as follows.
{
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "logs.amazonaws.com" },
    "Action": "sts:AssumeRole",
    "Condition": { 
        "StringLike": { "aws:SourceArn": "arn:aws:logs:region:123456789012:*" } 
     }
   }
}
  • In the cross-account case, the IAM role trust policy should look something like the one below.
{
    "Statement": {
        "Effect": "Allow",
        "Principal": {
            "Service": "logs.amazonaws.com"
        },
        "Condition": {
            "StringLike": {
                "aws:SourceArn": [
                    "arn:aws:logs:region:sourceAccountId:*",
                    "arn:aws:logs:region:recipientAccountId:*"
                ]
            }
        },
        "Action": "sts:AssumeRole"
    }
}
aws iam create-role --role-name CWLtoKinesisRole --assume-role-policy-document file://~/TrustPolicyForCWL-Kinesis.json
  • Attach a policy to the IAM role that you created previously.
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kinesis:PutRecord",
      "Resource": "arn:aws:kinesis:region:123456789012:stream/RootAccess"
    }
  ]
}
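To attach this permissions policy from the CLI, save it locally (the file name PermissionsForCWL-Kinesis.json below is a placeholder) and use put-role-policy.

aws iam put-role-policy --role-name CWLtoKinesisRole --policy-name Permissions-Policy-For-CWL --policy-document file://~/PermissionsForCWL-Kinesis.json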
  • In the cross-account case an additional step is required: in the recipient account you create a CloudWatch Logs destination that points to the Kinesis stream and attach an access policy to it, like the one below, so that the sending account is allowed to create a subscription filter against it.
{
  "Version" : "2012-10-17",
  "Statement" : [
    {
      "Sid" : "",
      "Effect" : "Allow",
      "Principal" : {
        "AWS" : "111111111111"
      },
      "Action" : "logs:PutSubscriptionFilter",
      "Resource" : "arn:aws:logs:region:999999999999:destination:testDestination"
    }
  ]
}
  • Create a CloudWatch Logs subscription filter. The subscription filter immediately starts the flow of real-time log data from the chosen log group to your stream. In the cross-account case, the subscription filter is created in the sending account.
aws logs put-subscription-filter \
    --log-group-name "CloudTrail/logs" \
    --filter-name "RootAccess" \
    --filter-pattern "{$.userIdentity.type = Root}" \
    --destination-arn "arn:aws:kinesis:region:123456789012:stream/RootAccess" \
    --role-arn "arn:aws:iam::123456789012:role/CWLtoKinesisRole"
  • After you set up the subscription filter, CloudWatch Logs forwards all the incoming log events that match the filter pattern to your stream. Verify by running the following examples.
aws kinesis get-shard-iterator --stream-name RootAccess --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON
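The command above returns a ShardIterator value; pass that value to get-records to confirm that log events are arriving in the stream (the iterator string below is a placeholder for the value returned to you).

aws kinesis get-records --limit 10 --shard-iterator "<shard-iterator-from-previous-command>"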

Creating CloudWatch Logs Subscription filter with AWS lambda function.

In this section we will create an AWS CloudWatch Logs subscription filter and send the logs to an AWS Lambda function.

  • Create the AWS Lambda function. Let's create a sample Lambda function as below using the AWS CLI.
aws lambda create-function \
    --function-name helloworld \
    --zip-file fileb://file-path/helloWorld.zip \
    --role lambda-execution-role-arn \
    --handler helloWorld.handler \
    --runtime nodejs12.x
  • Grant CloudWatch Logs the permission to execute your function.
aws lambda add-permission \
    --function-name "helloworld" \
    --statement-id "helloworld" \
    --principal "logs.amazonaws.com" \
    --action "lambda:InvokeFunction" \
    --source-arn "arn:aws:logs:region:123456789123:log-group:TestLambda:*" \
    --source-account "123456789012"
  • Create a subscription filter using the following command
aws logs put-subscription-filter \
    --log-group-name myLogGroup \
    --filter-name demo \
    --filter-pattern "" \
    --destination-arn arn:aws:lambda:region:123456789123:function:helloworld
  • Verify by running the below command.
aws logs put-log-events --log-group-name myLogGroup --log-stream-name stream1 --log-events "[{\"timestamp\":<CURRENT TIMESTAMP MILLIS> , \"message\": \"Simple Lambda Test\"}]"

Publish Logs to AWS S3, kinesis and CloudWatch Logs

AWS services that publish logs to CloudWatch Logs include API Gateway, Amazon Aurora MySQL, AWS VPC Flow Logs, and others. While many services publish logs only to CloudWatch Logs, some AWS services can publish logs directly to Amazon Simple Storage Service or Amazon Kinesis Data Firehose.

Publishing Logs to AWS CloudWatch Logs

If you need to send the logs to CloudWatch Logs, the user or account you are logged in with needs the below permissions.

logs:CreateLogDelivery
logs:PutResourcePolicy
logs:DescribeResourcePolicies
logs:DescribeLogGroups

When the logs are sent to log groups in AWS CloudWatch, the resource policy is created automatically if you have the above permissions; otherwise, create and attach a resource policy to the log group as shown below.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSLogDeliveryWrite20150319",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "delivery.logs.amazonaws.com"
        ]
      },
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:us-east-1:0123456789:log-group:my-log-group:log-stream:*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": ["0123456789"]
        },
        "ArnLike": {
          "aws:SourceArn": ["arn:aws:logs:us-east-1:0123456789:*"]
        }
      }
    }
  ]
}
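If you need to attach this resource policy yourself, one way (assuming the document is saved locally as cloudwatch-resource-policy.json, a placeholder file name) is the put-resource-policy command.

aws logs put-resource-policy --policy-name AWSLogDeliveryWrite20150319 --policy-document file://cloudwatch-resource-policy.json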

Publishing Logs to AWS S3

When logs are published to AWS S3 for the first time, the service that delivers the logs becomes the owner of the delivered log objects. If you need to send the logs to AWS S3, then the user or account you are logged in with needs the below permissions.

logs:CreateLogDelivery
s3:GetBucketPolicy
s3:PutBucketPolicy

The bucket should have a resource policy as shown below.

{
    "Version": "2012-10-17",
    "Id": "AWSLogDeliveryWrite20150319",
    "Statement": [
        {
            "Sid": "AWSLogDeliveryAclCheck",
            "Effect": "Allow",
            "Principal": {
                "Service": "delivery.logs.amazonaws.com"
                },
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::my-bucket",
        },
        {
            "Sid": "AWSLogDeliveryWrite",
            "Effect": "Allow",
            "Principal": {
                "Service": "delivery.logs.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-bucket/AWSLogs/account-ID/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control",
                    "aws:SourceAccount": ["0123456789"]
                },
                "ArnLike": {
                    "aws:SourceArn": ["arn:aws:logs:us-east-1:0123456789:*"]
                }
            }
        }
    ]
}
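To apply a bucket policy like the one above from the CLI (assuming it is saved locally as bucket-policy.json, a placeholder file name), you can use put-bucket-policy.

aws s3api put-bucket-policy --bucket my-bucket --policy file://bucket-policy.json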

Note: You can protect the data in your Amazon S3 bucket by enabling either server-side encryption with Amazon S3 managed keys (SSE-S3) or server-side encryption with an AWS KMS key stored in AWS Key Management Service (SSE-KMS).

If you choose customer managed AWS KMS keys, then your key policy must include a statement like the one below.

{
    "Sid": "Allow Logs Delivery to use the key",
    "Effect": "Allow",
    "Principal": {
        "Service": [ "delivery.logs.amazonaws.com" ]
    },
    "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey"
    ],
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "aws:SourceAccount": ["0123456789"]
        },
        "ArnLike": {
            "aws:SourceArn": ["arn:aws:logs:us-east-1:0123456789:*"]
        }
      }
}

Publishing Logs to Kinesis Firehose

To be able to set up sending any of these types of logs to Kinesis Data Firehose for the first time, you must be logged into an account with the following permissions.

logs:CreateLogDelivery
firehose:TagDeliveryStream
iam:CreateServiceLinkedRole

Because Kinesis Data Firehose does not use resource policies, AWS uses IAM roles when setting up these logs to be sent to Kinesis Data Firehose. AWS creates a service-linked role named AWSServiceRoleForLogDelivery, which includes the following permissions policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "firehose:PutRecord",
                "firehose:PutRecordBatch",
                "firehose:ListTagsForDeliveryStream"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/LogDeliveryEnabled": "true"
                }
            },
            "Effect": "Allow"
        }
    ]
}

This service-linked role also has a trust policy that allows the delivery.logs.amazonaws.com service principal to assume the needed service-linked role. That trust policy is as follows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Conclusion

In this tutorial you learned everything one must know to securely push logs into CloudWatch Logs and store them. You also learned how to view and retrieve data from CloudWatch Logs.

With this knowledge you will certainly be able to secure your applications and troubleshoot them easily at a central location. Go for it and implement it.


Everything you should know about Amazon VPC or AWS VPC

In this theoretical tutorial you will learn everything you should know about Amazon VPC (also called AWS VPC). I am sure you will have no further questions on AWS VPC after going through this detailed guide.

Why not dive in right now?

Table of Contents

  1. What is VPC or an Amazon VPC or what is a VPC?
  2. VPC CIDR Range
  3. What is AWS VPC Peering?
  4. What is AWS VPC Endpoint?
  5. What are VPC Flow logs?
  6. Knowing AWS VPC pricing?
  7. AWS CLI commands to create VPC
  8. Defining AWS VPC Terraform or terraform AWS VPC Code
  9. How to Publish VPC Flow Logs to CloudWatch
  10. Create IAM trust Policy for IAM Role
  11. Creating IAM Policy to publish VPC Flow Logs to Cloud Watch Logs
  12. Create VPC flow logs using AWS CLI
  13. Conclusion

What is VPC or an Amazon VPC or what is a VPC?

Amazon Virtual Private Cloud allows you to launch AWS resources in an isolated, separate virtual network of which you are the complete owner.

In every AWS account and in each Region, you get a default VPC. It has a default subnet in each Availability Zone in the Region, an attached internet gateway, a route in the main route table that sends all traffic to the internet gateway, and DNS settings that automatically assign public DNS hostnames to instances with public IP addresses and enable DNS resolution through the Amazon-provided DNS server.

Therefore, an EC2 instance that is launched in a default subnet automatically has access to the internet. A Virtual Private Cloud contains subnets, each of which is tied to a particular Availability Zone.

If you associate an Elastic IP address with the eth0 network interface of your instance, its current public IPv4 address (if it had one) is released to the EC2-VPC public IP address pool.

The subnet and the VPC are each assigned an IP range, also known as a CIDR range, which defines the network range in which all resources will be created.

You also need to create route tables, which determine how traffic from your VPC is routed to other networks, gateways, and AWS services, such as:

  • A peering connection is a connection between two VPCs that lets you share resources between them.
  • Gateways and endpoints:
    • Internet Gateway connects public subnets to Internet
    • NAT Gateway to connect private subnets to internet. To allow an instance in your VPC to initiate outbound connections to the internet but prevent unsolicited inbound connections from the internet, you can use a network address translation (NAT) device.
    • NAT maps multiple private IPv4 addresses to a single public IPv4 address. You can configure the NAT device with an Elastic IP address and connect it to the internet through an internet gateway
    • To connect instances in non-default (private) subnets directly to the internet, attach an internet gateway to the VPC (if it is not the default VPC) and associate an Elastic IP address with the instance.
    • VPC Endpoints connect to AWS services privately without using NAT or IGW.
  • Transit Gateway acts as a central hub to route traffic between your VPCs, VPN connections, and AWS Direct Connect connections.
  • Connect your VPCs to your on-premises networks using AWS Virtual Private Network (AWS VPN).

VPC sharing allows you to launch AWS resources in a centrally managed Virtual Private Cloud. Here the account that owns the VPC shares one or more subnets with other accounts (participants) that belong to the same organization in AWS Organizations.

  • You must enable resource sharing from the management account for your organization.
  • You can share non-default subnets with other accounts within your organization.
  • VPC owners are responsible for creating, managing, and deleting the resources associated with a shared VPC. VPC owners cannot modify or delete resources created by participants, such as EC2 instances and security groups.

If the tenancy of a VPC is default, EC2 instances running in the VPC run on hardware that’s shared with other AWS accounts by default. If the tenancy of the VPC is dedicated, the instances always run as Dedicated Instances, which are instances that run on hardware that’s dedicated for your use.

VPC CIDR Range

  • CIDR stands for Classless Inter-Domain Routing notation.
  • IPv4 contains 32 bits.
  • The VPC CIDR block size must be between /16 and /28.
  • A subnet CIDR block size must also be between /16 and /28.
  • You can assign additional private IP addresses, known as secondary private IP addresses, to instances that are running in a VPC. Unlike a primary private IP address, you can reassign a secondary private IP address from one network interface to another.
  • The allowed block size is between a /16 netmask (65,536 IP addresses) and /28 netmask (16 IP addresses)
10.0.0.0 – 10.255.255.255 (10/8 prefix), for example 10.0.0.0/16
172.16.0.0 – 172.31.255.255 (172.16/12 prefix), for example 172.31.0.0/16
192.168.0.0 – 192.168.255.255 (192.168/16 prefix), for example 192.168.0.0/20
  • You can associate secondary IPv4 CIDR blocks with your VPC
  • VPCs that are associated with the Direct Connect gateway must not have overlapping CIDR blocks

What is AWS VPC Peering?

A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Resources in peered VPCs can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region. Traffic between peered VPCs never traverses the public internet.

What is AWS VPC Endpoint?

VPC Endpoints connect to AWS services privately without using NAT or IGW.

What are VPC Flow logs?

VPC Flow Logs let you monitor traffic and network access in your virtual private cloud (VPC) by capturing detailed information about the traffic going to and from network interfaces in your VPCs.

Knowing AWS VPC pricing?

There’s no additional charge for using a VPC. There are charges for some VPC components, such as NAT gateways, IP Address Manager, traffic mirroring, Reachability Analyzer, and Network Access Analyzer.

AWS CLI commands to create VPC

aws ec2 create-vpc --cidr-block 10.0.0.0/24 --query Vpc.VpcId --output text
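The command above prints the new VPC ID. As a rough follow-on sketch (the vpc- and igw- IDs below are placeholders for the values returned in your own account), you could then add a subnet and an internet gateway.

aws ec2 create-subnet --vpc-id vpc-0abcd1234example --cidr-block 10.0.0.0/25 --query Subnet.SubnetId --output text

aws ec2 create-internet-gateway --query InternetGateway.InternetGatewayId --output text

aws ec2 attach-internet-gateway --vpc-id vpc-0abcd1234example --internet-gateway-id igw-0abcd1234example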

Defining AWS VPC Terraform or terraform AWS VPC Code

You can deploy a VPC using Terraform as well, with just a few lines of code, once you are familiar with Terraform basics.

The below Terraform code contains a resource block that creates an Amazon VPC with cidr_block "10.0.0.0/16", default instance tenancy, and the tag "Name" = "main".

resource "aws_vpc" "main" {
  cidr_block       = "10.0.0.0/16"
  instance_tenancy = "default"

  tags = {
    Name = "main"
  }
}

How to Publish VPC Flow Logs to CloudWatch

When publishing to CloudWatch Logs, flow log data is published to a log group, and each network interface has a unique log stream in the log group. Log streams contain flow log records. For publishing the logs you need:

  • Create an IAM role.
  • Attach an IAM trust policy to the IAM role.
  • Create an IAM policy and attach it to the IAM role.
  • Finally, create the VPC flow logs using the AWS CLI.

Create IAM trust Policy for IAM Role

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "vpc-flow-logs.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
} 

Creating IAM Policy to publish VPC Flow Logs to Cloud Watch Logs

The VPC flow logs policy below has sufficient permissions to publish flow logs to the specified log group in CloudWatch Logs.

{
  "Version": "2012-10-17",
  "Statement": [{

     "Effect": "Allow",
     "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
     ],
     "Resource": "*"
  }]

}

Create VPC flow logs using AWS CLI

aws ec2 create-flow-logs --resource-type Subnet --resource-ids subnet-1a2b3c4d --traffic-type ACCEPT --log-group-name my-flow-logs --deliver-logs-permission-arn arn:aws:iam::123456789101:role/publishFlowLogs

Conclusion

Now you should have a sound knowledge of what AWS VPC is.

What is AWS S3 Bucket?

In this quick tutorial you will learn everything one must know about the AWS storage service, AWS S3.


What is AWS S3 Bucket?

Amazon Simple Storage Service allows you to store objects of any size securely, with good performance and scalability. You can store virtually unlimited data in an AWS S3 bucket. Let's get into some of the important features of the AWS S3 bucket.

  • There are various S3 storage classes which can be used according to the requirements.
  • You can also configure Storage lifecycle which allows you to manage your objects efficiently and you can move the objects to different storage classes.
  • S3 Object Lock: you can add an object lock for a particular time so that objects are not deleted by mistake.

  • S3 replication: you can replicate objects to different destinations, such as different buckets or different Regions.

  • S3 batch operations: you can manage a lot of objects in a single API request using batch operations.

  • You can block public access to S3 buckets and objects. By default, Block Public Access settings are turned on at the account and bucket level.

  • You can apply IAM policies to users or roles to access an S3 bucket securely. You can also apply resource-based policies to AWS S3 buckets and objects.

  • You can also apply an access control list on a particular bucket or particular objects.

  • You can disable ACLs and take ownership of every object in your bucket. As the bucket owner, you then have rights over every object in your bucket.

  • You can also use Access Analyzer for S3 to evaluate all the access policies.
  • You can have up to 100 buckets in your AWS account.
  • Once a bucket is created, you are not allowed to change its name or Region afterwards.
  • Every object is identified by a name (a key) and a version ID, and every object in a bucket has exactly one key.

You can access your bucket using the Amazon S3 console, or by using both virtual-hosted-style and path-style URLs to access the bucket.

https://bucket-name.s3.region-code.amazonaws.com/key-name  (Virtual Hosted )

https://s3.region-code.amazonaws.com/bucket-name/key-name  ( Path Based )

AWS S3 Bucket Access Control List

  • You can set bucket ownership and S3 Object Ownership in the AWS S3 bucket-level settings and disable ACLs so that you are the owner of every object.
  • When another AWS account uploads objects to an S3 bucket in your account, that account owns those objects and has access to them; but if you disable ACLs, the bucket owner automatically owns every object in the bucket.

S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to disable access control lists (ACLs) and take ownership of every object in your bucket, simplifying access management for data stored in Amazon S3.

AWS S3 Object Encryption

Amazon S3 encryption is done in transit and at rest. Server-side encryption encrypts the object before saving it and decrypts it when you download it.

  • Server-side encryption with Amazon S3 managed keys (SSE-S3)
  • Server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS)
  • Server-side encryption with customer-provided keys (SSE-C)

Client-side encryption can be done before sending objects to an S3 bucket.

AWS S3 Bucket Policy

An AWS S3 bucket policy is a resource-based policy that allows you to grant permissions on your bucket and objects. Only the bucket owner's account can associate a policy with the bucket, and bucket policies are based on the standard access policy language.

AWS s3 bucket policy examples

In this section we will go through some examples of bucket policies. With a bucket policy you can secure access to objects in your buckets, so that only users with the appropriate permissions can access them.

s3 bucket policy to encrypt each object with server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS)

To require server-side encryption of all objects in a particular Amazon S3 bucket, you can use a bucket policy.

{
   "Version":"2012-10-17",
   "Id":"PutObjectPolicy",
   "Statement":[{
         "Sid":"DenyUnEncryptedObjectUploads",
         "Effect":"Deny",
         "Principal":"*",
         "Action":"s3:PutObject",
         "Resource":"arn:aws:s3:::DOC-EXAMPLE-BUCKET1/*",
         "Condition":{
            "StringNotEquals":{
               "s3:x-amz-server-side-encryption":"aws:kms"
            }
         }
      }
   ]
}

s3 bucket policy which require SSE-KMS with a specific AWS KMS key for all objects written to a bucket

{
"Version": "2012-10-17",
"Id": "PutObjPolicy",
"Statement": [{
  "Sid": "DenyObjectsThatAreNotSSEKMSWithSpecificKey",
  "Principal": "*",
  "Effect": "Deny",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
  "Condition": {
    "ArnNotEqualsIfExists": {
      "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-east-2:111122223333:key/01234567-89ab-cdef-0123-456789abcdef"
    }
  }
}]
}

Grant cross-account permissions to upload objects while ensuring that the bucket owner has full control

{
   "Version":"2012-10-17",
   "Statement":[
     {
       "Sid":"PolicyForAllowUploadWithACL",
       "Effect":"Allow",
       "Principal":{"AWS":"111122223333"},
       "Action":"s3:PutObject",
       "Resource":"arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
       "Condition": {
         "StringEquals": {"s3:x-amz-acl":"bucket-owner-full-control"}
       }
     }
   ]
}

How to remove bucket content completely using aws s3 rm

To remove bucket content completely run the below command.

aws s3 rm s3://bucket-name --recursive

Deleting an AWS S3 bucket: to delete an Amazon S3 bucket (the --force flag first removes all of its objects), run the below command.

aws s3 rb s3://bucket-name --force 

How to transform data with S3 object Lambda

To Transform the data with AWS S3 Object Lambda follow the below steps:

  • Prerequisites
  • Step 1: Create an S3 bucket
  • Step 2: Upload a file to the S3 bucket
  • Step 3: Create an S3 access point
  • Step 4: Create a Lambda function
  • Step 5: Configure an IAM policy for your Lambda function’s execution role
  • Step 6: Create an S3 Object Lambda Access Point
  • Step 7: View the transformed data
  • Step 8: Clean up

List S3 Bucket using the AWS S3 CLI command ( aws s3 list bucket or AWS S3 ls )

To list a bucket using the AWS CLI, use the below command. It lists all prefixes and objects in the bucket.

aws s3 ls s3://mybucket

AWS S3 Sync

Syncs directories and S3 prefixes. Recursively copies new and updated files from the source directory to the destination. Only creates folders in the destination if they contain one or more files.

The following sync command syncs the files in a local directory to objects under the specified bucket and prefix by uploading the local files to S3.

aws s3 sync . s3://mybucket

AWS S3 cp recursive

To recursively copy all files between a local directory and an S3 bucket, pass the --recursive flag to aws s3 cp.
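For example, to upload the contents of the current directory to a bucket (mybucket is a placeholder name):

aws s3 cp . s3://mybucket --recursive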

aws s3 mv

Moves a local file or S3 object to another location locally or in S3. The following mv command moves a single file to a specified bucket and key.

aws s3 mv test.txt s3://mybucket/test2.txt

Conclusion

In this tutorial we learned important concepts of AWS S3, such as its uses, bucket policies, and the features of an AWS S3 bucket.

kubernetes microservice architecture with kubernetes deployment example

In this article we will go through the kubernetes microservice architecture with kubernetes deployment example.

Table of Contents

  1. Prerequisites
  2. kubernetes microservice architecture
  3. Docker run command to deploy Microservice
  4. Preparing kubernetes deployment yaml or kubernetes deployment yml file for Voting App along with kubernetes deployment environment variables
  5. Preparing kubernetes deployment yaml or kubernetes deployment yml file for Redis app along with kubernetes deployment environment variables
  6. Preparing kubernetes deployment yaml or kubernetes deployment yml file for PostgresApp along with kubernetes deployment environment variables
  7. Preparing kubernetes deployment yaml or kubernetes deployment yml file for Worker App along with kubernetes deployment environment variables
  8. Preparing kubernetes deployment yaml or kubernetes deployment yml file for Result App along with kubernetes deployment environment variables
  9. Creating kubernetes nodeport or k8s nodeport or kubernetes service nodeport YAML file
  10. Creating kubernetes clusterip or kubernetes service clusterip YAML file
  11. Running kubernetes service and Kubernetes deployments.
  12. Conclusion

Prerequisites

This will be a step-by-step tutorial; to follow along you will need:

  • Ubuntu or Linux machine with Kubernetes cluster running or a minikube.
  • kubectl command installed

kubernetes microservice architecture

The below kubernetes microservice architecture describes an application where you cast a vote and the result is displayed based on the votes. It has the following components:

  • Voting app, a Python-based UI app where you cast your vote.
  • In-memory app based on Redis, which stores your vote in memory.
  • Worker app, a .NET-based app that moves the in-memory data into the Postgres DB.
  • Postgres DB app, which collects the data and stores it in the database.
  • Result app, a UI-based app that fetches the data from the DB and displays the votes to the users.

Docker run command to deploy Microservice

We will start this tutorial by showing the docker commands we would have used if we had run all these applications directly in Docker instead of Kubernetes.

docker run -d --name=redis redis

docker run -d --name=db postgres:9.4

docker run -d --name=vote -p 5000:80 --link redis:redis voting-app

docker run -d --name=result -p 5001:80 --link db:db  result-app

docker run -d --name=worker  --link redis:redis --link db:db worker

Preparing kubernetes deployment yaml or kubernetes deployment yml file for Voting App along with kubernetes deployment environment variables

As this tutorial deploys all applications in kubernetes, we will prepare all the YAML files and, at the end of the tutorial, deploy them using the kubectl command.

In the below deployment file we are creating the voting app; the deployment manages pods whose labels match name: voting-app-pod and app: demo-voting-app.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting-app-deploy
  labels:
    name: voting-app-deploy
    app: demo-voting-app
spec:
  replicas: 3
  selector:
    matchLabels:
      name: voting-app-pod
      app: demo-voting-app  
  template:
    metadata:
      name: voting-app-pod
      labels:
        name: voting-app-pod
        app: demo-voting-app
    spec:
      containers:
        - name: voting-app        
          image: kodekloud/examplevotingapp_voting:v1
          resources:
            limits:
              memory: "4Gi"
              cpu: "2"
            requests:
              memory: "2Gi"
              cpu: "1"
          ports:
            - containerPort: 80

Preparing kubernetes deployment yaml or kubernetes deployment yml file for Redis app along with kubernetes deployment environment variables

In the below deployment file we are creating the redis app; the deployment manages pods whose labels match name: redis-pod and app: demo-voting-app.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deploy
  labels:
    name: redis-deploy
    app: demo-voting-app
spec:  
    replicas: 1
    selector:
        matchLabels:
        name: redis-pod
        app: demo-voting-app
    template:    
      metadata:
        name: redis-pod
        labels:
          name: redis-pod
          app: demo-voting-app

      spec:
        containers:
          - name: redis
            image: redis
            resources:
              limits:
                memory: "4Gi"
                cpu: "2"
              requests:
                memory: "2Gi"
                cpu: "1"
            ports:
              - containerPort: 6379          

Preparing kubernetes deployment yaml or kubernetes deployment yml file for PostgresApp along with kubernetes deployment environment variables

In the below deployment file we are creating the postgres app; the deployment manages pods whose labels match name: postgres-pod and app: demo-voting-app.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deploy
  labels:
    name: postgres-deploy
    app: demo-voting-app
spec:
    replicas: 1
    selector:
      matchLabels:
        name: postgres-pod
        app: demo-voting-app
    template: 
      metadata:
        name: postgres-pod
        labels:
          name: postgres-pod
          app: demo-voting-app
      spec:
        containers:
          - name: postgres
            image: postgres
            resources:
              limits:
                memory: "4Gi"
                cpu: "2"
              requests:
                memory: "2Gi"
                cpu: "1"
            ports:
              - containerPort: 5432
            env:
              - name: POSTGRES_USER
                value: "postgres"
              - name: POSTGRES_PASSWORD
                value: "postgres"

Preparing kubernetes deployment yaml or kubernetes deployment yml file for Worker App along with kubernetes deployment environment variables

In the below deployment file we are creating the worker app; the deployment manages pods whose labels match name: worker-app-pod and app: demo-voting-app.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker-app-deploy
  labels:
    name: worker-app-deploy
    app: demo-voting-app
spec:
  selector:
    matchLabels:
      name: worker-app-pod
      app: demo-voting-app  
  replicas: 3
  template:
    metadata:
      name: worker-app-pod
      labels:
        name: worker-app-pod
        app: demo-voting-app
    spec:
      containers:
        - name: worker
          resources:
            limits:
              memory: "4Gi"
              cpu: "2"
            requests:
              memory: "2Gi"
              cpu: "1"
          image: kodekloud/examplevotingapp_worker:v1

Preparing kubernetes deployment yaml or kubernetes deployment yml file for Result App along with kubernetes deployment environment variables

In the below deployment file we are creating the result app; the deployment manages pods whose labels match name: result-app-pod and app: demo-voting-app.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: result-app-deploy
  labels:
    name: result-app-deploy
    app: demo-voting-app
spec:
   replicas: 1
   selector:
     matchLabels:
       name: result-app-pod
       app: demo-voting-app
   template:
     metadata:
       name: result-app-pod
       labels:
          name: result-app-pod
          app: demo-voting-app
     spec:
       containers:
         - name: result-app
           image: kodekloud/examplevotingapp_result:v1
           resources:
             limits:
               memory: "4Gi"
               cpu: "2"
             requests:
               memory: "2Gi"
               cpu: "1"
           ports:
             - containerPort: 80

Creating kubernetes nodeport or k8s nodeport or kubernetes service nodeport YAML file

Now that we have created the deployment files for each of the applications, the voting app and the result app need to be exposed to the outside world, so we will declare both of their services as NodePort as shown below.

kind: Service 
apiVersion: v1 
metadata:
  name: voting
  labels:
    name: voting-service
    app: demo-voting-app
spec:
  type: NodePort
  selector:
    name: voting-app-pod
    app: demo-voting-app
  ports:      
    - port: 80    
      targetPort: 80   
      nodePort: 30004
---
kind: Service 
apiVersion: v1 
metadata:
  name: result
  labels:
    name: result-service
    app: demo-voting-app
spec:
  type: NodePort
  selector:
    name: result-app-pod
    app: demo-voting-app
  ports:      
    - port: 80
      targetPort: 80
      nodePort: 30005

Creating kubernetes clusterip or kubernetes service clusterip YAML file

Now that we have created the deployment files for each of the applications, the Redis app and the Postgres app need to be exposed only within the cluster, so we will declare both of their services as ClusterIP as shown below.

kind: Service 
apiVersion: v1 
metadata:
  name: db
  labels:
    name: postgres-service
    app: demo-voting-app
spec:
  type: ClusterIP
  selector:
    name: postgres-pod
    app: demo-voting-app
  ports:      
    - port: 5432    
      targetPort: 5432   
---
kind: Service 
apiVersion: v1 
metadata:
  name: redis
  labels:
    name: redis-service
    app: demo-voting-app
spec:
  type: ClusterIP
  selector:
    name: redis-pod
    app: demo-voting-app
  ports:      
    - port: 6379    
      targetPort: 6379   

Running kubernetes services and kubernetes deployments

Now we will run the kubernetes services and kubernetes deployments using the below commands.

kubectl apply -f postgres-app-deploy.yml
kubectl apply -f redis-app-deploy.yml
kubectl apply -f result-app-deploy.yml
kubectl apply -f worker-app-deploy.yml
kubectl apply -f voting-app-deploy.yml



kubectl apply -f postgres-app-service.yml
kubectl apply -f redis-app-service.yml
kubectl apply -f result-app-service.yml
kubectl apply -f voting-app-service.yml
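
Once everything is applied, you can quickly verify that the deployments, pods and services came up. A minimal check (the label and NodePort values assume the manifests above) could look like this:

kubectl get deployments                      # all five deployments should report READY replicas
kubectl get pods -l app=demo-voting-app      # pods for voting, result, worker, redis and postgres
kubectl get svc                              # note the NodePort values 30004 (voting) and 30005 (result)
# The voting and result apps are then reachable on any node's IP, e.g. http://<node-ip>:30004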

Conclusion

In this article we went through the kubernetes microservice architecture with a kubernetes deployment example.

How to allow only HTTPS requests on AWS S3 buckets using AWS S3 Policy

It is important for your infrastructure to be secure. Similarly, if you wish to secure your AWS S3 bucket contents, you need to make sure that you allow only secure requests that work over HTTPS.

In this quick tutorial you will learn how to allow only HTTPS requests on AWS S3 buckets using an AWS S3 bucket policy.

Lets get started.

Prerequisites

  • AWS account
  • One AWS Bucket

Creating AWS S3 bucket Policy for AWS S3 bucket

The below policy has a single statement which performs the below actions:

  • Version is the policy language version; it is a fixed date (2012-10-17).
  • The statement restricts all requests except HTTPS on the AWS S3 bucket ( my-bucket ).
  • Deny here means it denies any request that is not secure (i.e., where aws:SecureTransport is false).
{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RestrictToTLSRequestsOnly",
        "Action": "s3:*",
        "Effect": "Deny",
        "Resource": [
            "arn:aws:s3:::my-bucket",
            "arn:aws:s3:::my-bucket/*"
        ],
        "Condition": {
            "Bool": {
                "aws:SecureTransport": "false"
            }
        },
        "Principal": "*"
    }]
}
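
To apply this policy from the AWS CLI, you could save it as policy.json and attach it to the bucket. This is a minimal sketch; the bucket name my-bucket and the file name are placeholders:

# Attach the HTTPS-only policy to the bucket (replace my-bucket with your bucket name)
aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json

# Confirm the policy is in place
aws s3api get-bucket-policy --bucket my-bucket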

Conclusion

This tutorial demonstrated how to allow only HTTPS requests on AWS S3 buckets using AWS S3 Policy.

How AWS s3 list bucket and AWS s3 put object

Are you struggling to list your AWS S3 bucket and unable to upload data? If yes, then don't worry, this tutorial is for you.

In this quick tutorial you will learn how you can list all the AWS Amazon S3 buckets and upload objects into it by assigning IAM policy to a user or a role.

Lets get started.

Prerequisites

  • AWS account
  • One AWS Bucket

Creating IAM policy for AWS S3 to list buckets and put objects

The below policy has two statements which performs the below actions:

  • The first statement allows you to list the objects in the AWS S3 bucket named (my-bucket-name).
  • The second statement allows you to perform any object-level action, such as s3:PutObject, s3:GetObject, s3:DeleteObject, etc., on objects in the AWS S3 bucket named (my-bucket-name).
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListObjectsInBucket",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::my-bucket-name"]
        },
        {
            "Sid": "AllObjectActions",
            "Effect": "Allow",
            "Action": "s3:*Object",
            "Resource": ["arn:aws:s3:::my-bucket-name/*"]
        }
    ]
}
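
To attach this policy to an IAM user from the AWS CLI and then exercise it, a minimal sketch could look like the below. The user name my-user, the policy name and the file name are placeholders:

# Attach the policy inline to an IAM user (save the JSON above as s3-list-put.json)
aws iam put-user-policy --user-name my-user --policy-name S3ListAndPutObjects --policy-document file://s3-list-put.json

# Now the user can list the bucket and upload objects
aws s3 ls s3://my-bucket-name
aws s3 cp ./report.csv s3://my-bucket-name/report.csv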

Conclusion

This tutorial demonstrated how you can list AWS S3 buckets and upload objects into them by assigning an IAM policy to a user or a role.

How to Deny IP addresses to Access AWS Cloud using AWS IAM policy with IAM policy examples

Did you know that you can restrict certain IP addresses from accessing AWS services with a single policy?

In this quick tutorial you will learn how to deny IP addresses using an AWS IAM policy, with IAM policy examples.

Lets get started.

Prerequisites

  • AWS account
  • Permissions to create IAM Policy

Lets describe the below IAM Policy in the AWS Cloud.

  • Version is the policy language version, which is fixed (2012-10-17).
  • Effect is Deny in the statement, as we want to block access to the AWS cloud from unapproved IP addresses.
  • Resource is the * wildcard character, as we want the deny to apply to all AWS services and resources.
  • This policy denies access to the AWS cloud from all IP addresses except the few listed under the NotIpAddress condition. The aws:ViaAWSService key (set to false) excludes requests that an AWS service makes to another service on your behalf, so those are not blocked.
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "NotIpAddress": {
                "aws:SourceIp": [
                    "192.0.2.0/24",
                    "203.0.113.0/24"
                ]
            },
            "Bool": {"aws:ViaAWSService": "false"}
        }
    }
}
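
To create this as a customer managed policy from the AWS CLI, a hedged sketch could look like the below; the policy name, file name, group name and account ID are placeholders:

# Create the managed policy from the JSON above (saved as deny-ip-policy.json)
aws iam create-policy --policy-name DenyOutsideIPs --policy-document file://deny-ip-policy.json

# Attach it to a group so it applies to all users in that group
aws iam attach-group-policy --group-name developers --policy-arn arn:aws:iam::111122223333:policy/DenyOutsideIPs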

Conclusion

This tutorial demonstrated how to deny IP addresses using an AWS IAM policy, with IAM policy examples.

How to Access AWS EC2 instance on Specific Dates using IAM Policy

Did you know that you can restrict a user or a group of IAM users from accessing AWS services with a single policy?

In this quick tutorial you will learn how to access an AWS EC2 instance on specific dates using an IAM policy.

Lets get started.

Prerequisites

  • AWS account
  • Permissions to create IAM Policy

Creating IAM Policy to Access AWS EC2 instance on Specific Dates

Lets describe the below IAM Policy in the AWS Cloud.

  • Version is the policy language version, which is fixed (2012-10-17).
  • Effect is Allow in the statement, as we want users or groups to be able to describe AWS EC2 instances.
  • Resource is the * wildcard character, as we want the action to be allowed for all AWS EC2 instances.
  • This policy allows users or groups to describe instances only within a specific date range, using the DateGreaterThan and DateLessThan operators on aws:CurrentTime within the Condition.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            
            "Action": "ec2:DescribeInstances",
            "Resource": "*",
            "Condition": {
                "DateGreaterThan": {"aws:CurrentTime": "2023-03-11T00:00:00Z"},
                "DateLessThan": {"aws:CurrentTime": "2020-06-30T23:59:59Z"}
            }
        }
    ]
}
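
You can sanity-check the date window with the IAM policy simulator from the CLI. A hedged sketch (the file name and the test date are illustrative):

# Simulate ec2:DescribeInstances against the policy above (saved as ec2-dates-policy.json)
aws iam simulate-custom-policy \
    --policy-input-list file://ec2-dates-policy.json \
    --action-names ec2:DescribeInstances \
    --context-entries "ContextKeyName=aws:CurrentTime,ContextKeyValues=2023-04-15T12:00:00Z,ContextKeyType=date"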

Conclusion

This tutorial demonstrated how to create an IAM policy that allows access to describe AWS EC2 instances only on specific dates.

What is Amazon EC2 in AWS?

If you are looking to start your career in AWS cloud then knowing your first service that is AWS EC2 can give you a good understanding around the compute resources in AWS cloud. With AWS EC2 you will also understand which all services utilize AWS EC2.

Lets get started and learn AWS EC2.

Table of Content

  1. Amazon EC2 (AWS Elastic compute Cloud)
  2. Amazon EC2 (AWS Elastic compute Cloud)
  3. Pricing of Amazon Linux 2
  4. Configure SSL/TLS on Amazon Linux 2
  5. How to add extra AWS EBS Volumes to an AWS EC2 instance
  6. AMI (Amazon Machine Image)
  7. Features of AMI
  8. AMI Lifecycle
  9. Creating an Amazon EBS Backed Linux AMI
  10. Creating an Instance Store backed Linux AMI
  11. Copying an Amazon AMI
  12. Storing and restoring an Amazon AMI
  13. Amazon Linux 2
  14. AWS Instances
  15. Stop/Start Instance EBS Backed instance
  16. Reboot AWS EC2 Instance
  17. Hibernated Instance ( EBS Backed instance)
  18. Terminated Instance EBS Backed instance
  19. AWS Instance types
  20. AWS Instance Lifecycle
  21. Monitoring AWS EC2 instance
  22. Cloud-init
  23. AWS EC2 Monitoring
  24. AWS EC2 Networking
  25. Local Zones
  26. AWS Wavelength
  27. Elastic Network Interface
  28. Configure your network interface using ec2-net-utils for Amazon Linux
  29. IP Address
  30. Assign a secondary private IPv4 address
  31. What is Elastic IP address?
  32. Associate an Elastic IP address with the secondary private IPv4 address
  33. Conclusion

Amazon EC2 (AWS Elastic compute Cloud)

Amazon EC2 stands for Amazon Elastic compute cloud that allows you to launch servers or virtual machines that are scalable in the Amazon Web service cloud. Also, with AWS EC2 instance, you don’t require to invest in any hardware or electricity costs, and you just pay for what you use.

When required, you can quickly decrease or scale up the number of AWS EC2 instances.

  • Instances require an operating system, additional software, etc., to get launched, so they use templates known as Amazon Machine Images (AMI).
  • You can work with various compute configurations, such as memory or CPU; for that you will need to select the appropriate instance type.
  • To securely log in to these instances you will need to generate a key pair, where you store the private key and AWS stores the public key.
  • An instance can have two types of storage, i.e., the instance store, which is temporary, and Amazon Elastic Block Store, also known as EBS volumes.

Amazon EC2 (AWS Elastic compute Cloud)

  • Provides scalable computing capacity in the Amazon Web Services cloud. You don't need to invest in hardware up front, and it takes a few minutes to launch your virtual machine and deploy your applications.
  • You can use preconfigured templates known as Amazon Machine Images (AMIs) that include the OS and additional software. The launched machines are known as instances, and instances come with various compute configurations, such as CPU and memory, known as the instance type.
  • To securely log in you need key pairs, where the public key is stored with AWS and the private key is stored with the customer. A key pair uses either the RSA or ED25519 type; Windows instances don't support ED25519.
  • To use a key on mac or Linux computer grant the following permissions:
 chmod 400 key-pair-name.pem
  • Storage volumes for temporary data can use Instance store volumes however when you need permanent data then consider using EBS i.e., Elastic block store.
  • To secure your Instance consider using security groups.
  • If you need to allocate the static IP address to an instance, then consider using Elastic address.
  • Your instance can be an EBS-backed instance or an instance store-backed instance, which means the root volume can be either EBS or the instance store. Instance store-backed instances can only be running or terminated; they cannot be stopped. Also, instance attributes such as RAM and CPU cannot be changed.
  • Instances launched from an Amazon EBS-backed AMI launch faster than instances launched from an instance store-backed AMI
  • When you launch an instance from an instance store-backed AMI, all the parts have to be retrieved from Amazon S3 before the instance is available. With an Amazon EBS-backed AMI, only the parts required to boot the instance need to be retrieved from the snapshot before the instance is available
  • Use Amazon Inspector to automatically discover software vulnerabilities and unintended network exposure.
  • Use Trusted advisor to inspect your environment.
  • Use separate Amazon EBS volumes for the operating system versus your data.
  • Encrypt EBS volumes and snapshots.
  • Regularly back up your EBS volumes using EBS Snapshots, create AMI’s from your instance.
  • Deploy critical applications across multiple AZ’s.
  • Set TTL to 255 or nearby on your application side so that connections stay intact; otherwise it can cause reachability issues.
  • When you install Apache, the document root is the /var/www/html directory, and by default only the root user has access to this directory. But if you want any other user to access the files under this directory, perform the steps below. Let's assume the user is ec2-user.
sudo usermod -a -G apache ec2-user  # Logout and login back
sudo chown -R ec2-user:apache /var/www
sudo chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775 {} \;  # For Future files

Pricing of Amazon Linux 2

There are different plans available for different EC2 instance such as:

  • On-Demand Instances: No long-term commitments; you only pay per second, with a minimum period of 60 seconds.
  • Savings Plans: You commit to a consistent amount of usage for a 1-year or 3-year term.
  • Reserved Instances: You can book your instance for a year or a period of 3 years for a specific configuration.
  • Spot Instances: If you need cheap instances that run on unused EC2 capacity, you can go ahead and use them.

Configure SSL/TLS on Amazon Linux 2

  • SSL/TLS creates an encrypted channel between a web server and web client that protects data in transit from being eavesdropped on.  
  • Make sure you have EBS backed Amazon Linux 2, Apache installed, TLS Public Key Infrastructure (PKI) relies on DNS. Also make sure to register domain for your EC2 instance.
  • Nowadays we are using TLS 1.2 and 1.3 versions and underlying TLS library is supported and enabled.
  • Enable TLS on the server by installing the Apache SSL module using the below command, followed by configuring it.
sudo yum install -y mod_ssl

vi /etc/httpd/conf.d/ssl.conf

  • Generate a self-signed test certificate inside /etc/pki/tls/certs:
cd /etc/pki/tls/certs
sudo ./make-dummy-cert localhost.crt
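
After pointing ssl.conf at the generated certificate (the SSLCertificateFile directive, assuming the default configuration layout), restart Apache and test the TLS endpoint locally. A minimal sketch:

# Restart Apache so the mod_ssl configuration takes effect
sudo systemctl restart httpd

# Test the HTTPS endpoint; -k skips certificate verification because the dummy certificate is self-signed
curl -k https://localhost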

How to add extra AWS EBS Volumes to an AWS EC2 instance

Basically this section is to add the Extra volume to an instance. There are two types of volumes first is root volume and other is extra volume (EBS) which you can add. To add the extra volume on AWS EC2 below are the steps:

  • Launch one AWS EC2 instance and while launching under Configure storage, choose Add new volume. Ensure that the added EBS volume size is 8 GB, and the type is gp3. AWS EC2 instance will have two volumes one for root and other added storage.
  • Before modifying or updating the volume, make sure to take a snapshot of the current volume by navigating to the Storage tab under EC2, then the block devices and the volume ID.
  • Now create a file system and attach it to non-mounted EBS volume by running the following command.
sudo mkfs -t xfs /dev/nvme1n1
sudo mkdir /data
sudo mount /dev/nvme1n1 /data
lsblk -f
  • Now, again on the AWS EC2 instance, go to the volume ID and click Modify volume to change the volume size or type.
  • Extend the file system by first checking the size of the file system.
df -hT
  • Now to extend use the command:
sudo xfs_growfs -d /data
  • Again, check the file system size by running the df -hT command.

AMI (Amazon Machine Image)

  • You can launch multiple instances using the same AMI. An AMI includes EBS snapshots (for EBS-backed AMIs), or a template of the root volume containing the OS and software (for instance store-backed AMIs).

To Describe the AMI you can run the below command.

aws ec2 describe-images \
    --region us-east-1 \
    --image-ids ami-1234567890EXAMPLE

Features of AMI

  • You can create an AMI using snapshot or a template.
  • You can deregister the AMI as well.
  • AMI’s are either EBS backed or instance backed.
    • With EBS backed AMI’s the Root volume is terminated and other EBS volume is not deleted.
  • When you launch an instance from an instance store-backed AMI, all the parts have to be retrieved from Amazon S3 before the instance is available.
  • With an Amazon EBS-backed AMI, only the parts required to boot the instance need to be retrieved from the snapshot before the instance is available
  • Cost of EBS backed Instance are less because only changes are stored but in case of Instance store backed instances each time customized AMI is stored in AWS S3.
  • An AMI uses one of two types of virtualization: paravirtual (PV) or hardware virtual machine (HVM), with HVM being the better performer.
  • HVM guests are presented with a fully virtualized set of hardware; the boot process is similar to that of a bare metal operating system.
    • The most common HVM bootloader is GRUB or GRUB2.
    • HVM boots by executing master boot record of root block device of your image.
    • HVM allows you to run OS on top of VM as if its bare metal hardware.
    • HVM can take advantage of hardware extensions such as enhanced networking or GPU Processing
  • PV boots with special boot loader called PV-GRUB.
    • PV runs on hardware that doesn’t have explicit support for virtualization.
    • PV cannot take advantage of hardware extensions.
    • All current Regions and instance generations support HVM AMIs; however, this is not true for PV.
  • The first component to load when you start an instance is the firmware. Intel and AMD instance types run on Legacy BIOS or UEFI, while Graviton instances use Unified Extensible Firmware Interface (UEFI). To check the boot mode of an AMI, run the below command. Note: To check the boot mode of an instance, you can run the describe-instances command.
aws ec2 describe-images --region us-east-1 --image-ids ami-0abcdef1234567890
  • To check the boot mode of Operating system, SSH into machine and then run the below command.
sudo /usr/sbin/efibootmgr
  • To set the boot mode you can do that while registering an image not while creating an image.
  • Shared AMI: These are created by developers and made available for others to use.
  • You can deprecate or Deregister the AMI anytime.
  • Recycle Bin is a data recovery feature that enables you to restore accidentally deleted Amazon EBS snapshots and EBS-backed AMIs. Provided you have permissions such as ec2:ListImagesInRecycleBin and ec2:RestoreImageFromRecycleBin

AMI Lifecycle

You can create two types of AMIs:

Creating an Amazon EBS Backed Linux AMI

  • Launch an instance1 using AMI (Marketplace, Your own AMI, Public AMI, Shared AMI)
  • Customize the instance by adding software, etc.
  • Create a new image from the customized instance; this creates a new AMI. Amazon EC2 creates snapshots of your instance's root volume and any other EBS volumes attached to your instance (see the CLI example after this list).
  • Launch another instance2
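
A minimal CLI sketch of the "create an image from the customized instance" step; the instance ID, name and description are placeholders:

# Create an EBS-backed AMI from a running or stopped instance
aws ec2 create-image \
    --instance-id i-1234567890abcdef0 \
    --name "my-customized-ami" \
    --description "AMI created from a customized instance"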

Creating an Instance Store backed Linux AMI

  • Launch an instance1 only from instance backed AMI.
  • SSH Into Instance, customize it.
  • Bundle it; the bundle contains an image manifest and files that contain a template of the root volume. Bundling might take a few minutes.
  • Next upload the bundle to AWS S3.
  • Now, register your AMI.

Note 1: To create and manage instance store-backed Linux AMIs you will need the AMI tools, the AWS CLI, and an AWS S3 bucket.

Note 2: You can’t convert an instance store-backed Windows AMI to an Amazon EBS-backed Windows AMI and you cannot convert an AMI that you do not own.

Copying an Amazon AMI

  • You can copy AMI’s within region or across regions
  • You can also copy AMI along with encrypted snapshot.
  • When you copy Ami the target AMI has its own identifier.
  • Make sure your IAM principal has the permissions to copy AMI.
  • Provide or update Bucket policy so that new AMI can be copied successfully.
  • You can copy an AMI in another region
  • You can copy an AMI in another account. For copying the AMI across accounts make sure you have all the permissions such as Bucket permission, key permissions and snapshot permissions.
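
A hedged sketch of a cross-Region copy from the CLI; the AMI ID, Regions and name are placeholders:

# Copy an AMI from us-east-1 into us-west-2; --region is the destination Region
aws ec2 copy-image \
    --source-region us-east-1 \
    --source-image-id ami-0abcdef1234567890 \
    --region us-west-2 \
    --name "my-ami-copy"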

Storing and restoring an Amazon AMI

  • You can store AMI’s in AWS S3 bucket by using CreatStoreImageTask  API
  • To monitor the progress of AMI use DescribeStoreImageTask
  • copy AMI to another bucket.
  • You can restore only EBS backed AMI’s using CreateRestoreImageTask.
  • To store and restore AMI the S3 bucket must be in same region.

Amazon Linux 2

  • It supports kernel 4.14 and 5.10. You can also upgrade it to 5.15 version. It allows greater parallelism and scalability.
  • New improvements in EXT file system such as large files can be managed easily.
  • DAMON is better supported as the data access monitoring for better memory and performance analysis.
  • To upgrade the kernel, install it using the below command and reboot to verify.
sudo amazon-linux-extras install kernel-5.15
  • The cloud-init package is an open-source application built by Canonical that is used to bootstrap Linux images in a cloud computing environment, such as Amazon EC2. It enables you to specify actions that should happen to your instance at boot time.
  • Amazon Linux also uses the cloud-init package to perform initial configuration of the ec2-user account: setting the hostname, generating host keys, preparing repositories for package management, and adding the user's public key.
  • Amazon Linux uses the cloud-init actions found in /etc/cloud/cloud.cfg.d and /etc/cloud/cloud.cfg. You can create your own cloud-init action files in /etc/cloud/cloud.cfg.d.

AWS Instances

An instance is a virtual server in the cloud. Instance type essentially determines the hardware of the host computer used for your instance. Each instance type offers different compute and memory capabilities.

The root device for your instance contains the image used to boot the instance. The root device is either an Amazon Elastic Block Store (Amazon EBS) volume or an instance store volume.

Your instance may include local storage volumes, known as instance store volumes, which you can configure at launch time with block device mapping

Stop/Start Instance EBS Backed instance:

  • All the storage and EBS volumes remain as they are (they are retained, not deleted).
  • You are not charged for the instance when it is in the stopped state.
  • All the EBS volumes, including root device usage, are still billed.
  • While the instance is in the stopped state you can attach or detach EBS volumes.
  • You can create AMIs during the stopped state, and you can also change a few instance configurations such as the kernel, RAM disk and instance type.
  • The Elastic IP address remains associated with the instance.
  • The instance is usually moved to a new host computer when it is started again (in some cases it remains on the current host).
  • The RAM is erased.
  • Instance store volume data is erased.
  • You stop incurring charges for an instance as soon as its state changes to stopping.

Reboot AWS EC2 Instance

  • The instance stays on the same host computer
  • The Elastic IP address remains associated with the instance
  • The RAM is erased
  • Instance store volumes data is preserved

Hibernated Instance ( EBS Backed instance)

  • The Elastic IP address remains associated with the instance
  • We move the instance to a new host computer
  • The RAM is saved to a file on the root volume
  • Instance store volumes data is erased
  • You incur charges while the instance is in the stopping state, but stop incurring charges when the instance is in the stopped state

Terminated Instance EBS Backed instance:

  • The root volume is deleted by default, but any other attached EBS volumes are preserved.
  • Terminated instances cannot be started again.
  • You are not charged for the instance once it is in the terminated state.
  • The Elastic IP address is disassociated from the instance.

AWS Instance types

  • General Purpose: These instances provide an ideal cloud infrastructure, offering a balance of compute, memory, and networking resources for a broad range of applications that are deployed in the cloud.
  • Compute Optimized instances: Compute optimized instances are ideal for compute-bound applications that benefit from high-performance processors.
  • Memory optimized instances:  Memory optimized instances are designed to deliver fast performance for workloads that process large data sets in memory.
  • Storage optimized instances: Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.

Note:  EBS-optimized instances enable you to get consistently high performance for your EBS volumes by eliminating contention between Amazon EBS I/O and other network traffic from your instance.

You can enable enhanced networking on supported instance types to provide lower latencies, lower network jitter, and higher packet-per-second (PPS) performance

AWS Instance Lifecycle

  • Note: You cannot stop and then start an Instance store backed instance.
  • From an AMI, you launch an instance.
  • The instance enters the pending state and then moves to running.
  • From running, the instance can move to rebooting or stopping, and eventually to shutting-down and terminated.

Amazon EC2 instances support multithreading, which enables multiple threads to run concurrently on a single CPU core. Each thread is represented as a virtual CPU (vCPU) on the instance. An instance has a default number of CPU cores, which varies according to instance type. For example, an m5.xlarge instance type has two CPU cores and two threads per core by default—four vCPUs in total.

  • Number of CPU cores: You can customize the number of CPU cores for the instance. You might do this to potentially optimize the licensing costs of your software with an instance that has sufficient amounts of RAM for memory-intensive workloads but fewer CPU cores.
  • Threads per core: You can disable multithreading by specifying a single thread per CPU core. You might do this for certain workloads, such as high performance computing (HPC) workloads.
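
As a hedged example of the options above, the core count and threads per core can be set at launch time from the CLI; the AMI ID, instance type and key name are placeholders:

# Launch an instance with 2 CPU cores and multithreading disabled (1 thread per core)
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type m5.xlarge \
    --key-name my-key-pair \
    --cpu-options CoreCount=2,ThreadsPerCore=1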

Monitoring AWS EC2 instance

You can monitor AWS EC2 instances either manually or automatically. Lets discuss few of Automated monitoring tools.

  • System status checks
  • Instance status checks
  • Amazon CloudWatch alarms
  • Amazon EventBridge
  • Amazon CloudWatch Logs
  • CloudWatch agent

Now, lets discuss few of manual tools to monitor AWS EC2 instance.

  • Amazon EC2 Dashboard
  • Amazon CloudWatch Dashboard
  • Instance status checks on the EC2 Dashboard
  • Scheduled events on the EC2 Dashboard

Cloud-init

It is used to bootstrap the Linux images in cloud computing environment.  Amazon Linux also uses cloud-init to perform initial configuration of the ec2-user account. Amazon Linux uses the cloud-init actions found in /etc/cloud/cloud.cfg.d and /etc/cloud/cloud.cfg and you can also add your own actions in this file.

The following tasks are performed by default by cloud-init:

  • Set the default locale.
  • Set the hostname.
  • Parse and handle user data.
  • Generate host private SSH keys.
  • Add a user’s public SSH keys to .ssh/authorized_keys for easy login and administration.
  • Prepare the repositories for package management.
  • Handle package actions defined in user data.
  • Execute user scripts found in user data.

AWS EC2 Monitoring

  • By default, AWS EC2 sends metrics to CloudWatch every 5 minutes.
  • To send metric data for your instance to CloudWatch in 1-minute periods, you can enable detailed monitoring on the instance, but you are charged per metric that is sent to CloudWatch.
  • To list all the metrics of a particular AWS EC2 instance use the below command.
aws cloudwatch list-metrics --namespace AWS/EC2 --dimensions Name=InstanceId,Value=i-1234567890abcdef0

To create CloudWatch alarms, you can select the instance and choose Actions > Monitor and troubleshoot > Manage CloudWatch alarms.

  • You can use Amazon EventBridge to automate your AWS services and respond automatically to system events, such as application availability issues or resource changes.
  • Events from AWS services are delivered to EventBridge in near real time. For example, to activate a Lambda function whenever an instance enters the running state, create an EventBridge rule on the EC2 instance state-change event; once the event is generated, it runs the Lambda function.
  • You can use the CloudWatch agent to collect both system metrics and log files from Amazon EC2 instances and on-premises servers.
sudo yum install amazon-cloudwatch-agent
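
As a hedged CLI alternative to the console steps above, a basic CPU alarm could be created like this; the alarm name, threshold, instance ID and SNS topic are placeholders:

# Alarm when average CPU of the instance stays above 80% for two 5-minute periods
aws cloudwatch put-metric-alarm \
    --alarm-name high-cpu-i-1234567890abcdef0 \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
    --statistic Average \
    --period 300 \
    --evaluation-periods 2 \
    --threshold 80 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:111122223333:my-alerts-topic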

AWS EC2 Networking

If you require a persistent public IP address, you can allocate an Elastic IP address for your AWS account and associate it with an instance or a network interface.

To increase network performance and reduce latency, you can launch instances in a placement group.

Local Zones

A Local Zone is an extension of an AWS Region in geographic proximity to your users. Local Zones have their own connections to the internet and support AWS Direct Connect, so that resources created in a Local Zone can serve local users with low-latency communications.

AWS Wavelength

AWS Wavelength enables developers to build applications that deliver ultra-low latencies to mobile devices and end users. Wavelength deploys standard AWS compute and storage services to the edge of telecommunication carriers’ 5G networks. Developers can extend a virtual private cloud (VPC) to one or more Wavelength Zones, and then use AWS resources like Amazon EC2 instances to run applications that require ultra-low latency and a connection to AWS services in the Region.

Elastic Network Interface

  • An ENI is basically a virtual network adapter which contains the following attributes:
    • 1 primary private IPv4
    • 1 or more secondary private IPv4
    • 1 Elastic IP per private IP
    • One Public IPv4 address
    • 1 Mac address
    • You can create and configure network interfaces and attach them to instances in the same Availability Zone.
    • An instance typically has one network card (adapter), though some instance types have multiple network cards.
    • Each instance has a default network interface, called the primary network interface.
  • Instances with multiple network cards provide higher network performance, including bandwidth capabilities above 100 Gbps and improved packet rate performance. All the instances have mostly one network card which has further ENI’s.
  • The following instances support multiple network cards. 
  • You can attach a network interface to an instance when it’s running (hot attach), when it’s stopped (warm attach), or when the instance is being launched (cold attach).
  • You can detach secondary network interfaces when the instance is running or stopped. However, you can’t detach the primary network interface.

Configure your network interface using ec2-net-utils for Amazon Linux

There is an additional script that is installed by AWS which is ec2-net-utils. To install this script, use the following command.

sudo yum install ec2-net-utils

The configuration files that are generated can be listed using the below command:

ls -l /etc/sysconfig/network-scripts/*-eth?

IP Address

  • You can specify multiple private IPv4 and IPv6 addresses for your instances.
  • You can assign a secondary private IPv4 address to any network interface. The network interface does not need to be attached to the instance.
  • Secondary private IPv4 addresses that are assigned to a network interface can be reassigned to another one if you explicitly allow it.
  • Although you can’t detach the primary network interface from an instance, you can reassign the secondary private IPv4 address of the primary network interface to another network interface.
  • Each private IPv4 address can be associated with a single Elastic IP address, and vice versa.
  • When a secondary private IPv4 address is reassigned to another interface, the secondary private IPv4 address retains its association with an Elastic IP address.
  • When a secondary private IPv4 address is unassigned from an interface, an associated Elastic IP address is automatically disassociated from the secondary private IPv4 address.

Assign a secondary private IPv4 address

  • In the EC2 console, choose Network Interfaces.
  • Select the interface and assign a secondary private IPv4 address (Actions > Manage IP addresses).
  • Verify it again in the EC2 instance Networking tab; a CLI sketch is shown below.
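
A minimal CLI sketch; the ENI ID is a placeholder:

# Assign one additional secondary private IPv4 address to a network interface
aws ec2 assign-private-ip-addresses \
    --network-interface-id eni-0a1b2c3d4e5f67890 \
    --secondary-private-ip-address-count 1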

What is Elastic IP address?

  • It is a static public IPv4 address.
  • It is Region specific and cannot be moved to another Region.
  • The first step is to allocate it to your account; you then associate it with an instance or network interface.
  • When you associate an Elastic IP address with an instance, it is also associated with the instance’s primary network interface

Associate an Elastic IP address with the secondary private IPv4 address

  • In the navigation pane, choose Elastic IPs and associate the address with the secondary private IPv4 address.
  • Verify it again in the EC2 instance Networking tab; a CLI sketch is shown below.
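
A hedged CLI sketch; the allocation ID, ENI ID and private IP are placeholders:

# Associate an Elastic IP with a specific secondary private IPv4 address of an ENI
aws ec2 associate-address \
    --allocation-id eipalloc-0a1b2c3d4e5f67890 \
    --network-interface-id eni-0a1b2c3d4e5f67890 \
    --private-ip-address 10.0.0.25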

Conclusion

In this ultimate guide we learned everything one must know about AWS EC2 in the AWS Cloud.

AWS KMS Keys

If you need to secure the data in your AWS Cloud account, then you must know everything about AWS KMS keys.

In this tutorial we will learn everything we should know about AWS KMS keys and how to call these AWS KMS keys in IAM Policies.

Table of Content

  1. AWS KMS (Key Management Service)
  2. Symmetric Encryption KMS Keys
  3. Asymmetric KMS keys
  4. Data keys
  5. Custom key stores
  6. Key material
  7. Key policies in AWS KMS
  8. Default Key Policy
  9. Allowing user to access KMS keys with Key Policy
  10. Allowing Users and Roles to access KMS keys with Key Policy
  11. Access KMS Key by User in different account
  12. Creating KMS Keys
  13. What is Multi-region KMS Keys?
  14. Key Store and Custom Key Store
  15. How to Encrypt your AWS RDS using AWS KMS keys
  16. Encrypt AWS DB instance using AWS KMS keys
  17. Encrypting the AWS S3 bucket using AWS KMS Keys
  18. Applying Server-side Encryption on AWS S3 bucket
  19. Configure AWS S3 bucket to use S3 Bucket Key with Server Side E-KMS for new objects
  20. Client Side Encryption on AWS S3 Bucket
  21. Conclusion

AWS KMS (Key Management Service)

KMS is a managed service that makes it easy to create and control the cryptographic keys that protect your data through encryption and decryption. KMS uses hardware security modules (HSMs) to protect and validate your keys.

KMS Keys contains a reference to the key material that is used when you perform cryptographic operations with the KMS key. Also, you cannot delete this key material; you must delete the KMS key. A KMS key contains metadata, such as the key ID, key spec, key usage, creation date, description, and key state. Key identifiers act like names for your KMS keys.

keyID: It acts like a name, for example 1234abcd-12ab-34cd-56ef-1234567890ab

Note: A cryptographic key is a string of bits used by a cryptographic algorithm to transform plain text into cipher text or vice versa. This key remains private and ensures secure communication.

  • The KMS keys that are created by us are customer managed keys. You have control over key policies, enabling and disabling the key, rotating key material, adding tags, and creating aliases. When you create an AWS KMS key, by default you get a KMS key for symmetric encryption.
    • Symmetric encryption keys are used in symmetric encryption, where the same key is used for encryption and decryption.
    • An asymmetric KMS key represents a mathematically related public key and private key pair.
  • The KMS keys that are created automatically by AWS are AWS managed keys. Their aliases are represented as aws/redshift, etc. All AWS managed keys are now rotated automatically every year.
  • AWS owned keys are a collection of KMS keys that an AWS service owns and manages for use in multiple AWS accounts. Although AWS owned keys are not in your AWS account, an AWS service can use an AWS owned key to protect the resources in your account.
  • Alias: A user-friendly name given to a KMS key is an alias. For example: alias/ExampleAlias
  • A custom key store is an AWS KMS resource backed by a key manager outside of AWS KMS that you own and manage.
  • Cryptographic operations are API operations that use KMS keys to protect data.
  • Key material is the string of bits used in a cryptographic algorithm.
  • Key policy determines who can manage the KMS keys and who can use it. The key policy that is attached to the KMS key. The key policy is always defined in the AWS account and Region that owns the KMS key.
  • All IAM policies that are attached to the IAM user or role making the request. IAM policies that govern a principal’s use of a KMS key are always defined in the principal’s AWS account.

Symmetric Encryption KMS Keys

When you create an AWS KMS key, by default, you get a KMS key for symmetric encryption. Symmetric key material never leaves AWS KMS unencrypted. To use a symmetric encryption KMS key, you must call AWS KMS. Symmetric encryption keys are used in symmetric encryption, where the same key is used for encryption and decryption.

AWS services that are integrated with AWS KMS use only symmetric encryption KMS keys to encrypt your data. These services do not support encryption with asymmetric KMS keys. 

You can use a symmetric encryption KMS key in AWS KMS to encrypt, decrypt, and re-encrypt data, and generate data keys and data key pairs.

When you call the Encrypt operation against a symmetric encryption key, the request and response look as follows:

Request Syntax:

{
   "EncryptionAlgorithm": "string",
   "EncryptionContext": {
      "string" : "string"
   },

   "GrantTokens": [ "string" ],
   "KeyId": "string",
   "Plaintext": blob
}
Response Syntax

{
   "CiphertextBlob": blob,
   "EncryptionAlgorithm": "string",
   "KeyId": "string"
}
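
A hedged CLI sketch of the same round trip; the alias and file names are placeholders:

# Encrypt a small file with a symmetric KMS key and save the ciphertext to disk
aws kms encrypt \
    --key-id alias/ExampleAlias \
    --plaintext fileb://secret.txt \
    --query CiphertextBlob --output text | base64 --decode > secret.enc

# Decrypt it again; the plaintext comes back base64-encoded
aws kms decrypt \
    --ciphertext-blob fileb://secret.enc \
    --query Plaintext --output text | base64 --decode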

Asymmetric KMS keys

You can create asymmetric KMS keys in AWS KMS. An asymmetric KMS key represents a mathematically related public key and private key pair. The private key never leaves AWS KMS unencrypted.

Data keys

Data keys are symmetric keys you can use to encrypt data, including large amounts of data and other data encryption keys. Unlike symmetric KMS keys, which can’t be downloaded, data keys are returned to you for use outside of AWS KMS.

Custom key stores

A custom key store is an AWS KMS resource backed by a key manager outside of AWS KMS that you own and manage. When you use a KMS key in a custom key store for a cryptographic operation, the operation is performed in the key manager that backs the custom key store.

Key material

Key material is the string of bits used in a cryptographic algorithm. Secret key material must be kept secret to protect the cryptographic operations that use it. Public key material is designed to be shared. You can use key material that AWS KMS generates, key material that is generated in the AWS CloudHSM cluster of a custom key store, or import your own key material.

Key policies in AWS KMS

A key policy is a resource policy for an AWS KMS key. Key policies are the primary way to control access to KMS keys. Every KMS key must have exactly one key policy. The statements in the key policy determine who has permission to use the KMS key and how they can use it. You can also use IAM policies and grants to control access to the KMS key, but every KMS key must have a key policy.

Unless the key policy explicitly allows it, you cannot use IAM policies to allow access to a KMS key. Without permission from the key policy, IAM policies that allow permissions have no effect. Unlike IAM policies, which are global, key policies are Regional

Default Key Policy

As soon as you create the KMS keys, the default key policy is also created which gives the AWS account that owns the KMS key full access to the KMS key. It also allows the account to use IAM policies to allow access to the KMS key, in addition to the key policy.

{
  "Sid": "Enable IAM policies",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:root"
   },
  "Action": "kms:*",
  "Resource": "*"
}

Allowing user to access KMS keys with Key Policy

You can create and manage key policies in the AWS KMS console or by using KMS API operations. First you need to allow users, roles or admins in the key policy to use the KMS keys. As shown below, the key policy allows the user Alice in account 111122223333 to call DescribeKey on the KMS key.

Note: to access KMS you need to create separate IAM policies.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Describe the policy statement",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/Alice"
      },
      "Action": "kms:DescribeKey",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "kms:KeySpec": "SYMMETRIC_DEFAULT"
        }
      }
    }
  ]
}

Allowing Users and Roles to access KMS keys with Key Policy

First you need to allow users, roles or admins in the key policy to use the KMS keys. For users to access KMS you also need to create separate IAM policies. For example, the below key policy allows the account root (111122223333) and the role myRole in account 111122223333 to administer and use the KMS key.

{
    "Id": "key-consolepolicy",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Enable IAM User Permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": "kms:*",
            "Resource": "*"
        },
        {
            "Sid": "Allow access for Key Administrators",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/myRole"
            },
            "Action": [
                "kms:Create*",
                "kms:Describe*",
                "kms:Enable*",
                "kms:List*",
                "kms:Put*",
                "kms:Update*",
                "kms:Revoke*",
                "kms:Disable*",
                "kms:Get*",
                "kms:Delete*",
                "kms:TagResource",
                "kms:UntagResource",
                "kms:ScheduleKeyDeletion",
                "kms:CancelKeyDeletion"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow use of the key",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/myRole"
            },
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow attachment of persistent resources",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/myRole"
            },
            "Action": [
                "kms:CreateGrant",
                "kms:ListGrants",
                "kms:RevokeGrant"
            ],
            "Resource": "*",
            "Condition": {
                "Bool": {
                    "kms:GrantIsForAWSResource": "true"
                }
            }
        }
    ]
}

Access KMS Key by User in different account

In this section we will go through an example where AWS KMS key is present in Account 2 and user from Account 1 named Bob needs to access it. [Access KMS Key in Account 2 by User bob in Account 1]

  • User bob needs to assume role (engineering) in Account 1.
{
    "Role": {
        "Arn": "arn:aws:iam::111122223333:role/Engineering",
        "CreateDate": "2019-05-16T00:09:25Z",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": {
                "Principal": {
                    "AWS": "arn:aws:iam::111122223333:user/bob"
                },
                "Effect": "Allow",
                "Action": "sts:AssumeRole"
            }
        },
        "Path": "/",
        "RoleName": "Engineering",
        "RoleId": "AROA4KJY2TU23Y7NK62MV"
    }
}
  • Attach an IAM policy to the IAM role (Engineering) in Account 1. The policy allows the role to use the KMS key in Account 2.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncryptFrom",
                "kms:ReEncryptTo",
                "kms:GenerateDataKey",
                "kms:GenerateDataKeyWithoutPlaintext",
                "kms:DescribeKey"
            ],
            "Resource": [
                "arn:aws:kms:us-west-2:444455556666:key/1234abcd-12ab-34cd-56ef-1234567890ab"
            ]
        }
    ]
}
  • Now, in Account 2, create a KMS key policy that allows Account 1 to use this KMS key.
{
    "Id": "key-policy-acct-2",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Permission to use IAM policies",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::444455556666:root"
            },
            "Action": "kms:*",
            "Resource": "*"
        },
        {
            "Sid": "Allow account 1 to use this KMS key",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncryptFrom",
                "kms:ReEncryptTo",
                "kms:GenerateDataKey",
                "kms:GenerateDataKeyWithoutPlaintext",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        }
    ]
}
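
To exercise this from the CLI as bob, you would assume the Engineering role and then call KMS with the temporary credentials. A hedged sketch (the session name and file are placeholders; exporting the returned credentials is described in the comment):

# 1. Assume the Engineering role in Account 1
aws sts assume-role \
    --role-arn arn:aws:iam::111122223333:role/Engineering \
    --role-session-name kms-cross-account-test

# 2. Export the returned AccessKeyId, SecretAccessKey and SessionToken as
#    AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN, then call KMS in Account 2
aws kms encrypt \
    --key-id arn:aws:kms:us-west-2:444455556666:key/1234abcd-12ab-34cd-56ef-1234567890ab \
    --plaintext fileb://secret.txt \
    --query CiphertextBlob --output text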

Creating KMS Keys

You can create KMS keys as either single-Region or multi-Region keys. By default, AWS KMS creates the key material. You need the below permissions to create KMS keys.

kms:CreateKey
kms:CreateAlias
kms:TagResource
iam:CreateServiceLinkedRole 
  • Navigate to the AWS KMS service in the AWS Management Console and choose Create key.
  • Add an alias and a description for the key that you are creating.
  • Next, add the permissions (key administrators and key users) and review the key before creation. A CLI sketch is shown below.
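
A minimal AWS CLI sketch of the same steps; the description, alias and key ID are placeholders:

# Create a symmetric encryption KMS key
aws kms create-key --description "Key for my application data"

# Give it a friendly alias (use the KeyId returned by create-key)
aws kms create-alias \
    --alias-name alias/my-app-key \
    --target-key-id 1234abcd-12ab-34cd-56ef-1234567890ab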

What is Multi-region KMS Keys?

AWS KMS supports multi-Region keys, which are AWS KMS keys in different AWS Regions that can be used interchangeably – as though you had the same key in multiple Regions. Each set of related multi-Region keys has the same key material and key ID, so you can encrypt data in one AWS Region and decrypt it in a different AWS Region without re-encrypting or making a cross-Region call to AWS KMS.

  • You begin by creating a symmetric or asymmetric multi-Region primary key in an AWS Region that AWS KMS supports, such as US East (N. Virginia)
  • You set a key policy for the multi-Region key, and you can create grants, and add aliases and tags for categorization and authorization.
  • When you replicate the primary key into another Region, AWS KMS creates a replica key in the specified Region with the same key ID and other shared properties as the primary key. It then securely transports the key material across the Region boundary and associates it with the new KMS key in the destination Region, all within AWS KMS.

Key Store and Custom Key Store

A key store is a secure location for storing cryptographic keys. The default key store in AWS KMS also supports methods for generating and managing the keys that it stores.

By default, the cryptographic key material for the AWS KMS keys that you create in AWS KMS is generated in and protected by hardware security modules (HSMs). However, if you require even more control of the HSMs, you can create a custom key store.

A custom key store is a logical key store within AWS KMS that is backed by a key manager outside of AWS KMS that you own and manage.

AWS KMS – Keys – Default Key store (IN AWS KMS) – HSM

AWS KMS – Keys – Custom Key Store (OUTSIDE AWS KMS) – Key Manager Manages it

There are two Custom Key Stores:

  • An AWS CloudHSM key store is an AWS KMS custom key store backed by an AWS CloudHSM cluster. You create and manage your custom key stores in AWS KMS and create and manage your HSM clusters in AWS CloudHSM.
  • An external key store is an AWS KMS custom key store backed by an external key manager outside of AWS that you own and control

How to Encrypt your AWS RDS using AWS KMS keys

Amazon RDS supports only symmetric KMS keys. You cannot use an asymmetric KMS key to encrypt data in an Amazon RDS database.

When you use KMS with RDS (EBS volumes or DB instances), the service specifies an encryption context. The encryption context is additional authenticated data (AAD), and the same encryption context is required to decrypt the data. The encryption context is also written to your CloudTrail logs.

At minimum, Amazon RDS always uses the DB instance ID for the encryption context, as in the following JSON-formatted example:

{ "aws:rds:db-id": "db-CQYSMDPBRZ7BPMH7Y3RTDG5QY" }

Encrypt AWS DB instance using AWS KMS keys

  • To encrypt a new DB instance, choose Enable encryption on the Amazon RDS console.
  • When you create an encrypted DB instance, you can choose a customer managed key or the AWS managed key for Amazon RDS to encrypt your DB instance.
  • If you don’t specify the key identifier for a customer managed key, Amazon RDS uses the AWS managed key for your new DB instance

Amazon RDS builds on Amazon Elastic Block Store (Amazon EBS) encryption to provide full disk encryption for database volumes.

When you create an encrypted Amazon EBS volume, you specify an AWS KMS key. By default, Amazon EBS uses the AWS managed key for Amazon EBS in your account (aws/ebs). However, you can specify a customer managed key that you create and manage.

For each volume, Amazon EBS asks AWS KMS to generate a unique data key encrypted under the KMS key that you specify. Amazon EBS stores the encrypted data key with the volume.

Similar to DB instances Amazon EBS uses an encryption context with a name-value pair that identifies the volume or snapshot in the request. 

Encrypting the AWS S3 bucket using AWS KMS Keys

Amazon S3 integrates with AWS Key Management Service (AWS KMS) to provide server-side encryption of Amazon S3 objects. Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3.

Amazon S3 uses server-side encryption with AWS KMS (SSE-KMS) to encrypt your S3 object data.

When you configure your bucket to use an S3 Bucket Key for SSE-KMS, AWS generates a short-lived bucket-level key from AWS KMS then temporarily keeps it in S3

Applying Server-side Encryption on AWS S3 bucket

To enforce server-side encryption on an AWS S3 bucket, you can create an AWS S3 bucket policy that denies unencrypted uploads and then apply it to the bucket as shown below.

{
   "Version":"2012-10-17",
   "Id":"PutObjectPolicy",
   "Statement":[{
         "Sid":"DenyUnEncryptedObjectUploads",
         "Effect":"Deny",
         "Principal":"*",
         "Action":"s3:PutObject",
         "Resource":"arn:aws:s3:::DOC-EXAMPLE-BUCKET1/*",
         "Condition":{
            "StringNotEquals":{
               "s3:x-amz-server-side-encryption":"aws:kms"
            }
         }
      }
   ]
}
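
With this policy in place, uploads must request SSE-KMS. A hedged CLI sketch; the bucket, file and key ID are placeholders:

# This upload is denied by the bucket policy (no SSE-KMS header)
aws s3 cp ./data.csv s3://DOC-EXAMPLE-BUCKET1/data.csv

# This upload succeeds because it requests server-side encryption with AWS KMS
aws s3 cp ./data.csv s3://DOC-EXAMPLE-BUCKET1/data.csv \
    --sse aws:kms \
    --sse-kms-key-id 1234abcd-12ab-34cd-56ef-1234567890ab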

Configure AWS S3 bucket to use S3 Bucket Key with Server Side E-KMS for new objects

To enable an S3 Bucket Key when you create a new bucket follow the below steps.

  1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. Choose Create bucket.
  3. Enter your bucket name, and choose your AWS Region.
  4. Under Default encryption, choose Enable.
  5. Under Encryption type, choose AWS Key Management Service key (SSE-KMS).
  6. Choose an AWS KMS key:
    1. Choose AWS managed key (aws/s3), or
    2. Choose Customer managed key, and choose a symmetric encryption customer managed key in the same Region as your bucket.
  7. Under Bucket Key, choose Enable.
  8. Choose Create bucket.

Amazon S3 creates your bucket with an S3 Bucket Key enabled. New objects that you upload to the bucket will use an S3 Bucket Key. To disable an S3 Bucket Key, follow the previous steps, and choose disable.

Client Side Encryption on AWS S3 Bucket

Client-side encryption is the act of encrypting your data locally to ensure its security as it passes to the Amazon S3 service. The Amazon S3 service receives your encrypted data; it does not play a role in encrypting or decrypting it. For example, if you need to use KMS keys in Java application then use the below code.

// Imports assumed for this snippet (AWS SDK for Java v1)
import com.amazonaws.regions.Regions;
import com.amazonaws.services.kms.AWSKMS;
import com.amazonaws.services.kms.AWSKMSClientBuilder;
import com.amazonaws.services.kms.model.CreateKeyRequest;
import com.amazonaws.services.kms.model.CreateKeyResult;
import com.amazonaws.services.kms.model.ScheduleKeyDeletionRequest;
import com.amazonaws.services.s3.AmazonS3EncryptionClientV2Builder;
import com.amazonaws.services.s3.AmazonS3EncryptionV2;
import com.amazonaws.services.s3.model.CryptoConfigurationV2;
import com.amazonaws.services.s3.model.CryptoMode;
import com.amazonaws.services.s3.model.KMSEncryptionMaterialsProvider;

AWSKMS kmsClient = AWSKMSClientBuilder.standard()
                .withRegion(Regions.DEFAULT_REGION)
                .build();

        // create a KMS key for testing this example
        CreateKeyRequest createKeyRequest = new CreateKeyRequest();
        CreateKeyResult createKeyResult = kmsClient.createKey(createKeyRequest);

// --
        // specify an AWS KMS key ID
        String keyId = createKeyResult.getKeyMetadata().getKeyId();

        String bucket_name = "DOC-EXAMPLE-BUCKET1"; // assumed bucket name; replace with your own bucket
        String s3ObjectKey = "EncryptedContent1.txt";
        String s3ObjectContent = "This is the 1st content to encrypt";
// --

        AmazonS3EncryptionV2 s3Encryption = AmazonS3EncryptionClientV2Builder.standard()
                .withRegion(Regions.US_WEST_2)
                .withCryptoConfiguration(new CryptoConfigurationV2().withCryptoMode(CryptoMode.StrictAuthenticatedEncryption))
                .withEncryptionMaterialsProvider(new KMSEncryptionMaterialsProvider(keyId))
                .build();

        s3Encryption.putObject(bucket_name, s3ObjectKey, s3ObjectContent);
        System.out.println(s3Encryption.getObjectAsString(bucket_name, s3ObjectKey));

        // schedule deletion of KMS key generated for testing
        ScheduleKeyDeletionRequest scheduleKeyDeletionRequest =
                new ScheduleKeyDeletionRequest().withKeyId(keyId).withPendingWindowInDays(7);
        kmsClient.scheduleKeyDeletion(scheduleKeyDeletionRequest);

        s3Encryption.shutdown();
        kmsClient.shutdown();

Conclusion

In this article we learnt what AWS KMS (Key Management Service) is, and how key policies and IAM policies allow users or roles to access KMS keys in the AWS cloud.

What is AWS RDS (Relational Database Service)?

In this post you will learn everything you must know, end to end, about AWS RDS. This tutorial will give you a glimpse of each component, starting from what a DB instance is, through to scaling and Multi-AZ cluster configurations and details.

Lets get started.

Table of Content

  • What is AWS RDS (Relational Database Service)?
  • Database Instance
  • Database Engines
  • Database Instance class
  • DB Instance Storage
  • Blue/Green Deployments
  • Working with Read Replicas
  • How does cross region replication works?
  • Multi AZ Deployments
  • Multi AZ DB instance deployment
  • How to convert a single DB instance to Multi AZ DB instance deployment
  • Multi-AZ DB Cluster Deployments
  • DB pricing
  • AWS RDS performance troubleshooting
  • Tagging AWS RDS Resources
  • Amazon RDS Storage
  • Monitoring Events, Logs and Streams in an Amazon RDS DB Instance.
  • How to grant Amazon RDS to publish the notifications to the SNS topic using the IAM Policy.
  • RDS logs
  • AWS RDS Proxy
  • Amazon RDS for MySQL
  • Performance improvements on MySQL RDS for Optimized reads.
  • Importing Data into MySQL with different data source.
  • Database Authentication with Amazon RDS
  • Connecting to your DB instance using IAM authentication from the command line: AWS CLI and mysql client
  • Create database user account using IAM authentication
  • Generate an IAM authentication token
  • Connecting to DB instance
  • Connecting to AWS Instance using Python boto3 (boto3 rds)
  • Final AWS RDS Troubleshooting

What is AWS RDS (Relational Database Service)?

  • It allows you to set up relational databases in the AWS Cloud. AWS RDS is a managed database service.
  • It is cost-effective with resizable capacity; if you invest in your own hardware, memory and CPU instead, it is time consuming and very costly.
  • AWS RDS manages everything: scaling, availability, backups, software patching and installation, OS patching and installation, hardware lifecycle, and server maintenance.
  • You can define permissions of your database users and database with IAM.

Database Instance

A DB instance is a database environment in which you create your database users and user-created databases.

  1. You can run your database instance across multiple AZs, also known as a Multi-AZ deployment. Amazon automatically provisions and maintains a secondary standby instance in a different Availability Zone. With this approach the primary DB replicates the data written to it to the standby instance located in another AZ. Note: In a Multi-AZ DB cluster deployment, the secondary instances can also serve read traffic.
  2. You can attach security groups to your database instance to protect your instance.
  3. You can launch DB instance in Local zones as well by enabling local zone in Amazon EC2 console.
  4. You can use Amazon CloudWatch to monitor the status of your database instance. You can monitor the following metrics:
    1. IOPS (I/O operations per second)
    2. Latency (time from when an I/O request is submitted until it is completed)
    3. Throughput (number of bytes transferred per second to or from disk)
    4. Queue depth (how many requests are pending in the queue)
  5. A DB instance has a unique DB instance identifier that the customer or user provides, and it must be unique for your account in the AWS Region. If you provide the DB instance identifier as testing, then the endpoint formed will be as below.
testing.<account-id>.<region>.rds.amazonaws.com
  • A DB instance supports various DB engines such as MySQL, MariaDB, PostgreSQL, Oracle, Microsoft SQL Server, and the Amazon Aurora database engines.
  • A DB instance can host multiple databases with multiple schemas.
  • When you create any DB instance using the AWS RDS service, it creates a master user account by default, and this user has all permissions. Note: Make sure to change the password of this master user account.
  • You can create a backup of your database instance by creating database snapshots. You can also store your snapshots in an AWS S3 bucket.
  • You can enable IAM database authentication on your database instance so that you don't need a password to log in to the database instance.
  • You can also enable Kerberos authentication to support external authentication of database users using Kerberos and Microsoft Active Directory.
  • DB instances are billed per hour.
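
As a quick illustration of the points above, here is a minimal boto3 sketch of creating a MySQL DB instance with IAM database authentication and a Multi-AZ standby. The identifier, credentials, and sizes are hypothetical placeholders, not values prescribed by this tutorial.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a small MySQL DB instance with a Multi-AZ standby and IAM auth enabled.
# All names and sizes below are placeholder values for illustration only.
response = rds.create_db_instance(
    DBInstanceIdentifier="testing",          # must be unique per account and Region
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                     # in GiB
    MasterUsername="admin",
    MasterUserPassword="ChangeMe123!",       # change the master password afterwards
    MultiAZ=True,                            # provision a standby in another AZ
    EnableIAMDatabaseAuthentication=True,
    BackupRetentionPeriod=7,                 # enable automated snapshots
)
print(response["DBInstance"]["DBInstanceStatus"])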

Database Engines

DB engines are the specific database software that runs on your DB instance, such as MariaDB, Microsoft SQL Server, MySQL, Oracle, and PostgreSQL.

Database Instance class

The DB instance class determines the computation and memory capacity of a DB instance. AWS RDS supports three types of DB instance classes:

  • General purpose
  • Memory optimized
  • Burstable performance
  1. DB instance classes support Intel Hyper-Threading Technology, which enables multiple threads to run in parallel on a single Intel Xeon CPU core. Each thread is represented as a vCPU on the DB instance. For example, the db.m4.xlarge DB instance class has 2 CPU cores and two threads per CPU core, which makes a total of 4 vCPUs. Note: You can disable Intel Hyper-Threading by specifying a single thread per CPU core when you run a high-performance computing workload.
  2. To set the core count and threads per core, you edit the processor features of the DB instance, as sketched below.
  3. Quick note: To compare the CPU capacity between different DB instance classes, use ECUs (Amazon EC2 compute units). The amount of CPU that is allocated to a DB instance is expressed in terms of EC2 compute units.
  4. You can use EBS-optimized volumes, which benefit your DB instance by minimizing contention between Amazon EBS I/O and other traffic from your instance.
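
To illustrate item 2 above, here is a hedged boto3 sketch of editing the processor features on an existing instance to disable Hyper-Threading (one thread per core); the instance identifier and core count are assumptions for the example.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Set 2 CPU cores with a single thread per core (disables Hyper-Threading).
rds.modify_db_instance(
    DBInstanceIdentifier="testing",           # placeholder identifier
    ProcessorFeatures=[
        {"Name": "coreCount", "Value": "2"},
        {"Name": "threadsPerCore", "Value": "1"},
    ],
    ApplyImmediately=True,
)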

DB Instance Storage

Amazon RDS uses Amazon EBS block-level storage volumes attached to your DB instance. DB instance storage comes with:

  • General Purpose (SSD) [gp2 and gp3]: cost-effective storage that is ideal for a broad range of workloads on medium-sized DB instances. Generally, they have a throughput limit of 250 MB/second.
  • For gp2
    • 3 IOPS for each GB, with a minimum of 100 IOPS (I/O operations per second)
    • 16,000 IOPS at 5.34 TB is the maximum limit for gp2
    • Throughput is at most 250 MB/sec, where throughput is how fast the storage volume can perform reads and writes.
  • For gp3
    • Up to 32,000 IOPS
  • Provisioned IOPS (PIOPS) [io1]: used when you need low I/O latency and consistent I/O throughput. These are suited for production environments.
    • For io1 – up to 256,000 IOPS and throughput up to 4,000 MB/s
    • Note: the benefits of using Provisioned IOPS are
      • An increased number of I/O requests that the system can process.
      • Decreased latency, because fewer I/O requests wait in the queue.
      • Faster response times and higher database throughput.
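
As a rough illustration of switching storage types, the following boto3 sketch converts a DB instance to Provisioned IOPS (io1) storage; the identifier, size, and IOPS values are placeholders and should be sized for your own workload.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Move the instance to io1 storage with provisioned IOPS.
rds.modify_db_instance(
    DBInstanceIdentifier="testing",   # placeholder identifier
    StorageType="io1",
    AllocatedStorage=100,             # GiB; cannot be reduced later for PIOPS storage
    Iops=3000,                        # provisioned IOPS for the volume
    ApplyImmediately=True,
)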

Blue/Green Deployments

A Blue/Green deployment copies your database environment into a separate staging environment. You can make changes in the staging environment and then later push those changes into the production environment. Blue/Green deployments are only available for RDS for MariaDB and RDS for MySQL.

Working with Read Replicas

  • Updates from the primary DB instance are copied to the read replicas.
  • You can promote a read replica to be a standalone DB instance as well, in case you require sharding (shared-nothing databases).
  • You can create a read replica in a different AWS Region as well.

You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. 

Note: With Cross region read replicas you can create read replicas in a different region from the source DB instance.
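
Below is a minimal boto3 sketch of creating a cross-Region read replica from a source DB instance; the identifiers, Regions, and source ARN are assumptions for illustration.

import boto3

# Create the replica in the destination Region (us-west-2 here).
rds = boto3.client("rds", region_name="us-west-2")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="testing-replica",   # placeholder replica name
    # When the source is in another Region, it must be referenced by its ARN.
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:testing",
    SourceRegion="us-east-1",                 # Region that contains the source instance
)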

How does cross-region replication work?

  • The IAM role of the destination must have access to the source DB instance.
    • The source DB instance acts as the replication source
    • RDS creates an automated DB snapshot of the source DB instance
    • A copy of the snapshot starts
    • The destination read replica is created from the copied DB snapshot

Note: You can configure a DB instance to replicate snapshots and transaction logs to another AWS Region.

Multi AZ Deployments

  • You can run your database instance in multiple Availability Zones (AZs), also known as a Multi-AZ deployment. Amazon automatically provisions and maintains a secondary standby instance in a different Availability Zone. With this approach, the primary DB instance replicates the data written to it to the standby instance located in another AZ. Note: Instances in the secondary AZ can also be configured as read replicas.
  • You can add one standby or two standby instances.
  • When you have one standby instance, it is known as a Multi-AZ DB instance deployment, where the standby instance provides failover support but doesn't act as a read replica.
  • With two standby instances, it is known as a Multi-AZ DB cluster.
  • The failover mechanism automatically changes the Domain Name System (DNS) record of the DB instance to point to the standby DB instance.

Note: DB instances with Multi-AZ DB instance deployments can have increased write and commit latency compared to a Single-AZ deployment.

Multi AZ DB instance deployment

In a Multi-AZ DB instance deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone.  You can’t use a standby replica to serve read traffic

If a planned or unplanned outage of your DB instance results from an infrastructure defect, Amazon RDS automatically switches to a standby replica in another Availability Zone if you have turned on Multi-AZ.

How to convert a single DB instance to Multi AZ DB instance deployment

  • RDS takes a snapshot of the primary DB instance's EBS volumes.
  • RDS creates new volumes for the standby replica from the snapshot.
  • Next, RDS turns on block-level replication between the primary and the standby volumes.
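
From the user's side, the conversion is triggered by modifying the instance to enable Multi-AZ. A minimal boto3 sketch (the instance identifier is a placeholder):

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Convert a single-AZ DB instance into a Multi-AZ DB instance deployment.
rds.modify_db_instance(
    DBInstanceIdentifier="testing",   # placeholder identifier
    MultiAZ=True,
    ApplyImmediately=True,            # otherwise applied in the next maintenance window
)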

Multi-AZ DB Cluster Deployments

  • It has one writer DB instance.
  • It has two reader DB instances that allow clients to read the data.
  • AWS RDS replicates data from the writer DB instance to both readers.
  • Data is synced from the writer instance to both of the reader instances.
  • If a failover happens on the writer instance, then a reader instance acts as the automatic failover target. It does so by promoting a reader DB instance to a new writer DB instance. This typically happens automatically within 35 seconds, and you can also trigger it from the Failover tab.

Cluster Endpoint

The cluster endpoint can write as well as read the data. The endpoint cannot be modified.

Reader Endpoint

Reader endpoint is used for reading the content from the DB cluster.

Instance Endpoint

These are used to connect to a DB instance directly, either to address issues within a specific instance or because your application requires fine-grained load balancing.

DB cluster parameter group

DB cluster parameter group acts as a container for engine configuration values that are applied to every DB instance in the Multi-AZ DB cluster

RDS Replica Lag

Replica lag is the difference in time between the latest transaction on the writer DB instance and the latest applied transaction on a reader instance. It can be caused by high write concurrency or heavy batch updates.

How to Solve Replica Lag

You can reduce replica lag by reducing the load on your writer DB instance. You can also use flow control to reduce replica lag. With flow control, a delay is added at the end of a transaction, which decreases the write throughput on the writer instance. To turn on flow control, use the parameter below. By default it is set to 120 seconds; you can turn it off by setting it to 84000 seconds, or tune it by setting a value of less than 120 seconds.

Flow control works by throttling writes on the writer DB instance, which ensures that replica lag doesn't continue to grow unbounded. Write throttling is accomplished by adding a delay: writes are briefly queued before they are let through.

rpl_semi_sync_master_target_apply_lag

To check the status of flow control use below command.

SHOW GLOBAL STATUS like '%flow_control%';

DB pricing

  • DB instances are billed per hour.
  • Storage is billed per GB per month.
  • I/O requests are billed per 1 million requests per month.
  • Data transfer is billed per GB in and out of your DB instance.

AWS RDS performance troubleshooting

  1. Setup CloudWatch monitoring
  2. Enable Automatic backups
  3. If your DB requires more I/O, migrate to a new instance class, or convert from magnetic storage to General Purpose or Provisioned IOPS storage.
  4. If you already have Provisioned IOPS, consider adding more throughput capacity.
  5. If your app is caching the DNS data of your instance, make sure to set the TTL value to less than 30 seconds, because stale caching can lead to connection failures.
  6. Set up enough memory (RAM).
  7. Enable Enhanced Monitoring to identify operating system issues.
  8. Fine-tune your SQL queries.
  9. Avoid letting tables in your database grow too large, as they impact reads and writes.
  10. You can use option groups if you need to provide additional security features for your database.
  11. You can use a DB parameter group, which acts as a container for engine configuration values that are applied to one or more DB instances.

Tagging AWS RDS Resources

  • Tags are very helpful and are basically key value pair formats.
  • You can use Tags in IAM policies to manage access to AWS RDS resources.
  • Tags can be used to produce the detailed billing reports.
  • You can specify if you need tags to be applied to snapshots as well.
  • Tags are useful to determine which instance to be stopped, started, enable backups.

Amazon RDS Storage

Increasing DB instance storage capacity:

Click on Modify in Databases and then Allocated Storage and apply immediately.  

Managing capacity automatically with Amazon RDS storage autoscaling

If your workload is unpredictable, you can enable storage autoscaling for an Amazon RDS DB instance. While creating the database, enable storage autoscaling and set the maximum storage threshold, as sketched below.
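
A hedged boto3 sketch of enabling storage autoscaling on an existing instance by setting a maximum storage threshold (the identifier and threshold are placeholders):

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Allow RDS to automatically grow storage up to 500 GiB.
rds.modify_db_instance(
    DBInstanceIdentifier="testing",   # placeholder identifier
    MaxAllocatedStorage=500,          # maximum storage threshold in GiB
    ApplyImmediately=True,
)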

Modifying settings for Provisioned IOPS SSD storage

You can reduce the amount of provisioned IOPS (i.e., the read and write operation rate) for your instance; however, with Provisioned IOPS SSD storage you cannot reduce the storage size.

Monitoring Events, Logs and Streams in an Amazon RDS DB Instance.

Amazon EventBridge: a serverless event bus service that allows you to connect applications with data from various sources.

CloudTrail logs and CloudWatch Logs are also useful for monitoring.

Database Activity Streams: AWS RDS pushes database activities to an Amazon Kinesis data stream.

How to grant Amazon RDS permission to publish notifications to an SNS topic using an IAM policy.

The following resource policy is attached to the SNS topic.

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "events.rds.amazonaws.com"
      },
      "Action": [
        "sns:Publish"
      ],
      "Resource": "arn:aws:sns:us-east-1:123456789012:topic_name",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:rds:us-east-1:123456789012:db:prefix-*"
        },
        "StringEquals": {
          "aws:SourceAccount": "123456789012"
        }
      }
    }
  ]
}
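
To apply a topic policy like the one above programmatically, a boto3 sketch along these lines could be used. The topic ARN is the placeholder from the policy; note that sns.set_topic_attributes replaces the whole access policy, so merge with any existing statements first.

import json
import boto3

sns = boto3.client("sns", region_name="us-east-1")

topic_arn = "arn:aws:sns:us-east-1:123456789012:topic_name"   # placeholder topic

policy = {
    "Version": "2008-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "events.rds.amazonaws.com"},
        "Action": ["sns:Publish"],
        "Resource": topic_arn,
        "Condition": {
            "ArnLike": {"aws:SourceArn": "arn:aws:rds:us-east-1:123456789012:db:prefix-*"},
            "StringEquals": {"aws:SourceAccount": "123456789012"},
        },
    }],
}

# Attach the access policy to the SNS topic.
sns.set_topic_attributes(TopicArn=topic_arn, AttributeName="Policy", AttributeValue=json.dumps(policy))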

RDS logs

  • Amazon RDS doesn’t provide host access to the database logs on the file system of your DB instance. You can choose the Logs & events tab to view the database log files directly in the console.
  • To publish SQL Server DB logs to CloudWatch Logs from the AWS Management Console, go to the Log exports section and choose the logs that you want to start publishing to CloudWatch Logs.

Note: In CloudWatch Logs, a log stream is a sequence of log events that share the same source. Each separate source of logs in CloudWatch Logs makes up a separate log stream. A log group is a group of log streams that share the same retention, monitoring, and access control settings.

  • Amazon RDS provides a REST endpoint that allows access to DB instance log files; you can download a log file using a request like the one below.
GET /v13/downloadCompleteLogFile/DBInstanceIdentifier/LogFileName HTTP/1.1
Content-type: application/json
host: rds.region.amazonaws.com
  • RDS for MySQL writes mysql-error.log to disk every 5 minutes. You can write the RDS for MySQL slow query log and the general log to a file or to a database table. You can direct the general and slow query logs to tables on the DB instance by creating a DB parameter group and setting the log_output server parameter to TABLE.
    • slow_query_log: To create the slow query log, set to 1. The default is 0.
    • general_log: To create the general log, set to 1. The default is 0.
    • long_query_time: To prevent fast-running queries from being logged in the slow query log, set a minimum query duration (in seconds).
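
Publishing database logs to CloudWatch Logs can also be done programmatically; the following is a rough boto3 sketch for an RDS for MySQL instance (the identifier and chosen log types are example values):

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Start exporting the error, general and slow query logs to CloudWatch Logs.
rds.modify_db_instance(
    DBInstanceIdentifier="testing",   # placeholder identifier
    CloudwatchLogsExportConfiguration={
        "EnableLogTypes": ["error", "general", "slowquery"],
    },
    ApplyImmediately=True,
)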

MySQL removes log files that are more than two weeks old. You can manually rotate the log tables with the following stored procedure:

CALL mysql.rds_rotate_slow_log;

AWS RDS Proxy

  • RDS Proxy allows you to pool and share database connections to improve your application's ability to scale.
  • RDS Proxy makes applications more resilient to database failures by automatically connecting to a standby DB instance.
  • RDS Proxy establishes a database connection pool and reuses connections in this pool, avoiding the memory and CPU overhead of opening a new database connection each time.
  • You can enable RDS Proxy for most applications with no code changes.

You can use RDS Proxy in the following scenarios.

  • Any DB instance or cluster that encounters “too many connections” errors is a good candidate for associating with a proxy.
  • For DB instances or clusters that use smaller AWS instance classes, such as T2 or T3, using a proxy can help avoid out-of-memory conditions
  • Applications that typically open and close large numbers of database connections and don’t have built-in connection pooling mechanisms are good candidates for using a proxy.
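
Creating a proxy requires Secrets Manager credentials, an IAM role, and VPC subnets; the following boto3 sketch shows the general shape of the call, with every ARN and ID being a hypothetical placeholder.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create an RDS Proxy in front of a MySQL database.
rds.create_db_proxy(
    DBProxyName="my-proxy",                     # placeholder proxy name
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:mydb-credentials",  # placeholder
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::123456789012:role/rds-proxy-role",   # role that can read the secret
    VpcSubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],  # placeholders
    RequireTLS=True,
)

# Then register the target DB instance with the proxy's default target group.
rds.register_db_proxy_targets(
    DBProxyName="my-proxy",
    DBInstanceIdentifiers=["testing"],          # placeholder instance identifier
)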

Amazon RDS for MySQL

There are two major versions available for the MySQL database engine: 8.0 and 5.7. MySQL provides the validate_password plugin for improved security. The plugin enforces password policies using parameters in the DB parameter group for your MySQL DB instance.

To find the available MySQL versions that are supported:

aws rds describe-db-engine-versions --engine mysql --query "*[].{Engine:Engine,EngineVersion:EngineVersion}" --output text

SSL/TLS on MySQL DB Instance

Amazon RDS installs an SSL/TLS certificate on the DB instance. These certificates are signed by a certificate authority (CA).

To connect to the DB instance with the certificate, use the command below.

mysql -h mysql-instance1.123456789012.us-east-1.rds.amazonaws.com --ssl-ca=global-bundle.pem --ssl-mode=REQUIRED -P 3306 -u myadmin -p

To check if applications are using SSL.

mysql> SELECT id, user, host, connection_type
       FROM performance_schema.threads pst
       INNER JOIN information_schema.processlist isp
       ON pst.processlist_id = isp.id;

Performance improvements on RDS for MySQL with Optimized Reads.

  • An instance store provides temporary block-level storage for your DB instance.
  • With RDS Optimized reads some temporary objects are stored on Instance store. These objects include temp files, internal on disk temp tables, memory map files, binary logs, cached files.
  • The storage is located on Non-Volatile Memory Express (NVMe) SSDs that are physically attached to the host server.
  • Applications that can use RDS Optimized Reads include:
    • Applications that run on-demand or dynamic reporting queries.
    • Applications that run analytical queries.
    • Database queries that perform grouping or ordering on non-indexed columns
  • Try to add retry logic for read only queries.
  • Avoid bulk changes in single transaction.
  • You can’t change the location of temporary objects to persistent storage (Amazon EBS) on the DB instance classes that support RDS Optimized Reads.
  • Transactions can fail when the instance store is full.
  • RDS Optimized Reads isn’t supported for multi-AZ DB cluster deployments.

Importing Data into MySQL from different data sources.

  1. Existing MySQL database on premises or on Amazon EC2: Create a backup of your on-premises database, store it on Amazon S3, and then restore the backup file to a new Amazon RDS DB instance running MySQL.
  2. Any existing database: Use AWS Database Migration Service to migrate the database with minimal downtime
  3. Existing MySQL DB instance: Create a read replica for ongoing replication. Promote the read replica for one-time creation of a new DB instance.
  4. Data not stored in an existing database: Create flat files and import them using the mysqlimport utility.

Database Authentication with Amazon RDS

For PostgreSQL, use one of the following roles for a user of a specific database.

  • IAM database authentication: assign the rds_iam role to the user.
  • Kerberos authentication: assign the rds_ad role to the user.
  • Password authentication: don't assign either of the above roles.

Password Authentication

  • With password authentication, the database performs all the administration of user accounts; the database controls and authenticates the user accounts.

IAM Database authentication

  • IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don't need to use a password when you connect to a DB instance; instead, you use an authentication token.

Kerberos Authentication

Kerberos authentication gives you the benefits of single sign-on (SSO) and centralized authentication of database users.

Connecting to your DB instance using IAM authentication from the command line: AWS CLI and mysql client

  • In the Database authentication section, choose Password and IAM database authentication to enable IAM database authentication.
  • To allow an IAM user or role to connect to your DB instance, you must create an IAM policy.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
             "rds-db:connect"
         ],
         "Resource": [
             "arn:aws:rds-db:us-east-2:1234567890:dbuser:db-ABCDEFGHIJKL01234/db_user"
         ]
      }
   ]
}

Create database user account using IAM authentication

-- MySQL: create a database user that authenticates with IAM
CREATE USER jane_doe IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';

-- PostgreSQL: create a database user and grant it the rds_iam role
CREATE USER db_userx;
GRANT rds_iam TO db_userx;

Generate an IAM authentication token

aws rds generate-db-auth-token --hostname rdsmysql.123456789012.us-west-2.rds.amazonaws.com --port 3306 --region us-west-2  --username jane_doe

Connecting to DB instance

mysql --host=hostName --port=portNumber --ssl-ca=full_path_to_ssl_certificate --enable-cleartext-plugin --user=userName --password=authToken

Connecting to AWS Instance using Python boto3 (boto3 rds)

import pymysql
import sys
import boto3
import os

# Connection settings for the RDS for MySQL instance
ENDPOINT="mysqldb.123456789012.us-east-1.rds.amazonaws.com"
PORT=3306                      # pymysql expects the port as an integer
USER="jane_doe"
REGION="us-east-1"
DBNAME="mydb"

os.environ['LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN'] = '1'

# gets the credentials from .aws/credentials
session = boto3.Session(profile_name='default')
client = session.client('rds')

# Generate a short-lived IAM authentication token to use as the password
token = client.generate_db_auth_token(DBHostname=ENDPOINT, Port=PORT, DBUsername=USER, Region=REGION)

try:
    # Connect over SSL using the IAM token as the password
    conn = pymysql.connect(host=ENDPOINT, user=USER, passwd=token, port=PORT, database=DBNAME, ssl_ca='SSLCERTIFICATE')

    cur = conn.cursor()
    cur.execute("""SELECT now()""")
    query_results = cur.fetchall()
    print(query_results)

except Exception as e:
    print("Database connection failed due to {}".format(e))

   

Final AWS RDS Troubleshooting Tips

Can’t connect to Amazon RDS DB instance

  • Check Security group
  • Check Port
  • Check internet Gateway
  • Check db name

Error – Could not connect to server: Connection timed out

  • Check hostname and port
  • Check security group
  • Telnet to the DB
  • Check the username and password

Error message “failed to retrieve account attributes, certain console functions may be impaired.”

  • Your account is missing permissions, or your account hasn't been properly set up.
  • You lack permissions in your access policies to perform certain actions, such as creating a DB instance.

Amazon RDS DB instance outage or reboot

  • You change the backup retention period for a DB instance from 0 to a nonzero value or from a nonzero value to 0. You then set Apply Immediately to true.
  • You change the DB instance class, and Apply Immediately is set to true.
  • You change the storage type from Magnetic (Standard) to General Purpose (SSD) or Provisioned IOPS (SSD), or from Provisioned IOPS (SSD) or General Purpose (SSD) to Magnetic (Standard).

Amazon RDS DB instance running out of storage

  • Add more storage in  EBS volumes attached to the DB instance.

Amazon RDS insufficient DB instance capacity

The specific DB instance class isn’t available in the requested Availability Zone. You can try one of the following to solve the problem:

  • Retry the request with a different DB instance class.
  • Retry the request with a different Availability Zone.
  • Retry the request without specifying an explicit Availability Zone.

Maximum MySQL and MariaDB connections

  • The connection limit for a DB instance is set by default to the maximum for the DB instance class. You can limit the number of concurrent connections to any value up to the maximum number of connections allowed.
  • A MariaDB or MySQL DB instance can be placed in incompatible-parameters status for a memory limit when the DB instance is either restarted at least three times in one hour or at least five times in one day, or when the potential memory usage of the DB instance exceeds 1.2 times the memory allocated to its DB instance class. To solve the issue:
    • Adjust the memory parameters in the DB parameter group associated with the DB instance.
    • Restart the DB instance.

Conclusion

This tutorial gave you a glimpse of each component, starting from what a DB instance is, through scaling and Multi-AZ cluster configurations, to other AWS RDS details.

How to create IAM policy to access AWS DynamoDB table

Did you know you can allow an IAM user or a group of IAM users to access an AWS DynamoDB table with a single policy?

In this quick tutorial you will learn how to create an IAM policy to access an AWS DynamoDB table.

Let's get started.

Prerequisites

  • AWS account
  • You should have rights to create the IAM policy.

Creating IAM Policy to Access DynamoDB table

This section will show you the IAM policy which allows users or a group to access the DynamoDB table. Lets go through the code.

  • Version is the policy version, which is fixed.
  • Effect is Allow in each statement, as we want to allow users or groups to be able to list all the DynamoDB tables.
  • There are two statements in the IAM policy:
  • The first statement allows listing and describing all the DynamoDB tables.
  • The second statement allows the specific table MyTable to be accessed by any user or role.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListandDescribe",
            "Effect": "Allow",
            "Action": [
                "dynamodb:List*",
                "dynamodb:DescribeReservedCapacity*",
                "dynamodb:DescribeLimits",
                "dynamodb:DescribeTimeToLive"
            ],
            "Resource": "*",
        },
  {
            "Sid": "SpecificTable",
            "Effect": "Allow",
            "Action": [
                "dynamodb:BatchGet*",
                "dynamodb:DescribeStream",
                "dynamodb:DescribeTable",
                "dynamodb:Get*",
                "dynamodb:Query",
                "dynamodb:Scan",
                "dynamodb:BatchWrite*",
                "dynamodb:CreateTable",
                "dynamodb:Delete*",
                "dynamodb:Update*",
                "dynamodb:PutItem"
            ],
            "Resource": "arn:aws:dynamodb:*:*:table/MyTable"
        }
    ]
}
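
If you prefer to create and attach this policy programmatically rather than in the console, a boto3 sketch could look like the following; the policy name, user name, and file name are assumptions for the example.

import boto3

iam = boto3.client("iam")

# Load the policy document shown above (saved locally as dynamodb-policy.json).
with open("dynamodb-policy.json") as f:
    policy_document = f.read()

# Create a customer managed policy from the document.
response = iam.create_policy(
    PolicyName="MyTableDynamoDBAccess",        # placeholder policy name
    PolicyDocument=policy_document,
)

# Attach the policy to an IAM user (could also be a group or role).
iam.attach_user_policy(
    UserName="myuser",                          # placeholder user name
    PolicyArn=response["Policy"]["Arn"],
)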

Conclusion

This tutorial demonstrated how to create an IAM policy to access an AWS DynamoDB table.

How to create an IAM Policy to Deny AWS Resources outside specific AWS Regions.

Did you know you can restrict an IAM user or a group of IAM users from using multiple services and regions with a single policy?

In this quick tutorial you will learn how to create an IAM Policy to deny access to AWS resources outside specific AWS Regions.

Lets get started.

Prerequisites

  • AWS account

Creating IAM Policy to Deny access to Specific AWS regions

The below policy is useful when you want any of your users or groups to be explicitly denied access to AWS services outside specific AWS Regions.

  • Version is the policy version, which is fixed.
  • Effect is Deny in each statement, as we want to deny users or groups the ability to work on services outside the specified regions.
  • NotAction: we have different actions, such as ListAllMyBuckets to list the buckets, etc. NotAction is the opposite of Action, meaning the Effect is not applied to the listed actions.
  • This policy denies access to any actions outside the Regions specified (eu-central-1, eu-west-1, eu-west-2, eu-west-3), except for actions in the services specified using NotAction, such as CloudFront, IAM, Route 53, and Support. The below policy contains the following attributes.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllOutsideRequestedRegions",
            "Effect": "Deny",
            "NotAction": [
                "cloudfront:*",
                "iam:*",
                "route53:*",
                "support:*"
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": [
                        "eu-central-1",
                        "eu-west-1",
                        "eu-west-2",
                        "eu-west-3"
                    ]
                }
            }
        }
    ]
}

Conclusion

This tutorial demonstrated how to create an IAM Policy to deny access to AWS resources outside specific AWS Regions.

How to Access AWS S3 bucket using S3 policy

Are you struggling to access your AWS S3 bucket? If yes, then this tutorial is for you.

In this quick tutorial you will learn how you can grant read-write access to an Amazon S3 bucket by assigning S3 policy to the role.

Lets get started.

Prerequisites

  • AWS account
  • One AWS Bucket named sagarbucket2023

Creating IAM S3 Policy

The below policy is useful when any of your applications intend to use the AWS S3 bucket, whether for reading data for a website or for storing data, i.e., writing it to the AWS S3 bucket.

The below policy contains the following attributes:

  • Version is the policy version, which is fixed.
  • Effect is Allow in each statement, as we want to allow users or groups to be able to work with AWS S3.
  • Actions: we have different actions, such as ListAllMyBuckets to list the buckets, etc.
  • Resource is my AWS S3 bucket named sagarbucket2023.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::sagarbucket2023"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": ["arn:aws:s3:::sagarbucket2023/*"]
    }
  ]
}
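
Once a role or user with this policy is in place, the granted permissions can be exercised with a small boto3 sketch like the one below; the object key and body are hypothetical, while the bucket name comes from the policy above.

import boto3

s3 = boto3.client("s3")

bucket = "sagarbucket2023"

# Write an object (allowed by s3:PutObject on sagarbucket2023/*).
s3.put_object(Bucket=bucket, Key="data/hello.txt", Body=b"hello from the tutorial")

# List the bucket contents (allowed by s3:ListBucket on the bucket).
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    print(obj["Key"])

# Read the object back (allowed by s3:GetObject).
body = s3.get_object(Bucket=bucket, Key="data/hello.txt")["Body"].read()
print(body.decode())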

Conclusion

This tutorial demonstrated that if you need to read or write data in an AWS S3 bucket, then the policy attached to your IAM user or IAM role should be defined as shown above.

How to Set up a PostgreSQL Database on Amazon RDS

If you are new to AWS RDS or planning to create your first AWS RDS database instance, then you are at the right place to learn about one of the most popular and widely used database engines, PostgreSQL.

In this tutorial you will learn how to set up a PostgreSQL Database on Amazon RDS in the Amazon cloud from scratch and step by step.

Still interested? Lets get into it.



What is Database?

If you want to store all the information of your employees securely and efficiently such as Name, Employee ID, Employee Address, Employee Joining date, Employee benefits, etc then you need a database.

Basic Database diagram

What is AWS RDS?

Amazon Relational Database (AWS RDS) is an Amazon web service that helps in setting up and configuring the relational database in AWS. With AWS RDS you can scale up or down the capacity i.e you can configure different instance sizes, load-balanced, apply fault-tolerant.

AWS RDS also removes tedious management tasks than setting up manually and saving a lot of our time. AWS RDS supports six database engines: Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server.

With AWS you don’t need to rely on buying hardware, backups, scalability, availability, and it’s more secure than hosting your own database. In the below snap AWS RDS service contains RDS instances and Instances further contain RDS databases and database users & finally you connect them using database clients such as pgadmin4.

Connecting AWS RDS PostgreSQL database from pgadmin client

What is PostgreSQL?

PostgreSQL is an open-source relational database system that has the capability to handle heavy workloads, scale systems easily, runs mostly on all Operating systems, and is highly extensible like you can define your own data types, functions. PostgreSQL is one of the most widely used AWS RDS DB engines.

A DB engine is the specific relational database software that runs on your DB instance.

Some of the features of PostgreSQL are listed below:

  • Security
  • Extensibility
  • Text Search
  • Reliable
  • Data Integrity
  • Good Performance

Prerequisites

This tutorial will be step by step and if you would like to follow along, then you must have.

  • Amazon AWS account. If you dont have AWS account create from here.
  • pgAdmin utility to connect to PostgreSQL database instance. To install pgadmin click here.

Creating a PostgreSQL DB instance in AWS RDS

Now that you have a basic idea of what is Postgres database and the benefits of hosting your database on AWS RDS with a database engine like PostgreSQL. Let’s quickly learn how to create a PostgreSQL DB instance in AWS RDS.

  • Sign in to your AWS account, search for AWS RDS in the search box, and click on RDS.
Searching for AWS RDS service in AWS Cloud
  • Now, in the AWS RDS page click on Create database.
Creating database in AWS RDS service
  • Further on Create database page choose database creation method as Standard create , Engine as PostgreSQL and Version as : PostgreSQL 12.5-R1 and select FREE tier from Templates.

The latest version of PostgreSQL is PostgreSQL 14.1-R1

Defining all the parameters to create a AWS RDS database engine
  • Next, provide the database name, master username, master password and keeping all the storage values as default .
Specifying the configuration of the database instance
Defining storage for database instance
  • Further in Connectivity section select the Virtual Private Cloud, Subnet group in which you would like to create the AWS RDS instance, Public access as Yes, and select security group as default.

Make sure to allow 0.0.0.0/0 in the Inbound and Outbound traffic in the default security group and subnet group have route to internet so that you can connect to RDS instance from the database client from your browser or local machine.

Defining network connectivity options in AWS RDS
  • Now in the “Database authentication” choose Password authentication and finally click on Create database. It usually takes few mins for RDS instance to be launched in AWS Cloud.
Specifying the database authentication method
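
The same free-tier style PostgreSQL instance can also be created programmatically; here is a hedged boto3 sketch, where the identifier, credentials, and instance class are example values matching the spirit of the console walkthrough rather than its exact settings.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a small, publicly accessible PostgreSQL instance with password authentication.
rds.create_db_instance(
    DBInstanceIdentifier="mydb",            # instance name used in this tutorial
    Engine="postgres",
    EngineVersion="12.5",                   # version selected in the console steps
    DBInstanceClass="db.t3.micro",          # free-tier style class (assumption)
    AllocatedStorage=20,                    # GiB
    MasterUsername="postgres",              # placeholder master username
    MasterUserPassword="ChangeMe123!",      # placeholder password
    PubliclyAccessible=True,
)

# Wait until the instance is available before connecting with pgAdmin.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="mydb")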

Verifying AWS RDS Postgres database instance in AWS Cloud

Now that you have created the AWS RDS Postgres database instance in AWS Cloud, which is great but unless you verify in Amazon Management console you cannot be sure enough. Lets navigate to AWS console and verify the Postgres instance in AWS RDS service.

As you can see the specified mydb instance has been created successfully in AWS RDS.

Verifying AWS RDS Postgres database instance in AWS Cloud

Connecting to a DB instance running the PostgreSQL database engine

Now that you have verified the DB instance running the PostgresSQL in AWS cloud, its time to connect using pgAdmin client from your machine. To connect

  • Open pgAdmin on your machine and click on Create and further Server.
Connecting to PostgreSQL database instance from pgadmin
  • In the Create - Server page, under the General tab, set the name as “myrds”. Next, navigate to the Connection tab and provide all the details, such as Host (i.e., the endpoint URL of your database instance), Port, username, and password, as shown below.
Defining Name of database to connect
Defining connection details of the PostgreSQL database instance
  • After you provide all the details and click on the Save button, the newly created database server will be visible under Servers as shown below.
checking the database instance
  • Finally under myrds database instance create a database by right clicking on Databases and select Create ➔ Database and provide the name of the database you wish to create.
Creating database instance AWS RDS database instance
  • As you can see below the testing database is created successfully. 
Viewing the newly launched database in AWS RDS database instance

Conclusion

In this tutorial you learned about one of the most widely used AWS RDS database engines, Postgres, and how to create it in the Amazon Management Console.

So what do you plan to store in this newly created database instance?

How to Install AWS CLI Version 2 and Setup AWS credentials

Are you new to AWS Cloud or tired of managing your AWS Cloud infrastructure using manual steps back and forth? If yes, you should consider installing the AWS Command Line Interface (AWS CLI) and managing your infrastructure with it.

In this tutorial, you will learn how to install AWS CLI Version 2 and set up AWS credentials in the AWS CLI tool.

Let’s dive into it.


Table of Content

  1. What is AWS CLI?
  2. Installing AWS CLI Version 2 on windows machine
  3. Creating an IAM user in AWS account with programmatic access
  4. Configure AWS credentials using aws configure
  5. Verify aws configure from AWS CLI by running a simple commands
  6. Configuring AWS credentials using Named profile.
  7. Verify Named profile from AWS CLI by running a simple commands.
  8. Configuring AWS credentials using environment variable
  9. Conclusion

What is AWS CLI?

AWS CLI enables you to interact and provides direct access to the public APIs of AWS services of various AWS accounts using the command-line shells from your local environment or remotely.

You can control multiple AWS services from the AWS CLI and automate them through scripts. You can run AWS CLI commands from a Linux shell such as bash, zsh, tcsh, and from a Windows machine, you can use command prompt or PowerShell to execute AWS CLI commands.

The AWS CLI is available in two versions, and the installation is exactly the same for both versions but in this tutorial, let’s learn how to install AWS CLI version 2.

Installing AWS CLI Version 2 on windows machine

Now that you have a basic idea about AWS CLI and connecting to AWS services using various command prompt and shells. Further in this section, let’s learn how to install AWS CLI Version 2 on a windows machine.

  • First open your favorite browser and download the AWS CLI on windows machine from here
Downloading AWS CLI Interface v2
  • Next, select the I accept the terms and Licence Agreement and then click on the next button.
Accepting the terms in the Licence Agreement of AWS CLI
  • Further, on Custom setup page provide the location of installation path and then click on Next button.
Setting the download location of AWS CLI
  • Now, click on the Install button to install AWS CLI version 2.
Installing the AWS CLI on a Windows machine
  • Finally click on Finish button as shown below.
Finishing the Installation of the AWS CLI on Windows machine
  • Verify the AWS version by going to command prompt and run the below command.
aws --version

As you can see below, the AWS CLI version 2 is successfully installed on a windows machine.

Checking the AWS CLI version

Creating an IAM user in AWS account with programmatic access

There are two ways to connect to an AWS account, the first is providing a username and password on the AWS login page and another is configuring the Access key ID and secret keys of IAM users in AWS CLI to connect programmatically.

Earlier, you installed AWS CLI successfully on a Windows machine, but you will need an IAM user with programmatic access to run commands from it.

Let’s learn how to create an IAM user in an AWS account with programmatic access, Access key ID, and secret keys.

  1. Open your favorite web browser and navigate to the AWS Management Console and log in.
  2. While in the Console, click on the search bar at the top, search for ‘IAM’, and click on the IAM menu item.
Checking the IAM AWS service
  3. To create a user, click on Users → Add user, provide the name of the user myuser, and make sure to tick the Programmatic access checkbox in Access type, which enables an access key ID and secret access key; then hit the Permissions button.
Adding the IAM user in AWS CLoud
  4. Now select the “Attach existing policies directly” option in the set permissions and look for the “Administrator” policy using filter policies in the search box. This policy will allow myuser to have full access to AWS services.
Attaching the admin rights to IAM users in AWS CLoud
  5. Finally click on Create user.
  6. Now the user is created successfully and you will see an option to download a .csv file. Download this file; it contains the IAM user’s (myuser) Access key ID and Secret access key, which you will use later in the tutorial to connect to AWS services from your local machine.
Downloading the AWS credentials of IAM user
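
The same IAM user with programmatic access can be created from code as well; below is a hedged boto3 sketch using the AdministratorAccess managed policy (the user name matches the tutorial, the rest is illustrative).

import boto3

iam = boto3.client("iam")

# Create the user and give it full administrator access, as in the console steps.
iam.create_user(UserName="myuser")
iam.attach_user_policy(
    UserName="myuser",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)

# Generate programmatic credentials (equivalent to the downloaded .csv file).
keys = iam.create_access_key(UserName="myuser")["AccessKey"]
print("Access key ID:", keys["AccessKeyId"])
print("Secret access key:", keys["SecretAccessKey"])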

Configure AWS credentials using aws configure in AWS CLI

You are an IAM user with Access key ID and secret keys, but AWS CLI cannot perform anything unless you configure AWS credentials. Once you configure the credentials, AWS CLI allows you to connect to the AWS account and execute commands.

  • Configure AWS Credentials by running the aws configure command on command prompt.
aws configure
  • Enter the details such as AWS Access key ID, Secret Access Key, region. You can skip the output format as default or text or json .
Configure AWS CLI using aws configure command
  • Once AWS is configured successfully , verify by navigating to C:\Users\YOUR_USER\.aws  and see if two file credentials and config are present.
Checking the credentials file and config on your machine
  • Now open both the files and verify and you can see below you’re AWS credentials are configured successfully using aws configure.
Checking the config file on your machine

Verify aws configure from AWS CLI by running a simple commands

Now, you can test if AWS Access key ID, Secret Access Key, region you configured in AWS CLI is working fine by going to command prompt and running the following commands.

aws ec2 describe-instances
Describing the AWS EC2 instances using AWS CLI
  • You can also verify the AWS CLI by listing the buckets in your account by running the below command.
aws s3 ls
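
Another quick, side-effect-free way to confirm the configured credentials is from Python; this uses boto3 rather than the CLI and is an addition to the tutorial's steps.

import boto3

# Returns the account ID, user ID and ARN of the configured credentials.
identity = boto3.client("sts").get_caller_identity()
print(identity["Account"], identity["Arn"])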

Configuring AWS credentials using Named profile.

Another method to configure AWS credentials that are mostly used is configuring the Named profile. A named profile is a collection of settings and credentials that you can apply to an AWS CLI command. When you specify a profile to run a command, the settings and credentials are used to run that command. Let’s learn how to store named profiles.

  1. Open the credentials file which was created earlier by aws configure; if it doesn't exist, create a file named credentials in the C:\Users\your_profile\.aws directory of your Windows machine.
  2. Add all the Access key IDs and Secret access keys into the credentials file in the below format and save it. Defining named profiles allows you to connect to different AWS accounts easily and avoids confusion while connecting to specific AWS accounts.
Creating the Named Profile on your machine
  3. Similarly, create another file config in the C:\Users\your_profile\.aws directory.
  4. Next, add the “region” into the config file and make sure to add the name of the profile which you provided in the credentials file, and save the file. This file allows you to work with a specific region.
  • For Linux and Mac machines, the location of the credentials and config files is ~/.aws/credentials and ~/.aws/config.
  • For Windows machines, the locations are %USERPROFILE%\.aws\credentials and %USERPROFILE%\.aws\config respectively.
Creating the Named Profile config file on your machine

Verifying Named profile from AWS CLI

Previously you configured the Named profile on your machine, but let’s verify the Named profile from AWS CLI by running a simple command. Let’s open the command prompt and run the below command to verify the sandbox profile that you created earlier.

aws ec2 describe-instances --profile sandbox

As you can see below, the instance is described properly using the command with Named profile shows Named profile is configured successfully.

Verifying the Named profile in AWS CLI

Configuring AWS credentials using the environment variable

Finally, the last Configuring AWS credentials using the environment variables works well. Let’s check out quickly.

  • Open the command prompt and set the AWS access key and secret key as environment variables using the set command (the variables are named AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, and optionally AWS_DEFAULT_REGION). The environment variables don't change value until the end of the current command prompt session, or until you set the variable to a different value.
Configuring AWS credentials using the environment variable

AWS CLI Error (ImportError: cannot import name ‘docevents’ from ‘botocore.docs.bcdoc’) and Solution

If you face any issues in AWS CLI related to python or any file then use the below command.

 pip3 install --upgrade awscli

Conclusion

In this tutorial, you learned What is AWS CLI, how to install AWS CLI version 2, and various methods that allow you to configure AWS credentials and then work with AWS CLI.

So which method are you going to use while using AWS CLI to connect and manage AWS infrastructure?

What is AWS WAF (Web Application Firewall) and how to Setup WAF in AWS account.

Are you sure if your applications or website are fully secure and protected? If not, you are at the right place to learn about Amazon web service Web Application Firewall (AWS WAF) that protects your web applications from common web exploits in the best effective way.

AWS WAF allows you to monitor all the HTTP(S) requests that are forwarded to an Amazon CloudFront distribution, Amazon API Gateway REST API, an Application Load Balancer, and takes actions accordingly.

This tutorial will teach what AWS WAF (Web Application Firewall) is and how to set up WAF in an AWS account. Let’s dive in and get started.


Table of Content

  1. What is Amazon web service Web Application Firewall (AWS WAF) ?
  2. Benefits of AWS WAF
  3. Components of AWS WAF
  4. AWS WAF Web ACL (Web Access Control List)
  5. AWS WAF rules
  6. AWS Managed Rules rule group
  7. IP sets and regex pattern sets
  8. Prerequisites
  9. How to create AWS WAF (Web Application Firewall) and AWS WAF rules
  10. Conclusion

What is Amazon web service Web Application Firewall (AWS WAF) ?

AWS WAF allows you to monitor all the HTTP or HTTPS requests forwarded to Amazon Cloud Front, Amazon Load balancer, Amazon API Gateway REST API, etc., from users. AWS WAF controls who can access the required content or data based on specific conditions such as source IP address etc., and protects your applications from common web exploits.

Benefits of AWS WAF

  • AWS WAF is helpful when you want Amazon Cloud Front, Amazon Load balancer, Amazon API Gateway REST to provide the content or serve content to particular users or block particular users.
  • AWS WAF allows you to count the requests that match properties specified without allowing or blocking those requests
  • AWS WAF protects you from web attacks using conditions you specify and also provides real time metrics and details of web requests.
AWS WAF architecture and working

Components of AWS WAF

AWS WAF service contains some important components; let’s discuss each of them now.

AWS WAF Web ACL (Web Access Control List)

AWS WAF Web ACL allows protecting a set of AWS Resources. After you create a web ACL, you need to add AWS WAF rules inside it.

AWS WAF rules define specific conditions applied to web requests coming from users and how to handle these web requests. You also set default action in web ACL to allow or block requests that pass these rules.

AWS WAF rules

AWS WAF rules contain statements that define the criteria, and if the criteria are matched, then the web requests are allowed; else, they are blocked. The rule is based on IP addresses or address ranges, country or geographical location, strings that appear in the request, etc.

AWS Managed Rules rule group

You can use rules individually or in reusable rule groups. There are two types of rules: AWS Managed rule groups and managing your own rule groups.

IP sets and regex pattern sets

AWS WAF stores complex information in sets you use by referencing them in your rules.

  • An IP set is a group of IP addresses and IP address ranges of AWS resources that you want to use together in a rule statement.
  • A regex pattern set provides a collection of regular expressions that you want to use together in a rule statement. Regex pattern sets are AWS resources.

Prerequisites

  • You must have AWS account in order to setup AWS WAF. If you don’t have AWS account, create a AWS account from here AWS account.
  • IAM user with Administrator rights and setup credentials using AWS CLI or using AWS Profile.

How to create AWS WAF (Web Application Firewall) and AWS WAF rules

Now that you have a basic idea of AWS WAF and the components of AWS WAF. To work with AWS WAF, the first thing you need to create is Web Access Control List (ACL) and further add the WAF rules ( individual rules or groups of rules ) such as blocking or allowing web requests.

In this section, let’s learn how to create and set up AWS WAF and create a Web ACL.

  • To create Web ACL open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the console, click on the search bar at the top, search for WAF, and click on the WAF menu item.
Searching for AWS WAF
  • Now click on the Create web ACL button as shown below.
Creating a Web ACL
  • Next provide the Name, cloud Watch metric name of your choice and choose Resource type as CloudFront distributions.

This tutorial already had one CloudFront Distribution in place which will be used If you need to create the cloud Distribution follow here

Cloud Distribution in AWS account
  • Next, Click on Add AWS Resources and select the CloudFront distribution and hit NEXT.
Selecting the CloudFront distribution in AWS WAF
  • Further In Add rules and rule groups section choose Add my own rules and rule groups and provide the values as shown below.
    • Name as myrule123
    • Type as Regular Rule
    • Inspect as Header
    • Header field as User-Agent
    • if a request matches the statement
Adding rules and rule groups in AWS WAF
Defining the values of AWS WAF rules and rule groups
  • While building the rules there are 3 types of Rule Actions options available such as
    • Count: AWS WAF counts the request but doesn’t determine whether to allow it or block it
    • Allow: AWS WAF allows the request to be forwarded to the protected AWS resource
    • Block: AWS WAF blocks the request and sends back to the client.
  • Choose Count as the rule action.
Choosing the rule action

You can instruct AWS WAF to insert custom headers into the original HTTP request for rule actions or web ACL default actions that are set to allow or count.

  • Finally hit the next button till end and then Create Web ACL.
Creating the Web ACL
  • The rules you added previously are your own rules, but at times you need to add AWS Managed Rules; to do that, select AWS Managed rules.
Adding AWS WAF Managed rules
  • Now the AWS web ACL should look like the one shown below, with both managed rules and your own AWS WAF rules.
Viewing the AWS WAF with both managed and your own created AWS WAF rules
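
For completeness, the same web ACL and header-inspection rule can be sketched with boto3's wafv2 client; this is a rough outline under assumed names (note that CloudFront-scoped web ACLs must be created from us-east-1), not the exact settings used in the console above.

import boto3

# CloudFront-scoped web ACLs must be managed from the us-east-1 Region.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="mywebacl",                                   # placeholder web ACL name
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "mywebacl-metric",
    },
    Rules=[{
        "Name": "myrule123",                           # rule from the walkthrough
        "Priority": 0,
        "Statement": {
            "ByteMatchStatement": {                    # inspect the User-Agent header
                "FieldToMatch": {"SingleHeader": {"Name": "user-agent"}},
                "SearchString": b"curl",               # placeholder match value
                "PositionalConstraint": "CONTAINS",
                "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
            }
        },
        "Action": {"Count": {}},                       # Count action, as chosen above
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "myrule123-metric",
        },
    }],
)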

Conclusion

In this tutorial, you learned AWS WAF service, WAF components such as AWS Web ACL, the WAF rules, and applied to WAF web ACL.

You also learned how to apply AWS WAF web ACL on CloudFront to protect your websites from getting exploited from attacks.

So now, which applications and websites do you plan to protect next using AWS WAF?

What is AWS CloudFront and how to Setup Amazon CloudFront with AWS S3 and ALB Distributions

Internet users are always impressed with websites’ high speed & loading capacities. Why not have a website that loads the content quickly and delivers fast with AWS Cloudfront?

In this tutorial, you learn What AWS CloudFront is and how to set up Amazon CloudFront with AWS S3 and ALB Distributions which enables users to retrieve content quickly by utilizing the concept of caching.

Let’s get started.


Table of Content

  1. What is AWS Cloudfront?
  2. How AWS Cloudfront delivers content to your users
  3. Amazon Cloudfront caching with regional edge caches
  4. Prerequisites
  5. Creating an IAM user in AWS account with programmatic access
  6. Configuring the IAM user Credentials on local Machine
  7. How to Set up AWS CloudFront
  8. How to Use Custom URLs in AWS CloudFront by Adding alternate Domain Names (CNAMEs)
  9. Using Amazon EC2 as the Origins in the AWS CloudFront
  10. Conclusion

What is AWS Cloudfront?

AWS Cloudfront is an Amazon web service that speeds up the distribution of static and dynamic content such as .html, .css, .js, images, live streaming of video to users. Cloudfront delivers the content quickly using edge locations when the request is requested by users.

If the content is not available in edge locations, Cloudfront requests from the origin configured such as AWS S3 bucket, HTTP server or Load Balancer, etc. Also, the use of Lambda at edge location with CloudFront adds more ways to customize CloudFront.

How AWS Cloudfront delivers content to your users

Now that you have a basic idea of CloudFront knowing how AWS Cloudfront delivers content to users is also important.

Initially, when users request a website or application such as example.com/mypage.html, the DNS server routes the request to AWS Cloudfront edge locations.

Next CloudFront checks if the request can be fulfilled with edge location; else, CloudFront queries to the origin server. The Origin server sends the files back to the edge location, and further Cloudfront sends them back to the user.

AWS Cloudfront architecture

Amazon Cloudfront caching with regional edge caches

Delivering the content from the edge location is fine. Still, if you to further improve the performance and latency of content, there is a further caching mechanism based on region, known as regional edge cache.

Regional edge caches help with all types of content, particularly content that becomes less popular over time, such as user-generated content, videos, photos, e-commerce assets such as product photos and videos, etc.

Regional edge cache sits in between the origin server and edge locations. The Edge location stores the content and cache, but when the content is too old it removes it from its cache and forwards it to the regional cache, which has wide coverage to store lots of content.

Regional edge cache

Prerequisites

  • You must have AWS account in order to setup AWS CloudFront. If you don’t have AWS account, please create a account from here AWS account.
  • AWS S3 bucket created.

Creating an IAM user in AWS account with programmatic access

To connect to AWS Service, you should have an IAM user with an Access key ID and secret keys in the AWS account that you will configure on your local machine to connect to AWS account from your local machine.

There are two ways to connect to an AWS account, the first is providing a username and password on the AWS login page on the browser, and the other way is to configure Access key ID and secret keys on your machine and then use command-line tools to connect programmatically.

  1. Open your favorite web browser and navigate to the AWS Management Console and log in.
  2. While in the Console, click on the search bar at the top, search for ‘IAM’, and click on the IAM menu item.
Opening the IAM service in AWS cloud
  3. To create a user, click on Users → Add user, provide the name of the user myuser, and make sure to tick the Programmatic access checkbox in Access type, which enables an access key ID and secret access key; then hit the Permissions button.
Adding the AWS IAM user with Programmatic access
  4. Now select the “Attach existing policies directly” option in the set permissions and look for the “Administrator” policy using filter policies in the search box. This policy will allow myuser to have full access to AWS services.
Granting the Administrator Access to the IAM user
  5. Finally click on Create user.
  6. Now the user is created successfully and you will see an option to download a .csv file. Download this file; it contains the IAM user’s (myuser) Access key ID and Secret access key, which you will use later in the tutorial to connect to AWS services from your local machine.
Downloading the AWS IAM user with programmatic access that is access key and secret key

Configuring the IAM user Credentials on local Machine

Now, you have an IAM user myuser created. The next step is to set the download myuser credentials on the local machine, which you will use to connect AWS service via API calls.

  1. Create a new file, C:\Users\your_profile\.aws\credentials on your local machine.
  2. Next, Enter the Access key ID and Secret access key from the downloaded csv file into the credentials file in the same format and save the file.
[default]     # Profile Name
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = vIaGXXXXXXXXXXXXXXXXXXXX

credentials files help you to set your profile. By this way, it helps you to create multiple profiles and avoid confusion while connecting to specific AWS accounts.

  3. Similarly, create another file C:\Users\your_profile\.aws\config in the same directory.
  4. Next, add the “region” into the config file and make sure to add the name of the profile which you provided in the credentials file, and save the file. This file allows you to work with a specific region.
[default]   # Profile Name
region = us-east-2
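Before moving on, you can optionally confirm that the profile works. The two commands below are a quick sanity check (assuming the AWS CLI is already installed); the account ID and user ARN in the output should match the myuser IAM user you created.
aws configure list            # Shows which profile, access key, and region the CLI picked up
aws sts get-caller-identity   # Prints the account ID and ARN of the IAM user if the keys are valid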

How to Set up AWS CloudFront

Now that you know what AWS CloudFront is and you have an IAM user, let's set up AWS CloudFront in the AWS cloud.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘CloudFront’, and click on the CloudFront menu item.
Searching for AWS Cloudfront in AWS Cloud
  • Click on Create distribution and then Get Started.
Creating the AWS Cloudfront distribution
  • Now, in the Origin settings, provide the AWS S3 bucket name and keep the other values at their defaults.
Aligning the AWS S3 bucket in the AWS Cloudfront in AWS Cloud
  • For the settings under Default Cache Behavior Set and Distribution Settings, accept the default values and then click on Create distribution.
AWS S3 bucket setup in AWS Cloudfront
AWS Cloudfront distribution
  • Now upload an index.html file containing the text hello to the AWS S3 bucket and grant it public access as shown below.
Uploading the file in AWS S3 bucket
Granting permissions to the file in the AWS S3 bucket
  • Now check the Amazon S3 URL to verify that your content is publicly accessible
Checking the content of file of AWS S3 bucket using the AWS S3 URL
  • Finally, check the CloudFront URL by hitting domain-name/index.html; it should return the same content as your index.html file.
domain-name/index.html
Checking the content of file of AWS S3 bucket using the Cloudfront URL
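If you prefer the terminal to the browser for this check, a quick way to compare the two URLs is with curl; the bucket and distribution domain names below are placeholders for your own values.
# Fetch the object directly from S3 (replace the bucket name with yours)
curl https://your-bucket-name.s3.amazonaws.com/index.html

# Fetch the same object through CloudFront; the response body should be identical ("hello")
curl https://dsx78lsseoju7.cloudfront.net/index.html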

How to Use Custom URLs in AWS CloudFront by Adding alternate Domain Names (CNAMEs)

Previously, the CloudFront URL was generated with the default *.cloudfront.net domain name, but in production it is often important to configure your own domain name (a CNAME), such as abc.com, in the URL. Let's learn how to use custom URLs in AWS CloudFront by adding alternate domain names (CNAMEs).

Earlier, the default URL of the AWS CloudFront distribution was http://dsx78lsseoju7.cloudfront.net/index.html, but if you wish to use an alternate domain such as http://abc.com/index.html, follow the steps below:

  • Navigate back to the CloudFront page, look for the distribution where you need to add the domain, and click on Edit.
Updating the custom URL in AWS Cloudfront
  • Here, provide the domain name that you wish to configure, along with a valid SSL certificate.
Updating the CNAME and SSL certificate in AWS Cloudfront
  • Now the domain name is successfully updated in CloudFront, but for the URL to work you will need to configure a few things in the Route53 AWS service, such as an alias record set. To do that, navigate to the Route53 page by searching at the top of the AWS page.
Opening the AWS Route53 service
  • Click on the Hosted Zone and then click on Create Record
Opening the Hosted zone to create a record
  • Now provide the record name and record type, and set Route traffic to the CloudFront distribution. After you configure Route53, verify the index page ( http://mydomain.abc.com/index.html ) and it should work fine.
Creating the record in Route53 to route new domain to CloudFront
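If you would rather script this step, the same alias record can be created with the AWS CLI. The sketch below is illustrative only: replace the hosted zone ID, record name, and DNSName with your own values (Z2FDTNDATAQYW2 is the fixed hosted zone ID that AWS documents for CloudFront alias targets).
# Creates an A-record alias for mydomain.abc.com pointing at the CloudFront distribution
aws route53 change-resource-record-sets \
    --hosted-zone-id ZXXXXXXXXXXXXX \
    --change-batch '{
      "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
          "Name": "mydomain.abc.com",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "Z2FDTNDATAQYW2",
            "DNSName": "dsx78lsseoju7.cloudfront.net",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'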

Using Amazon EC2 as the Origins in the AWS CloudFront

A custom origin can be an Amazon Elastic Compute Cloud (AWS EC2) instance, for example an HTTP server. You need to provide the DNS name of the AWS EC2 instance as the custom origin (a minimal CLI sketch follows the guidelines below), but while setting AWS EC2 as the custom origin, make sure to follow some basic guidelines:

  • Host the same content on all servers and keep their clocks synchronized.
  • Restrict access requests to the HTTP and HTTPS ports that your custom origin (the AWS EC2 instance) listens on.
  • Use an Elastic Load Balancing load balancer to handle traffic across multiple Amazon EC2 instances, and when you create your CloudFront distribution, specify the URL of the load balancer as the domain name of your origin server.
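As a rough, hedged sketch, the AWS CLI can also create a distribution with a custom origin in one call; the DNS name below is a placeholder for your EC2 instance's public DNS name (or, preferably, your load balancer's DNS name).
# Points CloudFront at a custom origin (an EC2 public DNS name or an ELB DNS name)
aws cloudfront create-distribution \
    --origin-domain-name ec2-203-0-113-25.us-east-2.compute.amazonaws.com \
    --default-root-object index.html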

Conclusion

This tutorial taught you what CloudFront is and how to set up CloudFront distributions in the Amazon cloud. The benefit of using CloudFront is that it allows users to retrieve content quickly by utilizing caching.

So next, what are you going to manage with CloudFront?

How to Launch AWS Redshift Cluster using AWS Management Console in Amazon account.

Do you have huge amounts of data to analyze, such as data about the performance of your applications? If yes, you are in the right place to learn about AWS Redshift, one of the most widely used AWS services for analyzing data.

The AWS Redshift service allows you to store terabytes of data and run analytical queries against that data.

In this tutorial, you will learn about Amazon’s data warehouse and analytic service, AWS Redshift, and how to create an AWS Redshift cluster using the AWS Management console.

Let’s get started.

Table of Content

  1. What is AWS Redshift?
  2. AWS Redshift Cluster
  3. Prerequisites
  4. Creating AWS IAM role for AWS Redshift Cluster
  5. How to Create AWS Redshift Cluster using AWS Management console
  6. Conclusion

What is AWS Redshift?

AWS Redshift is an AWS analytical service that allows you to store huge amounts of data and run analytical queries against it. It is a fully managed service, so you don’t need to worry about scalability and infrastructure.

To load data into an AWS Redshift cluster, you first create a set of nodes, and then you can start analyzing the data. AWS Redshift manages everything for you at the infrastructure end, such as monitoring, scaling, applying patches and upgrades, and capacity.

AWS Redshift Cluster

An AWS Redshift cluster contains a single node or multiple nodes, depending on the requirements. A multi-node cluster contains one leader node, and the other nodes are known as compute nodes.

You can create an AWS Redshift cluster in various ways, such as with the AWS Command Line Interface (AWS CLI), the AWS Management Console, or the AWS SDK (Software Development Kit) libraries.
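For reference, the CLI route looks roughly like the sketch below; the identifier, node type, credentials, and role ARN are example values, and the console walk-through later in this tutorial achieves the same result.
# Creates a single-node cluster; replace the password and IAM role ARN with your own values
aws redshift create-cluster \
    --cluster-identifier redshift-cluster-1 \
    --node-type dc2.large \
    --cluster-type single-node \
    --master-username awsuser \
    --master-user-password 'YourStrongPassword1' \
    --iam-roles arn:aws:iam::123456789012:role/myRedshiftRole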

  • AWS Redshift cluster snapshots can be created either manually or automatically and are stored in an AWS S3 bucket.
  • AWS CloudWatch is used to capture the health and performance of the AWS Redshift cluster.
  • As soon as you create an Amazon Redshift cluster, one database is also created. This database is used to query and analyze the data. While you provision the cluster, you need to provide a master user, which is the superuser for the database and has all rights.
  • When a client queries the Redshift cluster, all requests are received by the leader node, which parses them and develops query execution plans. The leader node coordinates with the compute nodes and then returns the final results to the client.
AWS Redshift Cluster architecture diagram
AWS Redshift Cluster architecture diagram

Prerequisites

  • You must have an AWS account in order to set up an AWS Redshift cluster. If you don’t have one, please create an account from here: AWS account.
  • It will be great if you have admin rights on the AWS cloud; otherwise, you must at least have permissions to create an IAM role and an AWS Redshift cluster.

Creating AWS IAM role for AWS Redshift Cluster

Before creating an AWS Redshift cluster, let’s create an IAM role that Redshift will assume to work with other services such as AWS S3, etc. Let’s quickly dive in and create an IAM role.

  • Open your browser, go to the AWS Management console, search for IAM at the top, and then click on Roles.
Viewing the IAM Dashboard
  • Next click on Create Role to create a new IAM role.
Creating the IAM role
  • Now select Redshift as the AWS service, as highlighted below.
Creating IAM role and assigning permissions
  • Further, scroll down to the bottom and you will see “Select your use case”; here choose Redshift – Customizable, then choose Next: Permissions. This allows AWS Redshift to connect to other AWS services such as AWS S3.
Customizing the AWS IAM role for AWS Redshift
  • Now attach the AmazonS3ReadOnlyAccess policy and click Next. This policy allows AWS Redshift to access the AWS S3 bucket where you will store the data.
Attaching AWS S3 policy to an IAM role in AWS Cloud
  • Next, skip tagging for now: just click on Next: Tags, then Review, and finally click on Create Role.
Creating AWS Redshift role

The IAM role is created successfully; keep the IAM role ARN handy, as you will use it in the next section.

Checking the newly created IAM role for AWS Redshift

How to Create AWS Redshift Cluster using AWS Management console

Now that you have an IAM role successfully created for the AWS Redshift cluster, let’s move on and learn how to create an AWS Redshift Cluster using the AWS Management console.

  • On the AWS Management console search for Redshift on the top of the page.
Navigating to AWS Redshift cluster Page
  • Next, click on Create cluster, select the free trial option, and provide the name of the cluster as redshift-cluster-1.
Specifying the AWS Redshift cluster configurations
  • Further, provide the database details such as the admin username and password, and save them for future use. Also, associate the IAM role that you created in the previous section.
Configure database details in the AWS Redshift Cluster
  • Finally, click on Create cluster.
Configure network settings in the AWS Redshift Cluster

The AWS Redshift cluster is created successfully and available for use.

AWS Redshift cluster created successfully
  • Let’s validate the database connection by running a simple query. Click on Query data.
Querying the database connection
  • Provide the database credentials for connecting to AWS Redshift cluster.
    • Note: the dev database was created by default in the AWS Redshift cluster.
Providing the database connection details in the AWS Redshift cluster
  • Now run the query below. The query will execute successfully because some tables, such as events and date, are already created by default inside the database.
select * from date

AWS Redshift Cluster is created successfully, and the queries are successfully executed in the database.

Execution of query on AWS Redshift clusters database
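If you would rather not use the query editor, roughly the same check can be done from the CLI through the Redshift Data API; the sketch below assumes the default dev database and the admin username you chose when creating the cluster.
# Submit the query; the response returns a statement Id
aws redshift-data execute-statement \
    --cluster-identifier redshift-cluster-1 \
    --database dev \
    --db-user awsuser \
    --sql "select * from date"

# Fetch the result set once the statement has finished (replace <statement-id> with the Id returned above)
aws redshift-data get-statement-result --id <statement-id>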

Conclusion

In this tutorial, you learned about Amazon’s data warehouse and analytics service, AWS Redshift, what an AWS Redshift cluster is, and how to create an AWS Redshift cluster using the AWS Management console.

Now that you have the newly launched AWS Redshift, what do you plan to store and analyze?

How to Start and Stop AWS EC2 instance in AWS account using Shell script

Are you spending unnecessary money in AWS Cloud by keeping unused AWS EC2 instances in running states? Why not stop the AWS EC2 instance and only start when required by running a single Shell Script?

Multiple AWS accounts contain dozens of AWS EC2 instances that require some form of automation to stop or start these instances, and to achieve this, nothing could be better than running a shell script.

In this tutorial, you will learn step by step how to Start and Stop AWS EC2 instance in AWS account using Shell script.

Still interested? Let’s dive in!

Table of Content

  1. What is Shell Scripting or Bash Scripting?
  2. What is AWS EC2 instance?
  3. Prerequisites
  4. Building a shell script to start and stop AWS EC2 instance
  5. Executing the Shell Script to Stop AWS EC2 instance
  6. Verifying the Stopped AWS EC2 instance
  7. Executing the Shell Script to Start AWS EC2 instance
  8. Verifying the Running AWS EC2 instance
  9. Conclusion

What is Shell Scripting or Bash Scripting?

Shell Script is a text file containing lists of commands executed on the terminal or shell in one go in sequential order. Shell Script performs various important tasks such as file manipulation, printing text, program execution.

Shell script includes various environmental variables, comments, conditions, pipe commands, functions, etc., to make it more dynamic.

When you execute a shell script or function, a command interpreter goes through the ASCII text line-by-line, loop-by-loop, test-by-test, and executes each statement as each line is reached from top to bottom.

What is AWS EC2 instance?

AWS EC2 stands for Amazon Web Services Elastic Compute Cloud. AWS EC2 is simply a virtual server that can be launched quickly, and you don’t need to worry about the hardware. After the AWS EC2 instance is launched, you can deploy highly scalable and available applications on it.

There are some important components of an AWS EC2 instance, such as:

AWS EC2 AMI

  • AWS EC2 provides preconfigured templates known as AMIs (Amazon Machine Images) that include an operating system and commonly required software configurations. Using these preconfigured templates, you can launch as many AWS EC2 instances as you need.

You can add your own software and data on top of a preconfigured template when you launch an instance.
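For example, launching an instance from an AMI is a single CLI call; the AMI ID, key pair, and security group below are placeholders you would replace with values from your own account and region.
# Launch one t2.micro instance from a chosen AMI
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t2.micro \
    --key-name MyKeyPair \
    --security-group-ids sg-0123456789abcdef0 \
    --count 1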

Amazon Machine Image template

AWS EC2 instance type

AWS EC2 contains various AWS EC2 instance types with different CPU and memory configurations such as t2.micro, t2.medium, etc.

AWS EC2 instance type

Amazon EC2 key pairs

AWS EC2 allows you to log in to launched instances securely by creating a key pair, where the public key remains within the AWS account and the private key remains with the owner of the instance.
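As an illustration, a key pair can be created and its private key saved locally with one CLI command; the key name is just an example.
# Creates a key pair named MyKeyPair and writes the private key to a local .pem file
aws ec2 create-key-pair \
    --key-name MyKeyPair \
    --query 'KeyMaterial' \
    --output text > MyKeyPair.pem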

AWS EC2 EBS Storage

AWS EC2 allows you to add two kinds of storage: EC2 instance store volumes, which are temporary storage, and Elastic Block Store (AWS EBS), which is persistent storage.

An AWS EC2 instance is launched with a root device volume (either an instance store volume or an AWS EBS volume) that allows you to boot the machine.

AWS EC2 EBS Storage

AWS EC2 instance state

An AWS EC2 instance moves through various states such as pending, running, stopping, stopped, and terminated. Once an instance is terminated, it cannot be restarted.

AWS EC2 instance state

Prerequisites

  1. An AWS account to create the EC2 instance. If you don’t have an AWS account, please create one from here: AWS account.
  2. Windows 7 or a later edition, where you will execute the shell script.
  3. AWS CLI installed. To install the AWS CLI click here.
  4. Git bash. To install Git bash click here.
  5. A code editor for writing the shell script on the windows machine, such as Visual Studio Code. To install Visual Studio Code click here.

Building a shell script to start and stop AWS EC2 instance

Now that you have a good idea about AWS EC2 instances and shell scripts, let's learn how to build a shell script to start and stop AWS EC2 instances.

  • Create a folder on your windows machine at any location. Then, under the same folder, create a file named start-stop-ec2.sh and copy/paste the code below.
#!/usr/bin/bash

set -e  # set -e stops the execution of a script if a command or pipeline has an error

id=$1   # Provide the instance ID with the name of the script

# Checking if Instance ID provided is correct 

function check_ec2_instance_id () {
    
    if echo "$1" | grep -E '^i-[a-zA-Z0-9]{8,}' > /dev/null; then 
           echo "Correct Instance ID provided , thank you"
           return 0
    else 
          echo "Opps !! Incorrect Instance ID provided !!"
          return 1
    fi
}

# Function to Start the instance 

function ec2_start_instance ()   {
     aws ec2 start-instances --instance-ids $1 
}

# Function to Stop the instance 

function ec2_stop_instance ()   {
     aws ec2 stop-instances --instance-ids $1 
}

# Function to Check the Status of the instance

function ec2_check_status ()   {
     aws ec2 describe-instances --instance-ids $1 --query "Reservations[].Instances[].State.Name" --output text
}

# Main Function 

function main ()  {
     check_ec2_instance_id $1                # First it checks the Instance ID
     echo " Instance ID provided is $1"  # Prints the message
     echo "Checking the status of $1"    # Prints the message
     ec2_check_status $1
                 # Checks the Status of Instance
   
     status=$(ec2_check_status $id)     # It stores the status of Instance
     if [ "$status" = "running" ]; then     
         echo "I am stopping the instance now"
         ec2_stop_instance $1
         echo "Instance has been stopped successfully"
     else 
         echo "I am starting the instance now"
         ec2_start_instance $1
         echo "Instance has been Started successfully" 
     fi

}

main $1                                 # Actual Script starts from main function

Executing the Shell Script to Stop AWS EC2 instance

Previously you created the shell script to start and stop the AWS EC2 instance, which is great; but it is not doing much unless you run it. Let’s execute the shell script now.

  • Open Visual Studio Code and then open the location of the file start-stop-ec2.sh.
Opening Shell script on visual studio code
  • Finally execute the shell script.
./start-stop-ec2.sh <Instance-ID>    # Provide the EC2 instance ID along with script
Executing the shell script to stop the AWS Ec2 instance

Verifying the Stopped AWS EC2 instance

Earlier, the shell script ran successfully; let's verify whether the AWS EC2 instance has moved from the running state to the stopped state in the AWS account.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘EC2’, and click on the EC2 menu item; you should see that the instance you specified in the shell script has now stopped.
Viewing the stopped AWS EC2 instance

Executing the Shell Script to Start AWS EC2 instance

Now that you have successfully stopped and verified the AWS EC2 instance in the AWS cloud, let's restart the instance using the same script.

./start-stop-ec2.sh <Instance-ID>    # Provide the EC2 instance ID along with script
Executing the shell script to start the instance

Verifying the Running AWS EC2 instance

Similarly, in this section, let's verify whether the AWS EC2 instance has been restarted successfully in the AWS account.

Viewing the running AWS EC2 instance

Conclusion

In this tutorial, you learned what Amazon EC2 is and how to start or stop an AWS EC2 instance using a shell script, step by step. It is always a good practice to turn off the lights when you leave your home or room; do the same for your EC2 instances.

So which AWS EC2 instance are you planning to stop going further and save dollars?

How to Create an IAM user on an AWS account using shell script

Are you using the correct credentials and right permissions to log in to your AWS account? From a security point of view, it is essential to grant the right permissions to users and identities that access AWS accounts. That is where Identity and access management (AWS IAM) plays a vital role.

In this tutorial, you will learn how to create an IAM user on an AWS account using shell script step by step. Let’s get started.

Table of Content

  1. What is Shell Scripting or Bash Scripting?
  2. What is AWS IAM or What is IAM in AWS ?
  3. AWS IAM Resources
  4. AWS IAM entities
  5. AWS IAM Principals
  6. AWS IAM Identities
  7. Prerequisites
  8. How to create IAM user in AWS manually
  9. How to create AWS IAM user using shell script in Amazon account
  10. Executing the Shell Script to Create AWS IAM user
  11. Verifying the Newly created IAM user in AWS
  12. Conclusion

What is Shell Scripting or Bash Scripting?

Shell Script is a text file containing lists of commands executed on the terminal or shell in one go in sequential order. Shell Script performs various important tasks such as file manipulation, printing text, program execution.

Shell script includes various environmental variables, comments, conditions, pipe commands, functions, etc., to make it more dynamic.

When you execute a shell script or function, a command interpreter goes through the ASCII text line-by-line, loop-by-loop, test-by-test, and executes each statement as each line is reached from top to bottom.

What is AWS IAM or What is IAM in AWS ?

AWS IAM stands for Identity and Access Management, an Amazon-managed service that controls who can access an AWS account and which resources in that account can be accessed.

When you create a new AWS account, you are by default the root user, who has control over the entire AWS account and can access everything. The root user can log in to the AWS account using the email address and password you registered with.

There are some important components in AWS IAM such as:

AWS IAM Resources

AWS IAM resources are the objects stored in IAM, such as user, role, policy, group, and identity provider.

AWS IAM Resources

AWS IAM entities

AWS IAM entities are those objects which can authenticate on AWS account, such as root user, IAM user, federated user, and assumed IAM roles.

AWS IAM entities

AWS IAM Principals

AWS IAM Principals are the applications or users who use entities and work with AWS services. For example, Python AWS Boto3 or any person such as Robert.

AWS IAM Identities

AWS IAM identities are the objects that identify themselves to another service, such as the IAM user “user1” accessing an AWS EC2 instance: user1 presents its own identity to show that it has permission to create an AWS EC2 instance. Examples of identities are groups, users, and roles.

AWS IAM Identities

Prerequisites

  1. An AWS account to create the IAM user. If you don’t have an AWS account, please create one from here: AWS account.
  2. Windows 7 or a later edition, where you will execute the shell script.
  3. AWS CLI installed. To install the AWS CLI click here.
  4. Git bash. To install Git bash click here.
  5. A code editor for writing the shell script on the windows machine, such as Visual Studio Code. To install Visual Studio Code click here.

How to create IAM user in AWS manually

Did you know that the root user is a shared account with all privileges, and that it is not recommended to use it for day-to-day activity on an AWS account?

Instead of using the root user, which is effectively a shared user, create individual users and grant each of them the appropriate permissions.

An IAM user can be granted access to a single AWS EC2 instance, to multiple AWS S3 buckets, or even admin access to the entire AWS account.

  • Navigate to the Amazon Management console and search for IAM.
  • On the AWS IAM page, click on the Add users button in the IAM dashboard.
Adding an IAM user in AWS Cloud
  • Now, provide the username, add a custom password, and also select Programmatic access as shown below.
Providing the details to create an IAM user
  • Click on Next: Permissions and choose Attach existing policies directly. This tutorial grants Administrator access to the IAM user that you are creating.
Attaching IAM policy to IAM user in AWS
  • For now, skip tagging and click on Create user. The IAM user is created successfully. Now save the access key ID and secret access key, which will be used later in the article.
Downloading the AWS IAM user credentials for IAM user

How to create AWS IAM user using shell script in Amazon account

Previously you learned how to create an IAM user manually in the Amazon Management console; in this section, let's create an AWS IAM user using a shell script in an Amazon account. Let's quickly jump in and create the script.

  • Create a folder on your windows machine at any location. Then, under the same folder, create a file named create-iam-user.sh and copy/paste the code below.
#! /bin/bash
# Checking if access key is setup in your system 

if ! grep -q aws_access_key_id ~/.aws/config; then      # grep -q  Turns off Writing to standard output
   if ! grep -q aws_access_key_id ~/.aws/credentials; then 
      echo "AWS config not found or CLI is not installed"
      exit 1
    fi 
fi


# read command will prompt you to enter the name of IAM user you wish to create 

read -r -p "Enter the username to create": username

# Using AWS CLI Command create IAM user 

aws iam create-user --user-name "${username}" --output json

# Create an access key for the user, then use --query to store the access key ID and secret access key in the credentials variable

credentials=$(aws iam create-access-key --user-name "${username}" --query 'AccessKey.[AccessKeyId,SecretAccessKey]'  --output text)

# cut extracts the correct column from the output.

access_key_id=$(echo ${credentials} | cut -d " " -f 1)
secret_access_key=$(echo ${credentials} | cut --complement -d " " -f 1)

# echo command will print on the screen 

echo "The Username "${username}" has been created"
echo "The access key ID  of "${username}" is $access_key_id "
echo "The Secret access key of "${username}" is $secret_access_key "

Executing the Shell Script to Create AWS IAM user

Previously you created the shell script to create the AWS IAM user, which is great, but it is not doing much unless you run it. Let’s execute the shell script now.

  • Open Visual Studio Code and then open the location of the file create-iam-user.sh.
Opening Shell script on visual studio code
  • Finally execute the shell script.
./create-iam-user.sh
Executing the shell script to create the AWS IAM user

Verifying the Newly created IAM user in AWS

Earlier, the shell script ran successfully; let's verify whether the IAM user has been created in the AWS account.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘IAM’, and click on the IAM menu item; you should see that the IAM user has been created.
Verifying the Newly created IAM user in AWS

Conclusion

In this tutorial, you learned how to create an AWS IAM user using a shell script, step by step. With IAM, each person gets individual access to the AWS account, and you can manage their permissions accordingly.

Now that you have newly created IAM users in the AWS account, which AWS resource do you plan to create next using this?

How to Launch AWS S3 bucket using Shell Scripting.

Are you storing your data in a way that is secure, scalable, highly available, and fault-tolerant? If not, consider using Amazon Simple Storage Service (Amazon S3) in the AWS cloud.

This tutorial will teach you how to launch an AWS S3 bucket in an Amazon account using bash or shell scripting.

Let’s dive into it quickly.

Table of Content

  1. What is Shell Script or Bash Script?
  2. What is the Amazon AWS S3 bucket?
  3. Prerequisites
  4. Building a shell script to create AWS S3 bucket in Amazon account
  5. Executing the Shell Script to Create AWS S3 bucket in Amazon Cloud
  6. Verifying the AWS S3 bucket in AWS account
  7. Conclusion

What is Shell Script or Bash Script?

Shell Script is a text file containing lists of commands executed on the terminal or shell in one go in sequential order. Shell Script performs various important tasks such as file manipulation, printing text, program execution.

Shell script includes various environmental variables, comments, conditions, pipe commands, functions, etc., to make it more dynamic.

When you execute a shell script or function, a command interpreter goes through the ASCII text line-by-line, loop-by-loop, test-by-test, and executes each statement as each line is reached from top to bottom.

What is the Amazon AWS S3 bucket?

Why is it called S3? The name comes from the three words that start with “S”: Simple Storage Service. The AWS S3 service helps you store unlimited data safely and efficiently. Everything in the AWS S3 service is stored as an object, such as PDF files, zip files, text files, war files, anything. Some of the features of the AWS S3 bucket are below:

  • To store data in an AWS S3 bucket, you upload it as objects.
  • To keep your AWS S3 bucket secure, add the necessary permissions to an IAM role or IAM user.
  • AWS S3 bucket names are globally unique, which means no two buckets can share the same name across different accounts or regions.
  • By default, up to 100 buckets can be created in an AWS account; beyond that, you need to raise a request with Amazon to increase the limit.
  • The owner of an AWS S3 bucket is specific to the AWS account in which it was created.
  • AWS S3 buckets are created in a specific region, such as us-east-1, us-east-2, us-west-1, or us-west-2.
  • Objects are created in AWS S3 buckets using the AWS console or the AWS S3 API.
  • AWS S3 buckets can be made publicly accessible, which means anybody on the internet can access them, but it is recommended to keep public access blocked for all buckets unless it is really required (a CLI sketch for blocking public access follows this list).
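As noted in the last point, blocking public access is usually the safer default. A minimal sketch for doing that from the CLI is shown below; the bucket name is a placeholder.
# Blocks all forms of public access on the bucket (replace my-example-bucket with your bucket name)
aws s3api put-public-access-block \
    --bucket my-example-bucket \
    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true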

Prerequisites

  1. An AWS account to create the S3 bucket. If you don’t have an AWS account, please create one from here: AWS account.
  2. Windows 7 or a later edition, where you will execute the shell script.
  3. AWS CLI installed. To install the AWS CLI click here.
  4. Git bash. To install Git bash click here.
  5. A code editor for writing the shell script on the windows machine, such as Visual Studio Code. To install Visual Studio Code click here.

Building a shell script to create AWS S3 bucket in Amazon account

Now that you have a good idea about the AWS S3 bucket and shell scripts, let's learn how to build a shell script to create an AWS S3 bucket in an Amazon account.

  • Create a folder on your windows machine at any location. Then, under the same folder, create a file named create-s3.sh and copy/paste the code below.
#! /usr/bin/bash
# This Script will create S3 bucket and tag the bucket with appropriate name.

# To check if access key is setup in your system 


if ! grep aws_access_key_id ~/.aws/config; then
   if ! grep aws_access_key_id ~/.aws/credentials; then
   echo "AWS config not found or you don't have AWS CLI installed"
   exit 1
   fi
fi

# read command will prompt you to enter the name of bucket name you wish to create 


read -r -p  "Enter the name of the bucket:" bucketname

# Creating first function to create a bucket 

function createbucket()
   {
    # Buckets created outside us-east-1 need an explicit LocationConstraint
    aws s3api  create-bucket --bucket $bucketname --region us-east-2 --create-bucket-configuration LocationConstraint=us-east-2
   }

# Creating Second function to tag a bucket 

function tagbucket()    {
    
   aws s3api  put-bucket-tagging --bucket $bucketname --tagging 'TagSet=[{Key=Name,Value="'$bucketname'"}]'
}

# echo command will print on the screen 

echo "Creating the AWS S3 bucket and Tagging it !! "
echo ""
createbucket    # Calling the createbucket function  
tagbucket       # calling our tagbucket function
echo "AWS S3 bucket $bucketname created successfully"
echo "AWS S3 bucket $bucketname tagged successfully "

Executing the Shell Script to Create AWS S3 bucket in Amazon Cloud

Previously you created the shell script to create an AWS S3 bucket in Amazon Cloud, which is great, but it is not doing much unless you run it. Let’s execute the shell script now.

  • Open Visual Studio Code and then open the location of the file create-s3.sh.
Opening Shell script on visual studio code
  • Finally execute the shell script.
./create-s3.sh
Executing the shell script to create AWS S3 bucket

Verifying the AWS S3 bucket in AWS account

Earlier, the shell script ran successfully; let's verify whether the AWS S3 bucket has been created in the AWS account.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘S3’, and click on the S3 menu item; you should see the list of AWS S3 buckets, including the bucket that you specified in the shell script.
Viewing the AWS S3 bucket in AWS cloud
  • Also, verify the tags that you applied to the AWS S3 bucket by navigating to the Properties tab.
Viewing the AWS S3 bucket tags in the AWS cloud
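You can also confirm the tag from the terminal instead of the console; the command below assumes the bucket name you entered when running the script.
# Prints the tag set that the script applied (replace my-example-bucket with the bucket you created)
aws s3api get-bucket-tagging --bucket my-example-bucket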

Conclusion

In this tutorial, you learned how to set up an Amazon AWS S3 bucket using a shell script, step by step. A huge amount of the data behind today's apps and websites is stored on AWS S3.

Now that you have a newly created AWS S3 bucket, what do you plan to store in it?