AWS CloudWatch Logs

When you host applications on AWS and your Amazon infrastructure grows day by day, it becomes difficult to monitor everything. To address this, Amazon provides a managed service that stores your logs and lets you access them at any time.

AWS CloudWatch Logs can collect logs from many AWS services and store them in the same or separate log groups.

In this tutorial you will learn everything you should know about CloudWatch Logs. Let's get into it without further delay.

Table of Contents

  1. What is AWS CloudWatch Logs?
  2. AWS CloudWatch Pricing
  3. Components of AWS CloudWatch Logs
  4. Collecting EC2 instance Logs with CloudWatch Logs
  5. Unified CloudWatch Agent
  6. Older CloudWatch Agent
  7. Installing the CloudWatch agent
  8. Downloading CloudWatch Agent
  9. Creating IAM roles for CloudWatch
  10. Create and modify the CloudWatch Agent configuration file
  11. Running AWS CloudWatch agent
  12. Analyzing log data with CloudWatch Logs Insights
  13. Running a CloudWatch Logs Insights query
  14. Running a CloudWatch Logs Insights query for Lambda function
  15. Running a CloudWatch Logs Insights query for Amazon VPC Flow Logs
  16. Running a CloudWatch Logs Insights query for Route53 logs
  17. Running a CloudWatch Logs Insights query for CloudTrail logs
  18. Create a CloudWatch Log group in CloudWatch Logs
  19. Checking Log Entries using AWS Management console.
  20. Checking Log Entries using the AWS CLI
  21. Data at Rest vs Data in Transit
  22. Encrypting Log Data in CloudWatch Logs
  23. Creating an AWS KMS customer managed key
  24. Adding permissions to AWS KMS customer managed keys
  25. Associating the customer managed key with a log group when you create it
  26. Creating metrics from log events using filters
  27. Creating metric filters from log events
  28. Creating metric filters using the AWS CLI
  29. Posting Event data into CloudWatch Log groups using the AWS CLI
  30. To list metric filters using the AWS CLI
  31. Real-time processing of log data with subscriptions
  32. Creating CloudWatch Logs Subscription filter with Kinesis Data Streams
  33. Creating CloudWatch Logs Subscription filter with AWS lambda function.
  34. Publish Logs to AWS S3, kinesis and CloudWatch Logs
  35. Publishing Logs to AWS CloudWatch Logs
  36. Publishing Logs to AWS S3
  37. Publishing Logs to Kinesis Firehose
  38. Conclusion

What is AWS CloudWatch Logs?

The AWS CloudWatch Logs service lets you monitor, store, and access log files from various other AWS services such as Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53, and other sources.

  • AWS CloudWatch Logs also allows you to query your logs with a query language, mask sensitive information, and generate metrics using filters or an embedded metric format.
  • CloudWatch Logs Insights lets you query, search, and analyze your log data. You can also use CloudWatch to monitor AWS EC2 logs.
  • You can also create AWS CloudWatch alarms for various AWS services, for example to capture CloudTrail events.
  • Use data protection policies to mask sensitive data in your logs.
  • By default, logs are kept indefinitely and never expire; however, you can set a retention period from one day up to 10 years depending on the requirement.
  • You can also archive the logs in highly durable storage.

AWS CloudWatch Pricing

AWS CloudWatch Logs is free of cost under the AWS Free Tier. For a standard account, however, logs such as VPC Flow Logs, EC2 logs, and Lambda logs are charged.

Metrics, dashboards, alarms, and various other components in AWS CloudWatch are also charged.

Components of AWS CloudWatch Logs

Log Event: A log event is a record of some activity recorded by the application or resource being monitored. CloudWatch Logs understands two things from a log event: the timestamp and the raw event message.

Log streams: A log stream is a sequence of log events that share the same or a common source, such as the Apache access logs coming from a particular application instance or resource being monitored.

Log groups: Log groups define groups of log streams that share the same retention, monitoring, and access control settings.

Note: Each log stream has to belong to exactly one log group.

Metric filters

You can use metric filters on ingested events to create data points in a CloudWatch metric. Metric filters are assigned to log groups, and all of the filters assigned to a log group are applied to its log streams.

Retention settings

Retention settings specify how long log events are kept in CloudWatch Logs.
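For example, assuming a log group named my-log-group already exists (the name and retention value here are just placeholders), you could set a 30-day retention period with the AWS CLI:

aws logs put-retention-policy --log-group-name my-log-group --retention-in-days 30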

Collecting EC2 instance Logs with CloudWatch Logs

There are two agents with which AWS EC2 instance logs can be captured and shipped to CloudWatch Logs:

Unified CloudWatch Agent

  • The latest and recommended agent is the unified CloudWatch agent, which supports multiple operating systems, including servers running Windows Server. This agent also provides better performance.
  • Retrieve custom metrics from your applications or services using the StatsD and collectd protocols. 
    • StatsD is supported on both Linux servers and servers running Windows Server. 
    • collectd is supported only on Linux servers.
  • The default namespace for metrics collected by the CloudWatch agent is CWAgent, although you can specify a different namespace when you configure the agent.

Older CloudWatch Agent

The older CloudWatch agent supports collection of logs only from servers running Linux.

Installing the CloudWatch agent

Below are the high-level steps that you need to perform to install the CloudWatch agent.

  • Create IAM roles or users that enable the agent to collect metrics from the server and optionally to integrate with AWS Systems Manager.
  • Download the agent package. You can download and install the CloudWatch agent manually using the command line, or you can integrate it with AWS Systems Manager (SSM).
  • Modify the CloudWatch agent configuration file and specify the metrics that you want to collect.
  • Install and start the agent on your servers. When you install the agent on an EC2 instance, attach the IAM role that you created in step 1. When you install the agent on an on-premises server, specify a named profile that contains the credentials of the IAM user that you created in step 1.

Downloading CloudWatch Agent

Let's now get into the details of what you need to do to install and work with the CloudWatch agent.

  • Install the package on the AWS EC2 instance using the below command.
sudo yum install amazon-cloudwatch-agent
  • Create an IAM role that has the CloudWatchAgentServerPolicy attached and attach it to the AWS EC2 instance.
  • You can also download and install the agent package directly from Amazon S3.
wget https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm

# On Amazon Linux / RHEL-based systems
sudo rpm -U ./amazon-cloudwatch-agent.rpm

# On Debian/Ubuntu systems (after downloading the corresponding .deb package)
sudo dpkg -i -E ./amazon-cloudwatch-agent.deb

Creating IAM roles for CloudWatch

Next, create the IAM role from the IAM management console and add the policy CloudWatchAgentServerPolicy. If you want the CloudWatch agent to set the retention policy for the log groups that it sends log events to, then add the below statement to the policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:PutRetentionPolicy",
      "Resource": "*"
    }
  ]
}
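If you prefer the AWS CLI over the console, a minimal sketch of creating the role and attaching the managed policy could look like the following. The role name and the trust policy file ec2-trust-policy.json (which should allow ec2.amazonaws.com to assume the role) are assumptions for illustration.

aws iam create-role --role-name CloudWatchAgentServerRole --assume-role-policy-document file://ec2-trust-policy.json
aws iam attach-role-policy --role-name CloudWatchAgentServerRole --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy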

Create and modify the CloudWatch Agent configuration file

The agent configuration file is a JSON file with three sections: agent, metrics, and logs. It specifies the metrics and logs that the agent is to collect, including custom metrics. You can generate it with the agent configuration file wizard, amazon-cloudwatch-agent-config-wizard.

The wizard can autodetect the credentials and AWS Region to use if you have the AWS credentials and configuration files in place before you start the wizard.

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard

You can also create the CloudWatch agent configuration file manually; sometimes it is installed along with the agent.

  • The agent section is declared as below.
"agent": {
   "metrics_collection_interval": 60,
   "region": "us-west-1",
   "logfile": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log",
   "debug": false,
   "run_as_user": "cwagent"
  }
  • The metrics section is declared as below.
{
  "metrics": {
    "namespace": "Development/Product1Metrics",
   ......
   },
} 
  • The logs section is declared as below.
"collect_list": [ 
  {
    "file_path": "/opt/aws/amazon-cloudwatch-agent/logs/test.log", 
    "log_group_name": "test.log", 
    "log_stream_name": "test.log",
    "filters": [
      {
        "type": "exclude",
        "expression": "Firefox"
      },
      {
        "type": "include",
        "expression": "P(UT|OST)"
      }
    ]
  },
  .....
]

Running AWS CloudWatch agent

Finally, run the CloudWatch agent by performing the below steps.

  • Copy the agent configuration file that you want to use to the server where you’re going to run the agent. Note the pathname where you copy it to.
  • On an EC2 instance running Linux, enter the following command.
    • -a fetch-config causes the agent to load the latest version of the CloudWatch agent configuration file
    • -s starts the agent.
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:configuration-file-path
  • On an EC2 instance running Windows Server, enter the following from the PowerShell console
& "C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" -a fetch-config -m ec2 -s -c file:configuration-file-path
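Once the agent is running, you can check its status with the same control script; assuming an EC2 instance running Linux, a quick check looks like this:

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a status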

Analyzing log data with CloudWatch Logs Insights

If you need to analyze your log data more interactively, you can use CloudWatch Logs Insights in Amazon CloudWatch Logs.

  • CloudWatch Logs Insights automatically discovers fields in logs from AWS services such as Amazon Route 53, AWS Lambda, AWS CloudTrail, and Amazon VPC, and any application or custom log that emits log events as JSON.
  • A single request can query up to 50 log groups. Queries time out after 60 minutes, if they have not completed. Query results are available for 7 days.
  • CloudWatch Logs Insights automatically generates five system fields:
    • @message contains the raw unparsed log event.
    • @timestamp contains the event timestamp in the log event’s timestamp field.
    • @ingestionTime contains the time when CloudWatch Logs received the log event.
    • @logStream contains the name of the log stream that the log event was added to.
    • @log is a log group identifier in the form of account-id:log-group-name.
  • Let's say you have the below log in JSON format and you want to access the type field; you would then use userIdentity.type.
{
    "eventVersion": "1.0",
    "userIdentity": {
        "type": "IAMUser",
        "principalId": "EX_PRINCIPAL_ID",
        "arn": "arn: aws: iam: : 123456789012: user/Alice",
        "accessKeyId": "EXAMPLE_KEY_ID",
        "accountId": "123456789012",
        "userName": "Alice"
    },
    ...
}

Running a CloudWatch Logs Insights query

Below are the steps to run a CloudWatch Logs Insights query.

  • Open the CloudWatch console.
  • In the navigation pane, choose Logs, and then choose Logs Insights. On the Logs Insights page, go to the query editor.
  • In the Select log group(s) drop down, choose one or more log groups to query.
  • Choose Run to view the results.
  • To see all fields for a returned log event, choose the triangular dropdown icon left of the numbered event.
  • Some example queries are as follows.
stats count(*) by @logStream     | limit 100

stats count(*) by fieldName

stats count(*) by bin(30s)
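You can also run Insights queries from the AWS CLI. A minimal sketch, assuming a log group named my-log-group and a shell with GNU date, is shown below; start-query returns a query ID that you then pass to get-query-results.

aws logs start-query --log-group-name my-log-group --start-time $(date -d '1 hour ago' +%s) --end-time $(date +%s) --query-string 'stats count(*) by bin(30s)'

aws logs get-query-results --query-id <query-id-returned-above>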

Running a CloudWatch Logs Insights query for Lambda function

To determine the amount of over-provisioned memory for a Lambda function, run the below CloudWatch Logs Insights query.

filter @type = "REPORT"
    | stats max(@memorySize / 1000 / 1000) as provisionedMemoryMB,
        min(@maxMemoryUsed / 1000 / 1000) as smallestMemoryRequestMB,
        avg(@maxMemoryUsed / 1000 / 1000) as avgMemoryUsedMB,
        max(@maxMemoryUsed / 1000 / 1000) as maxMemoryUsedMB,
        provisionedMemoryMB - maxMemoryUsedMB as overProvisionedMB
    

Running a CloudWatch Logs Insights query for Amazon VPC Flow Logs

To determine the top 15 packet transfers across hosts from Amazon VPC Flow Logs, run the below CloudWatch Logs Insights query.

stats sum(packets) as packetsTransferred by srcAddr, dstAddr
    | sort packetsTransferred  desc
    | limit 15

Running a CloudWatch Logs Insights query for Route53 logs

To determine the distribution of Route 53 records per hour by query type, run the below CloudWatch Logs Insights query.

stats count(*) by queryType, bin(1h)

Running a CloudWatch Logs Insights query for CloudTrail logs

  • Find the Amazon EC2 hosts that were started or stopped in a given AWS Region.
filter (eventName="StartInstances" or eventName="StopInstances") and awsRegion="us-east-2"
    

Note: After you run a query, you can add the query to a CloudWatch dashboard or copy the results to the clipboard.

Create a CloudWatch Log group in CloudWatch Logs

A log stream is a sequence of log events that share the same source. Each separate source of logs in CloudWatch Logs makes up a separate log stream, and a log group is a group of log streams with the same configuration.

In this section we will learn how to create a log group in the CloudWatch Logs service. Let's perform the below steps.

  • Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/

  • In the navigation pane, choose Log groups.

  • Choose Actions, and then choose Create log group.

  • Enter a name for the log group, and then choose Create log group.

Note: You may send logs to CloudWatch Logs using the CloudWatch agent, the AWS CLI, or programmatically; a small CLI example follows below.
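For example, a minimal AWS CLI sketch that creates a log group and a log stream inside it (the names are placeholders) looks like this:

aws logs create-log-group --log-group-name my-log-group
aws logs create-log-stream --log-group-name my-log-group --log-stream-name my-stream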

Checking Log Entries using AWS Management console.

To check log entries using the AWS Management Console, perform the following steps.

  • Open the CloudWatch console and choose Log groups.
  • Find the relevant log group, then open its log streams and look through the log events.

Checking Log Entries using the AWS CLI

You can run the below command to search log entries using the AWS CLI.

aws logs filter-log-events --log-group-name my-group [--log-stream-names LIST_OF_STREAMS_TO_SEARCH] [--filter-pattern VALID_METRIC_FILTER_PATTERN]
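If you already know the log stream you are interested in, get-log-events is another option; a small sketch with placeholder names:

aws logs get-log-events --log-group-name my-group --log-stream-name my-stream --limit 10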

Data at Rest vs Data in Transit

This section is important for understanding what data at rest and data in transit are. The data that resides in your cloud or is brought into your AWS account always has to be secure, so all AWS services have the ability to encrypt data at rest and in transit.

AWS services support server-side encryption, where AWS manages the encryption using AWS KMS keys, and client-side encryption, where the client manages it using various methods, including AWS KMS keys.

Data at rest is data that is kept and stored, and it can be encrypted using AWS KMS keys. For data in transit, customers can use a protocol such as Transport Layer Security (TLS); all AWS service endpoints support TLS to create a secure HTTPS connection for API requests.

Using services like AWS KMS, AWS CloudHSM, and AWS Certificate Manager (ACM), customers can implement a comprehensive data-at-rest and data-in-transit encryption strategy across their AWS accounts.

Encrypting Log Data in CloudWatch Logs

Log data is always encrypted in CloudWatch Logs. By default, CloudWatch Logs uses server-side encryption for log data at rest. However, you can also use AWS Key Management Service with AWS KMS customer managed keys. Let's see how you can achieve this.

  • Encryption using AWS KMS is enabled at the log group level, by associating a key with a log group.
  • The encryption is done using an AWS KMS customer managed key.
  • CloudWatch Logs supports only symmetric customer managed keys. 
  • You must have kms:CreateKey, kms:GetKeyPolicy, and kms:PutKeyPolicy permissions.
  • If you revoke CloudWatch Logs access to an associated key or delete an associated customer managed key, your encrypted data in CloudWatch Logs can no longer be retrieved.

Let's follow the below steps to implement encryption in AWS CloudWatch Logs.

Creating an AWS KMS customer managed key

  • Let's run the below command to create an AWS KMS key.
aws kms create-key
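Optionally, you can give the new key a friendly alias so it is easier to reference later; the alias name below is just an example and the key ID is the one returned by create-key.

aws kms create-alias --alias-name alias/cloudwatch-logs-key --target-key-id <key-id-from-create-key>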

Adding permissions to AWS KMS customer managed keys

  • By default, only the key's owner has permission to encrypt or decrypt data with it, so it is important to grant other users and resources access to the key. Your key policy should look something like the below.
  • Note: CloudWatch Logs supports encryption context, using kms:EncryptionContext:aws:logs:arn as the key and the ARN of the log group as the value for that key.
  • Encryption context is a set of key-value pairs that are used as additional authenticated data. The encryption context enables you to use IAM policy conditions to limit access to your AWS KMS key by AWS account and log group.
{
 "Version": "2012-10-17",
    "Id": "key-default-1",
    "Statement": [
        {
            "Sid": "Enable IAM User Permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::Your_account_ID:root"
            },
            "Action": "kms:*",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "logs.region.amazonaws.com"
            },
            "Action": [
                "kms:Encrypt*",
                "kms:Decrypt*",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:Describe*"
            ],
            "Resource": "*",
            "Condition": {
                "ArnEquals": {
                    "kms:EncryptionContext:aws:logs:arn": "arn:aws:logs:region:account-id:log-group:log-group-name"
                }
            }
        }    
    ]
}
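Assuming you saved the policy above to a local file such as key-policy.json (the file name is an assumption), you can apply it to the key with put-key-policy:

aws kms put-key-policy --key-id <key-id> --policy-name default --policy file://key-policy.json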

Associating the customer managed key with a log group when you create it

  • Use the create-log-group command as follows.
aws logs create-log-group --log-group-name my-log-group --kms-key-id "key-arn"
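If the log group already exists, you can associate the key with it instead; a minimal sketch using the same placeholder names:

aws logs associate-kms-key --log-group-name my-log-group --kms-key-id "key-arn"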

Creating metrics from log events using filters

We can filter the log data coming into CloudWatch Logs by creating one or more metric filters. CloudWatch Logs uses these metric filters to turn log data into numerical CloudWatch metrics that you can graph or set an alarm on.

Components of Metrics

  • default value: The value reported to the metric during a period when no matching logs are ingested. If you do not set a default value, no value is reported for such periods.
  • dimensions: Dimensions are the key-value pairs that further define a metric.
  • metric name: The name of the CloudWatch metric to which the monitored log data is published.
  • metric namespace: The destination namespace of the new CloudWatch metric.
  • metric value: The numerical value to publish to the metric each time a matching log event occurs.

Creating metric filters from log events

In this section we will go through the steps for creating metric filters from log events.

  • Open the CloudWatch console.
  • In the navigation pane, choose Logs, and then choose Log groups.
  • Choose the name of the log group.
  • Choose Actions, and then choose Create metric filter.
  • For Filter pattern, enter a filter pattern. To test your filter pattern, under Test Pattern, enter one or more log events to test the pattern.

Note: You can also use the below filter pattern to find HTTP 404 status code errors.

For Filter Pattern, type [IP, UserInfo, User, Timestamp, RequestInfo, StatusCode=404, Bytes].
  • Choose Next, and then enter a name for your metric filter.
  • Under Metric details, for Metric namespace, enter a name for the CloudWatch namespace where the metric will be published. If the namespace doesn’t already exist, make sure that Create new is selected.
  • For Metric name, enter a name for the new metric.
  • For Metric value, if your metric filter is counting occurrences of the keywords in the filter, enter 1.
  • Finally, review and create the metric filter.

Creating metric filters using the AWS CLI

The other way of creating metric filters is by using the AWS CLI. Let's check out the below command to create a metric filter using the AWS CLI.

aws logs put-metric-filter \
  --log-group-name MyApp/access.log \
  --filter-name EventCount \
  --filter-pattern " " \
  --metric-transformations \
  metricName=MyAppEventCount,metricNamespace=MyNamespace,metricValue=1,defaultValue=0
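Once the metric exists, you can alarm on it. The sketch below uses the names from the filter above and a placeholder threshold, raising the alarm when more than 100 events arrive in a five-minute period.

aws cloudwatch put-metric-alarm \
  --alarm-name MyAppEventCountAlarm \
  --metric-name MyAppEventCount \
  --namespace MyNamespace \
  --statistic Sum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 100 \
  --comparison-operator GreaterThanThreshold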

Posting Event data into CloudWatch Log groups using the AWS CLI

You can push test log events into an existing log stream with the put-log-events command, as shown below.

aws logs put-log-events \
  --log-group-name MyApp/access.log --log-stream-name TestStream1 \
  --log-events \
    timestamp=1394793518000,message="Test event 1" \
    timestamp=1394793518000,message="Test event 2" \
    timestamp=1394793528000,message="This message also contains an Error"

To list metric filters using the AWS CLI

aws logs describe-metric-filters --log-group-name MyApp/access.log

Real-time processing of log data with subscriptions

You can use subscriptions to get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services such as an Amazon Kinesis stream, an Amazon Kinesis Data Firehose stream, or AWS Lambda for custom processing, analysis, or loading to other systems.

To begin subscribing to log events, create the receiving resource, such as a Kinesis Data Streams stream, where the events will be delivered. A subscription filter defines the filter pattern to use for filtering which log events get delivered to your AWS resource, as well as information about where to send matching log events to.

CloudWatch Logs also produces CloudWatch metrics about the forwarding of log events to subscriptions.

You can use a subscription filter with Kinesis Data Streams, Lambda, or Kinesis Data Firehose. Logs that are sent to a receiving service through a subscription filter are base64 encoded and compressed with the gzip format.

Creating CloudWatch Logs Subscription filter with Kinesis Data Streams

In this section we will create an AWS CloudWatch Logs subscription filter and send the logs to a Kinesis data stream.

  • Create a destination stream in the Kinesis Data Streams service using the below command.
 aws kinesis create-stream --stream-name "RootAccess" --shard-count 1
  • Check that the Kinesis data stream is in the ACTIVE state.
aws kinesis describe-stream --stream-name "RootAccess"
  • Create the IAM role that will grant CloudWatch Logs permission to put data into your stream. Also make sure to add the following trust policy to the role.
{
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "logs.amazonaws.com" },
    "Action": "sts:AssumeRole",
    "Condition": { 
        "StringLike": { "aws:SourceArn": "arn:aws:logs:region:123456789012:*" } 
     }
   }
}
  • In the cross-account case, the IAM role trust policy should look something like the below.
{
    "Statement": {
        "Effect": "Allow",
        "Principal": {
            "Service": "logs.amazonaws.com"
        },
        "Condition": {
            "StringLike": {
                "aws:SourceArn": [
                    "arn:aws:logs:region:sourceAccountId:*",
                    "arn:aws:logs:region:recipientAccountId:*"
                ]
            }
        },
        "Action": "sts:AssumeRole"
    }
}
aws iam create-role --role-name CWLtoKinesisRole --assume-role-policy-document file://~/TrustPolicyForCWL-Kinesis.json
  • Attach a policy to the IAM role that you created previously.
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kinesis:PutRecord",
      "Resource": "arn:aws:kinesis:region:123456789012:stream/RootAccess"
    }
  ]
}
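Assuming the permissions policy above is saved locally as PermissionsForCWL-Kinesis.json (the file name is an assumption), attach it to the role with put-role-policy:

aws iam put-role-policy --role-name CWLtoKinesisRole --policy-name Permissions-Policy-For-CWL --policy-document file://~/PermissionsForCWL-Kinesis.json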
  • In the cross-account case an additional step is required: in the recipient account, attach an access policy to the CloudWatch Logs destination so that the sending account is allowed to put a subscription filter on it.
{
  "Version" : "2012-10-17",
  "Statement" : [
    {
      "Sid" : "",
      "Effect" : "Allow",
      "Principal" : {
        "AWS" : "111111111111"
      },
      "Action" : "logs:PutSubscriptionFilter",
      "Resource" : "arn:aws:logs:region:999999999999:destination:testDestination"
    }
  ]
}
  • Create a CloudWatch Logs subscription filter. The subscription filter immediately starts the flow of real-time log data from the chosen log group to your stream. In the cross-account case, the subscription filter is created in the sending account.
aws logs put-subscription-filter \
    --log-group-name "CloudTrail/logs" \
    --filter-name "RootAccess" \
    --filter-pattern "{$.userIdentity.type = Root}" \
    --destination-arn "arn:aws:kinesis:region:123456789012:stream/RootAccess" \
    --role-arn "arn:aws:iam::123456789012:role/CWLtoKinesisRole"
  • After you set up the subscription filter, CloudWatch Logs forwards all the incoming log events that match the filter pattern to your stream. Verify by running the following examples.
aws kinesis get-shard-iterator --stream-name RootAccess --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON
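get-shard-iterator only returns an iterator; to actually see the forwarded records (which are base64 encoded and gzip compressed, as noted above), pass that iterator to get-records, for example:

aws kinesis get-records --limit 10 --shard-iterator "<shard-iterator-from-previous-command>"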

Creating CloudWatch Logs Subscription filter with AWS lambda function.

In this section we will create an AWS CloudWatch Logs subscription filter and send the logs to an AWS Lambda function.

  • Create the AWS Lambda function. Let's create a sample Lambda function as below using the AWS CLI.
aws lambda create-function \
    --function-name helloworld \
    --zip-file fileb://file-path/helloWorld.zip \
    --role lambda-execution-role-arn \
    --handler helloWorld.handler \
    --runtime nodejs12.x
  • Grant CloudWatch Logs the permission to execute your function.
aws lambda add-permission \
    --function-name "helloworld" \
    --statement-id "helloworld" \
    --principal "logs.amazonaws.com" \
    --action "lambda:InvokeFunction" \
    --source-arn "arn:aws:logs:region:123456789123:log-group:TestLambda:*" \
    --source-account "123456789123"
  • Create a subscription filter using the following command
aws logs put-subscription-filter \
    --log-group-name myLogGroup \
    --filter-name demo \
    --filter-pattern "" \
    --destination-arn arn:aws:lambda:region:123456789123:function:helloworld
  • Verify by running below command.
aws logs put-log-events --log-group-name myLogGroup --log-stream-name stream1 --log-events "[{\"timestamp\":<CURRENT TIMESTAMP MILLIS> , \"message\": \"Simple Lambda Test\"}]"

Publish Logs to AWS S3, kinesis and CloudWatch Logs

AWS services that publish logs to CloudWatch Logs include API Gateway, Amazon Aurora MySQL, VPC Flow Logs, and others. While many services publish logs only to CloudWatch Logs, some AWS services can publish logs directly to Amazon Simple Storage Service or Amazon Kinesis Data Firehose.

Publishing Logs to AWS CloudWatch Logs

If you need to send logs to CloudWatch Logs, the user or role you are logged in with needs the below permissions.

logs:CreateLogDelivery
logs:PutResourcePolicy
logs:DescribeResourcePolicies
logs:DescribeLogGroups

When the logs are sent to a log group in AWS CloudWatch, the resource policy is created automatically if you have the above permissions; otherwise, create and attach a resource policy to the log group as shown below.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSLogDeliveryWrite20150319",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "delivery.logs.amazonaws.com"
        ]
      },
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:us-east-1:0123456789:log-group:my-log-group:log-stream:*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": ["0123456789"]
        },
        "ArnLike": {
          "aws:SourceArn": ["arn:aws:logs:us-east-1:0123456789:*"]
        }
      }
    }
  ]
}
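Assuming the policy above is saved locally as log-delivery-policy.json (a hypothetical file name), you can attach it with put-resource-policy:

aws logs put-resource-policy --policy-name AWSLogDeliveryWrite20150319 --policy-document file://log-delivery-policy.json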

Publishing Logs to AWS S3

When logs are published to Amazon S3 for the first time, the service that delivers the logs becomes the owner of the delivered objects. If you need to send logs to AWS S3, the user or role you are logged in with needs the below permissions.

logs:CreateLogDelivery
s3:GetBucketPolicy
s3:PutBucketPolicy

The bucket should have a resource policy as shown below.

{
    "Version": "2012-10-17",
    "Id": "AWSLogDeliveryWrite20150319",
    "Statement": [
        {
            "Sid": "AWSLogDeliveryAclCheck",
            "Effect": "Allow",
            "Principal": {
                "Service": "delivery.logs.amazonaws.com"
                },
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::my-bucket",
        },
        {
            "Sid": "AWSLogDeliveryWrite",
            "Effect": "Allow",
            "Principal": {
                "Service": "delivery.logs.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-bucket/AWSLogs/account-ID/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control",
                    "aws:SourceAccount": ["0123456789"]
                },
                "ArnLike": {
                    "aws:SourceArn": ["arn:aws:logs:us-east-1:0123456789:*"]
                }
            }
        }
    ]
}
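Assuming the bucket policy above is saved locally as bucket-policy.json (a hypothetical file name), it can be attached to the bucket like this:

aws s3api put-bucket-policy --bucket my-bucket --policy file://bucket-policy.json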

Note: You can protect the data in your Amazon S3 bucket by enabling either server-side encryption with Amazon S3-managed keys (SSE-S3) or server-side encryption with an AWS KMS key stored in AWS Key Management Service (SSE-KMS).

If you choose a customer managed AWS KMS key, then your key policy must include a statement like the below.

{
    "Sid": "Allow Logs Delivery to use the key",
    "Effect": "Allow",
    "Principal": {
        "Service": [ "delivery.logs.amazonaws.com" ]
    },
    "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey"
    ],
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "aws:SourceAccount": ["0123456789"]
        },
        "ArnLike": {
            "aws:SourceArn": ["arn:aws:logs:us-east-1:0123456789:*"]
        }
      }
}

Publishing Logs to Kinesis Firehose

To be able to set up sending any of these types of logs to Kinesis Data Firehose for the first time, you must be logged into an account with the following permissions.

logs:CreateLogDelivery
firehose:TagDeliveryStream
iam:CreateServiceLinkedRole

Because Kinesis Data Firehose does not use resource policies, AWS uses IAM roles when setting up these logs to be sent to Kinesis Data Firehose. AWS creates a service-linked role named AWSServiceRoleForLogDelivery, which includes the following permissions.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "firehose:PutRecord",
                "firehose:PutRecordBatch",
                "firehose:ListTagsForDeliveryStream"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/LogDeliveryEnabled": "true"
                }
            },
            "Effect": "Allow"
        }
    ]
}

This service-linked role also has a trust policy that allows the delivery.logs.amazonaws.com service principal to assume the needed service-linked role. That trust policy is as follows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Conclusion

In this tutorial you learned everything one must know to securely push logs into CloudWatch Logs and store them. You also learned how to view and retrieve data from CloudWatch Logs.

With this knowledge you will be able to secure your applications and troubleshoot them easily from a central location. Go ahead and implement it.


Everything you should know about Amazon VPC or AWS VPC

In this theoretical tutorial you will learn everything you should know about Amazon VPC or AWS VPC. I am sure you will have no further questions on AWS VPC after going through this detailed guide.

Why not dive in right now.

Table of Contents

  1. What is VPC or an Amazon VPC or what is a VPC?
  2. VPC CIDR Range
  3. What is AWS VPC Peering?
  4. What is AWS VPC Endpoint?
  5. What are VPC Flow logs?
  6. Knowing AWS VPC pricing?
  7. AWS CLI commands to create VPC
  8. Defining AWS VPC Terraform or terraform AWS VPC Code
  9. How to Publish VPC Flow Logs to CloudWatch
  10. Create IAM trust Policy for IAM Role
  11. Creating IAM Policy to publish VPC Flow Logs to Cloud Watch Logs
  12. Create VPC flow logs using AWS CLI
  13. Conclusion

What is VPC or an Amazon VPC or what is a VPC?

Amazon Virtual Private Cloud allows you to launch AWS resources in an isolated, separate virtual network where you are the complete owner of that network.

In every AWS account and in each Region, you get a default VPC. It has a default subnet in each Availability Zone in the Region, an attached internet gateway, a route in the main route table that sends all traffic to the internet gateway, and DNS settings that automatically assign public DNS hostnames to instances with public IP addresses and enable DNS resolution through the Amazon-provided DNS server.

Therefore, an EC2 instance that is launched in a default subnet automatically has access to the internet. A Virtual Private Cloud contains subnets, each of which is tied to a particular Availability Zone.

If you associate an Elastic IP address with the eth0 network interface of your instance, its current public IPv4 address (if it had one) is released to the EC2-VPC public IP address pool.

The subnets and the VPC are assigned an IP range, also known as a CIDR range, which defines the network range in which all resources will be created.

You also create route tables that determine how traffic in your VPC is routed to other networks and AWS services, such as:

  • A peering connection, which is a connection between two VPCs that lets you share resources between them.
  • Gateway endpoints:
    • Internet Gateway connects public subnets to Internet
    • NAT Gateway to connect private subnets to internet. To allow an instance in your VPC to initiate outbound connections to the internet but prevent unsolicited inbound connections from the internet, you can use a network address translation (NAT) device.
    • NAT maps multiple private IPv4 addresses to a single public IPv4 address. You can configure the NAT device with an Elastic IP address and connect it to the internet through an internet gateway
    • If you want a non-default (private) subnet to reach the internet, attach an internet gateway to its VPC (if its VPC is not a default VPC) and associate an Elastic IP address with the instance.
    • VPC Endpoints connect to AWS services privately without using NAT or IGW.
  • Transit Gateway, which acts as a central hub to route traffic between your VPCs, VPN connections, and AWS Direct Connect connections.
  • Connect your VPCs to your on-premises networks using AWS Virtual Private Network (AWS VPN).

VPC sharing allows you to launch AWS resources into a centrally managed Virtual Private Cloud. The account that owns the VPC shares one or more subnets with other accounts (participants) that belong to the same organization in AWS Organizations.

  • You must enable resource sharing from the management account for your organization.
  • You can share non-default subnets with other accounts within your organization.
  • VPC owners are responsible for creating, managing, and deleting the resources associated with a shared VPC. VPC owners cannot modify or delete resources created by participants, such as EC2 instances and security groups.

If the tenancy of a VPC is default, EC2 instances running in the VPC run on hardware that’s shared with other AWS accounts by default. If the tenancy of the VPC is dedicated, the instances always run as Dedicated Instances, which are instances that run on hardware that’s dedicated for your use.

VPC CIDR Range

  • CIDR stands for Classless Inter Domain Routing (CIDR ) Notation.
  • IPv4 contains 32 bits.
  • A VPC CIDR block can be between /16 and /28.
  • A subnet CIDR block can also be between /16 and /28.
  • You can assign additional private IP addresses, known as secondary private IP addresses, to instances that are running in a VPC. Unlike a primary private IP address, you can reassign a secondary private IP address from one network interface to another.
  • The allowed block size is between a /16 netmask (65,536 IP addresses) and /28 netmask (16 IP addresses)
10.0.0.0 – 10.255.255.255 (10/8 prefix), for example 10.0.0.0/16
172.16.0.0 – 172.31.255.255 (172.16/12 prefix), for example 172.31.0.0/16
192.168.0.0 – 192.168.255.255 (192.168/16 prefix), for example 192.168.0.0/20
  • You can associate secondary IPv4 CIDR blocks with your VPC
  • VPCs that are associated with the Direct Connect gateway must not have overlapping CIDR blocks

What is AWS VPC Peering?

A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. Resources in peered VPCs can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region. Traffic between peered VPCs never traverses the public internet.

What is AWS VPC Endpoint?

VPC Endpoints connect to AWS services privately without using NAT or IGW.

What are VPC Flow logs?

VPC Flow Logs let you monitor traffic and network access in your virtual private cloud (VPC) by capturing detailed information about the traffic going to and from network interfaces in your VPCs.

Knowing AWS VPC pricing?

There’s no additional charge for using a VPC. There are charges for some VPC components, such as NAT gateways, IP Address Manager, traffic mirroring, Reachability Analyzer, and Network Access Analyzer.

AWS CLI commands to create VPC

aws ec2 create-vpc --cidr-block 10.0.0.0/24 --query Vpc.VpcId --output text
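The command prints the new VPC ID; as a follow-up sketch (the VPC ID and Availability Zone below are placeholders), you could carve a subnet out of that CIDR block with create-subnet:

aws ec2 create-subnet --vpc-id vpc-0abcd1234example --cidr-block 10.0.0.0/25 --availability-zone us-east-1a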

Defining AWS VPC Terraform or terraform AWS VPC Code

You can deploy a VPC using Terraform as well with just a few lines of code. To understand Terraform basics, you can refer to a Terraform getting-started guide first.

The below Terraform code contains a resource block that creates an Amazon VPC with cidr_block “10.0.0.0/16”, default tenancy, and the tag “Name” = “main”.

resource "aws_vpc" "main" {
  cidr_block       = "10.0.0.0/16"
  instance_tenancy = "default"

  tags = {
    Name = "main"
  }
}
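Assuming the resource block above is saved in a file such as main.tf (the file name is an assumption), the usual Terraform workflow applies:

terraform init
terraform plan
terraform apply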

How to Publish VPC Flow Logs to CloudWatch

When publishing to CloudWatch Logs, flow log data is published to a log group, and each network interface has a unique log stream in the log group. Log streams contain flow log records. To publish the logs you need to:

  • Create an IAM role that the VPC Flow Logs service can assume.
  • Attach an IAM trust policy to the IAM role.
  • Create an IAM policy and attach it to the IAM role.
  • Finally, create the VPC flow logs using the AWS CLI.

Create IAM trust Policy for IAM Role

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "vpc-flow-logs.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
} 
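Assuming the trust policy above is saved locally as flow-logs-trust-policy.json (a hypothetical file name), the role referenced later in this post can be created like this:

aws iam create-role --role-name publishFlowLogs --assume-role-policy-document file://flow-logs-trust-policy.json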

Creating IAM Policy to publish VPC Flow Logs to Cloud Watch Logs

The below VPC flow logs policy has sufficient permissions to publish flow logs to the specified log group in CloudWatch Logs.

{
  "Version": "2012-10-17",
  "Statement": [{

     "Effect": "Allow",
     "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
     ],
     "Resource": "*"
  }]

}
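Assuming the permissions policy above is saved locally as flow-logs-policy.json (again a hypothetical file name), attach it to the same role with put-role-policy:

aws iam put-role-policy --role-name publishFlowLogs --policy-name FlowLogsPermissions --policy-document file://flow-logs-policy.json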

Create VPC flow logs using AWS CLI

aws ec2 create-flow-logs --resource-type Subnet --resource-ids subnet-1a2b3c4d --traffic-type ACCEPT --log-group-name my-flow-logs --deliver-logs-permission-arn arn:aws:iam::123456789101:role/publishFlowLogs

Conclusion

You should now have a sound understanding of what AWS VPC is.

What is AWS S3 Bucket?

In this quick tutorial you will learn everything one must know about the AWS storage service, AWS S3.

What is AWS S3 Bucket?

Amazon Simple Storage Service allows you to store objects of any size securely and with good performance and scalability. You can store virtually unlimited data in an AWS S3 bucket. Let's get into some of the important features of the AWS S3 bucket.

  • There are various S3 storage classes which can be used according to your requirements.
  • You can also configure a storage lifecycle, which allows you to manage your objects efficiently and move them to different storage classes.
  • S3 Object Lock: you can lock an object for a particular time so that it is not deleted by mistake.
  • S3 Replication: you can replicate objects to different destinations, whether in different buckets or different Regions.
  • S3 Batch Operations: you can manage a large number of objects in a single API request using batch operations.
  • You can block public access to S3 buckets and objects. By default, Block Public Access settings are turned on at the account and bucket level.
  • You can apply IAM policies to users or roles to access the S3 bucket securely. You can also apply resource-based policies on AWS S3 buckets and objects.
  • You can also apply an access control list on a particular bucket or particular objects.
  • You can disable ACLs and take ownership of every object in your bucket. As the bucket owner, you then have rights on every object in your bucket.
  • You can also use Access Analyzer for S3 to evaluate all the access policies.
  • You can have up to 100 buckets in your AWS account.
  • Once a bucket is created, you are not allowed to change its name or Region afterwards.
  • Every object is identified by a key (its name) and a version ID, and every object in a bucket has exactly one key.

You can access your bucket using both virtual-hosted-style and path-style URLs.

https://bucket-name.s3.region-code.amazonaws.com/key-name  (Virtual Hosted )

https://s3.region-code.amazonaws.com/bucket-name/key-name  ( Path Based )

AWS S3 Bucket Access Control List

  • You can configure bucket ownership and S3 Object Ownership in the AWS S3 bucket-level settings and can disable ACLs so that you own every object.
  • When another AWS account uploads objects into an S3 bucket in your account, that account owns those objects and has access to them; but if you disable ACLs, the bucket owner automatically owns every object in the bucket.

S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to disable access control lists (ACLs) and take ownership of every object in your bucket, simplifying access management for data stored in Amazon S3.

AWS S3 Object Encryption

Amazon S3 encrypts data in transit and at rest. Server-side encryption encrypts the object before saving it and decrypts it when you download it.

  • Server-side encryption with Amazon S3 managed keys (SSE-S3)
  • Server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS)
  • Server-side encryption with customer-provided keys (SSE-C)

Client-side encryption can be done before sending objects to an S3 bucket.

AWS S3 Bucket Policy

An AWS S3 bucket policy is a resource-based policy which allows you to grant permissions on your bucket and objects. Only the bucket owner of that account can associate a policy with the bucket, and bucket policies are based on the standard access policy language.

AWS s3 bucket policy examples

In this section we will go through some examples of bucket policies. With a bucket policy you can secure access to objects in your buckets, so that only users with the appropriate permissions can access them.

S3 bucket policy to require that each object is encrypted with server-side encryption using AWS Key Management Service (AWS KMS) keys (SSE-KMS)

To require server-side encryption of all objects in a particular Amazon S3 bucket, you can use a bucket policy.

{
   "Version":"2012-10-17",
   "Id":"PutObjectPolicy",
   "Statement":[{
         "Sid":"DenyUnEncryptedObjectUploads",
         "Effect":"Deny",
         "Principal":"*",
         "Action":"s3:PutObject",
         "Resource":"arn:aws:s3:::DOC-EXAMPLE-BUCKET1/*",
         "Condition":{
            "StringNotEquals":{
               "s3:x-amz-server-side-encryption":"aws:kms"
            }
         }
      }
   ]
}

S3 bucket policy which requires SSE-KMS with a specific AWS KMS key for all objects written to a bucket

{
"Version": "2012-10-17",
"Id": "PutObjPolicy",
"Statement": [{
  "Sid": "DenyObjectsThatAreNotSSEKMSWithSpecificKey",
  "Principal": "*",
  "Effect": "Deny",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
  "Condition": {
    "ArnNotEqualsIfExists": {
      "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-east-2:111122223333:key/01234567-89ab-cdef-0123-456789abcdef"
    }
  }
}]
}

Grant cross-account permissions to upload objects while ensuring that the bucket owner has full control

{
   "Version":"2012-10-17",
   "Statement":[
     {
       "Sid":"PolicyForAllowUploadWithACL",
       "Effect":"Allow",
       "Principal":{"AWS":"111122223333"},
       "Action":"s3:PutObject",
       "Resource":"arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
       "Condition": {
         "StringEquals": {"s3:x-amz-acl":"bucket-owner-full-control"}
       }
     }
   ]
}

How to remove bucket content completely using aws s3 rm

To remove bucket content completely run the below command.

aws s3 rm s3://bucket-name --recursive

Deleting an AWS S3 bucket – to delete an Amazon S3 bucket (the --force option first removes any remaining objects), run the below command.

aws s3 rb s3://bucket-name --force 

How to transform data with S3 object Lambda

To Transform the data with AWS S3 Object Lambda follow the below steps:

  • Prerequisites
  • Step 1: Create an S3 bucket
  • Step 2: Upload a file to the S3 bucket
  • Step 3: Create an S3 access point
  • Step 4: Create a Lambda function
  • Step 5: Configure an IAM policy for your Lambda function’s execution role
  • Step 6: Create an S3 Object Lambda Access Point
  • Step 7: View the transformed data
  • Step 8: Clean up

List S3 buckets using the AWS S3 CLI command (aws s3 list bucket or aws s3 ls)

To list the contents of a bucket using the AWS CLI, use the below command. It lists all prefixes and objects in the bucket.

aws s3 ls s3://mybucket

AWS S3 Sync

Syncs directories and S3 prefixes. Recursively copies new and updated files from the source directory to the destination. Only creates folders in the destination if they contain one or more files.

The following sync command syncs the files in the current local directory to the specified bucket by uploading the local files to S3.

aws s3 sync . s3://mybucket

AWS S3 cp recursive

To copy objects recursively between a bucket and your local machine, use the cp command with the --recursive flag, as shown below.
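A minimal sketch, assuming a bucket named mybucket and a local directory local-dir (both placeholders), looks like this:

aws s3 cp s3://mybucket ./local-dir --recursive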

aws s3 mv

Moves a local file or S3 object to another location locally or in S3. The following mv command moves a single file to a specified bucket and key.

aws s3 mv test.txt s3://mybucket/test2.txt

Conclusion

In this tutorial we covered important AWS S3 concepts such as its uses, bucket policies, and the features of an AWS S3 bucket.

kubernetes microservice architecture with kubernetes deployment example

In this article we will go through the kubernetes microservice architecture with kubernetes deployment example.

Table of Contents

  1. Prerequisites
  2. kubernetes microservice architecture
  3. Docker run command to deploy Microservice
  4. Preparing kubernetes deployment yaml or kubernetes deployment yml file for Voting App along with kubernetes deployment environment variables
  5. Preparing kubernetes deployment yaml or kubernetes deployment yml file for Redis app along with kubernetes deployment environment variables
  6. Preparing kubernetes deployment yaml or kubernetes deployment yml file for PostgresApp along with kubernetes deployment environment variables
  7. Preparing kubernetes deployment yaml or kubernetes deployment yml file for Worker App along with kubernetes deployment environment variables
  8. Preparing kubernetes deployment yaml or kubernetes deployment yml file for Result App along with kubernetes deployment environment variables
  9. Creating kubernetes nodeport or k8s nodeport or kubernetes service nodeport YAML file
  10. Creating kubernetes clusterip or kubernetes service clusterip YAML file
  11. Running kubernetes service and Kubernetes deployments.
  12. Conclusion

Prerequisites

This will be a step-by-step tutorial. You will need:

  • Ubuntu or Linux machine with Kubernetes cluster running or a minikube.
  • kubectl command installed

kubernetes microservice architecture

In the below kubernetes microservice architecture you will see an application where you cast a vote and the result is displayed based on the votes. Below are the components:

  • Voting app, a Python-based UI app where you cast your vote.
  • In-memory store based on Redis, which stores your vote in memory.
  • Worker app, a .NET-based app, which reads the in-memory vote data and writes it into the Postgres DB.
  • Postgres DB app, based on Postgres, which collects the data and stores it in the database.
  • Result app, a UI-based app, which fetches the data from the DB and displays the votes to the users.

Docker run command to deploy Microservice

We will start this tutorial by showing the Docker commands we would use if we ran all these applications directly in Docker instead of Kubernetes.

docker run -d --name=redis redis

docker run -d --name=db postgres:9.4

docker run -d --name=vote -p 5000:80 --link redis:redis voting-app

docker run -d --name=result -p 5001:80 --link db:db  result-app

docker run -d --name=worker  --link redis:redis --link db:db worker

Preparing kubernetes deployment yaml or kubernetes deployment yml file for Voting App along with kubernetes deployment environment variables

As this tutorial is about deploying all the applications in kubernetes, we will prepare all the YAML files, and at the end of the tutorial we will deploy them using the kubectl command.

In the below deployment file we are creating the voting app; it will run on the pods whose labels match name: voting-app-pod and app: demo-voting-app.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting-app-deploy
  labels:
    name: voting-app-deploy
    app: demo-voting-app
spec:
  replicas: 3
  selector:
    matchLabels:
      name: voting-app-pod
      app: demo-voting-app  
  template:
    metadata:
      name: voting-app-pod
      labels:
        name: voting-app-pod
        app: demo-voting-app
    spec:
      containers:
        - name: voting-app        
          image: kodekloud/examplevotingapp_voting:v1
          resources:
            limits:
              memory: "4Gi"
              cpu: "1"
            requests:
              # requests must not exceed the limits above
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 80
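After applying this file (see the kubectl apply commands at the end of this post), you can confirm that the deployment and its pods were created with a quick label-based query, for example:

kubectl get deployments,pods -l app=demo-voting-app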

Preparing kubernetes deployment yaml or kubernetes deployment yml file for Redis app along with kubernetes deployment environment variables

In the below deployment file we are creating the redis app; it will run on the pods whose labels match name: redis-pod and app: demo-voting-app.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deploy
  labels:
    name: redis-deploy
    app: demo-voting-app
spec:  
    replicas: 1
    selector:
      matchLabels:
        name: redis-pod
        app: demo-voting-app
    template:    
      metadata:
        name: redis-pod
        labels:
          name: redis-pod
          app: demo-voting-app

      spec:
        containers:
          - name: redis
            image: redis
            resources:
              limits:
                memory: "4Gi"
                cpu: "1"
              requests:
                # requests must not exceed the limits above
                memory: "2Gi"
                cpu: "500m"
            ports:
              - containerPort: 6379          

Preparing kubernetes deployment yaml or kubernetes deployment yml file for PostgresApp along with kubernetes deployment environment variables

In the below deployment file we are creating the postgres app; it will run on the pods whose labels match name: postgres-pod and app: demo-voting-app.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deploy
  labels:
    name: postgres-deploy
    app: demo-voting-app
spec:
    replicas: 1
    selector:
      matchLabels:
        name: postgres-pod
        app: demo-voting-app
    template: 
      metadata:
        name: postgres-pod
        labels:
          name: postgres-pod
          app: demo-voting-app
      spec:
        containers:
          - name: postgres
            image: postgres
            resources:
              limits:
                memory: "4Gi"
                cpu: "1"
              requests:
                memory: "2Gi"
                cpu: "500m"
            ports:
              - containerPort: 5432
            env:
              - name: POSTGRES_USER
                value: "postgres"
              - name: POSTGRES_PASSWORD
                value: "postgres"

Preparing kubernetes deployment yaml or kubernetes deployment yml file for Worker App along with kubernetes deployment environment variables

In the below deployment file we are creating the worker app; it will run on the pods whose labels match name: worker-app-pod and app: demo-voting-app.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker-app-deploy
  labels:
    name: worker-app-deploy
    app: demo-voting-app
spec:
  selector:
    matchLabels:
      name: worker-app-pod
      app: demo-voting-app  
  replicas: 3
  template:
    metadata:
      name: worker-app-pod
      labels:
        name: worker-app-pod
        app: demo-voting-app
    spec: 
      containers:
        - name: worker
          image: kodekloud/examplevotingapp_worker:v1
          resources:
            limits:
              memory: "4Gi"
              cpu: "1"
            requests:
              memory: "2Gi"
              cpu: "500m"

Preparing kubernetes deployment yaml or kubernetes deployment yml file for Result App along with kubernetes deployment environment variables

In the below deployment file we are creating the result app; it will run on the pods whose labels match name: result-app-pod and app: demo-voting-app.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: result-app-deploy
  labels:
    name: result-app-deploy
    app: demo-voting-app
spec:
   replicas: 1
   selector:
     matchLabels:
       name: result-app-pod
       app: demo-voting-app
   template:
     metadata:
       name: result-app-pod
       labels:
          name: result-app-pod
          app: demo-voting-app
     spec:
        containers:
          - name: result-app
            image: kodekloud/examplevotingapp_result:v1
            resources:
              limits:
                memory: "4Gi"
                cpu: "1"
              requests:
                memory: "2Gi"
                cpu: "500m"
            ports:
              - containerPort: 80

Creating kubernetes nodeport or k8s nodeport or kubernetes service nodeport YAML file

Now that we have created the deployment files for each application, our voting app and result app need to be exposed to the outside world, so we will declare both of their services as type NodePort as shown below.

kind: Service 
apiVersion: v1 
metadata:
  name: voting
  labels:
    name: voting-service
    app: demo-voting-app
spec:
  type: NodePort
  selector:
    name: voting-app-pod
    app: demo-voting-app
  ports:      
    - port: 80    
      targetPort: 80   
      nodePort: 30004
---
kind: Service 
apiVersion: v1 
metadata:
  name: result
  labels:
    name: result-service
    app: demo-voting-app
spec:
  type: NodePort
  selector:
    name: result-app-pod
    app: demo-voting-app
  ports:      
    - port: 80    
      targetPort: 80   
      nodePort: 30005

Creating kubernetes clusterip or kubernetes service clusterip YAML file

Now that we have created the deployment files for each application, our Redis app and Postgres app only need to be exposed inside the cluster, so we will declare both of their services as type ClusterIP as shown below.

kind: Service 
apiVersion: v1 
metadata:
  name: db
  labels:
    name: postgres-service
    app: demo-voting-app
spec:
  type: ClusterIP
  selector:
    name: postgres-pod
    app: demo-voting-app
  ports:      
    - port: 5432    
      targetPort: 5432   
---
kind: Service 
apiVersion: v1 
metadata:
  name: redis
  labels:
    name: redis-service
    app: demo-voting-app
spec:
  type: ClusterIP
  selector:
    name: redis-pod
    app: demo-voting-app
  ports:      
    - port: 6379    
      targetPort: 6379   

Running Kubernetes services and Kubernetes deployments

Now we will create the Kubernetes deployments and services by applying the manifests with the below commands.

kubectl apply -f postgres-app-deploy.yml
kubectl apply -f redis-app-deploy.yml
kubectl apply -f result-app-deploy.yml
kubectl apply -f worker-app-deploy.yml
kubectl apply -f voting-app-deploy.yml



kubectl apply -f postgres-app-service.yml
kubectl apply -f redis-app-service.yml
kubectl apply -f result-app-service.yml
kubectl apply -f voting-app-service.yml
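
Once the manifests are applied, it is worth checking that everything came up as expected. A minimal verification sketch, assuming kubectl is pointed at your cluster and the deployments and pods carry the app: demo-voting-app label used in the manifests above:

# list the deployments and pods for this app, then the services
kubectl get deployments -l app=demo-voting-app
kubectl get pods -l app=demo-voting-app
kubectl get svc

With the NodePort services above, the voting and result apps should then be reachable on any node's IP at ports 30004 and 30005 respectively.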

Conclusion

In this article we went through the Kubernetes microservices architecture with a complete Kubernetes deployment example.

How to allow only HTTPS requests on AWS S3 buckets using AWS S3 Policy

It is important for your infrastructure to be secure. Similarly, if you wish to secure the contents of your AWS S3 bucket, you need to make sure that you allow only secure requests, i.e. requests that travel over HTTPS.

In this quick tutorial you will learn how to allow only HTTPS requests on an AWS S3 bucket using an AWS S3 bucket policy.

Lets get started.

Prerequisites

  • AWS account
  • One AWS Bucket

Creating AWS S3 bucket Policy for AWS S3 bucket

The below policy contains a single statement which performs the below actions:

  • Version is the policy language version date used in S3 bucket policies.
  • The statement restricts all requests except HTTPS requests on the AWS S3 bucket ( my-bucket ).
  • Deny here means it denies any request that is not sent over a secure transport.
{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RestrictToTLSRequestsOnly",
        "Action": "s3:*",
        "Effect": "Deny",
        "Resource": [
            "arn:aws:s3:::my-bucket",
            "arn:aws:s3:::my-bucket/*"
        ],
        "Condition": {
            "Bool": {
                "aws:SecureTransport": "false"
            }
        },
        "Principal": "*"
    }]
}
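
To attach this policy to the bucket from the command line, one approach (assuming the JSON above is saved as https-only-policy.json and the bucket is named my-bucket) is:

# apply the bucket policy and then read it back to confirm
aws s3api put-bucket-policy --bucket my-bucket --policy file://https-only-policy.json
aws s3api get-bucket-policy --bucket my-bucket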

Conclusion

This tutorial demonstrated how to allow only HTTPS requests on AWS S3 buckets using AWS S3 Policy.

How AWS s3 list bucket and AWS s3 put object

Are you struggling to list your AWS S3 bucket and unable to upload data? If yes, then don’t worry, this tutorial is for you.

In this quick tutorial you will learn how you can list all the AWS Amazon S3 buckets and upload objects into it by assigning IAM policy to a user or a role.

Lets get started.

Prerequisites

  • AWS account
  • One AWS Bucket

Creating IAM policy for AWS S3 to list buckets and put objects

The below policy has two statements which performs the below actions:

  • The first statement allows you to list objects in the AWS S3 bucket named (my-bucket-name).
  • The second statement allows you to perform any object-level action that matches s3:*Object, such as s3:PutObject, s3:GetObject and s3:DeleteObject, on objects inside the AWS S3 bucket named (my-bucket-name).
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListObjectsInBucket",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::my-bucket-name"]
        },
        {
            "Sid": "AllObjectActions",
            "Effect": "Allow",
            "Action": "s3:*Object",
            "Resource": ["arn:aws:s3:::my-bucket-name/*"]
        }
    ]
}
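
To put this policy to use, you can create it as a customer managed IAM policy, attach it to a user or role, and then test the access. A rough sketch, assuming the JSON above is saved as s3-list-put-policy.json and a user named example-user exists (policy name, user name, and account ID are placeholders):

# create the managed policy and attach it to the user (replace <account-id>)
aws iam create-policy --policy-name S3ListAndPutObjects --policy-document file://s3-list-put-policy.json
aws iam attach-user-policy --user-name example-user --policy-arn arn:aws:iam::<account-id>:policy/S3ListAndPutObjects

# as that user, list the bucket and upload an object
aws s3 ls s3://my-bucket-name
aws s3 cp ./test.txt s3://my-bucket-name/test.txt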

Conclusion

This tutorial demonstrated how you can list an AWS S3 bucket and upload objects into it by assigning an IAM policy to a user or a role.

How to Deny IP addresses to Access AWS Cloud using AWS IAM policy with IAM policy examples

Do you know you can restrict certain IP addresses from accessing AWS services with a single policy?

In this quick tutorial you will learn how to deny IP addresses using an AWS IAM policy, with IAM policy examples.

Lets get started.

Prerequisites

  • AWS account
  • Permissions to create IAM Policy

Lets describe the below IAM policy in the AWS Cloud.

  • Version is the policy language version, which is a fixed date.
  • Effect is Deny in the statement, because we want to block requests that do not originate from the approved IP addresses.
  • Resource is the * wildcard because we want the deny to apply to all AWS services and resources.
  • This policy denies access to the AWS cloud from every IP address except the listed ones, using the NotIpAddress condition. The aws:ViaAWSService condition key (set to false) excludes requests that an AWS service makes to another service on your behalf, so those calls are not blocked.
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "NotIpAddress": {
                "aws:SourceIp": [
                    "192.0.2.0/24",
                    "203.0.113.0/24"
                ]
            },
            "Bool": {"aws:ViaAWSService": "false"}
        }
    }
}
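
To create this as a customer managed policy from the CLI (assuming the JSON above is saved as deny-ip-policy.json; the policy name is a placeholder), something like the following should work. You would then attach it to the users, groups, or roles you want to restrict:

aws iam create-policy --policy-name DenyAccessOutsideAllowedIPs --policy-document file://deny-ip-policy.json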

Conclusion

This tutorial demonstrated how to deny IP addresses using an AWS IAM policy, with IAM policy examples.

What is Amazon EC2 in AWS?

If you are looking to start your career in the AWS cloud, then knowing your first service, AWS EC2, can give you a good understanding of the compute resources in the AWS cloud. With AWS EC2 you will also understand which other services build on top of it.

Lets get started with learning AWS EC2.

Table of Content

  1. Amazon EC2 (AWS Elastic compute Cloud)
  2. Amazon EC2 (AWS Elastic compute Cloud)
  3. Pricing of Amazon Linux 2
  4. Configure SSL/TLS on Amazon Linux 2
  5. How to add extra AWS EBS Volumes to an AWS EC2 instance
  6. AMI (Amazon Machine Image)
  7. Features of AMI
  8. AMI Lifecycle
  9. Creating an Amazon EBS Backed Linux AMI
  10. Creating an Instance Store backed Linux AMI
  11. Copying an Amazon AMI
  12. Storing and restoring an Amazon AMI
  13. Amazon Linux 2
  14. AWS Instances
  15. Stop/Start Instance EBS Backed instance
  16. Reboot AWS EC2 Instance
  17. Hibernated Instance ( EBS Backed instance)
  18. Terminated Instance EBS Backed instance
  19. AWS Instance types
  20. AWS Instance Lifecycle
  21. Monitoring AWS EC2 instance
  22. Cloud-init
  23. AWS EC2 Monitoring
  24. AWS EC2 Networking
  25. Local Zones
  26. AWS Wavelength
  27. Elastic Network Interface
  28. Configure your network interface using ec2-net-utils for Amazon Linux
  29. IP Address
  30. Assign a secondary private IPv4 address
  31. What is Elastic IP address?
  32. Associate an Elastic IP address with the secondary private IPv4 address
  33. Conclusion

Amazon EC2 (AWS Elastic compute Cloud)

Amazon EC2 stands for Amazon Elastic compute cloud that allows you to launch servers or virtual machines that are scalable in the Amazon Web service cloud. Also, with AWS EC2 instance, you don’t require to invest in any hardware or electricity costs, and you just pay for what you use.

When required, you can quickly decrease or scale up the number of AWS EC2 instances.

  • Instances require an operating system, additional software, etc. to get launched, so they are launched from templates known as Amazon Machine Images (AMIs).
  • You can work with various compute configurations, such as memory or CPU; for that you will need to select the appropriate instance type.
  • To securely log in to these instances you will need to generate a key pair, where you store the private key and AWS stores the public key.
  • An instance can have two types of storage, i.e. the instance store, which is temporary, and Amazon Elastic Block Store volumes, also known as EBS volumes.

Amazon EC2 (AWS Elastic compute Cloud)

  • Provides scalable computing capacity in the Amazon Web Services cloud. You don’t need to invest in hardware up front, and it takes a few minutes to launch your virtual machine and deploy your applications.
  • You can use preconfigured templates known as Amazon Machine Images (AMIs) that include the OS and additional software. The launched machines are known as instances, and instances come with various compute configurations, such as CPU and memory, known as the instance type.
  • To securely log in you need key pairs, where the public key is stored with AWS and the private key is stored with the customer. A key pair uses either the RSA or ED25519 type; Windows instances don’t support ED25519.
  • To use a key on mac or Linux computer grant the following permissions:
 chmod 400 key-pair-name.pem
  • Storage volumes for temporary data can use Instance store volumes however when you need permanent data then consider using EBS i.e., Elastic block store.
  • To secure your Instance consider using security groups.
  • If you need to allocate the static IP address to an instance, then consider using Elastic address.
  • Your instance can be an EBS-backed instance or an instance store-backed instance, which means the root volume can be either EBS or the instance store. Instance store-backed instances are either running or terminated but cannot be stopped. Also, instance attributes such as RAM and CPU cannot be changed.
  • Instances launched from an Amazon EBS-backed AMI launch faster than instances launched from an instance store-backed AMI
  • When you launch an instance from an instance store-backed AMI, all the parts have to be retrieved from Amazon S3 before the instance is available. With an Amazon EBS-backed AMI, only the parts required to boot the instance need to be retrieved from the snapshot before the instance is available
  • Use Amazon Inspector to automatically discover software vulnerabilities and unintended network exposure.
  • Use Trusted advisor to inspect your environment.
  • Use separate Amazon EBS volumes for the operating system versus your data.
  • Encrypt EBS volumes and snapshots.
  • Regularly back up your EBS volumes using EBS Snapshots, create AMI’s from your instance.
  • Deploy critical applications across multiple AZ’s.
  • Set the TTL to 255 or close to it on your application side so that connections stay intact; otherwise it can cause reachability issues.
  • When you install Apache, the document root is the /var/www/html directory and by default only the root user has write access to it. If you want another user to be able to manage the files under this directory, perform the steps below. Let’s assume the user is ec2-user.
sudo usermod -a -G apache ec2-user  # Logout and login back
sudo chown -R ec2-user:apache /var/www
sudo chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775 {} \;  # For Future files

Pricing of Amazon Linux 2

There are different plans available for different EC2 instance such as:

  • On-Demand Instances: No long-term commitments; you pay per second, with a minimum billing period of 60 seconds.
  • Savings Plans: You commit to a consistent amount of usage for a 1-year or 3-year term.
  • Reserved Instances: You reserve a specific instance configuration for a 1-year or 3-year term.
  • Spot Instances: If you need cheap capacity, you can use spare, unused EC2 capacity at a discount.

Configure SSL/TLS on Amazon Linux 2

  • SSL/TLS creates an encrypted channel between a web server and web client that protects data in transit from being eavesdropped on.  
  • Make sure you have an EBS-backed Amazon Linux 2 instance with Apache installed. TLS Public Key Infrastructure (PKI) relies on DNS, so also make sure to register a domain for your EC2 instance.
  • Nowadays TLS 1.2 and 1.3 are the versions in use; make sure the underlying TLS library is supported and enabled.
  • Enable TLS on the server by installing the Apache SSL module using the below command, followed by configuring it.
sudo yum install -y mod_ssl

sudo vi /etc/httpd/conf.d/ssl.conf

  • Generate a self-signed certificate using the make-dummy-cert script:
cd /etc/pki/tls/certs
sudo ./make-dummy-cert localhost.crt

How to add extra AWS EBS Volumes to an AWS EC2 instance

Basically, this section is about adding an extra volume to an instance. There are two types of volumes: the first is the root volume and the other is an extra (EBS) volume which you can add. To add the extra volume to an AWS EC2 instance, below are the steps:

  • Launch one AWS EC2 instance and while launching under Configure storage, choose Add new volume. Ensure that the added EBS volume size is 8 GB, and the type is gp3. AWS EC2 instance will have two volumes one for root and other added storage.
  • Before modifying or updating the volume, make sure to take the snapshot of current vol by navigating to storage tab under EC2 and then block devices, volume ID.
  • Now create a file system and attach it to non-mounted EBS volume by running the following command.
sudo mkfs -t xfs /dev/nvme1n1
sudo mkdir /data
sudo mount /dev/nvme1n1 /data
lsblk -f
  • Now, again on AWS EC2 instance go to volume ID, click on Modify the Volume by changing the volume ID.
  • Extend the file system by first checking the size of the file system.
df -hT
  • Now to extend use the command:
sudo xfs_growfs -d /data
  • Again, check the file system size by running the (df -hT) command.

AMI (Amazon Machine Image)

  • You can launch multiple instances using the same AMI. An AMI includes EBS snapshots, or, for instance store-backed AMIs, a template of the root volume containing the OS and software.

To Describe the AMI you can run the below command.

aws ec2 describe-images \
    --region us-east-1 \
    --image-ids ami-1234567890EXAMPLE

Features of AMI

  • You can create an AMI using snapshot or a template.
  • You can deregister the AMI as well.
  • AMI’s are either EBS backed or instance backed.
    • With EBS-backed AMIs, the root volume is deleted when the instance is terminated, while other attached EBS volumes are not deleted by default.
  • When you launch an instance from an instance store-backed AMI, all the parts have to be retrieved from Amazon S3 before the instance is available.
  • With an Amazon EBS-backed AMI, only the parts required to boot the instance need to be retrieved from the snapshot before the instance is available
  • Cost of EBS backed Instance are less because only changes are stored but in case of Instance store backed instances each time customized AMI is stored in AWS S3.
  • AMIs use two types of virtualization: paravirtual (PV) or hardware virtual machine (HVM), of which HVM is the better performer.
  • HVM are treated like actual physical disks. The boot process is similar to bare metal operating system.
    • The most common HVM bootloader is GRUB or GRUB2.
    • HVM boots by executing master boot record of root block device of your image.
    • HVM allows you to run OS on top of VM as if its bare metal hardware.
    • HVM can take advantage of hardware extensions such as enhanced networking or GPU Processing
  • PV boots with special boot loader called PV-GRUB.
    • PV runs on hardware that doesn’t have explicit support for virtualization.
    • PV cannot take advantage of hardware extensions.
    • All current regions and instance generations support HVM AMIs; however, this is not true for PV.
  • The first component to load when you start a system is the firmware: Intel and AMD instance types run on Legacy BIOS or Unified Extensible Firmware Interface (UEFI), while Graviton instances use UEFI. To check the boot mode of an AMI, run the below command. Note: To check the boot mode of an instance you can run the describe-instances command.
aws ec2 --region us-east-1 describe-images --image-ids ami-0abcdef1234567890
  • To check the boot mode of Operating system, SSH into machine and then run the below command.
sudo /usr/sbin/efibootmgr
  • To set the boot mode you can do that while registering an image not while creating an image.
  • Shared AMI: These are created by developers and made available for others to use.
  • You can deprecate or Deregister the AMI anytime.
  • Recycle Bin is a data recovery feature that enables you to restore accidentally deleted Amazon EBS snapshots and EBS-backed AMIs. Provided you have permissions such as ec2:ListImagesInRecycleBin and ec2:RestoreImageFromRecycleBin

AMI Lifecycle

You can create two types of AMIs:

Creating an Amazon EBS Backed Linux AMI

  • Launch an instance (instance1) using an AMI (Marketplace, your own AMI, a public AMI, or a shared AMI).
  • Customize the instance by adding software, etc.
  • Create a new image from the customized instance. When you create a new image, you create a new AMI as well. Amazon EC2 creates snapshots of your instance’s root volume and any other EBS volumes attached to your instance.
  • Launch another instance (instance2) from the new AMI.

Creating an Instance Store backed Linux AMI

  • Launch an instance (instance1) from an instance store-backed AMI only.
  • SSH into the instance and customize it.
  • Bundle it; the bundle contains the image manifest and files that contain a template for the root volume. Bundling might take a few minutes.
  • Next, upload the bundle to AWS S3.
  • Now, register your AMI.

Note 1: To create and manage instance store-backed Linux AMIs you will need the AMI tools, the AWS CLI, and an AWS S3 bucket.

Note 2: You can’t convert an instance store-backed Windows AMI to an Amazon EBS-backed Windows AMI and you cannot convert an AMI that you do not own.

Copying an Amazon AMI

  • You can copy AMI’s within region or across regions
  • You can also copy AMI along with encrypted snapshot.
  • When you copy an AMI, the target AMI gets its own identifier.
  • Make sure your IAM principal has the permissions to copy AMI.
  • Provide or update Bucket policy so that new AMI can be copied successfully.
  • You can copy an AMI in another region
  • You can copy an AMI in another account. For copying the AMI across accounts make sure you have all the permissions such as Bucket permission, key permissions and snapshot permissions.
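
As an illustration, a cross-Region copy can be done from the AWS CLI roughly as follows (the AMI ID, Regions, and name are placeholders):

# copy an AMI from us-east-1 into us-west-2; the command returns the new AMI ID
aws ec2 copy-image \
    --source-region us-east-1 \
    --source-image-id ami-0abcdef1234567890 \
    --region us-west-2 \
    --name "my-copied-ami"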

Storing and restoring an Amazon AMI

  • You can store AMIs in an AWS S3 bucket by using the CreateStoreImageTask API.
  • To monitor the progress of the stored AMI use DescribeStoreImageTasks.
  • You can copy the stored AMI object to another bucket.
  • You can restore only EBS backed AMI’s using CreateRestoreImageTask.
  • To store and restore AMI the S3 bucket must be in same region.
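
A minimal CLI sketch of the store flow (the bucket name and AMI ID are placeholders, and the bucket must be in the same Region as the AMI):

# store the AMI as an object in the S3 bucket, then check the task progress
aws ec2 create-store-image-task --image-id ami-0abcdef1234567890 --bucket my-ami-bucket
aws ec2 describe-store-image-tasks --image-ids ami-0abcdef1234567890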

Amazon Linux 2

  • It supports kernel 4.14 and 5.10. You can also upgrade it to 5.15 version. It allows greater parallelism and scalability.
  • New improvements in EXT file system such as large files can be managed easily.
  • DAMON is better supported as the data access monitoring for better memory and performance analysis.
  • To install and verify by upgrading kernel use below command.
sudo amazon-linux-extras install kernel-5.15
  • The cloud-init package is an open-source application built by Canonical that is used to bootstrap Linux images in a cloud computing environment, such as Amazon EC2. It enables you to specify actions that should happen to your instance at boot time.
  • Amazon Linux also uses the cloud-init package to perform initial configuration of the ec2-user account, set the hostname, generate host SSH keys, prepare repositories for package management, and add the user’s public key to authorized_keys.
  • Amazon Linux uses the cloud-init actions found in /etc/cloud/cloud.cfg.d and /etc/cloud/cloud.cfg. You can create your own cloud-init action files in /etc/cloud/cloud.cfg.d.

AWS Instances

An instance is a virtual server in the cloud. Instance type essentially determines the hardware of the host computer used for your instance. Each instance type offers different compute and memory capabilities.

The root device for your instance contains the image used to boot the instance. The root device is either an Amazon Elastic Block Store (Amazon EBS) volume or an instance store volume.

Your instance may include local storage volumes, known as instance store volumes, which you can configure at launch time with block device mapping

Stop/Start Instance EBS Backed instance:

  • All the storage and EBS Volumes remains as it is ( they are stopped not deleted).
  • You are not charged for the instance when it is in stopped stage.
  • All the EBS volumes including root device usage are billed.
  • During the instance in stopped stage you can attach or detach EBS volumes.
  • You can create AMI’s during stopped state and you can also configure few instance configurations such as kernel, RAM Disk and instance type.
  • The Elastic IP address remains associated with the instance
  • The instance usually moves to a new host computer when it is started again
  • The RAM is erased
  • Instance store volumes data is erased
  • You stop incurring charges for an instance as soon as its state changes to stopping

Reboot AWS EC2 Instance

  • The instance stays on the same host computer
  • The Elastic IP address remains associated with the instance
  • The RAM is erased
  • Instance store volumes data is preserved

Hibernated Instance ( EBS Backed instance)

  • The Elastic IP address remains associated with the instance
  • We move the instance to a new host computer
  • The RAM is saved to a file on the root volume
  • Instance store volumes data is erased
  • You incur charges while the instance is in the stopping state, but stop incurring charges when the instance is in the stopped state

Terminated Instance EBS Backed instance:

  • The root volume device is deleted but any other EBS volumes are preserved.
  • Instances are also terminated and cannot be started again.
  • You are not charged for the instance once it is terminated.
  • The Elastic IP address is disassociated from the instance

AWS Instance types

  • General Purpose: These instances provide an ideal cloud infrastructure, offering a balance of compute, memory, and networking resources for a broad range of applications that are deployed in the cloud.
  • Compute Optimized instances: Compute optimized instances are ideal for compute-bound applications that benefit from high-performance processors.
  • Memory optimized instances:  Memory optimized instances are designed to deliver fast performance for workloads that process large data sets in memory.
  • Storage optimized instances: Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.

Note:  EBS-optimized instances enable you to get consistently high performance for your EBS volumes by eliminating contention between Amazon EBS I/O and other network traffic from your instance.

You can enable enhanced networking on supported instance types to provide lower latencies, lower network jitter, and higher packet-per-second (PPS) performance

AWS Instance Lifecycle

  • Note: You cannot stop and then start an instance store-backed instance.
  • An instance launched from an AMI moves through the following states: pending → running (where it can be rebooted, or stopped and started again) → shutting-down → terminated.

Amazon EC2 instances support multithreading, which enables multiple threads to run concurrently on a single CPU core. Each thread is represented as a virtual CPU (vCPU) on the instance. An instance has a default number of CPU cores, which varies according to instance type. For example, an m5.xlarge instance type has two CPU cores and two threads per core by default—four vCPUs in total.

  • Number of CPU cores: You can customize the number of CPU cores for the instance. You might do this to potentially optimize the licensing costs of your software with an instance that has sufficient amounts of RAM for memory-intensive workloads but fewer CPU cores.
  • Threads per core: You can disable multithreading by specifying a single thread per CPU core. You might do this for certain workloads, such as high performance computing (HPC) workloads.
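
These CPU options are set at launch time. A minimal sketch of launching with one thread per core using the AWS CLI (the AMI ID, instance type, and key name are placeholders):

# launch an m5.xlarge with multithreading disabled (1 thread per core)
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type m5.xlarge \
    --cpu-options CoreCount=2,ThreadsPerCore=1 \
    --key-name my-key-pair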

Monitoring AWS EC2 instance

You can monitor AWS EC2 instances either manually or automatically. Lets discuss a few of the automated monitoring tools first.

  • System status checks
  • Instance status checks
  • Amazon CloudWatch alarms
  • Amazon EventBridge
  • Amazon CloudWatch Logs
  • CloudWatch agent

Now, lets discuss a few of the manual tools to monitor an AWS EC2 instance.

  • Amazon EC2 Dashboard
  • Amazon CloudWatch Dashboard
  • Instance status checks on the EC2 Dashboard
  • Scheduled events on the EC2 Dashboard

Cloud-init

It is used to bootstrap the Linux images in cloud computing environment.  Amazon Linux also uses cloud-init to perform initial configuration of the ec2-user account. Amazon Linux uses the cloud-init actions found in /etc/cloud/cloud.cfg.d and /etc/cloud/cloud.cfg and you can also add your own actions in this file.

The tasks that are performed by default by this script.

  • Set the default locale.
  • Set the hostname.
  • Parse and handle user data.
  • Generate host private SSH keys.
  • Add a user’s public SSH keys to .ssh/authorized_keys for easy login and administration.
  • Prepare the repositories for package management.
  • Handle package actions defined in user data.
  • Execute user scripts found in user data.

AWS EC2 Monitoring

  • By default, AWS EC2 sends metrics to CloudWatch every 5 mins.
  • To send metric data for your instance to CloudWatch in 1-minute periods, you can enable detailed monitoring on the instance, but note that you are charged per metric that is sent to CloudWatch.
  • To list all the metrics of a particular AWS EC2 instance use the below command.
aws cloudwatch list-metrics --namespace AWS/EC2 --dimensions Name=InstanceId,Value=i-1234567890abcdef0

To create CloudWatch alarms, you can select the instance and choose Actions > Monitor and troubleshoot > Manage CloudWatch alarms.

  • You can use Amazon EventBridge to automate your AWS services and respond automatically to system events, such as application availability issues or resource changes.
  • Events from AWS services are delivered to EventBridge in near real time. For example, you can activate a Lambda function whenever an instance enters the running state: create a rule that matches EC2 instance state-change events, and once such an event is generated, EventBridge runs the Lambda function.
  • You can use the CloudWatch agent to collect both system metrics and log files from Amazon EC2 instances and on-premises servers.
sudo yum install amazon-cloudwatch-agent

AWS EC2 Networking

If you require a persistent public IP address, you can allocate an Elastic IP address for your AWS account and associate it with an instance or a network interface.

To increase network performance and reduce latency, you can launch instances in a placement group.
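
For example, a cluster placement group can be created and used at launch roughly like this (the group name, AMI ID, and instance type are placeholders):

# create a cluster placement group and launch an instance into it
aws ec2 create-placement-group --group-name my-cluster-pg --strategy cluster
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type c5.large \
    --placement GroupName=my-cluster-pg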

Local Zones

A Local Zone is an extension of an AWS Region in geographic proximity to your users. Local Zones have their own connections to the internet and support AWS Direct Connect, so that resources created in a Local Zone can serve local users with low-latency communications.

AWS Wavelength

AWS Wavelength enables developers to build applications that deliver ultra-low latencies to mobile devices and end users. Wavelength deploys standard AWS compute and storage services to the edge of telecommunication carriers’ 5G networks. Developers can extend a virtual private cloud (VPC) to one or more Wavelength Zones, and then use AWS resources like Amazon EC2 instances to run applications that require ultra-low latency and a connection to AWS services in the Region.

Elastic Network Interface

  • An ENI is basically a virtual network adapter which has the following attributes:
    • 1 primary private IPv4
    • 1 or more secondary private IPv4
    • 1 Elastic IP per private IP
    • One Public IPv4 address
    • 1 Mac address
    • You can create and configure network interfaces and attach them to instances in the same Availability Zone.
    • A typical instance has just one ENI (network adapter); however, some instance types have multiple adapters.
    • Each instance has a default network interface, called the primary network interface.
  • Instances with multiple network cards provide higher network performance, including bandwidth capabilities above 100 Gbps and improved packet rate performance. All the instances have mostly one network card which has further ENI’s.
  • Certain instance types support multiple network cards.
  • You can attach a network interface to an instance when it’s running (hot attach), when it’s stopped (warm attach), or when the instance is being launched (cold attach).
  • You can detach secondary network interfaces when the instance is running or stopped. However, you can’t detach the primary network interface.

Configure your network interface using ec2-net-utils for Amazon Linux

Amazon Linux ships with an additional script, ec2-net-utils, that is installed by AWS. If it is not present, you can install it with the following command.

sudo yum install ec2-net-utils

To list the configuration files that are generated can be checked using the below command:

ls -l /etc/sysconfig/network-scripts/*-eth?

IP Address

  • You can specify multiple private IPv4 and IPv6 addresses for your instances.
  • You can assign a secondary private IPv4 address to any network interface. The network interface does not need to be attached to the instance.
  • Secondary private IPv4 addresses that are assigned to a network interface can be reassigned to another one if you explicitly allow it.
  • Although you can’t detach the primary network interface from an instance, you can reassign the secondary private IPv4 address of the primary network interface to another network interface.
  • Each private IPv4 address can be associated with a single Elastic IP address, and vice versa.
  • When a secondary private IPv4 address is reassigned to another interface, the secondary private IPv4 address retains its association with an Elastic IP address.
  • When a secondary private IPv4 address is unassigned from an interface, an associated Elastic IP address is automatically disassociated from the secondary private IPv4 address.

Assign a secondary private IPv4 address

  • In the EC2 console, choose Network Interfaces and select the interface.
  • Assign a new secondary private IP address to it.
  • Verify it again in the EC2 instance Networking tab.
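
The same can be done from the AWS CLI. A rough sketch, assuming eni-0123456789abcdef0 is a placeholder for your network interface ID:

# add one automatically assigned secondary private IPv4 address to the interface
aws ec2 assign-private-ip-addresses \
    --network-interface-id eni-0123456789abcdef0 \
    --secondary-private-ip-address-count 1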

What is Elastic IP address?

  • It is a static IP address.
  • It is region specific and cannot be moved to another region.
  • The first step is to allocate it to your account.
  • When you associate an Elastic IP address with an instance, it is also associated with the instance’s primary network interface.

Associate an Elastic IP address with the secondary private IPv4 address

  • In the navigation pane, choose Elastic IPs.
  • Again verify in EC2 instance networking tab
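
From the AWS CLI, the equivalent is roughly the following (the allocation ID, interface ID, and private IP are placeholders):

# allocate an Elastic IP, then associate it with a specific secondary private IPv4 address
aws ec2 allocate-address --domain vpc
aws ec2 associate-address \
    --allocation-id eipalloc-0123456789abcdef0 \
    --network-interface-id eni-0123456789abcdef0 \
    --private-ip-address 10.0.0.25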

Conclusion

In this long, ultimate guide we learned everything one must know about AWS EC2 in the AWS Cloud.

AWS KMS Keys

If you need to secure the data in your AWS Cloud account, then you must know everything about AWS KMS keys.

In this tutorial we will learn everything we should know about AWS KMS keys and how to call these AWS KMS keys in IAM Policies.

Table of Content

  1. AWS KMS (Key Management Service)
  2. Symmetric Encryption KMS Keys
  3. Asymmetric KMS keys
  4. Data keys
  5. Custom key stores
  6. Key material
  7. Key policies in AWS KMS
  8. Default Key Policy
  9. Allowing user to access KMS keys with Key Policy
  10. Allowing Users and Roles to access KMS keys with Key Policy
  11. Access KMS Key by User in different account
  12. Creating KMS Keys
  13. What is Multi-region KMS Keys?
  14. Key Store and Custom Key Store
  15. How to Encrypt your AWS RDS using AWS KMS keys
  16. Encrypt AWS DB instance using AWS KMS keys
  17. Encrypting the AWS S3 bucket using AWS KMS Keys
  18. Applying Server-side Encryption on AWS S3 bucket
  19. Configure AWS S3 bucket to use S3 Bucket Key with Server Side E-KMS for new objects
  20. Client Side Encryption on AWS S3 Bucket
  21. Conclusion

AWS KMS (Key Management Service)

KMS is a managed service that makes it easy to create and control the cryptographic keys that protect your data by encrypting and decrypting it. KMS uses hardware security modules to protect and validate your keys.

KMS Keys contains a reference to the key material that is used when you perform cryptographic operations with the KMS key. Also, you cannot delete this key material; you must delete the KMS key. A KMS key contains metadata, such as the key ID, key spec, key usage, creation date, description, and key state. Key identifiers act like names for your KMS keys.

Key ID: It acts like a name, for example 1234abcd-12ab-34cd-56ef-1234567890ab.

Note: A cryptographic key is a string of bits used by a cryptographic algorithm to transform plain text into cipher text or vice versa. This key remains private and ensures secure communication.

  • The KMS keys that are created by us are customer managed keys. You have control over the key policies, enabling and disabling the key, rotating key material, adding tags, and creating aliases. When you create an AWS KMS key, by default you get a KMS key for symmetric encryption.
    • Symmetric encryption keys are used in symmetric encryption, where the same key is used for encryption and decryption.
    • An asymmetric KMS key represents a mathematically related public key and private key pair.
  • The KMS keys that are created automatically by AWS are AWS managed keys. Their aliases are represented as aws/redshift, etc. AWS managed keys are rotated automatically every year.
  • AWS owned keys are a collection of KMS keys that an AWS service owns and manages for use in multiple AWS accounts. Although AWS owned keys are not in your AWS account, an AWS service can use an AWS owned key to protect the resources in your account.
  • Alias: A user friendly name given to KMS key is an alias. For example: alias/ExampleAlias
  • custom key store is an AWS KMS resource backed by a key manager outside of AWS KMS that you own and manage
  • cryptographic operations are API operations that use KMS keys to protect data.
  • Key material is the string of bits used in a cryptographic algorithm.
  • A key policy determines who can manage the KMS key and who can use it; it is the resource policy attached to the KMS key. The key policy is always defined in the AWS account and Region that owns the KMS key.
  • All IAM policies that are attached to the IAM user or role making the request. IAM policies that govern a principal’s use of a KMS key are always defined in the principal’s AWS account.

Symmetric Encryption KMS Keys

When you create an AWS KMS key, by default, you get a KMS key for symmetric encryption. Symmetric key material never leaves AWS KMS unencrypted. To use a symmetric encryption KMS key, you must call AWS KMS. Symmetric encryption keys are used in symmetric encryption, where the same key is used for encryption and decryption.

AWS services that are integrated with AWS KMS use only symmetric encryption KMS keys to encrypt your data. These services do not support encryption with asymmetric KMS keys. 

You can use a symmetric encryption KMS key in AWS KMS to encrypt, decrypt, and re-encrypt data, and generate data keys and data key pairs.

When you call the Encrypt API operation, the request and response look as follows:

Request Syntax:

{
   "EncryptionAlgorithm": "string",
   "EncryptionContext": {
      "string" : "string"
   },

   "GrantTokens": [ "string" ],
   "KeyId": "string",
   "Plaintext": blob
}
Response Syntax

{
   "CiphertextBlob": blob,
   "EncryptionAlgorithm": "string",
   "KeyId": "string"
}
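
The same Encrypt and Decrypt operations can be exercised from the AWS CLI. A minimal sketch, assuming a key with the alias alias/my-test-key exists and AWS CLI v2 is used (v2 expects binary blobs via fileb:// and returns them base64-encoded):

# encrypt a local file and store the ciphertext
aws kms encrypt --key-id alias/my-test-key --plaintext fileb://message.txt \
    --query CiphertextBlob --output text | base64 --decode > message.enc

# decrypt it again; the key ID is stored in the ciphertext metadata
aws kms decrypt --ciphertext-blob fileb://message.enc \
    --query Plaintext --output text | base64 --decode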

Asymmetric KMS keys

You can create asymmetric KMS keys in AWS KMS. An asymmetric KMS key represents a mathematically related public key and private key pair. The private key never leaves AWS KMS unencrypted.

Data keys

Data keys are symmetric keys you can use to encrypt data, including large amounts of data and other data encryption keys. Unlike symmetric KMS keys, which can’t be downloaded, data keys are returned to you for use outside of AWS KMS.

Custom key stores

custom key store is an AWS KMS resource backed by a key manager outside of AWS KMS that you own and manage. When you use a KMS key in a custom key store for a cryptographic operation

Key material

Key material is the string of bits used in a cryptographic algorithm. Secret key material must be kept secret to protect the cryptographic operations that use it. Public key material is designed to be shared. You can use key material that AWS KMS generates, key material that is generated in the AWS CloudHSM cluster of a custom key store, or import your own key material.

Key policies in AWS KMS

A key policy is a resource policy for an AWS KMS key. Key policies are the primary way to control access to KMS keys. Every KMS key must have exactly one key policy. The statements in the key policy determine who has permission to use the KMS key and how they can use it. You can also use IAM policies and grants to control access to the KMS key, but every KMS key must have a key policy.

Unless the key policy explicitly allows it, you cannot use IAM policies to allow access to a KMS key. Without permission from the key policy, IAM policies that allow permissions have no effect. Unlike IAM policies, which are global, key policies are Regional

Default Key Policy

As soon as you create the KMS keys, the default key policy is also created which gives the AWS account that owns the KMS key full access to the KMS key. It also allows the account to use IAM policies to allow access to the KMS key, in addition to the key policy.

{
  "Sid": "Enable IAM policies",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:root"
   },
  "Action": "kms:*",
  "Resource": "*"
}

Allowing user to access KMS keys with Key Policy

You can create and manage key policies in the AWS KMS console or by using the KMS API operations. First you need to allow users, roles, or admins in the key policy to use the KMS keys. As shown below, the key policy allows the user Alice in account 111122223333 to call kms:DescribeKey on the KMS key.

Note: for users to actually access KMS you also need to create separate IAM policies.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Describe the policy statement",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/Alice"
      },
      "Action": "kms:DescribeKey",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "kms:KeySpec": "SYMMETRIC_DEFAULT"
        }
      }
    }
  ]
}
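
To apply a key policy like this from the command line you can use put-key-policy. A sketch, assuming the JSON above is saved as key-policy.json and the key ID is a placeholder:

aws kms put-key-policy \
    --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
    --policy-name default \
    --policy file://key-policy.json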

Allowing Users and Roles to access KMS keys with Key Policy

First you need to allow users, roles, or admins in the key policy to use the KMS keys. For users to access KMS you also need to create separate IAM policies. For example, the below policy allows the root of account 111122223333 full access, and allows the role myRole in account 111122223333 to administer and use the KMS key.

{
    "Id": "key-consolepolicy",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Enable IAM User Permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": "kms:*",
            "Resource": "*"
        },
        {
            "Sid": "Allow access for Key Administrators",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/myRole"
            },
            "Action": [
                "kms:Create*",
                "kms:Describe*",
                "kms:Enable*",
                "kms:List*",
                "kms:Put*",
                "kms:Update*",
                "kms:Revoke*",
                "kms:Disable*",
                "kms:Get*",
                "kms:Delete*",
                "kms:TagResource",
                "kms:UntagResource",
                "kms:ScheduleKeyDeletion",
                "kms:CancelKeyDeletion"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow use of the key",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/myRole"
            },
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow attachment of persistent resources",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/myRole"
            },
            "Action": [
                "kms:CreateGrant",
                "kms:ListGrants",
                "kms:RevokeGrant"
            ],
            "Resource": "*",
            "Condition": {
                "Bool": {
                    "kms:GrantIsForAWSResource": "true"
                }
            }
        }
    ]
}

Access KMS Key by User in different account

In this section we will go through an example where AWS KMS key is present in Account 2 and user from Account 1 named Bob needs to access it. [Access KMS Key in Account 2 by User bob in Account 1]

  • User bob needs to assume role (engineering) in Account 1.
{
    "Role": {
        "Arn": "arn:aws:iam::111122223333:role/Engineering",
        "CreateDate": "2019-05-16T00:09:25Z",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": {
                "Principal": {
                    "AWS": "arn:aws:iam::111122223333:user/bob"
                },
                "Effect": "Allow",
                "Action": "sts:AssumeRole"
            }
        },
        "Path": "/",
        "RoleName": "Engineering",
        "RoleId": "AROA4KJY2TU23Y7NK62MV"
    }
}
  • Attach an IAM policy to the IAM role (Engineering) in Account 1. The policy allows the role to use the KMS key in the other account (Account 2).
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncryptFrom",
                "kms:ReEncryptTo",
                "kms:GenerateDataKey",
                "kms:GenerateDataKeyWithoutPlaintext",
                "kms:DescribeKey"
            ],
            "Resource": [
                "arn:aws:kms:us-west-2:444455556666:key/1234abcd-12ab-34cd-56ef-1234567890ab"
            ]
        }
    ]
}
  • Now, in Account 2, create a KMS key policy that allows Account 1 to use this KMS key, as shown below.
{
    "Id": "key-policy-acct-2",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Permission to use IAM policies",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::444455556666:root"
            },
            "Action": "kms:*",
            "Resource": "*"
        },
        {
            "Sid": "Allow account 1 to use this KMS key",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncryptFrom",
                "kms:ReEncryptTo",
                "kms:GenerateDataKey",
                "kms:GenerateDataKeyWithoutPlaintext",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        }
    ]
}
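
Putting the pieces together, user bob would assume the Engineering role and then call KMS with the key ARN from Account 2. A rough CLI sketch (the credentials export step is abbreviated; file names are placeholders):

# 1. assume the role in Account 1
aws sts assume-role \
    --role-arn arn:aws:iam::111122223333:role/Engineering \
    --role-session-name kms-cross-account

# 2. export the returned AccessKeyId, SecretAccessKey, and SessionToken as
#    AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN, then:
aws kms encrypt \
    --key-id arn:aws:kms:us-west-2:444455556666:key/1234abcd-12ab-34cd-56ef-1234567890ab \
    --plaintext fileb://data.txt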

Creating KMS Keys

You can create KMS keys as either single-Region or multi-Region keys. By default, AWS KMS creates the key material. You need the below permissions to create KMS keys.

kms:CreateKey
kms:CreateAlias
kms:TagResource
iam:CreateServiceLinkedRole 
  • Navigate to AWS KMS service in AWS Management console.
  • Add Alias to the key and Description of the AWS Key that you created.
  • Next, add the permissions to the key and review the Key before creation.
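
The same can be done from the AWS CLI. A minimal sketch that creates a symmetric key and an alias for it (the description and alias name are placeholders):

# create a symmetric encryption key and note the KeyId in the output
aws kms create-key --description "demo key" --key-spec SYMMETRIC_DEFAULT --key-usage ENCRYPT_DECRYPT

# give the key a friendly alias (replace <key-id> with the KeyId returned above)
aws kms create-alias --alias-name alias/my-demo-key --target-key-id <key-id>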

What is Multi-region KMS Keys?

AWS KMS supports multi-Region keys, which are AWS KMS keys in different AWS Regions that can be used interchangeably – as though you had the same key in multiple Regions. Each set of related multi-Region keys has the same key material and key ID, so you can encrypt data in one AWS Region and decrypt it in a different AWS Region without re-encrypting or making a cross-Region call to AWS KMS.

  • You begin by creating a symmetric or asymmetric multi-Region primary key in an AWS Region that AWS KMS supports, such as US East (N. Virginia)
  • You set a key policy for the multi-Region key, and you can create grants, and add aliases and tags for categorization and authorization.
  • When you replicate the primary key into another Region, AWS KMS creates a replica key in the specified Region with the same key ID and other shared properties as the primary key. It then securely transports the key material across the Region boundary and associates it with the new KMS key in the destination Region, all within AWS KMS.
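
A sketch of this flow with the AWS CLI (the Regions, description, and primary key ARN are placeholders):

# create a multi-Region primary key in us-east-1
aws kms create-key --multi-region --description "multi-Region demo key" --region us-east-1

# replicate the primary key into us-west-2 (replace <primary-key-arn> with the Arn returned above)
aws kms replicate-key --key-id <primary-key-arn> --replica-region us-west-2 --region us-east-1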

Key Store and Custom Key Store

A key store is a secure location for storing cryptographic keys. The default key store in AWS KMS also supports methods for generating and managing the keys that it stores.

By default, the cryptographic key material for the AWS KMS keys that you create in AWS KMS is generated in and protected by hardware security modules (HSMs). However, if you require even more control of the HSMs, you can create a custom key store.

custom key store is a logical key store within AWS KMS that is backed by a key manager outside of AWS KMS that you own and manage.

AWS KMS – Keys – Default Key store (IN AWS KMS) – HSM

AWS KMS – Keys – Custom Key Store (OUTSIDE AWS KMS) – Key Manager Manages it

There are two Custom Key Stores:

  • An AWS CloudHSM key store is an AWS KMS custom key store backed by an AWS CloudHSM cluster. You create and manage your custom key stores in AWS KMS and create and manage your HSM clusters in AWS CloudHSM.
  • An external key store is an AWS KMS custom key store backed by an external key manager outside of AWS that you own and control

How to Encrypt your AWS RDS using AWS KMS keys

Amazon RDS supports only symmetric KMS keys. You cannot use an asymmetric KMS key to encrypt data in an Amazon RDS database.

When RDS uses KMS for EBS volumes or DB instances, the service specifies an encryption context. The encryption context is additional authenticated data (AAD), and the same encryption context must be used to decrypt the data. The encryption context is also written to your CloudTrail logs.

At minimum, Amazon RDS always uses the DB instance ID for the encryption context, as in the following JSON-formatted example:

{ "aws:rds:db-id": "db-CQYSMDPBRZ7BPMH7Y3RTDG5QY" }

Encrypt AWS DB instance using AWS KMS keys

  • To encrypt a new DB instance, choose Enable encryption on the Amazon RDS console.
  • When you create an encrypted DB instance, you can choose a customer managed key or the AWS managed key for Amazon RDS to encrypt your DB instance.
  • If you don’t specify the key identifier for a customer managed key, Amazon RDS uses the AWS managed key for your new DB instance (see the CLI sketch after this list).
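
A rough CLI sketch of launching an encrypted MySQL DB instance with a specific KMS key (the identifier, instance class, password, and key alias are placeholders):

aws rds create-db-instance \
    --db-instance-identifier my-encrypted-db \
    --engine mysql \
    --db-instance-class db.t3.micro \
    --allocated-storage 20 \
    --master-username admin \
    --master-user-password '<choose-a-password>' \
    --storage-encrypted \
    --kms-key-id alias/aws/rds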

Amazon RDS builds on Amazon Elastic Block Store (Amazon EBS) encryption to provide full disk encryption for database volumes.

When you create an encrypted Amazon EBS volume, you specify an AWS KMS key. By default, Amazon EBS uses the AWS managed key for Amazon EBS in your account (aws/ebs). However, you can specify a customer managed key that you create and manage.

For each volume, Amazon EBS asks AWS KMS to generate a unique data key encrypted under the KMS key that you specify. Amazon EBS stores the encrypted data key with the volume.

Similar to DB instances Amazon EBS uses an encryption context with a name-value pair that identifies the volume or snapshot in the request. 

Encrypting the AWS S3 bucket using AWS KMS Keys

Amazon S3 integrates with AWS Key Management Service (AWS KMS) to provide server-side encryption of Amazon S3 objects. Amazon S3 now applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for every bucket in Amazon S3.

Amazon S3 uses server-side encryption with AWS KMS (SSE-KMS) to encrypt your S3 object data.

When you configure your bucket to use an S3 Bucket Key for SSE-KMS, AWS generates a short-lived bucket-level key from AWS KMS then temporarily keeps it in S3

Applying Server-side Encryption on AWS S3 bucket

To apply server side encryption on AWS S3 bucket you need to create a AWS S3 policy and then apply bucket policy as shown below.

{
   "Version":"2012-10-17",
   "Id":"PutObjectPolicy",
   "Statement":[{
         "Sid":"DenyUnEncryptedObjectUploads",
         "Effect":"Deny",
         "Principal":"*",
         "Action":"s3:PutObject",
         "Resource":"arn:aws:s3:::DOC-EXAMPLE-BUCKET1/*",
         "Condition":{
            "StringNotEquals":{
               "s3:x-amz-server-side-encryption":"aws:kms"
            }
         }
      }
   ]
}
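
With a policy like this in place, uploads must request SSE-KMS explicitly. For example, a sketch of an upload that satisfies the policy (the bucket name, file, and key ID are placeholders):

aws s3 cp ./report.csv s3://DOC-EXAMPLE-BUCKET1/report.csv \
    --sse aws:kms \
    --sse-kms-key-id 1234abcd-12ab-34cd-56ef-1234567890ab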

Configure AWS S3 bucket to use S3 Bucket Key with Server Side E-KMS for new objects

To enable an S3 Bucket Key when you create a new bucket follow the below steps.

  1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  2. Choose Create bucket.
  3. Enter your bucket name, and choose your AWS Region.
  4. Under Default encryption, choose Enable.
  5. Under Encryption type, choose AWS Key Management Service key (SSE-KMS).
  6. Choose an AWS KMS key, either:
    • AWS managed key (aws/s3), or
    • Customer managed key, choosing a symmetric encryption customer managed key in the same Region as your bucket.
  7. Under Bucket Key, choose Enable.
  8. Choose Create bucket.

Amazon S3 creates your bucket with an S3 Bucket Key enabled. New objects that you upload to the bucket will use an S3 Bucket Key. To disable an S3 Bucket Key, follow the previous steps, and choose disable.
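
For an existing bucket, the same default encryption with an S3 Bucket Key can be configured from the CLI. A sketch, where the bucket name and KMS key ARN are placeholders:

aws s3api put-bucket-encryption --bucket DOC-EXAMPLE-BUCKET1 \
    --server-side-encryption-configuration '{
      "Rules": [{
        "ApplyServerSideEncryptionByDefault": {
          "SSEAlgorithm": "aws:kms",
          "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
        },
        "BucketKeyEnabled": true
      }]
    }'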

Client Side Encryption on AWS S3 Bucket

Client-side encryption is the act of encrypting your data locally to ensure its security as it passes to the Amazon S3 service. The Amazon S3 service receives your encrypted data; it does not play a role in encrypting or decrypting it. For example, if you need to use KMS keys in Java application then use the below code.

AWSKMS kmsClient = AWSKMSClientBuilder.standard()
        .withRegion(Regions.DEFAULT_REGION)
        .build();

// create a KMS key for testing this example
CreateKeyRequest createKeyRequest = new CreateKeyRequest();
CreateKeyResult createKeyResult = kmsClient.createKey(createKeyRequest);

// specify an AWS KMS key ID
String keyId = createKeyResult.getKeyMetadata().getKeyId();

// bucket_name is assumed to be the name of an existing S3 bucket you own
String s3ObjectKey = "EncryptedContent1.txt";
String s3ObjectContent = "This is the 1st content to encrypt";

// build an S3 encryption client that encrypts objects client-side with the KMS key
AmazonS3EncryptionV2 s3Encryption = AmazonS3EncryptionClientV2Builder.standard()
        .withRegion(Regions.US_WEST_2)
        .withCryptoConfiguration(new CryptoConfigurationV2().withCryptoMode(CryptoMode.StrictAuthenticatedEncryption))
        .withEncryptionMaterialsProvider(new KMSEncryptionMaterialsProvider(keyId))
        .build();

// upload the encrypted object and read it back (it is decrypted transparently on download)
s3Encryption.putObject(bucket_name, s3ObjectKey, s3ObjectContent);
System.out.println(s3Encryption.getObjectAsString(bucket_name, s3ObjectKey));

// schedule deletion of the KMS key generated for testing
ScheduleKeyDeletionRequest scheduleKeyDeletionRequest =
        new ScheduleKeyDeletionRequest().withKeyId(keyId).withPendingWindowInDays(7);
kmsClient.scheduleKeyDeletion(scheduleKeyDeletionRequest);

s3Encryption.shutdown();
kmsClient.shutdown();

Conclusion

In this article we learned what AWS KMS (Key Management Service) is, along with the key policies and IAM policies used to let users or roles access KMS keys in the AWS cloud.

What is AWS RDS (Relational Database Service)?

In this post you will learn everything you must know, end to end, about AWS RDS. This tutorial will give you a glimpse of each component, starting from what a DB instance is, through scaling, and on to Multi-AZ cluster configurations and details.

Lets get started.

Table of Content

  • What is AWS RDS (Relational Database Service)?
  • Database Instance
  • Database Engines
  • Database Instance class
  • DB Instance Storage
  • Blue/Green Deployments
  • Working with Read Replicas
  • How does cross region replication works?
  • Multi AZ Deployments
  • Multi AZ DB instance deployment
  • How to convert a single DB instance to Multi AZ DB instance deployment
  • Multi-AZ DB Cluster Deployments
  • DB pricing
  • AWS RDS performance troubleshooting
  • Tagging AWS RDS Resources
  • Amazon RDS Storage
  • Monitoring Events, Logs and Streams in an Amazon RDS DB Instance.
  • How to grant Amazon RDS to publish the notifications to the SNS topic using the IAM Policy.
  • RDS logs
  • AWS RDS Proxy
  • Amazon RDS for MySQL
  • Performance improvements on MySQL RDS for Optimized reads.
  • Importing Data into MySQL with different data source.
  • Database Authentication with Amazon RDS
  • Connecting to your DB instance using IAM authentication from the command line: AWS CLI and mysql client
  • Create database user account using IAM authentication
  • Generate an IAM authentication token
  • Connecting to DB instance
  • Connecting to AWS Instance using Python boto3 (boto3 rds)
  • Final AWS RDS Troubleshooting’s

What is AWS RDS (Relational Database Service)?

  • It allows you to set up relational databases in the AWS Cloud. AWS RDS is a managed database service.
  • It is cost-effective and provides resizable capacity; if you were to invest in your own hardware, memory, and CPU it would be time consuming and very costly.
  • With AWS RDS, AWS manages everything from scaling, availability, and backups to software patching, software installation, OS patching, OS installation, hardware lifecycle, and server maintenance.
  • You can define permissions of your database users and database with IAM.

Database Instance

A DB instance is a database environment in which you launch your database users and user-created databases.

  1. You can run your database instance in multiple AZs, also known as a Multi-AZ deployment. Amazon automatically provisions and maintains a secondary standby instance in a different Availability Zone. With this approach the primary DB replicates the data written to it to the standby instance located in another AZ. Note: The instance in the secondary AZ can also be configured as a read replica.
  2. You can attach security groups to your database instance to protect your instance.
  3. You can launch DB instance in Local zones as well by enabling local zone in Amazon EC2 console.
  4. You can use Amazon CloudWatch to monitor the status of your database instance. You can monitor the following metrics:
    • IOPS (I/O operations per second)
    • Latency (time from when an I/O request is submitted until it is completed)
    • Throughput (number of bytes transferred per second to or from disk)
    • Queue depth (how many I/O requests are pending in the queue)
  5. A DB instance has a unique DB instance identifier that the customer provides, and it must be unique for your account in the current AWS Region. If you provide the DB instance identifier as testing, then your endpoint will be formed as below (see the CLI example after this list).
testing.<account-specific-string>.<region>.rds.amazonaws.com
  • DB instance supports various DB engines such as MySQL, MariaDB, PostgreSQL, Oracle, Microsoft SQL server, Amazon Aurora database engines.
  • A DB instance can host multiple databases with multiple schemas.
  • When you create any DB instance using AWS RDS service then by default it creates a master user account, and this user has all the permissions. Note: Make sure to change the password of this master user account.
  • You can create a backup of your Database instance by creating database snapshots.  You can also store your snapshots in AWS S3 bucket.
  • You can enable IAM database authentication on your database instance so that you don’t need any password to login to the database instance.
  • You can also enable Kerberos authentication to support external authentication of database users using Kerberos and Microsoft Active directory.
  • DB instances are billed per hour.
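
Once the DB instance exists, you can read the generated endpoint back with the AWS CLI, for example (assuming the identifier testing from the example above):

aws rds describe-db-instances \
    --db-instance-identifier testing \
    --query 'DBInstances[0].Endpoint.Address' \
    --output text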

Database Engines

DB engines are the specific database software that runs on your DB instance, such as MariaDB, Microsoft SQL Server, MySQL, Oracle, and PostgreSQL.

Database Instance class

The DB instance class determines the compute and memory capacity of a DB instance. AWS RDS supports three types of DB instance classes:

  • General purpose:
  • Memory optimized:
  • Burstable Performance
  1. The DB instance class supports Intel Hyper-Threading technology, which enables multiple threads to run in parallel on a single Intel Xeon CPU core. Each thread is represented as a vCPU on the DB instance. For example, a db.m4.xlarge DB instance class has 2 CPU cores and two threads per CPU core, which makes a total of 4 vCPUs. Note: You can disable Intel Hyper-Threading by specifying a single thread per CPU core when you have a high-performance computing workload.
  2. To set the Core count and Threads per core you need to edit the processor features.
  3. Quick note: To compare the CPU capacity between different DB instance class you should use ECU (Amazon EC2 instance compute units). The amount of CPU that is allocated to a DB instance is expressed in terms of EC2 compute units.
  4. You can use EBS optimised volumes which are good for your DB instance as it provides better performance by minimizing contention between I/O and other traffic from your instance.

DB Instance Storage

DB instance storage uses Amazon EBS block-level storage volumes attached to the running instance (a CLI sketch for provisioning storage follows the list below). DB instance storage comes with:

  • General purpose (SSD) [gp2 and gp3]: Cost-effective storage that is ideal for a broad range of workloads on medium-sized instances. Generally, they have a throughput limit of 250 MB/second.
  • For gp2
    • 3 IOPS for each GB, with a minimum of 100 IOPS (I/O operations per second)
    • 16,000 IOPS at 5.34 TB is the maximum limit in gp2
    • Throughput is at most 250 MB/sec, where throughput is how fast the storage volume can perform reads and writes.
  • For gp3
    • Up to 32,000 IOPS
  • Provisioned IOPS (PIOPS) [io1]: Used when you need low I/O latency and consistent I/O throughput. These are suited for production environments.
    • For io1 – up to 256,000 IOPS and throughput up to 4,000 MB/s
    • Note: Benefits of using Provisioned IOPS are
      • Increased number of I/O requests that the system can process.
      • Decreased latency, because fewer I/O requests will be in the queue.
      • Faster response time and higher database throughput.
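The sketch below shows how you might provision a new DB instance with gp3 storage from the AWS CLI. The identifier, instance class, credentials, and storage size are placeholder values, not settings from this tutorial.

# Create a MySQL DB instance with 100 GiB of gp3 storage (placeholder values)
aws rds create-db-instance \
  --db-instance-identifier testing \
  --db-instance-class db.m5.large \
  --engine mysql \
  --master-username admin \
  --master-user-password 'ChangeMe123!' \
  --allocated-storage 100 \
  --storage-type gp3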

Blue/Green Deployments

Blue/green deployments copy your database environment into a separate staging environment. You can make changes in the staging environment and then later push those changes to the production environment. Blue/green deployments are only available for RDS for MariaDB and RDS for MySQL.

Working with Read Replicas

  • Updates from the primary DB are copied to the read replicas.
  • You can promote a read replica to be a standalone DB as well, in case you require sharding (a shared-nothing architecture).
  • You can create a read replica in a different AWS Region as well.

You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. 

Note: With Cross region read replicas you can create read replicas in a different region from the source DB instance.
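A minimal AWS CLI sketch for creating a read replica is shown below; the identifiers are placeholders. For a cross-Region replica, run the command in the destination Region and pass the source DB instance ARN as the source identifier.

# Create a read replica of the source DB instance "testing" (placeholder names)
aws rds create-db-instance-read-replica \
  --db-instance-identifier testing-replica \
  --source-db-instance-identifier testing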

How does cross-Region replication work?

  • The IAM role of the destination must have access to the source DB instance.
    • The source DB acts as the source.
    • RDS creates an automated DB snapshot of the source DB.
    • The snapshot copy starts.
    • The destination read replica uses the copied DB snapshot.

Note: You can configure DB instance to replicate snapshots and transaction logs in another AWS region.

Multi AZ Deployments

  • You can run your database instance in multiple AZs, also known as a Multi-AZ deployment. Amazon automatically provisions and maintains a secondary standby instance in a different Availability Zone. With this approach the primary DB replicates the data written to it to the standby instance located in another AZ. Note: The instance in the secondary AZ can also be configured as a read replica.
  • You can have one standby or two standby instances.
  • With one standby instance, it is known as a Multi-AZ DB instance deployment, where the standby instance provides failover support but doesn’t act as a read replica.
  • With two standby instances, it is known as a Multi-AZ DB cluster.
  • The failover mechanism automatically changes the Domain Name System (DNS) record of the DB instance to point to the standby DB instance.

Note: DB instances with multi-AZ DB instance deployments can have increased write and commit latency compared to single AZ deployment.

Multi AZ DB instance deployment

In a Multi-AZ DB instance deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. You can’t use a standby replica to serve read traffic.

If a planned or unplanned outage of your DB instance results from an infrastructure defect, Amazon RDS automatically switches to a standby replica in another Availability Zone if you have turned on Multi-AZ.

How to convert a single DB instance to Multi AZ DB instance deployment

  • RDS takes a snapshot of the primary DB instance’s EBS volumes.
  • RDS creates new volumes for the standby replica from the snapshot.
  • RDS then turns on block-level replication between the primary and standby volumes. You can also convert an existing instance from the CLI, as sketched below.
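A hedged sketch of converting an existing single-AZ DB instance to a Multi-AZ DB instance deployment with the AWS CLI; the instance identifier is a placeholder, and applying immediately may cause a brief performance impact while the standby is built.

# Convert the DB instance "testing" to a Multi-AZ DB instance deployment
aws rds modify-db-instance \
  --db-instance-identifier testing \
  --multi-az \
  --apply-immediately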

Multi-AZ DB Cluster Deployments

  • It has one writer DB instance.
  • It has two reader DB instances that allow clients to read the data.
  • AWS RDS replicates data from the writer instance to both reader instances, keeping them in sync.
  • If a failover happens on the writer instance, a reader instance acts as an automatic failover target: RDS promotes a reader DB instance to be the new writer DB instance. This typically happens automatically in under 35 seconds, and you can also initiate it manually from the Failover action.

Cluster Endpoint

The cluster endpoint can write as well as read the data. The endpoint cannot be modified.

Reader Endpoint

Reader endpoint is used for reading the content from the DB cluster.

Instance Endpoint

Instance endpoints are used to connect to a specific DB instance directly, for example to troubleshoot issues within that instance or when your application requires fine-grained load balancing.
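A quick way to look up the cluster and reader endpoints from the AWS CLI is sketched below; the cluster identifier is a placeholder.

# List the writer (cluster) and reader endpoints of a Multi-AZ DB cluster
aws rds describe-db-clusters \
  --db-cluster-identifier my-multi-az-cluster \
  --query "DBClusters[].{ClusterEndpoint:Endpoint,ReaderEndpoint:ReaderEndpoint}" \
  --output table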

DB cluster parameter group

A DB cluster parameter group acts as a container for engine configuration values that are applied to every DB instance in the Multi-AZ DB cluster.

RDS Replica Lag

Replica lag is the difference in time between the latest transaction on the writer DB instance and the latest applied transaction on a reader instance. It can be caused by high write concurrency or heavy batch updates.

How to Solve Replica Lag

You can reduce replica lag by reducing the load on your writer DB instance. You can also use flow control to reduce replica lag. With flow control, a delay is added at the end of a transaction, which decreases the write throughput on the writer instance. To turn on flow control, use the parameter below. By default it is set to 120 seconds, and you can effectively turn flow control off by raising the parameter to its maximum value.

Flow control works by throttling writes on the writer DB instance, which ensures that replica lag doesn’t continue to grow unbounded. Write throttling is accomplished by adding a delay; in other words, writes are briefly queued rather than allowed to flow through unchecked.

rpl_semi_sync_master_target_apply_lag

To check the status of flow control, use the command below.

SHOW GLOBAL STATUS like '%flow_control%';
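If you need to change the flow-control parameter itself, a hedged AWS CLI sketch is below. The parameter group name is a placeholder, and this assumes the parameter is modifiable in your engine version.

# Set the flow-control target apply lag in a DB cluster parameter group
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name my-cluster-params \
  --parameters "ParameterName=rpl_semi_sync_master_target_apply_lag,ParameterValue=120,ApplyMethod=immediate"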

DB pricing

  • DB instances are billed per hour.
  • Storage is billed per GB per month.
  • I/O requests are billed per 1 million requests per month.
  • Data transfer is billed per GB in and out of your DB instance.

AWS RDS performance troubleshooting

  1. Set up CloudWatch monitoring.
  2. Enable automatic backups.
  3. If your DB requires more I/O, migrate to a new instance class, or convert from magnetic storage to General Purpose or Provisioned IOPS storage.
  4. If you already have Provisioned IOPS, consider adding more throughput capacity.
  5. If your app is caching DNS data of your instance, make sure to set the TTL value to less than 30 seconds, because stale cached DNS data can lead to connection failures after a failover.
  6. Set up enough memory (RAM).
  7. Enable Enhanced Monitoring to identify operating system issues.
  8. Fine-tune your SQL queries.
  9. Avoid letting tables in your database grow too large, as very large tables impact reads and writes.
  10. You can use option groups if you need to provide additional features or security for your database.
  11. You can use a DB parameter group, which acts as a container for engine configuration values that are applied to one or more DB instances.

Tagging AWS RDS Resources

  • Tags are key-value pairs that help you organize and manage your AWS RDS resources.
  • You can use tags in IAM policies to manage access to AWS RDS resources.
  • Tags can be used to produce detailed billing reports.
  • You can specify whether tags should be applied to snapshots as well.
  • Tags are useful to determine which instances should be stopped, started, or have backups enabled (see the CLI sketch after this list).
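A minimal sketch of tagging an RDS DB instance from the AWS CLI; the ARN, keys, and values are placeholders.

# Add tags to a DB instance and then list them
aws rds add-tags-to-resource \
  --resource-name arn:aws:rds:us-east-1:123456789012:db:testing \
  --tags Key=Environment,Value=production Key=Team,Value=dba

aws rds list-tags-for-resource \
  --resource-name arn:aws:rds:us-east-1:123456789012:db:testing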

Amazon RDS Storage

Increasing DB instance storage capacity:

Select the database, click Modify, increase the Allocated storage value, and choose to apply the change immediately.

Managing capacity automatically with Amazon RDS storage autoscaling

If your workload is unpredictable, enable storage autoscaling for your Amazon RDS DB instance. While creating the DB instance, enable storage autoscaling and set the maximum storage threshold.
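You can also turn on storage autoscaling for an existing instance from the AWS CLI, as in the hedged sketch below; the identifier and threshold are placeholders.

# Enable storage autoscaling by setting a maximum storage threshold (in GiB)
aws rds modify-db-instance \
  --db-instance-identifier testing \
  --max-allocated-storage 500 \
  --apply-immediately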

Modifying settings for Provisioned IOPS SSD storage

You can change, including reduce, the amount of provisioned IOPS (read and write operations) and throughput for your instance; however, with Provisioned IOPS SSD storage you cannot reduce the storage size.

Monitoring Events, Logs and Streams in an Amazon RDS DB Instance.

Amazon EventBridge: a serverless event bus service that allows you to connect your applications with data from various sources.

AWS CloudTrail logs and Amazon CloudWatch Logs are also useful for monitoring.

Database activity streams: AWS RDS pushes database activities to an Amazon Kinesis data stream.

How to grant Amazon RDS permission to publish notifications to an SNS topic using an IAM policy:

The following policy is attached to the SNS topic as its resource policy.

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "events.rds.amazonaws.com"
      },
      "Action": [
        "sns:Publish"
      ],
      "Resource": "arn:aws:sns:us-east-1:123456789012:topic_name",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:rds:us-east-1:123456789012:db:prefix-*"
        },
        "StringEquals": {
          "aws:SourceAccount": "123456789012"
        }
      }
    }
  ]
}

RDS logs

  • Amazon RDS doesn’t provide host access to the database logs on the file system of your DB instance. Instead, you can choose the Logs & events tab to view the database log files on the console itself.
  • To publish SQL Server DB logs to CloudWatch Logs from the AWS Management Console, in the Log exports section choose the logs that you want to start publishing to CloudWatch Logs.

Note: In CloudWatch Logs, a log stream is a sequence of log events that share the same source. Each separate source of logs in CloudWatch Logs makes up a separate log stream. A log group is a group of log streams that share the same retention, monitoring, and access control settings.

  • Amazon RDS provides a REST endpoint that allows access to DB instance log files, and you can download a log using the REST endpoint below (a CLI alternative is sketched at the end of this section).
GET /v13/downloadCompleteLogFile/DBInstanceIdentifier/LogFileName HTTP/1.1
Content-type: application/json
host: rds.region.amazonaws.com
  • RDS for MySQL writes mysql-error.log to disk every 5 minutes. You can write the RDS for MySQL slow query log and the general log to a file or a database table. You can direct the general and slow query logs to tables on the DB instance by creating a DB parameter group and setting the log_output server parameter to TABLE.
    • slow_query_log: To create the slow query log, set to 1. The default is 0.
    • general_log: To create the general log, set to 1. The default is 0.
    • long_query_time: To prevent fast-running queries from being logged in the slow query log, set this to the minimum query execution time (in seconds) that should be logged.

MySQL removes log files that are more than two weeks old. You can manually rotate the log tables with the following stored procedure:

CALL mysql.rds_rotate_slow_log;
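As referenced above, you can also list and download DB instance log files with the AWS CLI instead of the REST endpoint. A minimal sketch, assuming a MySQL instance named testing:

# List the available log files for the DB instance
aws rds describe-db-log-files --db-instance-identifier testing

# Download (a portion of) the MySQL error log to a local file
aws rds download-db-log-file-portion \
  --db-instance-identifier testing \
  --log-file-name error/mysql-error.log \
  --output text > mysql-error.log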

AWS RDS Proxy

  • RDS Proxy allows you to pool and share DB connections to improve your application’s ability to scale.
  • RDS Proxy makes applications more resilient to database failures by automatically connecting to a standby DB instance.
  • RDS Proxy establishes a database connection pool and reuses connections in this pool, avoiding the memory and CPU overhead of opening a new database connection each time.
  • You can enable RDS Proxy for most applications with no code changes.

You can use RDS Proxy in the following scenarios.

  • Any DB instance or cluster that encounters “too many connections” errors is a good candidate for associating with a proxy.
  • For DB instances or clusters that use smaller AWS instance classes, such as T2 or T3, using a proxy can help avoid out-of-memory conditions
  • Applications that typically open and close large numbers of database connections and don’t have built-in connection pooling mechanisms are good candidates for using a proxy.

Amazon RDS for MySQL

There are two major versions available for the MySQL database engine, i.e. version 8.0 and 5.7. MySQL provides the validate_password plugin for improved security. The plugin enforces password policies using parameters in the DB parameter group for your MySQL DB instance.

To find the available MySQL versions that are supported:

aws rds describe-db-engine-versions --engine mysql --query "*[].{Engine:Engine,EngineVersion:EngineVersion}" --output text

SSL/TLS on MySQL DB Instance

Amazon RDS installs an SSL/TLS certificate on the DB instance. These certificates are signed by a certificate authority (CA).

To connect to DB instance with certificate use below command.

mysql -h mysql-instance1.123456789012.us-east-1.rds.amazonaws.com --ssl-ca=global-bundle.pem --ssl-mode=REQUIRED -P 3306 -u myadmin -p

To check if applications are using SSL.

mysql> SELECT id, user, host, connection_type
       FROM performance_schema.threads pst
       INNER JOIN information_schema.processlist isp
       ON pst.processlist_id = isp.id;

Performance improvements on MySQL RDS for Optimized reads.

  • An instance store provides temporary block-level storage for your DB instance.
  • With RDS Optimized Reads, some temporary objects are stored on the instance store. These objects include temporary files, internal on-disk temporary tables, memory map files, binary logs, and cached files.
  • The storage is located on Non-Volatile Memory Express (NVMe) SSDs that are physically attached to the host.
  • Applications that can use RDS Optimized Reads include:
    • Applications that run on-demand or dynamic reporting queries.
    • Applications that run analytical queries.
    • Database queries that perform grouping or ordering on non-indexed columns.
  • Try to add retry logic for read-only queries.
  • Avoid bulk changes in a single transaction.
  • You can’t change the location of temporary objects to persistent storage (Amazon EBS) on the DB instance classes that support RDS Optimized Reads.
  • Transactions can fail when the instance store is full.
  • RDS Optimized Reads isn’t supported for Multi-AZ DB cluster deployments.

Importing Data into MySQL from Different Data Sources

  1. Existing MySQL database on premises or on Amazon EC2: Create a backup of your on-premises database, store it on Amazon S3, and then restore the backup file to a new Amazon RDS DB instance running MySQL.
  2. Any existing database: Use AWS Database Migration Service to migrate the database with minimal downtime
  3. Existing MySQL DB instance: Create a read replica for ongoing replication. Promote the read replica for one-time creation of a new DB instance.
  4. Data not stored in an existing database: Create flat files and import them using the mysqlimport utility.

Database Authentication with Amazon RDS

For PostgreSQL, use one of the following roles for a user of a specific database.

  • IAM database authentication: assign the rds_iam role to the user.
  • Kerberos authentication: assign the rds_ad role to the user.
  • Password authentication: don’t assign either of the above roles.

Password Authentication

  • With password authentication, the database performs all the administration of user accounts. The database controls and authenticates the user accounts.

IAM Database authentication

  • IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don’t need to use a password when you connect to a DB instance

Kerberos Authentication

Kerberos authentication provides the benefits of single sign-on (SSO) and centralized authentication of database users.

Connecting to your DB instance using IAM authentication from the command line: AWS CLI and mysql client

  • In the Database authentication section, choose Password and IAM database authentication to enable IAM database authentication.
  • To allow an IAM user or role to connect to your DB instance, you must create an IAM policy.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
             "rds-db:connect"
         ],
         "Resource": [
             "arn:aws:rds-db:us-east-2:1234567890:dbuser:db-ABCDEFGHIJKL01234/db_user"
         ]
      }
   ]
}

Create database user account using IAM authentication

-- MySQL: create a database user that authenticates with IAM
CREATE USER jane_doe IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';

-- PostgreSQL: create a database user and grant the rds_iam role
CREATE USER db_userx;
GRANT rds_iam TO db_userx;

Generate an IAM authentication token

aws rds generate-db-auth-token --hostname rdsmysql.123456789012.us-west-2.rds.amazonaws.com --port 3306 --region us-west-2  --username jane_doe

Connecting to DB instance

mysql --host=hostName --port=portNumber --ssl-ca=full_path_to_ssl_certificate --enable-cleartext-plugin --user=userName --password=authToken

Connecting to the DB instance using Python boto3 (boto3 rds)

import pymysql
import sys
import boto3
import os

ENDPOINT="mysqldb.123456789012.us-east-1.rds.amazonaws.com"
PORT="3306"
USER="jane_doe"
REGION="us-east-1"
DBNAME="mydb"

os.environ['LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN'] = '1'

#gets the credentials from .aws/credentials
session = boto3.Session(profile_name='default')
client = session.client('rds')
token = client.generate_db_auth_token(DBHostname=ENDPOINT, Port=PORT, DBUsername=USER, Region=REGION)
try:
    conn =  pymysql.connect(host=ENDPOINT, user=USER, passwd=token, port=PORT, database=DBNAME, ssl_ca='SSLCERTIFICATE')

    cur = conn.cursor()
    cur.execute("""SELECT now()""")
    query_results = cur.fetchall()
    print(query_results)

except Exception as e:
    print("Database connection failed due to {}".format(e))

   

Final AWS RDS Troubleshooting Tips

Can’t connect to Amazon RDS DB instance

  • Check Security group
  • Check Port
  • Check internet Gateway
  • Check db name

Error – Could not connect to server: Connection timed out

  • Check hostname and port
  • Check security group
  • Telnet to the DB
  • Check the username and password

Error message “failed to retrieve account attributes, certain console functions may be impaired.”

  • Your account is missing permissions, or your account hasn’t been properly set up.
  • You lack permissions in your access policies to perform certain actions, such as creating a DB instance.

Amazon RDS DB instance outage or reboot

  • You change the backup retention period for a DB instance from 0 to a nonzero value or from a nonzero value to 0. You then set Apply Immediately to true.
  • You change the DB instance class, and Apply Immediately is set to true.
  • You change the storage type from Magnetic (Standard) to General Purpose (SSD) or Provisioned IOPS (SSD), or from Provisioned IOPS (SSD) or General Purpose (SSD) to Magnetic (Standard).

Amazon RDS DB instance running out of storage

  • Add more storage to the EBS volumes attached to the DB instance by increasing the allocated storage.

Amazon RDS insufficient DB instance capacity

The specific DB instance class isn’t available in the requested Availability Zone. You can try one of the following to solve the problem:

  • Retry the request with a different DB instance class.
  • Retry the request with a different Availability Zone.
  • Retry the request without specifying an explicit Availability Zone.

Maximum MySQL and MariaDB connections

  • The connection limit for a DB instance is set by default to the maximum for the DB instance class. You can limit the number of concurrent connections to any value up to the maximum number of connections allowed.
  • A MariaDB or MySQL DB instance can be placed in incompatible-parameters status for a memory limit when the DB instance is restarted at least three times in one hour or at least five times in one day, or when the potential memory usage of the DB instance exceeds 1.2 times the memory allocated to its DB instance class. To solve the issue:
    • Adjust the memory parameters in the DB parameter group associated with the DB instance.
    • Restart the DB instance.

Conclusion

This tutorial gave you a glimpse of each AWS RDS component, from what a DB instance is to scaling and Multi-AZ cluster configurations.

How to create an IAM Policy to Deny AWS Resources outside Specific AWS Regions

Did you know that you can restrict an IAM user or a group of IAM users across multiple services and Regions with a single policy?

In this quick tutorial you will learn how to create an IAM policy to deny access to AWS resources outside specific AWS Regions.

Lets get started.

Prerequisites

  • AWS account

Creating IAM Policy to Deny access to Specific AWS regions

The below policy is useful when you want any of your users or groups to be explicitly denied access to AWS services outside specific AWS Regions.

  • Version is the policy version, which is fixed.
  • Effect is Deny in the statement, as we want to deny users or groups the ability to work outside the specified Regions.
  • NotAction is the opposite of Action: the Deny effect applies to every action except the ones listed here.
  • This policy denies access to any actions outside the Regions specified (eu-central-1, eu-west-1, eu-west-2, eu-west-3), except for actions in the services specified using NotAction (CloudFront, IAM, Route 53, Support), since those are global services. After the JSON, a short CLI sketch shows how you might create and attach the policy.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllOutsideRequestedRegions",
            "Effect": "Deny",
            "NotAction": [
                "cloudfront:*",
                "iam:*",
                "route53:*",
                "support:*"
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": [
                        "eu-central-1",
                        "eu-west-1",
                        "eu-west-2",
                        "eu-west-3"
                    ]
                }
            }
        }
    ]
}
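If you prefer the AWS CLI over the console, a hedged sketch of creating this policy and attaching it to a group follows. The file name, policy name, account ID, and group name are placeholders.

# Create the managed policy from the JSON saved locally
aws iam create-policy \
  --policy-name DenyAllOutsideRequestedRegions \
  --policy-document file://deny-outside-regions.json

# Attach it to an IAM group (replace the account ID and group name)
aws iam attach-group-policy \
  --group-name developers \
  --policy-arn arn:aws:iam::123456789012:policy/DenyAllOutsideRequestedRegions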

Conclusion

This tutorial demonstrated how to create an IAM policy to deny access to AWS resources outside specific AWS Regions.

How to Launch an Amazon DynamoDB tables in AWS Account

With the rise in the number of databases, it has become a big challenge to make the right selection. As data grows, our database should also scale and perform equally well.

Organizations have started to move toward big data and real-time applications, and for these we certainly need a non-relational, high-performance database. For these types of challenges AWS has always been at the top, offering various services that solve our problems. One such service is Amazon DynamoDB, which manages non-relational databases for you, can store virtually unlimited data, and performs very well.

Table of content

  1. What is Relational database management system ?
  2. What is SQL and NO SQL database?
  3. What is Amazon DynamoDB ?
  4. Prerequisites
  5. How to Create tables in DynamoDB in AWS Account
  6. Conclusion

What is Relational database management system ?

  • A relational database is based on tables and structured data.
  • The tables have relationships and are logically connected.
  • Oracle Database, MySQL, Microsoft SQL Server, IBM Db2, PostgreSQL, and SQLite (for mobile) are a few examples of RDBMS.

Figure shows Relational Database Management System based on relational model

What is SQL and NO SQL database?

SQL:

  • The full form of SQL is Structured Query Language, which is used to manage data in a relational database management system, i.e. RDBMS.
  • SQL databases belong to the relational database management system family.
  • SQL databases follow a structured pattern, which is why they are suitable for static or predefined schemas.
  • They are good at solving complex queries and are highly scalable in nature, but in the vertical direction.
  • SQL databases follow a table-based methodology, and that’s the reason they are good for applications such as accounting systems.

NoSQL:

  • The full form of NoSQL is non-SQL or non-relational.
  • This type of database is used for dynamic storage, or those kinds of workloads where the data is not fixed or static.
  • These databases are not tabular in nature; rather they store data as key-value pairs.
  • They are good for big data and real-time web applications and are scalable in nature, but in the horizontal direction.
  • Some of the NoSQL databases are DynamoDB, FoundationDB, InfinityDB, MemcacheDB, Oracle NoSQL Database, Redis, MongoDB, Cassandra, Scylla, and HBase.

What is Amazon DynamoDB ?

DynamoDB is a NoSQL database service, which means it is different from a relational database that consists of tables in tabular form. DynamoDB has very fast performance and is very scalable. DynamoDB is one of the AWS managed services where you don’t need to worry about capacity, workload, setup, configuration, software patches, replication, or even cluster scaling.

With DynamoDB you just need to create tables where you can add or retrieve data; DynamoDB takes care of everything else. If you wish to monitor your resources, you can do it on the AWS console.

Whenever traffic or a high number of requests comes in, DynamoDB scales up while maintaining performance.

Basic components of Amazon DynamoDB

  • Tables: A table stores data.
    • In the example below, we use a database table.
  • Items: Items are present in a table. You can store as many items as you wish in a table.
    • In the example below, the different Employee IDs are items.
  • Attributes: Each item contains one or more attributes.
    • In the example below, office, designation, and phone are attributes of EmployeeID.

{
  "EmployeeID": "1",
  "office": "USA",
  "Designation": "Devops engineer",
  "Phone": "1234567890"
}


{
  "EmployeeID": "2",
  "office": "UK",
  "Designation": "Senior Devops Engineer",
  "Phone": "0123456789"
}

To work with Amazon DynamoDB, applications need APIs to communicate.

  • Control plane: Allows you to create and manage DynamoDB tables.
  • Data plane: Allows you to perform actions on the data in DynamoDB tables.

Prerequisites

  • You should have an AWS account with full access permissions on DynamoDB. If you don’t have an AWS account, please create an account from here: AWS account.

How to Create tables in DynamoDB in AWS Account

  • Go to your AWS account and search for DynamoDB at the top of the page.
  • Click on Create Table, then enter the name of the table and the primary key.
  • Now click on Organisation, that is, the table name.
  • Now click on Items.
  • Add the list of items, such as address, designation, and phone number.
  • Verify that the table has the required details.

So this was the first way: using the AWS-provided web service to directly start creating DynamoDB tables. The other way is to download DynamoDB manually on your machine, set it up, and then create your tables. You can find the steps here.
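If you prefer the command line, a hedged AWS CLI sketch for creating the same kind of table and adding an item is below. The table name Organisation and the attribute values mirror the earlier example and are placeholders.

# Create a table keyed on EmployeeID with on-demand capacity
aws dynamodb create-table \
  --table-name Organisation \
  --attribute-definitions AttributeName=EmployeeID,AttributeType=S \
  --key-schema AttributeName=EmployeeID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST

# Add one item to the table
aws dynamodb put-item \
  --table-name Organisation \
  --item '{"EmployeeID":{"S":"1"},"office":{"S":"USA"},"Designation":{"S":"Devops engineer"},"Phone":{"S":"1234567890"}}'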

Conclusion

You should now have basic knowledge about relational database management systems and non-relational databases. We also learned about Amazon DynamoDB, which is a NoSQL database, and covered how to create tables on the Amazon DynamoDB service and store data.

All the practicals in this tutorial were done on our lab server with lots of hard work and effort. Please share the word if you like it; we hope you get benefit out of this tutorial.

How to Launch AWS Elastic beanstalk using Terraform

If you want to scale instances, place a load balancer in front of them, host a website, and store all data in a database, nothing could be better than AWS Elastic Beanstalk, which provides a common platform for all of this.

With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without having to worry about the infrastructure that runs those applications.

In this tutorial, we will learn how to set up AWS Elastic Beanstalk using Terraform on AWS step by step, and then upload the code to run a simple application.

Let’s get started.


Table of Content

  1. What is AWS Elastic beanstalk?
  2. Prerequisites
  3. Building Terraform configuration files for AWS Elastic beanstalk
  4. Deploying Terraform configuration to Launch AWS Elastic beanstalk
  5. Verifying AWS Elastic beanstalk in AWS Cloud.
  6. Conclusion

What is AWS Elastic beanstalk?

AWS Elastic Beanstalk is one of the most widely used Amazon Web Services tools. It is a service that provides a platform for various languages such as Python, Go, Ruby, Java, .NET, and PHP for hosting applications.

The only thing you need to do in Elastic Beanstalk is upload your code; the rest, such as scaling, load balancing, and monitoring, is taken care of by Elastic Beanstalk itself.

Elastic Beanstalk makes the life of developers, cloud admins, and sysadmins much easier compared to setting up each service individually and interlinking them. Some of the key benefits of AWS Elastic Beanstalk are:

  • It scales the applications up or down as per the required traffic.
  • As the infrastructure is managed and taken care of by AWS Elastic Beanstalk, developers and admins don’t need to spend much time on it.
  • It is fast and easy to set up.
  • You can interlink it with lots of other AWS services of your choice, such as an Application, Classic, or Network Load Balancer, or you can skip them.

Prerequisites

  • Ubuntu machine to run Terraform, preferably version 18.04 or later. If you don’t have a machine, you can create an EC2 instance in your AWS account; 4 GB RAM and at least 5 GB of drive space are recommended.
  • The Ubuntu machine should have an IAM role attached with AWS Elastic Beanstalk creation permissions or admin rights, or an access key and secret key configured in the AWS CLI.
  • Terraform installed on the Ubuntu Machine. Refer How to Install Terraform on an Ubuntu machine.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

Building Terraform configuration files for AWS Elastic beanstalk

Now that you have Terraform installed on your machine, It’s time to build Terraform configuration files for AWS Elastic beanstalk that you will use to launch AWS Elastic beanstalk on the AWS Cloud.

Assuming you are still logged in to the Ubuntu machine.

  • Create a folder in the opt directory, name it terraform-elasticbeanstalk-demo, and switch to this directory.
mkdir /opt/terraform-elasticbeanstalk-demo
cd /opt/terraform-elasticbeanstalk-demo
  • Create a file named main.tf in the /opt/terraform-elasticbeanstalk-demo directory and copy/paste the below content into it. The below Terraform configuration creates the AWS Elastic Beanstalk application and environment that will be required for the application to be deployed.
# Create elastic beanstalk application

resource "aws_elastic_beanstalk_application" "elasticapp" {
  name = var.elasticapp
}

# Create elastic beanstalk Environment

resource "aws_elastic_beanstalk_environment" "beanstalkappenv" {
  name                = var.beanstalkappenv
  application         = aws_elastic_beanstalk_application.elasticapp.name
  solution_stack_name = var.solution_stack_name
  tier                = var.tier

  setting {
    namespace = "aws:ec2:vpc"
    name      = "VPCId"
    value     = var.vpc_id
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "IamInstanceProfile"
    value     =  "aws-elasticbeanstalk-ec2-role"
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "AssociatePublicIpAddress"
    value     =  "True"
  }

  setting {
    namespace = "aws:ec2:vpc"
    name      = "Subnets"
    value     = join(",", var.public_subnets)
  }
  setting {
    namespace = "aws:elasticbeanstalk:environment:process:default"
    name      = "MatcherHTTPCode"
    value     = "200"
  }
  setting {
    namespace = "aws:elasticbeanstalk:environment"
    name      = "LoadBalancerType"
    value     = "application"
  }
  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "InstanceType"
    value     = "t2.medium"
  }
  setting {
    namespace = "aws:ec2:vpc"
    name      = "ELBScheme"
    value     = "internet facing"
  }
  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MinSize"
    value     = 1
  }
  setting {
    namespace = "aws:autoscaling:asg"
    name      = "MaxSize"
    value     = 2
  }
  setting {
    namespace = "aws:elasticbeanstalk:healthreporting:system"
    name      = "SystemType"
    value     = "enhanced"
  }

}

  • Create another file named vars.tf in the /opt/terraform-elasticbeanstalk-demo directory and copy/paste the below content into it. The variables file contains all the variables that you have referred to in the main.tf file.
variable "elasticapp" {
  default = "myapp"
}
variable "beanstalkappenv" {
  default = "myenv"
}
variable "solution_stack_name" {
  type = string
}
variable "tier" {
  type = string
}

variable "vpc_id" {}
variable "public_subnets" {}
variable "elb_public_subnets" {}

  • Create another file named provider.tf in the /opt/terraform-elasticbeanstalk-demo directory and copy/paste the below content into it. The provider.tf file allows Terraform to authenticate and connect to the AWS cloud.
provider "aws" {
  region = "us-east-2"
}
  • Finally create one more file named terraform.tfvars in the /opt/terraform-elasticbeanstalk-demo directory and copy/paste the below content into it.
vpc_id              = "vpc-XXXXXXXXX"
public_subnets      = ["subnet-XXXXXXXXXX", "subnet-XXXXXXXXX"] # Service Subnet
elb_public_subnets  = ["subnet-XXXXXXXXXX", "subnet-XXXXXXXXX"] # ELB Subnet
tier                = "WebServer"
solution_stack_name = "64bit Amazon Linux 2 v3.2.0 running Python 3.8"
# Instance type, min size, and max size are set directly in main.tf, so they are not declared here.

  • Now use tree command on your ubuntu machine and your folder structure should look something like below.
tree command on your ubuntu machine and your folder structure

Deploying Terraform configuration to Launch AWS Elastic beanstalk

Now that all Terraform configuration files are set up, these are not doing much unless you use Terraform commands and deploy them.

  • To deploy the AWS Elastic Beanstalk resources, the first thing you need to do is initialize Terraform by running the terraform init command.
terraform init

As you see below, Terraform was initialized successfully; now, it’s time to run terraform plan.

Terraform was initialized successfully
  • Next, run the terraform plan command. The terraform plan command provides information regarding which resources will be provisioned or deleted by Terraform.
terraform plan
Running Terraform plan command
  • Finally, run the terraform apply command, which actually deploys the code and provisions the AWS Elastic Beanstalk resources.
terraform apply
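Once the apply finishes, you can optionally check the environment health from the command line before opening the console. A hedged sketch, assuming the environment name myenv from terraform.tfvars:

# Show the status, health, and URL of the Elastic Beanstalk environment
aws elasticbeanstalk describe-environments \
  --environment-names myenv \
  --query "Environments[].{Status:Status,Health:Health,CNAME:CNAME}" \
  --output table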

Verifying AWS Elastic beanstalk in AWS Cloud.

Great Job; terraform commands were executed successfully. Now it’s time to validate the AWS Elastic beanstalk launched in AWS Cloud.

  • Navigate to the AWS Cloud and then further to the AWS Elastic Beanstalk service. After you reach the Elastic Beanstalk screen, you will see the environment and application name that you specified in the terraform.tfvars file.
AWS Elasticbeanstalk service page
  • Next, on the AWS Elastic Beanstalk service page, click on the application URL and you will see something like below.
AWS Elasticbeanstalk service link


Conclusion

In this tutorial, you learned what AWS Elastic beanstalk is and how to set up Amazon Elastic beanstalk using Terraform on AWS step by step.

Now that you have AWS Elastic beanstalk launched on AWS using Terraform, which applications do you plan to deploy on it next?

How to Launch multiple EC2 instances on AWS using Terraform count and Terraform for_each

Creating multiple AWS EC2 instances is a common need in projects and organizations. When you are asked to create dozens of AWS EC2 machines in a particular AWS account, using the AWS console will take hours, so why not automate it using Terraform and save hours of hard work?

There are various automated ways that can create multiple instances quickly, but automating with Terraform is way easier and more fun.

In this tutorial, you will learn how to Launch multiple AWS EC2 instances on AWS using Terraform count and Terraform for_each. Let’s dive in.


Table of Content

  1. What is Amazon EC2 instance?
  2. Prerequisites
  3. Terraform files and Terraform directory structure
  4. Launch multiple EC2 instances using Terraform count
  5. Launch multiple EC2 instances using Terraform for_each
  6. Conclusion

What is Amazon EC2 instance?

Amazon Elastic Compute Cloud (Amazon EC2) provides scalable compute capacity in the Amazon Web Services (AWS) Cloud. With AWS EC2, you don’t need to worry about hardware, and it takes less time to develop and deploy applications on the machines.

You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale compute resources such as memory or CPU up or down when needed. Also, AWS EC2 instances are secure, as access is initially granted using SSH keys.

Prerequisites

  • Ubuntu machine, preferably version 20.04. If you don’t have a machine, you can create an AWS EC2 instance in your AWS account with the recommended 4 GB RAM and at least 5 GB of drive space.
  • The Ubuntu machine should have an IAM role attached with full access to create AWS EC2 instances, or administrator permissions.
  • Terraform installed on the Ubuntu Machine. Refer How to Install Terraform on an Ubuntu machine.

You may incur a small charge for creating an EC2 instance on Amazon Web Services.

Terraform files and Terraform directory structure

Now that you have Terraform installed, let’s dive into the Terraform files and Terraform directory structure that will help you write the Terraform configuration files later in this tutorial.

Terraform code, that is, Terraform configuration files, is written in a tree-like structure to ease the overall understanding of the code, in .tf, .tf.json, or .tfvars format. These configuration files are placed inside Terraform modules.

Terraform modules are at the top level of the hierarchy, where configuration files reside. Terraform modules can further call child modules from local directories, anywhere on disk, or the Terraform Registry.

A Terraform module mainly contains the following files:

  1. main.tf – The Terraform main.tf file contains the main code where you define which resources you need to build, update, or manage.
  2. vars.tf – The Terraform vars.tf file contains the input variables, which are customizable and referenced inside the main.tf configuration file.
  3. output.tf – The Terraform output.tf file is where you declare which output parameters you wish to fetch after Terraform has been executed, that is, after the terraform apply command.
  4. .terraform – This directory contains cached provider and module plugins, and also contains the last known backend configuration. It is managed by Terraform and created after you run the terraform init command.
  5. terraform.tfvars – This file contains the values that need to be passed for variables that are referenced in main.tf and actually declared in the vars.tf file.
  6. providers.tf – The providers.tf file is where you define your Terraform providers, such as the Terraform AWS provider or Terraform Azure provider, to authenticate with the cloud provider.

Launch multiple EC2 instances using Terraform count

Another special argument is Terraform count. By default, Terraform creates a single resource defined in a Terraform resource block. But at times you want to manage multiple objects of the same kind, such as creating four AWS EC2 instances of the same type in the AWS cloud, without writing a separate block for each instance. Let’s learn how to use the Terraform count meta-argument.

This demonstration will create multiple AWS EC2 instances using Terraform count. So let’s create all the Terraform configuration files required to create multiple AWS EC2 instances on the AWS account.

  • Log in to the Ubuntu machine using your favorite SSH client.
  • Create a folder in the opt directory named terraform-demo and switch to this folder. This terraform-demo folder will contain all the configuration files that Terraform needs.
mkdir /opt/terraform-demo
cd /opt/terraform-demo
  • Create a main.tf file in the /opt/terraform-demo directory and copy/paste the content below. The below code creates four identical AWS EC2 instances in the AWS account using the Terraform count meta-argument.
resource "aws_instance" "my-machine" {
   count = 4   # Here we are creating identical 4 machines. 
   ami = var.ami
   instance_type = var.instance_type
   tags = {
      Name = "my-machine-${count.index}"
           }
}
  • Create another file named vars.tf in the /opt/terraform-demo directory and copy/paste the content below. The vars.tf file contains all the variables that you referred to in the main.tf file.
# Creating a Variable for ami
variable "ami" {       
  type = string
}

# Creating a Variable for instance_type
variable "instance_type" {    
  type = string
}
  • Create another file named terraform.tfvars in the /opt/terraform-demo directory and copy/paste the content below. The terraform.tfvars file contains all the values that are needed by the variables declared in the vars.tf file.
 ami = "ami-0742a572c2ce45ebf"
 instance_type = "t2.micro"

  • Create one more file named outputs.tf inside the /opt/terraform-demo directory and copy/paste the below content. This file contains the output variables that will be used to display the output after running the terraform apply command.
output "ec2_machines" {
 # Here * indicates that there are more than one arn because count is 4   
  value = aws_instance.my-machine.*.arn 
}
 
  • Create another file and name it provider.tf. This file allows Terraform to interact with the AWS cloud using the AWS API.
provider "aws" {
  region = "us-east-2"
}
  • Now your folder should contain all the files and should look like the structure shown below.
Terraform configurations and structure
  • Now your files and code are ready for execution. Initialize Terraform using the terraform init command.
terraform init
Initialize the terraform using the terraform init command.
  • Terraform initialized successfully; now it’s time to run the plan command, which provides the details of the deployment. Run the terraform plan command to confirm that the correct resources are going to be provisioned or deleted.
terraform plan
Running terraform plan command
The output of the terraform plan command
  • After verification, now it’s time to actually deploy the code using the terraform apply command.
terraform apply
Running terraform apply command

The Terraform commands terraform init → terraform plan → terraform apply all executed successfully. But it is important to manually verify all four AWS instances launched in AWS, either with the quick AWS CLI check sketched below or in the console as described after it.
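A hedged CLI sketch for that check; it assumes the instances are tagged my-machine-0 through my-machine-3 as in main.tf and that your default region is configured.

# List running instances whose Name tag matches the Terraform-created machines
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=my-machine-*" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].{ID:InstanceId,Type:InstanceType,State:State.Name}" \
  --output table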

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘EC2’, and click on the EC2 menu item and you should see four EC2 instances.
Four instances launched using Terraform count

Launch multiple EC2 instances using Terraform for_each

In the previous example, you created four AWS instances, but all the instances share the same attributes, such as instance_type and ami. If you need to create multiple instances with different attributes, such as one instance with the t2.medium type and others with t2.micro, you should consider using Terraform for_each.

Assuming you are still logged into the Ubuntu machine using your favorite SSH client.

  • Create a folder in the opt directory named terraform-for_each-demo and switch to this folder. This terraform-for_each-demo folder will contain all the configuration files that Terraform needs.
mkdir /opt/terraform-for_each-demo
cd /opt/terraform-for_each-demo
  • Create a main.tf file in the /opt/terraform-for_each-demo directory and copy/paste the content below. The below code creates two AWS EC2 instances with different instance types in the AWS account using the Terraform for_each argument.
resource "aws_instance" "my-machine" {
  ami = var.ami
  for_each  = {                     # for_each iterates over each key and values
      key1 = "t2.micro"             # Instance 1 will have key1 with t2.micro instance type
      key2 = "t2.medium"            # Instance 2 will have key2 with t2.medium instance type
        }
        instance_type  = each.value
	key_name       = each.key
    tags =  {
	   Name  = each.value
	}
}
  • Create another file named vars.tf in the /opt/terraform-for_each-demo directory and copy/paste the content below.
variable "tag_ec2" {
  type = list(string)
  default = ["ec21a","ec21b"]
}
                                           
variable "ami" {       # Creating a Variable for ami
  type = string
}
  • Create another file named terraform.tfvars in the /opt/terraform-for_each-demo directory and copy/paste the content below. Only the ami variable needs a value here, because the instance types are set by for_each in main.tf.
ami = "ami-0742a572c2ce45ebf"
  • Now you have all the Terraform configurations ready for execution.
  • Next, initialize Terraform using the terraform init command, followed by terraform plan, and finally terraform apply to deploy the changes.
terraform init 
terraform plan
terraform apply
Two instances launched using Terraform for_each


Conclusion

Terraform is a great open-source tool that provides the easiest code and configuration files to work with. Now you know how to launch multiple AWS EC2 instances on Amazon Web Services using Terraform count and Terraform for_each.

So which argument do you plan to use in your next Terraform deployment?