When you host applications on AWS and your Amazon infrastructure grows day by day, it becomes difficult to monitor. To solve this, Amazon provides its own managed service that allows you to store logs and access them anytime.
AWS CloudWatch Logs monitors many AWS services and stores the logs in separate or shared log groups.
In this tutorial you will learn everything you should know about CloudWatch Logs. Let's get into it without further delay.
Table of Contents
- What is AWS CloudWatch Logs?
- AWS CloudWatch Pricing
- Components of AWS CloudWatch Logs
- Collecting EC2 instance Logs with CloudWatch Logs
- Unified CloudWatch Agent
- Older CloudWatch Agent
- Installing the CloudWatch agent
- Downloading CloudWatch Agent
- Creating IAM roles for CloudWatch
- Create and modify the CloudWatch Agent configuration file
- Running AWS CloudWatch agent
- Analyzing log data with CloudWatch Logs Insights
- Running a CloudWatch Logs Insights query
- Running a CloudWatch Logs Insights query for Lambda function
- Running a CloudWatch Logs Insights query for Amazon VPC Flow Logs
- Running a CloudWatch Logs Insights query for Route53 logs
- Running a CloudWatch Logs Insights query for CloudTrail logs
- Creating a CloudWatch Log Group in CloudWatch Logs
- Checking Log Entries using the AWS Management Console
- Checking Log Entries using the AWS CLI
- Data at Rest vs Data in Transit
- Encrypting Log Data in CloudWatch Logs
- Creating an AWS KMS customer managed key
- Adding permissions to AWS KMS customer managed keys
- Associating the customer managed key with a log group when you create it
- Creating metrics from log events using filters
- Creating metric filters from log events
- Creating metric filters using the AWS CLI
- Posting Event data into CloudWatch Log groups using the AWS CLI
- To list metric filters using the AWS CLI
- Real-time processing of log data with subscriptions
- Creating CloudWatch Logs Subscription filter with Kinesis Data Streams
- Creating CloudWatch Logs Subscription filter with AWS Lambda function
- Publish Logs to AWS S3, Kinesis and CloudWatch Logs
- Publishing Logs to AWS CloudWatch Logs
- Publishing Logs to AWS S3
- Publishing Logs to Kinesis Firehose
- Conclusion
What is AWS CloudWatch Logs?
The AWS CloudWatch Logs service monitors, stores, and provides access to log files from various other AWS services such as Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53, and other sources.
- AWS CloudWatch also allows you to query your logs using a query language, mask sensitive information, and generate metrics using filters or the embedded metric format.
- CloudWatch Logs Insights is used to query, search, and analyze your log data. You can also use CloudWatch to monitor AWS EC2 logs.
- You can also create AWS CloudWatch alarms for various AWS services, for example to capture CloudTrail events.
- Use data protection policies to mask sensitive data in your logs.
- By default, logs are kept indefinitely and never expire; however, you can set a retention period from 1 day to 10 years depending on your requirements.
- You can also archive the logs in highly durable storage.
AWS CloudWatch Pricing
AWS CloudWatch Logs is free of cost for an AWS Free Tier account. However, for a standard account, logs such as VPC Flow Logs, EC2 logs, and Lambda logs are charged.
Metrics, dashboards, alarms, and various other components in AWS CloudWatch are also charged.
Components of AWS CloudWatch Logs
Log event: A log event is a record of some activity recorded by the application or resource being monitored. CloudWatch Logs understands two things from a log event: the timestamp and the raw event message.
Log streams: Log streams are groups of log events that share the same or a common source. They represent the sequence of events coming from the application instance or resource being monitored, such as Apache logs.
Log groups: Log groups define groups of log streams that share the same retention, monitoring, and access control settings. Each log stream has to belong to one log group.
Metric filters
You can use metric filters on ingested events to create metrics data points in a CloudWatch metric. Metric filters are assigned to log groups, and all of the filters assigned to a log group are applied to their log streams.
Retention settings
Retention settings can be used to specify how long log events are kept in CloudWatch Logs.

Collecting EC2 instance Logs with CloudWatch Logs
There are two agents with which AWS EC2 instance logs can be captured for CloudWatch Logs:
Unified CloudWatch Agent
- The latest and recommended agent is the unified CloudWatch agent, which supports multiple operating systems, including servers running Windows Server. This agent also provides better performance.
- Retrieve custom metrics from your applications or services using the StatsD and collectd protocols.
- StatsD is supported on both Linux servers and servers running Windows Server.
- collectd is supported only on Linux servers.
- The default namespace for metrics collected by the CloudWatch agent is CWAgent, although you can specify a different namespace when you configure the agent.
Older CloudWatch Agent
The older CloudWatch agent supports collecting logs only from servers running Linux.
Installing the CloudWatch agent
Below are the high-level steps that you need to perform to install the CloudWatch agent.
- Create IAM roles or users that enable the agent to collect metrics from the server and optionally to integrate with AWS Systems Manager.
- Download the agent package. You can download and install the CloudWatch agent manually using the command line, or you can integrate it with SSM.
- Modify the CloudWatch agent configuration file and specify the metrics that you want to collect.
- Install and start the agent on your servers. When you install the agent on an EC2 instance, you attach the IAM role that you created in step 1. When you install the agent on an on-premises server, you specify a named profile that contains the credentials of the IAM user that you created in step 1.
Downloading CloudWatch Agent
Let's now get into the details of what you need to do to install and work with the CloudWatch agent.
- Install the package on an AWS EC2 instance using the command below.
sudo yum install amazon-cloudwatch-agent
- Create IAM role and attach it to the AWS EC2 instance that has the CloudWatchAgentServerPolicy attached.
- You can also download the agent package from AWS S3 and install it manually: use the RPM package on Amazon Linux and other RPM-based systems, and the DEB package on Debian-based systems.
wget https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm
sudo rpm -U ./amazon-cloudwatch-agent.rpm
sudo dpkg -i -E ./amazon-cloudwatch-agent.deb
Creating IAM roles for CloudWatch
Next, create the IAM role from the IAM Management Console and attach the CloudWatchAgentServerPolicy policy. If you want the CloudWatch agent to set the retention policy for the log groups that it sends log events to, add the statement below to the policy.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:PutRetentionPolicy",
      "Resource": "*"
    }
  ]
}
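If you prefer the AWS CLI to the console, a minimal sketch of creating the role might look like the below (the role name and trust-policy file name are assumptions, not fixed values):
# ec2-trust.json (assumed file) holds a trust policy allowing EC2 to assume the role
aws iam create-role --role-name CWAgentRole --assume-role-policy-document file://ec2-trust.json
# Attach the AWS managed policy the agent needs
aws iam attach-role-policy --role-name CWAgentRole --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy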
Create and modify the CloudWatch Agent configuration file
The agent configuration file is a JSON file with three sections: agent, metrics, and logs that specifies the metrics and logs that the agent is to collect, including custom metrics. The agent configuration file wizard, amazon-cloudwatch-agent-config-wizard.
The wizard can autodetect the credentials and AWS Region to use if you have the AWS credentials and configuration files in place before you start the wizard.
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
You can also create the cloud watch agent configuration file manually or sometimes it installed with agent
- Agent section is declared as below.
"agent": {
"metrics_collection_interval": 60,
"region": "us-west-1",
"logfile": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log",
"debug": false,
"run_as_user": "cwagent"
}
- The metrics section is declared as below.
{
  "metrics": {
    "namespace": "Development/Product1Metrics",
    ......
  }
}
- The logs section is declared as below.
"collect_list": [
  {
    "file_path": "/opt/aws/amazon-cloudwatch-agent/logs/test.log",
    "log_group_name": "test.log",
    "log_stream_name": "test.log",
    "filters": [
      {
        "type": "exclude",
        "expression": "Firefox"
      },
      {
        "type": "include",
        "expression": "P(UT|OST)"
      }
    ]
  },
  .....
]
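Putting the sections together, a minimal complete configuration file that ships a single log file might look like the sketch below (the file path, group name, and output location are assumptions; adjust them for your setup):
sudo tee /opt/aws/amazon-cloudwatch-agent/etc/config.json > /dev/null <<'EOF'
{
  "agent": {
    "metrics_collection_interval": 60,
    "run_as_user": "cwagent"
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/messages",
            "log_group_name": "ec2-syslog",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
EOF
Pass this file's path to the -c file: flag when starting the agent, as shown in the next section.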
Running AWS CloudWatch agent
Finally, in this section, run the CloudWatch agent by performing the below steps.
- Copy the agent configuration file that you want to use to the server where you’re going to run the agent. Note the pathname where you copy it to.
- On an EC2 instance running Linux, run the below command.
- -a fetch-config causes the agent to load the latest version of the CloudWatch agent configuration file.
- -s starts the agent.
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:configuration-file-path
- On an EC2 instance running Windows Server, enter the following from the PowerShell console.
& "C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-ctl.ps1" -a fetch-config -m ec2 -s -c file:configuration-file-path
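Once the agent is started, you can confirm it is running using the same ctl script's status action; for example on Linux:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a status
# The output reports whether the agent is running and which configuration it is using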
Analyzing log data with CloudWatch Logs Insights
If you need to analyze your log data more accurately and interactively, you can use CloudWatch Logs Insights in Amazon CloudWatch Logs.
- CloudWatch Logs Insights automatically discovers fields in logs from AWS services such as Amazon Route 53, AWS Lambda, AWS CloudTrail, and Amazon VPC, and any application or custom log that emits log events as JSON.
- A single request can query up to 50 log groups. Queries time out after 60 minutes if they have not completed. Query results are available for 7 days.
- CloudWatch Logs Insights automatically generates five system fields:
- @message contains the raw unparsed log event.
- @timestamp contains the event timestamp from the log event's timestamp field.
- @ingestionTime contains the time when CloudWatch Logs received the log event.
- @logStream contains the name of the log stream that the log event was added to.
- @log is a log group identifier in the form of account-id:log-group-name.
- Let's say you have the below log in JSON format and you want to access the type field; you would use userIdentity.type.
{
  "eventVersion": "1.0",
  "userIdentity": {
    "type": "IAMUser",
    "principalId": "EX_PRINCIPAL_ID",
    "arn": "arn:aws:iam::123456789012:user/Alice",
    "accessKeyId": "EXAMPLE_KEY_ID",
    "accountId": "123456789012",
    "userName": "Alice"
  },
  ...
}
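You can run such queries from the AWS CLI too; a sketch using start-query and get-query-results (the log group name and the one-hour window are assumptions):
aws logs start-query \
  --log-group-name CloudTrail/logs \
  --start-time $(date -d '1 hour ago' +%s) \
  --end-time $(date +%s) \
  --query-string 'fields @timestamp, userIdentity.type | filter userIdentity.type = "IAMUser"'
# start-query returns a queryId; fetch the results with:
# aws logs get-query-results --query-id <queryId>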
Running a CloudWatch Logs Insights query
Below are the steps to run a CloudWatch Logs Insights query.
- Open the CloudWatch console.
- In the navigation pane, choose Logs, and then choose Logs Insights. On the Logs Insights page, go to the query editor.
- In the Select log group(s) drop down, choose one or more log groups to query.
- Choose Run to view the results.
- To see all fields for a returned log event, choose the triangular dropdown icon to the left of the numbered event.

- Some example queries are as follows: count events per log stream, count events by the value of a field, and count events in 30-second bins.
stats count(*) by @logStream | limit 100
stats count(*) by fieldName
stats count(*) by bin(30s)
Running a CloudWatch Logs Insights query for Lambda function
To run a CloudWatch Logs Insights query for a Lambda function that determines the amount of overprovisioned memory, run the below query.
filter @type = "REPORT"
| stats max(@memorySize / 1000 / 1000) as provisionedMemoryMB,
min(@maxMemoryUsed / 1000 / 1000) as smallestMemoryRequestMB,
avg(@maxMemoryUsed / 1000 / 1000) as avgMemoryUsedMB,
max(@maxMemoryUsed / 1000 / 1000) as maxMemoryUsedMB,
provisionedMemoryMB - maxMemoryUsedMB as overProvisionedMB
Running a CloudWatch Logs Insights query for Amazon VPC Flow Logs
To run a CloudWatch Logs Insights query for Amazon VPC Flow Logs that determines the top 15 packet transfers across hosts, run the below query.
stats sum(packets) as packetsTransferred by srcAddr, dstAddr
| sort packetsTransferred desc
| limit 15
Running a CloudWatch Logs Insights query for Route53 logs
To run a CloudWatch Logs Insights query for Route53 that determines the distribution of records per hour by query type, run the below query.
stats count(*) by queryType, bin(1h)
Running a CloudWatch Logs Insights query for CloudTrail logs
- Find the Amazon EC2 hosts that were started or stopped in a given AWS Region.
filter (eventName="StartInstances" or eventName="StopInstances") and awsRegion="us-east-2"
Note: After you run a query, you can add the query to a CloudWatch dashboard or copy the results to the clipboard.
Creating a CloudWatch Log Group in CloudWatch Logs
A log stream is a sequence of log events that share the same source. Each separate source of logs in CloudWatch Logs makes up a separate log stream, and a log group is a group of log streams with the same configuration.
In this section we will learn how to create a log group in the CloudWatch Logs service. Let's perform the below steps.
- Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/
- In the navigation pane, choose Log groups.
- Choose Actions, and then choose Create log group.
- Enter a name for the log group, and then choose Create log group.
Note: You can send logs to CloudWatch using the CloudWatch agent, the AWS CLI, or programmatically.
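The same can be done from the AWS CLI; a sketch that also sets a 30-day retention period (the group name and retention value are assumptions):
aws logs create-log-group --log-group-name my-log-group
# Optionally control how long events are kept
aws logs put-retention-policy --log-group-name my-log-group --retention-in-days 30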
Checking Log Entries using the AWS Management Console
To check the log entries using the AWS Management Console, perform the following steps.
- Open the CloudWatch console and choose Log groups.
- Look for the right log group, then check its log streams and look through the log events.
Checking Log Entries using the AWS CLI
You can run the below command to search log entries using the AWS CLI.
aws logs filter-log-events --log-group-name my-group [--log-stream-names LIST_OF_STREAMS_TO_SEARCH] [--filter-pattern VALID_METRIC_FILTER_PATTERN]
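For example, a concrete invocation that searches the last day of a group for error lines might look like this (the group name and pattern are assumptions):
aws logs filter-log-events \
  --log-group-name my-group \
  --filter-pattern "ERROR" \
  --start-time $(date -d '1 day ago' +%s000)
# --start-time is milliseconds since the epoch; +%s000 appends zeros to GNU date's seconds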
Data at Rest vs Data in Transit
This section is really important to understand what is Data at rest and what is Data in Transit. The data that resides with your cloud or is brought into the AWS account has to be secure always. So, all the AWS services has ability to encrypt the data either at rest or during in transit.
AWS services uses encryption either using service side encryption or a client side encryption where AWS manages service side using AWS KMS keys and for client side encryption client manages it using various methods including AWS KMS keys.
Data at rest means the data is kept and stored and to encrypt the data we can use AWS KMS keys however for data in transit customers have a choice either by using a protocol like Transport Layer Security (TLS). All AWS service endpoints support TLS to create a secure HTTPS connection to make API requests.
Using services like AWS KMS, AWS CloudHSM, and AWS ACM, customers can implement a comprehensive data at rest and data in transit encryption strategy across their AWS account.
Encrypting Log Data in CloudWatch Logs
Log data is always encrypted in CloudWatch Logs. By default, CloudWatch Logs uses server-side encryption for log data at rest. However, you can also use AWS Key Management Service with AWS KMS customer managed keys. Let's see how you can achieve this.
- Encryption using AWS KMS is enabled at the log group level, by associating a key with a log group.
- The encryption is done using an AWS KMS customer managed key.
- CloudWatch Logs supports only symmetric customer managed keys.
- You must have kms:CreateKey, kms:GetKeyPolicy, and kms:PutKeyPolicy permissions.
- If you revoke CloudWatch Logs access to an associated key or delete an associated customer managed key, your encrypted data in CloudWatch Logs can no longer be retrieved.
Let's follow the below steps to implement encryption in AWS CloudWatch Logs.
Creating an AWS KMS customer managed key
- Run the below command to create an AWS KMS key.
aws kms create-key
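The command returns the key metadata, and you will need the key ARN in the following steps; a sketch that captures it in a shell variable:
KEY_ARN=$(aws kms create-key --query KeyMetadata.Arn --output text)
echo "$KEY_ARN"   # looks like arn:aws:kms:region:account-id:key/key-id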
Adding permissions to AWS KMS customer managed keys
- By default only the resource owner has permissions to encrypt or decrypt the data, so it's important to grant other users and resources permission to access the key. Your key policy should look something like the one below.
- Note: CloudWatch Logs supports encryption context, using kms:EncryptionContext:aws:logs:arn as the key and the ARN of the log group as the value for that key.
- Encryption context is a set of key-value pairs that are used as additional authenticated data. The encryption context enables you to use IAM policy conditions to limit access to your AWS KMS key by AWS account and log group.
{
  "Version": "2012-10-17",
  "Id": "key-default-1",
  "Statement": [
    {
      "Sid": "Enable IAM User Permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::Your_account_ID:root"
      },
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.region.amazonaws.com"
      },
      "Action": [
        "kms:Encrypt*",
        "kms:Decrypt*",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:Describe*"
      ],
      "Resource": "*",
      "Condition": {
        "ArnEquals": {
          "kms:EncryptionContext:aws:logs:arn": "arn:aws:logs:region:account-id:log-group:log-group-name"
        }
      }
    }
  ]
}
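To apply the edited policy to the key from the CLI, you can use put-key-policy; a sketch assuming the JSON above is saved as policy.json:
aws kms put-key-policy \
  --key-id "$KEY_ARN" \
  --policy-name default \
  --policy file://policy.json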
Associating the customer managed key with a log group when you create it
- Use the create-log-group command as follows.
aws logs create-log-group --log-group-name my-log-group --kms-key-id "key-arn"
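If the log group already exists, you can instead associate the key with it directly:
aws logs associate-kms-key --log-group-name my-log-group --kms-key-id "key-arn"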
Creating metrics from log events using filters
We can filter the log data coming into CloudWatch Logs by creating one or more metric filters. CloudWatch Logs uses these metric filters to turn log data into numerical CloudWatch metrics that you can graph or set an alarm on.
Components of Metric Filters
- default value: The value reported to the metric during a period when no matching logs are ingested. If no default value is set, no value is reported for that period.
- dimensions: Dimensions are the key-value pairs that further define a metric.
- metric name: The name of the CloudWatch metric to which the monitored log data is published.
- metric namespace: The destination namespace of the new CloudWatch metric.
- metric value: The value published to the metric, such as a count or a value extracted from the log event.
Creating metric filters from log events
In this section we will go through steps which will guide you through creating metric filters from log events.
- Open the CloudWatch console.
- In the navigation pane, choose Logs, and then choose Log groups.

- Choose the name of the log group.
- Choose Actions, and then choose Create metric filter.

- For Filter pattern, enter a filter pattern. To test your filter pattern, under Test Pattern, enter one or more log events to test the pattern.

Note: You can also use the below filter pattern to find HTTP 404 status code errors.
For Filter Pattern, type [IP, UserInfo, User, Timestamp, RequestInfo, StatusCode=404, Bytes].
- Choose Next, and then enter a name for your metric filter.
- Under Metric details, for Metric namespace, enter a name for the CloudWatch namespace where the metric will be published. If the namespace doesn’t already exist, make sure that Create new is selected.
- For Metric name, enter a name for the new metric.
- For Metric value, if your metric filter is counting occurrences of the keywords in the filter, enter 1.

- Finally review and create the metrics.

Creating metric filters using the AWS CLI
The other way of creating metric filters is by using the AWS CLI. Lets checkout the below command to create metric filters using the AWS CLI.
aws logs put-metric-filter \
--log-group-name MyApp/access.log \
--filter-name EventCount \
--filter-pattern " " \
--metric-transformations \
metricName=MyAppEventCount,metricNamespace=MyNamespace,metricValue=1,defaultValue=0
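Once the metric exists, you can set an alarm on it; a minimal sketch, assuming an SNS topic for notifications already exists:
aws cloudwatch put-metric-alarm \
  --alarm-name MyAppEventCountAlarm \
  --namespace MyNamespace \
  --metric-name MyAppEventCount \
  --statistic Sum \
  --period 300 \
  --threshold 100 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:my-topic
# Fires when more than 100 matching events arrive within 5 minutes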
Posting Event data into CloudWatch Log groups using the AWS CLI
aws logs put-log-events \
--log-group-name MyApp/access.log --log-stream-name TestStream1 \
--log-events \
timestamp=1394793518000,message="Test event 1" \
timestamp=1394793518000,message="Test event 2" \
timestamp=1394793528000,message="This message also contains an Error"
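Note that the log stream must exist before you post events to it, and timestamps are milliseconds since the epoch; a sketch:
aws logs create-log-stream --log-group-name MyApp/access.log --log-stream-name TestStream1
TS=$(date +%s000)   # current time in milliseconds (GNU date)
aws logs put-log-events --log-group-name MyApp/access.log \
  --log-stream-name TestStream1 \
  --log-events timestamp=$TS,message="Live test event"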
To list metric filters using the AWS CLI
aws logs describe-metric-filters --log-group-name MyApp/access.log
Real-time processing of log data with subscriptions
You can use subscriptions to get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services such as an Amazon Kinesis stream, an Amazon Kinesis Data Firehose stream, or AWS Lambda for custom processing, analysis, or loading to other systems.
To begin subscribing to log events, create the receiving resource, such as a Kinesis Data Streams stream, where the events will be delivered. A subscription filter defines the filter pattern to use for filtering which log events get delivered to your AWS resource, as well as information about where to send matching log events.
CloudWatch Logs also produces CloudWatch metrics about the forwarding of log events to subscriptions.
You can use a subscription filter with Kinesis Data Streams, Lambda, or Kinesis Data Firehose. Logs that are sent to a receiving service through a subscription filter are base64 encoded and compressed with the gzip format.
Creating CloudWatch Logs Subscription filter with Kinesis Data Streams
In this section we will create an AWS CloudWatch Logs subscription filter and send the logs to Kinesis Data Streams.

- Create a destination stream in the Kinesis Data Streams service using the below command.
aws kinesis create-stream --stream-name "RootAccess" --shard-count 1
- Check that the Kinesis data stream is in an active state.
aws kinesis describe-stream --stream-name "RootAccess"
- Create the IAM role that will grant CloudWatch Logs permission to put data into your stream. Also make sure to add the trust policy to the role as follows.
{
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "logs.amazonaws.com" },
    "Action": "sts:AssumeRole",
    "Condition": {
      "StringLike": { "aws:SourceArn": "arn:aws:logs:region:123456789012:*" }
    }
  }
}
- In case of cross-account delivery, the IAM role trust policy should look something like below.
{
  "Statement": {
    "Effect": "Allow",
    "Principal": {
      "Service": "logs.amazonaws.com"
    },
    "Condition": {
      "StringLike": {
        "aws:SourceArn": [
          "arn:aws:logs:region:sourceAccountId:*",
          "arn:aws:logs:region:recipientAccountId:*"
        ]
      }
    },
    "Action": "sts:AssumeRole"
  }
}
- Save the trust policy to a file and create the role using it as below.
aws iam create-role --role-name CWLtoKinesisRole --assume-role-policy-document file://~/TrustPolicyForCWL-Kinesis.json
- Attach a policy to the IAM role that you created previously.
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kinesis:PutRecord",
      "Resource": "arn:aws:kinesis:region:123456789012:stream/RootAccess"
    }
  ]
}
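To attach this permissions policy to the role from the CLI, you can use put-role-policy; a sketch assuming the JSON above is saved as PermissionsForCWL-Kinesis.json:
aws iam put-role-policy \
  --role-name CWLtoKinesisRole \
  --policy-name Permissions-Policy-For-CWL \
  --policy-document file://~/PermissionsForCWL-Kinesis.json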
- In case of cross-account delivery, an additional step is required where you attach an access policy that allows the sending account to put a subscription filter on the destination in this account.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "111111111111"
      },
      "Action": "logs:PutSubscriptionFilter",
      "Resource": "arn:aws:logs:region:999999999999:destination:testDestination"
    }
  ]
}
- Create a CloudWatch Logs subscription filter. The subscription filter immediately starts the flow of real-time log data from the chosen log group to your stream. In the cross-account case, the subscription filter is created in the sending account.
aws logs put-subscription-filter \
--log-group-name "CloudTrail/logs" \
--filter-name "RootAccess" \
--filter-pattern '{$.userIdentity.type = "Root"}' \
--destination-arn "arn:aws:kinesis:region:123456789012:stream/RootAccess" \
--role-arn "arn:aws:iam::123456789012:role/CWLtoKinesisRole"
- After you set up the subscription filter, CloudWatch Logs forwards all incoming log events that match the filter pattern to your stream. Verify by fetching a shard iterator and reading records, as shown below.
aws kinesis get-shard-iterator --stream-name RootAccess --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON
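get-shard-iterator returns a ShardIterator value; pass it to get-records to read the delivered events, which arrive base64 encoded and gzip compressed:
aws kinesis get-records --limit 10 --shard-iterator "<ShardIterator-from-previous-command>"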
Creating CloudWatch Logs Subscription filter with AWS Lambda function
In this section we will create an AWS CloudWatch Logs subscription filter and send the logs to an AWS Lambda function.
- Create the AWS Lambda function. Let's create a sample Lambda function using the AWS CLI as below.
aws lambda create-function \
--function-name helloworld \
--zip-file fileb://file-path/helloWorld.zip \
--role lambda-execution-role-arn \
--handler helloWorld.handler \
--runtime nodejs12.x
- Grant CloudWatch Logs the permission to execute your function.
aws lambda add-permission \
--function-name "helloworld" \
--statement-id "helloworld" \
--principal "logs.amazonaws.com" \
--action "lambda:InvokeFunction" \
--source-arn "arn:aws:logs:region:123456789123:log-group:TestLambda:*" \
--source-account "123456789123"
- Create a subscription filter using the following command.
aws logs put-subscription-filter \
--log-group-name myLogGroup \
--filter-name demo \
--filter-pattern "" \
--destination-arn arn:aws:lambda:region:123456789123:function:helloworld
- Verify by running the below command.
aws logs put-log-events --log-group-name myLogGroup --log-stream-name stream1 --log-events "[{\"timestamp\":<CURRENT TIMESTAMP MILLIS> , \"message\": \"Simple Lambda Test\"}]"
Publish Logs to AWS S3, Kinesis and CloudWatch Logs
AWS services that publish logs to CloudWatch Logs include API Gateway, Aurora MySQL, AWS VPC Flow Logs, etc. While many services publish logs only to CloudWatch Logs, some AWS services can publish logs directly to Amazon Simple Storage Service or Amazon Kinesis Data Firehose.

Publishing Logs to AWS CloudWatch Logs
If you need to send logs to CloudWatch, then the user or account through which you are logged in needs the below permissions.
logs:CreateLogDelivery
logs:PutResourcePolicy
logs:DescribeResourcePolicies
logs:DescribeLogGroups
When logs are sent to log groups in AWS CloudWatch, the resource policy is created automatically if you have the above permissions; otherwise, create and attach the resource policy to the log group as shown below.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSLogDeliveryWrite20150319",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "delivery.logs.amazonaws.com"
        ]
      },
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:us-east-1:0123456789:log-group:my-log-group:log-stream:*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": ["0123456789"]
        },
        "ArnLike": {
          "aws:SourceArn": ["arn:aws:logs:us-east-1:0123456789:*"]
        }
      }
    }
  ]
}
Publishing Logs to AWS S3
When logs are published to AWS S3 for the first time, the service that delivers them becomes the owner of the bucket. If you need to send logs to AWS S3, then the user or account through which you are logged in needs the below permissions.
logs:CreateLogDelivery
s3:GetBucketPolicy
s3:PutBucketPolicy
The bucket should have a resource policy as shown below.
{
  "Version": "2012-10-17",
  "Id": "AWSLogDeliveryWrite20150319",
  "Statement": [
    {
      "Sid": "AWSLogDeliveryAclCheck",
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Sid": "AWSLogDeliveryWrite",
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/AWSLogs/account-ID/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control",
          "aws:SourceAccount": ["0123456789"]
        },
        "ArnLike": {
          "aws:SourceArn": ["arn:aws:logs:us-east-1:0123456789:*"]
        }
      }
    }
  ]
}
Note: You can protect the data in your Amazon S3 bucket by enabling either server-side encryption with Amazon S3-managed keys (SSE-S3) or server-side encryption with an AWS KMS key stored in AWS Key Management Service (SSE-KMS).
If you choose customer managed AWS KMS keys, then your key policy must include the statement below.
{
  "Sid": "Allow Logs Delivery to use the key",
  "Effect": "Allow",
  "Principal": {
    "Service": [ "delivery.logs.amazonaws.com" ]
  },
  "Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:ReEncrypt*",
    "kms:GenerateDataKey*",
    "kms:DescribeKey"
  ],
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "aws:SourceAccount": ["0123456789"]
    },
    "ArnLike": {
      "aws:SourceArn": ["arn:aws:logs:us-east-1:0123456789:*"]
    }
  }
}
Publishing Logs to Kinesis Firehose
To set up sending any of these types of logs to Kinesis Data Firehose for the first time, you must be logged in to an account with the following permissions.
logs:CreateLogDelivery
firehose:TagDeliveryStream
iam:CreateServiceLinkedRole
Because Kinesis Data Firehose does not use resource policies, AWS uses IAM roles when setting up these logs to be sent to Kinesis Data Firehose. AWS creates a service-linked role named AWSServiceRoleForLogDelivery, which includes the following permissions policy.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch",
        "firehose:ListTagsForDeliveryStream"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/LogDeliveryEnabled": "true"
        }
      },
      "Effect": "Allow"
    }
  ]
}
This service-linked role also has a trust policy that allows the delivery.logs.amazonaws.com service principal to assume the needed service-linked role. That trust policy is as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "delivery.logs.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Conclusion
In this tutorial you learned everything you need to know to securely push logs into CloudWatch Logs and store them. You also learned how to view and retrieve data from CloudWatch Logs.
With this knowledge you will certainly be able to secure your applications and troubleshoot them easily from a central location. Go for it and implement it.