What is AWS RDS (Relational Database Service)?

In this post you will learn everything you need to know about AWS RDS, end to end. This tutorial gives you a glimpse of each component, starting from what a DB instance is through scaling and Multi-AZ cluster configurations.

Let's get started.

What is AWS RDS (Relational Database Service)?

  • AWS RDS is a managed database service that allows you to set up a relational database in the AWS Cloud.
  • It is cost-effective and offers resizable capacity; investing in your own hardware, memory, and CPU is time-consuming and very costly.
  • AWS RDS manages everything for you: scaling, availability, backups, software patching and installation, OS patching and installation, hardware lifecycle, and server maintenance.
  • You can define permissions for your database users and databases with IAM.

Database Instance

A DB instance is an isolated database environment in which you create your database users and user-created databases.

  1. You can run your database instance in multiple Availability Zones, also known as a Multi-AZ deployment. Amazon automatically provisions and maintains a secondary standby instance in a different Availability Zone. With this approach, the primary DB instance replicates the data written to it to the standby instance located in another AZ. Note: Secondary instances can also be configured as read replicas.
  2. You can attach security groups to your database instance to control access to it.
  3. You can launch a DB instance in a Local Zone as well by enabling the Local Zone in the Amazon EC2 console.
  4. You can use Amazon CloudWatch to monitor the status of your database instance. You can monitor the following metrics:
    1. IOPS: I/O operations per second.
    2. Latency: the time from when an I/O request is submitted until it completes.
    3. Throughput: the number of bytes transferred per second to or from disk.
    4. Queue depth: how many I/O requests are pending in the queue.
  5. Each DB instance has a unique DB instance identifier, supplied by the customer, that must be unique for your account in an AWS Region. If you provide the DB instance identifier as testing, then your endpoint will be formed as below.
testing.<account-id>.<region>.rds.amazonaws.com
  • A DB instance supports various DB engines: MySQL, MariaDB, PostgreSQL, Oracle, Microsoft SQL Server, and the Amazon Aurora database engines.
  • A DB instance can host multiple databases with multiple schemas.
  • When you create a DB instance with the AWS RDS service, it creates a master user account by default, and this user has all permissions. Note: Make sure to change the password of this master user account.
  • You can back up your DB instance by creating database snapshots, which are stored in Amazon S3 (see the CLI sketch after this list).
  • You can enable IAM database authentication on your database instance so that you don't need a password to log in; you connect with an authentication token instead.
  • You can also enable Kerberos authentication to support external authentication of database users using Kerberos and Microsoft Active Directory.
  • DB instances are billed per hour.
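
To illustrate the snapshot workflow, here is a minimal AWS CLI sketch; the identifiers mydb and mydb-snapshot-2024 are placeholder values.

# Create a manual snapshot of the DB instance
aws rds create-db-snapshot --db-instance-identifier mydb --db-snapshot-identifier mydb-snapshot-2024

# List snapshots for the instance to confirm it was created
aws rds describe-db-snapshots --db-instance-identifier mydb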

DB Engines

A DB engine is the specific database software that runs on your DB instance, such as MariaDB, Microsoft SQL Server, MySQL, Oracle, or PostgreSQL.

DB Instance class

The DB instance class determines the compute and memory capacity of a DB instance. AWS RDS supports three types of DB instance classes:

  • General purpose: balanced compute and memory for a broad range of workloads.
  • Memory optimized: designed for memory-intensive workloads.
  • Burstable performance: a baseline level of CPU with the ability to burst above it.
  1. DB instance classes support Intel Hyper-Threading Technology, which enables multiple threads to run in parallel on a single Intel Xeon CPU core. Each thread is represented as a vCPU on the DB instance. For example, the db.m4.xlarge DB instance class has two CPU cores and two threads per core, for a total of four vCPUs. Note: You can disable hyper-threading by specifying a single thread per CPU core for high-performance computing workloads.
  2. To set the core count and threads per core, you edit the instance's processor features (see the sketch after this list).
  3. Quick note: To compare CPU capacity between different DB instance classes, use ECUs (EC2 Compute Units). The amount of CPU allocated to a DB instance is expressed in terms of EC2 Compute Units.
  4. You can use EBS-optimized instances, which are good for your DB instance because they provide better performance by minimizing contention between Amazon EBS I/O and other traffic from your instance.
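
A hedged AWS CLI sketch of editing processor features to disable hyper-threading; mydb is a placeholder identifier, and this option applies only to instance classes and engines that support configurable processor features.

# Set 2 CPU cores with 1 thread per core (disables hyper-threading)
aws rds modify-db-instance --db-instance-identifier mydb --processor-features "Name=coreCount,Value=2" "Name=threadsPerCore,Value=1" --apply-immediately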

DB Instance Storage

Amazon RDS uses Amazon EBS block-level storage volumes for your DB instance. DB instance storage comes in the following types:

  • General Purpose (SSD) [gp2 and gp3]: cost-effective storage that is ideal for a broad range of workloads on medium-sized DB instances. Generally, these volumes have a throughput limit of 250 MB/second.
  • For gp2:
    • 3 IOPS per GiB, with a minimum of 100 IOPS (I/O operations per second).
    • 16,000 IOPS at 5.34 TiB is the maximum for gp2.
    • Throughput is capped at 250 MB/sec, where throughput is how fast the storage volume can perform reads and writes.
  • For gp3:
    • Up to 32,000 IOPS.
  • Provisioned IOPS (PIOPS) [io1]: used when you need low I/O latency and consistent I/O throughput. These are suited for production environments (see the creation sketch after this list).
    • For io1: up to 256,000 IOPS and throughput up to 4,000 MB/s.
    • Note: the benefits of using Provisioned IOPS are:
      • An increased number of I/O requests that the system can process.
      • Decreased latency, because fewer I/O requests wait in the queue.
      • Faster response times and higher database throughput.
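
To tie the storage types together, below is a minimal sketch of creating a MySQL DB instance with io1 storage; all identifiers, credentials, and sizes are example values.

# Create a DB instance with 100 GiB of io1 storage and 3,000 provisioned IOPS
aws rds create-db-instance --db-instance-identifier mydb --db-instance-class db.m5.large --engine mysql --master-username admin --master-user-password 'ChangeMe123!' --allocated-storage 100 --storage-type io1 --iops 3000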

Blue/Green Deployments

A Blue/Green deployment copies your database environment into a separate staging environment. You can make changes in the staging environment and later push those changes to the production environment. Blue/Green deployments are available only for RDS for MariaDB and RDS for MySQL.

Working with Read Replicas

  • Updates from the primary DB instance are copied to the read replicas.
  • You can promote a read replica to a standalone DB instance, for example when you require sharding (a shared-nothing architecture).
  • You can also create a read replica in a different AWS Region (see the CLI sketch after this list).
  • How does cross-Region replication work?
    • The IAM role of the destination must have access to the source DB instance.
    • The source DB instance acts as the source.
    • RDS creates an automated DB snapshot of the source DB instance.
    • The snapshot copy starts.
    • The destination read replica is created from the copied DB snapshot.
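
As referenced above, a minimal sketch of creating a cross-Region read replica with the AWS CLI; the ARN and identifiers are placeholders.

# Run in the destination Region; the source instance is referenced by its ARN
aws rds create-db-instance-read-replica --db-instance-identifier mydb-replica --source-db-instance-identifier arn:aws:rds:us-east-1:123456789012:db:mydb --region us-west-2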

Cross Region Read Replicas

With cross-Region read replicas, you can create read replicas in a different AWS Region from the source DB instance.

Cross Region Automated Backups

You can configure a DB instance to replicate snapshots and transaction logs to another AWS Region.

Multi AZ Deployments

  • You can run your database instance in multiple AZs, also known as a Multi-AZ deployment. Amazon automatically provisions and maintains a secondary standby instance in a different Availability Zone. With this approach, the primary DB instance replicates the data written to it to the standby instance located in another AZ. Note: Secondary instances can also be configured as read replicas.
  • You can deploy either one or two standby instances.
  • With one standby instance, it is known as a Multi-AZ DB instance deployment: the standby provides failover support but doesn't act as a read replica.
  • With two standby instances, it is known as a Multi-AZ DB cluster.
  • The failover mechanism automatically changes the Domain Name System (DNS) record of the DB instance to point to the standby DB instance.

Note: DB instances with Multi-AZ DB instance deployments can have increased write and commit latency compared to a single-AZ deployment.

Multi AZ DB instance deployment

In a Multi-AZ DB instance deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. You can't use the standby replica to serve read traffic.

If a planned or unplanned outage of your DB instance results from an infrastructure defect, Amazon RDS automatically switches to a standby replica in another Availability Zone if you have turned on Multi-AZ.

How to convert a single DB instance to a Multi-AZ DB instance deployment

  • RDS takes a snapshot of the primary DB instance's EBS volume.
  • RDS creates a new volume for the standby replica from that snapshot.
  • Next, RDS turns on synchronous block-level replication between the primary and the standby volumes.

Multi-AZ DB Cluster Deployments

  • It has one writer DB instance.
  • It has two reader DB instances that allow clients to read data.
  • AWS RDS replicates data from the writer instance to both reader instances using semi-synchronous replication.
  • If a failover happens on the writer instance, a reader instance acts as the automatic failover target: RDS promotes a reader DB instance to be the new writer. This happens automatically, typically within 35 seconds, and you can also trigger it from the Failover tab.

Cluster Endpoint

The cluster endpoint connects to the current writer DB instance and can be used for both reads and writes. This endpoint cannot be modified.

Reader Endpoint

The reader endpoint provides load-balanced, read-only connections to the DB cluster.

Instance Endpoint

Instance endpoints are used to connect directly to a specific DB instance, either to diagnose issues within that instance or because your application requires fine-grained load balancing.

DB cluster parameter group

A DB cluster parameter group acts as a container for engine configuration values that are applied to every DB instance in the Multi-AZ DB cluster.

Replica Lag

Replica lag is the difference in time between the latest transaction on the writer DB instance and the latest applied transaction on a reader instance. It can be caused by high write concurrency or heavy batch updates.

How to Solve Replica Lag

You can reduce replica lag by reducing the load on your writer DB instance. You can also use flow control: with flow control, a delay is added at the end of a transaction, which decreases the write throughput on the writer instance. To turn on flow control, use the parameter below; a CLI sketch for setting it follows. By default, it is set to 120 seconds. You can turn flow control off by setting the parameter to its maximum value of 86,400 seconds (one day).

Flow control works by throttling writes on the writer DB instance, which ensures that replica lag doesn't grow unbounded. Write throttling is accomplished by adding a delay.

rpl_semi_sync_master_target_apply_lag
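
Because this is a cluster-level MySQL parameter, you set it in the DB cluster parameter group. A hedged CLI sketch, assuming a custom cluster parameter group named my-cluster-params:

# Set the target apply lag to 120 seconds
aws rds modify-db-cluster-parameter-group --db-cluster-parameter-group-name my-cluster-params --parameters "ParameterName=rpl_semi_sync_master_target_apply_lag,ParameterValue=120,ApplyMethod=immediate"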

To check the status of flow control, use the below command.

SHOW GLOBAL STATUS like '%flow_control%';

DB Instance pricing

  • DB instances are billed per hour.
  • Storage is billed per GB per month.
  • I/O requests are billed per 1 million requests per month.
  • Data transfer is billed per GB in and out of your DB instance.

AWS RDS performance troubleshooting

  1. Set up CloudWatch monitoring.
  2. Enable automatic backups.
  3. If your DB requires more I/O, migrate to a new instance class, or convert from magnetic storage to General Purpose or Provisioned IOPS storage.
  4. If you already have Provisioned IOPS, consider adding more throughput capacity.
  5. If your app caches the DNS data of your instance, make sure to set a TTL value of less than 30 seconds, because stale cached DNS entries can lead to connection failures after a failover.
  6. Set up enough memory (RAM).
  7. Enable Enhanced Monitoring to identify operating system issues.
  8. Fine-tune your SQL queries.
  9. Avoid letting tables in your database grow too large, as very large tables impact reads and writes.
  10. You can use option groups if you need to enable additional features or security for your database.
  11. You can use a DB parameter group, which acts as a container for engine configuration values that are applied to one or more DB instances.

Tagging AWS RDS Resources

  • Tags are very helpful and are simply key-value pairs.
  • You can use tags in IAM policies to manage access to AWS RDS resources.
  • Tags can be used to produce detailed billing reports.
  • You can specify whether tags should be applied to snapshots as well.
  • Tags are useful for determining which instances to stop, start, or enable backups for (a CLI sketch follows this list).
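
A quick sketch of tagging from the CLI; the ARN and tag values are placeholders.

# Tag a DB instance by its ARN
aws rds add-tags-to-resource --resource-name arn:aws:rds:us-east-1:123456789012:db:mydb --tags Key=environment,Value=production Key=owner,Value=dba-team

# List the tags to verify
aws rds list-tags-for-resource --resource-name arn:aws:rds:us-east-1:123456789012:db:mydb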

Amazon RDS Storage

Increasing DB instance storage capacity

In the console, choose your database under Databases, click Modify, increase the Allocated storage, and apply the change immediately.

Managing capacity automatically with Amazon RDS storage autoscaling

If your workload is unpredictable, enable storage autoscaling for your Amazon RDS DB instance. When creating the database, enable storage autoscaling and set the maximum storage threshold. A CLI sketch for an existing instance follows.
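
A minimal sketch, assuming the placeholder identifier mydb:

# Enable storage autoscaling by setting a maximum storage threshold (in GiB)
aws rds modify-db-instance --db-instance-identifier mydb --max-allocated-storage 1000 --apply-immediately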

Modifying settings for Provisioned IOPS SSD storage

You can change, that is, reduce or increase, the amount of IOPS (read and write operations per second) for your instance; however, with Provisioned IOPS SSD storage you cannot reduce the storage size.

Monitoring Events, Logs and Streams in an Amazon RDS DB Instance

Amazon EventBridge: a serverless event bus service that allows you to connect applications with data from various sources.

CloudTrail logs and CloudWatch Logs are also useful.

Database Activity Streams: AWS RDS pushes database activities to an Amazon Kinesis data stream.

How to grant Amazon RDS permission to publish notifications to an SNS topic using an IAM policy

The policy below is attached to the SNS topic as its access policy.

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "events.rds.amazonaws.com"
      },
      "Action": [
        "sns:Publish"
      ],
      "Resource": "arn:aws:sns:us-east-1:123456789012:topic_name",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:rds:us-east-1:123456789012:db:prefix-*"
        },
        "StringEquals": {
          "aws:SourceAccount": "123456789012"
        }
      }
    }
  ]
}

RDS logs

  • Amazon RDS doesn't provide host access to the database logs on the file system of your DB instance. Choose the Logs & events tab to view the database log files directly in the console.
  • To publish SQL Server DB logs to CloudWatch Logs from the AWS Management Console, go to the Log exports section and choose the logs that you want to start publishing to CloudWatch Logs.

Note: In CloudWatch Logs, a log stream is a sequence of log events that share the same source. Each separate source of logs in CloudWatch Logs makes up a separate log stream. A log group is a group of log streams that share the same retention, monitoring, and access control settings.

  • Amazon RDS provides a REST endpoint that allows access to DB instance log files; you can download a complete log file via the REST endpoint as below.
GET /v13/downloadCompleteLogFile/DBInstanceIdentifier/LogFileName HTTP/1.1
Content-type: application/json
host: rds.region.amazonaws.com
  • RDS for MySQL writes mysql-error.log to disk every 5 minutes. You can write the RDS for MySQL slow query log and the general log to a file or a database table. You can direct the general and slow query logs to tables on the DB instance by creating a DB parameter group and setting the log_output server parameter to TABLE (see the sketch after this list).
    • slow_query_log: to create the slow query log, set to 1. The default is 0.
    • general_log: to create the general log, set to 1. The default is 0.
    • long_query_time: to prevent fast-running queries from being logged in the slow query log.
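
A hedged sketch of setting these parameters in a custom DB parameter group named my-mysql-params (the group must be associated with your instance):

# Send general and slow query logs to tables and enable both logs
aws rds modify-db-parameter-group --db-parameter-group-name my-mysql-params --parameters "ParameterName=log_output,ParameterValue=TABLE,ApplyMethod=immediate" "ParameterName=slow_query_log,ParameterValue=1,ApplyMethod=immediate" "ParameterName=general_log,ParameterValue=1,ApplyMethod=immediate"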

MySQL removes log files that are more than two weeks old. You can manually rotate the log tables with the following command-line procedure:

CALL mysql.rds_rotate_slow_log;

AWS RDS Proxy

  • RDS Proxy allows you to pool and share database connections to improve your ability to scale.
  • RDS Proxy makes applications more resilient to database failures by automatically connecting to the standby DB instance after a failover.
  • RDS Proxy establishes a database connection pool and reuses connections in this pool, avoiding the memory and CPU overhead of opening a new database connection each time.
  • You can enable RDS Proxy for most applications with no code changes.

You can use RDS Proxy in the following scenarios (a creation sketch follows the list).

  • Any DB instance or cluster that encounters “too many connections” errors is a good candidate for associating with a proxy.
  • For DB instances or clusters that use smaller AWS instance classes, such as T2 or T3, using a proxy can help avoid out-of-memory conditions.
  • Applications that typically open and close large numbers of database connections and don’t have built-in connection pooling mechanisms are good candidates for using a proxy.
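
Creating a proxy requires a Secrets Manager secret holding the database credentials and an IAM role that can read it. A minimal sketch, with placeholder ARNs, subnets, and names:

# Create a proxy for a MySQL instance
aws rds create-db-proxy --db-proxy-name my-proxy --engine-family MYSQL --auth "AuthScheme=SECRETS,SecretArn=arn:aws:secretsmanager:us-east-1:123456789012:secret:mydb-creds,IAMAuth=DISABLED" --role-arn arn:aws:iam::123456789012:role/rds-proxy-role --vpc-subnet-ids subnet-0abc1234 subnet-0def5678

# Register the target DB instance with the proxy's default target group
aws rds register-db-proxy-targets --db-proxy-name my-proxy --db-instance-identifiers mydb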

Amazon RDS for MySQL

Two major versions are available for the MySQL database engine: 8.0 and 5.7. MySQL provides the validate_password plugin for improved security; the plugin enforces password policies using parameters in the DB parameter group for your MySQL DB instance.

To find the supported MySQL versions:

aws rds describe-db-engine-versions --engine mysql --query "*[].{Engine:Engine,EngineVersion:EngineVersion}" --output text

SSL/TLS on MySQL DB Instance

Amazon RDS installs an SSL/TLS certificate on the DB instance. These certificates are signed by a certificate authority (CA).

To connect to DB instance with certificate use below command.

mysql -h mysql-instance1.123456789012.us-east-1.rds.amazonaws.com --ssl-ca=global-bundle.pem --ssl-mode=REQUIRED -P 3306 -u myadmin -p

To check whether applications are using SSL:

mysql> SELECT id, user, host, connection_type
       FROM performance_schema.threads pst
       INNER JOIN information_schema.processlist isp
       ON pst.processlist_id = isp.id;

Performance improvements with RDS Optimized Reads on RDS for MySQL

  • An instance store provides temporary block-level storage for your DB instance.
  • With RDS Optimized Reads, some temporary objects are stored on the instance store. These objects include temporary files, internal on-disk temporary tables, memory map files, binary logs, and cached files.
  • The storage is located on NVMe (Non-Volatile Memory Express) SSDs that are physically attached to the host.
  • Applications that benefit from RDS Optimized Reads include:
    • Applications that run on-demand or dynamic reporting queries.
    • Applications that run analytical queries.
    • Database queries that perform grouping or ordering on non-indexed columns.
  • Try to add retry logic for read-only queries.
  • Avoid bulk changes in a single transaction.
  • You can't change the location of temporary objects to persistent storage (Amazon EBS) on the DB instance classes that support RDS Optimized Reads.
  • Transactions can fail when the instance store is full.
  • RDS Optimized Reads isn't supported for Multi-AZ DB cluster deployments.

Importing data into MySQL from different data sources

  1. Existing MySQL database on premises or on Amazon EC2: Create a backup of your on-premises database, store it on Amazon S3, and then restore the backup file to a new Amazon RDS DB instance running MySQL.
  2. Any existing database: Use AWS Database Migration Service to migrate the database with minimal downtime
  3. Existing MySQL DB instance: Create a read replica for ongoing replication. Promote the read replica for one-time creation of a new DB instance.
  4. Data not stored in an existing database: Create flat files and import them using the mysqlimport utility.

Database Authentication with Amazon RDS

For PostgreSQL, use one of the following roles for a user of a specific database.

  • IAM database authentication: assign the rds_iam role to the user.
  • Kerberos authentication: assign the rds_ad role to the user.
  • Password authentication: don't assign either of the above roles.

Password Authentication

  • With password authentication, the database performs all administration of user accounts; the database controls and authenticates the user accounts.

IAM Database authentication

  • IAM database authentication works with MySQL and PostgreSQL. With this authentication method, you don't need a password when you connect to a DB instance; you use an authentication token instead.

Kerberos Authentication

Kerberos authentication gives you the benefits of single sign-on and centralized authentication of database users.

Connecting to your DB instance using IAM authentication from the command line: AWS CLI and mysql client

  • In the Database authentication section, choose Password and IAM database authentication to enable IAM database authentication.
  • To allow an IAM user or role to connect to your DB instance, you must create an IAM policy.
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
             "rds-db:connect"
         ],
         "Resource": [
             "arn:aws:rds-db:us-east-2:1234567890:dbuser:db-ABCDEFGHIJKL01234/db_user"
         ]
      }
   ]
}

Create a database user account using IAM authentication

-- MySQL: create a user that authenticates with the AWS IAM plugin
CREATE USER jane_doe IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';

-- PostgreSQL: create a user and grant the rds_iam role
CREATE USER db_userx;
GRANT rds_iam TO db_userx;

Generate an IAM authentication token

aws rds generate-db-auth-token --hostname rdsmysql.123456789012.us-west-2.rds.amazonaws.com --port 3306 --region us-west-2  --username jane_doe

Connecting to DB instance

mysql --host=hostName --port=portNumber --ssl-ca=full_path_to_ssl_certificate --enable-cleartext-plugin --user=userName --password=authToken

Connecting to DB using Python

import os
import boto3
import pymysql

ENDPOINT = "mysqldb.123456789012.us-east-1.rds.amazonaws.com"
PORT = 3306  # pymysql expects an integer port
USER = "jane_doe"
REGION = "us-east-1"
DBNAME = "mydb"

os.environ['LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN'] = '1'

# Gets the credentials from .aws/credentials
session = boto3.Session(profile_name='default')
client = session.client('rds')

# Generate a short-lived IAM authentication token to use as the password
token = client.generate_db_auth_token(DBHostname=ENDPOINT, Port=PORT, DBUsername=USER, Region=REGION)

try:
    # 'SSLCERTIFICATE' is a placeholder for the path to the CA bundle (e.g. global-bundle.pem)
    conn = pymysql.connect(host=ENDPOINT, user=USER, passwd=token, port=PORT, database=DBNAME, ssl_ca='SSLCERTIFICATE')
    cur = conn.cursor()
    cur.execute("""SELECT now()""")
    query_results = cur.fetchall()
    print(query_results)
except Exception as e:
    print("Database connection failed due to {}".format(e))

Final AWS RDS troubleshooting tips

Can’t connect to Amazon RDS DB instance

  • Check the security group.
  • Check the port.
  • Check the internet gateway.
  • Check the DB name.

Error – Could not connect to server: Connection timed out

  • Check hostname and port
  • Check security group
  • Telnet to the DB
  • Check the username and password

Error message “failed to retrieve account attributes, certain console functions may be impaired.”

  • Your account is missing permissions or hasn't been properly set up.
  • You lack permissions in your access policies to perform certain actions, such as creating a DB instance.

Amazon RDS DB instance outage or reboot

  • You change the backup retention period for a DB instance from 0 to a nonzero value or from a nonzero value to 0. You then set Apply Immediately to true.
  • You change the DB instance class, and Apply Immediately is set to true.
  • You change the storage type from Magnetic (Standard) to General Purpose (SSD) or Provisioned IOPS (SSD), or from Provisioned IOPS (SSD) or General Purpose (SSD) to Magnetic (Standard).

Amazon RDS DB instance running out of storage

  • Increase the allocated storage on the EBS volumes attached to the DB instance.

Amazon RDS insufficient DB instance capacity

The specific DB instance class isn’t available in the requested Availability Zone. You can try one of the following to solve the problem:

  • Retry the request with a different DB instance class.
  • Retry the request with a different Availability Zone.
  • Retry the request without specifying an explicit Availability Zone.

Maximum MySQL and MariaDB connections

  • The connection limit for a DB instance is set by default to the maximum for the DB instance class. You can limit the number of concurrent connections to any value up to the maximum number of connections allowed (see the sketch after this list).
  • A MariaDB or MySQL DB instance can be placed in incompatible-parameters status for a memory limit when the DB instance is restarted at least three times in one hour or at least five times in one day, or when the potential memory usage of the DB instance exceeds 1.2 times the memory allocated to its DB instance class. To solve the issue:
    • Adjust the memory parameters in the DB parameter group associated with the DB instance.
    • Restart the DB instance.
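
For example, to cap connections below the class default, you can override max_connections in a custom parameter group; a hedged sketch assuming the group my-mysql-params:

# Limit concurrent connections to 500 (must not exceed the class maximum)
aws rds modify-db-parameter-group --db-parameter-group-name my-mysql-params --parameters "ParameterName=max_connections,ParameterValue=500,ApplyMethod=immediate"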

How to Set up a PostgreSQL Database on Amazon RDS

If you're new to AWS RDS or planning to create your first AWS RDS database instance, then you are at the right place to learn about one of the most popular and widely used database engines: PostgreSQL.

In this tutorial you will learn how to set up a PostgreSQL database on Amazon RDS in the Amazon cloud, from scratch and step by step.

Still interested? Let's get into it.



What is a Database?

If you want to store all the information about your employees securely and efficiently, such as name, employee ID, address, joining date, and benefits, then you need a database.

Basic Database diagram

What is AWS RDS?

Amazon Relational Database Service (AWS RDS) is an Amazon web service that helps you set up and configure a relational database in AWS. With AWS RDS you can scale capacity up or down, configure different instance sizes, use load balancing, and apply fault tolerance.

AWS RDS also removes the tedious management tasks of a manual setup, saving a lot of time. AWS RDS supports six database engines: Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server.

With AWS you don't need to worry about buying hardware, backups, scalability, or availability, and it's more secure than hosting your own database. In the snap below, the AWS RDS service contains RDS instances, the instances contain RDS databases and database users, and finally you connect to them using database clients such as pgAdmin 4.

Connecting AWS RDS PostgreSQL database from pgadmin client

What is PostgreSQL?

PostgreSQL is an open-source relational database system that can handle heavy workloads, scales easily, runs on most operating systems, and is highly extensible: you can define your own data types and functions. PostgreSQL is one of the most widely used AWS RDS DB engines.

A DB engine is the specific relational database software that runs on your DB instance.

Some of the features of PostgreSQL are listed below:

  • Security
  • Extensibility
  • Text Search
  • Reliable
  • Data Integrity
  • Good Performance

Prerequisites

This tutorial will be step by step and if you would like to follow along, then you must have.

  • An Amazon AWS account. If you don't have an AWS account, create one from here.
  • The pgAdmin utility to connect to the PostgreSQL database instance. To install pgAdmin, click here.

Creating a PostgreSQL DB instance in AWS RDS

Now that you have a basic idea of what the Postgres database is and the benefits of hosting your database on AWS RDS with a database engine like PostgreSQL, let's quickly learn how to create a PostgreSQL DB instance in AWS RDS.

  • Sign in to your AWS account, search for AWS RDS in the search box, and click on RDS.
Searching for AWS RDS service in AWS Cloud
  • Now, in the AWS RDS page click on Create database.
Creating database in AWS RDS service
  • Further, on the Create database page, choose Standard create as the database creation method, PostgreSQL as the engine, PostgreSQL 12.5-R1 as the version, and select Free tier from Templates.

The latest version of PostgreSQL at the time of writing is PostgreSQL 14.1-R1.

Defining all the parameters to create a AWS RDS database engine
  • Next, provide the database name, master username, and master password, keeping all the storage values at their defaults.
Specifying the Configuration of database instance
Defining storage for database instance
  • Further, in the Connectivity section, select the Virtual Private Cloud and subnet group in which you would like to create the AWS RDS instance, set Public access to Yes, and select the default security group.

Make sure to allow 0.0.0.0/0 in the inbound and outbound rules of the default security group, and that the subnet group has a route to the internet, so that you can connect to the RDS instance from a database client in your browser or on your local machine.

Defining network connectivity options in AWS RDS
  • Now, under Database authentication, choose Password authentication and finally click on Create database. It usually takes a few minutes for the RDS instance to launch in the AWS Cloud.
Specifying the database authentication method

Verifying AWS RDS Postgres database instance in AWS Cloud

Now that you have created the AWS RDS Postgres database instance in the AWS Cloud, which is great, you should still verify it in the AWS Management Console to be sure. Let's navigate to the AWS console and verify the Postgres instance in the AWS RDS service.

As you can see the specified mydb instance has been created successfully in AWS RDS.

Verifying AWS RDS Postgres database instance in AWS Cloud

Connecting to a DB instance running the PostgreSQL database engine

Now that you have verified the DB instance running PostgreSQL in the AWS cloud, it's time to connect to it using the pgAdmin client from your machine. To connect:

  • Open pgAdmin on your machine and click on Create and then Server.
Connecting to PostgreSQL database instance from pgadmin
  • In the Create - Server page, under the General tab, set the name as "myrds". Next, navigate to the Connection tab and provide all the details, such as the Host (the endpoint URL of your database instance), port, username, and password, as shown below.
Defining Name of database to connect
Defining connection details of the PostgreSQL database instance
  • After you provide all the details and click on the Save button, the newly created database server will be visible under Servers as shown below.
Checking the database instance
  • Finally under myrds database instance create a database by right clicking on Databases and select Create ➔ Database and provide the name of the database you wish to create.
Creating database instance AWS RDS database instance
  • As you can see below, the testing database is created successfully.
Viewing the newly launched database in AWS RDS database instance

Conclusion

In this tutorial you learned about Postgres, one of the most widely used AWS RDS database engines, and how to create it in the AWS Management Console.

So, what do you plan to store in this newly created database instance?

How to Install AWS CLI Version 2 and Setup AWS credentials

Are you new to AWS Cloud or tired of managing your AWS Cloud infrastructure with manual steps back and forth? If yes, you should consider installing the AWS Command Line Interface (AWS CLI) and managing your infrastructure with it.

In this tutorial, you will learn how to install AWS CLI Version 2 and set up AWS credentials in the AWS CLI tool.

Let’s dive into it.


Table of Content

  1. What is AWS CLI?
  2. Installing AWS CLI Version 2 on windows machine
  3. Creating an IAM user in AWS account with programmatic access
  4. Configure AWS credentials using aws configure
  5. Verify aws configure from AWS CLI by running a simple command
  6. Configuring AWS credentials using Named profile.
  7. Verify Named profile from AWS CLI by running a simple command.
  8. Configuring AWS credentials using environment variable
  9. Conclusion

What is AWS CLI?

AWS CLI gives you direct access to the public APIs of AWS services across your AWS accounts from a command-line shell, either from your local environment or remotely.

You can control multiple AWS services from the AWS CLI and automate them through scripts. You can run AWS CLI commands from a Linux shell such as bash, zsh, tcsh, and from a Windows machine, you can use command prompt or PowerShell to execute AWS CLI commands.

The AWS CLI is available in two versions, and the installation is exactly the same for both versions but in this tutorial, let’s learn how to install AWS CLI version 2.

Installing AWS CLI Version 2 on a Windows machine

Now that you have a basic idea about the AWS CLI and connecting to AWS services from various command prompts and shells, let's learn how to install AWS CLI Version 2 on a Windows machine.

  • First, open your favorite browser and download the AWS CLI for Windows from here.
Downloading AWS CLI Interface v2
  • Next, select the I accept the terms and Licence Agreement and then click on the next button.
Accepting the terms in the Licence Agreement of AWS CLI
  • Further, on Custom setup page provide the location of installation path and then click on Next button.
Setting the download location of AWS CLI
  • Now, click on the Install button to install AWS CLI version 2.
Installing the AWS CLI on a Windows machine
  • Finally click on Finish button as shown below.
Finishing the Installation of the AWS CLI on Windows machine
  • Verify the AWS CLI version by going to the command prompt and running the below command.
aws --version

As you can see below, the AWS CLI version 2 is successfully installed on a windows machine.

Checking the AWS CLI version

Creating an IAM user in AWS account with programmatic access

There are two ways to connect to an AWS account: the first is providing a username and password on the AWS login page, and the other is configuring an IAM user's access key ID and secret key in AWS CLI to connect programmatically.

Earlier, you installed AWS CLI successfully on a Windows machine, but you will need an IAM user with programmatic access to run commands from it.

Let’s learn how to create an IAM user in an AWS account with programmatic access, Access key ID, and secret keys.

  1. Open your favorite web browser and navigate to the AWS Management Console and log in.
  2. While in the Console, click on the search bar at the top, search for ‘IAM’, and click on the IAM menu item.
Checking the IAM AWS service
  3. To create a user, click on Users → Add user, provide the user name myuser, and make sure to tick the Programmatic access checkbox in Access type, which enables an access key ID and secret access key; then hit the Permissions button.
Adding the IAM user in AWS Cloud
  4. Now select the "Attach existing policies directly" option under Set permissions and look for the "Administrator" policy using the filter policies search box. This policy will allow myuser full access to AWS services.
Attaching the admin rights to IAM users in AWS Cloud
  5. Finally click on Create user.
  6. Now the user is created successfully, and you will see an option to download a .csv file. Download this file; it contains the IAM user's (myuser's) access key ID and secret access key, which you will use later in the tutorial to connect to AWS services from your local machine.
Downloading the AWS credentials of IAM user

Configure AWS credentials using aws configure in AWS CLI

You now have an IAM user with an access key ID and secret key, but AWS CLI cannot do anything until you configure those credentials. Once you configure the credentials, AWS CLI allows you to connect to the AWS account and execute commands.

  • Configure AWS Credentials by running the aws configure command on command prompt.
aws configure
  • Enter the details such as the AWS access key ID, secret access key, and region. For the output format you can accept the default or choose text or json.
Configure AWS CLI using aws configure command
  • Once AWS is configured successfully, verify it by navigating to C:\Users\YOUR_USER\.aws and checking that the two files, credentials and config, are present.
Checking the credentials file and config on your machine
  • Now open both files and verify; as you can see below, your AWS credentials are configured successfully using aws configure.
Checking the credentials and config files on your machine

Verify aws configure from AWS CLI by running a simple command

Now you can test whether the AWS access key ID, secret access key, and region you configured in AWS CLI are working by going to the command prompt and running the following commands.

aws ec2 describe-instances
Describing the AWS EC2 instances using AWS CLI
  • You can also verify the AWS CLI by listing the buckets in your account by running the below command.
aws s3 ls

Configuring AWS credentials using Named profile.

Another commonly used method to configure AWS credentials is the named profile. A named profile is a collection of settings and credentials that you can apply to an AWS CLI command. When you specify a profile to run a command, its settings and credentials are used to run that command. Let's learn how to store named profiles.

  1. Open the credentials file which was created earlier by aws configure; if it doesn't exist, create a file named credentials in the C:\Users\your_profile\.aws directory of your Windows machine.
  2. Add each access key ID and secret access key to the credentials file in the below format and save it (a text sketch follows the screenshot). Defining named profiles allows you to connect to different AWS accounts easily and avoids confusion when connecting to specific AWS accounts.
Creating the Named Profile on your machine
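
For example, a credentials file with a named profile might look like the following; the keys are placeholders, and sandbox is the profile name used later in this tutorial.

[sandbox]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = vIaGXXXXXXXXXXXXXXXXXXXX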
  3. Similarly, create another file named config in the C:\Users\your_profile\.aws directory.
  4. Next, add the region to the config file, make sure to reference the profile name which you provided in the credentials file, and save the file (a sketch follows below). This file allows you to work with a specific region.
  • For Linux and Mac machines, the locations are ~/.aws/credentials and ~/.aws/config.
  • For Windows machines, the locations are %USERPROFILE%\.aws\credentials and %USERPROFILE%\.aws\config respectively.
Creating the Named Profile config file on your machine
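
The matching config file entry uses a profile prefix for named profiles; the region value is an example.

[profile sandbox]
region = us-east-1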

Verifying Named profile from AWS CLI

Previously you configured the named profile on your machine; now let's verify it from AWS CLI by running a simple command. Open the command prompt and run the below command to verify the sandbox profile that you created earlier.

aws ec2 describe-instances --profile sandbox

As you can see below, the instances are described properly using the command with the named profile, which shows the named profile is configured successfully.

Verifying the Named profile in AWS CLI

Configuring AWS credentials using the environment variable

Finally, configuring AWS credentials using environment variables also works well. Let's check it out quickly.

  • Open the command prompt and set the AWS secret key and access key as environment variables using set (a sketch follows the screenshot below). Environment variables set this way keep their values only until the end of the current command prompt session, or until you set the variable to a different value.
Configuring AWS credentials using the environment variable
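
A minimal sketch of the commands for a Windows command prompt; the values are placeholders.

set AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
set AWS_SECRET_ACCESS_KEY=vIaGXXXXXXXXXXXXXXXXXXXX
set AWS_DEFAULT_REGION=us-east-2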

AWS CLI Error (ImportError: cannot import name ‘docevents’ from ‘botocore.docs.bcdoc’) and Solution

If you face this error, or other Python-related issues in AWS CLI, upgrade the AWS CLI with the below command.

 pip3 install --upgrade awscli

Conclusion

In this tutorial, you learned What is AWS CLI, how to install AWS CLI version 2, and various methods that allow you to configure AWS credentials and then work with AWS CLI.

So which method are you going to use while using AWS CLI to connect and manage AWS infrastructure?

What is AWS WAF (Web Application Firewall) and how to Setup WAF in AWS account.

Are you sure your applications or websites are fully secure and protected? If not, you are at the right place to learn about the Amazon Web Services Web Application Firewall (AWS WAF), which protects your web applications from common web exploits in the most effective way.

AWS WAF allows you to monitor all the HTTP(S) requests that are forwarded to an Amazon CloudFront distribution, Amazon API Gateway REST API, an Application Load Balancer, and takes actions accordingly.

This tutorial will teach what AWS WAF (Web Application Firewall) is and how to set up WAF in an AWS account. Let’s dive in and get started.


Table of Content

  1. What is Amazon web service Web Application Firewall (AWS WAF) ?
  2. Benefits of AWS WAF
  3. Components of AWS WAF
  4. AWS WAF Web ACL (Web Access Control List)
  5. AWS WAF rules
  6. AWS Managed Rules rule group
  7. IP sets and regex pattern sets
  8. Prerequisites
  9. How to create AWS WAF (Web Application Firewall) and AWS WAF rules
  10. Conclusion

What is Amazon Web Services Web Application Firewall (AWS WAF)?

AWS WAF allows you to monitor all the HTTP or HTTPS requests forwarded from users to Amazon CloudFront, an Application Load Balancer, an Amazon API Gateway REST API, and so on. AWS WAF controls who can access the required content based on specific conditions, such as the source IP address, and protects your applications from common web exploits.

Benefits of AWS WAF

  • AWS WAF is helpful when you want Amazon CloudFront, a load balancer, or an Amazon API Gateway REST API to serve content to particular users or to block particular users.
  • AWS WAF allows you to count the requests that match the properties you specify without allowing or blocking those requests.
  • AWS WAF protects you from web attacks using conditions you specify, and also provides real-time metrics and details of web requests.
AWS WAF architecture and working

Components of AWS WAF

AWS WAF service contains some important components; let’s discuss each of them now.

AWS WAF Web ACL (Web Access Control List)

An AWS WAF web ACL allows you to protect a set of AWS resources. After you create a web ACL, you add AWS WAF rules to it.

AWS WAF rules define specific conditions applied to web requests coming from users and how to handle these web requests. You also set default action in web ACL to allow or block requests that pass these rules.

AWS WAF rules

AWS WAF rules contain statements that define the criteria, and if the criteria are matched, then the web requests are allowed; else, they are blocked. The rule is based on IP addresses or address ranges, country or geographical location, strings that appear in the request, etc.

AWS Managed Rules rule group

You can use rules individually or in reusable rule groups. There are two kinds of rule groups: AWS Managed Rules rule groups and rule groups that you manage yourself.

IP sets and regex pattern sets

AWS WAF stores complex information in sets you use by referencing them in your rules.

  • An IP set is a group of IP addresses and IP address ranges that you want to use together in a rule statement (see the CLI sketch after this list).
  • A regex pattern set provides a collection of regular expressions that you want to use together in a rule statement. Regex pattern sets are AWS resources.
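
As a quick sketch, you can create an IP set from the CLI with the wafv2 commands; the name and addresses are example values, and the CLOUDFRONT scope must be used with the us-east-1 Region.

aws wafv2 create-ip-set --name blocked-ips --scope CLOUDFRONT --ip-address-version IPV4 --addresses 192.0.2.0/24 198.51.100.10/32 --region us-east-1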

Prerequisites

  • You must have an AWS account in order to set up AWS WAF. If you don't have one, create an AWS account from here: AWS account.
  • An IAM user with Administrator rights, with credentials set up using AWS CLI or an AWS profile.

How to create AWS WAF (Web Application Firewall) and AWS WAF rules

Now that you have a basic idea of AWS WAF and its components, let's put it to work. The first thing you need to create is a Web Access Control List (web ACL), and then you add the WAF rules (individual rules or groups of rules), such as blocking or allowing web requests.

In this section, let’s learn how to create and set up AWS WAF and create a Web ACL.

  • To create Web ACL open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the console, click on the search bar at the top, search for WAF, and click on the WAF menu item.
Searching for AWS WAF
  • Now click on the Create Web ACL button as shown below.
Creating a Web ACL
  • Next, provide the name and CloudWatch metric name of your choice and choose CloudFront distributions as the resource type.

This tutorial already has one CloudFront distribution in place, which will be used here. If you need to create a CloudFront distribution, follow here.

Cloud Distribution in AWS account
  • Next, Click on Add AWS Resources and select the CloudFront distribution and hit NEXT.
Selecting the CloudFront distribution in AWS WAF
  • Further, in the Add rules and rule groups section, choose Add my own rules and rule groups and provide the values as shown below.
    • Name as myrule123
    • Type as Regular Rule
    • Inspect as Header
    • Header field as User-Agent
    • If a request matches the statement
Adding rules and rule groups in AWS WAF
Defining the values of AWS WAF rules and rule groups
  • While building the rules, there are three rule action options available:
    • Count: AWS WAF counts the request but doesn't determine whether to allow it or block it.
    • Allow: AWS WAF allows the request to be forwarded to the protected AWS resource.
    • Block: AWS WAF blocks the request and sends a block response back to the client.
  • Choose Count as the rule action.
Choosing the rule action

You can instruct AWS WAF to insert custom headers into the original HTTP request for rule actions or web ACL default actions that are set to allow or count.

  • Finally, hit the Next button through the remaining screens and then choose Create web ACL.
Creating the Web ACL
  • The rules you added previously are your own rules, but at times you need to add AWS Managed Rules; to do that, select AWS Managed rules.
Adding AWS WAF Managed rules
  • Now the AWS web ACL should look as shown below, with both managed rules and your own AWS WAF rules.
Viewing the AWS WAF with both managed and your own created AWS WAF rules

Conclusion

In this tutorial, you learned about the AWS WAF service and its components, such as the web ACL and the WAF rules that are applied to a web ACL.

You also learned how to apply an AWS WAF web ACL to CloudFront to protect your websites from being exploited by attacks.

So now, which applications and websites do you plan to protect next using AWS WAF?

What is AWS CloudFront and how to Setup Amazon CloudFront with AWS S3 and ALB Distributions

Internet users are always impressed with websites’ high speed & loading capacities. Why not have a website that loads the content quickly and delivers fast with AWS Cloudfront?

In this tutorial, you will learn what AWS CloudFront is and how to set up Amazon CloudFront with AWS S3 and ALB distributions, which enables users to retrieve content quickly by utilizing the concept of caching.

Let’s get started.


Table of Content

  1. What is AWS Cloudfront?
  2. How AWS Cloudfront delivers content to your users
  3. Amazon Cloudfront caching with regional edge caches
  4. Prerequisites
  5. Creating an IAM user in AWS account with programmatic access
  6. Configuring the IAM user Credentials on local Machine
  7. How to Set up AWS CloudFront
  8. How to Use Custom URLs in AWS CloudFront by Adding alternate Domain Names (CNAMEs)
  9. Using Amazon EC2 as the Origins in the AWS CloudFront
  10. Conclusion

What is AWS Cloudfront?

AWS CloudFront is an Amazon web service that speeds up the distribution of static and dynamic content such as .html, .css, .js, images, and live video streams to users. CloudFront delivers the content quickly from edge locations when users request it.

If the content is not available in an edge location, CloudFront requests it from the configured origin, such as an AWS S3 bucket, an HTTP server, or a load balancer. Also, using Lambda@Edge with CloudFront adds more ways to customize CloudFront.

How AWS Cloudfront delivers content to your users

Now that you have a basic idea of CloudFront knowing how AWS Cloudfront delivers content to users is also important.

Initially, when users request a website or application such as example.com/mypage.html, the DNS server routes the request to AWS Cloudfront edge locations.

Next CloudFront checks if the request can be fulfilled with edge location; else, CloudFront queries to the origin server. The Origin server sends the files back to the edge location, and further Cloudfront sends them back to the user.

AWS Cloudfront architecture

Amazon Cloudfront caching with regional edge caches

Delivering content from the edge location is fine, but if you want to further improve the performance and latency of content delivery, there is an additional region-based caching layer known as the regional edge cache.

Regional edge caches help with all types of content, particularly content that becomes less popular over time, such as user-generated content, videos, photos, e-commerce assets such as product photos and videos, etc.

The regional edge cache sits in between the origin server and the edge locations. The edge location stores content in its cache, but when content becomes too old it is removed from the edge cache and handled by the regional edge cache, which has a larger capacity to store lots of content.

Regional edge cache

Prerequisites

  • You must have an AWS account in order to set up AWS CloudFront. If you don't have one, please create an account from here: AWS account.
  • AWS S3 bucket created.

Creating an IAM user in AWS account with programmatic access

To connect to an AWS service, you need an IAM user with an access key ID and secret key in the AWS account; you configure these on your local machine to connect to the AWS account from there.

There are two ways to connect to an AWS account: the first is providing a username and password on the AWS login page in the browser, and the other is to configure the access key ID and secret key on your machine and then use command-line tools to connect programmatically.

  1. Open your favorite web browser and navigate to the AWS Management Console and log in.
  2. While in the Console, click on the search bar at the top, search for ‘IAM’, and click on the IAM menu item.
Opening the IAM service in AWS cloud
  3. To create a user, click on Users → Add user, provide the user name myuser, and make sure to tick the Programmatic access checkbox in Access type, which enables an access key ID and secret access key; then hit the Permissions button.
Adding the AWS IAM user with Programmatic access
  4. Now select the "Attach existing policies directly" option under Set permissions and look for the "Administrator" policy using the filter policies search box. This policy will allow myuser full access to AWS services.
Granting the Administrator Access to the IAM user
  5. Finally click on Create user.
  6. Now the user is created successfully, and you will see an option to download a .csv file. Download this file; it contains the IAM user's (myuser's) access key ID and secret access key, which you will use later in the tutorial to connect to AWS services from your local machine.
Downloading the AWS IAM user with programmatic access that is access key and secret key

Configuring the IAM user Credentials on local Machine

Now you have an IAM user, myuser, created. The next step is to set the downloaded myuser credentials on the local machine, which you will use to connect to AWS services via API calls.

  1. Create a new file, C:\Users\your_profile\.aws\credentials on your local machine.
  2. Next, Enter the Access key ID and Secret access key from the downloaded csv file into the credentials file in the same format and save the file.
[default]     # Profile Name
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = vIaGXXXXXXXXXXXXXXXXXXXX

The credentials file helps you set up profiles. This way, you can create multiple profiles and avoid confusion while connecting to specific AWS accounts.

  3. Similarly, create another file, C:\Users\your_profile\.aws\config, in the same directory.
  4. Next, add the region to the config file, make sure to reference the profile name which you provided in the credentials file, and save the file. This file allows you to work with a specific region.
[default]   # Profile Name
region = us-east-2

How to Set up AWS CloudFront

Now that you know what AWS CloudFront is and have an IAM user that allows you to set it up in the AWS cloud, let's set up AWS CloudFront.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘CloudFront’, and click on the CloudFront menu item.
Searching for AWS Cloudfront in AWS Cloud
  • Click on Create distribution and then Get Started.
Creating the AWS Cloudfront distribution
  • Now in the Origin settings provide the AWS S3 bucket name and keep other values as default.
Aligning the AWS S3 bucket in the AWS Cloudfront in AWS Cloud
  • For the settings under Default Cache Behavior Set and Distribution Settings, accept the default values and then click on Create distribution.
AWS S3 bucket setup in AWS Cloudfront
AWS Cloudfront distribution
  • Now upload an index.html containing the text hello to the AWS S3 bucket and grant it public access as shown below.
Uploading the file in AWS S3 bucket
Granting permissions to the file in the AWS S3 bucket
  • Now check the Amazon S3 URL to verify that your content is publicly accessible.
Checking the content of file of AWS S3 bucket using the AWS S3 URL
  • Finally, check the CloudFront URL by hitting domain-name/index.html; it should show the same content as your index.html file.
domainname/index.html
Checking the content of file of AWS S3 bucket using the Cloudfront URL

How to Use Custom URLs in AWS CloudFront by Adding alternate Domain Names (CNAMEs)

Previously, the CloudFront URL was generated with the default *.cloudfront.net domain name, but in production it is important to configure your own domain name, that is, a CNAME such as abc.com, in the URL. Let's learn how to use custom URLs in AWS CloudFront by adding alternate domain names (CNAMEs).

Earlier, the default URL of AWS CloudFront was http://dsx78lsseoju7.cloudfront.net/index.html, but if you wish to use an alternate domain such as http://abc.com/index.html, follow the steps below:

  • Navigate back to CloudFront Page and look for the distribution where you need to change the domain and click on Edit
Updating the custom URL in AWS Cloudfront
  • Here, provide the domain name that you wish to configure, along with a valid SSL certificate.
Updating the CNAME and SSL certificate in AWS Cloudfront
  • Now the domain name is successfully updated in CloudFront, but for the URL to work you will need to configure a few things in the Route53 AWS service, such as an alias record set. To do that, navigate to the Route53 page by searching at the top of the AWS page.
Opening the AWS Route53 service
  • Click on the Hosted Zone and then click on Create Record.
Opening the Hosted zone to create a record
  • Now provide the record name and record type, and set the traffic routing target to the CloudFront distribution. After you configure Route53, verify the index page ( http://mydomain.abc.com/index.html ); it should work fine. If you prefer the CLI, an equivalent sketch follows below.
Creating the record in Route53 to route the new domain to CloudFront
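
For reference, the same alias record can be created from the AWS CLI; this is a minimal sketch assuming a hypothetical hosted zone ID (Z1111111111111) and the placeholder domain mydomain.abc.com. Z2FDTNDATAQYW2 is the fixed hosted zone ID that all CloudFront alias records use.

aws route53 change-resource-record-sets \
  --hosted-zone-id Z1111111111111 \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "mydomain.abc.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "dsx78lsseoju7.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'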

Using Amazon EC2 as the Origin in AWS CloudFront

A custom origin can be an Amazon Elastic Compute Cloud (AWS EC2) instance, for example, an HTTP server. You need to provide the DNS name of the AWS EC2 instance as the custom origin, but while setting AWS EC2 as the custom origin, make sure to follow some basic guidelines.

  • Host the same content and synchronize the clocks on all servers in the same way.
  • Restrict access requests to the HTTP and HTTPS ports that your custom origin, the AWS EC2 instance, listens on.
  • Use an Elastic Load Balancing load balancer to handle traffic across multiple Amazon EC2 instances, and when you create your CloudFront distribution, specify the URL of the load balancer as the domain name of your origin server.

Conclusion

This tutorial taught you what CloudFront is and how to set up CloudFront distributions in the Amazon cloud. The benefit of using CloudFront is that it allows users to retrieve content quickly through caching.

So next, what are you going to manage with CloudFront?

How to Launch an AWS Redshift Cluster using the AWS Management Console in an Amazon account

Do you have huge amounts of data to analyze, such as the performance of your applications? If yes, you are in the right place to learn about AWS Redshift, one of the most widely used AWS services for analyzing data.

The AWS Redshift service allows you to store terabytes of data and analyze it.

In this tutorial, you will learn about Amazon’s data warehouse and analytic service, AWS Redshift, and how to create an AWS Redshift cluster using the AWS Management console.

Let’s get started.


Table of Content

  1. What is AWS Redshift?
  2. AWS Redshift Cluster
  3. Prerequisites
  4. Creating AWS IAM role for AWS Redshift Cluster
  5. How to Create AWS Redshift Cluster using AWS Management console
  6. Conclusion

What is AWS Redshift?

AWS Redshift is an AWS analytics service that allows you to store huge amounts of data and run analytical queries against it. It is a fully managed service, so you don’t need to worry about scalability and infrastructure.

To upload data into the AWS Redshift cluster, first you need to create the set of nodes, and later you can start analyzing the data. AWS Redshift manages everything for you, such as monitoring, scaling, applying patches, upgrades, and capacity, whatever is required at the infrastructure end.
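Once the cluster exists, data is typically loaded from an AWS S3 bucket with the COPY command; below is a minimal sketch, where the table name, bucket path, and IAM role ARN are hypothetical placeholders.

-- Load CSV files from S3 into an existing table (all names are placeholders)
copy sales
from 's3://my-bucket/sales-data/'
iam_role 'arn:aws:iam::123456789012:role/redshift-s3-role'
csv;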

AWS Redshift Cluster

An AWS Redshift cluster contains a single node or more than one node, depending on the requirements; a set of nodes is known as a cluster. An AWS Redshift cluster contains one leader node, and the other nodes are known as compute nodes.

You can create an AWS Redshift cluster in various ways, such as with the AWS Command Line Interface (AWS CLI), the AWS Management Console, and the AWS SDK (Software Development Kit) libraries. A minimal CLI sketch follows the architecture diagram below.

  • AWS Redshift cluster snapshots can be created either manually or automatically and are stored in an AWS S3 bucket.
  • AWS CloudWatch is used to capture the health and performance of an AWS Redshift cluster.
  • As soon as you create an Amazon Redshift cluster, one database is also created. This database is used to query and analyze the data. While you provision the cluster, you need to provide a master user, which is the superuser for the database and has all rights.
  • When a client queries the Redshift cluster, all requests are received by the leader node, which parses them and develops query execution plans. The leader node coordinates with the compute nodes and then provides the final results to the clients.
AWS Redshift Cluster architecture diagram
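
As mentioned above, a cluster can also be created from the AWS CLI; a rough sketch is below, where the node type, username, and password are illustrative values you should replace with your own.

aws redshift create-cluster \
  --cluster-identifier redshift-cluster-1 \
  --node-type dc2.large \
  --cluster-type single-node \
  --master-username awsuser \
  --master-user-password 'YourStrongPassword1'   # Placeholder; choose a strong password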

Prerequisites

  • You must have an AWS account in order to set up an AWS Redshift cluster. If you don’t have an AWS account, please create one from here: AWS account.
  • It will be great if you have admin rights on the AWS cloud; otherwise, you must at least have access to create an IAM role and an AWS Redshift cluster.

Creating AWS IAM role for AWS Redshift Cluster

Before creating an AWS Redshift cluster, let’s create an IAM role that Redshift will assume to work with other services such as AWS S3, etc. Let’s quickly dive in and create an IAM role.

  • Open your browser, go to the AWS Management console, search for IAM at the top, and click on Roles.
Viewing the IAM Dashboard
  • Next, click on Create Role to create a new IAM role.
Creating the IAM role
  • Now select AWS service as Redshift as highlighted below.
Creating IAM role and assigning permissions
  • Further, scroll down to the bottom and you will see “Select your use case”; here choose Redshift – Customizable, then choose Next: Permissions. This allows AWS Redshift to connect to other AWS services such as AWS S3.
Customizing the AWS IAM role for AWS Redshift
  • Now attach the AmazonS3ReadOnlyAccess policy and click Next. This policy allows AWS Redshift to access the AWS S3 bucket where you will store the data.
Attaching AWS S3 policy to an IAM role in AWS Cloud
  • Next, skip tagging for now by just clicking on Next: Tags and then Review, and finally click on Create Role.
Creating AWS Redshift role

The IAM role is created successfully; keep the IAM role ARN handy, as you will use it in the next section.

Checking the newly created IAM role for AWS Redshift

How to Create AWS Redshift Cluster using AWS Management console

Now that you have an IAM role successfully created for the AWS Redshift cluster, let’s move on and learn how to create an AWS Redshift Cluster using the AWS Management console.

  • On the AWS Management console, search for Redshift at the top of the page.
Navigating to AWS Redshift cluster Page
  • Next, click on Create free trial cluster and provide the name of the cluster as redshift-cluster-1.
Specifying the AWS Redshift cluster configurations
  • Further, provide the database details, such as the admin username and password, and save them for the future. Also associate the IAM role that you created in the previous section.
Configure database details in the AWS Redshift Cluster
  • Finally, click on Create cluster.
Configure network settings in the AWS Redshift Cluster

The AWS Redshift cluster is created successfully and available for use.

AWS Redshift cluster created successfully
  • Let’s validate the database connection by running a simple query. Click on Query data.
Querying the database connection
  • Provide the database credentials for connecting to AWS Redshift cluster.
    • Note: The dev database is created by default in the AWS Redshift cluster.
Providing the database connection details in the AWS Redshift cluster
  • Now run a query as below. The query will execute because some tables are already created by default inside the database, like events, date, etc.
select * from date

The AWS Redshift cluster is created successfully, and queries are successfully executed in the database.

Execution of query on AWS Redshift clusters database

Conclusion

In this tutorial, you learned about Amazon’s data warehouse and analytic service, AWS Redshift, what an AWS Redshift cluster is, and how to create an AWS Redshift cluster using the AWS Management console.

Now that you have the newly launched AWS Redshift, what do you plan to store and analyze?

How to Start and Stop AWS EC2 instance in AWS account using Shell script

Are you spending unnecessary money in AWS Cloud by keeping unused AWS EC2 instances in running states? Why not stop the AWS EC2 instance and only start when required by running a single Shell Script?

Multiple AWS accounts contain dozens of AWS EC2 instances that require some form of automation to stop or start these instances, and to achieve this, nothing could be better than running a shell script.

In this tutorial, you will learn step by step how to Start and Stop AWS EC2 instance in AWS account using Shell script.

Still interested? Let’s dive in!


Table of Content

  1. What is Shell Scripting or Bash Scripting?
  2. What is AWS EC2 instance?
  3. Prerequisites
  4. Building a shell script to start and stop AWS EC2 instance
  5. Executing the Shell Script to Stop AWS EC2 instance
  6. Verifying the Stopped AWS EC2 instance
  7. Executing the Shell Script to Start AWS EC2 instance
  8. Verifying the Running AWS EC2 instance
  9. Conclusion

What is Shell Scripting or Bash Scripting?

A shell script is a text file containing a list of commands executed on the terminal or shell in one go, in sequential order. A shell script performs various important tasks such as file manipulation, printing text, and program execution.

Shell script includes various environmental variables, comments, conditions, pipe commands, functions, etc., to make it more dynamic.

When you execute a shell script or function, a command interpreter goes through the ASCII text line-by-line, loop-by-loop, test-by-test, and executes each statement as each line is reached from top to bottom.
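As a tiny illustration, the sketch below puts those pieces together: a variable, a comment, a condition, and commands executed from top to bottom.

#!/usr/bin/bash
# A minimal shell script: statements run line by line from top to bottom

name="AWS"                      # A shell variable
if [ "$name" = "AWS" ]; then    # A condition
    echo "Hello from $name!"    # Command execution
fi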

What is AWS EC2 instance?

AWS EC2 stands for Amazon Web Services Elastic Compute Cloud. AWS EC2 is simply a virtual server that can be launched quickly, without you needing to worry about the hardware. After the AWS EC2 instance is launched, you can deploy highly scalable and available applications on it.

There are some important components of an AWS EC2 instance, such as:

AWS EC2 AMI

  • AWS EC2 provides preconfigured templates known as AMIs (Amazon Machine Images) that include an operating system and the software configurations that are commonly required. Using these preconfigured templates, you can launch as many AWS EC2 instances as you need.

On top of a preconfigured template, you can add your own software and the data you wish to have on an instance (see the CLI sketch below).

Amazon Machine Image template
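
For example, this is a minimal sketch of launching an instance from an AMI with the AWS CLI; the AMI ID and key pair name are hypothetical placeholders.

# Launch one t2.micro instance from a placeholder AMI ID
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name my-key-pair \
  --count 1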

AWS EC2 instance type

AWS EC2 contains various AWS EC2 instance types with different CPU and memory configurations such as t2.micro, t2.medium, etc.

AWS EC2 instance type

Amazon EC2 key pairs

AWS EC2 allows you to log in to launched instances securely through a key pair, where one key is the public key that remains within the AWS account, and the other is the private key that remains with the owner of the instance.
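A key pair can be created ahead of time with the AWS CLI; a short sketch, with a placeholder key name:

# Create the key pair and save the private key locally; keep the .pem file safe
aws ec2 create-key-pair \
  --key-name my-key-pair \
  --query 'KeyMaterial' \
  --output text > my-key-pair.pem
chmod 400 my-key-pair.pem   # Restrict permissions on the private key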

AWS EC2 EBS Storage

AWS EC2 allows you to add two kinds of storage: EC2 instance store volumes, which are temporary storage, and Elastic Block Store (AWS EBS), the permanent storage.

An AWS EC2 instance is launched with a root device volume (an EC2 instance store volume or AWS EBS) that allows you to boot the machine.

AWS EC2 EBS Storage

AWS EC2 instance state

The AWS EC2 service moves a launched instance through various states, such as stopped, started, running, and terminated. Once an instance is terminated, it cannot be started again.

AWS EC2 instance state

Prerequisites

  1. An AWS account to create the EC2 instance. If you don’t have an AWS account, please create one from here: AWS account.
  2. Windows 7 or a later edition, where you will execute the shell script.
  3. AWS CLI installed. To install the AWS CLI, click here.
  4. Git bash. To install Git bash, click here.
  5. A code editor for writing the shell script on the Windows machine, such as Visual Studio Code. To install Visual Studio Code, click here.

Building a shell script to start and stop AWS EC2 instance

Now that you have a good idea about the AWS EC2 instance and shell scripts, let’s learn how to build a shell script to start and stop AWS EC2 instances.

  • Create a folder on your windows machine at any location. Further, under the same folder, create a file named start-stop-ec2.sh and copy/paste the below code.
#!/usr/bin/bash

set -e  # set -e stops the execution of a script if a command or pipeline has an error

id=$1   # Provide the instance ID with the name of the script

# Checking if Instance ID provided is correct 

function check_ec2_instance_id () {
    
    if echo "$1" | grep -E '^i-[a-zA-Z0-9]{8,}' > /dev/null; then 
           echo "Correct Instance ID provided , thank you"
           return 0
    else 
          echo "Opps !! Incorrect Instance ID provided !!"
          return 1
    fi
}

# Function to Start the instance 

function ec2_start_instance ()   {
     aws ec2 start-instances --instance-ids $1 
}

# Function to Stop the instance 

function ec2_stop_instance ()   {
     aws ec2 stop-instances --instance-ids $1 
}

# Function to Check the Status of the instance

function ec2_check_status ()   {
     aws ec2 describe-instances --instance-ids $1 --query "Reservations[].Instances[].State.Name" --output text
}

# Main Function 

function main ()  {
     check_ec2_instance_id $1           # First it checks the Instance ID
     echo "Instance ID provided is $1"  # Prints the message
     echo "Checking the status of $1"   # Prints the message
     ec2_check_status $1                # Checks the Status of Instance
   
     status=$(ec2_check_status $id)     # It stores the status of Instance
     if [ "$status" = "running" ]; then     
         echo "I am stopping the instance now"
         ec2_stop_instance $1
         echo "Instance has been stopped successfully"
     else 
         echo "I am starting the instance now"
         ec2_start_instance $1
         echo "Instance has been Started successfully" 
     fi

}

main $1                                 # Actual Script starts from main function

Executing the Shell Script to Stop AWS EC2 instance

Previously you created the shell script to start and stop the AWS EC2 instance, which is great, but it is not doing much unless you run it. Let’s execute the shell script now.

  • Open the visual studio code and then open the location of the file start-stop-ec2.sh.
Opening Shell script on visual studio code
  • Finally execute the shell script.
./start-stop-ec2.sh <Instance-ID>    # Provide the EC2 instance ID along with script
Executing the shell script to stop the AWS Ec2 instance

Verifying the Stopped AWS EC2 instance

Earlier, in the previous section, the shell script ran successfully; let’s verify if the AWS EC2 instance has moved from the running state to stopped in the AWS account.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘EC2’, and click on the EC2 menu item; you should see that the instance you specified in the shell script has stopped. You can also check the state from the CLI, as sketched below.
Viewing the stopped AWS EC2 instance
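
You can also confirm the state without the console, using the same describe call the script relies on; the instance ID below is a placeholder.

# Should print "stopped" once the instance has shut down
aws ec2 describe-instances \
  --instance-ids i-0123456789abcdef0 \
  --query "Reservations[].Instances[].State.Name" \
  --output text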

Executing the Shell Script to Start AWS EC2 instance

Now that you have successfully stopped and verified the AWS EC2 instance in the AWS cloud, this time let’s restart the instance using the same script.

./start-stop-ec2.sh <Instance-ID>    # Provide the EC2 instance ID along with script
Executing the shell script to start the instance

Verifying the Running AWS EC2 instance

Similarly, in this section, let’s verify if the AWS EC2 instance has been restarted successfully in the AWS account.

Viewing the running AWS EC2 instance

Conclusion

In this tutorial, you learned what Amazon EC2 is and how to start or stop an AWS EC2 instance using a shell script on AWS, step by step. It is always a good practice to turn off the lights when you leave your home or room; do the same for your EC2 instances.

So which AWS EC2 instance are you planning to stop going further and save dollars?

How to Create an IAM user on an AWS account using shell script

Are you using the correct credentials and right permissions to log in to your AWS account? From a security point of view, it is essential to grant the right permissions to users and identities that access AWS accounts. That is where Identity and access management (AWS IAM) plays a vital role.

In this tutorial, you will learn how to create an IAM user on an AWS account using shell script step by step. Let’s get started.


Table of Content

  1. What is Shell Scripting or Bash Scripting?
  2. What is AWS IAM or What is IAM in AWS ?
  3. AWS IAM Resources
  4. AWS IAM entities
  5. AWS IAM Principals
  6. AWS IAM Identities
  7. Prerequisites
  8. How to create IAM user in AWS manually
  9. How to create AWS IAM user using shell script in Amazon account
  10. Executing the Shell Script to Create AWS IAM user
  11. Verifying the Newly created IAM user in AWS
  12. Conclusion

What is Shell Scripting or Bash Scripting?

A shell script is a text file containing a list of commands executed on the terminal or shell in one go, in sequential order. A shell script performs various important tasks such as file manipulation, printing text, and program execution.

Shell script includes various environmental variables, comments, conditions, pipe commands, functions, etc., to make it more dynamic.

When you execute a shell script or function, a command interpreter goes through the ASCII text line-by-line, loop-by-loop, test-by-test, and executes each statement as each line is reached from top to bottom.

What is AWS IAM or What is IAM in AWS ?

AWS IAM stands for Amazon’s managed Identity and Access Management service, which controls who can access an AWS account and what resources in the AWS account can be accessed.

When you create a new AWS account, by default you are the root user, who has control over the entire AWS account and can access everything. The root user can log in to the AWS account using the email address and password you registered.

There are some important components of AWS IAM, such as:

AWS IAM Resources

AWS IAM resources are the objects stored in IAM, such as user, role, policy, group, and identity provider.

AWS IAM Resources

AWS IAM entities

AWS IAM entities are those objects which can authenticate to an AWS account, such as the root user, IAM users, federated users, and assumed IAM roles.

AWS IAM entities

AWS IAM Principals

AWS IAM principals are the applications or users who use entities to work with AWS services, for example, Python AWS Boto3 or a person such as Robert.

AWS IAM Identities

AWS IAM identities are the objects which identify themselves to another service, such as IAM user “user1” accessing an AWS EC2 instance; here, user1 presents its own identity to show that it has access to create an AWS EC2 instance. Examples of identities are groups, users, and roles.

AWS IAM Identities

Prerequisites

  1. An AWS account to create the IAM user. If you don’t have an AWS account, please create one from here: AWS account.
  2. Windows 7 or a later edition, where you will execute the shell script.
  3. AWS CLI installed. To install the AWS CLI, click here.
  4. Git bash. To install Git bash, click here.
  5. A code editor for writing the shell script on the Windows machine, such as Visual Studio Code. To install Visual Studio Code, click here.

How to create IAM user in AWS manually

Did you know the root user is a shared account with all privileges, yet it is not recommended for any day-to-day activity on an AWS account?

Instead of using the root user, a shared account, create individual users and grant them the appropriate permissions.

An IAM user can access a single AWS EC2 instance, multiple AWS S3 buckets, or even attain admin access to gain complete control of the AWS account.

  • Navigate to the Amazon Management console and search for IAM.
  • On the AWS IAM page, click on the Add users button in the IAM dashboard.
Adding an IAM user in AWS Cloud
  • Now, provide the username, add a custom password and also select Programmatic access as shown below.
Providing the details to create an IAM user
  • Click on Next: Permissions and choose Attach existing policies. This tutorial will grant Administrator access to the IAM user that you created previously.
Attaching IAM policy to IAM user in AWS
  • For now, skip tagging and click on Create user. The IAM user is created successfully. Now save the access key ID and secret access key, which will be used later in the article.
Downloading the AWS IAM user credentials for the IAM user

How to create AWS IAM user using shell script in Amazon account

Previously you learned how to create an IAM user manually within the Amazon Management console, but in this section let’s create an AWS IAM user using a shell script in an Amazon account. Let’s quickly jump in and create the script.

  • Create a folder on your windows machine at any location. Further, under the same folder, create a file named create-iam-user.sh and copy/paste the below code.
#! /bin/bash
# Checking if access key is setup in your system 

if ! grep -q aws_access_key_id ~/.aws/config; then      # grep -q  Turns off Writing to standard output
   if ! grep -q aws_access_key_id ~/.aws/credentials; then 
      echo "AWS config not found or CLI is not installed"
      exit 1
    fi 
fi


# read command will prompt you to enter the name of IAM user you wish to create 

read -r -p "Enter the username to create": username

# Using AWS CLI Command create IAM user 

aws iam create-user --user-name "${username}" --output json

# Here we are creating access and secret keys and then using query and storing the values in credentials

credentials=$(aws iam create-access-key --user-name "${username}" --query 'AccessKey.[AccessKeyId,SecretAccessKey]'  --output text)

# cut command formats the output into the correct columns.

access_key_id=$(echo ${credentials} | cut -d " " -f 1)
secret_access_key=$(echo ${credentials} | cut --complement -d " " -f 1)

# echo command will print on the screen 

echo "The Username "${username}" has been created"
echo "The access key ID  of "${username}" is $access_key_id "
echo "The Secret access key of "${username}" is $secret_access_key "

Executing the Shell Script to Create AWS IAM user

Previously you created the shell script to create the AWS IAM user, which is great, but it is not doing much unless you run it. Let’s execute the shell script now.

  • Open the visual studio code and then open the location of the file create-iam-user.sh.
Opening Shell script on visual studio code
  • Finally execute the shell script.
./create-iam-user.sh
Executing the shell script to create the AWS IAM user

Verifying the Newly created IAM user in AWS

Earlier, in the previous section, the shell script ran successfully; let’s verify if the IAM user has been created in the AWS account.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘IAM’, and click on the IAM menu item; you should see that the IAM user has been created. You can also confirm it from the CLI, as sketched below.
Verifying the Newly created IAM user in AWS
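
Alternatively, a quick list from the CLI confirms the new user without opening the console:

# Prints all IAM user names in the account; the newly created user should appear
aws iam list-users --query 'Users[].UserName' --output text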

Conclusion

In this tutorial, you learned how to create AWS IAM users using a shell script on AWS, step by step. With IAM, each person gets individual access to the AWS account, and you can manage permissions accordingly.

Now that you have newly created IAM users in the AWS account, which AWS resource do you plan to create next using this?

How to Launch AWS S3 bucket using Shell Scripting

Are you storing your data securely and in a scalable, highly available, and fault-tolerant way? If not, consider using Amazon Simple Storage Service (Amazon S3) in the AWS cloud.

This tutorial will teach you how to launch an AWS S3 bucket in an Amazon account using bash or shell scripting.

Let’s dive into it quickly.


Table of Content

  1. What is Shell Script or Bash Script?
  2. What is the Amazon AWS S3 bucket?
  3. Prerequisites
  4. Building a shell script to create AWS S3 bucket in Amazon account
  5. Executing the Shell Script to Create AWS S3 bucket in Amazon Cloud
  6. Verifying the AWS S3 bucket in AWS account
  7. Conclusion

What is Shell Script or Bash Script?

A shell script is a text file containing a list of commands executed on the terminal or shell in one go, in sequential order. A shell script performs various important tasks such as file manipulation, printing text, and program execution.

Shell script includes various environmental variables, comments, conditions, pipe commands, functions, etc., to make it more dynamic.

When you execute a shell script or function, a command interpreter goes through the ASCII text line-by-line, loop-by-loop, test-by-test, and executes each statement as each line is reached from top to bottom.

What is the Amazon AWS S3 bucket?

AWS S3, why is it called S3? The name is an abbreviation of three words that each start with “S”: Simple Storage Service. The AWS S3 service helps in storing unlimited data safely and efficiently. Everything in the AWS S3 service is an object, such as PDF files, zip files, text files, war files, anything. Some of the features of the AWS S3 bucket are below:

  • To store data in an AWS S3 bucket, you need to upload the data.
  • To keep your AWS S3 bucket secure, add the necessary permissions to an IAM role or IAM user.
  • AWS S3 bucket names are globally unique, which means a given bucket name can exist only once across all accounts and regions.
  • By default, 100 buckets can be created in an AWS account; beyond that, you need to raise a ticket with Amazon.
  • The owner of an AWS S3 bucket is specific to the AWS account that created it.
  • AWS S3 buckets are created region-specific, such as us-east-1, us-east-2, us-west-1, or us-west-2.
  • AWS S3 bucket objects are created in AWS S3 via the AWS console or using the AWS S3 API.
  • AWS S3 buckets can be publicly visible, meaning anybody on the internet can access them, but it is recommended to keep public access blocked for all buckets unless very much required (see the sketch after this list).
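
As referenced in the last point, public access can be blocked for a bucket from the AWS CLI; a minimal sketch with a placeholder bucket name:

# Block all four forms of public access on a bucket
aws s3api put-public-access-block \
  --bucket my-example-bucket \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true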

Prerequisites

  1. An AWS account to create the S3 bucket. If you don’t have an AWS account, please create one from here: AWS account.
  2. Windows 7 or a later edition, where you will execute the shell script.
  3. AWS CLI installed. To install the AWS CLI, click here.
  4. Git bash. To install Git bash, click here.
  5. A code editor for writing the shell script on the Windows machine, such as Visual Studio Code. To install Visual Studio Code, click here.

Building a shell script to create AWS S3 bucket in Amazon account

Now that you have a good idea about the AWS S3 bucket and shell scripts, let’s learn how to build a shell script to create an AWS S3 bucket in an Amazon account.

  • Create a folder on your windows machine at any location. Further, under the same folder, create a file named create-s3.sh and copy/paste the below code.
#! /usr/bin/bash
# This Script will create S3 bucket and tag the bucket with appropriate name.

# To check if access key is setup in your system 


if ! grep -q aws_access_key_id ~/.aws/config; then          # -q suppresses output so credentials are not printed
   if ! grep -q aws_access_key_id ~/.aws/credentials; then
   echo "AWS config not found or you don't have AWS CLI installed"
   exit 1
   fi
fi

# read command will prompt you to enter the name of bucket name you wish to create 


read -r -p  "Enter the name of the bucket:" bucketname

# Creating first function to create a bucket 

function createbucket()
   {
    aws s3api  create-bucket --bucket "$bucketname" --region us-east-2 \
        --create-bucket-configuration LocationConstraint=us-east-2   # Required for regions other than us-east-1
   }

# Creating Second function to tag a bucket 

function tagbucket()    {
    
   aws s3api  put-bucket-tagging --bucket "$bucketname" --tagging 'TagSet=[{Key=Name,Value="'$bucketname'"}]'
}

# echo command will print on the screen 

echo "Creating the AWS S3 bucket and Tagging it !! "
echo ""
createbucket    # Calling the createbucket function  
tagbucket       # calling our tagbucket function
echo "AWS S3 bucket $bucketname created successfully"
echo "AWS S3 bucket $bucketname tagged successfully "

Executing the Shell Script to Create AWS S3 bucket in Amazon Cloud

Previously you created the shell script to create an AWS S3 bucket in Amazon Cloud, which is great, but it is not doing much unless you run it. Let’s execute the shell script now.

  • Open the visual studio code and then open the location of the file create-s3.sh.
Opening Shell script on visual studio code
  • Finally execute the shell script.
./create-s3.sh
Executing the shell script to create AWS S3 bucket

Verifying the AWS S3 bucket in AWS account

Earlier, in the previous section, the shell script ran successfully; let’s verify if the AWS S3 bucket has been created in the AWS account.

  • Open your favorite web browser and navigate to the AWS Management Console and log in.
  • While in the Console, click on the search bar at the top, search for ‘S3’, and click on the S3 menu item; you should see the list of AWS S3 buckets, including the bucket that you specified in the shell script.
Viewing the AWS S3 bucket in AWS cloud
  • Also verify the tags that you applied to the AWS S3 bucket by navigating to the Properties tab; the same check works from the CLI, as sketched below.
Viewing the AWS S3 bucket tags in the AWS cloud
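
The same check works from the CLI; a one-liner sketch that prints the TagSet the script applied (replace the placeholder with the bucket name you entered):

aws s3api get-bucket-tagging --bucket my-example-bucket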

Conclusion

In this tutorial, you learned how to set up an Amazon AWS S3 bucket using a shell script on AWS, step by step. Much of the data behind mobile apps and websites is stored on AWS S3.

Now that you have a newly created AWS S3 bucket, what do you plan to store in it?