Install ELK Stack on Ubuntu: Elasticsearch, Logstash, and Kibana Dashboard.

If you are looking to quickly install the ELK Stack, now known as the Elastic Stack, then you have come to the right place.

The ELK Stack mainly contains three components, i.e., Elasticsearch, Logstash, and Kibana, along with lightweight data shippers called Beats, such as Filebeat and Metricbeat. Combining all these components makes it easier to store, search, analyze, and visualize logs generated from any source in any format.

In this tutorial, you will learn how to install the ELK Stack, i.e., Elasticsearch, Logstash, and the Kibana Dashboard, on an Ubuntu machine.

Let’s dive in quickly.


Table of Contents

  1. Prerequisites
  2. How to Install Elasticsearch on Ubuntu
  3. Configuring Elasticsearch on Ubuntu Machine
  4. How to Install Kibana on Ubuntu
  5. Viewing Kibana Dashboard on Ubuntu Machine
  6. Verify the Kibana Dashboard
  7. How to Install Logstash
  8. Configuring Logstash with Filebeat
  9. Installing and Configuring Filebeat
  10. Installing and Configuring Metricbeat
  11. Verifying the ELK Stack in the Kibana Dashboard
  12. Conclusion
ELK Stack architecture

Prerequisites

  • Ubuntu machine, preferably version 18.04 or later. If you don’t have one, you can create an EC2 instance in an AWS account.
  • Recommended: at least 4GB RAM and 5GB of free drive space.
  • Apache installed on the Ubuntu machine, to act as a web server and proxy server.

You may incur a small charge for creating an EC2 instance on Amazon Web Services (AWS).

How to Install Elasticsearch on Ubuntu

Let’s kick off this tutorial by installing the first component of the ELK stack, Elasticsearch. But before you install Elasticsearch, you need to have Java installed on the machine.

  • Log in to the Ubuntu machine using your favorite SSH client.
  • First, update your existing list of packages by running the below command.
sudo apt update
  • Now, install Java using the apt install command as shown below.
# Installing Java (OpenJDK 11 LTS)
sudo apt install default-jdk  
Installing Java
  • Next, verify the Java version on your machine. As you can see below, Java has been successfully installed on the Ubuntu machine.
java -version               # To check the Installed Java Version
To check the installed Java version
  • Further, add the GPG key for the official Elastic repository to your system. This key establishes trust between your machine and the official Elastic repository and enables access to all the open-source software in the ELK stack.
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Adding the GPG key for the official Elastic repository to your system
  • Install the below prerequisite packages so that apt can fetch packages over the HTTPS protocol. The apt-transport-https package allows your machine to connect to external repositories over HTTPS (HTTP over TLS).
sudo apt install apt-transport-https ca-certificates curl software-properties-common
Installing the prerequisite packages
  • Now, add the Elastic repository to the APT sources so that you can install all the required ELK packages.
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee –a /etc/apt/sources.list.d/elastic-7.x.list
  • Next, update the package list again using the following command.
sudo apt update
  • Now it’s time to install Elasticsearch with the following command:
sudo apt-get install elasticsearch
Installing Elasticsearch

Configuring Elasticsearch on Ubuntu Machine

Now that you have successfully installed Elasticsearch on your Ubuntu machine, it is important to configure the hostname and the port in the Elasticsearch configuration file. Let’s do it.

  • Open the Elasticsearch configuration file with the below command and uncomment the network.host and http.port parameters.
sudo vi /etc/elasticsearch/elasticsearch.yml
Uncomment the network.host and http.port parameters
  • In the Elasticsearch configuration file, update the discovery.type parameter as shown below.
Update the discovery.type
  • Now, start and enable the Elasticsearch service on the Ubuntu machine using the below commands.
sudo systemctl enable elasticsearch.service
sudo systemctl start elasticsearch.service
Starting and enabling the Elasticsearch service
Checking the Elasticsearch service status
  • Finally, verify the Elasticsearch installation by running the curl command against port 9200 on your machine.
curl http://127.0.0.1:9200
Verifying the Elasticsearch service
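If Elasticsearch is up, the curl command returns a JSON document roughly like the trimmed example below (your name, cluster_name, and version values will differ):

{
  "name" : "ubuntu",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "7.16.3",
    ...
  },
  "tagline" : "You Know, for Search"
}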

How to Install Kibana on Ubuntu

Now that you have successfully installed and configured Elasticsearch, the next component to install in the ELK stack is Kibana, so that you can view the Kibana dashboard. Let’s install Kibana.

  • Installing Kibana is simple; you need to run just a single command, as shown below.
sudo apt-get install kibana
Installing Kibana

  • Now Kibana is installed successfully. You will need to make changes in the Kibana configuration file (/etc/kibana/kibana.yml), as you did earlier for Elasticsearch. To make the configuration changes, open the kibana.yml configuration file and uncomment the following lines:
server.port: 5601
server.host: "localhost"
elasticsearch.hosts: ["http://localhost:9200"]
Uncomment the Kibana port, host, and Elasticsearch host

Kibana works on port 5601 by default.

  • Once the configuration file is updated, start and enable the Kibana service that you recently installed.
sudo systemctl start kibana
sudo systemctl enable kibana
Starting and enabling the Kibana service
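Before putting Kibana behind Apache, you can quickly confirm it is responding on localhost by querying its status API (a sketch; Kibana may take a minute to come up after starting):

curl http://localhost:5601/api/status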

Viewing Kibana Dashboard on Ubuntu Machine

Great, now you have Elasticsearch running on port 9200 and Kibana running on port 5601. Still, to view the Kibana dashboard on the Ubuntu machine, you need to use the Apache server as a proxy server, allowing the Kibana dashboard to be viewed on port 80.

Let’s configure Apache to run as a proxy server.
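If Apache is not already installed (see the prerequisites), you can install it first with apt:

sudo apt install apache2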

  • Create a configuration file named domain.conf in the /etc/apache2/sites-available directory and copy/paste the below configuration.
sudo vi /etc/apache2/sites-available/domain.conf
<VirtualHost *:80>
    ServerName localhost
    ProxyRequests Off
    ProxyPreserveHost On
    ProxyVia Full
    <Proxy *>
        Require all granted
    </Proxy>
    ProxyPass / http://127.0.0.1:5601/
    ProxyPassReverse / http://127.0.0.1:5601/
</VirtualHost>
  • After creating the Apache configuration file, run the below commands so that Apache works as a proxy server.
sudo a2dissite 000-default
sudo a2enmod proxy proxy_http rewrite headers expires
sudo a2ensite domain.conf
sudo service apache2 restart
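You can also verify the Apache configuration syntax at any point with the below command; it should report Syntax OK.

sudo apache2ctl configtest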

Verify the Kibana Dashboard

In the previous section, you installed Kibana and configured it to run behind the Apache server. Let’s verify it by viewing the Kibana dashboard, navigating to the IP address of the server on port 80.
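You can also check from the command line on the server itself; with the proxy working, Apache on port 80 should answer with an HTTP 200 or a redirect from Kibana:

curl -I http://localhost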

As you can see below, the Kibana dashboard loads successfully.

Kibana dashboard loads successfully

How to Install Logstash

Logstash is a lightweight, open-source, server-side data processing pipeline that allows you to collect data from various sources, transform it on the fly, and send it to your desired destination. In the ELK stack, Logstash collects data from multiple sources and stores it in Elasticsearch, where it can be visualized with Kibana.

With that, let’s install the third component used in Elastic Stack. Let’s install Logstash on an Ubuntu machine.

  • Install Logstash by running the following command.
sudo apt-get install logstash
Installing Logstash
  • Now, start and enable Logstash by running the below systemctl commands.
sudo systemctl start logstash
sudo systemctl enable logstash
Starting and enabling Logstash
  • Finally, verify Logstash by running the below command.
sudo systemctl status logstash
Verifying Logstash

Configuring Logstash with Filebeat

Awesome, now you have Logstash installed. Next, you will configure Beats to send data through Logstash; although Beats can send data directly to the Elasticsearch database, it is good practice to use Logstash to process the data first. Let’s configure Beats in Logstash with the below steps.

  • Create a file named logstash.conf in the /etc/logstash/conf.d directory and copy/paste the below configuration, which sets up a Filebeat input.
# Input: accept incoming logs from Beats on port 5044

input {
  beats {
    port => 5044
  }
}

# Filter: parse syslog messages with grok before they are sent to Elasticsearch

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

# Output: push the processed events to an Elasticsearch instance

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    # document_type is deprecated in Logstash 7.x and can be omitted
    document_type => "%{[@metadata][type]}"
  }
}

  • Now, test your Logstash configuration with the below command. If you see a "Configuration OK" message, then the setup is properly done.
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
Testing your Logstash configuration
  • Finally, restart Logstash with the below command so that the new pipeline configuration takes effect.
sudo systemctl restart logstash

Installing and Configuring Filebeat

The Elastic Stack uses lightweight data shippers called Beats (such as Filebeat and Metricbeat) to collect data from various sources and transport it to Logstash or Elasticsearch. In this section, you will learn to install and configure Filebeat on an Ubuntu machine, which will be used to push data into Logstash and, further on, to Kibana.

  • Install Filebeat on the Ubuntu machine using the following command.
sudo apt install filebeat
Installing Filebeat
  • Next, edit the Filebeat configuration file so that Filebeat is able to connect to Logstash. Uncomment the output.logstash: and hosts: ["localhost:5044"] lines, and comment out the output.elasticsearch: and hosts: ["localhost:9200"] lines.
sudo vi /etc/filebeat/filebeat.yml
Uncomment the output.logstash and hosts: ["localhost:5044"] lines
Comment out the output.elasticsearch and hosts: ["localhost:9200"] lines
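After the edit, the output section of /etc/filebeat/filebeat.yml should look roughly like the snippet below, with the Elasticsearch output commented out and the Logstash output active:

#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]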
  • Next, enable the Filebeat system module with the below commands.
sudo filebeat modules enable system
sudo filebeat setup --pipelines --modules system
Enabling the Filebeat system module
  • Now, load the index template from Filebeat into Elasticsearch by running the below command. An index template defines the settings and mappings that Elasticsearch applies to new indices as they are created.
sudo filebeat setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
Loading the Filebeat index template into Elasticsearch
  • Also, run the below command to complete the Filebeat setup; it loads the sample Kibana dashboards and the remaining assets so that the data pushed into Elasticsearch can be visualized.
sudo filebeat setup -e -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200'] -E setup.kibana.host=localhost:5601
Completing the Filebeat setup
  • Now you can start and enable Filebeat.
sudo systemctl start filebeat
sudo systemctl enable filebeat
start and enable Filebeat

Installing and Configuring Metricbeat

Previously, you learned to install and configure Filebeat; this time, you will learn to install and configure Metricbeat. Metricbeat is a lightweight shipper that you can install on your servers to periodically collect metrics from the operating system and from services running on the server.

  • To download and install Metricbeat, open a terminal window and use the commands that work with your system:
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.16.3-amd64.deb
sudo dpkg -i metricbeat-7.16.3-amd64.deb
  • From the Metricbeat install directory, enable the system module:
sudo metricbeat modules enable system
  • Set up the initial environment for Metricbeat and start Metricbeat by running the following commands.
sudo metricbeat setup -e
sudo service metricbeat start
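To confirm Metricbeat is shipping data, check the service status and look for a metricbeat index in Elasticsearch (a sketch; the exact index name varies with the Metricbeat version and date):

sudo service metricbeat status
curl -XGET 'http://localhost:9200/_cat/indices/metricbeat-*?v'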

Verifying the ELK Stack in the Kibana Dashboard

Now your ELK Stack is completely set up. Filebeat and Metricbeat will begin pushing the syslog and authorization logs to Logstash, which then loads that data into Elasticsearch. To verify that Elasticsearch is receiving the data, query the indices with the below command.

curl -XGET 'http://localhost:9200/_cat/indices?v'
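The output lists one row per index; with the Beats shipping data, you should see filebeat and metricbeat indices roughly like the illustrative rows below (your UUIDs, dates, counts, and sizes will differ):

health status index                        uuid                 pri rep docs.count docs.deleted store.size pri.store.size
yellow open   filebeat-7.16.3-2022.01.26   gDtN1JqWQ1eX8hV9pQ2A 1   1       12042            0      4.2mb          4.2mb
yellow open   metricbeat-7.16.3-2022.01.26 hRtO2KrXR2fY9iW0qR3B 1   1        8410            0      3.1mb          3.1mb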
  • As you can see below, the request is successful, which means the data pushed by Filebeat and Metricbeat is successfully stored in Elasticsearch.
Filebeat and Metricbeat pushing data into Elasticsearch
Kibana Dashboard with Beats configured
Logs from Metricbeat in the Kibana Dashboard


Conclusion

In this tutorial, you learned how to install the ELK Stack and its components, i.e., Elasticsearch, Logstash, the Kibana Dashboard, Filebeat, and Metricbeat, on an Ubuntu machine.

Now that you have a strong understanding of ELK Stack and all the components, which application do you plan to monitor next?


Learn ELK Stack from Scratch: Elasticsearch, Logstash, Kibana dashboard, and AWS Elasticsearch

If you want to analyze data for your website or applications, consider learning the ELK Stack, or Elastic Stack, which contains Elasticsearch, Logstash, and the Kibana dashboard.

Elasticsearch is a powerful analytics search engine that allows you to store, index, and search documents of all types of data in real time. But if you need your search engine to scale automatically and be load-balanced, then AWS Elasticsearch (Amazon OpenSearch) is for you.

In this tutorial, you will learn what the Elastic Stack, Elasticsearch, Logstash, and the Kibana dashboard are, and finally AWS Elasticsearch, all from scratch, and believe me, this tutorial will be helpful for you.

Let’s get into it.

Related: Install ELK Stack on Ubuntu: Elasticsearch, Logstash, and Kibana Dashboard.


Table of Contents

  1. What is ELK Stack or Elastic Stack?
  2. What is Elasticsearch?
  3. QuickStart Kibana Dashboard
  4. What is Logstash?
  5. Features of Logstash
  6. What is AWS Elasticsearch or Amazon OpenSearch Service?
  7. Creating the Amazon Elasticsearch Service domain or OpenSearch Service domain
  8. Uploading data in AWS Elasticsearch
  9. Search documents in Kibana Dashboard
  10. Conclusion

What is ELK Stack or Elastic Stack?

The name ELK Stack, or Elastic Stack, describes a stack that contains Elasticsearch, Logstash, and Kibana. The ELK stack allows you to aggregate logs from all your systems and applications, analyze these logs, and create visualizations for application and infrastructure monitoring, faster troubleshooting, security analytics, and more.

  • E = Elasticsearch: Elasticsearch is a distributed search and analytics engine built on Apache Lucene.
  • L = Logstash: Logstash is an open-source data ingestion tool that allows you to collect data from various sources, transform it, and send it to your desired destination.
  • K = Kibana: Kibana is a data visualization and exploration tool for reviewing logs and events.
ELK Stack architecture

What is Elasticsearch?

Elasticsearch is an analytics and full-text search engine built on the Apache Lucene search engine library, where the indexing, search, and analysis operations occur. Elasticsearch is a powerful analytics search engine that allows you to store, index, and search documents of all types of data in real time.

Whether you have structured or unstructured text or numerical data, Elasticsearch can efficiently store and index it in a way that supports fast searches. Some of the features of Elasticsearch are:

  • Provides the search box on websites, web pages, or applications.
  • Stores and analyzes data and metrics.
  • Logstash and Beats help with collecting and aggregating the data and storing it in Elasticsearch.
  • Elasticsearch is used in machine learning.
  • Elasticsearch stores complex data structures that have been serialized as JSON documents.
  • If you have multiple Elasticsearch nodes in an Elasticsearch cluster, documents are distributed across the cluster and can be accessed immediately from any node.
  • Elasticsearch also has the ability to be schema-less, which means that documents can be indexed without explicitly specifying how to handle each of the different fields.
  • The Elasticsearch REST APIs support structured queries, full-text queries, and complex queries that combine the two. You can access all of these search capabilities using Elasticsearch’s comprehensive JSON-style query language (Query DSL).
  • An Elasticsearch index can be thought of as an optimized collection of documents, and each document is a collection of fields, which are the key-value pairs that contain your data.
  • An Elasticsearch index is really just a logical grouping of one or more physical shards, where each shard is actually a self-contained index.
  • There are two types of shards: primaries and replicas. Each document in an index belongs to one primary shard. The number of primary shards in an index is fixed at the time that an index is created, but the number of replica shards can be changed at any time.
  • Sharding splits an index into smaller pieces, so that more documents can be stored per index, large indices fit more easily onto nodes, and query throughput improves. By default, an index has one shard, and you can add more at creation time (see the example request after the figure below).
Elasticsearch Cluster
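As a quick illustration of shard settings, the request below (runnable from the Kibana Dev Tools console described next) creates a hypothetical index named my-index with two primary shards, each with one replica:

PUT my-index
{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  }
}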

Elasticsearch provides REST APIs for managing your cluster and indexing and searching your data. For testing purposes, you can easily submit requests directly from the command line or through the Kibana dashboard by running GET requests in the Kibana console under Dev Tools, as shown below.

<IP-address-of-elasticsearch>/app/dev_tools#/console
Kibana console with Dev Tools
  • You can find the Elasticsearch cluster health by running the below command, where _cluster is the API and health is the command.
GET _cluster/health
Checking the health of the Elasticsearch cluster
  • To check the Elasticsearch node details, run the below command.
GET _cat/nodes?v
Checking the health of the Elasticsearch nodes
  • To check the configured Elasticsearch indices, run the below command. You will notice Kibana is also listed in the indices because Kibana data is also stored in Elasticsearch.
GET _cat/indices
Checking the Elasticsearch indices on the Elasticsearch cluster
  • To check the primary and replica shards from the Kibana console, run the below request.
GET _cat/shards
Checking all the primary and replica shards in the Elasticsearch cluster

QuickStart Kibana Dashboard

Kibana allows you to search documents, observe and analyze data, and visualize it in charts, maps, graphs, and more for the Elastic Stack in the form of a dashboard. Your data can be structured or unstructured text, numerical data, time-series data, geospatial data, logs, metrics, or security events.

Kibana also manages your data, monitors the health of your Elastic Stack cluster, and controls which users have access to the Kibana Dashboard.

Kibana also allows you to upload data into the ELK stack by uploading a file and optionally importing the data into an Elasticsearch index. Let’s learn how to import data in the Kibana dashboard.

  • Create a file named shanky.txt and copy/paste the below content.
[    6.487046] kernel: emc: device handler registered
[    6.489024] kernel: rdac: device handler registered
[    6.596669] kernel: loop0: detected capacity change from 0 to 51152
[    6.620482] kernel: loop1: detected capacity change from 0 to 113640
[    6.636498] kernel: loop2: detected capacity change from 0 to 137712
[    6.668493] kernel: loop3: detected capacity change from 0 to 126632
[    6.696335] kernel: loop4: detected capacity change from 0 to 86368
[    6.960766] kernel: audit: type=1400 audit(1643177832.640:2): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lsb_release" pid=394 comm="apparmor_parser"
[    6.965983] kernel: audit: type=1400 audit(1643177832.644:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe" pid=396 comm="apparmor_parser"
  • Next, upload the file using Kibana’s file upload feature (from the Kibana home page, choose Upload a file). Once the file is uploaded successfully, you will see the details of all the content you uploaded.
Data uploaded in Kibana
Details of the data uploaded in Kibana
  • Next, create the Elasticsearch index and click on Import.
Creating the Elasticsearch index on the Elasticsearch cluster
  • After the import is successful, you will see the status of your Elasticsearch index as below.
Status of the file upload in Kibana
  • Next, click View index in Discover, as shown in the previous image. Now you should be able to see the logs within the Elasticsearch index (shankyindex).
Checking the logs in Kibana with the newly created index

Kibana allows you to perform actions such as:

  • Refresh, flush, and clear the cache of your indices or index.
  • Define the lifecycle of an index as it ages.
  • Define a policy for taking snapshots of your Elasticsearch cluster.
  • Roll up data from one or more indices into a new, compact index.
  • Replicate indices on a remote cluster and copy them to a local cluster.
  • Alerting allows you to detect conditions in different Kibana apps and trigger actions when those conditions are met.

What is Logstash?

Logstash allows you to collect data with real-time pipelining capabilities. Logstash lets you collect data from various sources, such as Beats, and push it to the Elasticsearch cluster. With Logstash, any type of event is transformed using an array of input, filter, and output plugins, further simplifying the ingestion process.

Working of Logstash
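As a minimal illustration of the input and output plugin model shown above, the one-liner below runs a throwaway pipeline that reads events from stdin and prints them back to stdout as structured events (type a line, and press Ctrl+C to stop):

/usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'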

Features of Logstash

Now that you have a basic idea about Logstash, let’s look at some of the benefits of Logstash, such as:

  • Logstash handles all types of logging data and easily ingests web logs like Apache logs and application logs like log4j logs from Java.
  • Logstash captures other log formats like syslog, networking, and firewall logs.
  • One of the main benefits of Logstash is to securely ingest logs with Filebeat.

What is AWS Elasticsearch or Amazon OpenSearch Service?

Amazon Elasticsearch Service, or OpenSearch, is a managed service that deploys and scales Elasticsearch clusters in the cloud. Elasticsearch is an open-source analytics and search engine that performs real-time application monitoring and log analytics.

The Amazon Elasticsearch service provisions all the resources for Elasticsearch clusters and launches them. It also automatically replaces failed Elasticsearch nodes in the cluster. Let’s look at some of the key features of the Amazon Elasticsearch Service.

  • AWS Elasticsearch or Amazon OpenSearch can scale up to 3 PB of attached storage and works with various instance types.
  • AWS Elasticsearch or Amazon OpenSearch easily integrates with other services, such as IAM for security, VPC, AWS S3 for loading data, Amazon CloudWatch for monitoring, and AWS SNS for alert notifications.

Creating the Amazon Elasticsearch Service domain or OpenSearch Service domain

Now that you have a basic idea about the Amazon Elasticsearch Service domain, or OpenSearch Service domain, let’s create one using the AWS Management Console.

  • While in the Console, click on the search bar at the top, search for ‘Elasticsearch’, and click on the Elasticsearch menu item.

Note that the Elasticsearch service has now been replaced with the OpenSearch service.

Searching for the Elasticsearch service
  • Creating an Amazon Elasticsearch domain is the same as creating an Elasticsearch cluster; that is, domains are clusters with the settings, instance types, instance counts, and storage resources that you specify. Click on Create a new domain.
Creating an Amazon Elasticsearch domain
  • Next, select the deployment type as Development and testing.
Choosing the deployment type

Next, configure the settings as described below:

  • For Configure domain, provide the Elasticsearch domain name “firstdomain”. A domain is the collection of resources needed to run Elasticsearch, and the domain name will be part of your domain endpoint.
  • For Data nodes, choose t3.small.elasticsearch, ignore the rest of the settings, and click on NEXT.
  • For Network configuration, choose Public access.
  • For Fine-grained access control, choose Create master user and provide the username user and the password Admin@123. Fine-grained access control keeps your data safe.
  • For Domain access policy, choose Allow open access to the domain. Access policies control whether a request is accepted or rejected when it reaches the Amazon Elasticsearch Service domain.
  • Further, keep clicking the NEXT button and create the domain, which takes a few minutes to launch.
Viewing the Elasticsearch domain (Elasticsearch cluster) endpoint
  • After the successful creation of the Elasticsearch domain, click on the firstdomain Elasticsearch domain.
Elasticsearch domain (firstdomain)

Uploading data in AWS Elasticsearch

You can load streaming data into your Amazon Elasticsearch Service (Amazon ES) domain from many different sources, like Amazon Kinesis Data Firehose, Amazon CloudWatch Logs, Amazon S3, Amazon Kinesis Data Streams, Amazon DynamoDB, and AWS Lambda functions as event handlers.

  • In this tutorial, you will upload sample data. To upload the sample data, go to the Elasticsearch domain URL using the username user and the password Admin@123, and then click on Add data.
Adding data in Elasticsearch
  • Now, use the sample data and add e-commerce orders.
Sample data to add e-commerce orders to the Elasticsearch cluster

Search documents in Kibana Dashboard

Kibana is a popular open-source visualization tool that works with the AWS Elasticsearch service. It provides an interface to monitor and search the indices. Let’s use Kibana to search the sample data you just uploaded in AWS ES.

  • Now, in the Elasticsearch domain URL itself, click on the Discover option on the left side to search the data.
Click on the Discover option
  • Now you will notice that Kibana has the data that was uploaded. You can modify the timelines and many other fields accordingly.
Viewing the data in the Kibana dashboard


As shown above, Kibana displays the matching records from the sample data you uploaded when you search in the dashboard.

Conclusion

In this tutorial, you learned what the Elastic Stack, Elasticsearch, Logstash, the Kibana dashboard, and AWS Elasticsearch are, from scratch, using the AWS Management Console. Also, you learned how to upload sample data in AWS ES.

Now that you have a strong understanding of the ELK Stack, Elasticsearch, Kibana, and AWS Elasticsearch, which site are you planning to monitor using the ELK Stack and its components?