
ELK Stack

ELK introduction:
"ELK" is the acronym for three open source projects: Elasticsearch, Logstash,
and Kibana. Elasticsearch is a search and analytics engine. Logstash is a
server-side data processing pipeline that ingests data from multiple sources
simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch.
Kibana lets users visualize data with charts and graphs in Elasticsearch.

Elasticsearch is an open source, distributed, RESTful, JSON-based search engine. Easy to use, scalable and flexible, it became hugely popular, and a company (Elastic) formed around it.

Although a search engine at heart, Elasticsearch was quickly adopted for logs, and users wanted an easy way to ingest and visualize that data. Enter Logstash, the powerful ingest pipeline, and Kibana, the flexible visualization tool.

Purpose of the ELK Stack

The ELK Stack is popular because it fulfills a need in the log
management and analytics space. Monitoring modern applications and the
IT infrastructure they are deployed on requires a log management and
analytics solution that enables engineers to overcome the challenge of
monitoring highly distributed, dynamic and noisy environments.

The ELK Stack helps by providing users with a powerful platform that
collects and processes data from multiple data sources, stores that
data in one centralized data store that can scale as data grows, and
provides a set of tools to analyze the data. Of course, the ELK Stack
is open source.

Basic Elasticsearch Concepts

1. Fields

Fields are the smallest individual unit of data in Elasticsearch. Each
field has a defined type and contains a single piece of data that can
be, for example, a boolean, a string or an array. Together, a
collection of fields makes up a single Elasticsearch document.

2. Documents

Documents are JSON objects that are stored within an Elasticsearch
index and are considered the base unit of storage. In the world of
relational databases, a document can be compared to a row in a table.

For example, let's assume that you are running an e-commerce
application. You could have one document per product or one document
per order. There is no limit to how many documents you can store in a
particular index.

Data in documents is defined with fields comprised of keys and values.
A key is the name of the field, and a value can be an item of many
different types, such as a string, a number, a boolean expression,
another object, or an array of values.

Documents also contain reserved fields that constitute the document
metadata, such as:

 _index – the index where the document resides

 _type – the type that the document represents

 _id – the unique identifier for the document

An example of a document:
{
  "_id": 3,
  "_type": "user",
  "age": 28,
  "name": "daniel",
  "year": 1989
}

3. Types

Elasticsearch types are used within documents to subdivide similar
types of data, wherein each type represents a unique class of
documents. Types consist of a name and a mapping (see below) and are
used by adding the _type field. This field can then be used for
filtering when querying a specific type.

An index can have any number of types, and you can store documents
belonging to these types in the same index.
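
For example (a minimal sketch; it assumes an index named example that contains a type called mytype, as in the mapping example below), you can restrict a query to one type via the _type field:
# Example
curl -XGET 'localhost:9200/example/_search?q=_type:mytype&pretty'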

4. Mapping

Like a schema in the world of relational databases, mapping defines
the different types that reside within an index. It defines the fields
for documents of a specific type: the data type (such as string and
integer) and how the fields should be indexed and stored in
Elasticsearch.

A mapping can be defined explicitly or generated automatically when a
document is indexed using templates. (Templates include settings and
mappings that can be applied automatically to a new index.)
# Example
curl -XPUT 'localhost:9200/example' -H 'Content-Type: application/json' -d '{
  "mappings": {
    "mytype": {
      "properties": {
        "name": {
          "type": "string"
        },
        "age": {
          "type": "long"
        }
      }
    }
  }
}'

5. Index

Indices, the largest unit of data in Elasticsearch, are logical
partitions of documents and can be compared to a database in the world
of relational databases.

Continuing our e-commerce app example, you could have one index
containing all of the data related to the products and another with
all of the data related to the customers.

You can have as many indices defined in Elasticsearch as you want.
These in turn will hold documents that are unique to each index.

Indices are identified by lowercase names, which are used when
performing actions (such as searching and deleting) against the
documents inside each index.
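
Continuing the e-commerce sketch (the index names products and customers are illustrative), creating those two indices could look like this:
# Example
curl -XPUT 'localhost:9200/products'
curl -XPUT 'localhost:9200/customers'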

6. Shards

Put simply, a shard is a single Lucene index. Shards are the building
blocks of Elasticsearch and are what facilitate its scalability.

Index size is a common cause of Elasticsearch crashes. Since there is
no limit to how many documents you can store in each index, an index
may take up an amount of disk space that exceeds the limits of the
hosting server. As soon as an index approaches this limit, indexing
will begin to fail.

One way to counter this problem is to split up indices horizontally
into pieces called shards. This allows you to distribute operations
across shards and nodes to improve performance.

When you create an index, you can define how many shards you want.
Each shard is an independent Lucene index that can be hosted anywhere
in your cluster:
# Example
curl -XPUT 'localhost:9200/example' -H 'Content-Type: application/json' -d '{
  "settings" : {
    "index" : {
      "number_of_shards" : 2,
      "number_of_replicas" : 1
    }
  }
}'

7. Replicas

Replicas, as the name implies, are Elasticsearch fail-safe mechanisms
and are basically copies of your index's shards. This is a useful
backup for when a node crashes. Replicas also serve read requests, so
adding replicas can help to increase search performance.

To ensure high availability, replicas are not placed on the same node
as the original shard (called the "primary" shard) from which they
were replicated.

As with shards, the number of replicas can be defined per index when
the index is created. Unlike shards, however, you may change the
number of replicas at any time after the index is created.
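
For example (a minimal sketch that assumes the example index created above), the replica count can be changed later through the index settings API:
# Example
curl -XPUT 'localhost:9200/example/_settings' -H 'Content-Type: application/json' -d '{
  "index" : {
    "number_of_replicas" : 2
  }
}'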

8. Analyzers

Analyzers are used during indexing to break down phrases or
expressions into terms. Defined within an index, an analyzer consists
of a single tokenizer and any number of token filters. For example, a
tokenizer could split a string into specifically defined terms when
encountering a specific expression.

By default, Elasticsearch will apply the "standard" analyzer, which
contains a grammar-based tokenizer that removes common English words
and applies additional filters. Elasticsearch comes bundled with a
series of built-in tokenizers as well, and you can also use a custom
tokenizer.

A token filter is used to filter or modify tokens. For example, an
ASCII folding filter will convert characters like ê, é and è to e.
# Example
curl -XPUT 'localhost:9200/example' -H 'Content-Type: application/json' -d '{
  "mappings": {
    "mytype": {
      "properties": {
        "name": {
          "type": "string",
          "analyzer": "whitespace"
        }
      }
    }
  }
}'

9. Nodes

The heart of any ELK setup is the Elasticsearch instance, which has
the crucial task of storing and indexing data.

In a cluster, different responsibilities are assigned to the various
node types:

 Data nodes — store data and execute data-related operations such as
search and aggregation

 Master nodes — in charge of cluster-wide management and
configuration actions such as adding and removing nodes

 Client nodes — forward cluster requests to the master node and
data-related requests to data nodes

 Tribe nodes — act as client nodes and perform read and write
operations against all of the nodes in the cluster

 Ingest nodes (new in Elasticsearch 5.0) — pre-process documents
before indexing

By default, each node is automatically assigned a unique identifier,
or name, that is used for management purposes and becomes even more
important in a multi-node, or clustered, environment.

When installed, a single node will form a new single-node cluster
named "elasticsearch," but it can also be configured to join an
existing cluster (see below) using the cluster name. Needless to say,
these nodes need to be able to identify each other to be able to
connect.

In a development or testing environment, you can set up multiple nodes
on a single server. In production, however, due to the amount of
resources that an Elasticsearch node consumes, it is recommended to
run each Elasticsearch instance on a separate server.
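
As a rough sketch (assuming Elasticsearch 6.x-style settings; the node name is illustrative), the role of a node can be set in its elasticsearch.yml:
# Example (elasticsearch.yml)
node.name: node-1
node.master: true    # eligible to be elected as the master node
node.data: true      # stores data and serves search/aggregation requests
node.ingest: false   # does not pre-process documents before indexing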

10. Cluster

An Elasticsearch cluster is comprised of one or more Elasticsearch
nodes. As with nodes, each cluster has a unique identifier that must
be used by any node attempting to join the cluster. By default, the
cluster name is "elasticsearch," but this name can be changed, of
course.

One node in the cluster is the "master" node, which is in charge of
cluster-wide management and configuration actions (such as adding and
removing nodes). This node is chosen automatically by the cluster, and
a new master is elected if it fails. (See above for the other types of
nodes in a cluster.)

Any node in the cluster can be queried, including the "master" node.
But nodes also forward queries to the node that contains the data
being queried.

As a cluster grows, it will reorganize itself to spread the data.

There are a number of useful cluster APIs that can query the general
status of the cluster. For example, the cluster health API returns a
health status of either "green" (all shards are allocated), "yellow"
(the primary shards are allocated but some replicas are not), or "red"
(some shards are not allocated in the cluster).
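A minimal way to call it (assuming Elasticsearch is listening on localhost:9200):
# Example
curl -XGET 'localhost:9200/_cluster/health?pretty'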
# Output Example
{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 5,
  "active_shards" : 5,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 5,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 50.0
}

Elasticsearch Queries
Elasticsearch is built on top of Apache Lucene and exposes
Lucene’s query syntax. Getting acquainted with the syntax
and its various operators will go a long way in helping you
query Elasticsearch.
Boolean Operators

As with most computer languages, Elasticsearch supports the AND, OR,
and NOT operators:

 jack AND jill — Will return events that contain both jack and jill

 ahab NOT moby — Will return events that contain ahab but not moby

 tom OR jerry — Will return events that contain tom or jerry, or both
Fields

You might be looking for events where a specific field contains
certain terms. You specify that as follows:

 name:"Ned Stark"
Ranges

You can search for fields within a specific range, using square
brackets for inclusive range searches and curly braces for exclusive
range searches:

 age:[3 TO 10] — Will return events with age between 3 and 10,
inclusive

 price:{100 TO 400} — Will return events with prices greater than 100
and less than 400

 name:[Adam TO Ziggy] — Will return names between and including Adam
and Ziggy
Wildcards, Regexes and Fuzzy Searching

A search would not be a search without wildcards. You can use the *
character for multiple-character wildcards or the ? character for
single-character wildcards.
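
For example (field and values are illustrative):

 name:Jo* will match John, Joe and Joseph

 name:J?n will match Jon and Jan

 name:john~1 is a fuzzy search that will match terms within one character edit of john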

URI Search

The easiest way to search your Elasticsearch cluster is through URI
search. You can pass a simple query to Elasticsearch using the q query
parameter. The following query will search your whole cluster for
documents with a name field equal to "travis":

 curl "localhost:9200/index_name/_search?q=name:travis"

Combined with the Lucene syntax, you can build quite impressive
searches. Usually, you'll have to URL-encode characters such as spaces
(this has been omitted in these examples for clarity):

 curl "localhost:9200/index_name/_search?q=name:john~1 AND (age:[30 TO 40} OR surname:K*) AND -city"

A number of options are available that allow you to customize the URI
search, specifically in terms of which analyzer to use (analyzer),
whether the query should be fault-tolerant (lenient), and whether an
explanation of the scoring should be provided (explain).
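
For instance (a sketch; the parameter values shown are illustrative), these options are appended as additional query parameters:

 curl "localhost:9200/index_name/_search?q=name:travis&lenient=true&explain=true"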

Although the URI search is a simple and efficient way to query your
cluster, you'll quickly find that it doesn't support all of the
features offered to you by Elasticsearch. The full power of
Elasticsearch is exposed through Request Body Search. Using Request
Body Search allows you to build a complex search request using various
elements and query clauses that will match, filter, and order as well
as manipulate documents based on multiple criteria.
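
As a minimal sketch of a Request Body Search (index and field names are illustrative), the same kind of criteria can be expressed with the JSON query DSL:
# Example
curl -XPOST 'localhost:9200/index_name/_search?pretty' -H 'Content-Type: application/json' -d '{
  "query": {
    "bool": {
      "must":     [ { "match": { "name": "travis" } } ],
      "filter":   [ { "range": { "age": { "gte": 30, "lt": 40 } } } ],
      "must_not": [ { "exists": { "field": "city" } } ]
    }
  }
}'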

ElasticSearch Index configurations


Creating Index :::
PUT /index_name

Create index settings :::


POST /index_name/_close

PUT /index_name/_settings
{"max_ngram_diff" : "50","analysis":{"filter":{"my_custom_stop_words_filter":{"filter":
["lowercase"],"ignore_case":"true","char_filter":["my_char_filter"],"type":"stop"},"my_stopwords":
{"type":"stop","stopwords":["_english_"]},"name_ngrams":
{"min_gram":"1","side":"front","type":"edgeNGram","max_gram":"15"},"autocomplete_filter":
{"type":"nGram","min_gram":"2","max_gram":"20"}},"char_filter":{"my_char_filter":
{"type":"mapping","mappings":["- =>\\u0020"]}},"normalizer":{"lowerCase_normaizer":{"filter":
["lowercase"],"char_filter":["my_char_filter"]}},"analyzer":{"partial_name":{"filter":
["lowercase","asciifolding","name_ngrams"],"type":"custom","tokenizer":"standard"},"custom_ana
lyzer":{"filter":["lowercase"],"type":"custom","tokenizer":"standard"},"default":{"filter":
["my_custom_stop_words_filter"],"tokenizer":"whitespace"},"autocomplete":{"filter":
["lowercase","asciifolding"],"char_filter":
["html_strip"],"type":"custom","tokenizer":"whitespace"},"lowerCase":{"filter":
["lowercase"],"char_filter":
["html_strip"],"type":"custom","tokenizer":"keyword"},"lowercase_analyzer":{"filter":
["lowercase"],"char_filter":
["my_char_filter"],"type":"custom","tokenizer":"standard"},"whitespace_analyzer":{"filter":
["lowercase","my_custom_stop_words_filter"],"char_filter":
["my_char_filter","html_strip"],"tokenizer":"whitespace"},"commaSeparated":{"filter":
["lowercase"],"tokenizer":"commaSeparated"},"startsWithAnalyzer":{"filter":
["lowercase"],"type":"custom","tokenizer":"keyword"}},"tokenizer":{"commaSeparated":
{"pattern":",","type":"pattern"}}}, "max_terms_count":"1000000","number_of_replicas":"1"}

POST /index_name/_open

Create index mappings :::


PUT index_name/_mappings

{"properties":{"fieldname":{"type":"text","fields":{"caseinsensitive":
{"type":"keyword","normalizer":"lowerCase_normaizer"},"keyword":
{"type":"keyword","ignore_above":256},"normalize":
{"type":"keyword","normalizer":"lowerCase_normaizer"},"whitespace":
{"type":"text","analyzer":"whitespace_analyzer"}},"analyzer":"autocomplete","fielddata":true}}}

Delete index :::


DELETE v2_conferencesmy

Moving index settings, mappings and data from one server to another :::

elasticdump --input=http://elastic:password@localhost:9200/index1 --output=http://elastic:password@localhost2:9200/index1 --type=settings

elasticdump --input=http://elastic:password@localhost:9200/index1 --output=http://elastic:password@localhost2:9200/index1 --type=mapping

elasticdump --input=http://elastic:password@localhost:9200/index1 --output=http://elastic:password@localhost2:9200/index1 --type=data

Advantages
Advantages of ElasticSearch include the following:

 Lots of search options. ElasticSearch implements a lot of features
when it comes to search, such as customized splitting of text into
words, customized stemming, faceted search, full-text search,
autocompletion, and instant search. Fuzzy search is also good for
spelling errors: you can find what you are searching for even though
you have a spelling mistake. Autocompletion and instant search refer
to searching while the user types. This can be simple suggestions of
existing tags, trying to predict a search based on search history, or
just doing a completely new search for every keyword.
 Document-oriented. ElasticSearch stores real-world complex
entities as structured JSON documents and indexes all fields by
default, with a higher performance result.
 Speed. Speaking of performance, ElasticSearch is able to execute
complex queries extremely fast. It also caches almost all of the
structured queries commonly used as a filter for the result set and
executes them only once. For every other request containing a cached
filter, it checks the result from the cache.
 Scalability. Software development teams favor ElasticSearch because
it is a distributed system by nature and can easily scale horizontally,
providing the ability to extend resources and balance the loading
between the nodes in a cluster.
 Data record. ElasticSearch records any changes made in transaction
logs on multiple nodes in the cluster to minimize the chance of data
loss.
 Query fine tuning. ElasticSearch has a powerful JSON-based DSL, which
allows development teams to construct complex queries and fine-tune
them to receive the most precise results from a search. It also
provides a way of ranking and grouping results.
 RESTful API. ElasticSearch is API-driven, so actions can be
performed using a simple RESTful API.
 Distributed approach. Indices can be divided into shards, with each
shard able to have any number of replicas. Routing and rebalancing
operations are done automatically when new documents are added.

Disadvantages of Elasticsearch:

 Elasticsearch does not have multi-language support in terms of handling
request and response data (only possible in JSON), unlike Apache Solr,
where it is possible in CSV, XML and JSON formats.

 Elasticsearch can also suffer from split-brain situations, although only
in rare cases.

 It is not as good a data store as some other options like MongoDB,
Hadoop, etc. For smaller use cases, it will perform fine. If you are
streaming TBs of data every day, you may find that it chokes or loses data.

 Elasticsearch is far more powerful and flexible, but its learning curve
is much steeper.

INSTALLING ELK
The ELK Stack can be installed using a variety of methods
and on a wide array of different operating systems and
environments. ELK can be installed locally, on the cloud,
using Docker and configuration management systems like
Ansible, Puppet, and Chef. The stack can be installed using a
tarball or .zip packages or from repositories.

Many of the installation steps are similar from environment to
environment, and since we cannot cover all the different scenarios, we
will provide an example for installing all the components of the stack
(Elasticsearch, Logstash and Kibana) on Linux. Links to other
installation guides can be found below.

Elasticsearch
1. sudo apt update
2. sudo apt install -y openjdk-8-jdk wget apt-transport-https
3. wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
4. echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
5. sudo apt update
6. sudo apt install -y elasticsearch
7. sudo systemctl start elasticsearch
8. sudo systemctl enable elasticsearch
9. curl -X GET http://localhost:9200

Logstash installation
Note: if you want to install Logstash, make sure your system has Java 11 installed.

1. wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz
2. tar zxvf logstash-1.4.2.tar.gz
3. cd logstash-1.4.2
4. sudo service logstash restart

Kibana installation
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.3.1-linux-x86_64.tar.gz
shasum -a 512 kibana-7.3.1-linux-x86_64.tar.gz
tar xvzf kibana-7.3.1-linux-x86_64.tar.gz
cd kibana-7.3.1-linux-x86_64/
If you have a public IP set up, then make the change in the kibana.yml file in the config folder.
./bin/kibana

Index creation
curl -XPUT 'localhost:9200/sample_test'

For mapping new fields in the existing index:

curl -XPUT '<server-address>/<index name>/<index type>/_mappings?pretty' -H 'Content-Type: application/json' -d'
{"properties":{"<field name>":{"type":"<datatype>","format":"<if required>"}}}
'

Note: 1. The format for date fields should be defined like, e.g.: "format":"yyyy/MM/dd||yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis"
2. For string type fields, you don't need to do external mapping, as by default every field is a string type field.

For deleting an index:::

curl -XDELETE 'localhost:9200/sample_test'

For deleting particular data/document in index:::

curl -XDELETE 'localhost:9200/customerdata/details/87686'

List the indices in Elasticsearch

http://localhost:9200/_cat/indices

Get the index data

http://localhost:9200/sample_test/details/_search

To convert an SQL query to an Elasticsearch query, use this link:

https://sqltoelasticsearch.azurewebsites.net/

Get records from an index with a limit on size

http://localhost:9200/customerdata/details/_search?pretty&size=1000

Create a document in an index

curl -XPUT 'http://localhost:9200/customerdata/details/555/' -H 'Content-Type: application/json' -d'
{
  "iCustomerId": "555",
  "vFirstName": "Developer"
}
'

Update a document in an index

curl -XPOST 'http://localhost:9200/customerdata/details/555/_update?pretty' -H 'Content-Type: application/json' -d'
{
  "doc": {
    "iCustomerId": "55555",
    "vFirstName": "Test Developer"
  }
}
'

Get a particular document from an index

http://localhost:9200/sample_test/details/1

To store data into an index

// Index a document by sending JSON to /index/type/id (PHP cURL example)
$json_data = '{"Id":"2","firstname":"nani","lastname":"praveen"}';
$url = "http://localhost:9200/sample_test/details/2";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_POSTFIELDS, $json_data);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
$response = curl_exec($ch);
curl_close($ch);

To get data from an index

// Search an index via _search; the commented-out query filters by a specific field
//$json_data = '{"from":0,"size":1000,"query": { "match": { "iCustomerId": "88327" } }}'; // Query
$json_data = '{"from":0,"size":1000}';
$url = 'http://localhost:9200/customerdata/details/_search';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_POSTFIELDS, $json_data);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 1);
curl_setopt($ch, CURLOPT_NOSIGNAL, 1);
curl_setopt($ch, CURLOPT_FORBID_REUSE, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
$response = curl_exec($ch);
curl_close($ch);

Dump the data from one index to another index, or from an index on one server to an index on another server

elasticdump --input=http://localhost:6200/customerdata --output=http://localhost:9200/customerdata_copy --type=mapping

elasticdump --input=http://localhost:6200/customerdata --output=http://localhost:9200/customerdata_copy --type=data

For fetching more than 10000 values from an Elasticsearch index, run the below command:

curl -XPUT "<server-address>/<index name>/_settings" -H 'Content-Type: application/json' -d '{ "index" : { "max_result_window" : 500000 } }'

This is a setting for the index, which allows fetching more than 10000 records.

For fetching distinct values from Elasticsearch, an aggregation query is used. The format is:

{"size":0,"aggs" :{"<aggregation field>" : {"terms" : { "field" : "<required field>","size": 1000 }}}}

i) <aggregation field>: this can be any name you want to appear in the returned output.
ii) <required field>: this is the field name for which you require distinct values.
iii) "size":0 denotes that we only want the aggregation result for the required field, nothing else (no document hits).
iv) "size":1000 denotes that a maximum of 1000 distinct values of <required field> will appear in the output. By default, it will be 10.
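
A minimal sketch of running such a query (the index name, aggregation name and field name are illustrative, and the field must be aggregatable, e.g. a keyword field):

curl -XPOST 'localhost:9200/customerdata/_search?pretty' -H 'Content-Type: application/json' -d '{
  "size": 0,
  "aggs": {
    "distinct_customers": {
      "terms": { "field": "iCustomerId.keyword", "size": 1000 }
    }
  }
}'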

Fixing errors when starting Elasticsearch:

i) If you get the error: max number of threads [1024] for user [elastic] likely too low, increase to at least [2048]

Run the following command as the root user:
ulimit -u 2048

ii) If you get the error: max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]

Run the following command as the root user:
sudo sysctl -w vm.max_map_count=262144

iii) Check whether Elasticsearch is running:
ps aux | grep 'java'

iv) If Elasticsearch is running but is not reachable from the browser, run the following command:
service iptables stop

Then exit the root shell and start Elasticsearch as a non-root user:
./bin/elasticsearch -d

Dump data from a query into an index using Logstash (for use in Kibana)

./logstash --path.data /usr/share/logstash/bin/sample -f sample.config


To stop Elasticsearch:
1. Find the process with the below command:
Ex:
ps -ef | grep elasticsearch
2. Kill the process with the below command:
Ex: kill 1234
Here, 1234 is the process id.
Running Logstash

Logstash requires Java 8. Before downloading and installing Logstash, install Java 8.
No configuration of Logstash itself is required; you supply a pipeline config file when you run it.

1. Download Logstash in the same way as Elasticsearch.
2. Run Logstash from the command line:
Go to the path LOGSTASH_HOME/bin and run the below command:

./logstash --path.data /usr/share/logstash/bin/sample -f sample.config &
Here, '&' runs the process in the background.
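
The sample.config file referenced above is not shown in this document. As a rough sketch (the file path, grok pattern and index name are illustrative assumptions), a minimal Logstash pipeline config has an input, an optional filter, and an output:

# sample.config (illustrative)
input {
  file {
    path => "/var/log/app/*.log"        # log files to read
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }   # parse Apache-style access log lines
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "sample_test"              # target Elasticsearch index
  }
}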

Configuring Kibana:

1. Go to the path KIBANA_HOME/config
2. Open kibana.yml
3. Uncomment the Elasticsearch setting and change the host/port as shown below.
Ex:
elasticsearch.hosts: ["http://localhost:6200"]
Running Kibana:
1. Go to the path KIBANA_HOME
2. Run the below command.
Ex: ./bin/kibana &
3. To check whether Kibana is running, open a browser and go to the link below. The Kibana dashboard should open.
http://localhost:5601

Fix some common errors while starting Elasticsearch:

1. To set the number of open file handles or file descriptors (ulimit -n) to 65536. Limits usually need to be
set as root before switching to the user that will run Elasticsearch.
Ex:
sudo su
ulimit -n 65536
exit
./bin/elasticsearch -d

Check all currently applied limits with ulimit -a.

2. Out of memory exceptions. Limits usually need to be set as root.
Ex:
sudo su
sysctl -w vm.max_map_count=262144
exit
./bin/elasticsearch -d

3. To set the number of threads that the Elasticsearch user can create.
Ex:
sudo su
ulimit -u 2048
exit
./bin/elasticsearch -d
