Note: Most of these blogs are for my personal reference and at a given time, some of those might just be unpolished drafts.
What is ELK stack?
Instead of writing about what exactly ELK is, let me state the need and use cases for it.
Log aggregation and efficient searching
In a very naive scenario you have one server and lots of log messages generated by your application and system, which are crucial to look at when something goes wrong. There are basically two problems with this. First, manually digging through a log file is an anachronism: we built all this software to automate things, yet in the end we go through a log file line by line. And what are our search criteria? We can definitely leverage some automation/programming to analyze logs against larger and more complex criteria than simply grepping or vimming a file.
The second problem is scale. We don't have a single server anymore; we probably have tens or hundreds of VMs running behind a load balancer. We don't know which server processed a given request, and we are definitely not going to check all the logs one by one. This is where ELK comes in.
We treat every log message as an event and stream them all into a single store, ordered by timestamp. This channeling of logs/messages/text is done by Logstash (https://www.elastic.co/products/logstash), the L of ELK. Along the way, messages are preprocessed (parsed, filtered, enriched) based on various conditions. They are then fed into an Elasticsearch cluster (the E of ELK), which is a glorified wrapper around Apache Lucene. Elasticsearch mainly builds something called an 'inverted index' (sometimes loosely called reverse indexing): every message is stored as a document and indexed by the words and phrases it contains. Kibana (the K of ELK) acts as the front-end UI for the whole stack, providing an interface where you can query for messages using a dedicated query language, generate charts/visualizations, and so on.
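As a rough sketch, a minimal Logstash pipeline for this kind of setup might look like the following. The log path, grok pattern, Elasticsearch host, and index name are all assumptions for illustration, not a drop-in config:

```
# Minimal Logstash pipeline sketch (paths, pattern, host, and index are assumptions)
input {
  file {
    path => "/var/log/myJavaApp/*.log"    # hypothetical application log path
    start_position => "beginning"
  }
}

filter {
  grok {
    # parse a timestamp, log level, and message out of each line
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
  date {
    match => ["ts", "ISO8601"]            # use the parsed timestamp as the event time
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]           # assumed local Elasticsearch node
    index => "myjavaapp-%{+YYYY.MM.dd}"   # one index per day
  }
}
```

The filter block is where the preprocessing mentioned above happens; grok turns an unstructured log line into named fields that Elasticsearch can index individually.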
If you are running a Java app called myJavaApp and want to quickly see what exceptions have occurred in the last 15 minutes, you can open a Kibana dashboard and fire up a query like:

product:myJavaApp AND msg:'Exception' (assuming those are the fields you have indexed)
This will quickly load all the documents indexed under the keyword 'Exception'. From there you can write increasingly complex queries.
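Under the hood, Kibana turns such a query into a JSON search request against Elasticsearch's REST API. Here is a hedged sketch in Python of what an equivalent request body could look like; the field names and the 15-minute window mirror the example above, and the index/host in the comment are assumptions:

```python
import json

# Build the JSON body for an Elasticsearch query_string search,
# roughly equivalent to the Kibana query above, limited to recent events.
def build_search_body(query, minutes=15):
    return {
        "query": {
            "bool": {
                "must": [
                    {"query_string": {"query": query}},
                    {"range": {"@timestamp": {"gte": f"now-{minutes}m"}}},
                ]
            }
        }
    }

body = build_search_body("product:myJavaApp AND msg:'Exception'")
print(json.dumps(body, indent=2))
# This body would be POSTed to something like http://localhost:9200/myjavaapp-*/_search
```

The point is that the friendly Kibana query bar and the raw REST API are two views of the same search.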
Although log aggregation is the major use case for the ELK stack, it can also be used as a framework for generic text search wherever you can leverage an inverted index, such as searching through web pages. You can also set up a local ELK stack on your system and have your syslogs and /var/log files analyzed for you.
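To make the inverted-index idea concrete, here is a toy sketch (nothing like Lucene's real implementation) that maps each word to the set of documents containing it, so a keyword lookup becomes a dictionary access instead of a scan over every log line:

```python
from collections import defaultdict

# Toy inverted index: word -> set of document ids containing that word.
def build_inverted_index(docs):
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

# Hypothetical log lines keyed by document id.
logs = {
    1: "INFO request served",
    2: "ERROR NullPointerException in handler",
    3: "INFO request served",
}

index = build_inverted_index(logs)
# Lookup is now O(1) per word, not a scan over all documents.
print(sorted(index["request"]))  # → [1, 3]
```

Real engines add tokenization, stemming, ranking, and compressed on-disk structures on top, but the core trade (index once, query fast) is the same.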
Originally answered at Quora