Filebeat is extremely lightweight compared to its predecessor, Logstash Forwarder, when it comes to efficiently sending log events. It uses the Lumberjack protocol with compression, and it is easy to configure using a single YAML file. It can send events directly to Elasticsearch as well as to Logstash, and it keeps track of each file and its read position so that it can resume where it left off.
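As a minimal sketch of that YAML configuration (Filebeat 5.x syntax; the paths and the Elasticsearch host are placeholders you would adjust for your environment):

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/syslog
      - /var/log/auth.log

# Ship events directly to Elasticsearch; swap this for
# output.logstash if you want Logstash in the middle.
output.elasticsearch:
  hosts: ["localhost:9200"]
```

Filebeat records its read offsets in a registry file, which is what lets it resume after a restart without re-sending old events.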

The goal of this tutorial is to set up a proper environment to ship Linux system logs to Elasticsearch with Filebeat, and then to share helpful tips for making good use of that environment in Kibana.

Keep reading

A common question with any product is: how can we get metrics from it? How can we monitor it? Elasticsearch, since its early releases, has always provided ways to monitor it through its _cat and stats APIs. For Logstash, however, there was no way to gather metrics and monitor it until recently. With the release of Logstash 5.0+, a set of monitoring APIs was introduced. In this article we explore the monitoring APIs exposed by Logstash, which include the Node Info API, the Plugins API, the Node Stats API, and the Hot Threads API.
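To give a flavor of what these APIs return, here is a small sketch that summarizes a Node Stats payload. The JSON excerpt is a hypothetical response (the real one, from `GET http://localhost:9600/_node/stats` on a default Logstash 5.x install, is much larger), and the field names shown follow the 5.x documentation:

```python
import json

# Hypothetical excerpt of a Logstash 5.x Node Stats response;
# a real call would be: GET http://localhost:9600/_node/stats
sample = json.loads("""
{
  "jvm": {
    "mem": {"heap_used_percent": 17},
    "uptime_in_millis": 362061
  },
  "process": {"open_file_descriptors": 83, "cpu": {"percent": 2}}
}
""")

def summarize(stats):
    """Pull a few health indicators out of a Node Stats payload."""
    return {
        "heap_used_percent": stats["jvm"]["mem"]["heap_used_percent"],
        "uptime_s": stats["jvm"]["uptime_in_millis"] // 1000,
        "open_fds": stats["process"]["open_file_descriptors"],
    }

print(summarize(sample))
```

The other endpoints follow the same pattern: `GET /_node` for node info, `GET /_node/plugins` for installed plugins, and `GET /_node/hot_threads` for hot threads.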

Keep reading

Parsing Logs Using Logstash

Posted by Vineeth Mohan March 17, 2016

In this tutorial series we are going to utilize the ELK (Elasticsearch-Logstash-Kibana) stack to parse, index, visualize, and analyze logs. Nearly every process on a server or in an application writes to a log file. These log files act as a critical source of information, helping us accomplish numerous things ranging from troubleshooting to anomaly detection.

To analyze logs, one should first parse them into smaller components with appropriate fields and values, then index those components in a database and run the required analysis. One of the most reliable and scalable stacks for this purpose is the ELK stack: Logstash parses the logs and splits them into proper individual documents, those documents are indexed into the powerful text analytics engine Elasticsearch, and finally they are explored in the visualization tool Kibana.

In this edition of the ELK blog series we are going to see the setup, configuration, and a basic example of how to parse and index logs using Logstash.
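To preview the shape of such a setup, here is a minimal Logstash pipeline sketch that tails a syslog file, parses each line with the built-in `SYSLOGLINE` grok pattern, and indexes the result into a daily Elasticsearch index (the file path, host, and index name are placeholders):

```conf
input {
  file {
    path => "/var/log/syslog"
    start_position => "beginning"
  }
}

filter {
  # SYSLOGLINE is one of Logstash's stock grok patterns;
  # it extracts timestamp, host, program, and message fields.
  grok {
    match => { "message" => "%{SYSLOGLINE}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "syslog-%{+YYYY.MM.dd}"
  }
}
```

The grok filter is where most of the parsing work happens; the article below walks through it in detail.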

Keep reading

The adoption of Log4j overshadows all other Java logging frameworks. With Log4j 2, Apache gave us a next-generation asynchronous logger based on the famous LMAX Disruptor library. Yes, we scale!

Therefore, it’s a pity that there is currently no official Logstash 2.x plugin for Log4j 2. Unofficially? There is, but unless you’re a Ruby expert, compiling and installing it correctly takes considerable effort. In this article we did that for you, and we present a small demo on docker-compose.
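As an aside, a plugin-free alternative (not the approach the article takes, but a common workaround) is to have Log4j 2 write JSON over a plain TCP socket and point Logstash's stock `tcp` input at it. A sketch of the Log4j 2 side, with host and port as placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <!-- Send each event as a single JSON line over TCP -->
    <Socket name="Logstash" host="localhost" port="4560" protocol="TCP">
      <JsonLayout compact="true" eventEol="true"/>
    </Socket>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Logstash"/>
    </Root>
  </Loggers>
</Configuration>
```

On the Logstash side, a `tcp` input on port 4560 with the `json_lines` codec would decode the same events without any Log4j2-specific plugin.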

Keep reading

The script fields feature in Elasticsearch gives users the ability to return a script evaluation for each hit, according to the values taken from different fields. Script fields can work on temporary fields that won’t be stored, and they can return the final evaluation of the script as a custom value. Script fields can also access a source document from the index and extract specific elements.

This article provides a short tutorial on the use of script fields, and we also look at the basics of Elasticsearch logging during script execution.
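As a taste of the feature, here is a sketch of a search request using `script_fields` (Elasticsearch 5.x Painless syntax; the index is assumed to have a numeric `price` field, and the 10% markup is purely illustrative):

```json
{
  "query": { "match_all": {} },
  "script_fields": {
    "price_with_tax": {
      "script": {
        "lang": "painless",
        "inline": "doc['price'].value * 1.1"
      }
    }
  }
}
```

Each hit then carries a computed `price_with_tax` value in its `fields` section; nothing is stored back into the index.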

Keep reading