Slow Logs in Elasticsearch

Posted by Vineeth Mohan January 16, 2018

In this blog post we explore slow logs in elasticsearch, which are immensely helpful in both production and debugging environments. We show how the slow logs generated by elasticsearch can act as a critical source of information about numerous aspects of an elasticsearch deployment.

Slow Logs

Logging is an integral part of any application. When it comes to a distributed and sharded solution like elasticsearch, which has to process large volumes of requests, logging becomes unavoidable and its significance paramount.

Slow logs, as the name suggests, are used to log slow requests, whether search requests or index requests. We can set a slowness threshold so that only the requests slower than that threshold are logged.
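For instance, a production cluster would typically use thresholds that reflect what "slow" means for the application, rather than logging everything. A minimal sketch, assuming an existing index named "my-index" (the index name and the 10s/5s/2s/500ms values are purely illustrative, not recommendations):

```shell
# Illustrative per-index slowness thresholds; tune the values to
# whatever "slow" means for your own workload.
curl -XPUT 'localhost:9200/my-index/_settings' -H 'Content-Type: application/json' -d '{
 "index.search.slowlog.threshold.query.warn": "10s",
 "index.search.slowlog.threshold.query.info": "5s",
 "index.search.slowlog.threshold.query.debug": "2s",
 "index.search.slowlog.threshold.query.trace": "500ms"
}'
```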

In the case of elasticsearch, slow logs are important mainly because:

  1. They help us determine whether the applications that are supposed to talk to elasticsearch are actually doing so.
  2. We can examine the queries hitting elasticsearch and validate them.
  3. They provide crucial information about index/cluster health, which helps us maintain it.

Now we cover the main types of slow logs available, as well as their use cases.

Types of Slow Logs

There are two main kinds of slow logs in elasticsearch:

Search Slow Log

Search slow logs are used to log the searches which are slow. The slowness threshold depends on the application and the details of its elasticsearch implementation; each application can have different thresholds.

Search in elasticsearch happens in two phases:

  1. Query phase - In the query phase, the document ids of the matching results are collected. That means after this phase completes, only the ids of the documents matched by the search are available; no other information, such as the fields or their values, has been fetched yet.
  2. Fetch phase - In the fetch phase, the document ids from the query phase are used to fetch the actual documents, and with this the search request is complete.

The search slow logs show the split times for both the query and the fetch phases, along with the query itself. Thus, we get a complete picture of the time taken for the query phase, the time taken for the fetch phase, and can inspect the entire query.

Index Slow Logs

The index slow log is used to log the indexing process. When documents are indexed in elasticsearch, the index slow log keeps a record of the requests that took longer than the configured threshold. Here, too, the thresholds can be adjusted in the settings for the index.

By default, when enabled, elasticsearch logs the first 1000 characters of the document's "_source" to the log file. This can be turned off, or set to capture the entire document, depending on how we configure the "index.indexing.slowlog.source" setting.
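A quick sketch of adjusting that setting, assuming an existing index named "my-index" (the index name is illustrative): "false" suppresses the source entirely, "true" logs the whole document, and a number limits logging to that many characters.

```shell
# Control how much of the document _source appears in the index slow log.
# "false" logs no source, "true" logs the entire document, and a number
# such as "500" logs that many characters.
curl -XPUT 'localhost:9200/my-index/_settings' -H 'Content-Type: application/json' -d '{
 "index.indexing.slowlog.source": "true"
}'
```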

In the next sections, let us see how to configure the slow logs and examine the resulting log entries in each case.

Index Slow Logging Setup

Create a test index for which the index slow logs are to be configured.

curl -XPUT 'localhost:9200/testindex-slowlogs'

Now configure the index slow log for the index "testindex-slowlogs" as below:

curl -XPUT 'localhost:9200/testindex-slowlogs/_settings' -H 'Content-Type: application/json' -d '{
 "index.indexing.slowlog.threshold.index.warn": "0ms",
 "index.indexing.slowlog.threshold.index.info": "0ms",
 "index.indexing.slowlog.threshold.index.debug": "0ms",
 "index.indexing.slowlog.threshold.index.trace": "0ms",
 "index.indexing.slowlog.level": "trace",
 "index.indexing.slowlog.source": "1000"
}'

The response of the above request is shown below:

{
 "acknowledged": true
}

In the above settings, all the threshold values are set to 0ms so that every index request is logged.
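We can confirm that the settings were applied by reading them back from the index:

```shell
# Read back the index settings to verify the slow log configuration.
curl -XGET 'localhost:9200/testindex-slowlogs/_settings?pretty'
```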

Put a sample document into this index like below:

curl -XPUT 'localhost:9200/testindex-slowlogs/product/105' -H 'Content-Type: application/json' -d '{
 "price": 9925,
 "name": "Nariko"
}'
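To generate several log entries at once, multiple documents can be indexed in one request with the _bulk API. A sketch (the ids and field values below are made up for illustration):

```shell
# Index multiple sample documents in one _bulk request; each indexing
# operation that crosses a threshold produces its own slow log entry.
curl -XPOST 'localhost:9200/testindex-slowlogs/product/_bulk' -H 'Content-Type: application/json' -d '
{"index":{"_id":"106"}}
{"price":4500,"name":"Asha"}
{"index":{"_id":"107"}}
{"price":120,"name":"Ravi"}
'
```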

Now, under the logs folder of elasticsearch, open the file "elasticsearch_index_indexing_slowlog.log". We can see a log like the one below:

[2017-09-10T12:07:26,683][WARN ][index.indexing.slowlog.index] [GOgO9TD] [testindex-slowlogs/yNbyYk1ARSW_hd0YRh6J0A] took[142.3micros], took_millis[0], type[product], id[105], routing[] , source[{"price":9925,"name":"Nariko"}]

In the above log, we can see the following information:

  • the timestamp
  • the log level
  • the log type
  • the node name
  • the index name
  • the time taken, in microseconds and milliseconds
  • the index type
  • the document id
  • the "_source" field

Using these, we can understand how the indexing operation went, and can detect and debug any anomalies present.

Search Slow Logging

Similar to index slow logs, search slow logs are applied to a specific index. An example configuration that logs all search requests is given below:

curl -XPUT 'localhost:9200/testindex-slowlogs/_settings' -H 'Content-Type: application/json' -d '{
"index.search.slowlog.threshold.query.warn": "0ms",
"index.search.slowlog.threshold.query.info": "0ms",
"index.search.slowlog.threshold.query.debug": "0ms",
"index.search.slowlog.threshold.query.trace": "0ms",
"index.search.slowlog.threshold.fetch.warn": "0ms",
"index.search.slowlog.threshold.fetch.info": "0ms",
"index.search.slowlog.threshold.fetch.debug": "0ms",
"index.search.slowlog.threshold.fetch.trace": "0ms"
}'

Since the above settings have 0ms as the value for all the parameters, every search request will be logged. The log file is located in the logs directory under the name "elasticsearch_index_search_slowlog.log".

Run the following query in the index:

curl -XPOST 'localhost:9200/testindex-slowlogs/_search?pretty' -H 'Content-Type: application/json' -d '{
 "sort": [
   {
     "price": {
       "order": "desc"
     }
   }
 ],
 "query": {
   "match": {
     "name": "Nariko"
   }
 }
}'

After this query executes successfully, open the search slow log file mentioned above. You will find logs like the ones below:

[2017-09-10T12:35:53,352][WARN ][index.search.slowlog.query] [GOgO9TD] [testindex-slowlogs][1] took[197.6micros], took_millis[0], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{"query":{"match":{"name":{"query":"Nariko","operator":"OR","prefix_length":0,"max_expansions":50,"fuzzy_transpositions":true,"lenient":false,"zero_terms_query":"NONE","boost":1.0}}},"sort":[{"price":{"order":"desc"}}]}],
[2017-09-10T12:35:53,352][WARN ][index.search.slowlog.query] [GOgO9TD] [testindex-slowlogs][3] took[279.3micros], took_millis[0], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{"query":{"match":{"name":{"query":"Nariko","operator":"OR","prefix_length":0,"max_expansions":50,"fuzzy_transpositions":true,"lenient":false,"zero_terms_query":"NONE","boost":1.0}}},"sort":[{"price":{"order":"desc"}}]}],
[2017-09-10T12:35:53,352][WARN ][index.search.slowlog.query] [GOgO9TD] [testindex-slowlogs][2] took[1ms], took_millis[1], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{"query":{"match":{"name":{"query":"Nariko","operator":"OR","prefix_length":0,"max_expansions":50,"fuzzy_transpositions":true,"lenient":false,"zero_terms_query":"NONE","boost":1.0}}},"sort":[{"price":{"order":"desc"}}]}],
[2017-09-10T12:35:53,352][WARN ][index.search.slowlog.query] [GOgO9TD] [testindex-slowlogs][4] took[370.8micros], took_millis[0], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{"query":{"match":{"name":{"query":"Nariko","operator":"OR","prefix_length":0,"max_expansions":50,"fuzzy_transpositions":true,"lenient":false,"zero_terms_query":"NONE","boost":1.0}}},"sort":[{"price":{"order":"desc"}}]}],
[2017-09-10T12:35:53,355][WARN ][index.search.slowlog.fetch] [GOgO9TD] [testindex-slowlogs][0] took[150.6micros], took_millis[0], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{"query":{"match":{"name":{"query":"Nariko","operator":"OR","prefix_length":0,"max_expansions":50,"fuzzy_transpositions":true,"lenient":false,"zero_terms_query":"NONE","boost":1.0}}},"sort":[{"price":{"order":"desc"}}]}]

There are five log entries generated for the single search above. This is because the search is executed shard by shard, and by default an index is created with 5 shards. When we executed the search, the request was passed to each of the five shards, and each shard's work was logged.

In these logs we can see details like the search type, the node, and the shard number, along with the detailed query.
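Since logging everything at 0ms is only practical for debugging, the search slow log can be switched off again once we are done. Setting a threshold to -1 disables it; the sketch below resets the query thresholds, and the fetch and indexing thresholds can be reset the same way:

```shell
# Disable search slow logging on the index by resetting thresholds to -1.
curl -XPUT 'localhost:9200/testindex-slowlogs/_settings' -H 'Content-Type: application/json' -d '{
 "index.search.slowlog.threshold.query.warn": "-1",
 "index.search.slowlog.threshold.query.info": "-1",
 "index.search.slowlog.threshold.query.debug": "-1",
 "index.search.slowlog.threshold.query.trace": "-1"
}'
```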

Conclusion

In this tutorial we explored the importance of elasticsearch slow logs. We covered both the index and search slow logs, along with configuration examples and sample log entries.

Give It a Whirl!

It's easy to spin up a standard hosted Elasticsearch cluster on any of our 47 Rackspace, Softlayer, or Amazon data centers. And you can now provision your own AWS Credits on Qbox Private Hosted Elasticsearch.

Questions? Drop us a note, and we'll get you a prompt response.

Not yet enjoying the benefits of a hosted ELK-stack enterprise search on Qbox? We invite you to create an account today and discover how easy it is to manage and scale your Elasticsearch environment in our cloud hosting service.