In this guide we explore the refresh and flush operations in Elasticsearch and clarify the differences between the two. We also cover the underlying Lucene functionality, namely reopen and commit, which helps in understanding the refresh and flush operations.
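Both operations can also be triggered manually through the REST API. As a minimal sketch (the index name `my_index` is only an example), a refresh makes recently indexed documents searchable, while a flush commits the in-memory segments to disk:

```
POST /my_index/_refresh

POST /my_index/_flush
```

In practice these are rarely called by hand; Elasticsearch schedules refreshes (by default every second) and flushes automatically based on translog size and age.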

Keep reading

In this tutorial we cover a few commonly occurring shard management issues in Elasticsearch, their solutions, and a few best practices. For some use cases, we employ special techniques to get things done.

Keep reading

Slow Logs in Elasticsearch

Posted by Vineeth Mohan January 16, 2018

In this blog post we explore slow logs in Elasticsearch, which are immensely helpful in both production and debugging environments. We show how the slow logs generated by Elasticsearch can act as a critical source of information about numerous factors in a cluster.
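Slow logging is configured per index through threshold settings. A minimal sketch, with illustrative threshold values and a hypothetical index name, might look like this:

```
PUT /my_index/_settings
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.query.info": "5s",
  "index.indexing.slowlog.threshold.index.warn": "10s"
}
```

Queries or indexing operations that exceed a threshold are then written to the corresponding slow log at that level, which makes it easy to spot expensive operations without tracing every request.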

Keep reading

The phrase suggester is an advanced version of the term suggester. The additional functionality it offers is the selection of entire corrected phrases instead of individual words. It is based on ngram language modeling, so the phrase suggester can make better choices of tokens based on both frequency and co-occurrence.

In this tutorial, we show you how to use the phrase suggester to correct spellings in phrases, which provides the "did you mean" search functionality in Elasticsearch.
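As a minimal sketch of the request shape (the index name `my_index` and field `title` are only examples), a phrase suggestion is requested under the `suggest` key of a search body:

```
POST /my_index/_search
{
  "suggest": {
    "did_you_mean": {
      "text": "elasticsaerch qeury",
      "phrase": {
        "field": "title",
        "size": 1
      }
    }
  }
}
```

The response contains candidate corrections for the whole input phrase, ranked by the language model, which the application can present as a "did you mean" hint.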

Keep reading

In a previous post, How to Build an Autocomplete Feature with Elasticsearch, we showed how to build a simple autosuggest in Elasticsearch. In this post, we explore context-based autosuggest and show how to implement it.
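Context-based suggestions rely on the completion suggester's context mappings. A minimal sketch, assuming a hypothetical index with a category context named `place_type` (exact mapping syntax varies between Elasticsearch versions):

```
PUT /my_index
{
  "mappings": {
    "properties": {
      "suggest": {
        "type": "completion",
        "contexts": [
          { "name": "place_type", "type": "category" }
        ]
      }
    }
  }
}
```

At query time, the suggest request can then pass one or more `contexts` values so that only suggestions indexed under those categories are returned.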

Keep reading

In the previous tutorial, we learned how to set up a Qbox cluster with the ES-Hadoop connector to interface with Hive, Hadoop's data warehouse component, and perform SQL queries on top of Elasticsearch. Offloading and manipulating ES indices with Hive opens up a multitude of possibilities for high-performing, deeper analysis across large data sets.

In this tutorial we take it a step further, using Logstash to import an existing data set, in the form of a CSV file, into Elasticsearch so that we can later perform batch analytics in Hadoop's powerful ecosystem.
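A minimal Logstash pipeline sketch for such an import (the file path, column names, and index name are hypothetical placeholders) reads the CSV with the file input, parses rows with the csv filter, and writes documents to Elasticsearch:

```
input {
  file {
    path => "/path/to/data.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    columns => ["id", "name", "value"]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "csv_data"
  }
}
```

Once the data lands in an index, the ES-Hadoop connector set up in the previous tutorial can expose it to Hive for batch queries.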

Keep reading