In an earlier post, How to Build an Autocomplete Feature with Elasticsearch, we showed how to build a basic autocomplete that searches across all documents in the index. That approach works well for generic autocompletion, but it falls short if your index spans many product categories, for example. Therefore, in this post we'll explore context-based autocompletion, which lets you filter suggestions intelligently by category or geo location. Let's get started!
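
For a quick taste of what context-based completion looks like, here is a minimal sketch of a context-enabled completion field and a category-filtered suggest query. The `products` index, `suggest` field, `category` context, and `electronics` value are hypothetical names used only for illustration:

```json
PUT products
{
  "mappings": {
    "_doc": {
      "properties": {
        "suggest": {
          "type": "completion",
          "contexts": [
            { "name": "category", "type": "category" }
          ]
        }
      }
    }
  }
}

POST products/_search
{
  "suggest": {
    "product_suggestion": {
      "prefix": "lap",
      "completion": {
        "field": "suggest",
        "contexts": {
          "category": [ "electronics" ]
        }
      }
    }
  }
}
```

With a mapping like this, suggestions are returned only for documents indexed under the requested category, rather than for every document in the index.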

Keep reading

We are excited to announce that Elasticsearch 6.3.2 is now available for cluster provisioning on Qbox.io. This is a continuation of our efforts to integrate the latest Elasticsearch versions into Qbox offerings. (We previously made Elasticsearch 6.2.1 clusters available on our platform on March 28, 2018.)

Keep reading

Container environments are highly dynamic: new containers are stopped and started all the time as workloads scale, get rescheduled to new nodes, or are updated. Container monitoring solutions should therefore be just as flexible. For example, we should be able to respond to container start/stop events by launching or stopping the corresponding monitoring services and modules, dynamically adapting to the changing state of the container environment.

Starting with version 6.1, Metricbeat supports Autodiscover, a feature that watches the Docker and Kubernetes APIs and reacts to container start and stop events. For example, if a new Apache HTTP server container is launched, Autodiscover automatically enables the Apache module with the specified metricsets and channels Apache events to the configured output. Without this feature, we would have to enable all Filebeat or Metricbeat modules manually before running the shipper, or change the configuration whenever a container starts or stops. Autodiscover solves this problem well. Let's see how to set it up with Metricbeat and send Docker container metrics directly to Elasticsearch.
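
To make this more concrete, here is a minimal sketch of what a Docker autodiscover section in metricbeat.yml might look like. The image name, metricsets, and Elasticsearch host are illustrative assumptions, not a definitive configuration:

```yaml
# metricbeat.yml -- minimal autodiscover sketch (values are illustrative)
metricbeat.autodiscover:
  providers:
    - type: docker                  # watch the Docker API for container events
      templates:
        - condition:
            contains:
              docker.container.image: apache   # match containers running an Apache image
          config:
            - module: apache
              metricsets: ["status"]
              hosts: "${data.host}:${data.port}"  # filled in from the container's metadata

output.elasticsearch:
  hosts: ["localhost:9200"]         # assumed local Elasticsearch endpoint
```

When a matching container starts, Metricbeat launches the Apache module against it; when the container stops, the module is stopped as well.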

Keep reading

We cannot over-emphasize that the ELK stack is a great solution for shipping, searching, and analyzing logs, system metrics, statistics, and other insight-driven data. You can use its components, such as Kibana, to monitor what is happening in your clusters, hosts, and applications, gaining instant insights to guide your business decisions.

However, what options do we have for monitoring Elasticsearch itself? To make Elasticsearch serve requests fast and keep the cluster healthy, we need a good monitoring solution that helps identify issues as they arise. Fortunately, there are many free monitoring tools available for Elasticsearch, including Elasticsearch Kopf, BigDesk, and Whatson.

In this article, we'll review one of the best web-based monitoring tools for Elasticsearch -- ElasticHQ. This plugin has been chosen as the built-in monitoring solution by Qbox for its hosted Elasticsearch 6.2.1 clusters.

Keep reading

Logstash ships with many input, codec, filter, and output plugins that can be used to retrieve, transform, filter, and send logs and events from various applications, servers, and network channels. 

In previous tutorials, we discussed how to use Logstash to ship Redis logs, how to index emails using the Logstash IMAP input plugin, and many other use cases.

In this article, we continue our journey into the rich world of Logstash input plugins, focusing on the Beats family (e.g., Filebeat and Metricbeat), various file and system input plugins, network, email, and chat protocols, cloud platforms, web applications, and message brokers/platforms. Logstash currently supports over 50 input plugins -- and more are coming -- so covering all of them in one article is not possible. Instead, we overview some of the most popular input plugin categories to give you a general picture of what you can do with Logstash.
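
As a small preview, here is a minimal sketch of a Logstash pipeline that uses the beats input plugin to receive events from Filebeat or Metricbeat and forward them to Elasticsearch. The port, host, and index pattern are illustrative assumptions:

```
# logstash.conf -- minimal beats-to-elasticsearch pipeline (values are illustrative)
input {
  beats {
    port => 5044                    # port Filebeat/Metricbeat ships events to
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]                       # assumed local Elasticsearch endpoint
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"    # one index per beat per day
  }
}
```

Swapping the beats block for a file, tcp, imap, or kafka input is all it takes to pull events from a different source.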

Keep reading

So you have moved all your applications to Docker and have begun enjoying all the fruits of lightweight and fast-to-deploy containers. 

That's great, but once you have multiple containers spread across multiple nodes, you'll need to find a way to track their health, storage, CPU, and memory usage, network load, etc. 

To track these metrics, you need an efficient monitoring solution and some backend store to keep your container data for subsequent analysis and processing. Managing thousands of Docker containers in production made our team here at Qbox quickly realize that Docker container monitoring is a valuable addition to our cluster management process. 

In a previous article, we discussed how to use Metricbeat to ship metrics from Kubernetes. Now it's time to share our experience of using Metricbeat to monitor bare Docker containers and ship container data to Elasticsearch and Kibana. This knowledge may be useful for developers and administrators who manage Docker containers without orchestration. Let's get started!
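
For a sense of what this involves, here is a minimal sketch of a metricbeat.yml that enables the Docker module and sends metrics to Elasticsearch. The metricsets, socket path, collection period, and host are illustrative assumptions:

```yaml
# metricbeat.yml -- minimal Docker monitoring sketch (values are illustrative)
metricbeat.modules:
  - module: docker
    metricsets: ["container", "cpu", "memory", "network", "diskio"]
    hosts: ["unix:///var/run/docker.sock"]   # default Docker socket on the host
    period: 10s                              # how often to collect metrics

output.elasticsearch:
  hosts: ["localhost:9200"]                  # assumed local Elasticsearch endpoint
```

Once the metrics are flowing, Kibana's visualizations can be built on top of the resulting indices to track container health, CPU, memory, network, and disk I/O over time.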

Keep reading