Redis, the popular open source in-memory data store, can also be used as a persistent on-disk database. It supports a variety of data structures such as lists, sets, sorted sets (with range queries), strings, geospatial indexes (with radius queries), bitmaps, hashes, and HyperLogLogs. The in-memory store is used to solve problems in areas such as real-time messaging, caching, and statistics calculation.

Provisioning an Elasticsearch cluster in Qbox is easy. In this article, we walk you through the initial steps to start and configure your cluster. We then set up and configure Logstash to ship Redis performance logs to Elasticsearch, where they can be visualized and analyzed via Kibana dashboards.

Our Goal

The goal of the tutorial is to use Qbox as a Centralized Logging and Monitoring solution. Qbox provides a turnkey solution for Elasticsearch, Kibana and many of Elasticsearch analysis and monitoring plugins. We will set up Logstash in a separate node or machine to gather Redis logs from single or multiple servers, and use Qbox’s provisioned Kibana to visualize the gathered logs.

Our ELK stack setup has three main components:

  • Elasticsearch: Used to store all of the application and monitoring logs (provisioned by Qbox).

  • Logstash: The server component that processes incoming logs and feeds them to Elasticsearch.

  • Kibana: A web interface for searching and visualizing logs (Provisioned by Qbox).

For this post, we will be using hosted Elasticsearch on Qbox. You can sign up or launch your cluster here, or click "Get Started" in the header navigation. If you need help setting up, refer to "Provisioning a Qbox Elasticsearch Cluster."


The amount of CPU, RAM, and storage that your Elasticsearch Server will require depends on the volume of logs that you intend to gather. For this tutorial, we will be using a Qbox provisioned Elasticsearch with the following minimum specs:

  • Provider: AWS

  • Version: 5.1.1

  • RAM: 1GB

  • CPU: 1 vCPU

  • Replicas: 0

The above specs can be changed to suit your requirements. Please select the appropriate names, versions, and regions for your needs. For this example, we used Elasticsearch version 5.1.1; the most current version is 5.3. We support all versions of Elasticsearch on Qbox. (To learn more about the major differences between 2.x and 5.x, click here.)

In addition to our Elasticsearch server, we will require a separate Logstash server to process incoming Redis logs from client servers and ship them to Elasticsearch. There can be one or more client servers for which you wish to ship logs to Elasticsearch. For simplicity or testing purposes, the Logstash server can also act as the client server itself. The Endpoint and Transport addresses for our Qbox-provisioned Elasticsearch cluster are as follows:


Endpoint: REST API


  • Username = ec18487808b6908009d3

  • Password = efcec6a1e0


Note: Please make sure to whitelist the Logstash server IP in the Qbox Elasticsearch cluster. Also, the Logstash server must have access to all client servers from which Redis logs are to be collected.

Redis : Set Up Redis In-memory Data Store

There are a couple of prerequisites that need to be installed to make the installation as easy as possible. Start off by updating the apt-get package index:

sudo apt-get update

Once the process finishes, install the build-essential package, which provides the compiler we need to build Redis from source:

sudo apt-get install build-essential

Finally, we need to install tcl:

sudo apt-get install tcl8.5

Installing Redis

Download the latest stable release tarball from the Redis downloads page.


Untar it and switch into that directory:

tar xzf redis-stable.tar.gz
cd redis-stable

Proceed with the make command, then run the recommended make test:

make
make test

Finish up by running make install, which installs the program system-wide.

sudo make install

Once the program has been installed, you can use a built-in script that sets up Redis to run as a background daemon.

To access the script, move into the utils directory and run the Ubuntu/Debian install script:

cd utils
sudo ./

As the script runs, you can choose the default options by pressing Enter. Once the script completes, the redis-server will be running in the background. You can start and stop Redis with these commands (the number depends on the port you set during installation; 6379 is the default port):

sudo service redis_6379 start
sudo service redis_6379 stop

You can then access the Redis database by typing the redis-cli command:

redis-cli

You now have Redis installed and running. The prompt will look like this:

To set Redis to automatically start at boot, run:

sudo update-rc.d redis_6379 defaults

Redis provides all available metrics through the redis-cli info command. So, if you execute redis-cli info in your terminal, the output should look like this:

➜  ~ redis-cli info
# Server
os:Darwin 15.5.0 x86_64
# Clients
# Memory
# Persistence
# Stats
# Replication
# Cluster
# Keyspace
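Each section of the INFO output is a run of key:value lines under a # Section header, which makes it easy to consume programmatically. As a rough illustration (independent of the Logstash setup below), here is a minimal Python sketch that parses that format; the sample text is fabricated for the example, not captured from a live server:

```python
# Fabricated sample of the `redis-cli info` key:value format (not real output).
SAMPLE = """# Memory
used_memory:1024000
# Stats
total_commands_processed:42
"""

def parse_info(text):
    """Return a dict of metric name -> raw string value, skipping section headers."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blank lines and "# Section" headers carry no metrics
        key, _, value = line.partition(":")
        metrics[key] = value
    return metrics

print(parse_info(SAMPLE)["used_memory"])  # → 1024000
```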

Install Logstash

Download and install the Public Signing Key:

wget -qO - | sudo apt-key add -

We will use Logstash version 2.4.x, which is compatible with our Elasticsearch version 5.1.x. Refer to the Elastic Community Product Support Matrix to resolve any version compatibility questions.

Add the repository definition to your /etc/apt/sources.list file:

echo "deb stable main" | sudo tee -a /etc/apt/sources.list

Run sudo apt-get update and the repository is ready for use. You can install it with:

sudo apt-get update && sudo apt-get install logstash

Alternatively, the Logstash tarball can be downloaded from the Elastic Product Releases site. The steps to set up and run Logstash are then fairly simple:

  • Download and unzip Logstash

  • Prepare a logstash.conf config file

  • Run bin/logstash -f logstash.conf -t to check config (logstash.conf)

  • Run bin/logstash -f logstash.conf

Configure Logstash

Logstash configuration files are in a JSON-like format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.

Let's create a configuration file called 02-redis-input.conf and set up our Redis metrics input:

sudo vi /etc/logstash/conf.d/02-redis-input.conf

Insert the following input configuration:

input {
  exec {
    command => "redis-cli info clients"
    interval => 2
    type => "clients"
  }
  exec {
    command => "redis-cli info memory"
    interval => 2
    type => "memory"
  }
  exec {
    command => "redis-cli info cpu"
    interval => 2
    type => "cpu"
  }
  exec {
    command => "redis-cli info stats"
    interval => 2
    type => "stats"
  }
  exec {
    command => "redis-cli info replication"
    interval => 2
    type => "replication"
  }
}
Save and quit. This configuration polls the local Redis instance every two seconds with redis-cli info, producing one event per metric section. Now let's create a configuration file called 10-redis-filter.conf, where we will add a filter for Redis messages:

sudo vi /etc/logstash/conf.d/10-redis-filter.conf

Insert the following redis filter configuration:

filter {
  split {
  }
  ruby {
    code => "fields = event['message'].split(':')
    event[fields[0]] = fields[1].to_f"
  }
}
Save and quit. This filter splits the multi-line output of redis-cli info into one event per line, then uses a ruby filter to parse each metric:value line into a numeric field, making the data structured and queryable.
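To make the filter's behavior concrete, here is a rough Python analogue of what the ruby code does to a single split-out event. This is only a sketch: Ruby's String#to_f is more lenient with trailing text than the approximation below.

```python
def to_f(s):
    """Rough approximation of Ruby's String#to_f (returns 0.0 when not parseable)."""
    try:
        return float(s)
    except ValueError:
        return 0.0

def apply_ruby_filter(event):
    """Mimic the ruby filter: turn an event whose message is 'metric:value'
    into an event carrying that metric as a numeric field."""
    fields = event["message"].split(":")
    event[fields[0]] = to_f(fields[1])
    return event

event = apply_ruby_filter({"message": "used_memory:1024000"})
print(event["used_memory"])  # → 1024000.0
```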

Lastly, we will create a configuration file called 30-elasticsearch-output.conf:

sudo vi /etc/logstash/conf.d/30-elasticsearch-output.conf

Insert the following output configuration:

output {
  elasticsearch {
    hosts => [""]
    user => "ec18487808b6908009d3"
    password => "efcec6a1e0"
    index => "redis-%{+YYYY.MM.dd}"
    document_type => "redis_logs"
  }
  stdout { codec => rubydebug }
}

Save and exit. This output configures Logstash to store the log data in our Qbox-provisioned Elasticsearch cluster, in a daily index named after Redis (redis-YYYY.MM.dd).
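The redis-%{+YYYY.MM.dd} index name uses Logstash's sprintf date formatting, so each day's events land in their own index. A quick sketch of the equivalent name computation (the date here is just an example):

```python
from datetime import datetime, timezone

def daily_index(ts):
    """Expand the redis-%{+YYYY.MM.dd} pattern for a given event timestamp."""
    return "redis-" + ts.strftime("%Y.%m.%d")

print(daily_index(datetime(2017, 4, 15, tzinfo=timezone.utc)))  # → redis-2017.04.15
```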

If you have downloaded the Logstash tar or zip, you can create a single logstash.conf file containing the input, filter, and output sections all in one place.

sudo vi LOGSTASH_HOME/logstash.conf

Insert the following input, filter and output configuration in logstash.conf

input {
  exec {
    command => "redis-cli info clients"
    interval => 2
    type => "clients"
  }
  exec {
    command => "redis-cli info memory"
    interval => 2
    type => "memory"
  }
  exec {
    command => "redis-cli info cpu"
    interval => 2
    type => "cpu"
  }
  exec {
    command => "redis-cli info stats"
    interval => 2
    type => "stats"
  }
  exec {
    command => "redis-cli info replication"
    interval => 2
    type => "replication"
  }
}
filter {
  split {
  }
  ruby {
    code => "fields = event['message'].split(':')
    event[fields[0]] = fields[1].to_f"
  }
}
output {
  elasticsearch {
    hosts => [""]
    user => "ec18487808b6908009d3"
    password => "efcec6a1e0"
    index => "redis-%{+YYYY.MM.dd}"
    document_type => "redis_logs"
  }
  stdout { codec => rubydebug }
}

If you want to add filters for other applications that use this Redis input, be sure to name the files so they sort between the input and the output configuration (i.e., between 02- and 30-).
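The numeric prefixes matter because Logstash concatenates the files in conf.d in lexicographic order. A quick illustration (the 15- filter name is hypothetical, just to show where an extra file would slot in):

```python
# Logstash reads conf.d files in sorted (lexicographic) order,
# so the numeric prefixes control where each section lands.
files = [
    "30-elasticsearch-output.conf",
    "02-redis-input.conf",
    "15-myapp-filter.conf",  # hypothetical extra filter
    "10-redis-filter.conf",
]
ordered = sorted(files)
print(ordered)  # input first, filters in the middle, output last
```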

Test your Logstash configuration with this command:

sudo service logstash configtest

It should display Configuration OK if there are no syntax errors. Otherwise, try and read the error output to see what's wrong with your Logstash configuration.

Restart Logstash, and enable it, to put our configuration changes into effect:

sudo service logstash restart
sudo update-rc.d logstash defaults 96 9

If you have downloaded the Logstash tar or zip, it can be run using the following command:

bin/logstash -f logstash.conf

Next, we'll load the sample Kibana dashboards.

Load Kibana Dashboards


When you have finished setting up the Logstash server to collect logs from client servers, let's look at Kibana, the web interface provisioned by Qbox. The Kibana user interface can be used for filtering, sorting, discovering, and visualizing logs that are stored in Elasticsearch. Users can create bar, line, and scatter plots, or pie charts and maps, on top of large volumes of data.


Go ahead and select the redis-YYYY.MM.DD index pattern from the Index Patterns menu (left side), then click the Star (Set as default index) button to set the redis index as the default.

Now click the Discover link in the top navigation bar. By default, this will show you all of the log data over the last 15 minutes. You should see a histogram with log events, with log messages below:


Right now, there won't be much in there because you are only gathering Redis metrics from your client servers. Here, you can search and browse through your logs. You can also customize your dashboard.

The Kibana interface is divided into four main sections: Discover, Visualize, Dashboard and Settings. Now, having the metrics readily available is all well and good, but it’s no good having them if you don’t know how to spot performance issues. Here’s how you can interpret specific metrics to identify issues:


  • The used_memory metric shows the total number of bytes that Redis has allocated. If a Redis instance exceeds its available memory, the OS will start swapping old and unused sections of memory to disk to make room for newer, active pages.

  • The total_commands_processed metric provides the total number of commands processed by the Redis server. This metric can help diagnose latency (the time it takes clients to receive a response from the server), which is the most direct way to detect changes in Redis performance.

  • If there is a decrease in performance, you will see that the total_commands_processed metric either drops or stalls more than usual. This is when Kibana can give you a clear overview of changes that are occurring over a period of time.
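Since total_commands_processed is a cumulative counter, throughput is simply the difference between two samples divided by the sampling interval. A small sketch, using made-up sample values and the two-second interval from the exec inputs above:

```python
def commands_per_second(prev_total, curr_total, interval_seconds):
    """Throughput between two total_commands_processed samples taken
    interval_seconds apart (the counter only ever increases)."""
    return (curr_total - prev_total) / interval_seconds

# Hypothetical samples taken two seconds apart:
print(commands_per_second(10000, 10500, 2))  # → 250.0
```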


Kibana has many other features, such as graphing and filtering, so feel free to poke around!



Qbox-provisioned Elasticsearch makes it very easy for us to visualize centralized logs using Logstash and Kibana. Remember that we can send pretty much any type of log or indexed data to Logstash, but the data becomes even more useful if it is parsed and structured with grok.

What do we look for in centralized logging? As it happens, many things, but the most important are as follows.

  • A way to parse data and send them to a central database in near real-time.

  • The capacity of the database to handle near real-time data querying and analytics.

  • A visual representation of the data through filtered tables, dashboards, and so on.

The ELK stack (Elasticsearch, Logstash, and Kibana) can do all that and it can easily be extended to satisfy the particular needs we’ll set in front of us.

Other Helpful Tutorials

Give It a Whirl!

It's easy to spin up a standard hosted Elasticsearch cluster on any of our 47 Rackspace, Softlayer, Amazon, or Microsoft Azure data centers. And you can now provision your own AWS Credits on Qbox Private Hosted Elasticsearch.

Questions? Drop us a note, and we'll get you a prompt response.

Not yet enjoying the benefits of a hosted ELK stack enterprise search on Qbox? We invite you to create an account today and discover how easy it is to manage and scale your Elasticsearch environment in our cloud hosting service.
