Redis, the popular open source in-memory data store, can also be used as a persistent on-disk database. It supports a variety of data structures such as lists, sets, sorted sets (with range queries), strings, geospatial indexes (with radius queries), bitmaps, hashes, and hyperloglogs. The in-memory store is used to solve problems in areas such as real-time messaging, caching, and statistics calculation.
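
As a quick illustration, a few of these structures can be exercised directly from redis-cli once Redis is installed (covered below); the key names here are arbitrary examples:

redis-cli SET greeting "hello"           # string
redis-cli LPUSH tasks "send-report"      # list
redis-cli ZADD scores 42 "player-one"    # sorted set
redis-cli ZRANGEBYSCORE scores 0 100     # range query on the sorted set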

Provisioning an Elasticsearch cluster in Qbox is easy. In this article, we walk you through the initial steps to start and configure your cluster. We then set up and configure Logstash to ship the logs to Elasticsearch in order to monitor Redis performance. Redis performance logs shipped to Elasticsearch can then be visualized and analyzed via Kibana dashboards.

Our Goal

The goal of this tutorial is to use Qbox as a centralized logging and monitoring solution. Qbox provides a turnkey solution for Elasticsearch, Kibana, and many Elasticsearch analysis and monitoring plugins. We will set up Logstash on a separate node or machine to gather Redis logs from one or more servers, and use Qbox's provisioned Kibana to visualize the gathered logs.

Our ELK stack setup has three main components:

  • Elasticsearch: Used to store all of the application and monitoring logs (provisioned by Qbox).

  • Logstash: The server component that processes incoming logs and feeds them to Elasticsearch.

  • Kibana: A web interface for searching and visualizing logs (Provisioned by Qbox).

For this post, we will be using hosted Elasticsearch on Qbox.io. You can sign up or launch your cluster here, or click "Get Started" in the header navigation. If you need help setting up, refer to "Provisioning a Qbox Elasticsearch Cluster."

Prerequisites

The amount of CPU, RAM, and storage that your Elasticsearch Server will require depends on the volume of logs that you intend to gather. For this tutorial, we will be using a Qbox provisioned Elasticsearch with the following minimum specs:

  • Provider: AWS

  • Version: 5.1.1

  • RAM: 1GB

  • CPU: vCPU1

  • Replicas: 0

The above specs can be changed per your requirements. Please select the appropriate names, versions, and regions for your needs. For this example, we used Elasticsearch version 5.1.1; the most current version is 5.3. We support all versions of Elasticsearch on Qbox. (To learn more about the major differences between 2.x and 5.x, click here.)

In addition to our Elasticsearch server, we will require a separate Logstash server to process incoming Redis logs from client servers and ship them to Elasticsearch. There can be one or more client servers from which you wish to ship logs to Elasticsearch. For simplicity or testing purposes, the Logstash server can also act as the client server itself. The Endpoint and Transport addresses for our Qbox provisioned Elasticsearch cluster are as follows:


Endpoint: REST API

https://ec18487808b6908009d3:efcec6a1e0@eb843037.qb0x.com:32563

Authentication

  • Username = ec18487808b6908009d3

  • Password = efcec6a1e0

TRANSPORT (NATIVE JAVA)

eb843037.qb0x.com:30543

Note: Please make sure to whitelist the Logstash server IP in the Qbox Elasticsearch cluster. Also, the Logstash server must have access to all the client servers from which it collects Redis logs.
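
As a quick sanity check from the Logstash server, you can confirm that the cluster is reachable with curl, using the endpoint and credentials above; a JSON response containing the cluster name and version confirms connectivity:

curl -u ec18487808b6908009d3:efcec6a1e0 https://eb843037.qb0x.com:32563/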

Redis: Set Up the Redis In-Memory Data Store

There are a couple of prerequisites to install to make the installation as easy as possible. Start off by updating the apt package index:

sudo apt-get update

Once the process finishes, install the build-essential package, which provides the compiler we need to build Redis from source:

sudo apt-get install build-essential

Finally, we need to install tcl:

sudo apt-get install tcl8.5

Installing Redis

Download the latest stable release tarball from Redis.io.

wget http://download.redis.io/releases/redis-stable.tar.gz

Untar it and switch into that directory:

tar xzf redis-stable.tar.gz
cd redis-stable

Proceed with the make command, then run the recommended make test:

make
make test

Finish up by running make install, which installs the program system-wide.

sudo make install

Once the program has been installed, you can use a built-in script that sets up Redis to run as a background daemon.

To access the script move into the utils directory and run the Ubuntu/Debian install script:

cd utils
sudo ./install_server.sh

As the script runs, you can choose the default options by pressing Enter. Once the script completes, the redis-server will be running in the background. You can start and stop Redis with these commands (the number depends on the port you set during installation; 6379 is the default port):

sudo service redis_6379 start
sudo service redis_6379 stop

You can then access the redis database by typing the following command:

redis-cli

You now have Redis installed and running. The prompt will look like this:

redis 127.0.0.1:6379>
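
As a quick smoke test, the PING command confirms that the server is responding:

redis-cli ping
PONG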

To set Redis to automatically start at boot, run:

sudo update-rc.d redis_6379 defaults

Redis provides all available metrics through the redis-cli info command. So, if you execute redis-cli info in your terminal, the output should look like this:

➜  ~ redis-cli
127.0.0.1:6379> info
# Server
redis_version:3.2.3
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:ec5e6acb1f26de13
redis_mode:standalone
os:Darwin 15.5.0 x86_64
arch_bits:64
multiplexing_api:kqueue
gcc_version:4.2.1
process_id:915
run_id:4b0045cf8606b125cc38c91e4b4121df733a8ba4
tcp_port:6379
uptime_in_seconds:1803312
uptime_in_days:20
hz:10
lru_clock:15336947
executable:/usr/local/opt/redis/bin/redis-server
config_file:/usr/local/etc/redis.conf
# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:1010016
used_memory_human:986.34K
used_memory_rss:573440
used_memory_rss_human:560.00K
used_memory_peak:1575712
used_memory_peak_human:1.50M
total_system_memory:8589934592
total_system_memory_human:8.00G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:0.57
mem_allocator:libc
# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1491569495
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
# Stats
total_connections_received:364
total_commands_processed:1214
instantaneous_ops_per_sec:0
total_net_input_bytes:265629
total_net_output_bytes:16492921
instantaneous_input_kbps:0.00
instantaneous_output_kbps:2076.76
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:43
evicted_keys:0
keyspace_hits:256
keyspace_misses:538
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:484
migrate_cached_sockets:0
# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
# CPU
used_cpu_sys:235.67
used_cpu_user:123.15
used_cpu_sys_children:0.05
used_cpu_user_children:0.01
# Cluster
cluster_enabled:0
# Keyspace
db0:keys=2,expires=0,avg_ttl=0
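
Each section of this output can also be fetched on its own, which is exactly what our Logstash configuration below will do. For example:

redis-cli info memory
redis-cli info stats | grep total_commands_processed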

Install Logstash

Download and install the Public Signing Key:

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

We will use Logstash version 2.4.x, which is compatible with our Elasticsearch version 5.1.x. Refer to the Elastic Community Product Support Matrix to clear up any version compatibility questions.

Add the repository definition to your /etc/apt/sources.list file:

echo "deb https://packages.elastic.co/logstash/2.4/debian stable main" | sudo tee -a /etc/apt/sources.list

Run sudo apt-get update so the repository is ready for use, then install Logstash:

sudo apt-get update && sudo apt-get install logstash

Alternatively, the Logstash tarball can also be downloaded from the Elastic Product Releases site. The steps for setting up and running Logstash are then pretty simple (a concrete sketch follows the list):

  • Download and unzip Logstash

  • Prepare a logstash.conf config file

  • Run bin/logstash -f logstash.conf -t to check config (logstash.conf)

  • Run bin/logstash -f logstash.conf
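
Concretely, those steps look like the following sketch. It assumes the 2.4.1 tarball; check the Elastic Product Releases site for the exact file name and URL of the release you want:

# assumes Logstash 2.4.1; adjust the version to match your download
wget https://download.elastic.co/logstash/logstash/logstash-2.4.1.tar.gz
tar xzf logstash-2.4.1.tar.gz
cd logstash-2.4.1
bin/logstash -f logstash.conf -t    # check the config
bin/logstash -f logstash.conf       # run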

Configure Logstash

Logstash configuration files are written in a JSON-like format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.

Let's create a configuration file called 02-redis-input.conf and set up our "redis" input:

sudo vi /etc/logstash/conf.d/02-redis-input.conf

Insert the following input configuration:

input {
 # each exec block runs one "redis-cli info" section every 2 seconds
 # and tags the resulting event with a matching type
 exec {
   command => "redis-cli info clients"
   interval => 2
   type => "clients"
 }
 exec {
   command => "redis-cli info memory"
   interval => 2
   type => "memory"
 }
 exec {
   command => "redis-cli info cpu"
   interval => 2
   type => "cpu"
 }
 exec {
   command => "redis-cli info stats"
   interval => 2
   type => "stats"
 }
 exec {
   command => "redis-cli info replication"
   interval => 2
   type => "replication"
 }
}

Save and quit. Rather than listening on a port, this configuration shells out to redis-cli every two seconds for each info section (clients, memory, cpu, stats, replication) and emits the output as typed events. Now let's create a configuration file called 10-redis-filter.conf, where we will add a filter for the Redis messages:

sudo vi /etc/logstash/conf.d/10-redis-filter.conf

Insert the following redis filter configuration:

filter {
 # split the multi-line "redis-cli info" output into one event per line
 split {

 }
 # turn each "metric:value" line into a numeric field on the event
 ruby {
   code => "fields = event['message'].split(':')
   event[fields[0]] = fields[1].to_f"
 }
}

Save and quit. This filter uses split to break the multi-line redis-cli info output into individual events, and a small Ruby snippet to parse each metric:value line into a structured, queryable numeric field.
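
To see roughly what this parsing produces, here is a shell approximation of the same logic (split each metric line on ":" and coerce the value to a number); it is only an illustration, not part of the pipeline:

redis-cli info memory | awk -F: 'NF==2 {print $1, "=>", $2+0}'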

Lastly, we will create a configuration file called 30-elasticsearch-output.conf:

sudo vi /etc/logstash/conf.d/30-elasticsearch-output.conf

Insert the following output configuration:

output {
 elasticsearch {
   hosts => ["https://eb843037.qb0x.com:32563/"]
   user => "ec18487808b6908009d3"
   password => "efcec6a1e0"
   index => "redis-%{+YYYY.MM.dd}"
   document_type => "redis_logs"
 }
 stdout { codec => rubydebug }
}

Save and exit. This output basically configures Logstash to store the log data in Elasticsearch, which is running at https://eb843037.qb0x.com:32563/, in a daily index named after Redis (redis-YYYY.MM.dd).
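
Once events start flowing, you can verify that documents are arriving in the daily index with a quick search, reusing the credentials from the output section:

curl -u ec18487808b6908009d3:efcec6a1e0 "https://eb843037.qb0x.com:32563/redis-*/_search?size=1&pretty"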

If you have downloaded the Logstash tar or zip, you can instead create a single logstash.conf file with the input, filter, and output all in one place.

sudo vi LOGSTASH_HOME/logstash.conf

Insert the following input, filter, and output configuration in logstash.conf:

input {
  exec {
    command => "redis-cli info clients"
    interval => 2
    type => "clients"
  }
  exec {
    command => "redis-cli info memory"
    interval => 2
    type => "memory"
  }
  exec {
    command => "redis-cli info cpu"
    interval => 2
    type => "cpu"
  }
  exec {
    command => "redis-cli info stats"
    interval => 2
    type => "stats"
  }
  exec {
    command => "redis-cli info replication"
    interval => 2
    type => "replication"
  }
}
filter {
 split {
 
 }
 ruby {
   code => "fields = event['message'].split(':')
   event[fields[0]] = fields[1].to_f"
 }
}
output {
 elasticsearch {
   hosts => ["https://eb843037.qb0x.com:32563/"]
   user => "ec18487808b6908009d3"
   password => "efcec6a1e0"
   index => "redis-%{+YYYY.MM.dd}"
   document_type => "redis_logs"
 }
 stdout { codec => rubydebug }
}

If you want to add filters for other applications that use the redis input, be sure to name the files so they sort between the input and the output configuration (i.e. between 02- and 30-).
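
With the three files from this tutorial in place, the directory listing looks like this:

ls /etc/logstash/conf.d
02-redis-input.conf  10-redis-filter.conf  30-elasticsearch-output.conf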

Test your Logstash configuration with this command:

sudo service logstash configtest

It should display Configuration OK if there are no syntax errors. Otherwise, try to read the error output to see what's wrong with your Logstash configuration.

Restart Logstash, and enable it at boot, to put our configuration changes into effect:

sudo service logstash restart
sudo update-rc.d logstash defaults 96 9

If you have downloaded the Logstash tar or zip, it can be run using the following command:

bin/logstash -f logstash.conf

Next, we'll load the sample Kibana dashboards.

Load Kibana Dashboards


When you are finished setting up the Logstash server to collect logs from client servers, let's look at Kibana, the web interface provisioned by Qbox. The Kibana user interface can be used for filtering, sorting, discovering, and visualizing logs that are stored in Elasticsearch. You can create bar, line, and scatter plots, or pie charts and maps, on top of large volumes of data.


Go ahead and select redis-YYYY.MM.dd from the Index Patterns menu (left side), then click the Star (Set as default index) button to set the redis index as the default.

Now click the Discover link in the top navigation bar. By default, this will show you all of the log data over the last 15 minutes. You should see a histogram with log events, with log messages below:


Right now there won't be much in there because you are only gathering Redis metrics from your client servers. Here, you can search and browse through your logs. You can also customize your dashboard.

The Kibana interface is divided into four main sections: Discover, Visualize, Dashboard and Settings. Now, having the metrics readily available is all well and good, but it’s no good having them if you don’t know how to spot performance issues. Here’s how you can interpret specific metrics to identify issues:


  • The used_memory metric shows you the total number of bytes that Redis has allocated. If a Redis instance exceeds its available memory (that is, its used_memory grows larger than the memory available to it), the operating system will start swapping old and unused memory pages to disk to make space for newer and active pages.

  • The total_commands_processed metric provides you with the total number of commands processed by the Redis server. This metric can help diagnose latency (the time it takes clients to receive a response from the server), which is the most direct way to detect changes in Redis performance.

  • If there is a decrease in performance, you will see the total_commands_processed metric either drop or stall more than usual. This is where Kibana can give you a clear overview of changes occurring over a period of time; for a terminal-side check of the same counters, see the command after this list.
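
As referenced above, you can poll these counters directly from the terminal (a simple sketch using watch; the field names match the info output shown earlier):

watch -n 2 "redis-cli info stats | grep -E 'total_commands_processed|instantaneous_ops_per_sec'"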


Kibana has many other features, such as graphing and filtering, so feel free to poke around!


Conclusion

Qbox provisioned Elasticsearch makes it very easy for us to visualize centralized logs using Logstash and Kibana. Remember that we can send pretty much any type of log or indexed data to Logstash, but the data becomes even more useful if it is parsed and structured with grok.

What do we look for in centralized logging? As it happens, many things, but the most important are as follows.

  • A way to parse data and send it to a central database in near real-time.

  • The capacity of the database to handle near real-time data querying and analytics.

  • A visual representation of the data through filtered tables, dashboards, and so on.

The ELK stack (Elasticsearch, Logstash, and Kibana) can do all that and it can easily be extended to satisfy the particular needs we’ll set in front of us.


Give It a Whirl!

It's easy to spin up a standard hosted Elasticsearch cluster on any of our 47 Rackspace, Softlayer, Amazon, or Microsoft Azure data centers. And you can now provision your own AWS Credits on Qbox Private Hosted Elasticsearch.

Questions? Drop us a note, and we'll get you a prompt response.

Not yet enjoying the benefits of a hosted ELK stack enterprise search on Qbox? We invite you to create an account today and discover how easy it is to manage and scale your Elasticsearch environment in our cloud hosting service.
