A comprehensive log management and analysis strategy is mission critical, enabling organizations to understand the relationship between operational, security, and change management events and to maintain a comprehensive understanding of their infrastructure. Log files from web servers, applications, and operating systems also provide valuable data, although in different formats and scattered across many hosts.

As with any web server, the task of logging NGINX is somewhat of a challenge. NGINX access and error logs can produce thousands of log lines every second, and this data, if monitored properly, can provide you with valuable information not only on what has already transpired but also on what is about to happen. But how do you extract actionable insights from this information? How do you effectively monitor such a large amount of data?

NGINX access logs contain a wealth of information about client requests that, if monitored efficiently, can provide a clear picture of how the web server and the application it serves are behaving. This tutorial describes how Qbox can be used to overcome this challenge by monitoring NGINX access logs with a Qbox-provisioned Elasticsearch Stack.

For this post, we will be using hosted Elasticsearch on Qbox.io. You can sign up or launch your cluster here, or click "Get Started" in the header navigation. If you need help setting up, refer to "Provisioning a Qbox Elasticsearch Cluster."

Provisioning an Elasticsearch cluster in Qbox is easy. In this article, we walk you through the initial steps and show you how simple it is to start and configure your cluster. We will then install and configure Logstash to ship our nginx logs to Elasticsearch, where they can be visualized and analyzed via Kibana dashboards.

Our Goal

The goal of the tutorial is to use Qbox as a centralized logging and monitoring solution. Qbox provides out-of-the-box solutions for Elasticsearch, Kibana, and many Elasticsearch analysis and monitoring plugins. We will set up Logstash on a separate node or machine to gather nginx logs from one or more servers, and we will use Qbox's provisioned Kibana to visualize the gathered logs.

Our ELK stack setup has three main components:

  • Elasticsearch: Used to store all of the application and monitoring logs (provisioned by Qbox).

  • Logstash: The server component that processes incoming logs and feeds them to Elasticsearch.

  • Kibana: A web interface for searching and visualizing logs (Provisioned by Qbox).

Prerequisites

The amount of CPU, RAM, and storage that your Elasticsearch server will require depends on the volume of logs that you intend to gather. For this tutorial, we will be using a Qbox provisioned Elasticsearch with the following minimum specs:

  • Provider: AWS

  • Version: 2.3.4

  • RAM: 1GB

  • CPU: vCPU1

  • Replicas: 0

The above specs can be adjusted to your requirements. Please select the appropriate names, versions, and regions for your needs. For this example, we used Elasticsearch version 2.3.4. (The most current version is 5.3. We support all versions of Elasticsearch on Qbox. To learn more about the major differences between 2.x and 5.x, click here.)

In addition to our Elasticsearch server, we will require a separate Logstash server to process incoming nginx logs from client servers and ship them to Elasticsearch. There can be one or more client servers whose logs you wish to ship to Elasticsearch. For simplicity, the Logstash server can also act as the client server itself. The Endpoint and Transport addresses for our Qbox-provisioned Elasticsearch cluster are as follows:

(Screenshot: Endpoint and Transport addresses of the Qbox-provisioned Elasticsearch cluster)

Note: Please make sure to whitelist the Logstash server IP in the Qbox Elasticsearch cluster. Also, the Logstash server must have access to all client servers from which it will collect nginx logs.

Nginx: Set Up Reverse Proxy Server

Now that your application is running and listening on a private IP address, you need to set up a way for your users to access it. We will set up an Nginx web server as a reverse proxy for this purpose, and this tutorial will set up the Nginx server from scratch. If you already have an Nginx server set up, you can just copy the location block into the server block of your choice, making sure the location does not conflict with any of your web server's existing content.

On the web server, let's update the apt-get package lists with this command:

sudo apt-get update

Then install Nginx using apt-get:

sudo apt-get install nginx

Note: Nginx can be configured either in its main configuration file, /etc/nginx/nginx.conf, or in its default sites-available configuration file, /etc/nginx/sites-available/default. If nginx is configured in any secondary file other than its main configuration file (nginx.conf), ensure that the secondary conf file is included from the main /etc/nginx/nginx.conf file. Also, make sure that the location block you add to the server block of your choice does not conflict with or override any of your web server's existing server blocks.
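For reference, the stock Ubuntu /etc/nginx/nginx.conf already pulls in both of these locations via include lines inside the http block, roughly like the following (a sketch; your distribution's file may differ slightly):

http {
    # ... other settings ...
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}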

Configure Nginx

Open the default server block configuration file for editing:

sudo vi /etc/nginx/sites-available/default

Delete everything in the file and insert the following configuration. Be sure to substitute your own domain name for the server_name directive (or your server's IP address if you don't have a domain set up). Replace QBOX_ES_SERVER and QBOX_ES_PORT with the endpoint address and port of your Qbox-provisioned Elasticsearch cluster; a commented example using our cluster's endpoint is included in the block.

/etc/nginx/sites-available/default

server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass https://QBOX_ES_SERVER:QBOX_ES_PORT;
        # proxy_pass https://eb843037.qb0x.com:30024;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

This configures the web server to respond to requests at its root. Assuming our server is available at example.com, accessing http://example.com/ via a web browser would forward the request to the upstream defined in proxy_pass, in our case the Qbox-provisioned Elasticsearch endpoint, which receives the request and returns the response.

You can add additional location blocks to the same server block to provide access to other applications on the same web server. For example, if you were also running a Node.js application on an app server on port 8081, you could add this location block to allow access to it via http://example.com/app2:

Nginx Configuration — Additional Locations

location /app2 {
    proxy_pass http://APP_PRIVATE_IP_ADDRESS:8081;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}

Once you are done adding the location blocks for your applications, save, and exit.

On the web server, restart Nginx:

sudo service nginx restart

Assuming that your upstream service (in our case, the Qbox-provisioned Elasticsearch cluster) is reachable and that your Nginx configuration is correct, you should be able to access it via the reverse proxy on the web server. Try it out by accessing your web server's URL (its public IP address or domain name).
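A quick way to check from the command line (a minimal sketch; substitute your own domain or public IP for example.com):

curl -i http://example.com/

If the proxy is working, the status line and body should come from the upstream Elasticsearch cluster rather than from the default nginx welcome page.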

Install Logstash

Download and install the Public Signing Key:

wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

We will use Logstash version 2.3.x, which is compatible with our Elasticsearch version 2.3.4. The Elastic Community Product Support Matrix can be referenced in order to clear up any version compatibility issues.

Add the repository definition to your /etc/apt/sources.list file:

echo "deb https://packages.elastic.co/logstash/2.3/debian stable main" | sudo tee -a /etc/apt/sources.list

Run sudo apt-get update so the new repository is picked up, then install Logstash:

sudo apt-get update && sudo apt-get install logstash

Alternatively, the Logstash tarball can be downloaded from the Elastic Product Releases site. The steps for setting up and running Logstash from the tarball are then pretty simple (see the sketch after this list):

  • Download and unzip Logstash

  • Prepare a logstash.conf config file

  • Run bin/logstash -f logstash.conf -t to check config (logstash.conf)

  • Run bin/logstash -f logstash.conf
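For example, a rough sketch of the tarball route (the exact version and download URL are assumptions; match them to your Elasticsearch version via the Elastic Product Releases site):

wget https://download.elastic.co/logstash/logstash/logstash-2.3.4.tar.gz
tar -xzf logstash-2.3.4.tar.gz
cd logstash-2.3.4
bin/logstash -f logstash.conf -t   # validate the configuration first
bin/logstash -f logstash.conf      # then start shipping logs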

Configure Logstash

Logstash configuration files use a JSON-like format of their own and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.
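At a high level, every pipeline configuration follows this skeleton (a minimal sketch; the plugins used inside each section vary by use case):

input {
  # where events come from: file, beats, tcp, syslog, ...
}
filter {
  # how events are parsed and enriched: grok, mutate, geoip, date, ...
}
output {
  # where events are sent: elasticsearch, stdout, ...
}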

Let's create a configuration file called 02-nginx-input.conf and set up our "nginx" input:

sudo vi /etc/logstash/conf.d/02-nginx-input.conf

Insert the following input configuration:

input {
  file {
    path => ["/var/log/nginx/access.log", "/var/log/nginx/error.log"]
    type => "nginx"
  }
}

NOTE: The nginx access and error log file paths may differ depending on your environment, nginx configuration, and underlying OS.
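If you are unsure where nginx writes its logs on a given host, a quick check (a sketch; directives and paths vary by distribution) is:

grep -R -E "access_log|error_log" /etc/nginx/
ls -l /var/log/nginx/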

Save and quit. This configures a file input that tails the nginx access and error logs and tags each event with the type "nginx". Now let's create a configuration file called 10-nginx-filter.conf, where we will add a filter for nginx messages:

sudo vi /etc/logstash/conf.d/10-nginx-filter.conf

Insert the following nginx filter configuration:

filter {
 grok {
   match => [ "message" , "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}"]
   overwrite => [ "message" ]
 }
 mutate {
   convert => ["response", "integer"]
   convert => ["bytes", "integer"]
   convert => ["responsetime", "float"]
 }
 geoip {
   source => "clientip"
   target => "geoip"
   add_tag => [ "nginx-geoip" ]
 }
 date {
   match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
   remove_field => [ "timestamp" ]
 }
 useragent {
   source => "agent"
 }
}

Save and quit. This filter uses grok to parse incoming nginx log lines (labeled with the "nginx" type by our input) into structured, queryable fields, converts the numeric fields to numbers, adds GeoIP data based on the client IP, sets the event timestamp from the log line, and parses the user agent string.
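To make this concrete, here is a sample access log line in the combined format (fabricated sample data) and, roughly, the fields that the COMBINEDAPACHELOG pattern plus the mutate and date filters produce from it:

203.0.113.5 - - [31/Mar/2017:10:05:03 +0000] "GET /index.html HTTP/1.1" 200 3700 "http://example.com/" "Mozilla/5.0"

# clientip   => "203.0.113.5"
# verb       => "GET"
# request    => "/index.html"
# response   => 200            (integer, after mutate)
# bytes      => 3700           (integer, after mutate)
# referrer   => "http://example.com/"
# agent      => "Mozilla/5.0"
# @timestamp => 2017-03-31T10:05:03Z (set by the date filter)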

Last, we will create a configuration file called 30-elasticsearch-output.conf:

sudo vi /etc/logstash/conf.d/30-elasticsearch-output.conf

Insert the following output configuration:

output {
 elasticsearch {
   hosts => ["https://eb843037.qb0x.com:30024/"]
   user => "5d53675f1e0dd8be3ada"
   password => "3b193023f7"
   index => "nginx-%{+YYYY.MM.dd}"
   document_type => "nginx_logs"
 }
 stdout { codec => rubydebug }
}

Save and exit. This output configures Logstash to store the log data in Elasticsearch, which is running at https://eb843037.qb0x.com:30024/, in a daily index named after nginx (nginx-YYYY.MM.dd). It also prints each event to stdout in the rubydebug format, which is handy while testing.

If you downloaded the Logstash tar or zip instead, you can create a single logstash.conf file with the input, filter, and output sections all in one place.

sudo vi LOGSTASH_HOME/logstash.conf

Insert the following input, filter and output configuration in logstash.conf

input {
 file {
   path => ["/var/log/nginx/access.log", "/var/log/nginx/error.log"]
   type => "nginx"
 }
}
filter {
 grok {
   match => [ "message" , "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}"]
   overwrite => [ "message" ]
 }
 mutate {
   convert => ["response", "integer"]
   convert => ["bytes", "integer"]
   convert => ["responsetime", "float"]
 }
 geoip {
   source => "clientip"
   target => "geoip"
   add_tag => [ "nginx-geoip" ]
 }
 date {
   match => [ "timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
   remove_field => [ "timestamp" ]
 }
 useragent {
   source => "agent"
 }
}
output {
 elasticsearch {
   hosts => ["https://eb843037.qb0x.com:30024/"]
   user => "5d53675f1e0dd8be3ada"
   password => "3b193023f7"
   index => "nginx-%{+YYYY.MM.dd}"
   document_type => "nginx_logs"
 }
 stdout { codec => rubydebug }
}

If you want to add filters for other applications that use this same input, be sure to name the files so they sort between the input and the output configuration (i.e., between 02- and 30-).
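For example, with the files created so far, the directory listing would sort like this:

ls /etc/logstash/conf.d
02-nginx-input.conf  10-nginx-filter.conf  30-elasticsearch-output.conf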

Test your Logstash configuration with this command:

sudo service logstash configtest

It should display Configuration OK if there are no syntax errors. Otherwise, read the error output to see what is wrong with your Logstash configuration.

Restart Logstash and enable it on boot to put our configuration changes into effect:

sudo service logstash restart
sudo update-rc.d logstash defaults 96 9

If you have downloaded the Logstash tar or zip, it can be run using the following command:

bin/logstash -f logstash.conf

Let's send a few curl requests to localhost, which nginx will proxy to our Elasticsearch cluster, to verify that nginx logs are being pushed to the cluster via Logstash. Remember, we set the Qbox cluster base address as the proxy_pass URL in the local nginx server block.

curl -XGET http://localhost/_cat/indices
curl -XGET http://localhost/nginx-2017.03.31/_settings
curl -XGET http://localhost/nginx-2017.03.31/_count
curl -XGET http://localhost/nginx-2017.03.31/
curl -XGET http://localhost/nginx-2017.03.31/_search

If you make these requests from a browser instead, enter the cluster credentials (username and password) from above if the browser prompts for them.
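From the command line, the same credentials can be supplied with curl's -u flag (a sketch using the username and password from the output configuration above; only needed if the cluster asks for authentication):

curl -u 5d53675f1e0dd8be3ada:3b193023f7 -XGET http://localhost/_cat/indices
curl -u 5d53675f1e0dd8be3ada:3b193023f7 -XGET http://localhost/nginx-2017.03.31/_count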

Next, we'll load the sample Kibana dashboards.

Load Kibana Dashboards


When you have finished setting up the Logstash server to collect logs from client servers, let's look at Kibana, the web interface provisioned by Qbox. The Kibana user interface can be used for filtering, sorting, discovering, and visualizing the logs that are stored in Elasticsearch. You can create bar, line, and scatter plots, or pie charts and maps, on top of large volumes of data.

Go ahead and select [nginx-]YYYY.MM.DD from the Index Patterns menu (left side), then click the Star (Set as default index) button to set the nginx index as the default.

Now click the Discover link in the top navigation bar. By default, this will show you all of the log data over the last 15 minutes. You should see a histogram with log events, with log messages below:

(Screenshot: Kibana Discover view showing the histogram of log events with log messages below)

Right now there won't be much in there because you are only gathering nginx logs from your client servers. Here, you can search and browse through your logs. You can also customize your dashboard.

The Kibana interface is divided into four main sections: Discover, Visualize, Dashboard, and Settings. Among other things, it can be used for the following (a few example queries appear after this list):

  • Searching for a term such as "admin" to see if anyone is trying to log into your servers as admin.

  • Searching for a particular device id (for example, device id: "5345").

  • Changing the time frame by selecting an area on the histogram or using the time picker in the navigation bar above.

  • Creating visualizations and adding them to specific dashboards.
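For instance, queries like these in the Discover search bar (Lucene query syntax; the field names come from the grok, mutate, and geoip filters configured earlier) quickly narrow the results:

response:404
response:[500 TO 599] AND verb:POST
clientip:"203.0.113.5"
geoip.country_name:"United States"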


Kibana has many other features, such as graphing and filtering, so feel free to poke around!

Conclusion

Monitoring many services on a single server poses some difficulties. Monitoring many services on many servers requires a whole new way of thinking and a new set of tools. As you start embracing microservices, containers, and clusters, the number of deployed containers will begin increasing rapidly. The same holds true for servers that form the cluster. We can no longer log into a node and look at logs. Centralized logging can be very useful when attempting to identify problems with your servers or applications because it allows you to search through all of your logs in a single place.

Qbox-provisioned Elasticsearch makes it very easy for us to visualize centralized logs using logstash and Kibana. Remember that we can send essentially any type of log or indexed data to Logstash, but the data becomes even more useful if it is parsed and structured with grok.

What do we look for in centralized logging? As it happens, many things, but the most important are as follows.

  • A way to parse data and send them to a central database in near real time.

  • The capacity of the database to handle near real-time data querying and analytics.

  • A visual representation of the data through filtered tables, dashboards, and so on.

The ELK stack (Logstash, Elasticsearch, and Kibana) can do all that, and it can easily be extended to satisfy the particular needs we’ll set in front of us.

Give It a Whirl!

It's easy to spin up a standard hosted Elasticsearch cluster on any of our 47 Rackspace, Softlayer, Amazon, or Microsoft Azure data centers. And you can now provision your own AWS Credits on Qbox Private Hosted Elasticsearch.

Questions? Drop us a note, and we'll get you a prompt response.

Not yet enjoying the benefits of a hosted ELK stack enterprise search on Qbox? We invite you to create an account today and discover how easy it is to manage and scale your Elasticsearch environment in our cloud hosting service.
