In this series we are going to discuss the scaling of Elasticsearch in detail, covering both general and fine-tuned settings.

In this article we will discuss the system settings in detail. It will guide you through the parameters and values to consider at various levels, starting with the operating system (we are considering Unix-based systems here). Focus will also be given to the memory settings in Elasticsearch, and we will look even deeper into heap memory management and how to fine-tune it. Then we will talk about segment merge tuning.

System Settings

Number of Open Files

Unix-based systems have a limit on the maximum number of files that a process can open, and sometimes this limit proves to be a bottleneck when Elasticsearch is under heavy load.
In order to view the system-wide maximum number of open file descriptors we can use the following command: sysctl fs.file-max

It is best to set this value to at least 64000. We can set the maximum number of open file descriptors by adding the line fs.file-max=64000 to /etc/sysctl.conf.
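
For instance, a minimal sketch of checking and raising the limit (the 64000 value follows the recommendation above; adjust it to your workload):

# Check the current system-wide limit
sysctl fs.file-max

# Raise it persistently by appending the setting to /etc/sysctl.conf ...
echo "fs.file-max=64000" | sudo tee -a /etc/sysctl.conf

# ... and apply it without a reboot
sudo sysctl -p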

Memory Map Count

All operating systems have a limit on the number of memory map areas a process may have. Most of the time the default limits are fine, but sometimes they become insufficient and cause memory exceptions. In such cases we can change the default by setting the vm.max_map_count variable to the desired value in /etc/sysctl.conf.
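
The same pattern applies here; 262144 is the value commonly recommended for Elasticsearch, but treat it as a starting point:

# Check the current memory map limit
sysctl vm.max_map_count

# Raise it persistently and apply the change
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p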

Disable Swap OR Enable mlockall

Under heavy operations, there is a chance that Elasticsearch ends up using the system's swap memory, which drastically degrades performance. In order to avoid such a scenario we can use either or both of the following approaches (a configuration sketch follows the list).

  1. The Elasticsearch process can be locked into RAM and prevented from being swapped out by setting bootstrap.mlockall: true in the elasticsearch.yml file.
  2. Swapping can be disabled completely, or the tendency to swap can be reduced, by adjusting the vm.swappiness variable in /etc/sysctl.conf.
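
A minimal sketch combining both approaches is shown below; the swappiness value of 1 and the memlock limit entry are common conventions rather than strict requirements:

# elasticsearch.yml -- lock the Elasticsearch heap into RAM
bootstrap.mlockall: true

# /etc/security/limits.conf -- allow the elasticsearch user to lock memory
elasticsearch - memlock unlimited

# /etc/sysctl.conf -- reduce the kernel's tendency to swap
vm.swappiness=1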

Maximum Memory for Elasticsearch

There are numerous situations, such as heavy traffic or the indexing of very large documents, that can overload Elasticsearch and create various performance issues. Most of these issues are caused by insufficient heap memory allocation.

To understand heap memory issues we need to take a brief look at the garbage collection mechanism of Java, the language in which Elasticsearch is written.

In Java, the allocation and de-allocation of memory for objects is handled by the JVM through an automated process called garbage collection (GC). When the heap is small, objects are cleared by the GC process quickly, and the pauses caused by cleaning the heap are very short.

When we allocate a large heap, the GC process takes more time to clean up the objects in memory. If new data is pushed into memory before the GC process can free adequate space, the heap becomes flooded and heap problems such as long GC pauses or out-of-memory errors follow.

There are a couple of ways to handle this problem, but here we mention the method that works best for larger applications: allocate a sufficient heap size while following the considerations below.

The maximum memory can be set using the ES_HEAP_SIZE environment variable (see the example after the list below). There are a few general guidelines for setting the heap size for Elasticsearch:

  1. It should not be more than 50% of the total available RAM. Lucene makes extensive use of the filesystem cache, so leaving too little memory for it hinders performance.
  2. The maximum memory that should be allocated to the heap is 32GB.
  3. If the heap size exceeds 32GB, the JVM can no longer use compressed object pointers, so pointers occupy double the space and less memory is available for actual operations, which results in performance degradation.
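
For example, on a machine with 64 GB of RAM, a starting point that respects both guidelines (half the RAM, staying just under the 32 GB ceiling) might look like the sketch below; the exact value is an assumption to adjust for your hardware:

# Set before starting Elasticsearch; picked up by the startup scripts
export ES_HEAP_SIZE=31g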

The effects of increasing the heap size are discussed in more detail in this article.

Tuning Heap Memory

Sometimes, even with an optimized heap memory allocation, there can be performance issues caused by the default settings inside the heap memory. Take a look at the heap structure below:

[Figure: structure of the Elasticsearch heap memory, showing the young and old generations]

As the figure shows, the heap memory is made up of two components, the young generation and the old generation. The young generation holds short-lived objects, so it is cleaned up by quick, relatively cheap minor garbage collections.

By default the JVM ratio of old generation to young generation is 2, and it is controlled by the JVM parameter NewRatio. For better performance it can help to give the young generation a larger share, so that minor collections there occur less frequently. We can check the space occupied by the young and old generations in the heap by running the following command:

curl <IP:PORT>/_nodes/stats?pretty

Response for the above command showing the sizes occupied by young and old generations:

 "pools" : {
            "young" : {
              "used_in_bytes" : 246514840,
              "max_in_bytes" : 279183360,
              "peak_used_in_bytes" : 279183360,
              "peak_max_in_bytes" : 279183360
            },
            "survivor" : {
              "used_in_bytes" : 30364136,
              "max_in_bytes" : 34865152,
              "peak_used_in_bytes" : 34865152,
              "peak_max_in_bytes" : 34865152
            },
            "old" : {
              "used_in_bytes" : 95647872,
              "max_in_bytes" : 1798569984,
              "peak_used_in_bytes" : 95647872,
              "peak_max_in_bytes" : 1798569984
            }
          }

The part of the response that shows the collection counts and times for the young and old generations:

"gc" : {
          "collectors" : {
            "young" : {
              "collection_count" : 6,
              "collection_time_in_millis" : 359
            },
            "old" : {
              "collection_count" : 1,
              "collection_time_in_millis" : 57
            }
          }

From this response we can see the sizes occupied by the young and old generations and, if needed, change the ratio accordingly using the NewRatio JVM option.

The bigger the young generation, the less often minor collections occur. We can set the size of the young generation through the variable read by the Elasticsearch startup script, e.g. ES_HEAP_NEWSIZE=1g, or we can control it through the JVM options using the NewRatio parameter; for example, -XX:NewRatio=4 means the old generation is four times the size of the young generation.
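
For illustration, a hedged sketch of both approaches; the 8g/2g split is purely an example, and it assumes your startup script honors these variables and ES_JAVA_OPTS:

# Fixed sizes through the startup-script variables
export ES_HEAP_SIZE=8g        # total heap
export ES_HEAP_NEWSIZE=2g     # young generation

# Or express it as a ratio through the JVM options;
# NewRatio=2 keeps the default 2:1 old-to-young split
export ES_JAVA_OPTS="-XX:NewRatio=2"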

SSD Hard Disk

The kind of hard disks we use for indexing also matters for performance, because different kinds of disks have different write speeds. SSDs have several advantages over spinning disks: much faster write times, faster boot times, and much lower power consumption. Since the write speed is higher, it is advisable to use SSDs for better indexing performance.

Segments Merge Tuning

Indexing in Elasticsearch continuously generates new segments, and at times they are created faster than the merge process can combine them, slowing merging down. When this happens, Elasticsearch will automatically throttle indexing requests to a single thread. This throttling is done to prevent the creation of an unmanageable number of segments, which can result in a segment explosion problem.

So, in order to keep the segment merging process from degrading performance, we can change the throttle limit. The higher the throttle limit, the higher the flow of data into Elasticsearch, but the default limit in Elasticsearch (usually 20 MB/s) can be too low. When it is too low we can change it with the following request:

curl -XPUT "<IP:PORT>/_cluster/settings" -d '
{
    "persistent" : {
        "indices.store.throttle.max_bytes_per_sec" : "200mb"
    }
}'

The above setting raises the throttle speed to 200 MB/s, which is intended for SSDs; the default value of 20 MB/s is chosen with spinning hard disks in mind.

Now, in certain cases such as a bulk import of data, where we are not at all concerned with search performance, we can disable throttling altogether, which allows indexing to run as fast as the disks allow. We can use the settings below for that:

curl -XPUT "<IP:PORT>/_cluster/settings" -d '
{
     "transient" : {
         "indices.store.throttle.type" : "none"
     }
}'
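
Once the bulk import has finished, the throttle can be switched back to its merge-based default (this sketch assumes the same setting names and Elasticsearch version as the examples above):

curl -XPUT "<IP:PORT>/_cluster/settings" -d '
{
     "transient" : {
         "indices.store.throttle.type" : "merge"
     }
}'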

Conclusion

In this article we have seen several system and memory settings that can boost Elasticsearch performance.

In the next part of this series we will look at a few more parameters, run performance tests, and see how these settings affect Elasticsearch performance.