We recently announced our significant global expansion with Microsoft Azure data centers. Today, we bring more good news for users who are looking for even more reliability in their hosted Elasticsearch service provider: Qbox users can now select three-node replicated Elasticsearch clusters in any of our Amazon data centers.

With our new replicated cluster service offering, we now provide the best search solution of any hosted Elasticsearch vendor. These three-node clusters with default replicas are highly resilient to outages and restarts, which equates to much more flexibility—and confidence—during resizing operations.

Continue reading to see how you can enjoy higher throughput and greater reliability in your ES searches.

On our standard clusters, the user controls the number of replicas for each index. When you add a node, you must also add a replica if higher throughput is the purpose of the new node. We realize that this is not always the case: for write-heavy clusters with growing datasets and over-allocated shards, the intention of adding more nodes is to support more data per shard. That's why we haven't been automatically adding replicas on our standard clusters. Those clusters give the user the option to increase data capacity instead of supporting higher throughput.

It will always remain important for us to offer the standard cluster type to accommodate users who are running write-heavy configurations. As we consider our own experience and that of our customers, however, it's become crystal clear that most users add nodes because they need to improve performance and throughput. So, from now on, users who run read-intensive apps against Elasticsearch have the option for a convenient, high-performance, highly resilient solution: replicated clusters. With automatic shard replication on a three-node cluster—along with click-to-increment additional replicas—you get extremely high availability and blistering read-intensive search performance.


Advantages of Replicated Clusters

With replicated clusters, you gain these benefits:

  • Built for resiliency
  • Smart load balancing
  • Simple scaling
  • Endpoint simplification
  • Local SSD, high-memory servers
  • Controlled sharding

High resiliency — We design our replicated clusters for maximum resiliency in a three-node configuration: one client node and two data nodes, plus shard replication. Because the client node handles less resource-intensive work than the data nodes, it is unlikely to fail due to resource strain, so we also configure the client node as a dedicated master node.

Minimal downtime — You get even better load balancing with a replicated cluster because the potential for downtime is lower when you need to remove nodes or migrate. The major benefit of a dedicated master node is that failover restarts on the data nodes will be faster because there is no discovery delay while the nodes elect a new master. We won't bore you with those details here, but the result is less downtime during topology modifications such as node removal and migrations.

Higher throughput — On each of our replicated clusters, every primary shard of an index has at least one corresponding replica shard. This means that there will be at least two complete copies of every index. See the simple instructions below to learn how we make it easy for you to add more replicas (more copies). Each additional replica instantly adds another copy of every index, and each one gives you an overall increase in throughput (queries per minute).
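Under the hood, each extra copy corresponds to Elasticsearch's per-index `number_of_replicas` setting, which is changed through the standard update-settings endpoint. Here's a minimal sketch of building that request body (the `ES_URL` endpoint and index name are hypothetical placeholders, not a real cluster):

```python
import json

ES_URL = "https://example-cluster.qbox.io"  # hypothetical endpoint for illustration

def replica_settings_body(replica_count):
    """Build the JSON body for Elasticsearch's update-settings API,
    sent as: PUT {ES_URL}/<index>/_settings"""
    return json.dumps({"index": {"number_of_replicas": replica_count}})

body = replica_settings_body(2)
print(body)  # {"index": {"number_of_replicas": 2}}
```

Raising the value adds copies (and read throughput) on the fly; lowering it frees capacity again, with no index rebuild required.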

Lower-cost SSD hardware — To support each replicated cluster, we use all-SSD hardware nodes with a high RAM-to-disk ratio. This minimizes costs, and we pass the savings along to our customers. Everyone benefits from much higher performance, and the risk of instability from heap exhaustion is greatly diminished.

NOTE: Until we work out favorable pricing with other cloud vendors, we are making replicated clusters available only in our AWS data centers.

Endpoint Simplification

For many users, deploying any change to their app is a huge process that often results in downtime. On standard clusters, you must add the endpoint of a new node to your app code and then re-deploy in order to fully achieve the additional throughput that comes from the new node.

However, you can avoid this tedium altogether by managing only the endpoints of client nodes. Yet another benefit of a replicated cluster is that it's necessary to keep track of only one client endpoint—or a few endpoints on larger, multi-node clusters. This makes it a breeze when connecting to new nodes.

Deciding between Replicated and Standard Clusters

Because of the benefits that we're presenting here, we recommend replicated clusters as the best choice for most customers. There are, however, some environments for which we would suggest standard clusters.

One exception is any log-storage environment that isn't mission critical. Another exception in which replicated clusters aren't necessary is a single-node environment for testing or staging. We would also recommend a standard cluster if it's necessary to maintain very explicit control over your sharding (most customers do not). Finally, the excess CPU and disk space on standard nodes makes them the more economical choice for write-heavy (low-read) applications.

How Many Replicas?

Before we show how easy it is to configure a replicated cluster, let's think about the failover advantages that come with replicas. With one replica, you have basic failover—but only for search operations.

Indexing requests, however, will succeed only when a quorum of shard copies is available. And, by accepting the default of a single replica, you'll have only two copies—which means that the quorum is two.

You can also get failover for indexing operations—but only when you opt for two or more replicas. This is because Elasticsearch can continue to update the index with two of the three shard copies. Because of consistency issues, it cannot do so with less than a quorum.

The point we want to convey here is this: by adding just one more replica, you achieve complete operational failover capability for your cluster.
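The quorum arithmetic above can be made concrete with a few lines. This is not Qbox code, just the standard quorum formula (a majority of copies), showing why one replica gives search failover only, while two replicas also give indexing failover:

```python
def quorum(copies):
    """Minimum number of shard copies that must be available for a write."""
    return copies // 2 + 1

def indexing_survives_one_node_loss(replicas):
    copies = replicas + 1               # one primary plus its replicas
    available_after_loss = copies - 1   # one copy lost with the failed node
    return available_after_loss >= quorum(copies)

print(indexing_survives_one_node_loss(1))  # False: 2 copies, quorum is 2
print(indexing_survives_one_node_loss(2))  # True: 3 copies, quorum is 2
```

With a single replica, losing either copy drops you below the quorum of two, so writes stop; with two replicas, two of the three copies remain and indexing continues.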

Easy Setup

It's as easy to provision a replicated cluster as it is to provision a standard cluster. Simply follow these steps:


1. On your cluster dashboard, click the New Cluster button.

2. Choose AWS Replicated.

3. Next, choose the Capacity and enter the number of additional Replicas that you need. Enter a cluster Name and make the other selections in the Basics panel.

4. The Elasticsearch default for the number of shards (number_of_shards) for each index is 5. If you want to change this value, then check the Default Shard Count box in the Options panel and move on to the final step below.


5. Although 5 is a sensible default for the number of shards for each index, you may decide that another value is better for your requirements. You can change the default Shards value here, but remember that you can specify a different number_of_shards value each time you create a new index.


NOTE: For clusters containing many indices (as is the case with Logstash), we recommend that you specify a value that is lower than the Elasticsearch default of 5. For Logstash, you may want to consider a value of 1.
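As step 5 notes, the shard count can also be set per index at creation time. Here's a minimal sketch of the request body for Elasticsearch's create-index call (the Logstash-style index name is illustrative):

```python
import json

def create_index_body(shards, replicas=1):
    """JSON body for creating an index (PUT /<index>) with an
    explicit number_of_shards, overriding the cluster default."""
    return json.dumps({
        "settings": {
            "index": {
                "number_of_shards": shards,
                "number_of_replicas": replicas,
            }
        }
    })

# A daily Logstash index with a single shard, per the note above:
# PUT /logstash-2015.01.01
print(create_index_body(1))
```

Because many small indices multiply the shard count quickly, specifying one shard per Logstash index keeps the total number of shards (and heap overhead) manageable.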

Other Benefits of Replicated Clusters

As with standard clusters, you can perform migrations to increase node size as your dataset grows. Also, choosing a higher-capacity configuration will support higher throughput (queries per minute). Comparing the two, you get significant cost and performance benefits with replicated clusters.

Of course, higher capacities support higher throughput. So, let's say that you find yourself in need of 4+ replicas. Unless you're on the largest node size, it's generally more economical to increase the Capacity and decrease the number of replicas.

On replicated clusters, the minimum configuration is one client node, two data nodes plus one replica. If you require three replicas (a total of four data nodes), you could lower your cost and also improve performance by simply increasing the Capacity—the result being a return to two data nodes and one replica on larger VMs.
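The node arithmetic behind that trade-off is simple. This sketch assumes the topology described here (one client node, and one data node per copy of the data); the numbers are for illustration only:

```python
def data_nodes(replicas):
    """One data node per copy of the data: the primary set plus each replica."""
    return replicas + 1

def cluster_size(replicas):
    """Total nodes: one client node plus the data nodes."""
    return 1 + data_nodes(replicas)

# Minimum replicated cluster: 1 replica -> 2 data nodes, 3 nodes total
print(cluster_size(1))  # 3
# Three replicas -> 4 data nodes, 5 nodes total. It is often cheaper to
# step up the Capacity (larger VMs) and return to cluster_size(1) instead.
print(cluster_size(3))  # 5
```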

NOTE on large datasets: With our replicated clusters, we're ready to support datasets larger than 120 GB on this new system. Contact Qbox support and we'll help you with the configuration.

Upcoming feature: Since it's difficult to implement auto-scaling on standard clusters, we will be rolling out auto-scaling for replicated clusters in the near future.

The Case for Replicated Clusters

The primary motivation for the design of this new feature reflects a central tenet of our philosophy: three-node clusters are highly resilient in the case of single-node outages and restarts. This topology offers full failover and high availability in virtually all scenarios. Making use of a client node equates to a lower price point for the basic three-node cluster: client nodes handle less intensive work than data nodes, and they provide a way to keep costs lower for a production cluster with search failover.

To increase search throughput, we recommend adding additional replica shards to your three-node replicated cluster. The Base Price/Hr on the Replicated section of our pricing page is the per-compute-hour base price for a three-node cluster. The +1 Replica column lists the approximate monthly cost for each additional replica (in addition to the three-node cluster and first replica shard that is included in the Base Price).

Please don't hesitate to contact us if you have any questions.

Other Helpful Resources

Have a look at these other resources that can help you optimize your Elasticsearch dev-ops work:

Give It a Whirl!

This feature is available now, and you can provision a replicated cluster immediately. Also, it's easy to spin up a standard hosted Elasticsearch cluster in any of our 47 Rackspace, SoftLayer, Amazon, or Microsoft Azure data centers.

Not yet enjoying hosted Elasticsearch? We invite you to create a free account today.
