
Solr update performance

04.10.2011
When I started working with Solr I issued updates just as I was used to doing with databases: a single command followed by a commit. Later I discovered this was far from optimal and started using different update strategies.

To demonstrate the differences I’ve done some simple benchmarks with three different update strategies, and as you will see the performance difference can be huge. I will also give some tips on how to easily optimize the updates in your application.

The benchmark scripts were built in PHP with Solarium. I’ve left out the setup part in the tests; see the Solarium wiki for more info about using Solarium.
I used two systems on a local network, one running Solr and one running the PHP client. This adds some network latency, so results for a local Solr might vary.
The Solr index has just over 100K documents. Each of the test scripts will add 1000 documents to this index.
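
For reference, a client setup along these lines is enough to run the snippets below; the autoloader path and the host, port and path values are placeholders, see the Solarium wiki for the full configuration options:

// register the Solarium autoloader (the path depends on your installation)
require 'Solarium/Autoloader.php';
Solarium_Autoloader::register();

$config = array(
    'adapteroptions' => array(
        'host' => '127.0.0.1', // placeholder, use your own Solr host
        'port' => 8983,
        'path' => '/solr/',
    )
);

$client = new Solarium_Client($config);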

First test: adding and committing each document

This test commits each document individually, a very common scenario. Normally these updates would be spread out over time, but for the benchmark I do a thousand add/commits in a loop:

$start = microtime(true);

for ($id = 8000000; $id < 8001000; $id++) {
    $document = new Solarium_Document_ReadWrite;
    $document->id = $id;
    $document->name = 'test';

    $query = new Solarium_Query_Update;
    $query->addDocument($document);
    $query->addCommit(); // commit after every single add
    $client->update($query);
}

echo round(microtime(true)-$start, 2);

Result: 12.04 seconds (83 documents per second)
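
Second test: adding each document separately with a single commit

This test still issues a separate request for each document, but only commits once, after all documents have been added. The listing below is a sketch of that strategy, following the same pattern as the other tests (the exact id range is an assumption):

$start = microtime(true);

for ($id = 8001000; $id < 8002000; $id++) {
    $document = new Solarium_Document_ReadWrite;
    $document->id = $id;
    $document->name = 'test';

    $query = new Solarium_Query_Update;
    $query->addDocument($document);
    $client->update($query); // one add request per document, no commit yet
}

// one single commit after all adds
$query = new Solarium_Query_Update;
$query->addCommit();
$client->update($query);

echo round(microtime(true)-$start, 2);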

The performance difference is caused by Solr having to do only a single commit instead of a thousand. This test issues the same number of Solr requests (actually one extra, for the commit), so network latency should be about the same.

Third test: adding and committing all documents in a single request

The final test combines the complete update into a single Solr request. While the commands are now issued in a single request, it is still the exact same set of commands as used in the second test.

$start = microtime(true);

$query = new Solarium_Query_Update;
for ($id = 8002000; $id < 8003000; $id++) {
    $document = new Solarium_Document_ReadWrite;
    $document->id = $id;
    $document->name = 'test';
    $query->addDocument($document);
}

$query->addCommit(); // one single commit for all 1000 documents
$client->update($query);

echo round(microtime(true)-$start, 2);

Result: 0.45 seconds (2220 documents per second)

The performance difference is caused by the lower number of requests. Fewer requests help at multiple levels: most importantly network latency, but fewer requests also mean less overhead in both Solarium and Solr.

How to optimize updates

As you can see from the results, the performance difference can be huge. The best update strategy for your application will depend on multiple factors. First of all you need to determine how you are going to update:

  • batch updates, e.g. a nightly update cron or other fixed interval
  • continuous (maybe even concurrent) updates, e.g. based on user input
  • or maybe a combination of both

For batch updates you will probably get the best performance with a solution similar to the third test. For very big updates you might need to break them up into several requests, followed by a single commit. How to implement this depends on the Solr client library you use; with Solarium it is quite easy.
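
As a sketch of that approach, a big batch could be sent in chunks of 200 documents per request, followed by a single commit at the end; the chunk size and the $myDocuments source used here are just assumptions for the example:

$buffer = array();

foreach ($myDocuments as $data) { // $myDocuments: your own document source (placeholder)
    $document = new Solarium_Document_ReadWrite;
    $document->id = $data['id'];
    $document->name = $data['name'];
    $buffer[] = $document;

    // send a request as soon as the buffer holds 200 documents
    if (count($buffer) == 200) {
        $query = new Solarium_Query_Update;
        $query->addDocuments($buffer);
        $client->update($query);
        $buffer = array();
    }
}

// send any remaining documents and issue one single commit
$query = new Solarium_Query_Update;
if (count($buffer) > 0) {
    $query->addDocuments($buffer);
}
$query->addCommit();
$client->update($query);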

If you have continuous updates you should avoid issuing a commit with each update, as this can cause a high number of commits or even concurrent commits.
The easiest solution for this scenario is the Solr ‘autoCommit’ feature. This way you can add documents without worrying about when to commit. You only need to add a single setting to your Solr config and remove the commits from your update code.
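
For example, an autoCommit section in solrconfig.xml along these lines does the trick; the maxDocs and maxTime values below are just an illustration and should be tuned to your own situation:

<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>1000</maxDocs>   <!-- commit after at most 1000 pending documents -->
    <maxTime>60000</maxTime>  <!-- or after 60 seconds (value in milliseconds) -->
  </autoCommit>
</updateHandler>

With autoCommit enabled the update requests themselves no longer need to contain any commit command.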

Disclaimer
The results of these benchmarks are influenced by many factors: index size, document size, index schema, update frequency, hardware, network, Solr configuration and many more. The tests are also a worst-case scenario; if you use single add/commit updates but they are spread out enough, it might not be an issue at all.
You should really run your own tests in your own environment with a realistic workload to validate the results.

Published at DZone with permission of Bas De Nooijer, author and DZone MVB.
