Memory is overrated

A recurring question on the Solr mailing list is “how do I speed up my searches?” and a recurring answer is “equip the machine with enough RAM to have your whole index cached in memory”. That answer is also given when the index in question is 200GB in size.

So what is wrong with that?

Technically nothing: Lucene & Solr search eats IOPS like candy, and having the full index in the disk cache is close to an optimal solution (optimal would be a full in-memory index without the file access indirection, but I digress). There is the matter of getting updates into the disk cache, which does involve some trickiness if the index is updated in a master-slave setup and copied. But that can be solved with even more RAM, so I guess that falls under the same “buy more RAM” logic. What cannot be solved is the long warmup time if the server is rebooted or the disk cache is otherwise cleared, but that is a rare occurrence.

Economically, copious amounts of RAM do not make sense. Yes, you guessed it, this is about Solid State Drives.

  • Their price is 1/10 that of RAM (or 1/5 if you want RAID 1)
  • They suffer a lot less from the cleared disk cache problem
  • They can be easily RAIDed for TB-scale
  • They even draw less power than the same amount of RAM

Of course, it all boils down to how fast SSDs are compared to the humongous disk cache solution. We experimented with this 5 years ago, but hardware and software have improved since then, so it was time for new measurements.

Setup reasoning and methodology

5 years ago, our measurements were made very close to the Lucene 2.x searcher itself: the search results were extracted, but there was no web service or similar transport overhead. We chose this approach as it gave us very clean data for comparison.

This time our tests use the standard Solr 4 web service, with the server and the test client on different machines. While the non-trivial transport overhead gives a less clean comparison, it has the distinct advantage of providing real-world numbers, and is thus useful for informing readers of what they can expect from a similar setup. The tests were run multiple times, with the best results being used for all the charts. A ZIP with the full result set as well as the test scripts is available upon request, should anyone be interested.

Like last time, the test corpus is our local index at the State and University Library, Denmark.

  • It has 11M documents at 49GB
  • Queries are edismaxed over 30 fields
  • The result set contains 5-10 fields per document for a maximum of 20 documents and is about 30KB of XML
  • Faceted queries involve two fields: one with 10M unique values (15M instances) and one with 626 unique values (1.5M instances)
  • The test queries are logged user queries, issued by multiple threads using JMeter (a sketch of a single request follows this list)
  • Between each test, Solr is shut down, the disk cache cleared and Solr started again
  • The first query is not measured
  • MMapDirectory is used.
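
To make the query setup concrete, here is a minimal sketch (Python, standard library only) of how a single edismax, faceted request could be sent to a Solr 4 select handler. The host, core and field names are placeholders rather than our actual configuration; the real tests were driven by JMeter replaying logged user queries.

    # Sketch of a single test request against a Solr 4 select handler.
    # Host, core and field names are placeholders, not the actual test setup.
    import urllib.parse
    import urllib.request

    SOLR_SELECT = "http://solr-test:8983/solr/collection1/select"  # placeholder host/core

    def run_query(query_text, facets=False):
        params = [
            ("q", query_text),
            ("defType", "edismax"),          # queries are edismaxed over many fields
            ("qf", "title author subject"),  # placeholder subset of the ~30 fields
            ("rows", "20"),                  # at most 20 documents per result set
            ("fl", "id title author"),       # 5-10 stored fields per document
            ("wt", "xml"),                   # ~30KB XML responses, as in the tests
        ]
        if facets:
            params += [
                ("facet", "true"),
                ("facet.field", "author"),         # stand-in for the 10M-unique-values field
                ("facet.field", "material_type"),  # stand-in for the 626-unique-values field
            ]
        url = SOLR_SELECT + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as response:
            return response.read()

    # Example: issue one faceted query (the first query of each run is not measured).
    xml = run_query("some logged user query", facets=True)
    print(len(xml), "bytes of XML returned")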

Hardware

The test machine is a 2*8 core “Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz” with 64GB of RAM, with the amount of RAM being software adjustable. Storage is 3 * 7200 RPM drives in RAID-5 and 3 * 200GB Dell MZ-5EA2000-0D3 SSDs in RAID-0.

Addendum 2013-06-07: The SSDs do not have TRIM enabled and have been used for 2½ years, during which a lot of indexes have been created, along with a 10M+ file test and some 40GB database tests. They should be fairly fragmented by now.

Test results

The most eye-opening graphs are for 8GB RAM + SSD vs. a fully memory-cached index. Keep in mind that 2 of the 8GB are used for the Java heap itself and some memory is used for general bookkeeping, leaving a little less than 5GB (or 10% of the index size) for caching.

SSD @ 8GB RAM vs. fully cached

SSD with 8GB of RAM vs. fully cached index, non-faceting searches

SSD with 8GB of RAM vs. fully cached index, faceting searches

(Please ignore the strange U at the end of the first graph; it is a long story that warrants another blog post)

As can be seen, the “8GB RAM + SSD” solution is very close to having the index fully cached in memory for our setup. Your mileage may vary, but this is consistent with our general observation at Statsbiblioteket: Our main search servers each have 3 active search installations with a shared index size of 110GB+ on SSD. They are equipped with 16GB of RAM each and have ~7GB free for disk caching.

Further supporting the case for SSDs, the 95th and 99th percentile response times (calculated using a sliding window over the last 1000 searches) are nearly always below 1 second: The users are getting consistently snappy results. Again, please ignore the part of the graph after 200 seconds.

SSD with 8GB of RAM

SSD with 8GB of RAM, non-faceting searches
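
For reference, a nearest-rank percentile over a sliding window of the last 1000 response times can be sketched as follows. This only illustrates the calculation; it is not the script used for the measurements.

    # Sketch: nearest-rank percentiles over a sliding window of the last 1000
    # response times. Illustration only, not the measurement script.
    from collections import deque

    class SlidingPercentiles:
        def __init__(self, window_size=1000):
            self.window = deque(maxlen=window_size)  # keeps only the newest N response times

        def add(self, response_time_ms):
            self.window.append(response_time_ms)

        def percentile(self, p):
            """Nearest-rank p'th percentile (0 < p <= 100) of the current window."""
            if not self.window:
                return None
            ordered = sorted(self.window)
            rank = max(1, int(round(p / 100.0 * len(ordered))))
            return ordered[rank - 1]

    # Feed in measured response times and read off the 95th/99th percentiles.
    tracker = SlidingPercentiles()
    for ms in (120, 340, 95, 780, 210):  # made-up numbers
        tracker.add(ms)
    print(tracker.percentile(95), tracker.percentile(99))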

The SSD solution is not independent of the amount of RAM available for caching, though. We tested with 4GB of RAM, which leaves just 1GB (2% of the index size) for caching. As can be seen below, performance is about 1/4 that of the 8GB RAM + SSD setup.

SSD with 4GB of RAM

SSD with 4GB of RAM, non-faceting searches

Just for kicks, here’s a chart showing the performance using spinning disks on an unwarmed index.

Spinning drives with 32GB of RAM

Spinning drives with 32GB of RAM, non-faceting searches

The graph does illuminate a big problem with spinning drives: If the searcher is warmed using queries, it takes a very long time to reach peak performance. Copying the full index to /dev/null is faster and the result is maximum performance, but that trick is only effective if the whole index fits in the disk cache.
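
The warmup trick amounts to reading every index file once so the operating system pulls it into the disk cache; a minimal sketch with a placeholder index path is shown below. Clearing the cache between tests is the opposite operation, on Linux typically done as root by writing 3 to /proc/sys/vm/drop_caches.

    # Sketch of the /dev/null warmup trick: read every file in the Lucene index
    # once so the OS disk cache holds it (only effective if the cache is large
    # enough for the whole index). The index path is a placeholder.
    import os

    INDEX_DIR = "/path/to/solr/data/index"  # placeholder

    def warm_index(index_dir, block_size=1 << 20):
        read_bytes = 0
        for root, _dirs, files in os.walk(index_dir):
            for name in files:
                with open(os.path.join(root, name), "rb") as handle:
                    while True:
                        chunk = handle.read(block_size)  # data is discarded; the read itself warms the cache
                        if not chunk:
                            break
                        read_bytes += len(chunk)
        return read_bytes

    print("Read %.1f GB into the disk cache" % (warm_index(INDEX_DIR) / 2.0**30))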

Conclusion

Using SSDs as storage for search delivers near maximum performance at a fraction of the cost of an equivalent RAM solution. As always, do test before buying.

2 Responses to “Memory is overrated”

  1. Chris Says:

    Thanks a lot for posting this! Any chance you could clarify the prevalence of proximity searching in your tests? The two main factors, I think, are how often users explicitly quoted things, and whether you had enabled the pf2/pf3 edismax parameters (thereby adding proximity work to *every* query). If you want to clarify even further, do you allow wildcard searches?

  2. Toke Eskildsen Says:

    pf2/pf3 were not enabled, wildcards were allowed. For the RAM & SSD runs, about 100K queries were processed in the 250-second test. A quick grep through those queries resulted in:

    - 16408 phrase (“foo bar”) queries, out of which 15559 were qualified (zoo:”foo bar”)
    - 1715 truncated (foo*) queries
    - 949 wildcard (f?o) queries, out of which 184 were qualified (zoo:f?o)
