Some suggestions when you work with Solr

Background

After we deployed Solr to production, it ran fine for the first few days, but then it would suddenly become slow to respond. This happened several times. We can start by analyzing this Graphite screenshot.

Solr Graphite screenshot

The whole server structure:

Notice: we will not discuss whether the structure was designed correctly in this post. This post will focus only on how to use Solr itself.

More info about this Graphite graph

A warning from the production Solr log

PERFORMANCE WARNING: Overlapping onDeckSearchers=X

You will find an explanation on the Solr wiki page:

This warning means that at least one searcher hadn't yet finished warming in the background, when a commit was issued and another searcher started warming. This can not only eat up a lot of ram (as multiple on deck searchers warm caches simultaneously) but it can create a feedback cycle, since the more searchers warming in parallel means each searcher might take longer to warm.

Typically the way to avoid this error is to either reduce the frequency of commits, or reduce the amount of warming a searcher does while it's on deck (by reducing the work in newSearcher listeners, and/or reducing the autowarmCount on your caches)

See also the maxWarmingSearchers option in solrconfig.xml.
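
To make that concrete, these are the solrconfig.xml knobs the wiki is talking about. The values below are only an illustrative sketch, not the configuration we actually ran with:

    <!-- solrconfig.xml, illustrative values only; these elements live in the <query> section -->
    <maxWarmingSearchers>2</maxWarmingSearchers>

    <!-- Smaller autowarmCount means each new searcher has less cache warming to do -->
    <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="32"/>
    <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="16"/>

    <!-- Inside <updateHandler>: let Solr schedule commits instead of committing on every write -->
    <autoCommit>
      <maxTime>60000</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>
    <autoSoftCommit>
      <maxTime>15000</maxTime>
    </autoSoftCommit>

Lowering autowarmCount and letting autoCommit/autoSoftCommit control commit timing both reduce how much work each new searcher has to do before it can serve queries.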

I want to add some additional information before we start analyzing. This warning had always been present in the production log, but we saw roughly twice as many of these warning entries as we had the day before.

A premature conclusion

From the Graphite screenshot I thought something must be wrong with Solr itself. The only factors we could control were the Solr configuration and the JVM. After tuning both of those and running several rounds of Tsung stress tests, I kept getting this warning. I failed to get rid of it, which sent me back to find more articles about this issue.

Conclusion

It is very possible that we were simply using Solr wrong, which would have been obvious if I had paid more attention to "...to avoid this error is to either reduce the frequency of commits, or reduce the amount of warming a searcher does...".

So the solution could be to batch the write requests.

The SolrJ client provides ConcurrentUpdateSolrServer, which contains an internal queue, so we can queue all write requests first with commitWithin enabled.
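
Here is a minimal sketch of what that could look like with SolrJ 4.x; the URL, queue size, thread count, and commitWithin interval are made-up values to illustrate the idea, not our production settings:

    import java.io.IOException;
    import java.util.List;

    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class BatchedIndexer {

        // Illustrative values: tune queue size and thread count for your own load.
        private static final String SOLR_URL = "http://localhost:8983/solr/collection1";
        private static final int QUEUE_SIZE = 1000;         // documents buffered in the internal queue
        private static final int THREAD_COUNT = 4;          // background threads draining the queue
        private static final int COMMIT_WITHIN_MS = 60000;  // let Solr commit within 60 seconds

        private final ConcurrentUpdateSolrServer server =
                new ConcurrentUpdateSolrServer(SOLR_URL, QUEUE_SIZE, THREAD_COUNT);

        // Queue documents instead of sending one request (and one commit) per write.
        public void index(List<SolrInputDocument> docs) throws SolrServerException, IOException {
            for (SolrInputDocument doc : docs) {
                // commitWithin lets Solr decide when to commit, instead of an explicit
                // commit() after every add, which is what triggers overlapping searchers.
                server.add(doc, COMMIT_WITHIN_MS);
            }
        }

        public void close() {
            server.blockUntilFinished(); // wait for the internal queue to drain
            server.shutdown();
        }
    }

With commitWithin, Solr decides when to open a new searcher, so documents from many write requests end up sharing a single commit instead of each write triggering its own.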

More tips on how to optimize your indexing performance if you also have this kind of write issue:
