Logging
"SolrDispatchFilter.init() done" OR "Shutting down CoreContainer"
select level=ERROR
"Exclusivity check failed for": the overseer keeps trying to run the next items in the work queue, but they are blocked because they are for the same collection.
Relevant code: DistributedQueue.orderedChildren lists the queued items, and ZkNodeProps props = ZkNodeProps.load(eventData); decodes each entry.
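To see what is actually sitting in the queue, the entries can be inspected straight from ZooKeeper. A minimal sketch using the stock zkCli.sh, assuming the default collection work-queue path /overseer/collection-queue-work and a ZooKeeper at zk1:2181 (the qn- node name below is only an example):

# list the pending work items (what DistributedQueue.orderedChildren iterates over)
zkCli.sh -server zk1:2181 ls /overseer/collection-queue-work

# dump one entry; the payload is the JSON that ZkNodeProps.load(eventData) parses,
# including the collection the operation is waiting on
zkCli.sh -server zk1:2181 get /overseer/collection-queue-work/qn-0000000123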
https://java2practice.com/2013/02/19/how-to-solve-org-apache-lucene-index-corruptindexexception/
https://stackoverflow.com/questions/11599143/identify-slow-solr-queries
INFO: [core0] webapp=/solr path=/select/ params={indent=on&start=0&q=*:*&version=2.2&rows=10} hits=1074 status=0 QTime=1
You need to look at QTime; in splunk, search for slow queries with:
QTime>1000
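Without splunk, the same filter can be run directly against solr.log. A rough sketch, assuming the default request-log format shown above, where QTime= is the last key on the line:

# print request log lines whose QTime (in ms) is over 1000
awk -F'QTime=' 'NF > 1 && $2 + 0 > 1000' solr.log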
"SolrDispatchFilter.init() done" OR "Shutting down CoreContainer"
select level=ERROR
The overseer keeps trying to run the next items in the work queue, but they are blocked by the fact that they are for the same collection
DistributedQueue.orderedChildren
ZkNodeProps props = ZkNodeProps.load(eventData);
To repair the corrupt index with Lucene's CheckIndex (see the links above and below), run:

java -cp lucene-core-3.1.0.jar -ea:org.apache.lucene... org.apache.lucene.index.CheckIndex "your indexed directory path" -fix

in my case it is

java -cp lucene-core-3.1.0.jar -ea:org.apache.lucene... org.apache.lucene.index.CheckIndex "C:\Program Files\gisgraphy-3.0-beta2\solr\data\index" -fix
https://solr.pl/en/2011/01/17/checkindex-for-the-rescue/
java.lang.OutOfMemoryError: Unable to create new native thread
It means the JVM can't get a new native thread from the OS. There is a per-user limit on threads/processes, often around 53k, which can be checked with (look at the "max user processes" line):
ulimit -a
In general, any process running under the same user will hit this error once the limit is reached, so you have to find out which process(es) are consuming an unusually large number of threads. You can check the number of threads (lightweight processes, NLWP) for a specific pid with
ps -o nlwp [pid]
for i in `seq 8983 8988`; do echo ""; echo Port $i; ps -o nlwp `ps aux | grep solr | grep $i | awk '{print $2}' | sort | tail -n 1`; done
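If it is not obvious which process is eating the threads, ranking every process owned by the user by thread count is quicker. A sketch assuming a Linux procps ps (nlwp column):

# thread count, pid and command for the current user's processes, busiest first
ps -u "$USER" -o nlwp=,pid=,comm= | sort -rn | head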