The ElasticSearch immense term error

Reference:
http://rockybean.info/2015/02/09/elasticsearch-immense-term-exception

Author: rockybean · Date: February 9, 2015 · Category: Tech
While using ElasticSearch I ran into an "immense term" exception. I dug into the cause, picked up a few new things along the way, and am writing them down here.

The error message looks roughly like this:

java.lang.IllegalArgumentException: Document contains at least one immense term in field="reqParams.data" (whose UTF8 encoding is longer than the max length 32766), all of which were skipped.  Please correct the analyzer to not produce such terms.  The prefix of the first immense term is: '[123, 34, 98, 114, 111, 97, 100, 99, 97, 115, 116, 73, 100, 34, 58, 49, 52, 48, 56, 49, 57, 57, 57, 56, 56, 44, 34, 116, 121, 112]...', original message: bytes can be at most 32766 in length; got 40283
    at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:685)
    at org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:359)
    at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:318)
    at org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:239)
    at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:454)
    at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1511)
    at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1246)
    at org.elasticsearch.index.engine.internal.InternalEngine.innerCreateNoLock(InternalEngine.java:482)
    at org.elasticsearch.index.engine.internal.InternalEngine.innerCreate(InternalEngine.java:435)
    at org.elasticsearch.index.engine.internal.InternalEngine.create(InternalEngine.java:404)
    at org.elasticsearch.index.shard.service.InternalIndexShard.create(InternalIndexShard.java:403)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:449)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardUpdateOperation(TransportShardBulkAction.java:541)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:240)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:511)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:419)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.lucene.util.BytesRefHash$MaxBytesLengthExceededException: bytes can be at most 32766 in length; got 40283
    at org.apache.lucene.util.BytesRefHash.add(BytesRefHash.java:284)
    at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:151)
    at org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:659)
    ... 18 more
Roughly, it says the document contains an immense term that exceeds the maximum size Lucene can handle (32766 bytes), so the term is skipped and an exception is thrown. The message is clear enough: the term is too large, at more than 32766 bytes. A quick web search turns up plenty of articles on this, so I won't repeat them here; instead, here is the solution I found.
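To make the failure mode concrete, here is a hedged reproduction sketch (the logs index, entry type, and payload field are all made-up names; ES 1.x syntax, matching the rest of this post). A not_analyzed string field stores its whole value as a single term, so a value over 32766 UTF-8 bytes triggers the exception:

# Create an index with a not_analyzed string field (illustrative names):
curl -XPUT 'http://localhost:9200/logs' -d '
{
    "mappings": {
        "entry": {
            "properties": {
                "payload": {"type": "string", "index": "not_analyzed"}
            }
        }
    }
}'

# Index a document whose payload is ~40000 bytes; the single term it
# produces exceeds Lucene's 32766-byte limit and the request fails:
curl -XPUT 'http://localhost:9200/logs/entry/1' -d "
{
    \"payload\": \"$(head -c 40000 /dev/zero | tr '\0' 'a')\"
}"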

First, a term is the smallest unit of search, and an overly long term is rarely meaningful: who would do an exact match on a 100-character keyword? Typically a user types a query phrase, the search engine tokenizes it into a series of terms, matches those terms against the inverted index of existing documents, scores the hits, and returns results. Terms are therefore usually short; a term as long as 32766 bytes would be useless for search even if it were stored. So when an oversized term like this shows up, simply declining to index it solves the immense term problem. Fortunately, ElasticSearch already provides a solution: the ignore_above mapping setting (see the documentation for details). An example configuration:

curl -XPUT 'http://localhost:9200/twitter' -d '
{
    "mappings":{
        "tweet" : {
            "properties" : {
                "message" : {"type" : "string", "index":"not_analyzed","ignore_above":256 }
            }
        }
    }
}
'
The above creates the twitter index, in which the message field of the tweet type is not analyzed: the raw value is indexed as a single term. With "ignore_above": 256, any value longer than 256 characters is not indexed at all; note that the value is skipped entirely rather than truncated to its first 256 characters. Either way, no term longer than the limit is ever produced, so the immense term error above no longer occurs. (Also mind the units: ignore_above counts characters, while Lucene's 32766 limit is in UTF-8 bytes.)
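A quick way to sanity-check this behavior (a sketch reusing the twitter index above; the document IDs are made up):

# A short message is indexed normally and is searchable as an exact term:
curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '{"message": "hello"}'

# A message longer than 256 characters is accepted without error, but its
# value is skipped at index time, so it cannot be found by a term query:
curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d "
{
    \"message\": \"$(head -c 300 /dev/zero | tr '\0' 'x')\"
}"

# Only doc 1 is searchable by its exact value:
curl -XGET 'http://localhost:9200/twitter/tweet/_search' -d '
{
    "query": {"term": {"message": "hello"}}
}'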

The ignore_above setting exists chiefly for not_analyzed fields, whose entire value becomes a single term; don't apply it indiscriminately.
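For comparison, an analyzed field is unlikely to hit the limit in the first place, because the analyzer splits the text into small tokens before indexing. The _analyze API makes this easy to see (standard analyzer shown; the sample text is arbitrary):

# Each whitespace-/punctuation-delimited token becomes its own small term:
curl -XGET 'http://localhost:9200/_analyze?analyzer=standard' -d 'Document contains at least one immense term'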

References

UTF8 encoding is longer than the max length 32766

Questions about elasticsearch mapping

Reposted from wangqiaowqo.iteye.com/blog/2306079