Elasticsearch bulk method timeout problem

While bulk-inserting data, a timeout exception occurred: the Python client's default request timeout is 10s, and the bulk insert took longer than that.

The bulk insert call:

    es_conn.bulk(data_list)
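
For context, here is a minimal sketch of the setup, assuming the official elasticsearch-py client; the host is taken from the traceback below, and the index name, doc_type, and documents are made up:

    from elasticsearch import Elasticsearch

    # The client's default per-request read timeout is 10 seconds.
    es_conn = Elasticsearch(['172.18.11.241:9200'])

    # bulk() takes alternating action/source entries; the client serializes
    # a list of dicts into the newline-delimited JSON the _bulk API expects.
    data_list = [
        {'index': {'_index': 'my_index', '_type': 'my_type', '_id': 1}},
        {'field1': 'value1'},
        {'index': {'_index': 'my_index', '_type': 'my_type', '_id': 2}},
        {'field1': 'value2'},
    ]

    es_conn.bulk(data_list)  # times out once the payload gets large enough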

The exception:

elasticsearch.exceptions.ConnectionTimeout: ConnectionTimeout caused by - ReadTimeoutError(HTTPConnectionPool(host='172.18.11.241', port='9200'): Read timed out. (read timeout=10))

Looking at the bulk method, I noticed it has a timeout parameter:

    @query_params('_source', '_source_exclude', '_source_include', 'fields',
                  'pipeline', 'refresh', 'routing', 'timeout', 'wait_for_active_shards')
    def bulk(self, body, index=None, doc_type=None, params=None):
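
For reference, every name whitelisted by @query_params is serialized into the request's query string and interpreted by the Elasticsearch server, not by the Python client. A sketch, reusing es_conn and data_list from the setup above (refresh is just one example of such a parameter):

    # Whitelisted kwargs travel in the URL: this call is sent roughly as
    #   POST /_bulk?refresh=true
    # and the value is parsed on the server side.
    es_conn.bulk(data_list, refresh='true')
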
So I added timeout=100 to the bulk call:

    es_conn.bulk(json_list, timeout=100)

Still an exception; this time, apparently, parsing the timeout failed because the unit was missing or unrecognized:

elasticsearch.exceptions.RequestError: TransportError(400, 'parse_exception', 'failed to parse setting [timeout] with value [100] as a time value: unit is missing or unrecognized')
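
The parse error comes from the Elasticsearch server: because timeout is one of the @query_params names, the client forwards it verbatim as ?timeout=100, and Elasticsearch requires time values to carry a unit. Something like '100s' would parse, but note that this parameter governs how long the server waits internally (for example on shard availability), so it would not have lifted the client's 10-second read timeout anyway; a sketch:

    # '100s' is a valid Elasticsearch time value, but it only raises the
    # server-side wait; the Python client would still drop the connection
    # at its own 10s read timeout.
    es_conn.bulk(json_list, timeout='100s')
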
After some Googling, changing timeout to request_timeout solved it:

    es_conn.bulk(json_list, request_timeout=100)
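
Unlike timeout, request_timeout is consumed by the Python client itself as the per-request HTTP read timeout in seconds, which is why it works here. If every request needs more headroom, the timeout can also be raised once when the client is created, or the payload can be split into smaller requests with the bulk helper; a sketch, with an arbitrarily chosen chunk size:

    from elasticsearch import Elasticsearch, helpers

    # Raise the default read timeout for every request made through this client.
    es_conn = Elasticsearch(['172.18.11.241:9200'], timeout=100)

    # Or keep each request small enough to finish in time: helpers.bulk sends
    # the actions in chunks and passes extra kwargs such as request_timeout
    # through to each underlying bulk call. Note it expects one dict per
    # document, not the raw action/source pairs that es_conn.bulk() takes.
    actions = [
        {'_index': 'my_index', '_type': 'my_type', '_id': 1, 'field1': 'value1'},
        {'_index': 'my_index', '_type': 'my_type', '_id': 2, 'field1': 'value2'},
    ]
    helpers.bulk(es_conn, actions, chunk_size=500, request_timeout=100)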

Reference: https://stackoverflow.com/questions/28287261/connection-timeout-with-elasticsearch

ES version: 6.0

Reposted from blog.csdn.net/lom9357bye/article/details/79156019