ES Part 10: Elasticsearch for Apache Hadoop

Elasticsearch for Apache Hadoop (elasticsearch-hadoop.jar) lets Hadoop jobs (MapReduce, Hive, Pig, Cascading, Spark) interact with Elasticsearch.

At the core, elasticsearch-hadoop integrates two distributed systems: Hadoop, a distributed computing platform, and Elasticsearch, a real-time search and analytics engine. From a high-level view, both provide a computational component: Hadoop through Map/Reduce or recent libraries like Apache Spark on one hand, and Elasticsearch through its search and aggregations on the other. elasticsearch-hadoop's goal is to connect these two entities so that they can transparently benefit from each other.

Map/Reduce and Shards

A critical component for scalability is parallelism: splitting a task into multiple, smaller ones that execute at the same time on different nodes in the cluster. The concept is present in both Hadoop, through its splits (the number of parts into which a source or input can be divided), and Elasticsearch, through shards (the number of parts into which an index is divided). In short, roughly speaking, more input splits means more tasks that can read different parts of the source at the same time, and more shards means more buckets from which to read an index's content (at the same time). As such, elasticsearch-hadoop uses splits and shards as the main drivers behind the number of tasks executed within the Hadoop and Elasticsearch clusters, since they have a direct impact on parallelism. Both Hadoop splits and Elasticsearch shards play an important role in a system's behavior; we recommend familiarizing yourself with the two concepts to get a better understanding of your system's runtime semantics.

Apache Spark and Shards

While Apache Spark is not built on top of Map/Reduce, it shares similar concepts: it features the notion of a partition, which is the rough equivalent of an Elasticsearch shard or a Map/Reduce split. Thus, the analogy above applies here as well: more shards and/or more partitions increase the degree of parallelism and thus allow both systems to scale better. Due to the similarity in concepts, throughout the docs one can think interchangeably of the Hadoop InputSplit and the Spark Partition.

Reading from Elasticsearch

Shards play a critical role when reading information from Elasticsearch. Since Elasticsearch acts as the source, elasticsearch-hadoop creates one Hadoop InputSplit (or, in the case of Apache Spark, one Partition) per Elasticsearch shard. That is, given a query against index I, elasticsearch-hadoop dynamically discovers the number of shards backing I and then creates, for each shard, an input split in Hadoop (which determines the maximum number of Hadoop tasks to be executed) or a partition in Spark (which determines the RDD's maximum parallelism).

With the default settings, Elasticsearch uses 5 primary shards per index, which results in the same number of tasks on the Hadoop side for each query.
elasticsearch-hadoop does not query the same shards twice; it iterates through all of them (primaries and replicas) using a round-robin approach. To avoid data duplication, only one shard is used from each shard group (a primary and its replicas).

A common concern (read optimization) for improving performance is to increase the number of shards and thus increase the number of tasks on the Hadoop side. Unless such gains are demonstrated through benchmarks, we recommend against this, since in most cases an Elasticsearch shard can easily handle the data streaming to a Hadoop or Spark task.
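To make the shard-to-partition mapping concrete, here is a minimal Spark read sketch. It assumes the elasticsearch-spark connector is on the classpath and a local single-node cluster; the artists/_doc index is the running example used in the settings section below:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._ // adds esRDD/saveToEs extension methods

object EsReadDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("es-read-demo")
      .setMaster("local[*]")
      .set("es.nodes", "localhost")
      .set("es.port", "9200")
    val sc = new SparkContext(conf)

    // The connector creates one Spark partition per shard backing the index.
    val rdd = sc.esRDD("artists/_doc")
    println(s"partitions = ${rdd.getNumPartitions}") // equals the shard count

    rdd.take(5).foreach { case (id, doc) => println(s"$id -> $doc") }
    sc.stop()
  }
}
```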

Writing to Elasticsearch

Writing to Elasticsearch is driven by the number of Hadoop input splits (or tasks) or Spark partitions available. elasticsearch-hadoop detects the number of (primary) shards where the write will occur and distributes the writes between these. The more splits/partitions available, the more mappers/reducers can write data in parallel to Elasticsearch.

Whenever possible, elasticsearch-hadoop shares the Elasticsearch cluster information with Hadoop and Spark to facilitate data co-location. In practice, this means that whenever data is read from Elasticsearch, the source nodes' IPs are passed on to Hadoop and Spark to optimize task execution. If co-location is desired and possible, hosting the Elasticsearch, Hadoop, and Spark clusters within the same racks will provide significant network savings.
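As a rough illustration of the write path, here is a minimal sketch under the same assumptions as above (elasticsearch-spark on the classpath, a local cluster, the hypothetical artists/_doc index):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._

object EsWriteDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("es-write-demo")
      .setMaster("local[*]")
      .set("es.nodes", "localhost")
      .set("es.port", "9200")
    val sc = new SparkContext(conf)

    // Four partitions -> up to four tasks writing in parallel, spread
    // across the target index's primary shards.
    val docs = sc.parallelize(Seq(
      Map("id" -> 1, "name" -> "zhangsan"),
      Map("id" -> 2, "name" -> "lisi")
    ), 4)

    docs.saveToEs("artists/_doc")
    sc.stop()
  }
}
```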

Common settings:

Required settings:

es.resource.read

The index to read data from. The value format is <index>/<type>, e.g. artists/_doc.

Multiple indices are supported: artists,bank/_doc means reading from the _doc type of both the artists and bank indices; artists,bank/ means reading from the artists and bank indices regardless of type; _all/_doc means reading from the _doc type of all indices.
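As these value formats might look in a spark-shell session (a hypothetical fragment, not a complete job; the alternative formats are left commented out):

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("es.nodes", "localhost")
  .set("es.port", "9200")
  .set("es.resource.read", "artists,bank/_doc") // _doc type of both indices
// .set("es.resource.read", "artists,bank/")    // any type in artists and bank
// .set("es.resource.read", "_all/_doc")        // _doc type of every index
```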

es.resource.write

The index to write data to. The value format is <index>/<type>, e.g. artists/_doc.

Multiple indices are not supported, but dynamic indices are: the index name is derived from one or more fields of the document. For example, if the document fields are id, name, password, age, created_date, and updated_date, then es.resource.write can be set to {name}/_doc, and formatting is even supported, e.g. {updated_date|yyyy-MM-dd}/_doc. There appears to be a bug here, though. In testing, es.index.auto.create must be true, otherwise the job fails with "Target index [{name}/_doc] does not exist and auto-creation is disabled [setting 'es.index.auto.create' is 'false']", even when the corresponding index exists. In real production, however, indices are never auto-created; they are always created deliberately via handover scripts.
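A dynamic-index write might look like the following sketch, assuming an existing SparkContext sc and es.index.auto.create left at its default of true, per the caveat above:

```scala
import org.elasticsearch.spark._

// Hypothetical documents; the index name is taken from each document's fields.
val users = sc.parallelize(Seq(
  Map("id" -> 1, "name" -> "zhangsan", "updated_date" -> "2019-11-15"),
  Map("id" -> 2, "name" -> "lisi",     "updated_date" -> "2019-11-16")
))

users.saveToEs("{name}/_doc")                       // one index per name value
// users.saveToEs("{updated_date|yyyy-MM-dd}/_doc") // date-formatted variant
```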

es.resource

The index to both read from and write to. The value format is <index>/<type>, e.g. artists/_doc. When the read index and the write index are the same, this single setting simplifies the configuration.

es.nodes

The Elasticsearch cluster address. Defaults to localhost.

es.port

The Elasticsearch cluster port. Defaults to 9200.
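Taken together, the three settings above might be wired up like this (a hypothetical fragment assuming a three-node cluster named node1..node3):

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("es.resource", "artists/_doc")   // same index for reads and writes
  .set("es.nodes", "node1,node2,node3") // defaults to localhost
  .set("es.port", "9200")               // defaults to 9200
```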

es.write.operation

The Elasticsearch operation performed when inserting documents. The value can be index, create, update, or upsert; the default is index.

index: by document id, insert the document if it does not exist, replace it if it does. This is Elasticsearch's native index operation.

create: by document id, insert the document if it does not exist, otherwise throw an exception.

update: by document id, update the document if it exists, otherwise throw an exception.

Example of the update effect:

Suppose the original document is {"id" : 1, "name" : "zhangsan", "password" : "abc123"} and the new document is {"id" : 1, "name" : "lisi", "age" : 20};

then the updated document is {"id" : 1, "name" : "lisi", "password" : "abc123", "age" : 20}.

upsert: by document id, insert the document if it does not exist, otherwise update it. The update effect is the same as above.
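An upsert reproducing the worked example above might look like this sketch, assuming an existing SparkContext sc; note that update/upsert also needs es.mapping.id (described below) so the connector knows which document to target:

```scala
import org.elasticsearch.spark._

// Fields present here (name, age) are overwritten or added; fields absent
// from the new document (password) are preserved, as in the example above.
val changes = sc.parallelize(Seq(
  Map("id" -> 1, "name" -> "lisi", "age" -> 20)
))

changes.saveToEs("artists/_doc", Map(
  "es.write.operation" -> "upsert",
  "es.mapping.id"      -> "id"
))
```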

es.input.json / es.output.json

es.input.json: true or false. When the data written to Elasticsearch is a JSON string, this controls whether the string is parsed into the index's individual fields or simply stored as an ordinary string. Defaults to false, i.e. not parsed.

Example:

When integrating Elasticsearch with Hive, suppose the Hive external table es_test maps to the Elasticsearch index test/_doc, and the test index has the fields id, name, password, age, created_date, and updated_date.

Case 1: es_test has the columns id, name, password, age, created_date, and updated_date

In this case, es.input.json must not be set to true in the CREATE TABLE statement, only false; otherwise inserts fail with "org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: When using JSON input, only one field is expected".

Case 2: es_test has only a single column, data

In this case, with es.input.json set to false, inserting a JSON string throws a NullPointerException. With it set to true, the value of each field in the JSON string is parsed into the corresponding index field, just as if the data had been inserted normally.

es.output.json: true or false, defaults to false. When true, data read from Elasticsearch through elasticsearch-hadoop.jar is returned directly as JSON strings.
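In Spark, the connector exposes both behaviors through dedicated methods; a minimal sketch, assuming an existing SparkContext sc and the test/_doc index from the Hive example:

```scala
import org.elasticsearch.spark._

// saveJsonToEs implies es.input.json=true: each string is shipped as-is
// and parsed by Elasticsearch into the index's fields.
val json = sc.makeRDD(Seq(
  """{"id": 1, "name": "zhangsan", "password": "abc123"}"""
))
json.saveJsonToEs("test/_doc")

// esJsonRDD is the read-side counterpart of es.output.json=true:
// documents come back as raw JSON strings rather than Maps.
sc.esJsonRDD("test/_doc").take(1).foreach {
  case (id, doc) => println(s"$id -> $doc")
}
```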

Best practice: use es.input.json and es.output.json with caution. Mapping Hive columns to Elasticsearch fields one-to-one is much cleaner and spares you all sorts of headaches.

es.mapping.id

When writing data to Elasticsearch, this specifies which field of the data supplies the document id. If it is not specified, Elasticsearch generates document ids automatically, so every insert adds a new document and updates become impossible. In production, therefore, es.mapping.id must be configured; its value should be a field whose values are unique, such as a primary key id, article id, product id, customer id, or order id.
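A minimal sketch, assuming an existing SparkContext sc:

```scala
import org.elasticsearch.spark._

// Pin _id to the data's primary-key field so re-running the job overwrites
// existing documents instead of piling up auto-id duplicates.
sc.parallelize(Seq(Map("id" -> 1, "name" -> "zhangsan")))
  .saveToEs("artists/_doc", Map("es.mapping.id" -> "id"))
```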

es.mapping.date.rich

Whether to create a rich Date-like object for Date fields in Elasticsearch or return them as primitives (String or long). By default this is true. The actual object type depends on the library used; a notable exception is Map/Reduce, which provides no built-in Date object, so LongWritable and Text are returned regardless of this setting.

