Elasticsearch 1.7 Cluster Configuration

I. Introduction
 
Both Elasticsearch and Solr are search engines built on Lucene. Elasticsearch, however, is distributed by design, whereas Solr only became distributed with SolrCloud, introduced in version 4.0, and Solr's distributed mode depends on ZooKeeper.
 
A detailed comparison of Elasticsearch and Solr is available at: http://solr-vs-elasticsearch.com/
 
II. Basic Usage
 
An Elasticsearch cluster can contain multiple indices; each index can contain multiple types; each type holds multiple documents; and each document consists of multiple fields. This document-oriented storage model can be considered a form of NoSQL.
 
Compared with a traditional relational database, the ES concepts map roughly as follows:
 
Relational DB -> Databases -> Tables -> Rows -> Columns
Elasticsearch -> Indices   -> Types  -> Documents -> Fields
Basic usage, from creating a Client through adding, deleting, and querying data:
 
1. Creating a Client
 
// Create a TransportClient and connect it to one node of the cluster (ES 1.x API)
public ElasticSearchService(String ipAddress, int port) {
    client = new TransportClient()
            .addTransportAddress(new InetSocketTransportAddress(ipAddress, port));
}
This is a TransportClient.
 
A comparison of the two client types in ES:
 
TransportClient: a lightweight client that connects to the ES cluster over sockets using a Netty thread pool. It does not join the cluster itself; it only forwards requests to it.
 
Node Client: the client node is itself an ES node and joins the cluster just like any other Elasticsearch node. Frequently starting and stopping such node clients creates unnecessary "noise" in the cluster.
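For reference, a minimal sketch of creating each kind of client with the ES 1.x Java API (the cluster name "elasticsearch" and the address 127.0.0.1:9300 are placeholders):

// TransportClient: point it at the cluster name and one or more node addresses
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", "elasticsearch").build();
Client transportClient = new TransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress("127.0.0.1", 9300));

// Node Client: start an embedded client-only node that joins the cluster
Node node = NodeBuilder.nodeBuilder()
        .clusterName("elasticsearch")
        .client(true)   // holds no data and is not master-eligible
        .node();
Client nodeClient = node.client();
// ... use the clients, then release them ...
transportClient.close();
node.close();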
 
2. Creating/Deleting Indices and Type Mappings
 
// Create the index
public void createIndex() {
    client.admin().indices().create(new CreateIndexRequest(IndexName))
            .actionGet();
}

// Delete the index (if it exists)
public void deleteIndex() {
    IndicesExistsResponse indicesExistsResponse = client.admin().indices()
            .exists(new IndicesExistsRequest(new String[] { IndexName }))
            .actionGet();
    if (indicesExistsResponse.isExists()) {
        client.admin().indices().delete(new DeleteIndexRequest(IndexName))
                .actionGet();
    }
}

// Delete a Type under the index
public void deleteType() {
    client.prepareDelete().setIndex(IndexName).setType(TypeName).execute().actionGet();
}

// Define the mapping for the Type
public void defineIndexTypeMapping() {
    try {
        XContentBuilder mapBuilder = XContentFactory.jsonBuilder();
        mapBuilder.startObject()
        .startObject(TypeName)
            .startObject("properties")
                .startObject(IDFieldName).field("type", "long").field("store", "yes").endObject()
                .startObject(SeqNumFieldName).field("type", "long").field("store", "yes").endObject()
                .startObject(IMSIFieldName).field("type", "string").field("index", "not_analyzed").field("store", "yes").endObject()
                .startObject(IMEIFieldName).field("type", "string").field("index", "not_analyzed").field("store", "yes").endObject()
                .startObject(DeviceIDFieldName).field("type", "string").field("index", "not_analyzed").field("store", "yes").endObject()
                .startObject(OwnAreaFieldName).field("type", "string").field("index", "not_analyzed").field("store", "yes").endObject()
                .startObject(TeleOperFieldName).field("type", "string").field("index", "not_analyzed").field("store", "yes").endObject()
                .startObject(TimeFieldName).field("type", "date").field("store", "yes").endObject()
            .endObject()
        .endObject()
        .endObject();

        PutMappingRequest putMappingRequest = Requests
                .putMappingRequest(IndexName).type(TypeName)
                .source(mapBuilder);
        client.admin().indices().putMapping(putMappingRequest).actionGet();
    } catch (IOException e) {
        log.error(e.toString());
    }
}

 

This defines a custom mapping for one Type. By default, ES maps data types automatically: integers map to long, floating-point numbers to double, strings to string, timestamps to date, and true/false to boolean.
 
Note: by default ES treats strings as "analyzed", i.e. the value is tokenized, stop words are removed, and so on before it is indexed. For example, an analyzed device ID like "AB-1234" would be split into separate tokens and could no longer be matched as a whole. If you need a string to be indexed as a single, exact value, configure the field with field("index", "not_analyzed").
 
3. Indexing Data
// Bulk-index a list of documents
public void indexHotSpotDataList(List<Hotspotdata> dataList) {
    if (dataList != null) {
        int size = dataList.size();
        if (size > 0) {
            BulkRequestBuilder bulkRequest = client.prepareBulk();
            for (int i = 0; i < size; ++i) {
                Hotspotdata data = dataList.get(i);
                String jsonSource = getIndexDataFromHotspotData(data);
                if (jsonSource != null) {
                    bulkRequest.add(client
                            .prepareIndex(IndexName, TypeName,
                                    data.getId().toString())
                            .setRefresh(true).setSource(jsonSource));
                }
            }

            BulkResponse bulkResponse = bulkRequest.execute().actionGet();
            if (bulkResponse.hasFailures()) {
                Iterator<BulkItemResponse> iter = bulkResponse.iterator();
                while (iter.hasNext()) {
                    BulkItemResponse itemResponse = iter.next();
                    if (itemResponse.isFailed()) {
                        log.error(itemResponse.getFailureMessage());
                    }
                }
            }
        }
    }
}

// Index a single document
public boolean indexHotspotData(Hotspotdata data) {
    String jsonSource = getIndexDataFromHotspotData(data);
    if (jsonSource != null) {
        IndexRequestBuilder requestBuilder = client.prepareIndex(IndexName,
                TypeName).setRefresh(true);
        requestBuilder.setSource(jsonSource)
                .execute().actionGet();
        return true;
    }

    return false;
}

// Build the JSON source string for a document
public String getIndexDataFromHotspotData(Hotspotdata data) {
    String jsonString = null;
    if (data != null) {
        try {
            XContentBuilder jsonBuilder = XContentFactory.jsonBuilder();
            jsonBuilder.startObject().field(IDFieldName, data.getId())
                    .field(SeqNumFieldName, data.getSeqNum())
                    .field(IMSIFieldName, data.getImsi())
                    .field(IMEIFieldName, data.getImei())
                    .field(DeviceIDFieldName, data.getDeviceID())
                    .field(OwnAreaFieldName, data.getOwnArea())
                    .field(TeleOperFieldName, data.getTeleOper())
                    .field(TimeFieldName, data.getCollectTime())
                    .endObject();
            jsonString = jsonBuilder.string();
        } catch (IOException e) {
            log.error(e.toString());
        }
    }

    return jsonString;
}

 

ES supports both bulk indexing and indexing a single document at a time.
 
4. Querying Data
// Retrieve a small result set (at most 100 hits)
private List<Integer> getSearchData(QueryBuilder queryBuilder) {
    List<Integer> ids = new ArrayList<>();
    SearchResponse searchResponse = client.prepareSearch(IndexName)
            .setTypes(TypeName).setQuery(queryBuilder).setSize(100)
            .execute().actionGet();
    SearchHits searchHits = searchResponse.getHits();
    for (SearchHit searchHit : searchHits) {
        Integer id = (Integer) searchHit.getSource().get("id");
        ids.add(id);
    }
    return ids;
}

// Retrieve a large result set with scan/scroll
private List<Integer> getSearchDataByScrolls(QueryBuilder queryBuilder) {
    List<Integer> ids = new ArrayList<>();
    // request up to 100000 hits per scroll batch (in scan mode, this size applies per shard)
    SearchResponse scrollResp = client.prepareSearch(IndexName)
            .setSearchType(SearchType.SCAN).setScroll(new TimeValue(60000))
            .setQuery(queryBuilder).setSize(100000).execute().actionGet();
    while (true) {
        for (SearchHit searchHit : scrollResp.getHits().getHits()) {
            Integer id = (Integer) searchHit.getSource().get(IDFieldName);
            ids.add(id);
        }
        scrollResp = client.prepareSearchScroll(scrollResp.getScrollId())
                .setScroll(new TimeValue(600000)).execute().actionGet();
        if (scrollResp.getHits().getHits().length == 0) {
            break;
        }
    }

    return ids;
}

 

Here QueryBuilder represents the query condition. ES supports paginated queries, and it can also pull back very large result sets in one pass, which requires scroll search (as in getSearchDataByScrolls above).
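For paginated access, a minimal sketch with the ES 1.x API (the sort field, page number, and page size are only illustrative):

// Fetch one page of results: page 3, 20 hits per page, newest first
SearchResponse pageResp = client.prepareSearch(IndexName)
        .setTypes(TypeName)
        .setQuery(QueryBuilders.matchAllQuery())
        .setFrom(2 * 20)            // offset of the first hit on the page
        .setSize(20)                // page size
        .addSort(TimeFieldName, SortOrder.DESC)
        .execute().actionGet();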
 
5. Aggregation (Facet) Queries
// For the given device list and time range, get the data distribution per device: <DeviceID, count>
public Map<String, String> getDeviceDistributedInfo(String startTime,
        String endTime, List<String> deviceList) {

    Map<String, String> resultsMap = new HashMap<>();

    QueryBuilder deviceQueryBuilder = getDeviceQueryBuilder(deviceList);
    QueryBuilder rangeBuilder = getDateRangeQueryBuilder(startTime, endTime);
    QueryBuilder queryBuilder = QueryBuilders.boolQuery()
            .must(deviceQueryBuilder).must(rangeBuilder);

    TermsBuilder termsBuilder = AggregationBuilders.terms("DeviceIDAgg").size(Integer.MAX_VALUE)
            .field(DeviceIDFieldName);
    SearchResponse searchResponse = client.prepareSearch(IndexName)
            .setQuery(queryBuilder).addAggregation(termsBuilder)
            .execute().actionGet();
    Terms terms = searchResponse.getAggregations().get("DeviceIDAgg");
    if (terms != null) {
        for (Terms.Bucket entry : terms.getBuckets()) {
            resultsMap.put(entry.getKey(),
                    String.valueOf(entry.getDocCount()));
        }
    }
    return resultsMap;
}
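The two helper methods used above are not shown in the original post; a plausible sketch of how they could be implemented with the ES 1.x query DSL (the method names are hypothetical, the field names are the ones assumed earlier):

// Hypothetical helper: match documents whose DeviceID is one of the given values
private QueryBuilder getDeviceQueryBuilder(List<String> deviceList) {
    return QueryBuilders.termsQuery(DeviceIDFieldName,
            deviceList.toArray(new String[deviceList.size()]));
}

// Hypothetical helper: match documents whose collect time lies in [startTime, endTime]
private QueryBuilder getDateRangeQueryBuilder(String startTime, String endTime) {
    return QueryBuilders.rangeQuery(TimeFieldName).from(startTime).to(endTime);
}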

 

Aggregation queries cover statistics-style use cases: for example, the distribution of data over a given month, or the maximum, minimum, sum, and average of some set of values.
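For instance, max/min/sum/avg can all be computed in one request with a stats aggregation; a minimal sketch (the aggregation name "SeqNumStats" is arbitrary, the field is the sequence-number field assumed above):

// Compute max, min, sum and average of the sequence-number field in one query
SearchResponse statsResp = client.prepareSearch(IndexName)
        .setQuery(QueryBuilders.matchAllQuery())
        .addAggregation(AggregationBuilders.stats("SeqNumStats").field(SeqNumFieldName))
        .execute().actionGet();
Stats stats = statsResp.getAggregations().get("SeqNumStats");
log.info("max=" + stats.getMax() + " min=" + stats.getMin()
        + " sum=" + stats.getSum() + " avg=" + stats.getAvg());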
 
III. Cluster Configuration
 
Configuration file: elasticsearch.yml
 
Cluster name and node name:
 
#cluster.name: elasticsearch
 
#node.name: "Franz Kafka"
 
Whether the node may be elected master, and whether it stores data:
 
#node.master: true
 
#node.data: true
 
Number of shards and replicas:
 
#index.number_of_shards: 5
#index.number_of_replicas: 1
 
Minimum number of master-eligible nodes required for a master election. This must be set to half the number of master-eligible nodes plus one, i.e. N/2 + 1:
 
#discovery.zen.minimum_master_nodes: 1
 
Discovery ping timeout; set it higher on congested or otherwise unreliable networks:
 
#discovery.zen.ping.timeout: 3s
 
Note: as in other distributed systems, the total number of nodes N in the cluster should be odd!
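Putting the settings above together, a minimal sketch of elasticsearch.yml for one node of a three-node cluster (the cluster name, node name, and IP addresses are placeholders; with three master-eligible nodes, minimum_master_nodes is 3/2 + 1 = 2):

cluster.name: es-cluster
node.name: "node-1"
node.master: true
node.data: true
index.number_of_shards: 5
index.number_of_replicas: 1
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.timeout: 10s
# with unicast discovery, list the cluster nodes explicitly
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.1.101", "192.168.1.102", "192.168.1.103"]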
 
IV. Elasticsearch Plugins
 
1. elasticsearch-head is a cluster management tool for Elasticsearch. Install it with: ./elasticsearch-1.7.1/bin/plugin -install mobz/elasticsearch-head
 
2. elasticsearch-sql lets you query Elasticsearch with SQL syntax. Install it with: ./bin/plugin -u https://github.com/NLPchina/elasticsearch-sql/releases/download/1.3.5/elasticsearch-sql-1.3.5.zip --install sql
 
GitHub: https://github.com/NLPchina/elasticsearch-sql
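Once installed, the plugin exposes a _sql endpoint; a usage sketch (assuming ES listens on localhost:9200 and using the index name from the examples above):

curl 'http://localhost:9200/_sql?sql=select * from IndexName limit 10'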
 
3. elasticsearch-bigdesk is a cluster monitoring tool for Elasticsearch that shows the live state of the ES cluster.
 
Install: ./bin/plugin -install lukas-vlcek/bigdesk
 
Access: http://192.103.101.203:9200/_plugin/bigdesk/
 
4. The elasticsearch-servicewrapper plugin runs Elasticsearch as a system service.
 
Download the plugin from https://github.com/elasticsearch/elasticsearch-servicewrapper, unpack it, and copy its service directory into the bin directory of your Elasticsearch installation.
 
Elasticsearch can then be installed as a service, started, and stopped with:
 
sh elasticsearch install
 
sh elasticsearch start
 
sh elasticsearch stop

Reposted from hugoren.iteye.com/blog/2265664