ElasticSearch, Part 2 (Basic Usage)

Operating ElasticSearch from a client

In actual development, there are three ways to access the elasticsearch service as a client:
first, the elasticsearch-head plugin;
second, direct access through the Restful interface that elasticsearch provides;
third, access through the API that elasticsearch provides.

Installing the Postman tool

Postman is a powerful web-debugging tool; its Windows client provides full-featured Web API & HTTP request debugging. The software is powerful, with a clean, intuitive interface that is quick and convenient to use, and a very user-friendly design. Postman can send any type of HTTP request (GET, HEAD, POST, PUT, ...) and attach any number of parameters.

Downloading the Postman tool
Postman's official website: https://www.getpostman.com
Registering Postman
You can also skip registration and use it directly.

Accessing the Restful interface with Postman

ElasticSearch interface syntax

curl -X<VERB> '<PROTOCOL>://<HOST>:<PORT>/<PATH>?<QUERY_STRING>' -d '<BODY>'

Where:
VERB: the appropriate HTTP method, such as GET, POST, PUT, HEAD, or DELETE
PROTOCOL: http or https
HOST: the hostname of any node in the Elasticsearch cluster, or localhost for a node on the local machine
PORT: the port of the Elasticsearch HTTP service, 9200 by default
PATH: the API endpoint, which may consist of multiple components, such as _cluster/stats
QUERY_STRING: optional query-string parameters, e.g. ?pretty to pretty-print the JSON response
BODY: a JSON-encoded request body, if the request needs one
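
For example, a minimal command-line sketch (assuming a single node listening on localhost:9200) that counts every document in the cluster:

curl -X GET 'http://localhost:9200/_count?pretty' -H 'Content-Type: application/json' -d '
{
    "query": {
        "match_all": {}
    }
}'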
Creating an index and its mapping
request url: PUT localhost:9200/blog1
request body:

{
    "mappings": {
        "article": {
            "properties": {
                "id": {
                    "type": "long",
                    "store": true,
                    "index": "not_analyzed"
                },
                "title": {
                    "type": "text",
                    "store": true,
                    "index": "analyzed",
                    "analyzer": "standard"
                },
                "content": {
                    "type": "text",
                    "store": true,
                    "index": "analyzed",
                    "analyzer": "standard"
                }
            }
        }
    }
}
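
The same request can also be issued with curl; a minimal sketch, assuming Elasticsearch 5.6.8 on localhost:9200 and abbreviated here to the title field only:

curl -X PUT 'http://localhost:9200/blog1?pretty' -H 'Content-Type: application/json' -d '
{
    "mappings": {
        "article": {
            "properties": {
                "title": { "type": "text", "store": true, "analyzer": "standard" }
            }
        }
    }
}'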

Postman screenshot:
If the request body is rejected, make sure the body format is set to raw JSON.
elasticsearch-head view:
Setting the mapping after creating the index
We can set the mapping information when creating an index, but we can also create the index first and set the mapping afterwards. If no mapping information is provided in the first step, create the index with a plain PUT request, then set the mapping, as sketched after the request body below.
Request url: POST http://127.0.0.1:9200/blog2/hello/_mapping
request body:

{
    "hello": {
        "properties": {
            "id": {
                "type": "long",
                "store": true
            },
            "title": {
                "type": "text",
                "store": true,
                "index": true,
                "analyzer": "standard"
            },
            "content": {
                "type": "text",
                "store": true,
                "index": true,
                "analyzer": "standard"
            }
        }
    }
}
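
Putting the two steps together on the command line, a sketch (assuming ES 5.6.8 on localhost:9200, with the mapping abbreviated to the title field):

curl -X PUT 'http://localhost:9200/blog2?pretty'
curl -X POST 'http://localhost:9200/blog2/hello/_mapping?pretty' -H 'Content-Type: application/json' -d '
{
    "hello": {
        "properties": {
            "title": { "type": "text", "store": true, "index": true, "analyzer": "standard" }
        }
    }
}'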

Postman screenshot:
Deleting an index
request url: DELETE localhost:9200/blog1
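
The command-line equivalent is a single request (a sketch; ?pretty is optional):

curl -X DELETE 'http://localhost:9200/blog1?pretty'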
Postman screenshot:
elasticsearch-head view:
Creating a document
request url: POST localhost:9200/blog1/article/1
request body:

{
    "id": 1,
    "title": "ElasticSearch是一个基于Lucene的搜索服务器",
    "content": "它提供了一个分布式多用户能力的全文搜索引擎,基于RESTful web接口。Elasticsearch是用Java开发的,并作为Apache许可条款下的开放源码发布,是当前流行的企业级搜索引擎。设计用于云计算中,能够达到实时搜索,稳定,可靠,快速,安装使用方便。"
}
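
Note that the trailing 1 in the URL becomes the document's _id in Elasticsearch; the id field inside the body is just an ordinary field. A command-line sketch of the same request, with the content value shortened here for readability:

curl -X POST 'http://localhost:9200/blog1/article/1?pretty' -H 'Content-Type: application/json' -d '
{
    "id": 1,
    "title": "ElasticSearch是一个基于Lucene的搜索服务器",
    "content": "它提供了一个分布式多用户能力的全文搜索引擎,基于RESTful web接口。"
}'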

Postman screenshot:
elasticsearch-head view:
Modifying a document
Indexing to an id that already exists replaces the stored document and increments its _version.
request url: POST localhost:9200/blog1/article/1
request body:

{
    "id": 1,
    "title": "【修改】ElasticSearch是一个基于Lucene的搜索服务器",
    "content": "【修改】它提供了一个分布式多用户能力的全文搜索引擎,基于RESTful web接口。Elasticsearch是用Java开发的,并作为Apache许可条款下的开放源码发布,是当前流行的企业级搜索引擎。设计用于云计算中,能够达到实时搜索,稳定,可靠,快速,安装使用方便。"
}

Postman screenshot:
elasticsearch-head view:
Deleting a document
request url: DELETE localhost:9200/blog1/article/1
Postman screenshot:
elasticsearch-head view:
Querying a document by id
request url: GET localhost:9200/blog1/article/1
Postman screenshot:
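
A command-line equivalent (a sketch):

curl -X GET 'http://localhost:9200/blog1/article/1?pretty'

The response wraps the stored document in _source, alongside metadata such as _index, _type, _id, and _version.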
Querying documents with a query_string query
request url: POST localhost:9200/blog1/article/_search
request body:

{
    "query": {
        "query_string": {
            "default_field": "title",
            "query": "搜索服务器"
        }
    }
}

Postman screenshot:
Note: if the search content is changed from "搜索服务器" (search server) to "钢索" (steel cable), documents are still found; the reason is explained below.

{
    "query": {
        "query_string": {
            "default_field": "title",
            "query": "钢索"
        }
    }
}

Querying documents with a term query
request url: POST localhost:9200/blog1/article/_search
request body:

{
    "query": {
        "term": {
            "title": "搜索"
        }
    }
}

Postman screenshot:

Integrating and using the IK analyzer with ElasticSearch

Analyzing the query problems above
With the query_string query, we found that searching for either "搜索服务器" (search server) or "钢索" (steel cable) returns data, while a term query for "搜索" (search) returns nothing. The cause is ElasticSearch's standard analyzer: when we created the index, the fields were configured to use the standard analyzer:

{
    "mappings": {
        "article": {
            "properties": {
                "id": {
                    "type": "long",
                    "store": true,
                    "index": "not_analyzed"
                },
                "title": {
                    "type": "text",
                    "store": true,
                    "index": "analyzed",
                    "analyzer": "standard" // the standard analyzer
                },
                "content": {
                    "type": "text",
                    "store": true,
                    "index": "analyzed",
                    "analyzer": "standard" // the standard analyzer
                }
            }
        }
    }
}

For example, the standard analyzer's segmentation result for "我是程序员" (I am a programmer):
http://127.0.0.1:9200/_analyze?analyzer=standard&pretty=true&text=我是程序员
Segmentation result:

{
  "tokens" : [
    {
      "token" : "我",
      "start_offset" : 0,
      "end_offset" : 1,
      "type" : "<IDEOGRAPHIC>",
      "position" : 0
    },
    {
      "token" : "是",
      "start_offset" : 1,
      "end_offset" : 2,
      "type" : "<IDEOGRAPHIC>",
      "position" : 1
    },
    {
      "token" : "程",
      "start_offset" : 2,
      "end_offset" : 3,
      "type" : "<IDEOGRAPHIC>",
      "position" : 2
    },
    {
      "token" : "序",
      "start_offset" : 3,
      "end_offset" : 4,
      "type" : "<IDEOGRAPHIC>",
      "position" : 3
    },
    {
      "token" : "员",
      "start_offset" : 4,
      "end_offset" : 5,
      "type" : "<IDEOGRAPHIC>",
      "position" : 4
    }
  ]
}

The segmentation we actually want is 我 (I), 是 (am), 程序 (program), 程序员 (programmer), so we need an analyzer with good Chinese support. There are many analyzers that support Chinese segmentation: the word analyzer, Paoding (庖丁解牛), Pangu, Ansj, and so on, but the one most commonly used is the IK analyzer, introduced below.

Introduction to the IK analyzer

IKAnalyzer is an open-source, lightweight Chinese word-segmentation toolkit developed in Java. Since version 1.0 was released in December 2006, IKAnalyzer has gone through three major versions. It started as a Chinese segmentation component built around the open-source Lucene project, combining dictionary-based segmentation with grammar-analysis algorithms. The newer IKAnalyzer 3.0 has evolved into a general-purpose Java segmentation component, independent of the Lucene project, while still providing a default optimized integration for Lucene.
Features of IK analyzer 3.0:
1) A distinctive "forward iterative finest-granularity segmentation algorithm" with a throughput of 600,000 characters per second.
2) A multi-subprocessor analysis mode that supports segmenting English letters (IP addresses, email, URLs), numbers (dates, common Chinese quantifiers, Roman numerals, scientific notation), and Chinese vocabulary (including names and place names).
3) Support for mixed Chinese-English text is limited and handling it is relatively cumbersome, requiring an extra query; it does, however, support optimized dictionary storage for personal entries with a smaller memory footprint.
4) Support for user-defined dictionary extensions.
5) IKQueryParser, a query analyzer optimized for Lucene full-text search; it uses an ambiguity-analysis algorithm to optimize the arrangement and combination of query keywords, greatly improving Lucene's hit rate.

Integrating the IK analyzer into ElasticSearch

Installing the IK analyzer
Download: https://github.com/medcl/elasticsearch-analysis-ik/releases
After downloading, unzip the archive, copy the extracted elasticsearch folder into elasticsearch-5.6.8\plugins, and rename the folder to analysis-ik.
Restart elasticSearch so that the IK analyzer is loaded.
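
As an alternative to copying files by hand, the bundled plugin tool can usually install the release zip directly; a sketch, assuming the v5.6.8 release asset follows the project's usual naming pattern:

bin\elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v5.6.8/elasticsearch-analysis-ik-5.6.8.zip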

Testing the IK analyzer

IK provides two segmentation algorithms, ik_smart and ik_max_word: ik_smart performs the coarsest (minimal) segmentation and ik_max_word the finest-grained segmentation. Let's try each.
Minimal segmentation: enter the following address in the browser address bar:
http://127.0.0.1:9200/_analyze?analyzer=ik_smart&pretty=true&text=我是程序员
The resulting output is:

{
  "tokens" : [
    {
      "token" : "我",
      "start_offset" : 0,
      "end_offset" : 1,
      "type" : "CN_CHAR",
      "position" : 0
    },
    {
      "token" : "是",
      "start_offset" : 1,
      "end_offset" : 2,
      "type" : "CN_CHAR",
      "position" : 1
    },
    {
      "token" : "程序员",
      "start_offset" : 2,
      "end_offset" : 5,
      "type" : "CN_WORD",
      "position" : 2
    }
  ]
}

Finest-grained segmentation: enter http://127.0.0.1:9200/_analyze?analyzer=ik_max_word&pretty=true&text=我是程序员 in the browser address bar.
The resulting output is:

{
  "tokens" : [
    {
      "token" : "我",
      "start_offset" : 0,
      "end_offset" : 1,
      "type" : "CN_CHAR",
      "position" : 0
    },
    {
      "token" : "是",
      "start_offset" : 1,
      "end_offset" : 2,
      "type" : "CN_CHAR",
      "position" : 1
    },
    {
      "token" : "程序员",
      "start_offset" : 2,
      "end_offset" : 5,
      "type" : "CN_WORD",
      "position" : 2
    },
    {
      "token" : "程序",
      "start_offset" : 2,
      "end_offset" : 4,
      "type" : "CN_WORD",
      "position" : 3
    },
    {
      "token" : "员",
      "start_offset" : 4,
      "end_offset" : 5,
      "type" : "CN_CHAR",
      "position" : 4
    }
  ]
}

Modifying the index mapping

Rebuilding the index
Delete the original blog1 index: DELETE localhost:9200/blog1
Create the blog1 index again, this time using the ik_max_word analyzer:
PUT localhost:9200/blog1

{
    "mappings": {
        "article": {
            "properties": {
                "id": {
                    "type": "long",
                    "store": true,
                    "index": "not_analyzed"
                },
                "title": {
                    "type": "text",
                    "store": true,
                    "index": "analyzed",
                    "analyzer": "ik_max_word"
                },
                "content": {
                    "type": "text",
                    "store": true,
                    "index": "analyzed",
                    "analyzer": "ik_max_word"
                }
            }
        }
    }
}

Create a document: POST localhost:9200/blog1/article/1

{
    "id": 1,
    "title": "ElasticSearch是一个基于Lucene的搜索服务器",
    "content": "它提供了一个分布式多用户能力的全文搜索引擎,基于RESTful web接口。Elasticsearch是用Java开发的,并作为Apache许可条款下的开放源码发布,是当前流行的企业级搜索引擎。设计用于云计算中,能够达到实时搜索,稳定,可靠,快速,安装使用方便。"
}

Testing the query_string query again
A query_string query first analyzes the query string into terms, then matches those terms against the analyzed field in the index.
Request url: POST localhost:9200/blog1/article/_search
request body:

{
    "query": {
        "query_string": {
            "default_field": "title",
            "query": "搜索服务器"
        }
    }
}

Postman screenshot:
Change the search string in the request body to "钢索" (steel cable) and query again:

{
    "query": {
        "query_string": {
            "default_field": "title",
            "query": "钢索"
        }
    }
}

Postman screenshot:
Retesting the term query
A term query does not analyze the search term; it matches it directly against the terms that were produced when the index field (title here) was analyzed at indexing time.
Request url: POST localhost:9200/blog1/article/_search
request body:

{
    "query": {
        "term": {
            "title": "搜索"
        }
    }
}

Postman screenshot:
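
To see why the term 搜索 now matches, you can ask the analyzer directly (a sketch):

curl 'http://127.0.0.1:9200/_analyze?analyzer=ik_max_word&pretty=true&text=搜索服务器'

Because ik_max_word emits 搜索 as one of the tokens, a term query for 搜索 now finds the document.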

ElasticSearch cluster

An ES cluster is a P2P-style distributed system (using the gossip protocol). Apart from cluster-state management, any request can be sent to any node in the cluster; that node works out which nodes the request needs to be forwarded to and communicates with them directly. From the standpoint of network architecture and service configuration, the configuration needed to build a cluster is therefore extremely simple. Before Elasticsearch 2.0, on an unobstructed network, all nodes configured with the same cluster.name automatically joined the same cluster. From version 2.0 on, for safety reasons and to avoid the trouble caused by overly casual development environments, the default discovery mechanism was changed to unicast. The configuration lists the addresses of a few nodes, which ES treats as gossip routers to carry out cluster discovery. Since this is only a very small function within ES, the gossip-router role needs no separate configuration; every ES node can take it on. So in a unicast cluster, each node is simply configured with the same small list of nodes to act as routers.
There is no limit on the number of nodes in a cluster; generally, two or more nodes can be regarded as a cluster. For high performance and high availability, clusters usually have three or more nodes.

Cluster-related concepts
Cluster

A cluster is organized from one or more nodes that together hold the entire dataset and jointly provide indexing and search capabilities. A cluster is identified by a unique name, which defaults to "elasticsearch". This name matters, because a node can only join a cluster by specifying that cluster's name.

Node

A node is a single server in the cluster; as part of the cluster, it stores data and participates in the cluster's indexing and search functions. Like a cluster, a node is identified by a name, which by default is a random Marvel comic character name assigned at startup. The name matters for administration, because it is how you determine which servers on your network correspond to which nodes in the Elasticsearch cluster.
A node can join a specific cluster by configuring the cluster name. By default, every node is set to join a cluster named "elasticsearch", which means that if you start several nodes on your network and they can discover each other, they will automatically form and join a cluster named "elasticsearch".
A cluster can have as many nodes as you want. Moreover, if no Elasticsearch node is currently running on your network, starting one will by default create and join a cluster named "elasticsearch".

Shards and replicas

An index can store a volume of data that exceeds the hardware limits of a single node. For example, an index with a billion documents occupying 1 TB of disk space may not fit on any single node's disk, or a single node may be too slow to serve search requests on its own. To solve this, Elasticsearch can divide an index into multiple pieces called shards. When you create an index, you can specify the number of shards you want. Each shard is itself a fully functional, independent "index" that can be placed on any node in the cluster. Sharding matters for two main reasons: 1) it lets you horizontally split/scale your content volume; 2) it lets you run distributed, parallel operations across shards (potentially on multiple nodes), improving performance/throughput.
How a shard is placed and how its documents are aggregated back for search requests is managed entirely by Elasticsearch and is transparent to you as a user.
In a network/cloud environment, failures can happen at any time; a shard or node may somehow go offline or disappear for any reason, so a failover mechanism is very useful and strongly recommended. For this purpose, Elasticsearch lets you create one or more copies of a shard, called replica shards, or simply replicas.
Replicas matter for two main reasons: they provide high availability in case a shard/node fails (which is why it is important that a replica shard is never placed on the same node as its original/primary shard), and they scale your search volume/throughput, since searches can run in parallel on all replicas. In summary, each index can be split into multiple shards, and an index can be replicated zero times (meaning no replicas) or more. Once replicated, each index has primary shards (the originals that were copied) and replica shards (copies of the primaries). The number of shards and replicas can be specified when the index is created. After creation, you can change the number of replicas dynamically at any time, but you cannot change the number of shards afterwards.
By default, each index in Elasticsearch gets 5 primary shards and 1 replica, which means that if your cluster has at least two nodes, the index will have 5 primary shards and another 5 replica shards (one full copy), for a total of 10 shards per index.

Building the cluster
Prepare three elasticsearch servers:
create an elasticsearch-cluster folder, copy three elasticsearch service directories into it, and delete everything inside each copy's data directory.
Modify each server's configuration
Edit each node's elasticsearch-cluster\node*\config\elasticsearch.yml configuration file.
node1:

#Configuration for node 1:
#cluster name; identical on every node in this cluster, and unique to it
cluster.name: my-elasticsearch
#node name; must be different for every node
node.name: node-1
#must be the local machine's ip address
network.host: 127.0.0.1
#http service port; must differ between nodes on the same machine
http.port: 9200
#inter-node transport port; must differ between nodes on the same machine
transport.tcp.port: 9300
#list of hosts used for automatic cluster discovery
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300","127.0.0.1:9301","127.0.0.1:9302"]

node2:

#Configuration for node 2:
#cluster name; identical on every node in this cluster, and unique to it
cluster.name: my-elasticsearch
#node name; must be different for every node
node.name: node-2
#must be the local machine's ip address
network.host: 127.0.0.1
#http service port; must differ between nodes on the same machine
http.port: 9201
#inter-node transport port; must differ between nodes on the same machine
transport.tcp.port: 9301
#list of hosts used for automatic cluster discovery
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300","127.0.0.1:9301","127.0.0.1:9302"]

node3:

#Configuration for node 3:
#cluster name; identical on every node in this cluster, and unique to it
cluster.name: my-elasticsearch
#node name; must be different for every node
node.name: node-3
#must be the local machine's ip address
network.host: 127.0.0.1
#http service port; must differ between nodes on the same machine
http.port: 9202
#inter-node transport port; must differ between nodes on the same machine
transport.tcp.port: 9302
#list of hosts used for automatic cluster discovery
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300","127.0.0.1:9301","127.0.0.1:9302"]

Start each node server
Double-click elasticsearch-cluster\node*\bin\elasticsearch.bat for each node.
Start node 1:
Start node 2:
Start node 3:
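
Once all three nodes are up, you can confirm that they formed one cluster; a command-line sketch against the node on port 9200:

curl -X GET 'http://127.0.0.1:9200/_cluster/health?pretty'

The response should report "number_of_nodes" : 3 and, once all shards are allocated, a "green" status.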
Testing the cluster
Add an index and mapping: PUT localhost:9200/blog1

{
    "mappings": {
        "article": {
            "properties": {
                "id": {
                    "type": "long",
                    "store": true,
                    "index": "not_analyzed"
                },
                "title": {
                    "type": "text",
                    "store": true,
                    "index": "analyzed",
                    "analyzer": "standard"
                },
                "content": {
                    "type": "text",
                    "store": true,
                    "index": "analyzed",
                    "analyzer": "standard"
                }
            }
        }
    }
}

Add a document
POST localhost:9200/blog1/article/1

{
    "id": 1,
    "title": "ElasticSearch是一个基于Lucene的搜索服务器",
    "content": "它提供了一个分布式多用户能力的全文搜索引擎,基于RESTful web接口。Elasticsearch是用Java开发的,并作为Apache许可条款下的开放源码发布,是当前流行的企业级搜索引擎。设计用于云计算中,能够达到实时搜索,稳定,可靠,快速,安装使用方便。"
}

Viewing the cluster in elasticsearch-head
Shards 0-4 drawn with a thick black border are the primary shards; the thin-bordered ones are the replica shards. Primaries and replicas are distributed across different servers, so if the node holding a primary fails, its replica can keep serving requests. By default an index is split into 5 primary shards, each with a replica placed on another node, achieving high availability and load balancing.
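Shard allocation can also be inspected from the command line (a sketch):

curl 'http://127.0.0.1:9200/_cat/shards?v'

Each row lists the index, the shard number, whether it is a primary (p) or a replica (r), and the node it lives on.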
