This article covers the background of Elasticsearch and the concepts that distinguish it from a conventional relational database.
One, install elasticsearch
brew install elasticsearch
If an error is reported, follow the prompts to install the required Java environment first. Note that without a proxy, the downloads may be very slow or fail entirely.
Two, install kibana
brew install kibana
brew services start kibana
- Browser access
http://localhost:5601
-> Dev Tools
Three, index
- Create an index named my_index
PUT my_index
- View index information
GET my_index/_settings
- Delete index
DELETE my_index
Four, type
- In version 6.x only one type can be created per index; in version 7.x custom types can no longer be created (a default type is provided).
- For the walkthrough in this article, there is no need to create a type explicitly.
Five, document Document
- Create a data row
PUT my_index/my_type/1
{
  "id": 1,
  "name": "Alice",
  "date": "2020-01-01",
  "like": "apple and orange"
}
Here my_index is the index, my_type is the type, and 1 is the document id. The request method can be PUT or POST. The document id can be specified explicitly, or it can be generated automatically; for automatic generation, POST must be used:
POST my_index/my_type
{
  "id": 2,
  "name": "Bob",
  "date": "2020-01-01",
  "like": "banana and pear"
}
Take a look with
GET my_index/_search
and observe the _id field; Bob's is AXBXHlWdGy6roZEZWR9P.
- View a data row
With the id known, view Bob's information separately:
GET my_index/my_type/AXBXHlWdGy6roZEZWR9P
{
  "_index": "my_index",
  "_type": "my_type",
  "_id": "AXBXHlWdGy6roZEZWR9P",
  "_version": 1,
  "found": true,
  "_source": {
    "id": 1102,
    "name": "Bob",
    "date": "2020-01-02"
  }
}
To see only the data row content:
GET my_index/my_type/AXBXHlWdGy6roZEZWR9P/_source
To view only the id and name fields:
GET my_index/my_type/AXBXHlWdGy6roZEZWR9P/_source?_source=id,name
- Update a data row
To update, elasticsearch marks the old document as deleted (it becomes inaccessible and is physically removed at a later point in time), then creates and indexes a new document.
(1) Update the entire row of data in the same way as creating it.
(2) Update a field individually (note that the _update endpoint requires POST):
POST my_index/my_type/1/_update
{
  "doc": {
    "id": 1103
  }
}
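Conceptually, a partial update merges the fields under doc over the existing _source and reindexes the result as a new document. A toy Python sketch of that merge, not a real client call (real updates also bump _version, and nested objects are merged recursively):

```python
# Existing _source of document 1 (from the create example above)
source = {"id": 1, "name": "Alice", "date": "2020-01-01", "like": "apple and orange"}

# Body sent to the _update endpoint
update = {"doc": {"id": 1103}}

# The partial update overlays "doc" on the old _source;
# fields not mentioned in "doc" are kept unchanged.
new_source = {**source, **update["doc"]}
print(new_source["id"], new_source["name"])  # 1103 Alice
```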
Six, result analysis
GET my_index/_search
{
"took": 0,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"skipped": 0,
"failed": 0
},
"hits": {
"total": 2,
"max_score": 1,
"hits": [
{
"_index": "my_index",
"_type": "my_type",
"_id": "AXBXHlWdGy6roZEZWR9P",
"_score": 1,
"_source": {
"id": 1102,
"name": "Bob",
"date": "2020-01-02",
"like": "banana and pear"
}
},
{
"_index": "my_index",
"_type": "my_type",
"_id": "1",
"_score": 1,
"_source": {
"id": 1101,
"name": "Alice",
"date": "2020-01-01",
"like": "apple and orange"
}
}
]
}
}
- took
The number of milliseconds the entire search request took.
- timed_out
Whether the query timed out. You can set a limit with
GET /_search?timeout=10ms
or
GET /_search?timeout=1s
to return the results collected before the request times out. (Note that timeout does not stop execution of the query; it only tells the coordinating node to return the results gathered so far and close the connection. In the background, other shards may still be executing the query even though results have already been sent.)
- _shards
The number of shards participating in the query (total field), how many succeeded (successful field), and how many failed (failed field).
- hits
The total field is the total number of matched documents;
max_score is the maximum _score among all documents matching the query;
the hits array contains the first 10 matches by default. Each hit has a _score field, the relevance score, which measures how well the document matches the query; the array is sorted in descending order of _score. Use size and from to set the number of results returned and the offset:
GET my_index/_search?size=1&from=1
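As a quick sanity check, the response shown above can be unpacked programmatically. A minimal Python sketch (the sample response is pasted in as a literal, so no running cluster is needed):

```python
import json

# Sample response from GET my_index/_search, as shown above
response = json.loads("""
{
  "took": 0, "timed_out": false,
  "_shards": {"total": 5, "successful": 5, "skipped": 0, "failed": 0},
  "hits": {
    "total": 2, "max_score": 1,
    "hits": [
      {"_index": "my_index", "_type": "my_type", "_id": "AXBXHlWdGy6roZEZWR9P",
       "_score": 1,
       "_source": {"id": 1102, "name": "Bob", "date": "2020-01-02", "like": "banana and pear"}},
      {"_index": "my_index", "_type": "my_type", "_id": "1",
       "_score": 1,
       "_source": {"id": 1101, "name": "Alice", "date": "2020-01-01", "like": "apple and orange"}}
    ]
  }
}
""")

total = response["hits"]["total"]  # total number of matched documents
names = [hit["_source"]["name"] for hit in response["hits"]["hits"]]
print(total, names)  # 2 ['Bob', 'Alice']
```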
Seven, search-simple query string
It is not recommended to directly expose the query string search to users, unless these users are trustworthy with your data and cluster.
- Field like contains apple
GET my_index/my_type/_search?q=like:apple
- The field name contains Alice and the field id is 1102 (+ marks a required clause)
GET my_index/_search?q=+name:Alice +id:1102
- The field name contains Alice and the field id is not 1103
GET my_index/_search?q=+name:Alice -id:1103
- The field name contains Alice or contains Bob
GET my_index/my_type/_search?q=name:(Alice Bob)
- Field date >= '2020-01-02'
GET my_index/my_type/_search?q=date:>=2020-01-02
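The examples above work as-is in the Kibana console, but when sent over plain HTTP (e.g. with curl) the +, space, and parenthesis characters must be URL-encoded, since an unencoded + in a URL is decoded back to a space. A small Python sketch of encoding the query strings:

```python
from urllib.parse import quote

# The raw query strings from the examples above
queries = [
    "+name:Alice +id:1102",
    "+name:Alice -id:1103",
    "name:(Alice Bob)",
]

# Percent-encode everything except ':' so that '+' survives as %2B
urls = [f"/my_index/_search?q={quote(q, safe=':')}" for q in queries]
for url in urls:
    print(url)
```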
Eight, search-structured query
GET is semantically the better fit for a query request, but because GET requests carrying a body are not well supported, the search API also supports POST.
- term filtering. Used for exact matching of values such as numbers, dates, booleans, or not_analyzed strings.
POST my_index/my_type/_search
{
  "query": {
    "term": {
      "id": 1102
    }
  }
}
- terms filtering. Like term, but the field may specify multiple values, equivalent to SQL IN.
POST my_index/my_type/_search
{
  "query": {
    "terms": {
      "id": [1102, 1103]
    }
  }
}
- range filtering. Range lookup.
POST my_index/my_type/_search
{
  "query": {
    "range": {
      "date": {
        "gte": "2020",
        "lt": "2020-01-02"
      }
    }
  }
}
- match query
When doing an exact-match search, it is better to use a filter clause, because filter clauses can be cached.
POST my_index/my_type/_search
{
  "query": {
    "match": {
      "like": "apple"
    }
  }
}
- multi_match query. Searches one query string across several fields (note that multi_match takes a query string plus a fields list, not a list of values for a single field):
POST my_index/my_type/_search
{
  "query": {
    "multi_match": {
      "query": "apple",
      "fields": ["like", "name"]
    }
  }
}
- Other
size and from, and returning only specific fields (an empty query object is invalid, so match_all is used here):
POST my_index/my_type/_search
{
  "query": { "match_all": {} },
  "size": 1,
  "from": 0
}
POST my_index/my_type/_search
{
  "query": { "match_all": {} },
  "_source": ["id", "name"]
}
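The individual filters above can also be combined. A minimal Python sketch that assembles a bool query body as a plain dict (field names follow the sample documents above; the resulting JSON would be POSTed to my_index/my_type/_search):

```python
import json

# Combine a term filter and a range filter under a bool query.
body = {
    "query": {
        "bool": {
            "must": [
                {"term": {"id": 1102}},
                {"range": {"date": {"gte": "2020", "lt": "2020-01-02"}}},
            ]
        }
    },
    "_source": ["id", "name"],  # return only these fields
    "size": 1,
    "from": 0,
}
print(json.dumps(body, indent=2))
```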
Nine, install the ik Chinese word segmenter
- Install via elasticsearch-plugin. Note that the elasticsearch version number must match the ik version number; the version installed here is 5.6.15.
/usr/local/Cellar/[email protected]/5.6.15/bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v5.6.15/elasticsearch-analysis-ik-5.6.15.zip
- Stop elasticsearch and restart it using startup method 2. Look for the line in the startup log that loads ik.
- Run
brew info elasticsearch
to check the relevant directories, find
Config: /usr/local/etc/elasticsearch/
and enter the plugin's config directory with
cd /usr/local/etc/elasticsearch/analysis-ik
where the IKAnalyzer.cfg.xml file configures the dictionaries.