Learning Elasticsearch: Document Create, Delete, Update, and Query

  • This post uses an index named index as the example, and still uses Postman to send the JSON request bodies.

1. Create


  • The 1 in the URL refers to the document's _id, not the id field in the body below; in general, id is kept the same as _id.
{
	"id":1,
	"title":"我是一个标题1",
	"content":"我是内容1"
}
  • The response indicates success:
{
    "_index": "index",
    "_type": "1",
    "_id": "AW6dhLRtj92VZTE9RyoB",
    "_version": 1,
    "result": "created",
    "_shards": {
        "total": 2,
        "successful": 1,
        "failed": 0
    },
    "created": true
}

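The create call above can be sketched as follows. This is a minimal sketch, not the exact request from the screenshot: the host and port are the Elasticsearch defaults, and the index/type names are taken from the response above. Posting without an _id in the URL lets Elasticsearch generate one (such as AW6dhLRtj92VZTE9RyoB).

```python
import json

ES = "http://127.0.0.1:9200"  # assumed default host/port

def build_create_request(index, doc_type, doc):
    """Return the (method, url, body) triple that Postman would send."""
    url = f"{ES}/{index}/{doc_type}"
    return "POST", url, json.dumps(doc, ensure_ascii=False)

method, url, body = build_create_request("index", "1", {
    "id": 1,
    "title": "我是一个标题1",
    "content": "我是内容1",
})
```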

2. Delete


  • The returned JSON:
{
    "found": true,
    "_index": "index",
    "_type": "1",
    "_id": "AW6dhLRtj92VZTE9RyoB",
    "_version": 2,
    "result": "deleted",
    "_shards": {
        "total": 2,
        "successful": 1,
        "failed": 0
    }
}
  • The value at the end of the URL is the document's _id.
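A sketch of the delete call, under the same assumptions as before (default local host/port): the method changes to DELETE, and the generated _id from the create response is appended to the URL.

```python
ES = "http://127.0.0.1:9200"  # assumed default host/port

def build_delete_request(index, doc_type, doc_id):
    """DELETE <host>/<index>/<type>/<_id> removes one document."""
    return "DELETE", f"{ES}/{index}/{doc_type}/{doc_id}"

method, url = build_delete_request("index", "1", "AW6dhLRtj92VZTE9RyoB")
```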

3. Update

  • The principle here is delete-then-add: you can simply perform another add (index) operation on the same _id, and Elasticsearch will delete the old document and add the new one.
  • Before the update, the document still contains the original data.
  • The returned JSON:
{
    "_index": "index",
    "_type": "hello",
    "_id": "1",
    "_version": 2,
    "result": "updated",
    "_shards": {
        "total": 2,
        "successful": 1,
        "failed": 0
    },
    "created": false
}
  • Viewing the document again shows the updated data.
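The update can be sketched the same way: indexing a document at an existing index/type/_id replaces the old one and bumps _version, matching the "result": "updated" response above. Host/port are again assumed defaults.

```python
import json

ES = "http://127.0.0.1:9200"  # assumed default host/port

def build_update_request(index, doc_type, doc_id, doc):
    """Re-indexing at an existing _id deletes the old doc and adds the new one."""
    url = f"{ES}/{index}/{doc_type}/{doc_id}"
    return "POST", url, json.dumps(doc, ensure_ascii=False)

method, url, body = build_update_request("index", "hello", "1", {
    "id": 1,
    "title": "修改之后的文档",
    "content": "修改之后的内容",
})
```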

4. Query by id


  • To query by id, use the same URL but change the request method to GET.
  • The returned JSON:
{
	"_index": "index",
	"_type": "hello",
	"_id": "1",
	"_version": 2,
	"found": true,
	"_source": {
		"id": 1,
		"title": "修改之后的文档",
		"content": "修改之后的内容"
	}
}
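As the bullet above says, querying by id reuses the document URL with the method changed to GET. A minimal sketch, assuming the same default host/port:

```python
ES = "http://127.0.0.1:9200"  # assumed default host/port

def build_get_request(index, doc_type, doc_id):
    """Same URL as update/delete; only the HTTP method differs."""
    return "GET", f"{ES}/{index}/{doc_type}/{doc_id}"

method, url = build_get_request("index", "hello", "1")
```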

5. Query by keyword (term query)

  • The search is sent as a POST request with the following body:
{
	"query":{
		"term":{
			"title":"修"
		}
	}
}
  • Both query and term are keywords here. The search term can only be a single Chinese character, because the standard analyzer splits Chinese text into individual characters (see section 7).
  • The returned JSON:
{
	"took": 127,
	"timed_out": false,
	"_shards": {
		"total": 5,
		"successful": 5,
		"skipped": 0,
		"failed": 0
	},
	"hits": {
		"total": 2,
		"max_score": 0.28582606,
		"hits": [
			{
				"_index": "index",
				"_type": "hello",
				"_id": "1",
				"_score": 0.28582606,
				"_source": {
					"id": 1,
					"title": "修改之后的文档",
					"content": "修改之后的内容"
				}
			},
			{
				"_index": "index",
				"_type": "hello",
				"_id": "3",
				"_score": 0.25811607,
				"_source": {
					"id": 1,
					"title": "修改之后的文档as12d",
					"content": "修改之后的内12容"
				}
			}
		]
	}
}
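The term search above can be sketched as a POST to the index's _search endpoint (the standard Elasticsearch search path; the exact URL in the screenshot is not shown, so this is an assumption), with the body from the example:

```python
import json

ES = "http://127.0.0.1:9200"  # assumed default host/port

def build_term_search(index, doc_type, field, value):
    """POST a term query to <index>/<type>/_search (standard search path)."""
    payload = {"query": {"term": {field: value}}}
    url = f"{ES}/{index}/{doc_type}/_search"
    return "POST", url, json.dumps(payload, ensure_ascii=False)

method, url, body = build_term_search("index", "hello", "title", "修")
```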

6. query_string query

  • This query analyzes (segments) the input string first, so it accepts more than a single character; the results here are the same as the term query above.
  • The returned JSON:
{
	"took": 125,
	"timed_out": false,
	"_shards": {
		"total": 5,
		"successful": 5,
		"skipped": 0,
		"failed": 0
	},
	"hits": {
		"total": 2,
		"max_score": 0.5716521,
		"hits": [
			{
				"_index": "index",
				"_type": "hello",
				"_id": "1",
				"_score": 0.5716521,
				"_source": {
					"id": 1,
					"title": "修改之后的文档",
					"content": "修改之后的内容"
				}
			},
			{
				"_index": "index",
				"_type": "hello",
				"_id": "3",
				"_score": 0.51623213,
				"_source": {
					"id": 1,
					"title": "修改之后的文档as12d",
					"content": "修改之后的内12容"
				}
			}
		]
	}
}
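Since the request body for this query only appears in the screenshot, here is a sketch of a typical query_string body; the default_field parameter is an assumption about what the screenshot contains, not confirmed by the article.

```python
import json

ES = "http://127.0.0.1:9200"  # assumed default host/port

def build_query_string_search(index, doc_type, field, text):
    """query_string analyzes `text` before matching, unlike term.
    `default_field` here is an assumed parameter choice."""
    payload = {"query": {"query_string": {"default_field": field, "query": text}}}
    url = f"{ES}/{index}/{doc_type}/_search"
    return "POST", url, json.dumps(payload, ensure_ascii=False)

method, url, body = build_query_string_search("index", "hello", "title", "修改")
```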

7. Extension: viewing analyzer output

  • To see how a tokenizer segments text, call the _analyze endpoint
  • Enter http://127.0.0.1:9200/_analyze?analyzer=standard&text=
  • Append the English or Chinese text to be analyzed after text=
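Building that URL can be sketched as below; the text is URL-encoded since it may contain Chinese characters, and the host/port are the assumed defaults.

```python
from urllib.parse import quote

ES = "http://127.0.0.1:9200"  # assumed default host/port

def build_analyze_url(text, analyzer="standard"):
    """URL for the _analyze endpoint; `text` is percent-encoded."""
    return f"{ES}/_analyze?analyzer={analyzer}&text={quote(text)}"

url = build_analyze_url("i am a hbu student")
```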

Output for English text

{
    "tokens": [
        {
            "token": "i",
            "start_offset": 0,
            "end_offset": 1,
            "type": "<ALPHANUM>",
            "position": 0
        },
        {
            "token": "am",
            "start_offset": 2,
            "end_offset": 4,
            "type": "<ALPHANUM>",
            "position": 1
        },
        {
            "token": "a",
            "start_offset": 5,
            "end_offset": 6,
            "type": "<ALPHANUM>",
            "position": 2
        },
        {
            "token": "hbu",
            "start_offset": 7,
            "end_offset": 10,
            "type": "<ALPHANUM>",
            "position": 3
        },
        {
            "token": "student",
            "start_offset": 11,
            "end_offset": 18,
            "type": "<ALPHANUM>",
            "position": 4
        }
    ]
}

Output for Chinese text

{
    "tokens": [
        {
            "token": "我",
            "start_offset": 0,
            "end_offset": 1,
            "type": "<IDEOGRAPHIC>",
            "position": 0
        },
        {
            "token": "是",
            "start_offset": 1,
            "end_offset": 2,
            "type": "<IDEOGRAPHIC>",
            "position": 1
        },
        {
            "token": "河",
            "start_offset": 2,
            "end_offset": 3,
            "type": "<IDEOGRAPHIC>",
            "position": 2
        },
        {
            "token": "北",
            "start_offset": 3,
            "end_offset": 4,
            "type": "<IDEOGRAPHIC>",
            "position": 3
        },
        {
            "token": "大",
            "start_offset": 4,
            "end_offset": 5,
            "type": "<IDEOGRAPHIC>",
            "position": 4
        },
        {
            "token": "学",
            "start_offset": 5,
            "end_offset": 6,
            "type": "<IDEOGRAPHIC>",
            "position": 5
        },
        {
            "token": "青",
            "start_offset": 6,
            "end_offset": 7,
            "type": "<IDEOGRAPHIC>",
            "position": 6
        },
        {
            "token": "年",
            "start_offset": 7,
            "end_offset": 8,
            "type": "<IDEOGRAPHIC>",
            "position": 7
        }
    ]
}
  • As shown, the standard analyzer handles Chinese poorly (one token per character), so in real development we do not use the standard tokenizer for Chinese.


Origin blog.csdn.net/weixin_44588495/article/details/103228622