Elasticsearch 6.x: setting a word-segmentation (analyzer) mapping

{
  "mappings": {
    "fulltext": {
      "properties": {
        "article": {
          "properties": {
            "content": {
              "type": "text",
              "analyzer": "ik_smart",
              "search_analyzer": "ik_smart",
              "copy_to": "input_all"
            }
          }
        },
        "input_all": {
          "type": "text",
          "analyzer": "ik_smart",
          "search_analyzer": "ik_smart"
        }
      }
    }
  }
}
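To apply this mapping, send it as the body of the index-creation request (fulltext is the document type in 6.x). A minimal sketch, assuming a hypothetical index named my_index and that the analysis-ik plugin, which provides the ik_smart analyzer, is installed:

PUT /my_index
{
  "mappings": {
    "fulltext": {
      "properties": {
        "article": {
          "properties": {
            "content": { "type": "text", "analyzer": "ik_smart", "search_analyzer": "ik_smart", "copy_to": "input_all" }
          }
        },
        "input_all": { "type": "text", "analyzer": "ik_smart", "search_analyzer": "ik_smart" }
      }
    }
  }
}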

First, set the analyzer on content; then use copy_to to copy its value into the custom field input_all; finally, set the analyzer on input_all as well. Note that input_all must be defined under the top-level properties (as a sibling of article), not nested inside the article field.
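To verify that copy_to is working, index a document and then query the combined field. A sketch under the same assumptions (the my_index index from above; the Chinese sample text is just a placeholder phrase for the IK analyzer):

PUT /my_index/fulltext/1
{
  "article": {
    "content": "中华人民共和国国歌"
  }
}

GET /my_index/_search
{
  "query": {
    "match": {
      "input_all": "国歌"
    }
  }
}

The document matches through input_all even though that field never appears in _source: copy_to copies the field's value into the input_all index at indexing time, so input_all is searchable but is not stored back into the original document.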
