14 Elasticsearch: Installing the IK Tokenizer Plug-in

IK tokenizer (ES plug-in)

Word segmentation (analysis) splits a piece of Chinese text or other keywords into individual terms. At search time, Elasticsearch segments the query string, and the data stored in the index has already been segmented the same way, so matching is performed term by term. The default standard analyzer treats every Chinese character as a separate term, which usually does not meet search requirements, so we install the IK Chinese analyzer to solve this problem.
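To see the problem concretely, you can ask a running node to analyze a sample string. The sketch below is illustrative only: it assumes Elasticsearch is reachable at http://localhost:9200 with security disabled and uses Python's requests library to call the built-in _analyze API.

```python
# Minimal sketch (assumed local cluster at http://localhost:9200, security off):
# the default standard analyzer splits Chinese text into single-character tokens.
import requests

resp = requests.post(
    "http://localhost:9200/_analyze",
    json={"analyzer": "standard", "text": "中华人民共和国"},
)
resp.raise_for_status()

# Expected output: ['中', '华', '人', '民', '共', '和', '国']
print([token["token"] for token in resp.json()["tokens"]])
```

Every character comes back as its own token, which is exactly why a dedicated Chinese analyzer such as IK is needed.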

The repository address of the ik tokenizer:

Release v7.17.3 · medcl/elasticsearch-analysis-ik · GitHub (https://github.com/medcl/elasticsearch-analysis-ik/releases/tag/v7.17.3)

        Note: the version of the IK tokenizer must match the Elasticsearch version exactly.
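Because the version match must be exact, it helps to check the running cluster's version before downloading a release. This is a small sketch under the same assumption of a local, unsecured cluster; the archive name shown follows the naming used on the release page.

```python
# Sketch: read the exact Elasticsearch version from the cluster root endpoint
# so the matching IK release can be picked (assumed http://localhost:9200).
import requests

info = requests.get("http://localhost:9200").json()
version = info["version"]["number"]
print(f"Elasticsearch version: {version}")
print(f"Matching plugin archive: elasticsearch-analysis-ik-{version}.zip")
```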

I ran into the following problem when installing the Elasticsearch IK tokenizer plugin:

        Problem: Elasticsearch failed to restart after the plugin was added.

        Fix: delete the hidden files from the plugins directory.


Then double-click elasticsearch.bat in the bin directory (on Windows) to restart the node, and the familiar startup output appears again.
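With the node running again, it is worth confirming that the plugin was actually loaded and that its analyzers behave as expected. The following sketch assumes the same local, unsecured cluster; the plugin registers itself as analysis-ik and provides the ik_smart and ik_max_word analyzers.

```python
# Sketch: verify the IK plugin is installed and try the ik_max_word analyzer
# (assumed local cluster at http://localhost:9200 with security disabled).
import requests

BASE = "http://localhost:9200"

# The IK plugin should appear in the plugin list as "analysis-ik".
print(requests.get(f"{BASE}/_cat/plugins?v").text)

# ik_max_word now produces whole words and sub-words instead of single characters.
resp = requests.post(
    f"{BASE}/_analyze",
    json={"analyzer": "ik_max_word", "text": "中华人民共和国"},
)
print([t["token"] for t in resp.json()["tokens"]])
```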
