This example uses Elasticsearch 7.1.1 and assumes that your Elasticsearch instance has already started successfully and can be accessed normally.
1. Download the elasticsearch-analysis-hanlp Chinese word segmentation plugin.
Download link:
https://github.com/KennFalcon/elasticsearch-analysis-hanlp/releases
On the releases page, find the entry matching your ES version and download its zip package.
2. Download the elasticsearch-analysis-pinyin pinyin word segmentation plugin.
Download link:
https://github.com/medcl/elasticsearch-analysis-pinyin/releases
On the releases page, find the entry matching your ES version and download its zip package.
3. Download the complete HanLP Chinese dictionary package. (For testing, the default dictionary bundled with elasticsearch-analysis-hanlp also works, in which case this step can be skipped.)
Download link:
https://github.com/hankcs/HanLP/releases
4. Unzip elasticsearch-analysis-hanlp and elasticsearch-analysis-pinyin, rename the extracted folders, and place them in the plugins directory of the ES installation, for example:
5. Extract the data-for-1.7.5 dictionary package. Rename the Chinese-named directory under data-for-1.7.5\data\dictionary\custom to match the corresponding directory name under elasticsearch-7.1.1\plugins\analysis-hanlp\data\dictionary\custom, then overwrite the elasticsearch-7.1.1\plugins\analysis-hanlp\data directory with the data-for-1.7.5\data directory. (Skip this step if you did not download the full dictionary package.)
6. Create a new analysis-hanlp folder in the D:\elasticsearch-7.1.1\config directory, and copy the files from D:\elasticsearch-7.1.1\plugins\analysis-hanlp\config into the D:\elasticsearch-7.1.1\config\analysis-hanlp directory, for example:
7. Start the ES service and confirm it runs without problems. (I ran into a few strange issues here, but they are not important.) From this point on, assume your service has started successfully.
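Once the service is up, one quick way to confirm that both plugins were loaded is to list the installed plugins (shown here in the same console-request style as the mapping request later in this post):

```json
GET _cat/plugins?v
```

Both analysis plugins should appear in the output; if either is missing, recheck the folder layout from step 4.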
8. Manually test the pinyin plugin; if the output looks right, it is working.
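A minimal test uses the _analyze API with the pinyin analyzer (the sample text is an arbitrary Chinese name, a common example for this plugin):

```json
POST _analyze
{
  "analyzer": "pinyin",
  "text": "刘德华"
}
```

The response should contain pinyin tokens such as the full-spelling and first-letter forms of the input.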
9. Then test the Chinese word segmentation plugin. Success ~ ✿✿ヽ(°▽°)ノ✿
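The hanlp analyzer can be tested the same way with the _analyze API (the sample sentence is an arbitrary example):

```json
POST _analyze
{
  "analyzer": "hanlp",
  "text": "中华人民共和国"
}
```

The response should show the sentence split into Chinese words rather than single characters.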
10. Create a new index, document2, and configure its field to use both Chinese word segmentation and pinyin segmentation. Success ~
document2/_mapping
{
    "properties": {
        "doctitle": {
            "type": "text",
            "analyzer": "hanlp",
            "search_analyzer": "hanlp",
            "fields": {
                "py": {
                    "type": "text",
                    "analyzer": "pinyin",
                    "search_analyzer": "pinyin"
                }
            }
        }
    }
}
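Note that a _mapping request targets an index that already exists. In ES 7.x you can also create the index and its mapping in a single request; a sketch using the same field definitions:

```json
PUT document2
{
  "mappings": {
    "properties": {
      "doctitle": {
        "type": "text",
        "analyzer": "hanlp",
        "search_analyzer": "hanlp",
        "fields": {
          "py": {
            "type": "text",
            "analyzer": "pinyin",
            "search_analyzer": "pinyin"
          }
        }
      }
    }
  }
}
```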
11. Add the test data. (I already had data on hand, so I simply copied it into the document2 index.)
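If you have no existing data, a single document can be indexed like this (the title text is an arbitrary example):

```json
POST document2/_doc
{
  "doctitle": "中华人民共和国"
}
```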
12. Test the results. (The request here must be a POST, never a GET; don't ask me how I know.)
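A search against either sub-field might look like the following (note the POST method, as mentioned above; the query string is an illustrative pinyin spelling, and the doctitle.py field is the pinyin sub-field defined in the mapping):

```json
POST document2/_search
{
  "query": {
    "match": {
      "doctitle.py": "zhonghua"
    }
  }
}
```

Searching doctitle directly uses the hanlp analyzer instead, so Chinese query text would go against that field.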
That completes the installation and configuration of Chinese and pinyin word segmentation. The next post covers using it from the Java API.