1. Deploy the ElasticSearch database
1. Preparation
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.17.6
Pwd="/data/software/elasticsearch"
mkdir -p ${Pwd}/{data,plugins,config,logs}
chmod -R 777 ${Pwd}
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
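Appending the line to /etc/sysctl.conf only edits the file; the running kernel picks it up after sysctl -p (as root) or a reboot. A quick sketch to check the live value:

```shell
#!/bin/sh
# Read the kernel's current vm.max_map_count and warn if it is below
# the value Elasticsearch needs (262144).
current=$(cat /proc/sys/vm/max_map_count)
echo "vm.max_map_count is ${current}"
if [ "${current}" -lt 262144 ]; then
    echo "too low for Elasticsearch -- run 'sysctl -p' as root" >&2
fi
```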
2. Create configuration file
cat > ${Pwd}/config/elasticsearch.yml << EOF
cluster.name: "docker-cluster"
network.host: 0.0.0.0
EOF
3. Run the container
docker run -itd --name elasticsearch \
-v ${Pwd}/data:/usr/share/elasticsearch/data \
-v ${Pwd}/plugins:/usr/share/elasticsearch/plugins \
-v ${Pwd}/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v ${Pwd}/logs:/usr/share/elasticsearch/logs \
-v /etc/localtime:/etc/localtime \
-v /etc/sysctl.conf:/etc/sysctl.conf \
-e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
-e "node.name=es1" \
-e "discovery.seed_hosts=es1" \
-e "cluster.initial_master_nodes=es1" \
-e "http.host=0.0.0.0" \
-p 9200:9200 \
-p 9300:9300 \
--privileged \
--restart=always \
docker.elastic.co/elasticsearch/elasticsearch:7.17.6
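Elasticsearch needs some time to boot inside the container before it answers on port 9200. A small polling helper can bridge that gap in provisioning scripts (the function name and the 2-second interval are my own choices, not from the article):

```shell
#!/bin/sh
# wait_for_http URL TRIES: poll URL with curl until it answers,
# sleeping 2s between attempts; returns non-zero if it never does.
wait_for_http() {
    url=$1
    tries=$2
    i=0
    while [ "$i" -lt "$tries" ]; do
        if curl -s "$url" >/dev/null 2>&1; then
            return 0
        fi
        i=$((i + 1))
        sleep 2
    done
    return 1
}
```

For example: wait_for_http http://localhost:9200 60 && echo "Elasticsearch is up".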
4. Set vm.max_map_count to prevent startup failure
docker exec -it elasticsearch sysctl -w vm.max_map_count=262144
2. Add word segmentation plug-in (analysis-ik)
1. Download the analysis-ik word segmenter release that matches your ES version (7.17.6 here) from its GitHub releases page.
2. Copy the downloaded package into the container and install the plug-in:
docker cp elasticsearch-analysis-ik-7.17.6.zip elasticsearch:/tmp
docker exec -it elasticsearch /bin/bash
cd bin
./elasticsearch-plugin install file:///tmp/elasticsearch-analysis-ik-7.17.6.zip
-> Installing file:///tmp/elasticsearch-analysis-ik-7.17.6.zip
-> Downloading file:///tmp/elasticsearch-analysis-ik-7.17.6.zip
[=================================================] 100%
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: plugin requires additional permissions @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.net.SocketPermission * connect,resolve
See https://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
for descriptions of what these permissions allow and the associated risks.
Continue with installation? [y/N]y # type y
-> Installed analysis-ik
-> Please restart Elasticsearch to activate any plugins installed
3. After the installation completes, exit the container and restart elasticsearch.
docker restart elasticsearch
3. Test the ElasticSearch database + analysis-ik word segmenter plug-in
For convenience, the checks below can also be run from a tool such as Postman; either way, they are:
1. Test the ElasticSearch database
2. View the installed plug-ins
3. Verify that word segmentation works: create a test index and run a word segmentation request.
Here we test with the curl command on the server:
1. Test the ElasticSearch database
curl http://16.32.15.115:9200
{
"name" : "es1",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "dC8v3zOoQgGWqgt0smdKtw",
"version" : {
"number" : "7.17.6",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "f65e9d338dc1d07b642e14a27f338990148ee5b6",
"build_date" : "2022-08-23T11:08:48.893373482Z",
"build_snapshot" : false,
"lucene_version" : "8.11.1",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
2. View installed plug-ins
curl http://16.32.15.115:9200/_cat/plugins
es1 analysis-ik 7.17.6
3. Verify that the word segmenter works. First create a test2 index:
curl -XPUT http://16.32.15.115:9200/test2/
{"acknowledged":true,"shards_acknowledged":true,"index":"test2"}
Then run a word segmentation test:
curl -X POST -H "Content-Type: application/json" -d '{
"analyzer": "ik_smart",
"text": "社会你腾哥,人狠话不多!!!"
}' http://16.32.15.115:9200/test2/_analyze --silent | jq
Note: To pretty-print the returned JSON, I appended --silent | jq to the command. This requires the jq command to be installed; if jq is not available, simply drop those parameters.
- --silent: suppresses curl's progress bar and error messages.
- | jq: pipes curl's output through the pipe symbol to the jq command for JSON processing and pretty-printing.
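Beyond pretty-printing, jq can also extract fields. For example, this filter lists only the token strings; the sample JSON here is a trimmed-down stand-in for the full _analyze response shown below:

```shell
#!/bin/sh
# Extract just the "token" field of every element in the tokens array.
# The sample is abbreviated from the _analyze response in the article.
sample='{"tokens":[{"token":"社会","position":0},{"token":"不多","position":7}]}'
echo "$sample" | jq -r '.tokens[].token'
# prints 社会 and 不多, one per line
```

Against the live server, the same filter can be appended after | jq in the curl command above.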
Returned content:
{
"tokens": [
{
"token": "社会",
"start_offset": 0,
"end_offset": 2,
"type": "CN_WORD",
"position": 0
},
{
"token": "你",
"start_offset": 2,
"end_offset": 3,
"type": "CN_CHAR",
"position": 1
},
{
"token": "腾",
"start_offset": 3,
"end_offset": 4,
"type": "CN_CHAR",
"position": 2
},
{
"token": "哥",
"start_offset": 4,
"end_offset": 5,
"type": "CN_CHAR",
"position": 3
},
{
"token": "人",
"start_offset": 6,
"end_offset": 7,
"type": "CN_CHAR",
"position": 4
},
{
"token": "狠",
"start_offset": 7,
"end_offset": 8,
"type": "CN_CHAR",
"position": 5
},
{
"token": "话",
"start_offset": 8,
"end_offset": 9,
"type": "CN_CHAR",
"position": 6
},
{
"token": "不多",
"start_offset": 9,
"end_offset": 11,
"type": "CN_WORD",
"position": 7
}
]
}