elasticsearch|big data|Deployment, installation, and security hardening of a low-version elasticsearch cluster: password setup

One. The version problem

Elasticsearch versions are conventionally split at 6.3: releases before 6.3 count as low versions, while 6.3 and later count as high versions. The split is mainly about security, that is, the x-pack plug-in: low versions must install this security plug-in manually, while high versions ship with it built in. High versions also carry fewer known vulnerabilities. This example uses 6.2.4, the last release on the low-version side.

The Java environment is OpenJDK, version 1.8.0_392-b08; the es deployment described below was tested and passed under this version:

[es@node1 bin]$ java -version
openjdk version "1.8.0_392"
OpenJDK Runtime Environment (Temurin)(build 1.8.0_392-b08)
OpenJDK 64-Bit Server VM (Temurin)(build 25.392-b08, mixed mode)

Two. The environment

This example uses four VMware virtual machines running CentOS 7, with IPs 192.168.123.11/12/13/14 and 8 GB of memory each.

Since elasticsearch is a Java application, it needs a lot of memory. It is therefore recommended to have no less than 8 GB of RAM, while a 4-core CPU is enough; the requirements are otherwise modest.

Another key piece of the environment is a time server. This is a must: whether in a lab or in production, time synchronization should never be skipped. Since this example runs with Internet access, Alibaba Cloud's public time server is used.

Time-server configuration in the ntp service (/etc/ntp.conf):

server ntp.aliyun.com iburst
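The full setup around that one config line can be sketched as follows on CentOS 7 (an assumption of this sketch: the classic ntp package is used rather than chrony):

```shell
# Install ntpd and point it at Alibaba Cloud's public NTP server (CentOS 7).
yum install -y ntp
# Comment out the default pool servers, then add the Alibaba Cloud server.
sed -i 's/^server /#server /' /etc/ntp.conf
echo 'server ntp.aliyun.com iburst' >> /etc/ntp.conf
# Start now and enable at boot.
systemctl enable --now ntpd
# Verify synchronization (it may take a few minutes after startup).
ntpstat
```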

Verification after the time server is correctly configured:

[root@node1 es]# ntpstat
synchronised to NTP server (203.107.6.88) at stratum 3
   time correct to within 74 ms
   polling server every 1024 s

There are also the usual baseline settings, such as disabling selinux, which will not be repeated here.
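Among those baseline settings, two are worth calling out because elasticsearch's bootstrap checks refuse to start the node as a non-root user without them. A minimal sketch (the values are the stock 6.x bootstrap-check minimums, not something from this walkthrough):

```shell
# elasticsearch 6.x requires vm.max_map_count >= 262144.
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf
sysctl -p
# Raise open-file and process limits for the es user.
cat >> /etc/security/limits.conf <<'LIMITS'
es soft nofile 65536
es hard nofile 65536
es soft nproc  4096
es hard nproc  4096
LIMITS
```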

jdk installation:

Open /etc/profile with vim and append the following at the end of the file:

export JAVA_HOME=/usr/local/jdk
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH


tar xvf OpenJDK8U-jdk_x64_linux_hotspot_8u392b08.tar.gz
mv jdk8u392-b08 /usr/local/jdk
## activate the variables
source /etc/profile
### test whether the jdk was installed successfully
[root@node1 ~]# java -version
openjdk version "1.8.0_392"
OpenJDK Runtime Environment (Temurin)(build 1.8.0_392-b08)
OpenJDK 64-Bit Server VM (Temurin)(build 25.392-b08, mixed mode)

Three. Downloading the software

es download (note: pick 6.2.4, even though the page link below is titled 6.3.2):

Elasticsearch 6.3.2 | Elastic

Download the x-pack plug-in:

https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-6.2.4.zip

Download of openjdk:

Index of /Adoptium/8/jdk/x64/linux/ | Tsinghua Open Source Mirror

Four. Deploying the elasticsearch cluster

Create a new directory /data, unzip the es archive downloaded above, rename it, and place it under /data. Do this on node1, then copy the result to the other servers:

mkdir /data
unzip elasticsearch-6.2.4.zip
mv elasticsearch-6.2.4 /data/es
scp -r /data/es 192.168.123.12:/data/
scp -r /data/es 192.168.123.13:/data/
scp -r /data/es 192.168.123.14:/data/
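The discovery list in the configuration below refers to nodes by name ("node-1", "node-2"), so each server also needs name resolution for the node hostnames. A sketch under the assumption that the hostnames node-1 through node-4 map to the four IPs in order:

```shell
# Append hostname mappings on every server so zen discovery can resolve node names.
cat >> /etc/hosts <<'HOSTS'
192.168.123.11 node-1
192.168.123.12 node-2
192.168.123.13 node-3
192.168.123.14 node-4
HOSTS
```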

Create a new user named es. This user does not need a password; later you can simply switch to it with the su command.

useradd es

Install the x-pack plug-in offline. During installation you must enter y twice to confirm:

[root@node1 bin]# /data/es/bin/elasticsearch-plugin install file:///data/es/bin/x-pack-6.2.4.zip 
-> Downloading file:///data/es/bin/x-pack-6.2.4.zip
[=================================================] 100%   
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@     WARNING: plugin requires additional permissions     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.io.FilePermission \\.\pipe\* read,write
* java.lang.RuntimePermission accessClassInPackage.com.sun.activation.registries
* java.lang.RuntimePermission getClassLoader
* java.lang.RuntimePermission setContextClassLoader
* java.lang.RuntimePermission setFactory
* java.net.SocketPermission * connect,accept,resolve
* java.security.SecurityPermission createPolicy.JavaPolicy
* java.security.SecurityPermission getPolicy
* java.security.SecurityPermission putProviderProperty.BC
* java.security.SecurityPermission setPolicy
* java.util.PropertyPermission * read,write
See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
for descriptions of what these permissions allow and the associated risks.

Continue with installation? [y/N]y
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@        WARNING: plugin forks a native controller        @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
This plugin launches a native controller that is not subject to the Java
security manager nor to system call filters.

Continue with installation? [y/N]y
Elasticsearch keystore is required by plugin [x-pack-security], creating...
-> Installed x-pack with: x-pack-core,x-pack-deprecation,x-pack-graph,x-pack-logstash,x-pack-ml,x-pack-monitoring,x-pack-security,x-pack-upgrade,x-pack-watcher
[root@node1 bin]# echo $?
0

After the installation is complete, an x-pack subdirectory appears under bin, config, and the other directories.

Modify the main configuration file of elasticsearch:

Two of these settings differ per server. For example, on 192.168.123.12 they should read node.name: node-2 and network.host: 192.168.123.12. On node1 they are:

node.name: node-1

network.host: 192.168.123.11
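One setting this walkthrough does not touch is the JVM heap. For these 8 GB machines, a common sketch is to pin the heap to about half of RAM in /data/es/config/jvm.options (the 4g value is an assumption for an 8 GB host, and 1g is the stock 6.x default being replaced):

```shell
# Pin min and max heap to ~50% of RAM; both values must match.
sed -i 's/^-Xms1g/-Xms4g/; s/^-Xmx1g/-Xmx4g/' /data/es/config/jvm.options
```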

cat >/data/es/config/elasticsearch.yml <<EOF
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: myes
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /data/es/data
#
# Path to log files:
#
path.logs: /var/log/es/
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.123.11
#
# Set a custom port for HTTP:
#
http.port: 19200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["node-1", "node-2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: false
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods : OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers : X-Requested-With,X-Auth-Token,Content-Type,Content-Length
EOF
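After copying this file to the other nodes, the two per-node values can be patched with sed instead of hand-editing. A sketch (a demo path is used here so the commands can run anywhere; the real file is /data/es/config/elasticsearch.yml):

```shell
# Demo: patch node.name and network.host for node-2.
cfg=/tmp/elasticsearch.yml.demo   # stand-in for /data/es/config/elasticsearch.yml
printf 'node.name: node-1\nnetwork.host: 192.168.123.11\n' > "$cfg"
sed -i 's/^node\.name: .*/node.name: node-2/' "$cfg"
sed -i 's/^network\.host: .*/network.host: 192.168.123.12/' "$cfg"
cat "$cfg"
```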

Matching the configuration file, create the log directory, then recursively assign ownership of both /data/es and the log directory to the es user:

mkdir /var/log/es

chown -Rf es. /data/es/
chown -Rf es. /var/log/es/

Create startup script:

cat >/etc/init.d/es<<'EOF'
#!/bin/bash
#chkconfig:2345 60 12
#description: elasticsearch
es_path=/data/es
# Look up the pid at run time; the [e] bracket trick keeps grep from matching itself
es_pid=$(ps aux | grep '[e]lasticsearch' | awk '{print $2}')
case "$1" in
start)
    su - es -c "$es_path/bin/elasticsearch -d"
    echo "elasticsearch startup"
    ;;
stop)
    kill -9 $es_pid
    echo "elasticsearch stopped"
    ;;
restart)
    kill -9 $es_pid
    su - es -c "$es_path/bin/elasticsearch -d"
    echo "elasticsearch startup"
    ;;
*)
    echo "error choice ! please input start or stop or restart"
    ;;
esac

exit $?
EOF

Note the quoted heredoc delimiter ('EOF'): without the quotes, the outer shell would expand $es_path, $1, and the ps pipeline while writing the file, leaving a broken script.

Start script authorization:

chown -Rf es. /etc/init.d/es
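For the #chkconfig header in the script to take effect, the script must also be executable and registered with SysV init; a sketch on CentOS 7:

```shell
chmod +x /etc/init.d/es
chkconfig --add es   # register for runlevels 2345 per the script header
chkconfig es on
service es start
```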

After starting elasticsearch and watching the logs, you can see that servers 13 and 14 have joined the cluster, though these logs are not very intuitive:

[root@node1 bin]# tail -f /var/log/es/myes.log 
[2023-12-10T00:52:34,441][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [node-1] [.monitoring-es-6-2023.12.09/4GLFZLlsRH6nj4ZKIUsxvw] auto expanded replicas to [1]
[2023-12-10T00:52:34,441][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [node-1] [.watcher-history-7-2023.12.09/_WVuCnwrSlGtYaLbSAfbLg] auto expanded replicas to [1]
[2023-12-10T00:52:35,028][INFO ][o.e.x.w.WatcherService   ] [node-1] paused watch execution, reason [no local watcher shards found], cancelled [0] queued tasks
[2023-12-10T00:52:35,305][INFO ][o.e.x.w.WatcherService   ] [node-1] paused watch execution, reason [new local watcher shard allocation ids], cancelled [0] queued tasks
[2023-12-10T00:52:35,982][INFO ][o.e.c.r.a.AllocationService] [node-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-es-6-2023.12.09][0]] ...]).
[2023-12-10T00:52:40,831][INFO ][o.e.c.s.MasterService    ] [node-1] zen-disco-node-join[{node-3}{kZxWJkP1Tjqo1DkDLcKg0w}{uZU4sePKSkuDZaxFl8_J_A}{192.168.123.13}{192.168.123.13:9300}{ml.machine_memory=8370089984, ml.max_open_jobs=20, ml.enabled=true}], reason: added {{node-3}{kZxWJkP1Tjqo1DkDLcKg0w}{uZU4sePKSkuDZaxFl8_J_A}{192.168.123.13}{192.168.123.13:9300}{ml.machine_memory=8370089984, ml.max_open_jobs=20, ml.enabled=true},}
[2023-12-10T00:52:41,494][INFO ][o.e.c.s.ClusterApplierService] [node-1] added {{node-3}{kZxWJkP1Tjqo1DkDLcKg0w}{uZU4sePKSkuDZaxFl8_J_A}{192.168.123.13}{192.168.123.13:9300}{ml.machine_memory=8370089984, ml.max_open_jobs=20, ml.enabled=true},}, reason: apply cluster state (from master [master {node-1}{Ihs-2_jwTte3q7zd82z9cg}{XX2DgvdSR_yGZ886Ao2n4w}{192.168.123.11}{192.168.123.11:9300}{ml.machine_memory=8975544320, ml.max_open_jobs=20, ml.enabled=true} committed version [34] source [zen-disco-node-join[{node-3}{kZxWJkP1Tjqo1DkDLcKg0w}{uZU4sePKSkuDZaxFl8_J_A}{192.168.123.13}{192.168.123.13:9300}{ml.machine_memory=8370089984, ml.max_open_jobs=20, ml.enabled=true}]]])
[2023-12-10T00:52:46,046][INFO ][o.e.c.s.MasterService    ] [node-1] zen-disco-node-join[{node-4}{xBsEmrIhSQWgLauziJ-YTg}{YUChKHGFTPWJQy8m0HANQA}{192.168.123.14}{192.168.123.14:9300}{ml.machine_memory=8370089984, ml.max_open_jobs=20, ml.enabled=true}], reason: added {{node-4}{xBsEmrIhSQWgLauziJ-YTg}{YUChKHGFTPWJQy8m0HANQA}{192.168.123.14}{192.168.123.14:9300}{ml.machine_memory=8370089984, ml.max_open_jobs=20, ml.enabled=true},}
[2023-12-10T00:52:46,691][INFO ][o.e.c.s.ClusterApplierService] [node-1] added {{node-4}{xBsEmrIhSQWgLauziJ-YTg}{YUChKHGFTPWJQy8m0HANQA}{192.168.123.14}{192.168.123.14:9300}{ml.machine_memory=8370089984, ml.max_open_jobs=20, ml.enabled=true},}, reason: apply cluster state (from master [master {node-1}{Ihs-2_jwTte3q7zd82z9cg}{XX2DgvdSR_yGZ886Ao2n4w}{192.168.123.11}{192.168.123.11:9300}{ml.machine_memory=8975544320, ml.max_open_jobs=20, ml.enabled=true} committed version [36] source [zen-disco-node-join[{node-4}{xBsEmrIhSQWgLauziJ-YTg}{YUChKHGFTPWJQy8m0HANQA}{192.168.123.14}{192.168.123.14:9300}{ml.machine_memory=8370089984, ml.max_open_jobs=20, ml.enabled=true}]]])
[2023-12-10T00:52:46,747][INFO ][o.e.x.w.WatcherService   ] [node-1] paused watch execution, reason [no local watcher shards found], cancelled [0] queued tasks

OK, now you can set the password:

[root@node1 bin]# /data/es/bin/x-pack/setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,kibana,logstash_system.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y


Enter password for [elastic]: 
Reenter password for [elastic]: 
Enter password for [kibana]: 
Reenter password for [kibana]: 
Enter password for [logstash_system]: 
Reenter password for [logstash_system]: 
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [elastic]

A more intuitive check follows. As the output shows, the cluster is in green health status. Because we set a password, the password from the steps above must be entered; security is now guaranteed.

[root@node1 bin]# curl -XGET http://192.168.123.11:19200/_cat/health -uelastic
Enter host password for user 'elastic':
1702141169 00:59:29 myes green 4 4 12 5 0 0 0 0 - 100.0%
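The columns of that _cat/health line are epoch, timestamp, cluster, status, node.total, node.data, shards, pri, relo, init, unassign, pending_tasks, max_task_wait_time, and active_shards_percent. A quick sketch that pulls out the status and node count (the sample line is hard-coded so it runs anywhere):

```shell
# Extract cluster status (column 4) and total nodes (column 5) from a saved _cat/health line.
line='1702141169 00:59:29 myes green 4 4 12 5 0 0 0 0 - 100.0%'
status=$(echo "$line" | awk '{print $4}')
nodes=$(echo "$line" | awk '{print $5}')
echo "status=$status nodes=$nodes"   # -> status=green nodes=4
```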

You can see that the cluster has four nodes, node-1 is the master node, and the cluster is set up successfully!

[root@node1 bin]# curl -XGET http://192.168.123.11:19200/_cat/nodes -uelastic
Enter host password for user 'elastic':
192.168.123.11 32 58 2 0.24 0.26 0.26 mdi * node-1
192.168.123.12 39 73 1 0.32 0.24 0.25 mdi - node-2
192.168.123.13 31 59 1 0.37 0.34 0.32 mdi - node-3
192.168.123.14 34 84 2 1.21 0.65 0.36 mdi - node-4

Enter the IP and port in a browser, supply the account and password, and you can see the node status:

Five. Password deletion and reset

Change password through API:

A low-version ES can only change passwords through the API; higher versions ship the elasticsearch-reset-password tool for this. The following shows how to change the elastic user's password through the API.

The permissions of the built-in users are as follows:

elastic account: has the superuser role and is the built-in super user.
kibana account: has the kibana_system role. The Kibana server uses this user to connect to and communicate with elasticsearch, submitting requests to access the cluster monitoring APIs and the .kibana index; it cannot access other indices.
logstash_system account: has the logstash_system role. Logstash uses this user when storing monitoring information in Elasticsearch.

[root@node1 bin]# curl -H "Content-Type:application/json" -XPOST -u elastic  'http://192.168.123.11:19200/_xpack/security/user/elastic/_password' -d '{ "password" : "123456" }'
Enter host password for user 'elastic':
{}

When changing the password, the original password must be provided for verification. On success an empty JSON object is returned, as shown above; the password is now 123456.

Password reset

Delete the security index, then generate the passwords again with /data/es/bin/x-pack/setup-passwords interactive. For example, the index can be deleted through es-head as shown below.
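If es-head is not at hand, the same reset can be sketched with curl. Assumptions in this sketch: the security index created by x-pack 6.x matches the pattern .security-*, and all stored passwords are lost until setup-passwords is re-run:

```shell
# Delete the x-pack security index (this removes ALL stored passwords!).
curl -u elastic -XDELETE 'http://192.168.123.11:19200/.security-*'
# Then re-create the built-in users' passwords interactively.
/data/es/bin/x-pack/setup-passwords interactive
```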

Six. Deploying and using es-head

Download address:
https://github.com/liufengji/es-head

This example uses 1.0.8 and 0.1.2

Installation steps:

Open Google Chrome's extension management page, or go directly to chrome://extensions/ in the browser.

Unzip the elasticsearch-head-master.zip file downloaded above. If you find a file with a .crx suffix, change the suffix to .rar and unpack it.

Click "Load unpacked" in the upper left corner and navigate to the unpacked directory.

Start using: click Connect, and a window pops up asking for the account and password. After entering them you can enter the head interface.

That wraps up the installation of a low-version es cluster, its security enhancement (password setting), and the es-head plug-in.

Origin blog.csdn.net/alwaysbefine/article/details/134902679