Elasticsearch 6.8 cluster: enabling x-pack security authentication (username/password login) and configuring Kibana, Logstash, Filebeat, and Cerebro

1. ElasticSearch cluster 6.8 enables x-pack security authentication login (username and password)

Elasticsearch 6.8 does not enable x-pack security authentication by default. In versions prior to 6.8, x-pack security required a paid license and could otherwise only be used for a 30-day trial; from 6.8 onward it is included in the free Basic license.

2. Conditions for enabling x-pack security authentication login

xpack.security.enabled: true
For a cluster, x-pack security authentication also requires TLS on the transport (TCP) layer; a single-node deployment does not need the TLS settings.

3. Opening steps and commands

  1. Generate CA certificate
  2. Generate certificates and private keys for each node in the cluster through ca
  3. Modify the elasticsearch.yml configuration file to enable x-pack security authentication login
  4. Restart the ElasticSearch service
  5. Create the elasticsearch.keystore file (the default created location is /etc/elasticsearch/elasticsearch.keystore)
  6. Set passwords for the built-in users
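Assuming an RPM install (binaries under /usr/share/elasticsearch, configuration under /etc/elasticsearch), the six steps above condense to roughly the following sketch:

```shell
# Sketch of the full sequence on an RPM install; each step is detailed below.
cd /etc/elasticsearch

# 1-2. Generate the CA, then a node certificate signed by it
/usr/share/elasticsearch/bin/elasticsearch-certutil ca
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

# 3. Append the x-pack settings to elasticsearch.yml (section 3 below), then:
# 4. Restart the service
systemctl restart elasticsearch

# 5. Create the keystore only if it does not already exist
test -f /etc/elasticsearch/elasticsearch.keystore || \
  /usr/share/elasticsearch/bin/elasticsearch-keystore create

# 6. Set the built-in users' passwords interactively
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
```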

1. Generate a CA certificate (executed on one of the nodes)

Run this on one of the nodes.
Switch to the /etc/elasticsearch directory and execute Elasticsearch's bundled certutil tool:

cd /etc/elasticsearch
/usr/share/elasticsearch/bin/elasticsearch-certutil ca      # no filename or passphrase needed; press Enter at each prompt

This generates the CA certificate file elastic-stack-ca.p12.

2. Generate a certificate and private key for the cluster through ca (executed only on one node)

Run this on the node where the CA certificate was generated,
using the bundled tool:

/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
The output filename and passphrase may be left empty; press Enter at each prompt to accept the defaults.

This generates the certificate-and-private-key file:

elastic-certificates.p12 

Make the file readable by Elasticsearch, either with  chmod 660 elastic-certificates.p12  or by changing its owner with  chown elasticsearch:elasticsearch elastic-certificates.p12 . Otherwise Elasticsearch will report at startup that the file cannot be read. Keep the file
in the /etc/elasticsearch directory.

Distribute elastic-certificates.p12 to the other machines in the cluster and place it under /etc/elasticsearch (the default configuration directory for an RPM install of Elasticsearch).
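Distribution can be scripted; here is a sketch using scp/ssh, where the host names node2 and node3 are placeholders for your own cluster members:

```shell
# Copy the shared certificate to each remaining node and fix its ownership
# and permissions (host names are placeholders -- substitute your own).
for node in node2 node3; do
  scp /etc/elasticsearch/elastic-certificates.p12 "${node}:/etc/elasticsearch/"
  ssh "${node}" "chown elasticsearch:elasticsearch /etc/elasticsearch/elastic-certificates.p12 \
                 && chmod 660 /etc/elasticsearch/elastic-certificates.p12"
done
```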

3. Modify the elasticsearch.yml configuration file to enable x-pack security authentication login (each node in the cluster must be configured)

Append the following to the end of elasticsearch.yml:

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

4. Restart the ElasticSearch service

Elasticsearch must be restarted for the configuration changes to take effect:

systemctl restart elasticsearch

5. Create elasticsearch.keystore file (can be skipped if it exists)

The default already created location is /etc/elasticsearch/elasticsearch.keystore
If it already exists, you can ignore this step

# Check whether elasticsearch.keystore exists
ls /etc/elasticsearch/elasticsearch.keystore 

# If it does not exist, create it
/usr/share/elasticsearch/bin/elasticsearch-keystore create
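Once created, the keystore's entries can be listed to confirm it is in place (a fresh 6.x keystore typically contains a keystore.seed entry):

```shell
# List the entries stored in the Elasticsearch keystore.
/usr/share/elasticsearch/bin/elasticsearch-keystore list
```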

6. Set the password (executed only on one of the nodes)

6.1 How to find the master node

Method 1:  Query the master in a browser or with curl on Linux ( this will not work yet, because authentication is now required and no password has been set )

# The request requires a username and password; since none has been set yet, this method fails 
http://ip:9200/_cat/master 
or from the command line 
curl -XGET "http://ip:9200/_cat/master"

Method 2:  Shut down the other node services, leaving only one node running; the surviving node is then the master. This is necessary because, after the elasticsearch.yml changes above, web access requires a username and password, so the master node cannot be identified over HTTP.

6.2 Set password

  • Setting the password means setting passwords for the built-in users

Built-in users:

username                 purpose
elastic                  the superuser (equivalent of root)
kibana                   used by Kibana to connect to Elasticsearch
logstash_system          used by Logstash when storing monitoring information in Elasticsearch
beats_system             used by Beats when storing monitoring information in Elasticsearch
apm_system               used by the APM server when storing monitoring information in Elasticsearch
remote_monitoring_user   used by Metricbeat when collecting and storing monitoring information in Elasticsearch

Run this on only one node in the cluster; it creates an index named .security-6 by default.

Note: Before running the command, check the cluster logs. Because of the restart just performed, the cluster state may still be red; wait until it turns yellow (or green) before proceeding, since the command must be able to create an index. Otherwise index creation will fail.

Set password process

# To have the system generate the passwords automatically, replace interactive with auto
 
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive

Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y

Enter password for [elastic]: 
Reenter password for [elastic]: 
Enter password for [apm_system]: 
Reenter password for [apm_system]: 
Enter password for [kibana]: 
Reenter password for [kibana]: 
Enter password for [logstash_system]: 
Reenter password for [logstash_system]: 
Enter password for [beats_system]: 
Reenter password for [beats_system]: 
Enter password for [remote_monitoring_user]: 
Reenter password for [remote_monitoring_user]: 
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]

Once the elastic user's password has been set, the bootstrap-password flow (i.e. the interactive command-line prompt) is no longer valid; running elasticsearch-setup-passwords again throws an error:

Failed to authenticate user 'elastic' against http://192.168.200.220:9200/_xpack/security/_authenticate?pretty
Possible causes include:
 * The password for the 'elastic' user has already been changed on this cluster
 * Your elasticsearch node is running against a different keystore
   This tool used the keystore at /etc/elasticsearch/elasticsearch.keystore

ERROR: Failed to verify bootstrap password

6.3 After the setting is complete, start the elasticsearch service on other nodes

systemctl start elasticsearch

Observe the log and visit the es service in the browser
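Once the nodes are back up, a quick authenticated check confirms that security is on and the cluster is healthy (the password placeholder is yours to replace):

```shell
# Unauthenticated requests should now be rejected with HTTP 401...
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9200/
# ...while authenticated ones succeed; replace <password> with the elastic password.
curl -u elastic:<password> "http://localhost:9200/_cluster/health?pretty"
```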

4. Deleting or resetting the password

If you forget a previously set password, you can reset it with the following steps. (Re-running elasticsearch-setup-passwords at this point fails with the bootstrap-password error shown above.)

1. Steps to delete the password

  1. Turn off x-pack security verification in Elasticsearch:
    modify xpack.security.enabled  and  xpack.security.transport.ssl.enabled  in elasticsearch.yml to false
  2. Restart the Elasticsearch service
 systemctl restart elasticsearch
  3. After the restart, delete the .security-6 index with the following command

curl -XDELETE http://localhost:9200/.security-6 -u elastic:es@yky 
or 
delete it in the Kibana UI

2. Alternatively, reset the password using the API
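As a sketch, the 6.x change-password API can reset a built-in user's password; the placeholders below must be replaced with real values:

```shell
# Reset the kibana built-in user's password via the 6.x security API.
# <elastic-password> and <new-password> are placeholders to replace.
curl -u elastic:<elastic-password> \
  -H 'Content-Type: application/json' \
  -XPOST "http://localhost:9200/_xpack/security/user/kibana/_password" \
  -d '{"password": "<new-password>"}'
```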

5. Adding usernames and passwords for Kibana, Logstash, and related tools

1. kibana configuration username and password

# Add or modify the following two lines in kibana.yml 
 vi /etc/kibana/kibana.yml 
 elasticsearch.username: elastic 
 elasticsearch.password: {the password you set}
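To keep the password out of kibana.yml in plain text, the credentials can instead go into the Kibana keystore (available in 6.x; each add command prompts for the value):

```shell
# Store the Elasticsearch credentials in the Kibana keystore instead of kibana.yml.
/usr/share/kibana/bin/kibana-keystore create
/usr/share/kibana/bin/kibana-keystore add elasticsearch.username
/usr/share/kibana/bin/kibana-keystore add elasticsearch.password
```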

2. Logstash configuration username and password

Open the pipeline configuration file (.conf) and add user and password to the elasticsearch block inside output:

output { 
    elasticsearch { 
      hosts => ["10.68.24.136:9200","10.68.24.137:9200"] 
      index => "%{[indexName]}-%{+YYYY.MM.dd}" 
      user => "an authorized username" 
      password => "the corresponding password" 
    }
}
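Similarly, Logstash can read the password from its own secrets keystore (Logstash 6.2+) rather than storing it in the pipeline file; after adding an entry named ES_PWD, reference it as "${ES_PWD}" in the password setting:

```shell
# Store the Elasticsearch password in the Logstash keystore (prompts for the value),
# then use  password => "${ES_PWD}"  in the pipeline's elasticsearch output.
/usr/share/logstash/bin/logstash-keystore create
/usr/share/logstash/bin/logstash-keystore add ES_PWD
```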

This user must have the required permissions, otherwise  a 401 authorization error is reported :

[WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, 
but got an error. {:url=>"http://*****:9200/", 
:error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError,
 :error=>"Got response code '401' contacting Elasticsearch at URL 'http://***:9200/'"}

2.1 Select a user with permissions

  1. The configured user needs write, delete, and create-index permissions on the ES index; the corresponding privilege names are: write, delete, create_index ;
  2. To manage monitoring as well, add the manage_index_templates, monitor, manage_ilm  privileges;
  3. It is best to group these privileges into a role, for example named logstash_writer (any name works).

2.1.1 Method 1: Use the system built-in user

The easiest way is to use the superuser elastic (not very safe)

The built-in user logstash_system  is dedicated to monitoring the Logstash service and has no index read or write permissions.
The built-in user elastic  is the ES superuser (administrator)

2.1.2 Method 2: Create a new user

Create new users and roles and assign permissions

You can refer to the official configuration
Configuring Security in Logstash | Logstash Reference [7.16] | Elastic

2.2 Create a user role (if using built-in users, skip this step)

There are two ways: the UI and the API

2.2.1 Operation in kibana interface

Both users and roles can be created in Kibana: configure them under the Management > Roles menu in the Kibana interface 

  1. Create a new role logstash_writer, and then add cluster management permissions as: manage_index_templates, monitor, manage_ilm
  2. Add index permission for this role, select the index to be configured, and then the operation permissions of this role on the index are: write, delete, create_index

2.2.2 API method

POST _xpack/security/role/logstash_writer
{
  "cluster": ["manage_index_templates", "monitor", "manage_ilm"], 
  "indices": [
    {
      "names": [ "logstash-*" ], 
      "privileges": ["write","create","create_index","manage","manage_ilm"]  
    }
  ]
}
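After creating the role, a user can be assigned to it through the same 6.x security API; the username logstash_internal below is a placeholder (any name works):

```
POST _xpack/security/user/logstash_internal
{
  "password" : "<your password>",
  "roles" : [ "logstash_writer" ],
  "full_name" : "Internal Logstash user"
}
```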

2.3 Add monitoring to logstash (optional)

In logstash.yml, uncomment and adjust the following x-pack monitoring lines.

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: <the password you defined; comment this line out if there is no password>
xpack.monitoring.elasticsearch.hosts: ["http://ip1:9200","http://ip2:9200"]

3. Add password to filebeat

Open filebeat.yml and modify the following content

output.elasticsearch: 
  # Array of hosts to connect to. 
  hosts: ["192.168.0.130:9200", "192.168.0.131:9200", "192.168.0.132:9200"] 
  index: "xxxxx-log-%{+yyyy-MM-dd}" 
  # Optional protocol and basic auth credentials. 
  #protocol: "https"      # leave commented unless using HTTPS 
  username: "elastic"     # the default ES administrator user 
  password: "changeme"    # the password for that user
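Filebeat can verify connectivity and credentials before shipping any data, using its built-in test subcommand (available in 6.x):

```shell
# Check that Filebeat can reach and authenticate against the configured output;
# each check (connection, TLS if enabled, talking to the server) is reported
# as OK or as an error.
filebeat test output
```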

4. Add password to cerebro

Edit the configuration file ( vim /etc/cerebro/application.conf ) and set credentials inside the hosts = [ ... ] array:
hosts = [
  {
    host = "http://xxxxx:9200"
    name = "ES"
    headers-whitelist = [ "x-proxy-user", "x-proxy-roles", "X-Forwarded-For" ]

    auth = {
      username = "<username>"
      password = "<password>"
    }

  }

  {
    host = "http://xxxxxx2:9200"
    name = "ES"
    headers-whitelist = [ "x-proxy-user", "x-proxy-roles", "X-Forwarded-For" ]
    auth = {
      username = "<username>"
      password = "<password>"
    }
  }
 
]

Origin blog.csdn.net/yy4545/article/details/121957479