- All commands in this article run in a TLS-enabled environment. Adjust the node IPs and certificate paths to match your own environment; if your environment does not use certificates, drop the certificate-related flags.
- Unless otherwise noted, the etcdctl commands in this article use the default API, i.e. etcd API v2. The v3 commands differ slightly and may not match; refer to the official documentation: https://etcd.io/docs/
Using Etcd
- Example: create, query, and delete a key (/test/ok, value 11)
# Example: write data to etcd
ETCDCTL_API=3 etcdctl \
  --endpoints=https://172.16.10.70:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  put /test/ok 11

# Example: query data from etcd
ETCDCTL_API=3 etcdctl \
  --endpoints=https://172.16.10.70:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  get /test/ok

# Example: delete data from etcd
ETCDCTL_API=3 etcdctl \
  --endpoints=https://172.16.10.70:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  del /test/ok
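The TLS flags are repeated in every command above. A small wrapper function keeps the invocations short; this is only a sketch, with the endpoint and certificate paths taken from this article's examples (adjust them to your environment, and note the `etcd3` helper name is invented here for illustration):

```shell
#!/bin/sh
# Sketch of a helper around etcdctl with the v3 API; the endpoint and
# certificate paths are this article's example values.
ETCD_ENDPOINT="https://172.16.10.70:2379"

etcd3() {
  ETCDCTL_API=3 etcdctl \
    --endpoints="$ETCD_ENDPOINT" \
    --cacert=/etc/kubernetes/ssl/ca.pem \
    --cert=/etc/etcd/ssl/etcd.pem \
    --key=/etc/etcd/ssl/etcd-key.pem \
    "$@"
}

# Usage:
#   etcd3 put /test/ok 11
#   etcd3 get /test/ok
#   etcd3 del /test/ok
```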
Maintaining Etcd with curl
- View the version
curl -k --cert /etc/etcd/ssl/etcd.pem --key /etc/etcd/ssl/etcd-key.pem https://127.0.0.1:2379/version
- View the Prometheus metrics exposed by etcd; Prometheus can scrape this endpoint for monitoring
curl -k --cert /etc/etcd/ssl/etcd.pem --key /etc/etcd/ssl/etcd-key.pem https://127.0.0.1:2379/metrics
Viewing the version with etcdctl
- View the etcd and etcd API versions (etcd API v2)
etcdctl -v
- View the etcd and etcd API versions (etcd API v3)
ETCDCTL_API=3 etcdctl version
Removing an Etcd node
- Query the node IDs
etcdctl \
  --endpoints=https://172.16.10.70:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  member list
340acbd004e6bcdb: name=etcd3 peerURLs=https://172.16.10.72:2380 clientURLs=https://172.16.10.72:2379 isLeader=false
9784cb04cceb3a48: name=etcd1 peerURLs=https://172.16.10.70:2380 clientURLs=https://172.16.10.70:2379 isLeader=true
ba343177666dd96e: name=etcd2 peerURLs=https://172.16.10.71:2380 clientURLs=https://172.16.10.71:2379 isLeader=false
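When scripting the removal, the member ID can be pulled out of the `member list` output by name. A sketch using awk (the helper name is made up for illustration):

```shell
# Sketch: extract a member ID from `etcdctl member list` output by its
# etcd name. Reads the listing on stdin, prints the matching ID.
member_id_by_name() {
  awk -v want="name=$1" '$2 == want { sub(/:$/, "", $1); print $1 }'
}

# Usage (against the listing shown above):
#   etcdctl ... member list | member_id_by_name etcd3
#   -> 340acbd004e6bcdb
```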
- Remove the node, e.g. remove etcd3
etcdctl \
  --endpoints=https://172.16.10.70:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  member remove 340acbd004e6bcdb
- On the remaining nodes, edit the configuration file etcd.conf and remove the deleted node from the ETCD_INITIAL_CLUSTER parameter, then restart the etcd service
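Editing ETCD_INITIAL_CLUSTER can also be scripted. A hedged sketch using sed (the helper name is invented for illustration; it keys on the leading comma, so it assumes the removed member is not the first entry, and it keeps a .bak copy of the file):

```shell
# Sketch: strip a removed member from the ETCD_INITIAL_CLUSTER line of an
# etcd.conf-style file. $1 = member name, $2 = peer URL, $3 = config file.
# Assumes the member is not the first entry (the pattern includes the
# leading comma); a .bak backup of the file is written by sed.
remove_member_from_conf() {
  sed -i.bak "s|,$1=$2||" "$3"
}

# Usage:
#   remove_member_from_conf etcd3 https://172.16.10.72:2380 /etc/etcd/etcd.conf
```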
Adding an Etcd node
Re-adding a failed existing node to the cluster (example: re-adding etcd3)
Remove the failed node from the cluster
- On any etcd node, query the node IDs and remove the failed node's ID, as in the removal steps above
- Delete the data on the target node
# Stop the etcd service on the target node
systemctl stop etcd
# Back up the data before deleting it
cd /var/lib/ && mkdir -p etcd_bak && tar -czvf etcd_bak/etcd_`date +%Y%m%d%H%M%S`.tar.gz etcd
# Delete the node data
rm -rf /var/lib/etcd/*
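The backup-and-delete step above can be wrapped so the data is only wiped when the archive was actually written. A sketch (the function name is invented for illustration):

```shell
# Sketch: archive an etcd data directory into a backup directory, then wipe
# the data only if tar succeeded. $1 = data dir, $2 = backup dir.
backup_then_wipe() {
  data=$1; bak=$2
  mkdir -p "$bak"
  if tar -czf "$bak/etcd_$(date +%Y%m%d%H%M%S).tar.gz" -C "$(dirname "$data")" "$(basename "$data")"; then
    # ${data:?} aborts if the variable is empty, guarding the rm -rf
    rm -rf "${data:?}"/*
  fi
}

# Usage:
#   backup_then_wipe /var/lib/etcd /var/lib/etcd_bak
```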
Edit the target node's configuration file and set --initial-cluster-state to existing (otherwise a new node ID is generated that will not match the original, and the node will fail to join the cluster)
vim /etc/etcd/etcd.conf

[member]
ETCD_NAME=etcd3
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_SNAPSHOT_COUNT="100"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://172.16.10.72:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.10.72:2379,https://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"

# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.10.72:2380"
ETCD_INITIAL_CLUSTER="etcd1=https://172.16.10.70:2380,etcd2=https://172.16.10.71:2380,etcd3=https://172.16.10.72:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.10.72:2379"

# [security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/kubernetes/ssl/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/kubernetes/ssl/ca.pem"
ETCD_PEER_AUTO_TLS="true"
Add the node back to the cluster; supply the target node's etcd name and peer URL
etcdctl \
  --endpoints=https://172.16.10.70:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  member add etcd3 https://172.16.10.72:2380
Start the etcd service on the target node
systemctl start etcd && systemctl status etcd
View the cluster health status
etcdctl \
  --endpoints=https://172.16.10.70:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  cluster-health
Snapshot backup of Etcd
ETCDCTL_API=3 etcdctl \
  --endpoints=https://172.16.10.70:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  snapshot save /tmp/snapshot_`date +%Y%m%d%H%M%S`.db

ETCDCTL_API=3 selects the v3 etcd API. Note: ETCDCTL_API=3 must be set for the backup to work; without it, no backup is made.
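If the snapshot command above is run periodically (e.g. from cron), old snapshots accumulate. A hypothetical retention helper, keeping only the newest N snapshot files (the function name and directory layout are assumptions for illustration; snapshot filenames from the date-stamped command above contain no spaces, which the pipeline relies on):

```shell
# Sketch: keep only the newest N snapshot files in a directory, deleting
# the rest. $1 = snapshot dir, $2 = number of snapshots to keep.
prune_snapshots() {
  ls -1t "$1"/snapshot_*.db 2>/dev/null | tail -n +"$(($2 + 1))" | while IFS= read -r f; do
    rm -f "$f"
  done
}

# Usage:
#   prune_snapshots /tmp 7
```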
Restoring an Etcd cluster from a snapshot backup (must be performed on every node)
- Stop the etcd service
systemctl stop etcd
- Delete the current etcd data (note that this backs it up first)
cd /var/lib/ && mkdir -p etcd_bak && tar -czvf etcd_bak/etcd_`date +%Y%m%d%H%M%S`.tar.gz etcd --remove-files
- Restore from the snapshot
ETCDCTL_API=3 etcdctl \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  --name etcd1 \
  --data-dir=/var/lib/etcd \
  --initial-cluster etcd1=https://172.16.10.70:2380,etcd2=https://172.16.10.71:2380,etcd3=https://172.16.10.72:2380 \
  --initial-cluster-token k8s-etcd-cluster \
  --initial-advertise-peer-urls https://172.16.10.70:2380 \
  snapshot restore /tmp/2019-12-18_snapshot.db
--name: the etcd name of the current node (not the hostname)
--data-dir: the data directory of the current node
--initial-cluster: the peer access addresses of all cluster nodes; example: etcd1=https://172.16.10.70:2380,etcd2=https://172.16.10.71:2380,etcd3=https://172.16.10.72:2380
--initial-cluster-token: the token for communication between cluster nodes
--initial-advertise-peer-urls: the address the current node advertises to the other nodes
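Since the restore must run on every node with its own --name and --initial-advertise-peer-urls, the per-node command can be generated and reviewed before running it. A sketch (the helper name is invented; the cluster list and token are this article's example values, and the TLS flags are omitted for brevity):

```shell
# Sketch: print the per-node snapshot-restore command for review.
# $1 = node name, $2 = that node's peer URL, $3 = snapshot file.
# Add the --cacert/--cert/--key flags from the examples above as needed.
restore_cmd() {
  echo "ETCDCTL_API=3 etcdctl --name $1 --data-dir=/var/lib/etcd" \
       "--initial-cluster etcd1=https://172.16.10.70:2380,etcd2=https://172.16.10.71:2380,etcd3=https://172.16.10.72:2380" \
       "--initial-cluster-token k8s-etcd-cluster" \
       "--initial-advertise-peer-urls $2 snapshot restore $3"
}

# Usage:
#   restore_cmd etcd2 https://172.16.10.71:2380 /tmp/2019-12-18_snapshot.db
```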
- Start the etcd service on all nodes
systemctl start etcd
- View the cluster health status
etcdctl \
  --endpoints=https://172.16.10.70:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  cluster-health
Restoring data from the db file when there is no snapshot backup
- If the current etcd cluster fails and there is no snapshot backup file, the data can be restored from the db file in the data directory
- A db file copied out of the data directory carries no integrity hash, so the --skip-hash-check=true flag is needed to skip the integrity check
ETCDCTL_API=3 etcdctl \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  --name etcd3 \
  --data-dir=/var/lib/etcd \
  --initial-cluster etcd1=https://172.16.10.70:2380,etcd2=https://172.16.10.71:2380,etcd3=https://172.16.10.72:2380 \
  --initial-cluster-token k8s-etcd-cluster \
  --initial-advertise-peer-urls https://172.16.10.72:2380 \
  --skip-hash-check=true \
  snapshot restore /var/lib/etcd_bak/etcd/member/snap/db

The flags have the same meaning as in the snapshot restore above, with --name and --initial-advertise-peer-urls set for the current node (etcd3 in this example).