Environment planning
Overview
The virtual machines host a small cluster of one master and two nodes; etcd is installed on the master and both nodes to form a three-member etcd cluster.
Software selection and versions
Software | Version |
---|---|
Linux system | CentOS Linux release 7.3.1611 (Core) |
Kubernetes | 1.9 |
Docker | 17.12-ce |
etcd | 3.2.12 |
Server role assignment
Role | IP | Components | Configuration |
---|---|---|---|
master | 10.10.99.225 | kube-apiserver kube-controller-manager kube-scheduler etcd | 4 cores 4G |
node01 | 10.10.99.233 | kubelet kube-proxy docker flannel etcd | 4 cores 4G |
node02 | 10.10.99.228 | kubelet kube-proxy docker flannel etcd | 4 cores 4G |
Other settings
Disable SELinux on all servers.
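For reference, a minimal way to do that (immediate with setenforce, permanent after a reboot):
# Switch SELinux to permissive mode for the current boot
setenforce 0
# Disable it permanently (takes effect after reboot)
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config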
Install Docker
Perform the following operations on all nodes.
Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
Add docker-ce source
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install docker-ce
# List the available docker-ce versions
yum list docker-ce --showduplicates | sort -r
# Install the specified docker-ce version
yum install -y docker-ce-17.12.1.ce-1.el7.centos
Set up the daemon.json file
mkdir /etc/docker
vim /etc/docker/daemon.json
{
"registry-mirrors": [ "https://registry.docker-cn.com"],
"insecure-registries":["10.10.99.226:80"]
}
This sets a registry mirror inside China plus my own Harbor registry; the second entry (the private registry) is optional.
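daemon.json must be valid JSON or Docker will refuse to start; a quick syntax check with the stock Python on CentOS 7:
# Prints the parsed JSON on success, an error message on bad syntax
python -m json.tool /etc/docker/daemon.json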
Start the Docker service
systemctl start docker
systemctl status docker
systemctl enable docker
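As a quick sanity check (not part of the original steps), confirm the expected version is running and that the mirror settings from daemon.json were picked up:
docker version
# The mirrors from daemon.json appear under "Registry Mirrors"
docker info | grep -A 2 'Registry Mirrors'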
Self-signed TLS certificates
Create the certificates on the master, then distribute them to the nodes.
Components and certificates used
Component | Certificates used |
---|---|
etcd | ca.pem server.pem server-key.pem |
kube-apiserver | ca.pem server.pem server-key.pem |
kubelet | ca.pem ca-key.pem |
kube-proxy | ca.pem kube-proxy.pem kube-proxy-key.pem |
kubectl | ca.pem admin.pem admin-key.pem |
Download cfssl certificate generation tool
# Create a directory for the certificates
mkdir /root/ssl
cd /root/ssl
# Download the tools
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
# Move them to /usr/local/bin
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
cfssl generates the certificates, cfssljson turns cfssl's JSON output into certificate files, and cfssl-certinfo displays the contents of a certificate.
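A quick sanity check that the tools are installed and on the PATH:
# Should print the cfssl version and runtime
cfssl version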
CA configuration
Create a new CA configuration file
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
Create a new CA certificate signing request (CSR) file
cat > ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shenzhen",
"ST": "Guangzhou",
"O": "k8s",
"OU": "System"
}
]
}
EOF
Generate CA certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
This step generates two files: ca.pem and ca-key.pem.
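Since cfssl-certinfo was installed above, you can inspect the new CA certificate, for example to confirm the subject and expiry:
# Print the CA certificate's fields as JSON
cfssl-certinfo -cert ca.pem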
Generate server certificate and private key
cat > server-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"10.10.99.225",
"10.10.99.233",
"10.10.99.228",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shenzhen",
"ST": "Guangzhou",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
After this step, server.pem and server-key.pem are generated.
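It is worth confirming that every entry from the hosts list above made it into the certificate's SANs:
# The certificate's hosts are listed under "sans" in the JSON output
cfssl-certinfo -cert server.pem | grep -A 12 '"sans"'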
Generate admin certificate and private key
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shenzhen",
"ST": "Guangzhou",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
After this step, admin.pem and admin-key.pem are generated.
Generate kube-proxy certificate and private key
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Shenzhen",
"ST": "Guangzhou",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
After this step, kube-proxy.pem and kube-proxy-key.pem are generated.
Keep the .pem certificates; move the other generated files out of the way
mkdir /root/ssl/config
cd /root/ssl
ls | grep -v pem | xargs -I{} mv {} config
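Afterwards /root/ssl should contain only the eight .pem files plus the config directory:
ls /root/ssl
# admin-key.pem admin.pem ca-key.pem ca.pem config kube-proxy-key.pem kube-proxy.pem server-key.pem server.pem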
Deploy etcd cluster
etcd must be deployed on the master and on both nodes. Deploy it on the master first, then copy the binaries and configuration files to the nodes.
Download and unpack the etcd package
cd /root
wget https://github.com/coreos/etcd/releases/download/v3.2.12/etcd-v3.2.12-linux-amd64.tar.gz
tar -zxvf etcd-v3.2.12-linux-amd64.tar.gz
Create a cluster component installation directory
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
All cluster components live under this directory, which keeps them easy to manage:
the bin directory holds the binaries, cfg the configuration files, and ssl the certificates.
Move binary files to the specified directory
mv /root/etcd-v3.2.12-linux-amd64/etcd /opt/kubernetes/bin/
mv /root/etcd-v3.2.12-linux-amd64/etcdctl /opt/kubernetes/bin/
Create etcd configuration file
cat > /opt/kubernetes/cfg/etcd <<EOF
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.10.99.225:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.10.99.225:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.99.225:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.10.99.225:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.10.99.225:2380,etcd02=https://10.10.99.233:2380,etcd03=https://10.10.99.228:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
ETCD_NAME : name of this etcd member (etcd01 on the master)
ETCD_DATA_DIR : etcd data directory
ETCD_LISTEN_PEER_URLS : listen address for peer (cluster) traffic
ETCD_LISTEN_CLIENT_URLS : listen address for client traffic
ETCD_INITIAL_ADVERTISE_PEER_URLS : peer address advertised to the rest of the cluster
ETCD_ADVERTISE_CLIENT_URLS : client address advertised to clients
ETCD_INITIAL_CLUSTER : list of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN : cluster token, customizable
ETCD_INITIAL_CLUSTER_STATE : initial cluster state ("new" when bootstrapping)
Note: the file above is the master's configuration. For each node, change ETCD_NAME, ETCD_LISTEN_PEER_URLS, ETCD_LISTEN_CLIENT_URLS, ETCD_INITIAL_ADVERTISE_PEER_URLS, and ETCD_ADVERTISE_CLIENT_URLS to the node's own IP address, and make sure the member name matches the corresponding entry in ETCD_INITIAL_CLUSTER (a sed sketch follows the copy step below).
Create the etcd systemd unit file (the quoted 'EOF' keeps the ${...} variables literal, so systemd rather than the shell expands them)
cat > /usr/lib/systemd/system/etcd.service <<'EOF'
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/kubernetes/ssl/server.pem \
--key-file=/opt/kubernetes/ssl/server-key.pem \
--peer-cert-file=/opt/kubernetes/ssl/server.pem \
--peer-key-file=/opt/kubernetes/ssl/server-key.pem \
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
Copy the certificates into place
cp /root/ssl/server*pem /root/ssl/ca*pem /opt/kubernetes/ssl/
Start etcd service
systemctl daemon-reload
systemctl start etcd.service
systemctl enable etcd.service
The first start may look stuck: etcd keeps trying to connect to the other members listed in the cluster configuration, which are not deployed yet, so those connections fail. Just press Ctrl+C to get the shell back.
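To watch the startup progress instead of waiting blindly:
# Follow the etcd unit's log output
journalctl -u etcd -f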
Passwordless SSH between master and nodes (optional)
This step is not required, but it makes management easier; for example, the copy operations below can then run from the master to the nodes without password prompts.
Generate an SSH key pair on the master:
# Press Enter at every prompt
ssh-keygen
ssh-copy-id 10.10.99.228
ssh-copy-id 10.10.99.233
Copy the required etcd files from the master to the nodes and start etcd
# Copy the etcd binaries
scp /opt/kubernetes/bin/* 10.10.99.233:/opt/kubernetes/bin
scp /opt/kubernetes/bin/* 10.10.99.228:/opt/kubernetes/bin
# Copy the etcd configuration file
scp /opt/kubernetes/cfg/etcd 10.10.99.228:/opt/kubernetes/cfg
scp /opt/kubernetes/cfg/etcd 10.10.99.233:/opt/kubernetes/cfg
# Copy the certificates
scp /opt/kubernetes/ssl/* 10.10.99.233:/opt/kubernetes/ssl
scp /opt/kubernetes/ssl/* 10.10.99.228:/opt/kubernetes/ssl
# Copy the systemd unit file
scp /usr/lib/systemd/system/etcd.service 10.10.99.228:/usr/lib/systemd/system
scp /usr/lib/systemd/system/etcd.service 10.10.99.233:/usr/lib/systemd/system
# Start etcd on each node
systemctl daemon-reload
systemctl start etcd.service
systemctl enable etcd.service
Note: before starting, each node's etcd configuration file must be edited to use that node's own member name and IP.
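A minimal sketch of that edit for node01 (10.10.99.233), assuming the file was copied as-is from the master; it renames the member to etcd02 and swaps in the node's IP on every line except the ETCD_INITIAL_CLUSTER member list, which must stay unchanged (node02 uses etcd03 and 10.10.99.228):
# Run on node01 before starting etcd
sed -i -e 's/^ETCD_NAME=.*/ETCD_NAME="etcd02"/' \
    -e '/^ETCD_INITIAL_CLUSTER=/!s/10.10.99.225/10.10.99.233/g' \
    /opt/kubernetes/cfg/etcd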
Test etcd cluster status
# Add the kubernetes bin directory to the system PATH
echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
source /etc/profile
# Check the cluster's health
etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://10.10.99.225:2379,https://10.10.99.228:2379,https://10.10.99.233:2379" cluster-health
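As a further check, you can write a test key and read it back; this etcdctl build defaults to the v2 API, which provides set and get (the /test key here is just an example):
# Write a key, then read it back
etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://10.10.99.225:2379" set /test "hello"
etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://10.10.99.225:2379" get /test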