Kubernetes 1.13 new features
kubeadm (GA) simplifies Kubernetes cluster management
kubeadm is the tool most Kubernetes engineers should use for managing the cluster lifecycle, from creation through configuration to upgrade, and it is now GA. kubeadm bootstraps production clusters on existing hardware and configures the Kubernetes core components according to best practices, providing a secure and simple way to join new nodes and supporting easy upgrades. Notably, the GA release also graduates advanced features, in particular pluggability and configurability. kubeadm is intended as a toolbox for both administrators and higher-level automation systems, and this release is an important step in that direction.
The Container Storage Interface (CSI) is now GA, having been introduced as alpha in v1.9 and beta in v1.10. With CSI the Kubernetes volume layer becomes truly extensible: third-party storage providers can write plugins that interoperate with Kubernetes without touching the core code. The specification itself has reached version 1.0.
CoreDNS reached general availability for DNS-based service discovery in 1.11. In 1.13, CoreDNS replaces kube-dns as the default DNS server for Kubernetes. CoreDNS is a general-purpose, authoritative DNS server that provides backward-compatible yet extensible integration with Kubernetes. It has fewer moving parts than the previous DNS server, since it is a single executable and a single process, and it supports flexible use cases through custom DNS entries. Being written in Go, it is also memory-safe.
First, official documentation
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#downloads-for-v1131
https://kubernetes.io/docs/home/?path=users&persona=app-developer&level=foundational
https://github.com/etcd-io/etcd
https://shengbao.org/348.html
https://github.com/coreos/flannel
http://www.cnblogs.com/blogscc/p/10105134.html
https://blog.csdn.net/xiegh2014/article/details/84830880
https://blog.csdn.net/tiger435/article/details/85002337
https://www.cnblogs.com/wjoyxt/p/9968491.html
https://blog.csdn.net/zhaihaifei/article/details/79098564
http://blog.51cto.com/jerrymin/1898243
http://www.cnblogs.com/xuxinkun/p/5696031.html
Second, download links
Client Binaries
https://dl.k8s.io/v1.13.1/kubernetes-client-linux-amd64.tar.gz
Server Binaries
https://dl.k8s.io/v1.13.1/kubernetes-server-linux-amd64.tar.gz
Node Binaries
https://dl.k8s.io/v1.13.1/kubernetes-node-linux-amd64.tar.gz
etcd
https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
flannel
https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
Third, role assignment
Host         IP          Role         Components
k8s-master1  10.2.8.44   k8s-master   etcd, kube-apiserver, kube-controller-manager, kube-scheduler
k8s-node1    10.2.8.65   k8s-node     etcd, kubelet, docker, kube-proxy
k8s-node2    10.2.8.34   k8s-node     etcd, kubelet, docker, kube-proxy
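Optionally, the same mapping can be added to /etc/hosts on all three machines so the nodes can also reach each other by hostname; a minimal sketch, assuming the hostnames above:

cat >> /etc/hosts << EOF
10.2.8.44 k8s-master1
10.2.8.65 k8s-node1
10.2.8.34 k8s-node2
EOF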
Fourth, Master deployment
4.1 software download
wget https://dl.k8s.io/v1.13.1/kubernetes-server-linux-amd64.tar.gz
wget https://dl.k8s.io/v1.13.1/kubernetes-client-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
4.2 cfssl installation
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
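To confirm the cfssl tools are installed and on PATH, a quick check (version output varies by build):

cfssl version
which cfssl cfssljson cfssl-certinfo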
4.3 Creating etcd certificate
mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p
cd /k8s/etcd/ssl/
1) etcd CA configuration
cat << EOF | tee ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"etcd": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
2) etcd CA certificate signing request
cat << EOF | tee ca-csr.json
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
3) etcd server certificate signing request
cat << EOF | tee server-csr.json
{
"CN": "etcd",
"hosts": [
"10.2.8.44",
"10.2.8.65",
"10.2.8.34"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
4) Initialize the etcd CA and generate the CA certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

[root@elasticsearch01 ssl]# ls
ca-config.json  ca-csr.json  server-csr.json
[root@elasticsearch01 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2018/12/26 16:13:54 [INFO] generating a new CA key and certificate from CSR
2018/12/26 16:13:54 [INFO] generate received request
2018/12/26 16:13:54 [INFO] received CSR
2018/12/26 16:13:54 [INFO] generating key: rsa-2048
2018/12/26 16:13:54 [INFO] encoded CSR
2018/12/26 16:13:54 [INFO] signed certificate with serial number 144752911121073185391033754516204538929473929443
[root@elasticsearch01 ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server-csr.json
Generate the server certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server

[root@elasticsearch01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
2018/12/26 16:18:53 [INFO] generate received request
2018/12/26 16:18:53 [INFO] received CSR
2018/12/26 16:18:53 [INFO] generating key: rsa-2048
2018/12/26 16:18:54 [INFO] encoded CSR
2018/12/26 16:18:54 [INFO] signed certificate with serial number 388122587040599986639159163167557684970159030057
2018/12/26 16:18:54 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@elasticsearch01 ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem
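The issued certificate can be inspected with the cfssl-certinfo tool installed in section 4.2, for example to confirm that all three etcd node IPs appear as SANs:

cfssl-certinfo -cert server.pem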
4.4 etcd installation
1) Extract the archive
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/
2) Create the etcd main configuration file
vim /k8s/etcd/cfg/etcd.conf

#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/data1/etcd"
ETCD_LISTEN_PEER_URLS="https://10.2.8.44:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.2.8.44:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.2.8.44:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.2.8.44:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.2.8.44:2380,etcd02=https://10.2.8.65:2380,etcd03=https://10.2.8.34:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
3) Create the etcd systemd unit file
mkdir /data1/etcd
vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/data1/etcd/
EnvironmentFile=-/k8s/etcd/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /k8s/etcd/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\" --cert-file=\"${ETCD_CERT_FILE}\" --key-file=\"${ETCD_KEY_FILE}\" --trusted-ca-file=\"${ETCD_TRUSTED_CA_FILE}\" --client-cert-auth=\"${ETCD_CLIENT_CERT_AUTH}\" --peer-cert-file=\"${ETCD_PEER_CERT_FILE}\" --peer-key-file=\"${ETCD_PEER_KEY_FILE}\" --peer-trusted-ca-file=\"${ETCD_PEER_TRUSTED_CA_FILE}\" --peer-client-cert-auth=\"${ETCD_PEER_CLIENT_CERT_AUTH}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
4) Start etcd. Note: apply the same configuration (with the node-specific name and URLs adjusted) on etcd02 and etcd03 before starting, since the cluster only becomes healthy once a quorum of members is up.
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
5) Check the service
/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.2.8.44:2379,https://10.2.8.65:2379,https://10.2.8.34:2379" cluster-health
member c21df2258ce015e6 is healthy: got healthy result from https://10.2.8.34:2379
member d427109ed3caf9c3 is healthy: got healthy result from https://10.2.8.44:2379
member ec8c40660d3c1192 is healthy: got healthy result from https://10.2.8.65:2379
cluster is healthy
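Beyond cluster-health, a simple write/read against the v2 API (with the same certificates) confirms the cluster actually serves data; a quick smoke test that cleans up after itself:

ETCDCTL="/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints=https://10.2.8.44:2379"
$ETCDCTL set /smoke-test ok   # write a test key
$ETCDCTL get /smoke-test      # read it back; should print: ok
$ETCDCTL rm /smoke-test       # remove the test key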
4.5 Generating the Kubernetes certificates and private keys
1) Generate the Kubernetes CA certificate
cd /k8s/kubernetes/ssl
cat << EOF | tee ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat << EOF | tee ca-csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

[root@elasticsearch01 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2018/12/27 09:47:08 [INFO] generating a new CA key and certificate from CSR
2018/12/27 09:47:08 [INFO] generate received request
2018/12/27 09:47:08 [INFO] received CSR
2018/12/27 09:47:08 [INFO] generating key: rsa-2048
2018/12/27 09:47:08 [INFO] encoded CSR
2018/12/27 09:47:08 [INFO] signed certificate with serial number 156611735285008649323551446985295933852737436614
[root@elasticsearch01 ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
2) Generate the apiserver certificate
cat << EOF | tee server-csr.json
{
"CN": "kubernetes",
"hosts": [
"10.254.0.1",
"127.0.0.1",
"10.2.8.44",
"10.2.8.65",
"10.2.8.34",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

[root@elasticsearch01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2018/12/27 09:51:56 [INFO] generate received request
2018/12/27 09:51:56 [INFO] received CSR
2018/12/27 09:51:56 [INFO] generating key: rsa-2048
2018/12/27 09:51:56 [INFO] encoded CSR
2018/12/27 09:51:56 [INFO] signed certificate with serial number 399376216731194654868387199081648887334508501005
2018/12/27 09:51:56 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@elasticsearch01 ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem
3) Generate the kube-proxy certificate
cat << EOF | tee kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

[root@elasticsearch01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2018/12/27 09:52:40 [INFO] generate received request
2018/12/27 09:52:40 [INFO] received CSR
2018/12/27 09:52:40 [INFO] generating key: rsa-2048
2018/12/27 09:52:40 [INFO] encoded CSR
2018/12/27 09:52:40 [INFO] signed certificate with serial number 633932731787505365511506755558794469389165123417
2018/12/27 09:52:40 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@elasticsearch01 ssl]# ls
ca-config.json  ca-csr.json  ca.pem          kube-proxy-csr.json  kube-proxy.pem  server-csr.json  server.pem
ca.csr          ca-key.pem   kube-proxy.csr  kube-proxy-key.pem   server.csr      server-key.pem
4.6 kubernetes server deployment
The Kubernetes master node runs the following components: kube-apiserver, kube-scheduler, and kube-controller-manager. kube-scheduler and kube-controller-manager can run in clustered mode: leader election selects one working process while the other instances block, so a three-master setup provides high availability.
1) Unzip the file
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/
2) Deploy the kube-apiserver component. First create the TLS bootstrapping token
[root@elasticsearch01 bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
f2c50331f07be89278acdaf341ff1ecc
vim /k8s/kubernetes/cfg/token.csv
f2c50331f07be89278acdaf341ff1ecc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
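The token file is a CSV whose columns are token, user name, user UID, and (optionally) group names. If the token ever needs to be regenerated, both this file and the bootstrap kubeconfig created later (section 5.2) must use the new value; a sketch that rewrites the file in one step:

BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /k8s/kubernetes/cfg/token.csv
cat /k8s/kubernetes/cfg/token.csv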
Create the apiserver configuration file
vim /k8s/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://10.2.8.44:2379,https://10.2.8.65:2379,https://10.2.8.34:2379 \
--bind-address=10.2.8.44 \
--secure-port=6443 \
--advertise-address=10.2.8.44 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
Create the apiserver systemd unit file
vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start the service
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver

[root@elasticsearch01 bin]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-12-27 14:41:22 CST; 20s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 22060 (kube-apiserver)
   CGroup: /system.slice/kube-apiserver.service
           └─22060 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.2.8.44:2379,https://10.2....
[root@elasticsearch01 bin]# ps -ef | grep kube-apiserver
root     22060     1  5 14:41 ?        00:00:14 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.2.8.44:2379,https://10.2.8.65:2379,https://10.2.8.34:2379 --bind-address=10.2.8.44 --secure-port=6443 --advertise-address=10.2.8.44 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/k8s/etcd/ssl/ca.pem --etcd-certfile=/k8s/etcd/ssl/server.pem --etcd-keyfile=/k8s/etcd/ssl/server-key.pem
[root@elasticsearch01 bin]# netstat -tulpn | grep kube-apiserve
tcp        0      0 10.2.8.44:6443          0.0.0.0:*               LISTEN      22060/kube-apiserve
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      22060/kube-apiserve
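Since netstat shows the insecure port listening on 127.0.0.1:8080, the apiserver can also be smoke-tested locally without certificates:

curl http://127.0.0.1:8080/healthz    # should print: ok
curl http://127.0.0.1:8080/version    # prints the build information as JSON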
3) Deploy the kube-scheduler component. Create the kube-scheduler configuration file
vim /k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
Parameter notes:
--address: listen on 127.0.0.1:10251 for http /metrics requests; kube-scheduler does not currently support serving https;
--kubeconfig: path to the kubeconfig file that kube-scheduler uses to connect to and authenticate against kube-apiserver;
--leader-elect=true: enables leader election when running in cluster mode; the instance elected leader does the work while the others block.
Create the kube-scheduler systemd unit file
vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start the service
systemctl daemon-reload
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service

[root@elasticsearch01 bin]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-12-27 15:16:51 CST; 17s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 29026 (kube-scheduler)
   CGroup: /system.slice/kube-scheduler.service
           └─29026 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
4) Deploy the kube-controller-manager component. Create the kube-controller-manager configuration file
vim /k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"
Create the kube-controller-manager systemd unit file
vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start the service
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

[root@elasticsearch01 bin]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-12-27 15:19:19 CST; 11s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 29510 (kube-controller)
   CGroup: /system.slice/kube-controller-manager.service
           └─29510 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=tru..
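In this version kube-scheduler and kube-controller-manager also serve plain-http health endpoints on their default local ports (10251 and 10252 respectively), so both can be checked with curl:

curl http://127.0.0.1:10251/healthz   # kube-scheduler; should print: ok
curl http://127.0.0.1:10252/healthz   # kube-controller-manager; should print: ok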
4.7 Verify the master services
Set Environment Variables
vim /etc/profile
PATH=/k8s/kubernetes/bin:$PATH
source /etc/profile
View master service status
kubectl get cs,nodes

[root@elasticsearch01 bin]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}
Fifth, Node deployment
The Kubernetes worker nodes run the following components:
Docker
kubelet
kube-proxy
flannel
system environment
CentOS Linux Release 7.4.1708 (Core)
Docker version
Server Version: 18.09.0
Cgroup Driver: cgroupfs
5.1 Docker installation
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl start docker && systemctl enable docker
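To confirm the installed version and cgroup driver match the environment listed above, you can check:

docker info | grep -E 'Server Version|Cgroup Driver'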
5.2 kubelet component deployment
kubelet runs on each worker node: it receives requests from kube-apiserver and manages Pod containers, and it executes interactive commands such as exec, run, and logs. On startup, kubelet automatically registers node information with kube-apiserver, and its built-in cadvisor collects and reports node resource usage. For security, this deployment only opens the secure https port to receive requests, which are authenticated and authorized to guard against unauthorized access (e.g. by apiserver or heapster).
1) Install the binary files
wget https://dl.k8s.io/v1.13.1/kubernetes-node-linux-amd64.tar.gz
tar zxvf kubernetes-node-linux-amd64.tar.gz
cd kubernetes/node/bin/
cp kube-proxy kubelet kubectl /k8s/kubernetes/bin/
2) Copy the relevant certificates to the node
[root@elasticsearch01 ssl]# scp *.pem 10.2.8.65:$PWD
[email protected]'s password:
ca-key.pem 100% 1679 914.6KB/s 00:00
ca.pem 100% 1359 1.0MB/s 00:00
kube-proxy-key.pem 100% 1675 1.2MB/s 00:00
kube-proxy.pem 100% 1403 1.1MB/s 00:00
server-key.pem 100% 1679 809.1KB/s 00:00
server.pem
3) Create the kubelet bootstrap kubeconfig via a script
vim /k8s/kubernetes/cfg/environment.sh
#!/bin/bash
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=f2c50331f07be89278acdaf341ff1ecc
KUBE_APISERVER="https://10.2.8.44:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/k8s/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=/k8s/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=/k8s/kubernetes/ssl/kube-proxy.pem \
--client-key=/k8s/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Execute the script
[root@elasticsearch02 cfg]# sh environment.sh
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@elasticsearch02 cfg]# ls
bootstrap.kubeconfig environment.sh kube-proxy.kubeconfig
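The generated files can be sanity-checked before use; kubectl config view prints the cluster address and redacts the embedded certificate data:

kubectl config view --kubeconfig=bootstrap.kubeconfig
kubectl config view --kubeconfig=kube-proxy.kubeconfig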
4) Create a kubelet parameter configuration template file
vim /k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.2.8.65
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.254.0.10"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
anonymous:
enabled: true
5) Create the kubelet options file
vim /k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.2.8.65 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
6) Create the kubelet systemd unit file
vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
7) Bind kubelet-bootstrap to the system cluster role
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
Note: kubectl connects to localhost:8080 by default, so this must be run on the master
[root@elasticsearch01 ssl]# kubectl create clusterrolebinding kubelet-bootstrap \
> --clusterrole=system:node-bootstrapper \
> --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
8) Start the service
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
[root@elasticsearch02 cfg]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-12-27 17:34:30 CST; 18s ago
 Main PID: 24676 (kubelet)
   Memory: 88.6M
   CGroup: /system.slice/kubelet.service
           └─24676 /k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=10.2.8.44 --kubeconfig=/k8s/kubernetes...
9) The master accepts the kubelet CSR request. CSRs can be approved manually or automatically; automatic approval is recommended, and since v1.8 certificates can be rotated automatically after the CSR is approved. The following shows the manual approval procedure. View the CSR list:
[root@elasticsearch01 ssl]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc 102s kubelet-bootstrap Pending
Approve the node's CSR
[root@elasticsearch01 ssl]# kubectl certificate approve node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc
certificatesigningrequest.certificates.k8s.io/node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc approved
View CSR
[root@elasticsearch01 ssl]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-ij3py9j-yi-eoa8sOHMDs7VeTQtMv0N3Efj3ByZLMdc 5m13s kubelet-bootstrap Approved,Issued
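When several nodes join at once, each pending CSR can be approved as above, or all of them in one shot; a convenience sketch (review the list first, since this approves everything pending):

kubectl get csr -o name | xargs -r kubectl certificate approve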
5.3 kube-proxy component deployment
kube-proxy runs on all worker nodes. It watches the apiserver for changes to Service and Endpoint objects and creates routing rules that load-balance traffic across each service's backends.
1) Create the kube-proxy configuration file
vim /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.2.8.65 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
2) Create the kube-proxy systemd unit file
vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
3) Start the service
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
[root@elasticsearch02 cfg]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-12-27 18:31:42 CST; 11s ago
 Main PID: 5376 (kube-proxy)
   Memory: 40.9M
   CGroup: /system.slice/kube-proxy.service
           ‣ 5376 /k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=10.2.8.44 --cluster-cidr=10.254.0.0/...
4) Check cluster status
[root@elasticsearch01 cfg]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.2.8.65 Ready <none> 9m15s v1.13.1
5) Deploy node 10.2.8.34 in the same way and approve its CSR; after approval the kubelet-client certificate is generated
Note: if kubelet or kube-proxy was misconfigured during this period (for example a wrong listening IP, or a hostname error causing "node not found"), delete the kubelet-client certificates and restart the kubelet service to restart the CSR certification
[root@elasticsearch03 kubernetes]# ls ssl
ca-key.pem kubelet-client-2018-12-27-20-13-52.pem kubelet.crt kube-proxy-key.pem server-key.pem
ca.pem kubelet-client-current.pem kubelet.key kube-proxy.pem server.pem
[root@elasticsearch01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.2.8.34 Ready <none> 13h v1.13.1
10.2.8.65 Ready <none> 14h v1.13.1
Sixth, Flanneld network deployment
Kubernetes provides no network by default, so Pods on different Nodes cannot communicate; only Pods within the same Node can. For brevity, this deployment uses flannel, and the flanneld service must be started before docker. When the flannel service starts it does the following: it fetches the network configuration from etcd, divides out a subnet for the node, registers the subnet information in etcd, and records it in /run/flannel/subnet.env.
6.1 Register the network segment in etcd
[root@elasticsearch02 cfg]# /k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.2.8.44:2379,https://10.2.8.65:2379,https://10.2.8.34:2379" set /k8s/network/config '{ "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}
The current flanneld version (v0.10.0) does not support the etcd v3 API, so the etcd v2 API is used to write the network segment configuration. The Pod segment written here must be the ${CLUSTER_CIDR} (/16) address range and must match the --cluster-cidr parameter value of kube-controller-manager.
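The key can be read back with the same v2 etcdctl invocation to confirm the configuration was stored:

/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.2.8.44:2379,https://10.2.8.65:2379,https://10.2.8.34:2379" get /k8s/network/config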
6.2 flannel installation
1) Extract the package
tar -xvf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/
2) Configure flanneld
vim /k8s/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=https://10.2.8.44:2379,https://10.2.8.65:2379,https://10.2.8.34:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem -etcd-prefix=/k8s/network"
Create the flanneld systemd unit file
vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
Notes:
The mk-docker-opts.sh script writes the Pod subnet information that flanneld allocates into /run/flannel/subnet.env; docker reads this file as environment variables on subsequent startups and uses them to configure the docker0 bridge.
flanneld uses the interface of the system default route to communicate with other nodes; on nodes with multiple network interfaces (e.g. internal plus public), the communication interface can be specified with the -iface parameter.
flanneld requires root privileges at runtime.
3) Configure Docker to start on the allocated subnet: set EnvironmentFile=/run/flannel/subnet.env and change ExecStart to /usr/bin/dockerd $DOCKER_NETWORK_OPTIONS, as follows
vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
4) Start the services. Note: stop docker before starting flannel, so that flannel can reconfigure the docker0 bridge used by docker and kubelet.
systemctl daemon-reload
systemctl stop docker
systemctl start flanneld
systemctl enable flanneld
systemctl start docker
systemctl restart kubelet
systemctl restart kube-proxy
5) Verify the service
[root@elasticsearch02 bin]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=10.254.35.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.254.35.1/24 --ip-masq=false --mtu=1450"
[root@elasticsearch02 bin]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 52:54:00:a4:ca:ff brd ff:ff:ff:ff:ff:ff
inet 10.2.8.65/24 brd 10.2.8.255 scope global eth0
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:06:0a:ab:32 brd ff:ff:ff:ff:ff:ff
inet 10.254.35.1/24 brd 10.254.35.255 scope global docker0
valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether 72:59:dc:2b:0a:21 brd ff:ff:ff:ff:ff:ff
inet 10.254.35.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
[root@elasticsearch01 k8s]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.2.8.34 Ready <none> 16h v1.13.1
10.2.8.65 Ready <none> 18h v1.13.1
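As a final connectivity check, one node's docker0 address should be reachable from the other nodes across the vxlan overlay; for example, from node 10.2.8.34, ping the docker0 address reported in subnet.env on 10.2.8.65 (the allocated subnets on your cluster will differ):

ping -c 2 10.254.35.1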
Series index:
- kubernetes1.13.1 + etcd3.3.10 + flanneld0.10 cluster deployment
- kubernetes1.13.1 deploying kubernetes-dashboard v1.10.1
- kubernetes1.13.1 deploying coredns
- kubernetes1.13.1 deploying ingress-nginx to forward the dashboard and configure https
- kubernetes1.13.1 deploying metrics-server0.3.1
- kubernetes1.13.1 cluster using ceph rbd block storage
- kubernetes1.13.1 cluster with ceph rbd deploying the latest version of jenkins
Reference Documents
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#deploying-the-dashboard-ui
https://github.com/kubernetes/kubernetes/tree/7f23a743e8c23ac6489340bbb34fa6f1d392db9d/cluster/addons/dashboard
https://github.com/kubernetes/dashboard
https://blog.csdn.net/nklinsirui/article/details/80581286
https://github.com/kubernetes/dashboard/issues/3472