k8s build

Kubernetes official documentation: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/ . If you deploy on cloud hosts, be sure to open the required ports in the security group in advance.
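For reference, the ports this deployment uses (a hedged summary based on the components in the plan below, not an exhaustive list; adjust to your own setup):

6443          kube-apiserver (also exposed through the Nginx L4 load balancer / VIP)
2379-2380     etcd client and peer traffic
10250         kubelet
30000-32767   NodePort Services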

1. Server Planning

Role                     IP                                   Components
k8s-master1              192.168.31.63                        kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-master2              192.168.31.64                        kube-apiserver, kube-controller-manager, kube-scheduler
k8s-node1                192.168.31.65                        kubelet, kube-proxy, docker, etcd
k8s-node2                192.168.31.66                        kubelet, kube-proxy, docker, etcd
Load Balancer (Master)   192.168.31.61, 192.168.31.60 (VIP)   Nginx L4
Load Balancer (Backup)   192.168.31.62                        Nginx L4

2.1 System Initialization

Modify the host name:

hostnamectl set-hostname k8s-master1


Turn off the firewall:

# systemctl stop firewalld

# systemctl disable firewalld

 

Disable SELinux:

# setenforce 0   # temporary

# sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent

 

Disable swap:

# swapoff -a   # temporary

# vim /etc/fstab   # permanent (comment out the swap line)

 

Synchronize the system time:

# ntpdate time.windows.com
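Optionally (an addition not in the original article), keep the clock in sync with a cron entry, assuming a CentOS-style root crontab:

echo "*/30 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1" >> /var/spool/cron/root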

2.2 Deploy a three-node etcd cluster

TLS tools and etcd package download:

Link: https://pan.baidu.com/s/1kyC5KgsF5DB2fZK5UGPaQg
Extraction code: o101

# tar zxvf etcd.tar.gz

# cd etcd

# cp TLS/etcd/ssl/{ca,server,server-key}.pem ssl

 

Copy them to the three etcd nodes:

# scp -r etcd [email protected]:/opt

# scp etcd.service [email protected]:/usr/lib/systemd/system/

 

Log in to each of the three nodes and modify the node name and IPs in the configuration file (an example for the second node follows the listing):

# vi /opt/etcd/cfg/etcd.conf

#[Member]

ETCD_NAME="etcd-1"   # the name must be unique on each node

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="https://192.168.31.63:2380"   # internal IP of this node

ETCD_LISTEN_CLIENT_URLS="https://192.168.31.63:2379"

 

#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.63:2380"

ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.63:2379"

ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.63:2380,etcd-2=https://192.168.31.64:2380,etcd-3=https://192.168.31.65:2380"   # internal IPs of the three etcd nodes

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"   # state of the cluster ("new" when creating it)
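For example, on the second etcd node (192.168.31.64 in this plan) the per-node fields become:

ETCD_NAME="etcd-2"
ETCD_LISTEN_PEER_URLS="https://192.168.31.64:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.64:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.64:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.64:2379"

The [Clustering] member list and the cluster token stay the same on every node.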

 

# systemctl daemon-reload

# systemctl start etcd

# ps -ef | grep etcd   # view the etcd process

# systemctl enable etcd   # start on boot

# tail -f /var/log/messages   # view the system log

2.3 Check cluster status

 

# /opt/etcd/bin/etcdctl \

> --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \

> --endpoints="https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379" \   # replace with the internal IPs of your three etcd nodes

> cluster-health

If output like the following appears, the cluster is healthy:

member 37f20611ff3d9209 is healthy: got healthy result from https://192.168.31.63:2379

member b10f0bac3883a232 is healthy: got healthy result from https://192.168.31.64:2379

member b46624837acedac9 is healthy: got healthy result from https://192.168.31.65:2379

cluster is healthy

3. Deploy the Master Node

3.1 Generate the apiserver certificate

# cd TLS/k8s

 

Modify the hosts field of the certificate request file so it contains the IPs of all nodes (replace 192.168.31.60-66 below with your own internal IPs):

# vi server-csr.json

{

    "CN" "kubernetes"

    "hosts": [

      "10.0.0.1",

      "127.0.0.1",

      "Kubernetes"

      "kubernetes.default",

      "kubernetes.default.svc",

      "kubernetes.default.svc.cluster",

      "kubernetes.default.svc.cluster.local",

      "192.168.31.60", your network ip

      "192.168.31.61",

      "192.168.31.62",

      "192.168.31.63",

      "192.168.31.64",

      "192.168.31.65",

      "192.168.31.66"

    ],

    "key": {

        "algo": "rsa",

        "size": 2048

    },

    "names": [

        {

            "C": "CN",

            "L": "BeiJing",

            "ST": "BeiJing",

            "O": "k8s",

            "OU": "System"

        }

    ]

}

 

# ./generate_k8s_cert.sh

# ls *.pem

ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem

 

3.2 Deploy apiserver, controller-manager and scheduler

Complete the following steps on the Master node.

 

Download the binary package: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#v1161

Master installation package download: https://pan.baidu.com/s/1kyC5KgsF5DB2fZK5UGPaQg
Extraction code: o101

Binary file location: kubernetes/server/bin

 

# tar zxvf k8s-master.tar.gz

# cd kubernetes

# cp TLS/k8s/ssl/*.pem ssl

# cp -r kubernetes /opt

# cp kube-apiserver.service kube-controller-manager.service kube-scheduler.service /usr/lib/systemd/system

 

# cat /opt/kubernetes/cfg/kube-apiserver.conf

KUBE_APISERVER_OPTS="--logtostderr=false \

--v=2 \

--log-dir=/opt/kubernetes/logs \

--etcd-servers=https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379 \   # replace with the etcd nodes' internal IPs

--bind-address=192.168.31.63 \   # replace with the master node IP

--secure-port=6443 \

--advertise-address=192.168.31.63 \   # replace with the master node IP

……

 

# systemctl start kube-apiserver

# systemctl start kube-controller-manager

# systemctl start kube-scheduler

# systemctl enable kube-apiserver

# systemctl enable kube-controller-manager

# systemctl enable kube-scheduler


# ls /opt/kubernetes/logs   # view the logs

# less /opt/kubernetes/logs/kube-apiserver.INFO

# tail -f /opt/kubernetes/logs/kube-controller-manager.INFO

# for i in $(ls /opt/kubernetes/bin); do systemctl enable $i; done   # enable on boot

# mv /opt/kubernetes/bin/kubectl /usr/local/bin/kubectl   # put kubectl on the PATH

# chmod a+x /usr/local/bin/kubectl

# kubectl get cs   # view component status

# ps -ef | grep kube   # view the three component processes

3.3 Enable TLS Bootstrapping

Authorize kubelet TLS Bootstrapping:

 

# cat /opt/kubernetes/cfg/token.csv

c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"

 

Format: token, user name, UID, user group

 

Grant authorization to kubelet-bootstrap:

 

kubectl create clusterrolebinding kubelet-bootstrap \

--clusterrole=system:node-bootstrapper \

--user=kubelet-bootstrap

 

Alternatively, the token can be generated yourself:

 

head -c 16 /dev/urandom | od -An -t x | tr -d ' '

 

But the token configured in the apiserver must match the one in bootstrap.kubeconfig on the Node; a sketch of keeping them in sync follows.
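A minimal sketch, assuming the file paths used in this article and that bootstrap.kubeconfig stores the token on a "token:" line (as generated by the usual kubectl config set-credentials flow):

TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

# apiserver side (restart kube-apiserver afterwards)
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /opt/kubernetes/cfg/token.csv

# Node side: point bootstrap.kubeconfig at the same token
sed -i "s/token: .*/token: ${TOKEN}/" /opt/kubernetes/cfg/bootstrap.kubeconfig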

4. Deploy the Worker Node

4.1 Install Docker

Download the binary package: https://download.docker.com/linux/static/stable/x86_64/

Docker download: https://pan.baidu.com/s/1kyC5KgsF5DB2fZK5UGPaQg
Extraction code: o101

# tar zxvf k8s-node.tar.gz

# tar zxvf docker-18.09.6.tgz

# mv docker/* /usr/bin

# mkdir /etc/docker

# mv daemon.json /etc/docker

# mv docker.service /usr/lib/systemd/system

# systemctl start docker

# systemctl enable docker

# docker info   # verify that Docker started successfully
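The daemon.json shipped in the package is not shown in this article; as a purely illustrative example (the mirror URL and values below are placeholders, not taken from the original package), a minimal one that sets a registry mirror and log rotation might look like:

cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://<your-mirror>"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}
EOF
systemctl restart docker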

 

 

Executing docker info may show the following warnings:

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled


Solution:

vi /etc/sysctl.conf


Add the following

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1


Finally, run:

sysctl -p


At this point docker info no longer shows these warnings.
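A note from experience (not in the original article): if sysctl -p complains that these keys do not exist, the br_netfilter kernel module is probably not loaded yet; load it and rerun:

modprobe br_netfilter
sysctl -p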


 

4.2 Deploy kubelet and kube-proxy

Copy the certificates to the Node:

 

# cd TLS/k8s

# scp ca.pem kube-proxy*.pem [email protected]:/opt/kubernetes/ssl/


# tar zxvf k8s-node.tar.gz

# mv kubernetes /opt

# cp kubelet.service kube-proxy.service /usr/lib/systemd/system

 

Check the IP addresses in the following files:

[root@k8s-node2 kubernetes]# grep 192 *

Modify the host name in the following files (a combined sed sketch follows the list):

[root@k8s-node2 cfg]# vim bootstrap.kubeconfig

 [root@k8s-node2 cfg]# vim kubelet.conf

 

 [root@k8s-node2 cfg]# vim kubelet.kubeconfig

[root@k8s-node2 cfg]# vim kube-proxy-config.yml

 

[root@k8s-node2 cfg]# vim kube-proxy.kubeconfig
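A minimal sketch of those edits, assuming the field names commonly used in this binary-deployment layout (--hostname-override in kubelet.conf and hostnameOverride in kube-proxy-config.yml); verify against your actual files before running:

cd /opt/kubernetes/cfg
NODE_NAME=k8s-node2                      # this node's hostname
APISERVER=https://192.168.31.63:6443     # your apiserver (or load balancer VIP) address

sed -i "s#server: .*#server: ${APISERVER}#" bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
sed -i "s/--hostname-override=[^ ]*/--hostname-override=${NODE_NAME}/" kubelet.conf
sed -i "s/hostnameOverride: .*/hostnameOverride: ${NODE_NAME}/" kube-proxy-config.yml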

 

 

 

 

# systemctl start kubelet

# systemctl start kube-proxy

# systemctl enable kubelet

# systemctl enable kube-proxy

# tail /opt/kubernetes/logs/kubelet.INFO   # view the log

 

4.3 Approve certificate issuance for the Node

# kubectl get csr

 

 

# kubectl certificate approve node-csr-MYUxbmf_nmPQjmH3LkbZRL2uTO-_FCzDQUoUfTy7YjI   # replace with your node's CSR name

# kubectl get node 
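If several Nodes are joining at once, a convenience sketch (not in the original) to approve every pending CSR in one go; only do this when you trust all pending requests:

kubectl get csr -o name | xargs -r kubectl certificate approve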

 

 

 

 

 

 

 
