Building a Kubernetes 1.16.7 Cluster on CentOS 7 (Part I)

Environment

Three CentOS 7 servers: kube_1, kube_2, kube_3, each with 2 CPU cores and 4 GB of RAM.

Set the hostname (* do not skip this; otherwise joining worker nodes later fails in confusing ways that take a long time to track down)

# Temporary modification
hostname xxx

# Permanent modification (recommended)
hostnamectl set-hostname xxx

 

Stop and disable the firewall:

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:

setenforce 0

Disable swap:

swapoff -a

Then edit /etc/fstab and comment out the swap entry, as below, so swap stays disabled after a reboot:

# /dev/mapper/centos-swap swap                    swap    defaults        0 0
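The fstab edit can also be scripted. A minimal sketch (the helper name `disable_swap_entries` is ours, not a standard tool), which comments out any still-active swap entry in an fstab-style file; on a real host you would pass it /etc/fstab:

```shell
# Hypothetical helper: comment out every active swap entry in the given
# fstab-style file so swap stays off after a reboot.
disable_swap_entries() {
  # match lines whose fields include "swap" and prefix a '#' unless already commented
  sed -ri '/\sswap\s/s/^[^#]/#&/' "$1"
}
```

Run `disable_swap_entries /etc/fstab` on the master and every worker.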

 

Create a /etc/sysctl.d/k8s.conf file and add the following:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Run the following command for the changes to take effect:

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

Install Docker

# Step 1: install the required tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

# Step 2: add the repository information
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Step 3: update the cache and install docker-ce
sudo yum makecache fast
sudo yum install -y docker-ce

# Step 4: start the Docker service
sudo systemctl start docker

# Step 5: enable Docker at boot
sudo systemctl enable docker

Configure the Alibaba Cloud registry mirror accelerator:

mkdir -p /etc/docker

tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://obww7jh1.mirror.aliyuncs.com"]
}
EOF

systemctl daemon-reload

systemctl restart docker

Install kubelet, kubeadm, and kubectl

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.16.7 kubeadm-1.16.7 kubectl-1.16.7

systemctl enable --now kubelet

Build the Kubernetes cluster

1. Initialize the master node (kube_1)

kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers
  • --pod-network-cidr: required by flannel, which is installed later and expects 10.244.0.0/16
  • --image-repository: specifies the image registry (here the Alibaba Cloud mirror, so the control-plane images can be pulled)

Output log:

[addons] Applied essential addon: CoreDNS

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.127:6443 --token yjscgl.eybl86olwr3g2ct9 \
    --discovery-token-ca-cert-hash sha256:91f7982ff4ffb9099b5228449044483192b73d52932929674985ef595a769055

As the log shows, to start using the cluster we need to run the following commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

We also need to deploy a pod network to the cluster; flannel is chosen here. Because its images are hosted outside China, some extra processing is needed:

# Download the yml file locally
[root@localhost ~]# wget https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml

# Open the file and replace every occurrence of quay.io with quay-mirror.qiniu.com (https://blog.csdn.net/zsd498537806/article/details/85157560)

# Finally, apply the manifest so the images are pulled
[root@localhost ~]# kubectl apply -f kube-flannel.yml

At this point the master node initialization is complete. View the cluster information:

# View the cluster
[root@localhost ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.1.127:6443
KubeDNS is running at https://192.168.1.127:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

# View the nodes
[root@localhost ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   106m   v1.16.7
k8s-node1    Ready    <none>   102m   v1.16.7
k8s-node2    Ready    <none>   33m    v1.16.4

# View the pods
[root@localhost ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-58cc8c89f4-955zb             1/1     Running   0          106m
kube-system   coredns-58cc8c89f4-bp746             1/1     Running   0          106m
kube-system   etcd-k8s-master                      1/1     Running   0          106m
kube-system   kube-apiserver-k8s-master            1/1     Running   0          105m
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          105m
kube-system   kube-flannel-ds-amd64-ckdzv          1/1     Running   0          102m
kube-system   kube-flannel-ds-amd64-fvrmj          1/1     Running   0          105m
kube-system   kube-flannel-ds-amd64-m8557          1/1     Running   0          34m
kube-system   kube-proxy-6lgbv                     1/1     Running   0          34m
kube-system   kube-proxy-d8sxd                     1/1     Running   0          106m
kube-system   kube-proxy-v9xnz                     1/1     Running   0          102m
kube-system   kube-scheduler-k8s-master            1/1     Running   0          106m

* If initialization runs into problems, reset and try again:

[root@localhost ~]# kubeadm reset

[root@localhost ~]# rm -rf /var/lib/cni/

[root@localhost ~]# rm -f $HOME/.kube/config

* If the node shows NotReady and coredns stays Pending (https://www.jianshu.com/p/d446121dbfc2):

[root@localhost ~]# kubectl get nodes
NAME          STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   2m48s   v1.16.7

# View pod information
[root@localhost ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-9d85f5447-4f65b                         0/1     Pending   0          63m
kube-system   coredns-9d85f5447-b2m6m                         0/1     Pending   0          63m
kube-system   etcd-localhost.localdomain                      1/1     Running   0          63m
kube-system   kube-apiserver-localhost.localdomain            1/1     Running   0          63m
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running   0          63m
kube-system   kube-proxy-sz9ld                                1/1     Running   0          63m
kube-system   kube-scheduler-localhost.localdomain            1/1     Running   0          63m

Solution: install the CNI configuration files:

$ mkdir -p /etc/cni/net.d
$ cat >/etc/cni/net.d/10-mynet.conf <<EOF
{
    "cniVersion": "0.2.0",
    "name": "mynet",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
EOF
$ cat >/etc/cni/net.d/99-loopback.conf <<EOF
{
    "cniVersion": "0.2.0",
    "name": "lo",
    "type": "loopback"
}
EOF

Of these two configurations, one attaches a container's network interface to the bridge, and the other configures the loopback interface.
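Before restarting kubelet it is worth checking that the files just written actually parse, since a malformed CNI config keeps the node NotReady. A small sketch of our own (the helper name is hypothetical; it uses python3 here, while stock CentOS 7 may only have python2's `python -m json.tool`):

```shell
# Hypothetical helper: verify every *.conf in a CNI config directory is valid JSON.
check_cni_confs() {
  local f bad=0
  for f in "$1"/*.conf; do
    # json.tool exits non-zero on malformed JSON
    python3 -m json.tool "$f" >/dev/null 2>&1 || bad=1
  done
  return $bad
}
```

Usage: `check_cni_confs /etc/cni/net.d && echo "CNI configs OK"`.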

Add worker nodes

Method 1: use the join command returned by kubeadm init (run it on each worker node):

kubeadm join 192.168.1.127:6443 --token yjscgl.eybl86olwr3g2ct9 \
    --discovery-token-ca-cert-hash sha256:91f7982ff4ffb9099b5228449044483192b73d52932929674985ef595a769055 

Method 2: regenerate a token:

kubeadm token generate

kubeadm token create <generated-token> --print-join-command --ttl=24h
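If the CA cert hash from the original join output is no longer at hand, it can be recomputed from the master's CA certificate using kubeadm's documented openssl pipeline. A sketch (the function name is ours; on the master the certificate lives at /etc/kubernetes/pki/ca.crt):

```shell
# Hypothetical helper: compute the value for --discovery-token-ca-cert-hash
# (the sha256 digest of the CA's public key in DER form) from a certificate file.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
```

Usage on the master: `echo "sha256:$(ca_cert_hash /etc/kubernetes/pki/ca.crt)"`. Note that `kubeadm token create --print-join-command` already prints a complete join command including this hash.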

 

References:

Kubernetes v1.16.x environment setup: https://www.jianshu.com/p/832bcd89bc07
In-depth understanding of Kubernetes CNI: https://www.jianshu.com/p/d446121dbfc2
Failing to pull images from gcr.io and quay.io: https://blog.csdn.net/zsd498537806/article/details/85157560

Origin: www.cnblogs.com/wzllzw/p/12323595.html