Installing Kubernetes on CentOS 7

1. Update the system and install Docker:
1. yum update -y
2. yum install -y docker
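Docker must be running before kubelet can start containers, so it should also be started and enabled at boot (a step implied but not listed in the original):
systemctl enable --now docker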
2. Set kernel prerequisites for kubeadm:
1. cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

2. cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
3. sysctl --system
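The modules-load.d file only takes effect at the next boot; to load the module immediately and confirm the settings took effect (optional checks, not in the original steps):
modprobe br_netfilter
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables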
3. Configure the Kubernetes yum repository and install the tools:
1. If the host can access the external network, use the official Google repository:
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
2. If the host cannot reach the Google repository, use the Aliyun mirror instead:
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
3. setenforce 0
4. sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
5. yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
6. systemctl enable --now kubelet
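Note: kubeadm's preflight checks fail if swap is enabled, a prerequisite the original does not list. A minimal way to disable it, assuming swap is configured in /etc/fstab:
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab   # comment out the swap entry so it stays off after reboot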
4. Create a cluster:
1. If you can access the external network, use the following:
kubeadm init \
--control-plane-endpoint "xxxx:6443" \
--pod-network-cidr=192.168.0.0/16 \
--apiserver-advertise-address=xxxx
2. If you cannot access the external network, use the following:
kubeadm init --image-repository registry.aliyuncs.com/google_containers \
--control-plane-endpoint "xxxx:6443" \
--pod-network-cidr=192.168.0.0/16 \
--apiserver-advertise-address=xxxx
where xxxx is the IP of the control-plane host.
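For illustration, with a hypothetical control-plane IP of 192.168.1.10 (substitute your own), the offline variant would be:
kubeadm init --image-repository registry.aliyuncs.com/google_containers \
--control-plane-endpoint "192.168.1.10:6443" \
--pod-network-cidr=192.168.0.0/16 \
--apiserver-advertise-address=192.168.1.10
On success, kubeadm init prints a kubeadm join command; save it, as it is needed to add worker nodes in step 4 below.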
3. To let a regular (non-root) user run kubectl, execute the following:
i. mkdir -p $HOME/.kube
ii. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
iii. sudo chown $(id -u):$(id -g) $HOME/.kube/config
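To confirm kubectl can now reach the API server (an optional check, not in the original):
kubectl cluster-info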
4. Add nodes to the cluster:
i. kubeadm token list (if no valid token is listed, create one with step ii)
ii. kubeadm token create
iii. openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
5. Replace the placeholder values below with the token and hash obtained above, then execute the following command on the node machine:
kubeadm join <control-plane-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
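As a shortcut, kubeadm can print the complete join command, token and hash included, in one step (an alternative to steps i-iii above):
kubeadm token create --print-join-command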
5. Add the Calico network plug-in:
1. wget https://docs.projectcalico.org/manifests/tigera-operator.yaml
2. kubectl create -f tigera-operator.yaml
3. wget https://docs.projectcalico.org/manifests/custom-resources.yaml
4. Edit custom-resources.yaml so that its cidr value matches the pod network segment given to kubeadm init (192.168.0.0/16 above); see the excerpt after this list.
5. kubectl create -f custom-resources.yaml
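For reference, the field to check in custom-resources.yaml looks roughly like this; the excerpt below is a sketch, since the exact layout varies by Calico version:
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - cidr: 192.168.0.0/16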
6. Check node status:
[screenshot: node status output]
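The screenshot presumably showed the node listing; the standard command is:
kubectl get nodes
Nodes should report Ready once the Calico pods are up.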

7. Add the web UI dashboard management console:
1. wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
2. Modify recommended.yaml (see the Service excerpt further below):
i. At line 40, under the Service's spec, add:
type: NodePort
ii. At line 191, change imagePullPolicy from Always to IfNotPresent:
imagePullPolicy: IfNotPresent
3. kubectl apply -f recommended.yaml
4. kubectl get pods -n kubernetes-dashboard
5. kubectl get svc -n kubernetes-dashboard
6. kubectl create serviceaccount dashboard-serviceaccount -n kubernetes-dashboard
7. kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-serviceaccount
8. kubectl get secret -n kubernetes-dashboard | grep dashboard-serviceaccount-token
9. Replace dashboard-serviceaccount-token-XXX with the token name found in step 8, and execute:
kubectl describe secret dashboard-serviceaccount-token-XXX -n kubernetes-dashboard
as shown in the figure below:
[screenshot: secret details showing the token]
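For reference, the Service after the edit in step 2 should look roughly like this (excerpt from recommended.yaml; field order may differ slightly by version):
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Note: on Kubernetes v1.24 and later, token secrets are no longer created automatically for service accounts, so steps 8-9 will find nothing; there the token can be requested directly instead:
kubectl -n kubernetes-dashboard create token dashboard-serviceaccount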
8. Log in to the dashboard remotely:
[screenshot: dashboard service NodePort]
In a browser, open https://<node-ip>:<NodePort>, using the port shown in the figure, and log in with the token obtained earlier.
The token looks similar to the following:
[screenshot: token string]

[screenshot: dashboard login page]
[screenshot: dashboard overview]

9. Remove nodes:
1. kubectl drain <node-name> --delete-local-data --force --ignore-daemonsets
2. kubeadm reset (run on the node being removed)
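After the drain and reset, the node object can also be removed from the cluster entirely (a commonly added step, not in the original):
kubectl delete node <node-name>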
