Setting Up a Kubernetes (k8s) Cluster, Latest Version

Installation Notes

Operating System Version:

cat /proc/version
# Linux version 3.10.0-862.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC) ) #1 SMP Fri Apr 20 16:44:24 UTC 2018
rpm -q centos-release
# centos-release-7-5.1804.el7.centos.x86_64
cat /etc/redhat-release
# CentOS Linux release 7.5.1804 (Core)

Docker version:

docker --version
# Docker version 18.09.6, build 481bc77156

Kubernetes version:

kubelet --version
# Kubernetes v1.14.2

Installation Steps

1. Set up passwordless SSH trust between the servers

2. Disable SELinux and the firewall [run on all machines]

# Permanently disable SELinux (takes effect after reboot)
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# Stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable swap (required by kubelet)
swapoff -a

# Disable SELinux for the current session
setenforce 0
# (equivalently, edit /etc/selinux/config by hand and set SELINUX=disabled)

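Note that swapoff -a only disables swap until the next reboot; to keep it off permanently, the swap entry in /etc/fstab must also be commented out. A minimal sketch, exercised here on a sample copy rather than the real file:

```shell
# Demo on a sample fstab; on a real machine run the same sed against /etc/fstab
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# Comment out any active swap entry so swap stays disabled across reboots
sed -i 's|^\([^#].*[[:space:]]swap[[:space:]].*\)$|# \1|' /tmp/fstab.demo
cat /tmp/fstab.demo
```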

3. Install the components [run on all machines]

(1) Install Docker

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum -y install docker-ce
docker --version
# Docker version 17.06.2-ce, build cec0b72
systemctl start docker
systemctl status docker
systemctl enable docker
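One optional extra before moving on: the upstream kubeadm documentation recommends running Docker with the systemd cgroup driver. A hedged sketch of the /etc/docker/daemon.json you might write, shown here against a demo directory so nothing on the system is touched:

```shell
mkdir -p /tmp/demo-docker                 # stands in for /etc/docker
cat > /tmp/demo-docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
cat /tmp/demo-docker/daemon.json
# After writing the real /etc/docker/daemon.json:
#   systemctl daemon-reload && systemctl restart docker
```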

(2) Install kubelet, kubeadm, and kubectl

Configure the yum repository:

cat >> /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF

Then run the installation:

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

4. Pull the images [run on all machines]

# Pull the images
docker pull mirrorgooglecontainers/kube-apiserver:v1.14.2
docker pull mirrorgooglecontainers/kube-controller-manager:v1.14.2
docker pull mirrorgooglecontainers/kube-scheduler:v1.14.2
docker pull mirrorgooglecontainers/kube-proxy:v1.14.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull coredns/coredns:1.3.1
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64

# Re-tag them to the names kubeadm expects
docker tag mirrorgooglecontainers/kube-apiserver:v1.14.2 k8s.gcr.io/kube-apiserver:v1.14.2
docker tag mirrorgooglecontainers/kube-controller-manager:v1.14.2 k8s.gcr.io/kube-controller-manager:v1.14.2
docker tag mirrorgooglecontainers/kube-scheduler:v1.14.2 k8s.gcr.io/kube-scheduler:v1.14.2
docker tag mirrorgooglecontainers/kube-proxy:v1.14.2 k8s.gcr.io/kube-proxy:v1.14.2
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

# Remove the original tags
docker rmi mirrorgooglecontainers/kube-apiserver:v1.14.2
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.14.2
docker rmi mirrorgooglecontainers/kube-scheduler:v1.14.2
docker rmi mirrorgooglecontainers/kube-proxy:v1.14.2
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.3.10
docker rmi coredns/coredns:1.3.1
docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
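The repetitive pull/tag/rmi triples above can be generated with a loop. A sketch that writes them into a script for review before running (coredns and flannel come from different registries, so they stay out of the loop):

```shell
images="kube-apiserver:v1.14.2 kube-controller-manager:v1.14.2 \
kube-scheduler:v1.14.2 kube-proxy:v1.14.2 pause:3.1 etcd:3.3.10"
: > /tmp/pull-k8s-images.sh
for img in $images; do
  {
    echo "docker pull mirrorgooglecontainers/$img"
    echo "docker tag mirrorgooglecontainers/$img k8s.gcr.io/$img"
    echo "docker rmi mirrorgooglecontainers/$img"
  } >> /tmp/pull-k8s-images.sh
done
wc -l < /tmp/pull-k8s-images.sh   # 6 images x 3 commands = 18 lines
```

Inspect /tmp/pull-k8s-images.sh, then run it with sh on each machine.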

5. Install the master

(1) Initialization

kubeadm init --kubernetes-version=v1.14.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

kubernetes-version: the Kubernetes version to install, here v1.14.2

pod-network-cidr: the CIDR of the cluster-wide Pod network. This parameter depends on the network add-on used; this article uses the classic flannel network solution.

service-cidr: the CIDR of the Service network

If there are no problems, you will get output that ends with a kubeadm join command.

(2) Set up .kube/config

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
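What the three commands do can be exercised in a scratch directory first (a sketch; /tmp/admin.conf is a stand-in for the real /etc/kubernetes/admin.conf and /tmp/demo-home for $HOME):

```shell
demo_home=/tmp/demo-home
mkdir -p "$demo_home/.kube"
printf 'apiVersion: v1\nkind: Config\n' > /tmp/admin.conf    # stand-in file
cp /tmp/admin.conf "$demo_home/.kube/config"                 # copy the kubeconfig into place
chown "$(id -u):$(id -g)" "$demo_home/.kube/config"          # owned by the current user
ls -l "$demo_home/.kube/config"
```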

(3) Save the kubeadm join command from the init output; it will be executed on the worker nodes:

kubeadm join 10.255.73.26:6443 --token xfnfrl.4zlyx5ecu4t7n9ie \
    --discovery-token-ca-cert-hash sha256:c68bbf21a21439f8de92124337b4af04020f3332363e28522339933db813cc4b

(4) Configure kubectl

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
echo $KUBECONFIG

(5) Install the Pod network

A Pod network is a prerequisite for communication between Pods. k8s supports many network solutions; here we again choose the classic flannel scheme.

a. Create a file named kube-flannel.yaml at any location; for its contents see: file content

b. First set the kernel parameter: sysctl net.bridge.bridge-nf-call-iptables=1

c. Apply the kube-flannel.yaml file: kubectl apply -f kube-flannel.yaml

d. Check that the Pod network is up with kubectl get pods --all-namespaces -o wide; if every READY column shows 1/1, it is normal:

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE     IP             NODE              NOMINATED NODE   READINESS GATES
kube-system   coredns-fb8b8dccf-2hwr4                   1/1     Running   0          7h44m   10.244.0.3     pjr-ofckv-73-26   <none>           <none>
kube-system   coredns-fb8b8dccf-nwqt9                   1/1     Running   0          7h44m   10.244.0.2     pjr-ofckv-73-26   <none>           <none>

e. Check node status with kubectl get nodes:

pjr-ofckv-73-26   Ready    master   7h47m   v1.14.2
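For a larger cluster it helps to filter the READY column instead of eyeballing it. A small helper, shown against a saved sample of kubectl get pods output (the pod names and states here are illustrative):

```shell
cat > /tmp/pods.txt <<'EOF'
NAMESPACE     NAME                      READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-2hwr4   1/1     Running   0          7h44m
kube-system   coredns-fb8b8dccf-nwqt9   0/1     Pending   0          7h44m
EOF
# Print pods whose READY count is below the desired count
awk 'NR > 1 { split($3, r, "/"); if (r[1] != r[2]) print $2, $3, $4 }' /tmp/pods.txt
# -> coredns-fb8b8dccf-nwqt9 0/1 Pending
```

On a live cluster, pipe kubectl get pods --all-namespaces straight into the awk filter instead of a file.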

6. Add worker nodes

(1) On each worker node, execute the kubeadm join command that was output when init ran on the master, i.e.:

kubeadm join 10.255.73.26:6443 --token xfnfrl.4zlyx5ecu4t7n9ie \
    --discovery-token-ca-cert-hash sha256:c68bbf21a21439f8de92124337b4af04020f3332363e28522339933db813cc4b

If you did not save the command when deploying the master, you can retrieve the token with kubeadm token list; the IP is the master node's IP, and the port is 6443 (the default).
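The --discovery-token-ca-cert-hash value can also be recomputed from the cluster CA certificate; this is the pipeline documented for kubeadm. On the master the input would be /etc/kubernetes/pki/ca.crt; here a throwaway self-signed certificate is generated just to demonstrate it:

```shell
# Throwaway CA cert for demonstration; use /etc/kubernetes/pki/ca.crt for real
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
  -out /tmp/ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null
hash=$(openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | awk '{print $NF}')
echo "sha256:$hash"   # a 64-hex-digit digest, as used by kubeadm join
```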

(2) Check node state with kubectl get nodes

When all nodes are in the Ready state, the cluster is normal:

pjr-ofckv-73-24   Ready    <none>   47m     v1.14.2
pjr-ofckv-73-25   Ready    <none>   7h34m   v1.14.2
pjr-ofckv-73-26   Ready    master   7h55m   v1.14.2

(3) View the status of all Pods with kubectl get pods --all-namespaces -o wide

All components show READY 1/1:

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE     IP             NODE              NOMINATED NODE   READINESS GATES
kube-system   coredns-fb8b8dccf-2hwr4                   1/1     Running   0          7h55m   10.244.0.3     pjr-ofckv-73-26   <none>           <none>
kube-system   coredns-fb8b8dccf-nwqt9                   1/1     Running   0          7h55m   10.244.0.2     pjr-ofckv-73-26   <none>           <none>
kube-system   etcd-pjr-ofckv-73-26                      1/1     Running   0          7h54m   10.255.73.26   pjr-ofckv-73-26   <none>           <none>
kube-system   kube-apiserver-pjr-ofckv-73-26            1/1     Running   0          7h54m   10.255.73.26   pjr-ofckv-73-26   <none>           <none>
kube-system   kube-controller-manager-pjr-ofckv-73-26   1/1     Running   0          7h54m   10.255.73.26   pjr-ofckv-73-26   <none>           <none>
kube-system   kube-flannel-ds-amd64-9qhcl               1/1     Running   0          48m     10.255.73.24   pjr-ofckv-73-24   <none>           <none>
kube-system   kube-flannel-ds-amd64-xmrzz               1/1     Running   0          7h51m   10.255.73.26   pjr-ofckv-73-26   <none>           <none>
kube-system   kube-flannel-ds-amd64-zqdzp               1/1     Running   0          7h34m   10.255.73.25   pjr-ofckv-73-25   <none>           <none>
kube-system   kube-proxy-kgcxj                          1/1     Running   0          7h34m   10.255.73.25   pjr-ofckv-73-25   <none>           <none>
kube-system   kube-proxy-rpn4z                          1/1     Running   0          7h55m   10.255.73.26   pjr-ofckv-73-26   <none>           <none>
kube-system   kube-proxy-tm8df                          1/1     Running   0          48m     10.255.73.24   pjr-ofckv-73-24   <none>           <none>
kube-system   kube-scheduler-pjr-ofckv-73-26            1/1     Running   0          7h54m   10.255.73.26   pjr-ofckv-73-26   <none>           <none>

7. Delete a node [NOTE: an error was encountered during this operation; I am not yet sure the following deletion procedure is correct]

(1) Execute on the master node:

kubectl drain pjr-ofckv-73-24 --delete-local-data --force --ignore-daemonsets
kubectl delete node pjr-ofckv-73-24

(2) On the node being removed:

kubeadm reset

8. Install the dashboard on the master node

The dashboard version is v1.10.0.

(1) Pull the image kubernetes-dashboard-amd64:v1.10.0

docker pull registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
docker tag registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker image rm registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0

(2) Install the dashboard

kubectl create -f kubernetes-dashboard.yaml

The kubernetes-dashboard.yaml file can be created at any location; for details see: file content

Note: the NodePort and hostPath settings are changed from the official version (see: the official version); you can compare the differences between them.

(3) Check whether the dashboard Pod started normally; if so, the installation succeeded:

kubectl get pods --namespace=kube-system

On success the output looks as follows: READY is 1/1 and STATUS is Running:

NAME                                      READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-595d866bb8-n8bh7     1/1     Running   0          141m

If instead you see a state such as ContainerCreating, you can run kubectl describe pod kubernetes-dashboard-xxxxxxxx-yyyy --namespace=kube-system to see the cause of the error. When I installed it, it showed that the worker node was missing the kubernetes-dashboard-amd64:v1.10.0 image.

Also, if you fix an error and re-run kubectl create -f kubernetes-dashboard.yaml, you will be told the resources already exist; use kubectl delete -f kubernetes-dashboard.yaml to clean them up first.

In addition, the /home/share/certs directory referenced in kubernetes-dashboard.yaml must be created in advance; I created it on both the master node and the worker nodes.

(4) View the dashboard's externally exposed port

kubectl get service --namespace=kube-system

Output as follows: 31234 is the external access port; it is used to access the dashboard page.

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   3d23h
kubernetes-dashboard   NodePort    10.108.134.118   <none>        443:31234/TCP            150m

(5) Generate a private key and certificate signing request on the master node; when prompted for input, just press Enter:

openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
rm dashboard.pass.key
openssl req -new -key dashboard.key -out dashboard.csr   # press Enter at every prompt

(6) Generate an SSL certificate

openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
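A quick sanity check, as a sketch, that a certificate and key belong together: compare their modulus digests. Run here against freshly generated demo files, not the real dashboard.key/dashboard.crt from steps (5) and (6):

```shell
openssl genrsa -out /tmp/dashboard.key 2048 2>/dev/null
openssl req -new -key /tmp/dashboard.key -out /tmp/dashboard.csr -subj "/CN=dashboard" 2>/dev/null
openssl x509 -req -sha256 -days 365 -in /tmp/dashboard.csr \
  -signkey /tmp/dashboard.key -out /tmp/dashboard.crt 2>/dev/null
k=$(openssl rsa  -noout -modulus -in /tmp/dashboard.key | openssl md5)
c=$(openssl x509 -noout -modulus -in /tmp/dashboard.crt | openssl md5)
[ "$k" = "$c" ] && echo "key and certificate match"
```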

Then place the generated dashboard.key and dashboard.crt under /home/share/certs, the path configured in the yaml file used above.

(7) Create a dashboard user

kubectl create -f dashboard-user-role.yaml

For the contents of dashboard-user-role.yaml, see: dashboard-user-role.yaml

(8) Get the login token; if you forget it, you can retrieve it by executing:

kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system

Output is as follows:

Name:         admin-token-rfc2l
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin
              kubernetes.io/service-account.uid: 42eeeee9-802c-11e9-a88a-f0000aff491a

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1yZmMybCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjQyZWVlZWU5LTgwMmMtMTFlOS1hODhhLWYwMDAwYWZmNDkxYSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.gRK_RO2Nk24tRCLq9ekkWvL_hNOTKKxQB0FrJEAHASGEpNP9Ew9JHBwljA-jPBZiNDxheOURQJuypDvCLXdRqyAWM26QEeYKB8EdHxiZb7fcTazMnPnl7hbBsWOsuTonpD2gWQYaRFFmkJds-ta5UKvtGJiKeUUEAzBilNvRp60mws5L-KAPB0yFAtHWXyz682eVu_NjcEWH-1f_uZ-noXJJPqvz0XarmR1RenQtnMd3brKjhk02FUIQyD2l1s6hH6tHVm59LZ74jLPcXTlaUpEG6LE_vJHzktTsHdRmtKg6wDeq_blvGtT4vU8k92LFC-r2p3O2BJQ-jqfy1y-T6w

(9) Log in

Open: https://masterIp:31234/#!/settings?namespace=default (31234 is the port output in step (4))

Select 令牌 (Token) and log in with the token obtained above

Installation Issues

  • The connection to the server localhost:8080 was refused - did you specify the right host or port? You will encounter this problem when installing on a worker node, because the worker node is missing the file /etc/kubernetes/admin.conf. Solution:

1. Copy /etc/kubernetes/admin.conf from the master node to /etc/kubernetes/admin.conf on the worker node

2. Set the environment variable:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
  • The worker node stays in the NotReady state. When I first installed, the worker node had not downloaded the images before kubeadm join was executed; it could indeed join the cluster, but stayed NotReady. Checking the error log with tail /var/log/messages showed that the images were missing: the installer automatically pulls them from k8s.gcr.io on the node, which unfortunately is unreachable without a proxy.

Reference Documents

A practice record of deploying a Kubernetes 1.13.1 cluster with kubeadm



Origin blog.csdn.net/weixin_34174105/article/details/91361472