Installing a Kubernetes 1.13 Cluster on Ubuntu 16.04


For newer installations, you can use the Rancher quick-start guide: https://www.cnrancher.com/docs/rancher/v2.x/cn/overview/quick-start-guide

Very convenient!

The original article follows.

Foreword

Docker is the dominant container virtualization technology, and it can greatly speed up software deployment. Docker runs each container as a process, allocating compute resources to the container and isolating resources between containers, so each container behaves as if it were a separate machine. Through a bridge on the host, the containers can form a simulated local network. For details, see a beginner's Docker tutorial.

Docker Compose can manage containers on a single machine, but not across multiple machines. At present, Kubernetes is the best solution for container orchestration, enabling orchestration across many machines.

Kubernetes, abbreviated k8s, is Google's open-source container cluster management system. Built on Docker technology, it provides application deployment, resource scheduling, service discovery, dynamic scaling, and a series of other features, greatly improving the ease of managing large container clusters.

For various reasons, k8s is difficult to install in China; the network situation is simply not good. Still, by building k8s ourselves we can keep up with the trend. We will install the latest version, k8s 1.13.

Preparation

Use VMware Workstation to create three fresh Ubuntu 16.04 virtual machines.

Hostnames and IPs of the three machines:

ubuntu1: 192.168.152.3   4 GB RAM, 20 GB disk, at least 2 CPU cores (required)
ubuntu2: 192.168.152.4   2 GB RAM, 20 GB disk
ubuntu3: 192.168.152.5   2 GB RAM, 20 GB disk

We must permanently disable swap space:

First, edit /etc/fstab with vim and use # to comment out all swap partitions, then reboot all three machines!
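The fstab edit can also be scripted. A minimal sketch, demonstrated here against a sample file so it is safe to run anywhere; on the real machines you would run swapoff -a as root and point the sed command at /etc/fstab:

```shell
# Write a sample fstab-style file so the edit can be demonstrated safely.
cat > /tmp/fstab.sample <<'EOF'
UUID=1234-abcd /    ext4 errors=remount-ro 0 1
/dev/sda2      none swap sw                0 0
EOF

# Comment out every line that mounts a swap partition.
# On a real machine: swapoff -a && sed -i '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/' /etc/fstab
sed -i '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/' /tmp/fstab.sample

cat /tmp/fstab.sample
```

The sed address matches only lines whose type column is swap, and the substitution prefixes a # to lines that are not already commented.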

Installing the k8s Cluster

All of the following commands are run as the root user.

Installing Docker

Run Docker's official installation script on each machine:

curl -fsSL get.docker.com -o get-docker.sh
sh get-docker.sh --mirror Aliyun

Installing kubelet, kubeadm, and kubectl

On the ubuntu1 machine (the one with 4 GB of RAM), switch to the Aliyun apt source and install kubelet, kubeadm, and kubectl:

apt-get update && apt-get install -y apt-transport-https

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

apt-get update
apt-get install -y kubelet kubeadm kubectl

Creating a cluster using kubeadm

Version 1.13.0 added a feature that is especially useful in China: it solves the earlier problem of the required images being blocked by the firewall.

kubeadm init \
    --image-repository registry.aliyuncs.com/google_containers \
    --pod-network-cidr=192.167.0.0/16 \
    --ignore-preflight-errors=cri \
    --kubernetes-version=1.13.1

The images are pulled from the Aliyun mirror registry.aliyuncs.com/google_containers, and the pod network segment is changed to 192.167.0.0/16 (for fear that the virtual machines would conflict with the external network).

After waiting patiently, you should see:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.152.3:6443 --token to697t.fdu5pffmr0862z8g --discovery-token-ca-cert-hash sha256:15da0d9ac768ad5fe546a2b93ed7516222fa043ef9d5e454a72e1f2ca4870862

Remember the kubeadm join 192.168.152.3:6443 --token to697t.fdu5pffmr0862z8g --discovery-token-ca-cert-hash sha256:15da0d9ac768ad5fe546a2b93ed7516222fa043ef9d5e454a72e1f2ca4870862 command above; we will need it later.

Then, as a regular user (or as the root user), run:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

That user can now operate the k8s cluster.
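As a common shortcut, the root user can instead point the KUBECONFIG environment variable at the admin config (the file exists on the master after kubeadm init):

```shell
# Point kubectl at the cluster admin credentials (root only; the file
# is created by kubeadm init on the master node).
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "$KUBECONFIG"
```

Note that this only lasts for the current shell session, whereas copying admin.conf to ~/.kube/config is permanent.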

We can use the kubectl get command to view the current status; for now there is only one node:

root@ubuntu1:~# kubectl get nodes
NAME      STATUS     ROLES    AGE     VERSION
ubuntu1   NotReady   master   2m59s   v1.13.2

We can use kubectl describe to view detailed information about this Node object, including its status and events (Event):

root@ubuntu1:~# kubectl describe node ubuntu1
Name:               ubuntu1
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=ubuntu1
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 18 Jan 2019 10:34:29 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 18 Jan 2019 10:39:19 +0800   Fri, 18 Jan 2019 10:34:29 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 18 Jan 2019 10:39:19 +0800   Fri, 18 Jan 2019 10:34:29 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 18 Jan 2019 10:39:19 +0800   Fri, 18 Jan 2019 10:34:29 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Fri, 18 Jan 2019 10:39:19 +0800   Fri, 18 Jan 2019 10:34:29 +0800   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

We can see that the reason for NotReady is:

  Ready            False   Fri, 18 Jan 2019 10:39:19 +0800   Fri, 18 Jan 2019 10:34:29 +0800   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

This is because we have not yet deployed any network plugin.

We can also use kubectl to check the status of each system Pod on the node; the kube-system namespace is reserved for k8s system Pods:

root@ubuntu1:~# kubectl get pods -n kube-system
NAME                              READY   STATUS              RESTARTS   AGE
coredns-78d4cf999f-7k9bv          0/1     ContainerCreating   0          7m6s
coredns-78d4cf999f-mtz4b          0/1     ContainerCreating   0          7m6s
etcd-ubuntu1                      1/1     Running             0          6m16s
kube-apiserver-ubuntu1            1/1     Running             0          6m35s
kube-controller-manager-ubuntu1   1/1     Running             0          6m7s
kube-proxy-26mzm                  1/1     Running             0          7m6s
kube-scheduler-ubuntu1            1/1     Running             0          6m31s

Some Pods are not running because we have not yet deployed a network plugin.

By default, the Master node does not allow user Pods to run on it. Kubernetes enforces this through its Taint/Toleration mechanism: once a Taint is added to a node (the node is "marked as tainted"), no Pod will run on that node unless the Pod declares a matching Toleration.

root@ubuntu1:~# kubectl describe node ubuntu1
Name:               ubuntu1
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=ubuntu1
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 18 Jan 2019 10:34:29 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule

You can see Taints: node-role.kubernetes.io/master:NoSchedule. If we remove this taint, we have successfully created a single-node cluster. That sounds quite interesting: a single-node cluster.

We now run:

kubectl taint nodes --all node-role.kubernetes.io/master-

This step configures the Master node so that user Pods are allowed to run on it; it also ensures that the plugins deployed below run correctly.
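Removing the taint is the blunt approach. Alternatively, an individual Pod can declare a matching toleration and be scheduled onto the still-tainted master. A hypothetical sketch (the Pod name and image are illustrative):

```shell
# Write a Pod manifest that tolerates the master's NoSchedule taint.
cat > /tmp/toleration-demo.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
EOF

# On a live cluster you would then run:
#   kubectl apply -f /tmp/toleration-demo.yaml
cat /tmp/toleration-demo.yaml
```

With operator: Exists, the toleration matches the master taint regardless of its value, so this Pod may be scheduled onto the master even with the taint in place.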

At this point, a basic Kubernetes cluster is complete!! Time to celebrate, with thanks to the authors of the earlier tutorials!

Deploying plugins

We will install some plugins to support k8s.

Deploying the network component

Directly run the install command:

# Install
kubectl apply -f https://git.io/weave-kube-1.6

# Uninstall
kubectl delete -f https://git.io/weave-kube-1.6

Check whether the Pods were deployed successfully:

kubectl get pods -n kube-system

Deploying the container storage plugin

# Install
kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/operator.yaml

kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/cluster.yaml


# Uninstall
kubectl delete -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/operator.yaml

kubectl delete -f https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/cluster.yaml

# Check installation status
kubectl get pods -n rook-ceph-system

kubectl get pods -n rook-ceph

Joining worker nodes to the master

Install kubeadm, kubelet, and kubectl on the other two Ubuntu machines:

apt-get update && apt-get install -y apt-transport-https

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

apt-get update
apt-get install -y kubelet kubeadm kubectl

Then enter:

kubeadm join 192.168.152.3:6443 --token to697t.fdu5pffmr0862z8g --discovery-token-ca-cert-hash sha256:15da0d9ac768ad5fe546a2b93ed7516222fa043ef9d5e454a72e1f2ca4870862

Then go back to the master node and run the following command to view the node status:

kubectl get nodes

You may find:

root@ubuntu1:~# kubectl get nodes
NAME      STATUS     ROLES    AGE   VERSION
ubuntu1   Ready      master   73m   v1.13.2
ubuntu2   NotReady   <none>   11m   v1.13.2
ubuntu3   NotReady   <none>   32s   v1.13.2

On a NotReady node, you can use:

journalctl -f -u kubelet

to check the reason. Sometimes the network is just unstable, and you have to try again.

If we forget the token needed to join the Master node, we can view it with:

kubeadm token list

By default, a token is valid for 24 hours. If our token has expired, we can generate a new one with:

kubeadm token create

If we do not have the --discovery-token-ca-cert-hash value, we can generate it with:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
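The pipeline extracts the CA certificate's public key, DER-encodes it, and takes the SHA-256 digest of the result. It can be sanity-checked against any certificate; a sketch using a throwaway self-signed certificate (on the master, the real input is /etc/kubernetes/pki/ca.crt):

```shell
# Generate a throwaway self-signed certificate so the pipeline can be
# demonstrated without a real cluster CA.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# Same pipeline as above: public key -> DER encoding -> SHA-256 hex digest.
openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```

The output is a 64-character hex string; prefixed with sha256:, it is exactly the format kubeadm join expects.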

With that, the cluster stage is set; next, we put it to use.

Using k8s

Now we move on to practice. You can learn more from the k8s documentation.

1. List the Pods in a given namespace:

kubectl get pod -n kube-system

2. List the Pods in all namespaces:

kubectl get pods --all-namespaces

3. View a Pod's logs:

kubectl logs -f --tail=20 kubernetes-dashboard-57df4db6b-m9zbq -n kube-system

This views the logs of kubernetes-dashboard-57df4db6b-m9zbq in the kube-system namespace.

4. List the services in a namespace:

kubectl get svc -n kube-system
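As a small end-to-end exercise combining the commands above, you can deploy something and then inspect it. A minimal sketch of a Deployment manifest (the name, labels, and image are illustrative; on a live cluster you would apply it with kubectl):

```shell
# Write a minimal Deployment manifest (illustrative names and image).
cat > /tmp/nginx-deploy.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        ports:
        - containerPort: 80
EOF

# On a live cluster:
#   kubectl apply -f /tmp/nginx-deploy.yaml
#   kubectl get pods -l app=nginx-demo
cat /tmp/nginx-deploy.yaml
```

After applying it, the kubectl get and kubectl logs commands from this section can be used to watch the two replicas come up.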

This article references: Mr. Chun's articles


Origin www.cnblogs.com/nima/p/11751322.html