Docker - k8s Cluster Setup

I. k8s Overview

1. Introduction

  • Official Chinese documentation: https://www.kubernetes.org.cn/docs
  • Kubernetes is an open-source platform for managing containerized applications across multiple hosts in the cloud. Its goal is to make deploying containerized applications simple and powerful, and it provides mechanisms for application deployment, scheduling, updating, and maintenance.
  • A core feature of Kubernetes is container self-management: it keeps the containers on the platform running in the state the user asked for. For example, if the user wants Apache to stay up, they do not need to care how that is achieved; Kubernetes monitors the container automatically and restarts or recreates it as needed, so Apache keeps serving. Administrators can also deploy an application as a set of small services and let the scheduler find the right place for each one, and Kubernetes provides tools and conveniences that make it easy for users to deploy their own applications.

2. Kubernetes Components

  • Each Kubernetes node runs the services needed to host application containers and is controlled by the Master. Docker runs on every node and is responsible for pulling images and running the actual containers.
  • Kubernetes mainly consists of the following core components:
  1. etcd: stores the state of the entire cluster;
  2. apiserver: the single entry point for operations on resources; provides authentication, authorization, access control, and API registration and discovery;
  3. controller manager: maintains cluster state, handling fault detection, automatic scaling, rolling updates, and so on;
  4. scheduler: schedules resources according to the configured scheduling policy, binding Pods to appropriate machines;
  5. kubelet: maintains the container lifecycle and also manages volumes (CVI) and networking (CNI);
  6. Container runtime: manages images and actually runs Pods and containers (CRI);
  7. kube-proxy: provides in-cluster service discovery and load balancing for Services;
    In addition to the core components, there are some recommended add-ons:
  8. kube-dns: provides DNS service for the entire cluster
  9. Ingress Controller: provides an entry point for traffic from outside the cluster
  10. Heapster: provides resource monitoring
  11. Dashboard: provides a GUI
  12. Federation: provides clusters across availability zones
  13. Fluentd-elasticsearch: provides cluster log collection, storage, and querying
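
Once a kubeadm cluster like the one built below is up, the core components above are visible as pods in the kube-system namespace via kubectl get pods -n kube-system. As a rough sketch of what to expect (the node name server1 matches this lab; real pod names carry the node name or a hash suffix):

```shell
# The control-plane components kubeadm runs as static pods on the master
# node; each shows up in the kube-system namespace as <name>-<node-name>.
expected=""
for c in etcd kube-apiserver kube-controller-manager kube-scheduler; do
  expected="$expected ${c}-server1"
done
echo "expect kube-system pods:$expected"
# kube-proxy runs as a DaemonSet pod on every node; the kubelet and the
# container runtime are host services, not pods.
```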

II. Kubernetes Cluster Setup

  • Lab environment:

    Host     Role        IP
    server1  k8s-master  172.25.66.1
    server2  k8s-node1   172.25.66.2

  • This experiment requires Internet access.

  • Clean up the environment first (we built a swarm cluster earlier; if you did not, you can skip this step):

      [root@server2 ~]# docker swarm leave 
      Node left the swarm.
      [root@server3 ~]# docker swarm leave 
      Node left the swarm.
      [root@server1 ~]# docker swarm leave --force
    
      [root@server1 ~]# docker container prune 
      WARNING! This will remove all stopped containers.
      Are you sure you want to continue? [y/N] y
      [root@server2 ~]# docker container prune 
      [root@server3 ~]# docker container prune
    

1. Install the required packages

[root@server1 mnt]# ls
cri-tools-1.12.0-0.x86_64.rpm  kubelet-1.12.2-0.x86_64.rpm
kubeadm-1.12.2-0.x86_64.rpm    kubernetes-cni-0.6.0-0.x86_64.rpm
kubectl-1.12.2-0.x86_64.rpm
[root@server1 mnt]# yum install -y *

[root@server2 mnt]# ls
cri-tools-1.12.0-0.x86_64.rpm  kubelet-1.12.2-0.x86_64.rpm
kubeadm-1.12.2-0.x86_64.rpm    kubernetes-cni-0.6.0-0.x86_64.rpm
kubectl-1.12.2-0.x86_64.rpm
[root@server2 mnt]# yum install -y *

2. Disable the system swap partition

[root@server1 ~]# swapoff -a
[root@server1 ~]# vim /etc/fstab 
[root@server1 ~]# tail -n 1 /etc/fstab 
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
[root@server1 ~]# systemctl enable kubelet.service
[root@server1 mnt]# systemctl start kubelet.service

[root@server2 ~]# swapoff -a
[root@server2 ~]# vim /etc/fstab 
[root@server2 ~]# tail -n 1 /etc/fstab 
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
[root@server2 ~]# systemctl enable kubelet.service
[root@server2 ~]# systemctl start kubelet.service
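
Instead of editing /etc/fstab by hand, the swap line can be commented out with sed (GNU sed, as on RHEL). A sketch, run here against a scratch copy so nothing real is touched; on the node you would target /etc/fstab itself:

```shell
# Comment out any active swap entry in an fstab-style file so swap stays
# off across reboots (swapoff -a only disables it for the running system).
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/rhel-root   /       xfs     defaults        0 0
/dev/mapper/rhel-swap   swap    swap    defaults        0 0
EOF

# Prefix uncommented lines whose mount point is "swap" with '#':
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$fstab"

result=$(grep 'swap' "$fstab")
echo "$result"
```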

If you check the kubelet state now, it shows as failed. That is expected at this stage: kubelet keeps restarting until kubeadm init writes its configuration. It still needs to be enabled and started.

3. Check the images kubeadm will use (kubeadm config images list)
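The screenshot in this step came from kubeadm config images list, which prints the images kubeadm would pull. For the 1.12.x packages installed above, the list looks roughly like the following (the exact tags here are an assumption based on the kubeadm defaults for this release; check your own output):

```shell
# What `kubeadm config images list` prints for kubeadm 1.12.x
# (reproduced with cat here, since the real command needs kubeadm installed).
images=$(cat <<'EOF'
k8s.gcr.io/kube-apiserver:v1.12.2
k8s.gcr.io/kube-controller-manager:v1.12.2
k8s.gcr.io/kube-scheduler:v1.12.2
k8s.gcr.io/kube-proxy:v1.12.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2
EOF
)
echo "$images"
```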
4. Import the required images

[root@server1 images]# docker load -i kube-apiserver.tar
[root@server1 images]# docker load -i kube-controller-manager.tar 
[root@server1 images]# docker load -i kube-proxy.tar 
[root@server1 images]# docker load -i pause.tar 
[root@server1 images]# docker load -i etcd.tar
[root@server1 images]# docker load -i coredns.tar 
[root@server1 images]# docker load -i kube-scheduler.tar 
[root@server1 images]# docker load -i flannel.tar 
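
The eight docker load commands above can be collapsed into a loop (assuming all the .tar files sit in the current directory). Sketched here by printing the commands rather than invoking docker, so the sequence is visible; drop the echo to actually run them:

```shell
# Build the list of docker load commands for every component tarball.
cmds=$(for tar in kube-apiserver kube-controller-manager kube-proxy pause \
                  etcd coredns kube-scheduler flannel; do
  echo "docker load -i ${tar}.tar"
done)
echo "$cmds"
```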

Do the same on server2.
5. Initialize the cluster on server1

[root@server1 mnt]# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.25.66.1
[root@docker1 mnt]# vim kube-flannel.yml 
 76       "Network": "10.244.0.0/16"
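
The two values above must agree: the --pod-network-cidr passed to kubeadm init has to equal the Network field inside kube-flannel.yml's net-conf.json, because flannel carves per-node subnets out of that range. A quick check, sketched here against the default upstream fragment:

```shell
# net-conf.json as it appears inside kube-flannel.yml (default values).
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
EOF

# Extract the overlay CIDR and confirm it matches the kubeadm init flag:
cidr=$(grep -o '10\.244\.0\.0/16' "$conf")
echo "flannel Network: $cidr"
```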

If initialization succeeds, kubeadm prints a kubeadm join command at the end of its output. Record that command: it is what the other nodes run to join the cluster.

Note: if cluster initialization fails on the first attempt, run "kubeadm reset" to reset the node before executing the initialization command again.

6. On server1 (the master node): follow the instructions printed at the end of the previous step (the cluster-initialization output)

[root@server1 mnt]# useradd k8s	# add an ordinary user; any username will do, k8s is used here
[root@server1 mnt]# vim /etc/sudoers	# grant the k8s user full privileges; save with :wq! (or run visudo and save with :wq)

92 k8s     ALL=(ALL)       NOPASSWD: ALL
[root@server1 mnt]# su - k8s 
[k8s@server1 ~]$ mkdir -p $HOME/.kube
[k8s@server1 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[k8s@server1 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

7. On server1 (the master node): fix kubectl command completion

[k8s@server1 ~]$ echo "source <(kubectl completion bash)" >> ./.bashrc
[k8s@server1 ~]$ logout 
[root@server1 mnt]# su - k8s 
Last login: Thu Jun 13 05:33:21 CST 2019 on pts/0	
# after logging back in, kubectl commands now tab-complete

8. On server1 (the master node): copy the kube-flannel.yml file into the /home/k8s directory, because the original file sits in a root-owned directory the ordinary k8s user cannot access. Then continue following the cluster-initialization prompts:

[k8s@server1 ~]$ logout 
[root@server1 mnt]# cp kube-flannel.yml /home/k8s/
[root@server1 mnt]# su - k8s 
Last login: Thu Jun 13 05:35:24 CST 2019 on pts/0
[k8s@server1 ~]$ kubectl apply -f kube-flannel.yml

After applying kube-flannel.yml, view the running containers with docker ps.
9. On server2 (the worker node): join the cluster using the join command from the master's initialization output

[root@server2 images]# kubeadm join 172.25.66.1:6443 --token m971hz.qvblb58fnknbprsb --discovery-token-ca-cert-hash sha256:23d4a7cfa55bea7a0ee914c8e1ae4308184ebd1442d837f60b49d976980c6a3e

A message stating that the node has joined the cluster confirms the operation succeeded.

Note: as with initialization, if joining the cluster fails on the first attempt, run "kubeadm reset" on the node before executing the join command again. If the token from the initialization output has expired, running "kubeadm token create --print-join-command" on the master prints a fresh join command.

[root@server2 images]# modprobe ip_vs_sh 
[root@server2 images]# modprobe ip_vs_wrr

View the containers now running on server2 with docker ps.
10. On server1 (the master node): test the cluster

  • Get the pods in the default namespace and check whether every node's status is Ready (kubectl get nodes).

  • Get the pods in all namespaces (kubectl get pods --all-namespaces) and check whether every pod's status is Running.
    In this run, the two coredns pods in the kube-system namespace were in the CrashLoopBackOff state, which is wrong. How can that be fixed?
    1. On the physical machine, add an iptables rule so that the virtual machines server1 and server2 can reach the Internet:

      [root@foundation66 k8s]# iptables -t nat -I POSTROUTING -s 172.25.66.0/24 -j MASQUERADE
    

After adding the iptables rule, wait about a minute and check the status again to see whether the pods are Running. If they are, you are done; if not, carry out step 2 below. In general the pods recover after step 1 alone.

2. Delete the pods whose status is not Running; their Deployment will recreate them. Inspect them first with kubectl describe to see why they failed:

[k8s@server1 ~]$ kubectl describe pod coredns-576cbf47c7-w74cd -n kube-system
[k8s@server1 ~]$ kubectl describe pod coredns-576cbf47c7-fsljr -n kube-system
[k8s@server1 ~]$ kubectl delete pod coredns-576cbf47c7-w74cd -n kube-system
[k8s@server1 ~]$ kubectl delete pod coredns-576cbf47c7-fsljr -n kube-system

Origin blog.csdn.net/weixin_42446031/article/details/91634769