Installing and deploying a Kubernetes (1.15.2) cluster with kubeadm on CentOS 7

I. What is Kubernetes

Kubernetes is an open-source container cluster management system from Google (derived from Google's internal Borg system). It provides mechanisms for application deployment, maintenance, and scaling; with Kubernetes you can easily manage containerized applications running across machines, and get automated container deployment, automatic scaling, self-healing, and other capabilities. It is both a container orchestration tool and a new, leading distributed-architecture solution built on container technology.

II. Kubernetes architecture and components

A K8s cluster consists of two types of nodes: management (master) nodes and worker nodes.

1) Architecture

The management node is responsible for managing the cluster and its resource data, and provides the access entry point to the cluster. It runs the etcd storage service (optionally external), the API Server process, the Controller Manager process, and the Scheduler process, and is associated with the worker nodes. The Kubernetes API Server provides the HTTP REST interface and is the only entry point for create, delete, update, and query operations on all Kubernetes resources, as well as the entry point for cluster control. The Kubernetes Controller Manager is the automation control center for all Kubernetes resource objects. The Kubernetes Scheduler is the process responsible for resource scheduling (placing Pods onto nodes).
The worker nodes are the nodes in a Kubernetes cluster that actually run Pods, providing computing resources for containers and Pods; all Pods and containers run on worker nodes. Each worker node runs the kubelet service, which manages the life cycle of the node's containers and communicates with the management node and the other nodes in the cluster.

2) Core components

Management node:
1. Kubernetes API Server
   The entry point to the Kubernetes system; it exposes create, delete, update, and query operations on the core objects as RESTful API interfaces for external and internal client components to call. The REST objects it maintains are persisted to the etcd store.
2. Kubernetes Scheduler
   Selects a node (i.e., assigns a machine) for each newly created Pod and is responsible for cluster resource scheduling. The component is decoupled and can easily be replaced with another scheduler.
3. Kubernetes Controller Manager
   Responsible for running the various controllers; a number of controllers are provided to keep Kubernetes operating normally.
Worker node:
1. kubelet
   Responsible for controlling containers: the kubelet receives Pod creation requests from the Kubernetes API Server, starts and stops containers, monitors container status, and reports it to the Kubernetes API Server.
2. Kubernetes Proxy (kube-proxy)
   Responsible for creating proxy services for Pods: kube-proxy obtains all Service information from the Kubernetes API Server and creates proxies accordingly, which route and forward requests from a Service to its Pods, implementing a Kubernetes-level virtual forwarding network.

III. Preparing the basic environment

The environment configuration is as follows (this test environment runs on virtual machines):

IP address      Hostname              OS          Kernel   CPU   RAM
192.168.100.6   master01.cluster.k8   CentOS 7.6  5.2.6    4c    4G
192.168.100.7   node01.cluster.k8     CentOS 7.6  5.2.6    4c    4G
192.168.100.8   node02.cluster.k8     CentOS 7.6  5.2.6    4c    4G

1. Set the hostname; for example, execute on the master node:

[root@master01 ~]# hostnamectl set-hostname master01.cluster.k8

Set the hostnames of the other hosts in the same way.

2. Add local name resolution by modifying the /etc/hosts file on the master and node hosts with the following command:

[root@master01 ~]# cat <<EOF >>/etc/hosts
192.168.100.6 master01.cluster.k8
192.168.100.7 node01.cluster.k8
192.168.100.8 node02.cluster.k8
EOF

3. Disable the firewall, SELinux, and swap

1) Disable the firewall
[root@master01 ~]#  systemctl stop firewalld
[root@master01 ~]#  systemctl disable firewalld
2) Disable SELinux
[root@master01 ~]#  setenforce 0  # takes effect immediately, until reboot
[root@master01 ~]#  sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config  # permanent (requires a reboot)
3) Disable the swap partition
[root@master01 ~]# swapoff -a
[root@master01 ~]# sed -i 's/.*swap.*/#&/' /etc/fstab

4. Configure kernel parameters so that bridged IPv4 traffic is passed to iptables chains

[root@master01 ~]# cat > /etc/sysctl.d/k8s.conf  <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@master01 ~]# sysctl --system

Note: if sysctl reports "No such file or directory" for these keys, simply load the br_netfilter module first:

[root@master01 ~]#  modprobe br_netfilter
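
The modprobe above does not survive a reboot. A small addition not in the original text, using the standard systemd modules-load facility: listing the module in a file under /etc/modules-load.d/ makes it load at boot (the file name is arbitrary):

```
# /etc/modules-load.d/k8s.conf
br_netfilter
```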

5. Install ntpdate to keep cluster time synchronized

[root@master01 ~]# yum install -y ntpdate
[root@master01 ~]# ntpdate -u ntp.aliyun.com
# Optionally add a cron job for periodic clock synchronization
[root@master01 ~]# crontab -e
*/10 * * * * /usr/sbin/ntpdate ntp.aliyun.com;/sbin/hwclock -w;

6. Install other software

[root@master01 ~]# yum install -y wget

7. Configure the yum repositories

1) Configure the Aliyun Kubernetes repository
[root@master01 ~]#cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2) Configure the Docker repository
[root@master01 ~]#wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

IV. Installation and deployment

1. Install Docker

[root@master01 ~]#yum install -y yum-utils device-mapper-persistent-data lvm2
# List available Docker versions
[root@master01 ~]#yum list docker-ce --showduplicates | sort -r
# Install a specific version
[root@master01 ~]#yum install docker-ce-18.06.3
[root@master01 ~]#systemctl enable docker && systemctl start docker
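
One optional adjustment not in the original text: kubeadm's preflight checks warn when Docker's cgroup driver differs from the kubelet's, and the systemd driver is the usual recommendation on CentOS 7. It can be set in /etc/docker/daemon.json before starting Docker:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

Restart Docker (`systemctl restart docker`) after changing this file.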

Note: if the test environment has no public IP, Docker needs a proxy in order to pull images. It can be configured in either of the following ways:

1) Edit /etc/sysconfig/docker directly and add the following (not recommended: this configuration can be lost when Docker is upgraded or otherwise updated)

HTTP_PROXY="http://[proxy-addr]:[proxy-port]/"
HTTPS_PROXY="https://[proxy-addr]:[proxy-port]/"
export HTTP_PROXY HTTPS_PROXY

2) Create a systemd drop-in directory for the docker service and add a proxy configuration file there; configured this way, the setting persists
[root@master01 ~]# mkdir -p /etc/systemd/system/docker.service.d

Create the file /etc/systemd/system/docker.service.d/http-proxy.conf and add the proxy environment variables:

[root@master01 ~]#vim /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://[proxy-addr]:[proxy-port]/" "HTTPS_PROXY=https://[proxy-addr]:[proxy-port]/" "NO_PROXY=localhost,127.0.0.1,docker-registry.somecorporation.com"

Reload the systemd configuration:

 [root@master01 ~]#systemctl daemon-reload

Restart Docker:

 [root@master01 ~]#systemctl restart docker

2. Install kubeadm, kubelet, and kubectl on all three nodes

[root@master01 ~]# yum install -y kubelet-1.15.2 kubeadm-1.15.2 kubectl-1.15.2
[root@master01 ~]# systemctl enable kubelet

The kubelet communicates with the rest of the cluster and manages the life cycle of the Pods and containers on its own node. kubeadm is Kubernetes' automated deployment tool, which lowers the difficulty of deployment and improves efficiency. kubectl is the Kubernetes cluster management command-line tool.

3. Deploy the master management node

[root@master01 ~]# kubeadm init --kubernetes-version=1.15.2 \
--apiserver-advertise-address=192.168.100.6 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=100.64.0.0/10 \
--pod-network-cidr=10.244.0.0/16

Note: during deployment, kubeadm pulls images from k8s.gcr.io by default, which is hard to reach from mainland China, so it is recommended to point --image-repository at a domestic mirror; here we use the Aliyun registry.
If cluster initialization succeeds, output like the following is returned:

kubeadm join 192.168.100.6:6443 --token v***ht.38oa8f6snaaiycga     --discovery-token-ca-cert-hash sha256:4930dc9796565dd23f221ad7336afee37a7f4790c7487ded6ca26efffae3058a

This command is used to join other nodes to the Kubernetes cluster: run it as-is on each node you want to add.
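
If the token has expired (the default TTL is 24 hours), running `kubeadm token create --print-join-command` on the master prints a fresh join command. The `--discovery-token-ca-cert-hash` value can also be recomputed by hand: it is the SHA-256 of the DER-encoded public key of the cluster CA certificate (/etc/kubernetes/pki/ca.crt on the master). A runnable sketch, using a throwaway self-signed certificate as a stand-in for the real CA cert so the pipeline works anywhere:

```shell
# Generate a throwaway certificate (stand-in for /etc/kubernetes/pki/ca.crt)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/ca.key -out /tmp/ca.crt -subj /CN=kubernetes 2>/dev/null

# Hash the DER-encoded public key the same way kubeadm does
hash=$(openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')
echo "sha256:${hash}"
```

On a real master, point the pipeline at /etc/kubernetes/pki/ca.crt and pass the result to kubeadm join as `--discovery-token-ca-cert-hash sha256:<hash>`.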

4. Configure kubectl, the cluster management tool

[root@master01 ~]# mkdir -p /root/.kube
[root@master01 ~]# cp /etc/kubernetes/admin.conf /root/.kube/config

Note: copy the master's /root/.kube/config to the other nodes as well; otherwise, viewing cluster resources with kubectl on those nodes fails with "The connection to the server localhost:8080 was refused - did you specify the right host or port?".
If, after copying /root/.kube/config to a node, kubectl there fails with "Unable to connect to the server: Forbidden" when fetching cluster resource objects, check whether the node has an HTTPS proxy configured and remove it if necessary.

5. Deploy the worker nodes

Run the join command output by the cluster initialization in step 3 directly on each node to add it to the Kubernetes cluster:

[root@node01 ~]#kubeadm join 192.168.100.6:6443 --token v***ht.38oa8f6snaaiycga     --discovery-token-ca-cert-hash sha256:4930dc9796565dd23f221ad7336afee37a7f4790c7487ded6ca26efffae3058a

Now view the cluster's node information:

[root@master01 ~]#kubectl get nodes
NAME                  STATUS     ROLES    AGE   VERSION
master01.cluster.k8   NotReady   master   5m    v1.15.2
node01.cluster.k8     NotReady   <none>   1m    v1.15.2
node02.cluster.k8     NotReady   <none>   1m    v1.15.2

The cluster nodes are still in the NotReady state because no network plugin has been installed yet. Mainstream network plugins include flannel, calico, and canal.

6. Deploy the flannel network

[root@master01 ~]#kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

The deployment needs to pull the flannel image, which may be a little slow. Once it completes, check the cluster status again: the nodes are now Ready.

[root@master01 ~]#kubectl get nodes
NAME                  STATUS   ROLES    AGE   VERSION
master01.cluster.k8   Ready    master   15m   v1.15.2
node01.cluster.k8     Ready    <none>   10m   v1.15.2
node02.cluster.k8     Ready    <none>   10m   v1.15.2

At this point we can create Pods with the kubectl tool.

7. Deploy the Dashboard for easier cluster management

[root@master01 ~]#kubectl apply -f  https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

Important points:

First: the Kubernetes Dashboard image is pulled from k8s.gcr.io by default, which may not be reachable from mainland China; so download kubernetes-dashboard.yaml locally first, then change the image repository to the Aliyun registry address.
[root@master01 ~]#wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
# Replace the image repository (using '#' as the sed delimiter, since the replacement contains '/')
[root@master01 ~]#sed -i 's#k8s.gcr.io#registry.aliyuncs.com/google_containers#g' kubernetes-dashboard.yaml
Second: in kubernetes-dashboard.yaml, the kubernetes-dashboard Service is defined as follows:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

Since spec.type is not specified, it defaults to ClusterIP, so the kubernetes-dashboard service is not exposed on a node port. To be able to access the dashboard via node IP + port number, modify kubernetes-dashboard.yaml as follows:

[root@master01 ~]#sed -i '/targetPort:/a\ \ \ \ \ \ nodePort: 30443\n\ \ type: NodePort' kubernetes-dashboard.yaml
#After the change, the kubernetes-dashboard Service definition is as follows:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard

Redeploy the Dashboard:

[root@master01 ~]#kubectl delete -f kubernetes-dashboard.yaml
[root@master01 ~]#kubectl apply -f kubernetes-dashboard.yaml

We recommend using the Firefox browser to access the Dashboard at https://192.168.100.6:30443

Note: other browsers may fail to open the page because of the Dashboard's own certificate; this can be solved by generating and signing a self-signed certificate yourself. See: https://blog.51cto.com/10616534/2430512
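
A minimal sketch of generating such a self-signed certificate (the file names and the CN are illustrative; the v1.10.1 manifest serves the certificate from the kubernetes-dashboard-certs secret in kube-system):

```shell
# Generate a private key and a self-signed certificate for the Dashboard
openssl genrsa -out /tmp/dashboard.key 2048
openssl req -new -key /tmp/dashboard.key -out /tmp/dashboard.csr \
  -subj "/CN=192.168.100.6"
openssl x509 -req -in /tmp/dashboard.csr -signkey /tmp/dashboard.key \
  -out /tmp/dashboard.crt -days 365
```

The key and certificate would then be loaded into the cluster with something like `kubectl create secret generic kubernetes-dashboard-certs --from-file=/tmp/dashboard.key --from-file=/tmp/dashboard.crt -n kube-system` before (re)deploying the Dashboard.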

8. Obtain the Dashboard login token:

#Create a service account, dashboard-admin, for the Dashboard to access the cluster
[root@master01 ~]#kubectl create serviceaccount  dashboard-admin -n kube-system
#Bind the dashboard-admin service account to the built-in cluster-admin cluster role
[root@master01 ~]#kubectl create clusterrolebinding  dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
#Retrieve the token
[root@master01 ~]#kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')


9. Log in to the Dashboard with the token

After authentication, the Dashboard overview page is displayed.

10. Deploy a test Pod and view it through the Dashboard

[root@master01 ~]#kubectl create deployment my-nginx --image=nginx



Source: blog.51cto.com/10616534/2430506