KUBERNETES-1-1: Initialization and the Cluster

1. Configure local name resolution for master, node1, and node2, stop the firewalld service, and disable it at boot: the later steps involve a lot of inter-node traffic, and maintaining firewall rules for all of it would be complicated. This step also uses vim, wget, and epel-release; their installation is straightforward and not covered in detail here.

[root@master ~]# yum install -y wget epel-release vim

[root@master ~]# vim /etc/hosts
[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.20.0.128 master.example.com master
172.20.0.129 node1.example.com node1
172.20.0.130 node2.example.com node2
[root@master ~]# systemctl status firewalld
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
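The same hosts entries and firewalld changes are needed on node1 and node2. A minimal sketch, assuming root SSH access to both nodes is already in place:

[root@master ~]# scp /etc/hosts node1:/etc/hosts
[root@master ~]# scp /etc/hosts node2:/etc/hosts
[root@master ~]# ssh node1 "systemctl stop firewalld; systemctl disable firewalld"
[root@master ~]# ssh node2 "systemctl stop firewalld; systemctl disable firewalld"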
 

2. Configure the yum repositories for docker-ce and kubernetes. Once the repo files are in place, check that the packages are available, then copy the repo files to node1 and node2 with scp.

[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master yum.repos.d]# vim kubernetes.repo 
[root@master yum.repos.d]# cat kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1

[root@master yum.repos.d]# yum repolist
[root@master yum.repos.d]# scp docker-ce.repo kubernetes.repo node1:/etc/yum.repos.d/
[root@master yum.repos.d]# scp docker-ce.repo kubernetes.repo node2:/etc/yum.repos.d/

3. Install docker-ce, kubelet, kubeadm, and kubectl with yum; the packages are verified against rpm-package-key.gpg, so import it first. Kubernetes only supports particular Docker releases, so pay close attention to version compatibility between the packages (a version-pinning sketch follows the commands below).

[root@master ~]# wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@master ~]# rpm --import rpm-package-key.gpg

[root@master ~]# yum install docker-ce kubelet kubeadm kubectl
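If a specific version combination is required, yum can list what the mirrors provide and install a pinned release. The version strings below are illustrative, not prescriptive; match them to the actual yum list output:

[root@master ~]# yum list --showduplicates docker-ce kubelet kubeadm kubectl
[root@master ~]# yum install -y kubelet-1.11.1-0 kubeadm-1.11.1-0 kubectl-1.11.1-0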

4. Edit the docker.service unit file and add Environment entries: an HTTPS proxy so dockerd can reach k8s.gcr.io, and a NO_PROXY list so local traffic bypasses it. Reload the systemd daemon, start the docker service, and inspect the docker info output (a proxy check follows the commands below).

[root@master ~]# vim /usr/lib/systemd/system/docker.service

[root@master ~]# grep ExecStart -B2 /usr/lib/systemd/system/docker.service
Environment="HTTPS_PROXY=http://206.189.28.51:3128"
Environment="NO_PROXY=127.0.0.0/8,172.20.0.0/16"
ExecStart=/usr/bin/dockerd -H unix://

[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl start docker
[root@master ~]# docker info
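When the Environment lines were picked up, docker info echoes them back. A quick check (output trimmed; the exact field names may vary slightly across Docker versions):

[root@master ~]# docker info | grep -i proxy
HTTPS Proxy: http://206.189.28.51:3128
No Proxy: 127.0.0.0/8,172.20.0.0/16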

5. A lot of bridged networking is coming up, so first confirm that netfilter processing of bridged traffic is enabled (both values should be 1; a fix for when they are not follows the output below).

[root@master ~]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables 
1
[root@master ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables 
1
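If either value reads 0, the usual fix is to load the br_netfilter module and persist the sysctls:

[root@master ~]# modprobe br_netfilter
[root@master ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@master ~]# sysctl --system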
 

6. Look at the files the kubelet package installed, including its service parameter file (empty for now; extra arguments can be passed in later). Enable kubelet and docker at boot, but do not start kubelet yet: until kubeadm init has generated its configuration, the service will only fail and log errors.

[root@master ~]# rpm -ql kubelet
/etc/kubernetes/manifests
/etc/sysconfig/kubelet
/etc/systemd/system/kubelet.service
/usr/bin/kubelet
[root@master ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=
[root@master ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
[root@master ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service


7. Write an executable script that pulls the component images from a domestic mirror registry and re-tags them with the upstream k8s.gcr.io names, then initialize Kubernetes with kubeadm init. Since kubeadm refuses to run with swap enabled, turn swap off and comment it out of /etc/fstab first.

[root@master ~]# vim kubernetes.sh

[root@master ~]# cat kubernetes.sh 
#!/bin/bash
images=(kube-proxy-amd64:v1.11.1 kube-scheduler-amd64:v1.11.1 kube-controller-manager-amd64:v1.11.1 kube-apiserver-amd64:v1.11.1
etcd-amd64:3.2.18 coredns:1.1.3 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.9 k8s-dns-kube-dns-amd64:1.14.9
k8s-dns-dnsmasq-nanny-amd64:1.14.9 )
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName
  docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName k8s.gcr.io/$imageName
  #docker rmi registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName
done
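# kubeadm expects the pause image under the name k8s.gcr.io/pause:3.1;
# re-tag it by image ID (da86e6ba6ca1, from the pull above)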
docker tag da86e6ba6ca1 k8s.gcr.io/pause:3.1

[root@master ~]# chmod +x kubernetes.sh 
[root@master ~]# ./kubernetes.sh 
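Before moving on, confirm that all the re-tagged images are present locally:

[root@master ~]# docker images | grep 'k8s.gcr.io'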

[root@master ~]# swapoff -a
[root@master ~]# sed -i 's/.*swap.*/#&/' /etc/fstab
[root@master ~]# kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.20.0.128
 

A special note: there is another method circulating online, shown below, but it never worked for me despite several attempts, and given domestic network conditions I do not recommend it.

Edit the kubelet parameter file and add the "--fail-swap-on=false" flag, which tells kubelet not to treat enabled swap as a fatal error; kubeadm is then told to skip the corresponding preflight check with --ignore-preflight-errors=Swap. Initialize the Kubernetes deployment on the master node with kubeadm init, as follows:

[root@master ~]# vim /etc/sysconfig/kubelet
[root@master ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"

[root@master ~]# kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

Either way, the tail of the kubeadm init output contains the join command for the worker nodes; save it:

  kubeadm join 172.20.0.128:6443 --token r7pywg.zedbc48t5uzhgryu --discovery-token-ca-cert-hash sha256:36cd5484b32012e85b5d17268e21480e7ff9617a11df70c3927dd6faf6cbb616
 

8. Create the hidden .kube directory and copy in the admin kubeconfig, which gives kubectl its cluster address and credentials.

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
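Alternatively, a root session can point kubectl at the admin config directly; kubeadm's own init output suggests the same:

[root@master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf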
 

9. Check the cluster status and deploy flannel (documented in detail on GitHub; see coreos/flannel). The flannel image must be pulled locally, and the kube-flannel-ds-amd64-xftt2 pod in the kube-system namespace must be confirmed running; the node only reports Ready after that. Only then is the master node fully set up.

[root@master ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   
[root@master ~]# kubectl get nodes
NAME                 STATUS     ROLES     AGE       VERSION
master.example.com   NotReady   master    1h        v1.11.1
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

[root@master ~]# kubectl get nodes
NAME                 STATUS     ROLES     AGE       VERSION
master.example.com   NotReady   master    1h        v1.11.1
[root@master ~]# docker image pull quay.io/coreos/flannel:v0.10.0-amd64
[root@master ~]# kubectl get nodes
NAME                 STATUS    ROLES     AGE       VERSION
master.example.com   Ready     master    1h        v1.11.1
[root@master ~]# kubectl get pods -n kube-system
NAME                                         READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-5rshr                     1/1       Running   0          1h
coredns-78fcdf6894-sgrjm                     1/1       Running   0          1h
etcd-master.example.com                      1/1       Running   0          1h
kube-apiserver-master.example.com            1/1       Running   0          1h
kube-controller-manager-master.example.com   1/1       Running   0          1h
kube-flannel-ds-amd64-xftt2                  1/1       Running   0          21m
kube-proxy-l259p                             1/1       Running   0          1h
kube-scheduler-master.example.com            1/1       Running   0          1h
[root@master ~]# kubectl get ns
NAME          STATUS    AGE
default       Active    1h
kube-public   Active    1h
kube-system   Active    1h

10. Join node1 to the cluster. The command is printed at the very end of the kubeadm init output and is worth saving; if it is lost, query the tokens with kubeadm token list, or reconstruct the full command as sketched below. (Note: before this step, prepare node1 the same way as the master: install the packages, enable docker and kubelet, pull and re-tag the images, and let flannel come up. Some troubleshooting may be involved; after fixing a failed attempt, run kubeadm reset before trying kubeadm join again.)
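If the full join command has been lost, it can be reconstructed: kubeadm of this vintage can print a fresh one, and the CA cert hash can be recomputed with openssl (the second command is taken from the kubeadm documentation):

[root@master ~]# kubeadm token create --print-join-command
[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'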

[root@node1 ~]# kubeadm join 172.20.0.128:6443 --token r7pywg.zedbc48t5uzhgryu --discovery-token-ca-cert-hash sha256:36cd5484b32012e85b5d17268e21480e7ff9617a11df70c3927dd6faf6cbb616

An error appeared here; the fix is kubeadm reset, then re-running the command above.
[preflight] Some fatal errors occurred:
    [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
    [ERROR FileAvailable--etc-kubernetes-bootstrap-kubelet.conf]: /etc/kubernetes/bootstrap-kubelet.conf already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

11. On the master node, confirm that node1 has joined the cluster; the exercise is complete. (To add node2, follow the same steps.)

[root@master ~]# kubectl get nodes
NAME                 STATUS    ROLES     AGE       VERSION
master.example.com   Ready     master    37m       v1.11.1
node1.example.com    Ready     <none>    21s       v1.11.1
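As a final check, list the kube-system pods with their node placement to confirm that kube-proxy and flannel came up on node1 as well:

[root@master ~]# kubectl get pods -n kube-system -o wide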
 

Reposted from blog.csdn.net/ligan1115/article/details/84843030