Deploying Kubernetes 1.17.0 with kubeadm

A tutorial on installing Kubernetes with kubeadm.

1 Installation environment

CentOS 7
Run yum -y update on a freshly installed CentOS machine.
If the machine cannot resolve www.baidu.com, configure /etc/resolv.conf.
Add:
nameserver 8.8.8.8

2 Installation steps

2.1 Basic configuration on every node

2.1.1 Edit /etc/hosts on every node

192.168.99.11 node1
192.168.99.12 node2
192.168.99.21 master1
192.168.99.22 master2
192.168.99.23 master3

Put both the worker-node and master entries into /etc/hosts on every node.
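A minimal sketch for appending these entries on each node, assuming the IPs and hostnames listed above:

cat >> /etc/hosts <<EOF
192.168.99.11 node1
192.168.99.12 node2
192.168.99.21 master1
192.168.99.22 master2
192.168.99.23 master3
EOF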

2.1.2 Disable the firewall on every node

systemctl stop firewalld
systemctl disable firewalld

2.1.3 Disable SELinux on every node

# Disable SELinux temporarily
setenforce 0
# Disable it permanently by editing /etc/sysconfig/selinux
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

2.1.4 Disable swap

swapoff -a

# Disable it permanently by commenting out the swap line in /etc/fstab.
sed -i 's/.*swap.*/#&/' /etc/fstab
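To confirm swap is really off, a quick check:

swapon -s   # should print nothing when swap is disabled
free -h     # the Swap line should show 0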

2.1.5 Adjust kernel parameters

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
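These bridge sysctls only exist once the br_netfilter kernel module is loaded, so if sysctl --system complains about unknown keys, load the module first; a minimal sketch:

modprobe br_netfilter
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
sysctl --system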

2.2 Install Docker

Docker has to be installed on every node.

2.2.1 Configure the yum repositories

## Configure the base repository
## Back up the existing repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

## Download the Aliyun repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

## Refresh the cache
yum makecache fast

## Configure the Kubernetes repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF

## Rebuild the yum cache
yum clean all
yum makecache fast
yum -y update

2.2.2 Install Docker

Add the Docker yum repository:

yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

List the available Docker versions:

yum list docker-ce --showduplicates |sort -r  

Install version 18.09 specifically:

yum install -y docker-ce-18.09.9-3.el7
systemctl enable docker
systemctl start docker

2.2.3 Adjust the Docker daemon options and configure a registry mirror

Replace https://xxxx.mirror.aliyuncs.com with your own Aliyun mirror accelerator address. JSON does not allow comments, so do not leave one inside the file:

cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://xxxx.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

systemctl restart docker
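After the restart it is worth confirming that Docker picked up the systemd cgroup driver and the mirror, for example:

docker info | grep -i 'cgroup driver'      # should report: Cgroup Driver: systemd
docker info | grep -iA1 'registry mirrors'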

2.3 Install Kubernetes

Control-plane node configuration

Install the control plane on the master node (k8s-master) first.

Install kubeadm and kubelet

To install the latest kubeadm and kubelet directly:

yum install -y kubeadm kubelet

I personally prefer to pin the version; the commands to install kubeadm and kubelet are:

yum install -y kubelet-1.17.0-0
yum install -y kubeadm-1.17.0-0
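kubectl is normally pulled in as a dependency of the kubeadm package; you can confirm the installed versions before going further, e.g.:

kubeadm version -o short   # should print v1.17.0
kubelet --version          # should print Kubernetes v1.17.0
kubectl version --client   # client version should also be v1.17.0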

Initialize the cluster with kubeadm

Do not run the initialization right away: users in mainland China cannot pull the required images directly, so first list the images and versions that are needed:

kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.17.0
k8s.gcr.io/kube-controller-manager:v1.17.0
k8s.gcr.io/kube-scheduler:v1.17.0
k8s.gcr.io/kube-proxy:v1.17.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5

The official images are hosted on Google's registry (k8s.gcr.io) and cannot be pulled from inside China without a proxy. You can pull the equivalents from Aliyun first and then retag them.
Below is a ready-made script that pulls the images from Aliyun and retags them for you.

vim kubeadm.sh

#!/bin/bash

## Pull the images from a domestic mirror and retag them with the Google (k8s.gcr.io) tags
set -e

KUBE_VERSION=v1.17.0
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.4.3-0
CORE_DNS_VERSION=1.6.5

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done

Run the script:

sh kubeadm.sh

Then run docker images to check whether everything was pulled; if anything is missing, run the script again.
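A small check, sketched here on the assumption that kubeadm config images list prints the same list as above, compares what kubeadm needs against what is present locally:

for img in $(kubeadm config images list 2>/dev/null); do
  if docker image inspect "$img" >/dev/null 2>&1; then
    echo "OK      $img"
  else
    echo "MISSING $img"
  fi
done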
If you are installing multiple master nodes, use the following command for the initialization:

kubeadm init  --apiserver-advertise-address 192.168.10.20 --control-plane-endpoint 192.168.10.20  --kubernetes-version=v1.17.0  --pod-network-cidr=10.244.0.0/16  --upload-certs

Here 192.168.10.20 is the address of the current master node; leave --pod-network-cidr as shown, it does not need to be changed.

systemctl enable kubelet
systemctl start kubelet

Output like the following means the initialization succeeded. Save it somewhere safe: it contains the kubeadm join commands for adding worker nodes and additional master nodes to the cluster, which you can copy and use directly.


Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 10.142.113.40:6443 --token 2jlcuy.6j0s6gxro39nkx8t \
    --discovery-token-ca-cert-hash sha256:0cebce7af4d7d964ea570b4bd3364552bd8aab7a715db7eaf8526b3ed88eedc4 \
    --control-plane --certificate-key f230b623af7fc854126e11ae905e54c988187f2586e49cd7c1f2bfda828dd15b

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.142.113.40:6443 --token 2jlcuy.6j0s6gxro39nkx8t \
    --discovery-token-ca-cert-hash sha256:0cebce7af4d7d964ea570b4bd3364552bd8aab7a715db7eaf8526b3ed88eedc4

To add another master node:

kubeadm join 192.168.10.20:6443 --token z34zii.ur84appk8h9r3yik --discovery-token-ca-cert-hash sha256:dae426820f2c6073763a3697abeb14d8418c9268288e37b8fc25674153702801     --control-plane --certificate-key 1b9b0f1fdc0959a9decef7d812a2f606faf69ca44ca24d2e557b3ea81f415afe

To add a worker node:

kubeadm join 10.142.113.40:6443 --token 2jlcuy.6j0s6gxro39nkx8t \
    --discovery-token-ca-cert-hash sha256:0cebce7af4d7d964ea570b4bd3364552bd8aab7a715db7eaf8526b3ed88eedc4

A successful join normally prints output like the following; this is what a master node shows:

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Run the commands it suggests, and then you can use kubectl on this master node:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Output for a worker node that joined successfully:

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

OK, that completes adding the nodes. If you run kubectl get nodes now, you will see that the nodes are all in the NotReady state. That is because no CNI plugin has been installed yet; don't worry.

Install flannel

Download the flannel manifest:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Check which image tag the manifest references; mine is v0.12.0.
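A quick grep confirms the tag (the path assumes the kube-flannel.yml downloaded above):

grep 'image:' kube-flannel.yml | sort -u
# e.g. image: quay.io/coreos/flannel:v0.12.0-amd64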
The required images are pulled and retagged with a script:
vi flannel.sh

#!/bin/bash

set -e

FLANNEL_VERSION=v0.12.0

# change the source registry here if needed
QUAY_URL=quay.io/coreos
QINIU_URL=quay-mirror.qiniu.com/coreos

images=(flannel:${FLANNEL_VERSION}-amd64
flannel:${FLANNEL_VERSION}-arm64
flannel:${FLANNEL_VERSION}-arm
flannel:${FLANNEL_VERSION}-ppc64le
flannel:${FLANNEL_VERSION}-s390x)

for imageName in ${images[@]} ; do
  docker pull $QINIU_URL/$imageName
  docker tag  $QINIU_URL/$imageName $QUAY_URL/$imageName
  docker rmi $QINIU_URL/$imageName
done

Run sh flannel.sh to pull the images. Pulls may time out; if that happens, just run the script again. flannel has to be deployed on every node, so run the script above on each node.
Finally run:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

to perform the installation.
Then check the flannel pods:

kubectl get po -n kube-system | grep flannel

When they are all Running, flannel is in place. Finally check kubectl get nodes; when every node shows Ready, the cluster has been installed successfully.
The result after installation:

[root@localhost zhouyu]# kubectl get po -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP              NODE      NOMINATED NODE   READINESS GATES
coredns-6955765f44-5xr4d          1/1     Running   0          98m   10.244.0.2      master1   <none>           <none>
coredns-6955765f44-ntlj2          1/1     Running   0          98m   10.244.0.3      master1   <none>           <none>
etcd-master1                      1/1     Running   0          98m   10.142.113.40   master1   <none>           <none>
etcd-master2                      1/1     Running   0          66m   10.142.113.41   master2   <none>           <none>
etcd-master3                      1/1     Running   0          60m   10.142.113.42   master3   <none>           <none>
kube-apiserver-master1            1/1     Running   0          98m   10.142.113.40   master1   <none>           <none>
kube-apiserver-master2            1/1     Running   0          72m   10.142.113.41   master2   <none>           <none>
kube-apiserver-master3            1/1     Running   0          60m   10.142.113.42   master3   <none>           <none>
kube-controller-manager-master1   1/1     Running   2          98m   10.142.113.40   master1   <none>           <none>
kube-controller-manager-master2   1/1     Running   0          72m   10.142.113.41   master2   <none>           <none>
kube-controller-manager-master3   1/1     Running   0          60m   10.142.113.42   master3   <none>           <none>
kube-flannel-ds-amd64-6p22w       1/1     Running   0          33m   10.142.113.41   master2   <none>           <none>
kube-flannel-ds-amd64-f2hpq       1/1     Running   0          33m   10.142.113.40   master1   <none>           <none>
kube-flannel-ds-amd64-gnz4l       1/1     Running   0          33m   10.142.113.42   master3   <none>           <none>
kube-flannel-ds-amd64-nbpzg       1/1     Running   0          33m   10.142.113.43   node1     <none>           <none>
kube-proxy-262m8                  1/1     Running   0          60m   10.142.113.42   master3   <none>           <none>
kube-proxy-49fnt                  1/1     Running   0          72m   10.142.113.41   master2   <none>           <none>
kube-proxy-75b6m                  1/1     Running   0          98m   10.142.113.40   master1   <none>           <none>
kube-proxy-bpkxk                  1/1     Running   0          59m   10.142.113.43   node1     <none>           <none>
kube-scheduler-master1            1/1     Running   2          98m   10.142.113.40   master1   <none>           <none>
kube-scheduler-master2            1/1     Running   0          72m   10.142.113.41   master2   <none>           <none>
kube-scheduler-master3            1/1     Running   0          60m   10.142.113.42   master3   <none>           <none>

If a node is still not Ready, run kubectl describe node <nodename> to investigate. It may be that an image failed to pull, or something else; answers for the common causes are easy to find online.
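A few commands that usually narrow the problem down; a sketch, with <nodename> standing in for the node reported by kubectl get nodes:

kubectl describe node <nodename>        # check the Conditions and Events sections
kubectl get po -n kube-system -o wide   # are flannel and kube-proxy running on that node?
# on the affected node itself:
journalctl -u kubelet -f                # kubelet logs usually point at the missing image or bad config
docker images | grep flannel            # was the flannel image actually pulled?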


Reposted from blog.csdn.net/u013276277/article/details/105603516