Setting Up a Kubernetes 1.5 Cluster on Ubuntu 16.04 with kubeadm

1. Environment Preparation

I prepared three Ubuntu 16.04 virtual machines with the following specs:

Node   IP Address      CPU      Memory
master 192.168.0.158   4 cores  4GB
node1  192.168.0.159   1 core   2GB
node2  192.168.0.160   1 core   2GB

The Kubernetes docs recommend at least 1GB of memory per machine; otherwise, once the cluster is up, there is very little memory left for the applications running in containers. Also make sure all of the machines can reach each other over the network.

One more note: I originally gave the master only 2 cores, but once kubeadm was running, docker hit OutOfCPU errors while pulling the required images. Presumably this is because kubeadm uses the first machine as both the master and the first node, so all the initial services run there. I later bumped the master up to 4 cores.

2. Setup Steps

(1/4) Install docker, kubelet, kubeadm, and kubectl

SSH into each machine as root and run:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
# install docker first
apt-get install -y docker.io
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
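Once the packages are installed, a quick sanity check that all four binaries actually landed on PATH can save some debugging later. A minimal sketch (check_installed is a hypothetical helper of mine, not part of any of these packages):

```shell
# Check that each named binary is available on PATH; report any that are
# missing. check_installed is a hypothetical helper, not part of kubeadm.
check_installed() {
  local missing=0 bin
  for bin in "$@"; do
    command -v "$bin" >/dev/null 2>&1 || { echo "missing: $bin"; missing=1; }
  done
  return "$missing"
}

# On the cluster machines you would run:
#   check_installed docker kubelet kubeadm kubectl
```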

(2/4) Initialize the master

Once kubelet and kubectl are installed, run the following on the master as root:

kubeadm init --token=yiqian.123456

Note: the official docs run kubeadm init without a token, but when I did that, the console never printed a default token (excuse me???). Since the other nodes need the master's token to join the cluster, I specified one by hand when initializing the master.
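If you'd rather not invent a token by hand, you can generate a random one. A sketch (gen_token is my own helper; newer kubeadm releases validate a "6 chars.16 chars" lowercase-alphanumeric shape, so generating that shape also works with 1.5, which is more permissive):

```shell
# Generate a random token shaped like <6 chars>.<16 chars>, lowercase
# alphanumeric. gen_token is a hypothetical helper, not a kubeadm command.
gen_token() {
  local p1 p2
  p1=$(head -c 512 /dev/urandom | LC_ALL=C tr -dc 'a-z0-9' | head -c 6)
  p2=$(head -c 512 /dev/urandom | LC_ALL=C tr -dc 'a-z0-9' | head -c 16)
  echo "${p1}.${p2}"
}

TOKEN=$(gen_token)
# then: kubeadm init --token=$TOKEN
```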

Another pitfall: if initialization fails, as in Issue #33544, stopping and re-running init will error out. That's because kubeadm creates directories such as /etc/kubernetes and /var/lib/kubelet after installing the packages, so we need to clean those up by hand before re-initializing:

systemctl stop kubelet
docker rm -f $(docker ps -aq)
mount | grep "/var/lib/kubelet/*" | awk '{print $3}' | xargs umount 1>/dev/null 2>/dev/null
rm -rf /var/lib/kubelet /etc/kubernetes /var/lib/etcd /etc/cni
systemctl start kubelet
kubeadm init --token=<token>

Finally, a successful init looks like this:

root@xyq-k8s-master:/home/administrator# kubeadm init --token=yiqian.123456
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[init] Using Kubernetes version: v1.5.1
[tokens] Validating provided token
[tokens] Accepted provided token
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 14.534854 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 0.506558 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 2.005108 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=yiqian.123456 192.168.0.158

At this point we can use kubectl to check the current state of the cluster:

root@xyq-k8s-master:/home/administrator# kubectl get nodes
NAME             STATUS         AGE
xyq-k8s-master   Ready,master   1m
root@xyq-k8s-master:/home/administrator# kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY     STATUS              RESTARTS   AGE
kube-system   dummy-2088944543-gjmv4                   1/1       Running             0          1m
kube-system   etcd-xyq-k8s-master                      0/1       Pending             0          2s
kube-system   kube-apiserver-xyq-k8s-master            1/1       Running             0          20s
kube-system   kube-controller-manager-xyq-k8s-master   1/1       Running             0          22s
kube-system   kube-discovery-1769846148-9dpvt          1/1       Running             0          1m
kube-system   kube-dns-2924299975-lc4rr                0/4       ContainerCreating   0          1m
kube-system   kube-proxy-w5v9m                         1/1       Running             0          1m
kube-system   kube-scheduler-xyq-k8s-master            0/1       Pending             0          4s

As you can see, everything except kube-dns is running (etcd and the scheduler were still coming up when I captured this). kube-dns can't start until a pod network has been deployed. So, on to the next step!
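Instead of eyeballing repeated kubectl runs, the "is everything Running yet" check can be scripted. A minimal sketch (all_running is my own helper; it only assumes that STATUS is the third column of `kubectl get pods -n kube-system --no-headers` output, as in the listing above):

```shell
# Succeed only if every line of a `kubectl get pods --no-headers` listing
# shows Running in the STATUS (third) column.
# all_running is a hypothetical helper, not a kubectl feature.
all_running() {
  ! echo "$1" | awk '{print $3}' | grep -vq '^Running$'
}

# On the master you could poll with:
#   until all_running "$(kubectl get pods -n kube-system --no-headers)"; do sleep 5; done
```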

(3/4) Deploy a pod network

The docs list five network addons in total; I picked Weave Net (its install steps are by far the easiest, heh). Just run:

kubectl apply -f https://git.io/weave-kube

Once weave is up, kube-dns comes up on its own:

root@xyq-k8s-master:/home/administrator# kubectl apply -f https://git.io/weave-kube
daemonset "weave-net" created
root@xyq-k8s-master:/home/administrator# kubectl get pods --all-namespaces
NAMESPACE     NAME                                     READY     STATUS    RESTARTS   AGE
kube-system   dummy-2088944543-gjmv4                   1/1       Running   0          3m
kube-system   etcd-xyq-k8s-master                      1/1       Running   3          1m
kube-system   kube-apiserver-xyq-k8s-master            1/1       Running   0          2m
kube-system   kube-controller-manager-xyq-k8s-master   1/1       Running   0          2m
kube-system   kube-discovery-1769846148-9dpvt          1/1       Running   0          3m
kube-system   kube-dns-2924299975-lc4rr                4/4       Running   0          3m
kube-system   kube-proxy-w5v9m                         1/1       Running   0          3m
kube-system   kube-scheduler-xyq-k8s-master            1/1       Running   0          1m
kube-system   weave-net-8j1zg                          2/2       Running   0          1m

(4/4) Join the other nodes to the cluster

SSH into node1 and node2 and, as root, run:

kubeadm join --token=yiqian.123456 192.168.0.158

Back on the master, check the nodes again and you'll see the cluster is up:

root@xyq-k8s-master:/home/administrator# kubectl get nodes
NAME             STATUS         AGE
xyq-k8s-master   Ready,master   10m
xyq-k8s-s1       Ready          5m
xyq-k8s-s2       Ready          5m

Getting past the firewall

Since I was building the VMs on a remote server and working over ssh, I couldn't set up a global proxy, so I had to add the http proxy to docker's config file, /etc/default/docker:

export http_proxy=<not telling you>
export https_proxy=<not telling you either>

First, look in the kubernetes source to find the version of each component used in the 1.5 release:

Image name  Version
gcr.io/google_containers/kubedns-amd64 1.7
gcr.io/google_containers/kube-dnsmasq-amd64 1.3
gcr.io/google_containers/exechealthz-amd64 1.1

Then create a GitHub project (you can fork my repo or the original author's repo).
Finally, create an automated-build project on Docker Hub for each of the three images above. Taking kube-dns as an example:

[screenshots: Docker Hub automated-build configuration for kube-dns]
Once it's created, you need to manually trigger a build:

[screenshot: manually triggering the build]
Once the build succeeds, you can pull the images directly on your machines:

docker pull yiqianx/kubedns-amd64
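Note that pulling alone is not enough: the kube-dns pods reference the gcr.io names and tags from the version table above, so the pulled mirrors still need a docker tag back to those names. A sketch that prints the needed commands (mirror_tag_cmds is a hypothetical helper of mine; yiqianx is my Docker Hub account, substitute your own):

```shell
# Print the docker tag commands that map Docker Hub mirror images back to
# the gcr.io names and versions from the table above.
# mirror_tag_cmds is a hypothetical helper; pass your Docker Hub user name.
mirror_tag_cmds() {
  local user="$1"
  while read -r img ver; do
    echo "docker tag ${user}/${img} gcr.io/google_containers/${img}:${ver}"
  done <<'EOF'
kubedns-amd64 1.7
kube-dnsmasq-amd64 1.3
exechealthz-amd64 1.1
EOF
}

mirror_tag_cmds yiqianx
# to actually run them after pulling all three mirrors:
#   mirror_tag_cmds yiqianx | sh
```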

That's all.
Feel free to point out any problems in the comments!

Reprinted from www.linuxidc.com/Linux/2017-07/145504.htm