kubernetes (4): Installing kubeadm-dind-cluster with Docker on CentOS 7

The original article is at:
https://blog.csdn.net/freewebsys/article/details/80002439

1, About kubeadm-dind-cluster


A Kubernetes multi-node test cluster based on kubeadm
There is also the minikube project, but it requires a VM and takes some extra study:
https://github.com/kubernetes/minikube
That is another viable solution.
The advantage of kubeadm-dind-cluster is that its image already bundles everything, so nothing has to be downloaded piece by piece.
After all, network access from China is poor and gcr.io is unreachable.
With minikube you also still have to configure a domestic mirror address during startup.

2, Installing CentOS 7 in VirtualBox


The installation itself is skipped here. Configure the network interface to come up at boot:
vi /etc/sysconfig/network-scripts/ifcfg-enp0s3

ONBOOT=yes

Disable the firewall:

systemctl stop firewalld
systemctl disable firewalld

Use the Aliyun mirror:

curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Install docker and the kubectl client (kubernetes-client); no other kube components are needed.
yum install -y docker net-tools lrzsz kubernetes-client

Then set the Docker registry mirror in /etc/docker/daemon.json; the DIND containers will inherit this configuration too.

{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
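A malformed daemon.json will prevent the Docker daemon from starting, so it is worth validating the file's syntax before the restart. A quick check (assuming python3 is available; on the live host you would pass /etc/docker/daemon.json instead of the here-doc):

```shell
# Validate the daemon.json syntax; json.tool exits non-zero on invalid JSON.
# On the host: python3 -m json.tool /etc/docker/daemon.json
python3 -m json.tool <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
```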

Start and enable Docker:

systemctl start docker
systemctl enable docker

Download the startup script and pull the image:

curl -O https://cdn.rawgit.com/Mirantis/kubeadm-dind-cluster/master/fixed/dind-cluster-v1.8.sh
chmod +x dind-cluster-v1.8.sh
docker pull docker.io/mirantis/kubeadm-dind-cluster:v1.8

3, Starting kubeadm-dind-cluster


One-command startup. The 1.9 script had problems and got stuck for reasons unknown, so the 1.8 script is used here; 1.8 works.

./dind-cluster-v1.8.sh clean # if it was started before, clean up first, then start again
./dind-cluster-v1.8.sh up # bring the cluster up
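The two commands above can be wrapped in a tiny helper (a hypothetical convenience function, not part of the dind script itself) so that every start is preceded by a clean:

```shell
# Always clean before bringing the cluster up, so containers left over from
# a previous run don't interfere. "restart_dind" and the default script path
# are illustrative names only.
restart_dind() {
  local script="${1:-./dind-cluster-v1.8.sh}"
  "$script" clean
  "$script" up
}
```

Then `restart_dind` (or `restart_dind /path/to/dind-cluster-v1.8.sh`) replaces the two manual invocations.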

Configure hosts so that rawgit.com can be reached:

104.18.62.176 rawgit.com cdn.rawgit.com
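The entry above can also be added with a small idempotent helper (a sketch; `add_rawgit_hosts` is an illustrative name, and the IP is the one from this article, which may change over time):

```shell
# Append the rawgit entries to a hosts file, but only if they are not
# already there (so the helper is safe to run repeatedly).
add_rawgit_hosts() {
  local file="$1"
  grep -qF 'rawgit.com' "$file" ||
    echo '104.18.62.176 rawgit.com cdn.rawgit.com' >> "$file"
}
# On the host (as root): add_rawgit_hosts /etc/hosts
```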

Log output:

./dind-cluster-v1.8.sh up
  WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
  WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
  WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
  WARNING: You're not using the default seccomp profile
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
* Making sure DIND image is up to date
Trying to pull repository docker.io/mirantis/kubeadm-dind-cluster ...
v1.8: Pulling from docker.io/mirantis/kubeadm-dind-cluster
Digest: sha256:e7a2e7c125e5e39b006a1df488248793ebdb05fb926d9f170606a5cc8b304e20
Status: Image is up to date for docker.io/mirantis/kubeadm-dind-cluster:v1.8
* Starting DIND container: kube-master
* Running kubeadm: init --config /etc/kubeadm.conf --skip-preflight-checks
Initializing machine ID from random generator.
Synchronizing state of docker.service with SysV init with /lib/systemd/systemd-sysv-install...
Executing /lib/systemd/systemd-sysv-install enable docker

real    0m15.062s
user    0m0.474s
sys     0m0.573s
Loaded image: gcr.io/google_containers/kubedns-amd64:1.7
Loaded image: gcr.io/google_containers/etcd-amd64:2.2.5
Loaded image: gcr.io/google_containers/etcd-amd64:3.0.17
Loaded image: gcr.io/google_containers/exechealthz-amd64:1.1
Loaded image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
Loaded image: gcr.io/google_containers/pause-amd64:3.0
Loaded image: gcr.io/google_containers/etcd:2.2.1
Loaded image: mirantis/hypokube:base
Loaded image: gcr.io/google_containers/kube-discovery-amd64:1.0
Sending build context to Docker daemon   237 MB
Step 1 : FROM mirantis/hypokube:base
 ---> 13bf30297b02
Step 2 : COPY hyperkube /hyperkube
 ---> 1d32a2376c49
Removing intermediate container 694590609a31
Successfully built 1d32a2376c49
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /lib/systemd/system/kubelet.service.
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.9.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kube-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.192.0.2]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 29.005347 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node kube-master as master by adding a label and a taint
[markmaster] Master kube-master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 9e5775.1350c70f4b2bfaa1
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 9e5775.1350c70f4b2bfaa1 10.192.0.2:6443 --discovery-token-ca-cert-hash sha256:61c12cb5409c8fb50c7f9d7f02c6356350962349b2ef99622d047192927ef931


real    0m33.946s
user    0m3.242s
sys     0m0.103s
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
daemonset "kube-proxy" configured
No resources found
* Setting cluster config
Cluster "dind" set.
Context "dind" modified.
Switched to context "dind".
* Starting node container: 1
* Starting DIND container: kube-node-1
* Node container started: 1
* Starting node container: 2
* Starting DIND container: kube-node-2
* Node container started: 2
* Joining node: 1
* Joining node: 2
* Running kubeadm: join --skip-preflight-checks --token 9e5775.1350c70f4b2bfaa1 10.192.0.2:6443 --discovery-token-ca-cert-hash sha256:61c12cb5409c8fb50c7f9d7f02c6356350962349b2ef99622d047192927ef931
Initializing machine ID from random generator.
* Running kubeadm: join --skip-preflight-checks --token 9e5775.1350c70f4b2bfaa1 10.192.0.2:6443 --discovery-token-ca-cert-hash sha256:61c12cb5409c8fb50c7f9d7f02c6356350962349b2ef99622d047192927ef931
Initializing machine ID from random generator.
Synchronizing state of docker.service with SysV init with /lib/systemd/systemd-sysv-install...
Executing /lib/systemd/systemd-sysv-install enable docker
Synchronizing state of docker.service with SysV init with /lib/systemd/systemd-sysv-install...
Executing /lib/systemd/systemd-sysv-install enable docker

real    0m39.489s
user    0m0.531s
sys     0m0.718s
Loaded image: gcr.io/google_containers/kubedns-amd64:1.7
Loaded image: gcr.io/google_containers/etcd-amd64:2.2.5
Loaded image: gcr.io/google_containers/etcd-amd64:3.0.17
Loaded image: gcr.io/google_containers/exechealthz-amd64:1.1
Loaded image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
Loaded image: gcr.io/google_containers/pause-amd64:3.0
Loaded image: gcr.io/google_containers/etcd:2.2.1
Loaded image: mirantis/hypokube:base
Loaded image: gcr.io/google_containers/kube-discovery-amd64:1.0
Sending build context to Docker daemon 5.014 MB
real    0m38.203s
user    0m0.515s
sys     0m0.706s
Loaded image: gcr.io/google_containers/kubedns-amd64:1.7
Loaded image: gcr.io/google_containers/etcd-amd64:2.2.5
Loaded image: gcr.io/google_containers/etcd-amd64:3.0.17
Loaded image: gcr.io/google_containers/exechealthz-amd64:1.1
Loaded image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
Loaded image: gcr.io/google_containers/pause-amd64:3.0
Loaded image: gcr.io/google_containers/etcd:2.2.1
Loaded image: mirantis/hypokube:base
Loaded image: gcr.io/google_containers/kube-discovery-amd64:1.0
Sending build context to Docker daemon   237 MB
Step 1 : FROM mirantis/hypokube:base
 ---> 13bf30297b02
Step 2 : COPY hyperkube /hyperkube
Sending build context to Docker daemon   237 MB
Step 1 : FROM mirantis/hypokube:base
 ---> 13bf30297b02
Step 2 : COPY hyperkube /hyperkube
 ---> 1a5eeb592f2b
Removing intermediate container 326789440fe0
Successfully built 1a5eeb592f2b
 ---> a3e5aec19f29
Removing intermediate container aca5ff768d1f
Successfully built a3e5aec19f29
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /lib/systemd/system/kubelet.service.
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"
[bootstrap] Detected server version: v1.8.10
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[discovery] Trying to connect to API Server "10.192.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.192.0.2:6443"
[discovery] Requesting info from "https://10.192.0.2:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.192.0.2:6443"
[discovery] Successfully established connection with API Server "10.192.0.2:6443"
[bootstrap] Detected server version: v1.8.10
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

real    0m1.665s
user    0m0.244s
sys     0m0.058s

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

real    0m2.037s
user    0m0.251s
sys     0m0.081s
* Node joined: 2
* Node joined: 1
* Deploying k8s dashboard
The connection to the server rawgit.com was refused - did you specify the right host or port?
The connection to the server rawgit.com was refused - did you specify the right host or port?
Unable to connect to the server: dial tcp 31.13.72.17:443: i/o timeout
Unable to connect to the server: dial tcp 31.13.72.17:443: i/o timeout
Unable to connect to the server: dial tcp 75.126.215.88:443: i/o timeout
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
clusterrolebinding "add-on-cluster-admin" created
* Patching kube-dns deployment to make it start faster
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment "kube-dns" configured
* Taking snapshot of the cluster
deployment "kube-dns" scaled
deployment "kubernetes-dashboard" scaled
pod "kube-proxy-7nwc2" deleted
pod "kube-proxy-fmkbw" deleted
pod "kube-proxy-zrkpf" deleted
NAME                         READY     STATUS    RESTARTS   AGE
etcd-kube-master             1/1       Running   0          7m
kube-apiserver-kube-master   1/1       Running   0          8m
kube-scheduler-kube-master   1/1       Running   0          6m
* Waiting for kube-proxy and the nodes
...............................................................................................................
........................................................................................Error waiting for kube-proxy and the nodes

And that's it. Super convenient.

curl 127.0.0.1:8080

# docker ps
CONTAINER ID        IMAGE                                COMMAND                  CREATED             STATUS              PORTS                      NAMES
7c9b24e2ef16        mirantis/kubeadm-dind-cluster:v1.8   "/sbin/dind_init s..."   51 minutes ago      Up 51 minutes       8080/tcp                   kube-node-2
8ae0a6562dfc        mirantis/kubeadm-dind-cluster:v1.8   "/sbin/dind_init s..."   51 minutes ago      Up 51 minutes       8080/tcp                   kube-node-1
143786df1397        mirantis/kubeadm-dind-cluster:v1.8   "/sbin/dind_init s..."   52 minutes ago      Up 52 minutes       127.0.0.1:8080->8080/tcp   kube-master

You can see one master container and two node containers running on the local Docker.
With the kubectl client installed, the cluster is ready to use.
What you are actually talking to is the Kubernetes cluster inside the containers; no Kubernetes services are installed on the VM itself.
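The "Setting cluster config" step in the log writes a `dind` cluster and context into the local kubeconfig, which is how kubectl reaches the containerized cluster. The relevant part of `~/.kube/config` looks roughly like this (a sketch, not a verbatim dump; the server address follows from the `127.0.0.1:8080->8080/tcp` mapping in the `docker ps` output above):

```yaml
# ~/.kube/config (excerpt, sketch)
clusters:
- cluster:
    server: http://127.0.0.1:8080
  name: dind
contexts:
- context:
    cluster: dind
  name: dind
current-context: dind
```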

# kubectl get nodes
NAME          STATUS     AGE
kube-master   NotReady   17m
kube-node-1   NotReady   16m
kube-node-2   NotReady   16m
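All three nodes report NotReady here, which normally clears once kube-proxy and the cluster network settle. To keep an eye on it, the STATUS column can be counted with a one-liner; here it is fed the captured listing above, while on the live cluster you would pipe `kubectl get nodes` into the same awk filter:

```shell
# Count nodes whose STATUS column (field 2) is NotReady; prints 3 for the
# captured listing above.
awk 'NR > 1 && $2 == "NotReady" { n++ } END { print n+0 }' <<'EOF'
NAME          STATUS     AGE
kube-master   NotReady   17m
kube-node-1   NotReady   16m
kube-node-2   NotReady   16m
EOF
```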

# kubectl get pods -n kube-system
NAME                                  READY     STATUS    RESTARTS   AGE
etcd-kube-master                      1/1       Running   1          16m
kube-apiserver-kube-master            1/1       Running   1          17m
kube-controller-manager-kube-master   1/1       Running   1          7m
kube-proxy-5g27w                      1/1       Running   0          8m
kube-proxy-ftxqs                      1/1       Running   0          8m
kube-proxy-w5zmh                      1/1       Running   0          8m
kube-scheduler-kube-master            1/1       Running   1          16m

# kubectl get service -n kube-system
NAME                   CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kube-dns               10.96.0.10       <none>        53/UDP,53/TCP   18m
kubernetes-dashboard   10.102.194.129   <nodes>       80:30325/TCP    10m

Log into node-1 and take a look:

# docker exec -it 8ae0a6562dfc bash

# docker images
REPOSITORY                                      TAG                 IMAGE ID            CREATED             SIZE
mirantis/hypokube                               final               a3e5aec19f29        27 minutes ago      373.2 MB
mirantis/hypokube                               base                13bf30297b02        3 hours ago         136.3 MB
gcr.io/google_containers/etcd-amd64             3.0.17              243830dae7dd        13 months ago       168.9 MB
gcr.io/google_containers/kube-discovery-amd64   1.0                 c5e0c9a457fc        19 months ago       134.2 MB
gcr.io/google_containers/kubedns-amd64          1.7                 bec33bc01f03        20 months ago       55.06 MB
gcr.io/google_containers/exechealthz-amd64      1.1                 c3a89c92ef5b        20 months ago       8.332 MB
gcr.io/google_containers/kube-dnsmasq-amd64     1.3                 9a15e39d0db8        22 months ago       5.13 MB
gcr.io/google_containers/pause-amd64            3.0                 99e59f495ffa        23 months ago       746.9 kB
gcr.io/google_containers/etcd-amd64             2.2.5               72bd8a257d7a        2 years ago         30.45 MB
gcr.io/google_containers/etcd                   2.2.1               ef5842ca5c42        2 years ago         28.19 MB

# docker ps
CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS               NAMES
f8b513f7874f        a3e5aec19f29                               "/usr/local/bin/kube-"   19 minutes ago      Up 19 minutes                           k8s_kube-proxy_kube-proxy-w5zmh_kube-system_f0cab1d8-4304-11e8-a132-02429e281b47_0
ecb34739739d        gcr.io/google_containers/pause-amd64:3.0   "/pause"                 19 minutes ago      Up 19 minutes                           k8s_POD_kube-proxy-w5zmh_kube-system_f0cab1d8-4304-11e8-a132-02429e281b47_0

As you can see, the image ships with all the images the cluster needs, which saves hunting them down over the network.

4, The first helloworld


The environment is up, but the dashboard has not started. So close.
Now for a helloworld:

helloworld.yaml :

apiVersion: v1
kind: ReplicationController
metadata:
  name: go-admin
  labels:
    name: go-admin
spec:
  replicas: 1
  selector:
    name: go-admin
  template:
    metadata:
      labels:
        name: go-admin
    spec:
      containers:
      - name: master
        image: docker.io/golangpkg/go-admin:latest
        ports:
        - containerPort: 8080

kubectl create -f helloworld.yaml
replicationcontroller "go-admin" created

# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
go-admin-587569789c-rxl2d   0/1       Pending   0          1m

However, it did not start successfully; it stayed Pending the whole time.

Check the details:

kubectl describe pod go-admin
which reports:
No nodes are available that match all of the predicates: NodeNotReady (3)

Then I re-downloaded the script, recreated everything, and it worked:

# kubectl get pods -o wide
NAME             READY     STATUS    RESTARTS   AGE       IP           NODE
go-admin-bjjz6   1/1       Running   0          13m       10.244.3.4   kube-node-2

# docker ps
CONTAINER ID        IMAGE                                COMMAND                  CREATED             STATUS              PORTS                      NAMES
f1338e439870        mirantis/kubeadm-dind-cluster:v1.8   "/sbin/dind_init s..."   21 minutes ago      Up 21 minutes       8080/tcp                   kube-node-2
84d86dd3e84e        mirantis/kubeadm-dind-cluster:v1.8   "/sbin/dind_init s..."   21 minutes ago      Up 21 minutes       8080/tcp                   kube-node-1
f237ab502791        mirantis/kubeadm-dind-cluster:v1.8   "/sbin/dind_init s..."   22 minutes ago      Up 22 minutes       127.0.0.1:8080->8080/tcp   kube-master
# docker exec -it kube-node-2 bash
# curl 10.244.3.4:8080

The wide output shows that go-admin was scheduled onto node-2.
Logging into node-2 and running curl shows the page is served successfully.

5, Summary


kubeadm-dind-cluster is a nice solution for cases like mine, where all I need is a development environment for trying out Kubernetes.
It comes up quickly, and when something goes wrong you can simply delete the containers with docker and recreate them; everything needed is packaged in the image, so there is no virtual machine to install.
One problem I did run into: after the image was updated and Kubernetes upgraded, the nodes seemed broken.
Re-downloading dind-cluster-v1.8.sh and starting again fixed it.

# kubectl get pods -n kube-system -o wide
NAME                                    READY     STATUS             RESTARTS   AGE       IP           NODE
etcd-kube-master                        1/1       Running            1          26m       10.192.0.2   kube-master
kube-apiserver-kube-master              1/1       Running            1          27m       10.192.0.2   kube-master
kube-controller-manager-kube-master     1/1       Running            1          24m       10.192.0.2   kube-master
kube-dns-855bdc94cb-2rq5t               0/3       OutOfcpu           0          26m       <none>       kube-master
kube-dns-855bdc94cb-5268c               0/3       OutOfcpu           0          26m       <none>       kube-master
kube-dns-855bdc94cb-gg2tz               0/3       OutOfcpu           0          26m       <none>       kube-master
kube-dns-855bdc94cb-gkrcm               0/3       OutOfcpu           0          26m       <none>       kube-master
kube-dns-855bdc94cb-krb8h               0/3       OutOfcpu           0          26m       <none>       kube-master
kube-dns-855bdc94cb-lpmkc               0/3       OutOfcpu           0          26m       <none>       kube-master
kube-dns-855bdc94cb-pjzcg               0/3       OutOfcpu           0          26m       <none>       kube-master
kube-dns-855bdc94cb-t6x4h               0/3       ImagePullBackOff   0          25m       10.244.3.3   kube-node-2
kube-dns-855bdc94cb-vp4sl               0/3       OutOfcpu           0          26m       <none>       kube-master
kube-proxy-4knln                        1/1       Running            0          25m       10.192.0.3   kube-node-1
kube-proxy-hmh88                        1/1       Running            0          25m       10.192.0.4   kube-node-2
kube-proxy-r9rrr                        1/1       Running            0          25m       10.192.0.2   kube-master
kube-scheduler-kube-master              1/1       Running            1          26m       10.192.0.2   kube-master
kubernetes-dashboard-6b5bdcfbc6-dnnhh   0/1       ImagePullBackOff   0          25m       10.244.2.6   kube-node-1

The dns and dashboard pods are still not up, and the kube-dns networking is not working either.
Still investigating.
