Using kubeadm to build a Kubernetes 1.10.2 cluster (in a mainland-China network environment)

Contents

  1. Goals
  2. Preparation
  3. Steps
  4. Uninstall the cluster

Goals

  • Set up a secure Kubernetes cluster on your machines.
  • Install a network plugin in the cluster so that applications can communicate with each other.
  • Run a simple microservice on the cluster.

Preparation

Hosts

  • One or more hosts running Ubuntu 16.04+.
  • It is best to choose a dual-core host with at least 2 GB of RAM.
  • Full network connectivity between all machines in the cluster (a public or private network is fine).

Software

Install Docker

sudo apt-get update
sudo apt-get install -y docker.io

Kubernetes recommends the older docker.io package. If you need the latest docker-ce instead, you can refer to the earlier blog post: Docker first experience.
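
After installation you can verify Docker and enable its service on boot (the kubeadm pre-flight checks later in this article warn when the service is not enabled):

# Check the installed version and make sure the Docker service starts on boot
docker --version
sudo systemctl enable docker.service
sudo systemctl start docker.service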

Disable the swap file

Next, the swap file must be disabled; this is a mandatory step for Kubernetes. Doing so is as simple as editing /etc/fstab, commenting out the line that references swap, saving, rebooting, and then typing sudo swapoff -a.

If you are puzzled about why swap must be disabled, the reasoning is discussed in this issue on GitHub: Kubelet/Kubernetes should work with Swap Enabled.
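
A minimal sketch of those two steps (the sed pattern assumes your /etc/fstab marks the swap entry with the word swap; check the file before editing it blindly):

sudo swapoff -a                                # turn swap off immediately
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab     # comment out the swap line so it stays disabled after reboot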

Steps

(1/4) Install kubeadm, kubelet and kubectl

  • kubeadm: A command tool to bootstrap a k8s cluster.
  • kubelet: A component that runs on all machines in a cluster and is used to perform operations such as launching pods and containers.
  • kubectl: A command-line tool for manipulating a running cluster.

sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s http://packages.faasx.com/google/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb http://mirrors.ustc.edu.cn/kubernetes/apt/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl

The apt-key download address uses a domestic mirror; the official address is https://packages.cloud.google.com/apt/doc/apt-key.gpg.
The apt package repository uses the mirror of the University of Science and Technology of China; the official address is http://apt.kubernetes.io/.
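
To confirm the tools were installed, and optionally to keep apt from upgrading them unexpectedly later, something like the following can be used (the apt-mark hold step is optional):

kubeadm version
kubectl version --client
kubelet --version

# Optional: prevent routine upgrades from changing the versions
sudo apt-mark hold kubelet kubeadm kubectl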

(2/4) Initialize the master node

Because k8s.gcr.io cannot be reached reliably from within China, we need to pull the images required for k8s initialization in advance and add the corresponding k8s.gcr.io tags:

## Pull the images
docker pull reg.qiniu.com/k8s/kube-apiserver-amd64:v1.10.2
docker pull reg.qiniu.com/k8s/kube-controller-manager-amd64:v1.10.2
docker pull reg.qiniu.com/k8s/kube-scheduler-amd64:v1.10.2
docker pull reg.qiniu.com/k8s/kube-proxy-amd64:v1.10.2
docker pull reg.qiniu.com/k8s/etcd-amd64:3.1.12
docker pull reg.qiniu.com/k8s/pause-amd64:3.1

## Add the k8s.gcr.io tags
docker tag reg.qiniu.com/k8s/kube-apiserver-amd64:v1.10.2 k8s.gcr.io/kube-apiserver-amd64:v1.10.2
docker tag reg.qiniu.com/k8s/kube-scheduler-amd64:v1.10.2 k8s.gcr.io/kube-scheduler-amd64:v1.10.2
docker tag reg.qiniu.com/k8s/kube-controller-manager-amd64:v1.10.2 k8s.gcr.io/kube-controller-manager-amd64:v1.10.2
docker tag reg.qiniu.com/k8s/kube-proxy-amd64:v1.10.2 k8s.gcr.io/kube-proxy-amd64:v1.10.2
docker tag reg.qiniu.com/k8s/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
docker tag reg.qiniu.com/k8s/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1

## Kubernetes 1.10 added CoreDNS. If you use CoreDNS (disabled by default), the following three images are not needed.
docker pull reg.qiniu.com/k8s/k8s-dns-sidecar-amd64:1.14.10
docker pull reg.qiniu.com/k8s/k8s-dns-kube-dns-amd64:1.14.10
docker pull reg.qiniu.com/k8s/k8s-dns-dnsmasq-nanny-amd64:1.14.10

docker tag reg.qiniu.com/k8s/k8s-dns-sidecar-amd64:1.14.10 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.10
docker tag reg.qiniu.com/k8s/k8s-dns-kube-dns-amd64:1.14.10 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.10
docker tag reg.qiniu.com/k8s/k8s-dns-dnsmasq-nanny-amd64:1.14.10 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.10
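
Since the pull-and-retag pattern is identical for every image, a small shell loop can replace the repetition above (a sketch; the image list simply mirrors the commands shown):

# Pull each image from the domestic mirror and retag it as k8s.gcr.io/...
images="kube-apiserver-amd64:v1.10.2 kube-controller-manager-amd64:v1.10.2 kube-scheduler-amd64:v1.10.2 kube-proxy-amd64:v1.10.2 etcd-amd64:3.1.12 pause-amd64:3.1"
for img in $images; do
    docker pull "reg.qiniu.com/k8s/${img}"
    docker tag  "reg.qiniu.com/k8s/${img}" "k8s.gcr.io/${img}"
done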

kubeadm is said to support specifying a custom image registry, but I was not able to get it to work.

The master node is the machine that runs the control-plane components, including etcd (the cluster database) and the API server (which the kubectl CLI talks to).
To initialize the master node, just run the following command on any machine with kubeadm installed:

sudo kubeadm init --kubernetes-version=v1.10.2 --feature-gates=CoreDNS=true --pod-network-cidr=192.168.0.0/16

Commonly used kubeadm init parameters:

  • --kubernetes-version: Specify the Kubernetes version. If this parameter is omitted, the latest version information will be downloaded from Google's website.

  • --pod-network-cidr: Specify the IP address range of the pod network. Its value depends on which network plugin you choose in the next step. This article uses the Calico network, which requires 192.168.0.0/16.

  • --apiserver-advertise-address: Specify the IP address the API server advertises. If not specified, the default network interface is detected automatically, usually yielding the intranet IP.

  • --feature-gates=CoreDNS: Whether to use CoreDNS (true/false). The CoreDNS plugin was promoted to Beta in 1.10 and will eventually become the default DNS option for Kubernetes.

For a more detailed introduction to kubeadm, please refer to the official kubeadm documentation.

The final output is as follows:

raining@raining-ubuntu:~$ sudo kubeadm init --kubernetes-version=v1.10.2 --feature-gates=CoreDNS=true --pod-network-cidr=192.168.0.0/16
[sudo] password for raining: 
[init] Using Kubernetes version: v1.10.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [raining-ubuntu kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.8]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [raining-ubuntu] and IPs [192.168.0.8]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 39.501722 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node raining-ubuntu as master by adding a label and a taint
[markmaster] Master raining-ubuntu tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: vtyk9m.g4afak37myq3rsdi
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.0.8:6443 --token vtyk9m.g4afak37myq3rsdi --discovery-token-ca-cert-hash sha256:19246ce11ba3fc633fe0b21f2f8aaaebd7df9103ae47138dc0dd615f61a32d99

If you want to use kubectl as a non-root user, run the following commands (also part of the kubeadm init output):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

The token output by kubeadm init is used for mutual authentication between the master and joining nodes. The token is confidential and must be kept secure, because anyone who has it can add nodes to the cluster. You can also list, create and delete tokens with the kubeadm token command; see the official kubeadm reference documentation for details.
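
For example (standard kubeadm token subcommands; the output will differ on your cluster):

kubeadm token list              # list existing bootstrap tokens
kubeadm token create            # generate a new token
kubeadm token delete <token>    # revoke a token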

Enter https://<master-ip>:6443 in a browser to verify that the deployment succeeded; the response looks like this:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {
    
  },
  "code": 403
}

(3/4) Install network plugin

Installing a network plugin is a must because your pods need to communicate with each other.

The network must be deployed before any application, because kube-dns (CoreDNS in this article) will not start until the network is up. kubeadm only supports Container Network Interface (CNI) based networks (kubenet is not supported).

The more common network add-ons are Calico, Canal, Flannel, Kube-router, Romana, Weave Net, etc. For a detailed list, please refer to the plugin page.

Use the following command to install the network plugin:

kubectl apply -f <add-on.yaml>

In this article, I am using the Calico network, installed as follows:

# Use a domestic (China-hosted) mirror
kubectl apply -f http://mirror.faasx.com/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml

For Calico to work properly, --pod-network-cidr=192.168.0.0/16 must be passed when executing kubeadm init.

For more details, check out Calico's official documentation: kubeadm quickstart.

After the network plugin is installed, you can check the status of the coredns pods to confirm that the plugin is working:

kubectl get pods --all-namespaces

# 输出
NAMESPACE     NAME                                      READY     STATUS    RESTARTS   AGE
kube-system   calico-etcd-zxmvh                         1/1       Running   0          4m
kube-system   calico-kube-controllers-f9d6c4cb6-42w9j   1/1       Running   0          4m
kube-system   calico-node-jq5qb                         2/2       Running   0          4m
kube-system   coredns-7997f8864c-kfswc                  1/1       Running   0          1h
kube-system   coredns-7997f8864c-ttvj2                  1/1       Running   0          1h
kube-system   etcd-raining-ubuntu                       1/1       Running   0          1h
kube-system   kube-apiserver-raining-ubuntu             1/1       Running   0          1h
kube-system   kube-controller-manager-raining-ubuntu    1/1       Running   0          1h
kube-system   kube-proxy-vrjlq                          1/1       Running   0          1h
kube-system   kube-scheduler-raining-ubuntu             1/1       Running   0          1h

Once the coredns pods reach the Running state, you can continue adding worker nodes.
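
If you prefer to watch for the state change instead of re-running the command, kubectl can stream updates (optional; press Ctrl+C to stop):

kubectl get pods -n kube-system -w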

Isolate the master node

By default, for security reasons, pods are not scheduled on the master node. If you want to run pods on the master node, for example when running a single-machine Kubernetes cluster, run the following command:

kubectl taint nodes --all node-role.kubernetes.io/master-

The output looks like this:

node "test-01" untainted
taint key="dedicated" and effect="" not found.
taint key="dedicated" and effect="" not found.

This removes the node-role.kubernetes.io/master taint from every node that has it, including the master node, so the scheduler can then schedule pods onto any node.
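
If you later want to restore the default behaviour, the taint can be re-added (the node name below is this article's master, raining-ubuntu; substitute your own from kubectl get nodes):

kubectl taint nodes raining-ubuntu node-role.kubernetes.io/master=:NoSchedule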

(4/4) Join other nodes

Nodes are where your workloads (containers, pods, etc.) run. To add nodes to a cluster, just perform the following steps on each machine:

  • SSH login to the machine
  • switch to root (like sudo su -)
  • Execute the command output by kubeadm init: kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash> (if you have lost this command, see the sketch below)
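
If you no longer have the original join command, the token and CA certificate hash can be regenerated on the master (a sketch using standard kubeadm and openssl commands):

kubeadm token create      # issue a fresh bootstrap token
# Recompute the --discovery-token-ca-cert-hash value
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'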

The output after execution looks like this:

raining@ubuntu1:~$ sudo kubeadm join 192.168.0.8:6443 --token vtyk9m.g4afak37myq3rsdi --discovery-token-ca-cert-hash sha256:19246ce11ba3fc633fe0b21f2f8aaaebd7df9103ae47138dc0dd615f61a32d99
[preflight] Running pre-flight checks.
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.0.8:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.8:6443"
[discovery] Requesting info from "https://192.168.0.8:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.8:6443"
[discovery] Successfully established connection with API Server "192.168.0.8:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

After a few seconds, you can see the newly added machine by running kubectl get nodes on the master node:

NAME             STATUS    ROLES     AGE       VERSION
raining-ubuntu   Ready     master    1h        v1.10.2
ubuntu1          Ready     <none>    2m        v1.10.2

(Optional) Manage the cluster on a non-master node

To be able to use kubectl on other computers to manage your cluster, copy the administrator's kubeconfig file from the master node to your computer:

scp root@<master ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
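
Instead of passing --kubeconfig on every call, the same file can be exported through the KUBECONFIG environment variable (an equivalent, purely optional alternative):

export KUBECONFIG=$PWD/admin.conf
kubectl get nodes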

(Optional) Map API service to local

If you want to connect to the API service from outside the cluster, you can use the tool kubectl proxy:

scp root@<master ip>:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf proxy

The API server can then be accessed locally at http://localhost:8001/api/v1.
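
For example, once the proxy is running on your local machine, the API can be queried with plain HTTP:

curl http://localhost:8001/api/v1/namespaces/kube-system/pods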

(Optional) Deploy a microservice

Now you can test your new cluster. Sock Shop is a sample microservices application that shows how to run and connect a set of services in Kubernetes. To learn more about microservices, check out the GitHub README.

kubectl create namespace sock-shop
kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true"

You can run the following command to check whether the front-end service has an open corresponding port:

kubectl -n sock-shop get svc front-end

The output is similar to:

NAME        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
front-end   NodePort   10.107.207.35   <none>        80:30001/TCP   31s

It may take a few minutes to download and start all the containers; run kubectl get pods -n sock-shop to check the status of the services.

The output is as follows:

raining@raining-ubuntu:~$ kubectl get pods -n sock-shop
NAME                            READY     STATUS    RESTARTS   AGE
carts-6cd457d86c-wdbsg          1/1       Running   0          1m
carts-db-784446fdd6-9gsrs       1/1       Running   0          1m
catalogue-779cd58f9b-nf6n4      1/1       Running   0          1m
catalogue-db-6794f65f5d-kwc2x   1/1       Running   0          1m
front-end-679d7bcb77-4hbjq      1/1       Running   0          1m
orders-755bd9f786-gbspz         1/1       Running   0          1m
orders-db-84bb8f48d6-98wsm      1/1       Running   0          1m
payment-674658f686-xc7gk        1/1       Running   0          1m
queue-master-5f98bbd67-xgqr6    1/1       Running   0          1m
rabbitmq-86d44dd846-nf2g6       1/1       Running   0          1m
shipping-79786fb956-bs7jn       1/1       Running   0          1m
user-6995984547-nvqw4           1/1       Running   0          1m
user-db-fc7b47fb9-zcf5r         1/1       Running   0          1m

Then visit the node's IP and the corresponding NodePort in your browser, e.g. http://<node-ip>:<node-port>. In this example the port is 30001, but yours may differ. If a firewall is in place, make sure the corresponding port is open before you access it.

(Screenshot: the Sock Shop home page)

Note that if multiple nodes are deployed, you should use a worker node's IP for access, not the master server's IP.
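
Before opening a browser you can quickly confirm that the front end is reachable from the command line (the node IP and port are placeholders for your own values):

curl -I http://<node-ip>:30001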

Finally, to uninstall Sock Shop, just run this on the master node:

kubectl delete namespace sock-shop

Uninstall the cluster

To undo what kubeadm did, you should first drain the node and make sure the node is empty before shutting it down.

On the master node run:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

Then on the node that needs to be removed, reset the installation state of kubeadm:

kubeadm reset
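
Using the worker node from this article (ubuntu1) as a concrete example of the sequence above:

# On the master node
kubectl drain ubuntu1 --delete-local-data --force --ignore-daemonsets
kubectl delete node ubuntu1

# Then on ubuntu1 itself
sudo kubeadm reset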

If you want to reconfigure the cluster, just run kubeadm init or kubeadm join again with the appropriate parameters.
