"K8s Learning Diary" Kubeadm deploys kubernetes cluster

Deploying a Kubernetes cluster on Vultr

Original post: https://elfgzp.cn/2020/04/11/k8s-%E5%AD%A6%E4%B9%A0%E6%97%A5%E8%AE%B0-kubeadm-%E9%83%A8%E7%BD%B2-kubernetes-%E9%9B%86%E7%BE%A4.html

I have been learning Kubernetes recently, and there are plenty of tutorials on Google about how to deploy a cluster.

I originally wanted to deploy on my own JD and Huawei ECS instances, but after fiddling for a long time I got nowhere. In the end, picking ECS instances from a single cloud provider is less frustrating, and deploying inside one VPC is much more convenient.

ECS configuration selection

Since this is just for learning, I will not deploy a highly available k8s cluster; one Master node and one Node (worker) node are enough.

Since the Master requires at least two CPU cores, I chose Vultr's 2-core / 4 GB RAM ECS configuration.

(Screenshot: the 2-core / 4 GB instance configuration)

For the Node, a configuration with more memory would of course be better, but since this is purely for learning I chose the same configuration as the Master.

Overseas cloud providers generally don't cap bandwidth and instead bill by traffic; the 3 TB of traffic included with this configuration is certainly enough.

This configuration is billed hourly at $0.03/h, roughly ¥0.21/h, which is about two (Chinese) cents per hour. Even a full day of use costs only a few yuan.

My plan is to spin up the two instances whenever I am studying k8s and destroy them as soon as I am done; it costs next to nothing.

If you are a new user, you can also get $100 in free credit. Here is my referral link: Vultr Give $100. If you find the service good, give it a try; I genuinely think their service is decent, so consider this a small plug for them.

For both instances I selected the CentOS 7 Without SELinux image.

SELinux is Linux security software; to make learning and deployment easier we simply disable it, which is why I chose the Without SELinux image before starting the deployment.

Note that under Additional Features you should check Enable Private Networking, so that Vultr assigns a private network IP to your servers.

Set a different HostName for each of the two nodes to avoid node name conflicts.

(Screenshot: setting the hostname for each server)
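
If you prefer to set (or later fix) the hostnames from inside the instances rather than in the Vultr panel, a minimal sketch using the names that appear later in this article (master1 and node1) is:

# On the Master instance
hostnamectl set-hostname master1

# On the Node instance
hostnamectl set-hostname node1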

Before clicking Deploy Now, increase Servers Qty to 2 so that both instances are deployed at once and you don't have to open the deployment page twice.

Don't be scared off by the $20.00/mo figure; that is the monthly price. We simply destroy the instances when we're done, and the $100 new-user credit will last a long time.

ECS environment configuration

After the two instances are deployed, you can find them in the Instances list. (Considering readers who may never have used cloud services, I will go into a bit more detail here.)

(Screenshot: the two instances in the Instances list)

You can click into an instance's Overview page to find its login password; the default user is root.

Then under Settings you can see the private network IPs of the two instances.

My two instances' private network details are as follows:

Instance   Cores   RAM   Private IP
Master     2       4G    10.24.96.3
Node       2       4G    10.24.96.4

Then ssh into the instances; once in the system, there is some preparation work to do.

K8s deployment preparations

First, to avoid unnecessary trouble, turn off the CentOS 7 firewall. Cloud providers have their own security groups, so network security can still be handled by configuring a security group.

systemctl disable firewalld && systemctl stop firewalld

If you did not select Without SELinux when deploying the instances, you need to allow containers to access the host filesystem; run the following commands.

# Set SELinux to permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

We also need to disable swap; the kubelet refuses to run with swap enabled by default, and you can search for the detailed reasons if you are curious.

swapoff -a
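
Note that swapoff -a only disables swap until the next reboot. If you plan to keep the instances around, a small extra step (not part of the original script, and assuming the default /etc/fstab layout) is to comment out the swap entry so swap stays off:

# Comment out the swap line in /etc/fstab so swap stays disabled after reboot
sed -i '/ swap / s/^/#/' /etc/fstab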

Make sure the sysctl setting net.bridge.bridge-nf-call-iptables is set to 1.

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

Make sure the br_netfilter module is loaded. You can check with lsmod | grep br_netfilter and load it explicitly with modprobe br_netfilter.

modprobe br_netfilter
lsmod | grep br_netfilter

Install docker:

yum install -y docker
systemctl enable docker && systemctl start docker
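
As a quick sanity check (optional, not part of the original steps), confirm Docker is running and note which cgroup driver it reports, since the kubelet expects its cgroup driver to match Docker's:

# Confirm Docker is up and check its cgroup driver
systemctl status docker --no-pager
docker info | grep -i "cgroup driver"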

I have put the above steps into a script, which you can view at https://gist.github.com/elfgzp/02485648297823060a7d8ddbafebf140#file-vultr_k8s_prepare-sh .
To move on quickly, you can run the following command to perform all of the preparation in one go.

curl https://gist.githubusercontent.com/elfgzp/02485648297823060a7d8ddbafebf140/raw/781c2cd7e6dba8f099e2b6b1aba9bb91d9f60fe2/vultr_k8s_prepare.sh | sh

Install Kubeadm

The next steps follow the official kubeadm installation documentation.

# Configure the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Install kubelet, kubeadm and kubectl
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

# Enable and start kubelet
systemctl enable --now kubelet
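
Optionally, you can verify what was installed. Note that at this stage the kubelet will keep restarting in a crash loop until kubeadm init (or join) has run; that is expected.

# Check the installed versions; kubelet restarts in a loop until init/join runs
kubeadm version -o short
kubectl version --client --short
systemctl status kubelet --no-pager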

Because Vultr hosts are located overseas, we don't have to worry about access to Google. If your host is in mainland China, however, you need to change the yum repo to the following configuration.

cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repository
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

The script for the above steps is at https://gist.github.com/elfgzp/02485648297823060a7d8ddbafebf140#file-vultr_k8s_install_kubeadm-sh .

curl https://gist.githubusercontent.com/elfgzp/02485648297823060a7d8ddbafebf140/raw/781c2cd7e6dba8f099e2b6b1aba9bb91d9f60fe2/vultr_k8s_install_kubeadm.sh | sh

Create a k8s cluster using Kubeadm

Create the k8s Master node

We first run kubeadm on the Master instance, but let's start with kubeadm config print init-defaults to take a look at the default initialization configuration.

kubeadm config print init-defaults

You can also generate a configuration file and initialize with that file:

kubeadm config print init-defaults > kubeadm.yaml
# edit kubeadm.yaml as needed
kubeadm init --config kubeadm.yaml

If the initialization fails, you can run the following commands to reset and start over:

kubeadm reset
rm -rf $HOME/.kube/config
rm -rf /var/lib/cni/
rm -rf /etc/kubernetes/
rm -rf /etc/cni/
ifconfig cni0 down
ip link delete cni0

Next, run kubeadm init to initialize the cluster. Hosts in mainland China may need to set imageRepository in the configuration so that the k8s images are pulled from a domestic mirror.

cat <<EOF > kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
    extraArgs:
        runtime-config: "api/all=true"
kubernetesVersion: "v1.18.1"
imageRepository: registry.aliyuncs.com/google_containers
EOF
kubeadm init --config kubeadm.yaml
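
Optionally (not in the original steps), you can pre-pull the control-plane images before running the init command above, so that any registry or network problems show up early:

# Pre-pull the images referenced by kubeadm.yaml
kubeadm config images pull --config kubeadm.yaml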

After the execution is complete, we will get the following output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join {your IP}:6443 --token 3prn7r.iavgjxcmrlh3ust3 \
    --discovery-token-ca-cert-hash sha256:95283a2e81464ba5290bf4aeffc4376b6d708f506fcee278cd2a647f704ed55d

Following the instructions in that output, copy the kubectl configuration to $HOME/.kube/config. Note that the file changes every time kubeadm init completes, so it has to be copied again after each init. kubeadm also prints the join command that Node instances will use to join the cluster.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
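
A quick way to confirm that kubectl can now reach the API server (optional):

# Verify connectivity to the control plane
kubectl cluster-info
kubectl get namespaces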

If you are the root user, you can simply point to the configuration file with an environment variable:

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc
. ~/.bashrc

Then use kubectl get nodes to check the node status:

NAME      STATUS   ROLES    AGE     VERSION
master1   NotReady    master   6m52s   v1.18.1

At this point the status is NotReady, which is expected, because we have not installed a network plugin yet. Next, install one; here I use the Weave network plugin:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

There are other network plugins as well; refer to the official documentation, Installing a Pod network add-on.

You can check the Pods' status to see whether the installation succeeded:

kubectl get pods -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-br94l   1/1     Running   0          14m
kube-system   coredns-66bff467f8-pvsfn   1/1     Running   0          14m
kube-system   kube-proxy-b2phr           1/1     Running   0          14m
kube-system   weave-net-8wv4k            2/2     Running   0          2m2s

If a Pod's STATUS is not Running, use the kubectl logs and kubectl describe commands to view detailed error information.

kubectl logs weave-net-8wv4k -n kube-system weave
kubectl logs weave-net-8wv4k -n kube-system weave-npc
kubectl describe pods weave-net-8wv4k -n kube-system 

At this point the Master node's status becomes Ready.

NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   6m52s   v1.18.1

Deploy the Node

Deploying the Node requires the same "preparation" work; I won't repeat it here, just run the scripts directly:

curl https://gist.githubusercontent.com/elfgzp/02485648297823060a7d8ddbafebf140/raw/781c2cd7e6dba8f099e2b6b1aba9bb91d9f60fe2/vultr_k8s_prepare.sh | sh
curl https://gist.githubusercontent.com/elfgzp/02485648297823060a7d8ddbafebf140/raw/781c2cd7e6dba8f099e2b6b1aba9bb91d9f60fe2/vultr_k8s_install_kubeadm.sh | sh

On the Node, we need to run the join command that kubeadm printed when the Master was initialized. If you no longer have it, you can regenerate it by running the following on the Master:

kubeadm token create --print-join-command
kubeadm join {your IP}:6443 --token m239ha.ot52q6goyq0pcadx     --discovery-token-ca-cert-hash sha256:95283a2e81464ba5290bf4aeffc4376b6d708f506fcee278cd2a647f704ed55d

If you run into problems while joining, you can also use kubeadm reset to reset the node.

kubeadm reset

The join command can also take a configuration file; on the Node, run the following to generate a default one.

kubeadm config print join-defaults > kubeadm-join.yaml
kubeadm join --config kubeadm-join.yaml
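
Before running the join, you need to edit a few fields in kubeadm-join.yaml so they point at your Master. The exact defaults vary by version, but the important part looks roughly like this, using the values from my cluster (the Master's private IP from the table above, plus the token and CA hash printed by kubeadm init or kubeadm token create):

apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: 10.24.96.3:6443
    token: 3prn7r.iavgjxcmrlh3ust3
    caCertHashes:
    - sha256:95283a2e81464ba5290bf4aeffc4376b6d708f506fcee278cd2a647f704ed55d
nodeRegistration:
  name: node1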

Then check the node status again with kubectl. If you want to run kubectl on the Node itself, you need to copy /etc/kubernetes/admin.conf from the Master to the Node, as sketched below.
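
A minimal sketch of that copy, assuming you run it on the Node and that 10.24.96.3 (the Master's private IP from the table above) is reachable over SSH:

# On the Node: fetch the admin kubeconfig from the Master so kubectl works here too
mkdir -p $HOME/.kube
scp root@10.24.96.3:/etc/kubernetes/admin.conf $HOME/.kube/config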

Next, verify that the Node's status is Ready, which means it joined successfully:

kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   6m52s   v1.18.1
node1     Ready    <none>   29s     v1.18.1

Summary

That is the whole process of deploying a k8s cluster on Vultr with kubeadm. Of course, I ran into plenty of pitfalls along the way, especially when trying to deploy across hosts from different cloud providers, which is why I ultimately chose ECS instances from a single provider.

Reference documents

Kubernetes: The Definitive Guide: From Docker to Kubernetes Practice (4th Edition)

In-depth analysis of Kubernetes
