Deploy a Kubernetes cluster on Vultr
I have been learning Kubernetes recently. There are plenty of tutorials on Google about how to deploy it, and I originally wanted to deploy on ECS instances from JD Cloud and HUAWEI Cloud, but after fiddling for a long time I got nowhere. If choosing among ECS offerings has frustrated you too, stick with a single cloud service provider: with all instances inside the same VPC, deployment becomes much more convenient.
ECS configuration selection
Because I am just learning, I will not deploy a highly available k8s cluster; one Master node and one Node are enough.
The Master needs at least two CPU cores, so I chose Vultr's 2-core / 4 GB RAM ECS configuration.
For the Node, more memory is of course better, but since this is purely for learning I chose the same configuration as the Master.
Overseas cloud vendors generally do not cap bandwidth; they bill by traffic instead, and the 3 TB included with this configuration is more than enough.
On the hourly billing model this configuration costs $0.03/h, roughly ¥0.21/h, i.e. about two mao per hour; even a full day of use costs only a few yuan. I plan to run two instances while learning k8s and destroy them as soon as I am done, so there is nothing to feel bad about.
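As a quick sanity check on the arithmetic (assuming the $0.03/h rate above and two instances):

```shell
# Rough daily cost for two instances at the hourly rate quoted above.
RATE=0.03       # USD per hour
HOURS=24
INSTANCES=2
awk -v r="$RATE" -v h="$HOURS" -v n="$INSTANCES" \
    'BEGIN { printf "two instances, one day: $%.2f\n", r * h * n }'
# prints: two instances, one day: $1.44
```

Pocket change next to the $100 of new-user credit.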
If you are a new user, you can also get $100 of free credit. Here is my referral link: Vultr Give $ 100. If you find the post useful, give it a try; I genuinely think their service is good, so consider this a small advertisement for them.
For both instances I selected CentOS 7 Without SELinux. SELinux is a Linux security module; to make learning and deployment easier, we disable it outright, which is why I chose the Without SELinux image. Now we are ready to begin the deployment.
Note: under Additional Features, check Enable Private Networking so that Vultr assigns private network IPs to your servers.
Set a different HostName for each of the two nodes to avoid node-name conflicts.
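A minimal sketch, assuming the hypothetical names master1 and node1 (any unique pair works); run it once per instance with the matching name:

```shell
# master1/node1 are hypothetical names; pick any unique name per instance.
NEW_HOSTNAME=master1   # use node1 on the second instance
# hostnamectl needs systemd and root, so it only works on the instance itself:
hostnamectl set-hostname "$NEW_HOSTNAME" 2>/dev/null \
  || echo "hostnamectl unavailable here; run this on the instance"
echo "chosen hostname: $NEW_HOSTNAME"
```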
Before clicking Deploy Now, increase Servers Qty to 2 so that both instances are deployed at once instead of going through the deployment page twice.
Do not be scared off by the $20.00 /mo label: that is the monthly price. We simply destroy the instances when finished, and the $100 new-user credit can last a long time.
ECS environment configuration
Once both instances are deployed, you can find them in the Instances list. (For readers who have never used a cloud service, I will go into a little more detail here.)
Click into an instance and open Overview to find its login credentials; the default user is root.
Under Settings you can see the private network IPs of both instances.
My two instances ended up with the following intranet addresses:
Instance | Cores | RAM | Intranet IP |
---|---|---|---|
Master | 2 | 4G | 10.24.96.3 |
Node | 2 | 4G | 10.24.96.4 |
Now ssh in; after logging into the system there is some preparation work to do.
K8s deployment preparations
First, to avoid unnecessary trouble, turn off the CentOS 7 firewall. The cloud vendor already provides security groups, so the same network restrictions can be enforced there instead.
systemctl disable firewalld && systemctl stop firewalld
If you did not select Without SELinux when deploying the instances, you need to let containers access the host filesystem; run the following commands.
# Set SELinux to permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
We also need to disable swap; by default the kubelet refuses to start while swap is enabled (its fail-swap-on check).
swapoff -a
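Note that swapoff -a only lasts until the next reboot. To keep swap off permanently, also comment out the swap entry in /etc/fstab; a sketch, demonstrated on a sample file first (apply the same sed to the real /etc/fstab as root):

```shell
# Build a sample fstab so the edit can be tried safely; on the node,
# run the same sed against /etc/fstab instead.
cat > /tmp/fstab.sample <<'EOF'
/dev/vda1  /     xfs   defaults  0 0
/dev/vda2  swap  swap  defaults  0 0
EOF
# Prefix any line containing a swap entry with '#':
sed -i '/\sswap\s/s/^#*/#/' /tmp/fstab.sample
grep swap /tmp/fstab.sample   # the swap line now starts with '#'
```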
Make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Make sure the br_netfilter module is loaded. You can check with lsmod | grep br_netfilter; to load it explicitly, run modprobe br_netfilter.
modprobe br_netfilter
lsmod | grep br_netfilter
Install docker:
yum install -y docker
systemctl enable docker && systemctl start docker
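One pitfall worth flagging: with this docker package, kubeadm's preflight checks typically warn that docker is using the cgroupfs cgroup driver and recommend systemd. A sketch of the usual fix, written to /tmp first so you can inspect it before installing it as /etc/docker/daemon.json (the install and restart steps are left as comments):

```shell
# Switch docker to the systemd cgroup driver, as kubeadm recommends.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# On the node, as root:
#   install -D -m 644 /tmp/daemon.json /etc/docker/daemon.json
#   systemctl restart docker
#   docker info | grep -i cgroup   # should report: Cgroup Driver: systemd
grep cgroupdriver /tmp/daemon.json
```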
I have wrapped the steps above into a script; see https://gist.github.com/elfgzp/02485648297823060a7d8ddbafebf140#file-vultr_k8s_prepare-sh . To move on quickly, you can run the following command to perform all the preparation in one go.
curl https://gist.githubusercontent.com/elfgzp/02485648297823060a7d8ddbafebf140/raw/781c2cd7e6dba8f099e2b6b1aba9bb91d9f60fe2/vultr_k8s_prepare.sh | sh
Install Kubeadm
The next steps follow the official documentation: official documentation link .
# Configure the yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# Install kubelet, kubeadm and kubectl
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Start kubelet
systemctl enable --now kubelet
Since Vultr hosts are overseas, we do not need to worry about reaching Google; if you are on a host in mainland China, however, change the yum repo to the following mirror configuration.
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repository
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
The script for the above steps is https://gist.github.com/elfgzp/02485648297823060a7d8ddbafebf140#file-vultr_k8s_install_kubeadm-sh .
curl https://gist.githubusercontent.com/elfgzp/02485648297823060a7d8ddbafebf140/raw/781c2cd7e6dba8f099e2b6b1aba9bb91d9f60fe2/vultr_k8s_install_kubeadm.sh | sh
Create a k8s cluster using Kubeadm
Create k8s Master node
We first run kubeadm on the Master instance, but let's start by looking at its default initialization configuration with kubeadm config print init-defaults:
kubeadm config print init-defaults
Of course, you can also generate a configuration file and specify the configuration file to initialize:
kubeadm config print init-defaults > kubeadm.yaml
# Edit kubeadm.yaml as needed
kubeadm init --config kubeadm.yaml
If the initialization fails, you can run the following commands to clean up and start over:
kubeadm reset
rm -rf $HOME/.kube/config
rm -rf /var/lib/cni/
rm -rf /etc/kubernetes/
rm -rf /etc/cni/
ifconfig cni0 down
ip link delete cni0
Next, run kubeadm init to initialize the cluster. On a host in mainland China you may need to set imageRepository to a domestic mirror of the k8s images.
cat <<EOF > kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    runtime-config: "api/all=true"
kubernetesVersion: "v1.18.1"
imageRepository: registry.aliyuncs.com/google_containers
EOF
kubeadm init --config kubeadm.yaml
After the execution is complete, we will get the following output:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join {your IP}:6443 --token 3prn7r.iavgjxcmrlh3ust3 \
--discovery-token-ca-cert-hash sha256:95283a2e81464ba5290bf4aeffc4376b6d708f506fcee278cd2a647f704ed55d
Following these instructions, we copy the kubectl configuration to $HOME/.kube/config. Note that the file changes every time kubeadm init is re-run, so it has to be copied again afterwards. kubeadm also prints the join command that Nodes will use to join the cluster.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
If you are using the root user, you can instead point kubectl at the config file with an environment variable:
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc
. ~/.bashrc
Then use kubectl get nodes to view the node status:
NAME STATUS ROLES AGE VERSION
master1 NotReady master 6m52s v1.18.1
The status is NotReady, which is expected: we have not installed a network plugin yet. Next, install one; here we use the Weave network plugin:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Other network plugins are listed in the official documentation: Installing a Pod network add-on .
Check the Pods' status to see whether the installation succeeded:
kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-66bff467f8-br94l 1/1 Running 0 14m
kube-system coredns-66bff467f8-pvsfn 1/1 Running 0 14m
kube-system kube-proxy-b2phr 1/1 Running 0 14m
kube-system weave-net-8wv4k 2/2 Running 0 2m2s
If a pod's STATUS is not Running, use the kubectl logs and kubectl describe commands to view detailed error information.
kubectl logs weave-net-8wv4k -n kube-system weave
kubectl logs weave-net-8wv4k -n kube-system weave-npc
kubectl describe pods weave-net-8wv4k -n kube-system
At this point the Master node's status becomes Ready.
NAME STATUS ROLES AGE VERSION
master1 Ready master 6m52s v1.18.1
Deploying the Node node
Deploying the Node also requires the same "preparation" work; rather than repeating it here, just run the scripts directly:
curl https://gist.githubusercontent.com/elfgzp/02485648297823060a7d8ddbafebf140/raw/781c2cd7e6dba8f099e2b6b1aba9bb91d9f60fe2/vultr_k8s_prepare.sh | sh
curl https://gist.githubusercontent.com/elfgzp/02485648297823060a7d8ddbafebf140/raw/781c2cd7e6dba8f099e2b6b1aba9bb91d9f60fe2/vultr_k8s_install_kubeadm.sh | sh
Next, run on the Node the join command that kubeadm printed when the Master was initialized. If you no longer have it, re-run the following on the Master to print a fresh join command.
kubeadm token create --print-join-command
kubeadm join {your IP}:6443 --token m239ha.ot52q6goyq0pcadx --discovery-token-ca-cert-hash sha256:95283a2e81464ba5290bf4aeffc4376b6d708f506fcee278cd2a647f704ed55d
If you run into problems joining, you can also use kubeadm reset to reset the node.
kubeadm reset
Of course, the join command can also take a configuration file; on the Node, run the following to generate a default join configuration and use it.
kubeadm config print join-defaults > kubeadm-join.yaml
kubeadm join --config kubeadm-join.yaml
Then check the node status with kubectl again. Note that if you want to run kubectl on the Node itself, you need to copy /etc/kubernetes/admin.conf from the Master to the Node.
Finally, verify that the Node joined successfully and its status is Ready:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready master 6m52s v1.18.1
node1 Ready <none> 29s v1.18.1
Summary
That is the whole process of deploying a k8s cluster on Vultr with kubeadm. I stepped into plenty of pitfalls along the way, especially when trying to deploy across hosts from different cloud vendors; in the end, using ECS instances from a single cloud service provider is the way to go.
Reference documents
The Definitive Guide to Kubernetes: From Docker to Kubernetes Practice (4th Edition)
In-depth analysis of Kubernetes
This article is published via OpenWrite, a multi-platform blog publishing tool.