Deploying a first application on a k8s cluster built with Raspberry Pis, and trying dynamic scaling

Recap

Last year I built a k8s cluster with Raspberry Pis. I didn't know much about it at the time, so I only set up a bare cluster with a network plugin and a simple dashboard, and never deployed an actual application. Link to the previous article: Teach you how to use Raspberry Pi 4B to build a k8s cluster

Recently, on a whim, I built a new chassis out of building blocks and set out to deploy a test application. Today I will summarize the whole process and the problems I ran into, for your reference.

Handling pre-existing problems

Replacing the network plugin

The previous article described installing the calico network plugin. Recently, however, I found that the calico pods failed to start. After trying to fix it for a long time without success, the only option was to reinstall following the earlier tutorial, this time with the flannel plugin instead.

Removing the old network plugin

Remove the network plugin first

kubectl delete -f calico.yaml

At this point a leftover tunl0 virtual network interface remains; you can view it with ifconfig and remove it. Because I went through this many times, I combined several shell commands into one line, which you can adapt to your own situation:

ifconfig tunl0 down;ip link delete tunl0;rm -f /etc/cni/net.d/*;kubectl delete -f calico.yaml;systemctl start kubelet; systemctl start docker

Cluster reset

Execute the cluster reset command on all 3 machines:

kubeadm reset

Delete the configuration files on all three machines:

rm -rf $HOME/.kube;rm -rf /etc/cni/net.d

Restart docker and kubelet, and clear the iptables rules:

systemctl daemon-reload;systemctl stop kubelet; systemctl stop docker; iptables --flush; iptables -t nat --flush;systemctl start kubelet; systemctl start docker

Cluster installation

The master node installation is the same as in the previous article and will not be described in detail here.

sudo kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.20.0 --apiserver-advertise-address=192.168.2.181 --pod-network-cidr=192.168.0.0/16 --ignore-preflight-errors=all

Join command for the worker nodes:

kubeadm join 192.168.2.181:6443 --token jqll23.kc3nkji7vxkaefro  --discovery-token-ca-cert-hash sha256:1b475725b680ed8111197eb8bfbfb69116b38a8d2960d51d17af69188b6badc2 --ignore-preflight-errors=all 
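If the token in the join command has expired (kubeadm tokens are only valid for 24 hours by default), a new join command can be generated on the master node; this is standard kubeadm usage, not something specific to this setup:

# run on the master; prints a complete kubeadm join command with a fresh token
kubeadm token create --print-join-command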

Command to view all pods:

kubectl get pods --all-namespaces

Sometimes, after the machine restarts, running a kubectl command reports an error: The connection to the server localhost:8080 was refused - did you specify the right host or port?
Reason: kubectl cannot find the cluster configuration, because it was not set up after the cluster was initialized. The problem can be solved by setting an environment variable on the machine:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile

source /etc/profile

Putting source /etc/profile in a script that runs automatically at startup solves this problem for good.
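An alternative that also survives reboots is the standard kubeadm setup of copying the admin config into the current user's home directory (a common approach, assuming you run kubectl as that user):

# standard kubeadm post-install steps for a non-root user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config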

Pod fails to run: check the logs

In my own attempts, pods often failed to run even after the network plugin was installed. You can use the following command to view the logs and find the reason:

kubectl logs -f test-k8s-68bb74d654-9wwbt -n kube-system

test-k8s-68bb74d654-9wwbt is the specific pod name.
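Besides the logs, kubectl describe is often useful: it prints the pod's events (failed image pulls, scheduling problems, crash loops), which the logs alone may not show:

kubectl describe pod test-k8s-68bb74d654-9wwbt -n kube-system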

Install network plugin

Use the official yaml file

curl -sSL https://raw.githubusercontent.com/coreos/flannel/v0.12.0/Documentation/kube-flannel.yml | kubectl apply -f -

Unexpectedly, it worked right away; before this, I kept getting errors about being unable to connect to 10.1244.***.**.

View all pods

kubectl get pods --all-namespaces

Now the first application is installed as well: the pods named test-k8s belong to the application I deployed, and their namespace is default, unlike the others.

View all nodes

kubectl get node --all-namespaces

Same as the previous command, with one word changed.

Installing the first application

Building the image

To install an application you need to write a yaml file and have a usable image. I followed the video tutorial by 广州云科 on Bilibili, but his test application is built for the x86 platform, and running that image directly on the Raspberry Pi reported an error.


So I had to rebuild the image myself. First find the project repository, test-k8s, and clone all of the code onto the Raspberry Pi:

You need an image registry for the cluster to pull from. For my test I used Alibaba Cloud's Container Registry; the repository has to be set to public so that anyone can pull the image.

The project's Dockerfile is already written, so we can build the image directly with the docker build command (to re-push for this test, I first deleted all of my local images and containers). Basically just follow Alibaba Cloud's tutorial:

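The gist of those steps is logging in to the registry before pushing. A sketch, with the username as a placeholder for your own Alibaba Cloud account:

# log in to the Alibaba Cloud registry; you will be prompted for the registry password
docker login --username=<your-aliyun-account> registry.cn-shenzhen.aliyuncs.com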

Build & push to Alibaba Cloud

First build locally. The image is named test-k8s; -t is short for tag, and multiple tags can be set during a build.

docker build -t test-k8s .

Give the image a new tag

docker tag test-k8s:latest registry.cn-shenzhen.aliyuncs.com/koala9527/testapp:latest

Push

docker push registry.cn-shenzhen.aliyuncs.com/koala9527/testapp:latest


Now the image is in Alibaba Cloud's Container Registry, at: registry.cn-shenzhen.aliyuncs.com/koala9527/testapp:latest
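Since the repository is public, you can verify it from any machine, for example on one of the Raspberry Pi nodes, by pulling it without logging in:

docker pull registry.cn-shenzhen.aliyuncs.com/koala9527/testapp:latest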

Writing the yaml file for the first application

File name: testapp.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  # Deployment name
  name: test-k8s
spec:
  replicas: 3
  # Used to find the associated Pods; all labels must match
  selector:
    matchLabels:
      app: test-k8s
  # Pod template
  template:
    metadata:
      labels:
        app: test-k8s
    spec:
      # Define containers; there can be more than one
      containers:
      - name: test-k8s # container name
        image: registry.cn-shenzhen.aliyuncs.com/koala9527/testapp:v1  # image
        resources:
          requests:
            cpu: 100m

Here is a brief review of what the fields in these resource description files mean, although the previous article also explained them.
apiVersion: the api version. My understanding is that this is version control for the resource controllers: features in k8s iterate quickly, different resource controllers use different api versions, and different cluster versions support different apis, so the yaml file has to match the actual cluster environment. The resource controller type is specified with the kind field.

You can use kubectl api-versions to see the api versions supported by the cluster.

kind: the controller type. All resources in the cluster are highly abstracted by k8s, and kind indicates the type of resource; Deployment defines a multi-replica stateless resource object.
name: test-k8s: names the resource controller test-k8s. replicas: the initial number of pods. matchLabels: selector labels; other resources use these label values to refer to this one.
Everything under template describes the pod itself: app: test-k8s is the pod's label, image is the pod's image pull address, and requests is the requested CPU; 100m equals 0.1 CPU.
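If you forget what a field means, kubectl explain prints the built-in documentation for any part of a resource, for example:

# built-in field documentation, no internet required
kubectl explain deployment.spec.replicas
kubectl explain deployment.spec.template.spec.containers.resources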

Deploying the application

kubectl apply -f testapp.yaml

Now there are 3 pods. You can use kubectl get pod -o wide to view detailed pod information, mainly to check the IPs:

 kubectl get pod -o wide


Log into one of the pods and try accessing another pod (here, entering the first pod to access the second):

kubectl exec -it test-k8s-68b9f5c6c7-hn25x -- bash
curl 10.244.1.173:8080

The result: the output shows the correct pod name.
But at this point the pods can only access each other. As mentioned in the previous article, a pod can be thought of as a separate physical machine sharing one network; to allow access from outside the cluster you have to create another kind of resource, described next.


Creating a Service resource controller

The yaml file

Characteristics of a Service:

  • A Service is associated with its Pods through labels
  • A Service's lifecycle is not tied to the Pods; its IP does not change when Pods are recreated
  • It provides load balancing, automatically forwarding traffic to different Pods
  • It can expose a port for access from outside the cluster
  • Inside the cluster it can be reached by its service name

All resources are described with yaml files. Write a yaml file describing the Service, named service.yaml

apiVersion: v1
kind: Service
metadata:
  name: test-k8s
spec:
  selector:
    app: test-k8s
  type: NodePort
  ports:
    - port: 8080        # the Service's own port, for access inside the cluster
      targetPort: 8080  # the container port, i.e. the port of the test-k8s application
      nodePort: 31000   # the port exposed outside the cluster

Worth noting is the type field of the Service resource. Here it is set to NodePort, which opens a port on every Node for access. If type is not specified, the default is ClusterIP, which only allows access from inside the cluster, not from outside. There is also the LoadBalancer type, meaning load balancing; that resource type is usually provided by cloud vendors and is less common here.

Also note that the ports exposed by NodePort are restricted to the range 30000-32767.

Applying the Service:

Same as applying the Deployment:

kubectl apply -f service.yaml

View Service resources in k8s

kubectl get svc

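You can also confirm the "reachable by service name inside the cluster" behavior by entering one of the pods again and curling the service name instead of a pod IP (the pod name is the one from earlier; yours will differ):

kubectl exec -it test-k8s-68b9f5c6c7-hn25x -- curl http://test-k8s:8080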

Testing the result

This machine's internal IP is 192.168.2.187, and the port set just now is 31000.

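Besides the browser, a quick check from any machine on the same LAN works too:

curl http://192.168.2.187:31000/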

Dynamic scaling

Installing the resource metrics tool

Before using dynamic scaling you need to install a tool that collects resource metrics, used to monitor the CPU and memory usage of Nodes and Pods. It is called metrics-server and the cluster does not install it by default. Installation is simple; download it from the official GitHub releases and apply it:

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl apply -f components.yaml

Afterwards things went wrong: the related pod could not start, and a section of the yaml file had to be replaced. I am not sure of the exact reason.
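For reference, the most common cause on kubeadm clusters is metrics-server refusing the kubelets' self-signed certificates, and the widely used workaround (an assumption on my part, not necessarily the exact change made here) is adding --kubelet-insecure-tls to the metrics-server container args in components.yaml:

# sketch of the common fix: inside the metrics-server Deployment in components.yaml
    args:
      - --kubelet-insecure-tls                 # skip kubelet TLS verification (fine for a hobby cluster)
      - --kubelet-preferred-address-types=InternalIP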

After that you can use the top command to check the CPU and memory usage of pods and nodes.
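That is:

kubectl top node    # CPU / memory usage per node
kubectl top pod     # CPU / memory usage per pod; add -n <namespace> if needed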

Installing the horizontal autoscaler

Dynamic pod scaling is controlled by yet another resource controller, called HorizontalPodAutoscaler, which literally means horizontal automatic scaling. It is as simple as service.yaml; file name: hpa.yaml

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  namespace: default
  name: test-k8s-scaler
  labels:
    app: test-k8s-scaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-k8s
  minReplicas: 2
  maxReplicas: 100
  targetCPUUtilizationPercentage: 45


The data in the metadata of the horizontal autoscaler is the controller's basic information. In the spec, scaleTargetRef specifies which resource to monitor, minReplicas the minimum number of replicas, and maxReplicas the maximum; this minimum overrides the replicas count defined initially in the Deployment. targetCPUUtilizationPercentage specifies the CPU metric that triggers scaling. k8s uses a fairly involved algorithm (briefly explained in the book "Kubernetes in Action") and observes the pods' resource usage at regular intervals to adjust automatically. In my experience here, once the pods' CPU usage exceeds 45% the scale-out policy is triggered. Other metrics can also be monitored, but CPU and memory are the usual ones.
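For reference, kubectl can also create an equivalent autoscaler imperatively, without writing the yaml file:

kubectl autoscale deployment test-k8s --cpu-percent=45 --min=2 --max=100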

Install this autoscaling controller with kubectl apply -f hpa.yaml.

You can also view the basic state of this horizontal autoscaler with kubectl get hpa.


Autoscaling with AB Stress Test

Download address: download it directly on Windows, unzip it, then go into the bin directory and run the following command:

./ab.exe -n 10000 -c 100 http://192.168.2.181:31000/

This means 10,000 requests in total with a concurrency of 100. Use watch kubectl get hpa,pod to monitor the autoscaling and the number of pods in real time.

A few minutes after the requests complete, the number of pods drops back to 2, and the test is finished.


Summary

The demo application can now be deployed into the cluster. The whole process is not complicated and there is nothing particularly brain-burning about it. Horizontal scaling is the most attractive part of k8s for me, so after successfully deploying the application, the first thing I tried was to get this feature working. Next I plan to use GitLab to deploy the application automatically for real CI/CD, so that pushing code and merging a branch triggers the deployment without typing commands by hand, or to try installing other applications that provide real services so the cluster actually does useful work. Thank you all for reading this far.


Origin juejin.im/post/7082743804272312333