A First Look at KubeEdge

This article records hands-on tests in a cluster where KubeEdge has already been deployed. The goal is to understand the similarities and differences between KubeEdge and Kubernetes. It targets KubeEdge v1.2.

Notes

Because KubeEdge implements part of the kubelet's functionality in edgecore, in theory it plugs into the cluster seamlessly.
This article uses a single image, latelee/webgin, which serves a web page reporting the CPU architecture, OS, and hostname of the machine it runs on. Using docker manifest, a different image variant is pulled for each CPU architecture, so the yaml files can reference one image name that automatically matches each platform.

View the cluster from the master node:

# kubectl get node
NAME                        STATUS   ROLES    AGE     VERSION
edge-node                   Ready    edge     43h     v1.17.1-kubeedge-v1.2.0
edge-node2                  Ready    <none>   3m14s   v1.17.0
latelee.org.ttucon-2142ec   Ready    edge     40h     v1.17.1-kubeedge-v1.2.1-dirty
ubuntu                      Ready    master   44h     v1.17.4

Here edge-node2 is a plain k8s node at v1.17.0; edge-node and latelee.org.ttucon-2142ec are KubeEdge edge nodes, the latter being an arm board.

The CRDs were already created when KubeEdge was deployed; list them (they play little role in this article):

# kubectl get crds
NAME                                           CREATED AT
clusterobjectsyncs.reliablesyncs.kubeedge.io   2020-02-20T08:28:32Z
devicemodels.devices.kubeedge.io               2019-12-31T08:41:34Z
devices.devices.kubeedge.io                    2019-12-31T08:41:34Z
objectsyncs.reliablesyncs.kubeedge.io          2020-02-20T08:28:32Z

Testing

The test yaml file webgin-service.yaml is as follows:

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: webgin-deployment
  labels:
   app: webgin
spec:
  replicas: 3 # tells deployment to run 3 pods matching the template
  selector:
    matchLabels:
      app: webgin
  template:
    metadata:
      labels:
        app: webgin
    spec:
      containers:
      - name: webgin
        image: latelee/webgin
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/localtime
          name: time-zone
      volumes:
      - name: time-zone
        hostPath: 
          path: /etc/localtime
      hostNetwork: true
---

apiVersion: v1
kind: Service # declare a Service object
metadata:
  labels:
    run: webgin
  name: webgin
  namespace: default
spec:
  ports:
  - port: 88 # exposed externally on port 88
    targetPort: 80
  selector:
    app: webgin
  type: LoadBalancer

Explanation: this is a Deployment plus Service combination. The replica count is 3 (one per node). hostNetwork mode is used, so the Service's port mapping does not take effect; /etc/localtime is mounted so the container reports the real local time.

Create the deployment on the master node:

kubectl apply -f webgin-service.yaml 

Check the pods:

# kubectl get pod -owide
NAME                                 READY   STATUS    RESTARTS   AGE   IP              NODE                        NOMINATED NODE   READINESS GATES
webgin-deployment-7ccff86d8b-6hgfk   0/1     Pending   0          91s   <none>          edge-node                   <none>           <none>
webgin-deployment-7ccff86d8b-lnmpj   1/1     Running   0          91s   192.168.0.153   edge-node2                  <none>           <none>
webgin-deployment-7ccff86d8b-ngp7v   1/1     Running   0          91s   192.168.0.220   latelee.org.ttucon-2142ec   <none>           <none>

One pod stays in the Pending state, while the other two run successfully.

Access the web service:

# curl 192.168.0.153 
Hello World 
arch: amd64 os: linux hostname: edge-node2
Now: 2020-03-12 22:05:35
# curl 192.168.0.220
Hello World 
arch: arm os: linux hostname: latelee.org.ttucon-2142ec
Now: 2020-03-12 22:05:40

Result: of the two machines running the pod, one is x86 and the other is arm.

Other Tests

Comparing the kubectl describe command.

k8s:
# kubectl describe pod webgin-deployment-7ccff86d8b-lnmpj 
Events:
  Type    Reason     Age        From                 Message
  ----    ------     ----       ----                 -------
  Normal  Scheduled  <unknown>  default-scheduler    Successfully assigned default/webgin-deployment-7ccff86d8b-lnmpj to edge-node2
  Normal  Pulled     8m46s      kubelet, edge-node2  Container image "latelee/webgin" already present on machine
  Normal  Created    8m45s      kubelet, edge-node2  Created container webgin
  Normal  Started    8m45s      kubelet, edge-node2  Started container webgin

KubeEdge:
# kubectl describe pod webgin-deployment-7ccff86d8b-ngp7v

Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned default/webgin-deployment-7ccff86d8b-ngp7v to latelee.org.ttucon-2142ec

Result: the k8s output is more complete.

Comparing the kubectl logs command.

k8s:
# kubectl logs webgin-deployment-7ccff86d8b-lnmpj 
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] GET    /                         --> main.myIndex (3 handlers)
gin server start...
[GIN-debug] Listening and serving HTTP on :80

KubeEdge:
# kubectl logs webgin-deployment-7ccff86d8b-ngp7v
Error from server: Get https://192.168.0.220:10250/containerLogs/default/webgin-deployment-7ccff86d8b-ngp7v/webgin: dial tcp 192.168.0.220:10250: connect: connection refused

Result: KubeEdge does not support this command. kubectl logs dials the kubelet API on the node's port 10250, which edgecore (as of v1.2) does not serve, hence the connection refused error.

Comparing the kubectl exec command.

k8s:
# kubectl exec -it webgin-deployment-7ccff86d8b-lnmpj -- uname -a
Linux edge-node2 4.4.0-174-generic #204-Ubuntu SMP Wed Jan 29 06:41:01 UTC 2020 x86_64 GNU/Linux

KubeEdge:
# kubectl exec -it webgin-deployment-7ccff86d8b-ngp7v -- uname -a
Error from server: error dialing backend: dial tcp 192.168.0.220:10250: connect: connection refused

Result: KubeEdge does not support this command either, for the same reason: nothing on the edge node listens on port 10250.

Regarding hostNetwork mode, check the container's IP on the arm node (only the relevant output is shown):

# docker exec -it 71605a5e17a3 ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:00:00:00:94  
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0

eth0      Link encap:Ethernet  HWaddr 4C:00:00:00:00:EC  
          inet addr:192.168.0.220  Bcast:192.168.0.255  Mask:255.255.255.0

On the arm node, no network device with a name like veth216ffbc7 is created: with hostNetwork the container shares the node's network namespace, so no veth pair is needed.

Conclusion

Differences between the two remain: some commands are not supported, and communication can be unstable.

The image used in this article really exists, but it may be updated from time to time. Everything described here is only what I observed in my own tests and may not generalize.

Reposted from blog.csdn.net/subfate/article/details/104979990