Huawei Atlas 200 DK A2 deployment practice (5): Deploying the YOLOv8-pose pose estimation model on the Atlas 200 DK A2

Deploying the YOLOv8-pose pose estimation model on the Atlas 200 DK A2

   
For a model trained with the PyTorch framework, we first export it to an ONNX model and then convert that into an OM model before it can be deployed on the development board (a minimal export sketch is given below). Reference links.
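
As a quick reference, the .pt weights can be exported to ONNX with the ultralytics package. This is only a minimal sketch; the weight file name and image size below are assumptions, so adjust them to your own model.

# Sketch: export trained YOLOv8-pose weights to ONNX (file name and imgsz are assumptions)
from ultralytics import YOLO

model = YOLO("handpose.pt")                        # your trained .pt weights
model.export(format="onnx", imgsz=640, opset=11)   # typically writes handpose.onnx next to the .pt file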

The OM conversion can be done either on a PC or on the development board itself; in both cases the prerequisite is that the CANN toolkit and its dependency packages are installed. Conversion can also be done through Huawei's own IDE, MindStudio, but under the hood that is still based on the CANN toolkit. The development board image already ships with a complete environment and the latest CANN toolkit pre-installed, but I recommend converting the model on the PC: in my tests, conversion on the board is noticeably slow.

Install the CANN toolkit on the PC. Taking Ubuntu as an example, first install the dependencies.

# First install the dependencies. With the PC connected to the network, run the following command to check that the package sources are reachable.
apt-get update

# Check the root user's umask
umask
# If umask is not 0022, do the following
vi ~/.bashrc
# Add "umask 0022" after the last line of the file, then press Esc and use :wq! to save and exit
source ~/.bashrc
# After the CANN toolkit installation is finished, remember to repeat these steps to set umask back to 0027

# Install the following packages
sudo apt-get install -y gcc g++ make cmake zlib1g \
zlib1g-dev openssl libsqlite3-dev \
libssl-dev libffi-dev libbz2-dev libxslt1-dev \
unzip pciutils net-tools libblas-dev gfortran libblas3

# Install the following packages in the Python environment. According to the official tutorial, the system Python version should be 3.7.5 or later
pip3 install attrs
pip3 install numpy
pip3 install decorator
pip3 install sympy
pip3 install cffi
pip3 install pyyaml
pip3 install pathlib2
pip3 install psutil
pip3 install protobuf
pip3 install scipy
pip3 install requests
pip3 install absl-py

Then we download the CANN toolkit (click this link to download); this is CANN version 6.2.RC2. Huawei ships CANN in a community edition and a commercial edition, and the board itself runs version 7.0.RC1, which ordinary users currently cannot download. In my understanding it should be compatible as long as the CANN version on the PC is less than or equal to the CANN version actually on the board.
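
If you want to confirm which CANN version is actually installed on the board before converting, a simple check (assuming the default installation path used by the official image) is to list the toolkit directory on the board:

# On the development board: list the installed CANN toolkit versions (default install path assumed)
ls /usr/local/Ascend/ascend-toolkit/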

After the download completes, go to the download directory.

# Add execute permission to the package
chmod +x Ascend-cann-toolkit_6.2.RC2_linux-x86_64.run

# Run the following command to install the software
./Ascend-cann-toolkit_6.2.RC2_linux-x86_64.run --install

If the following message is displayed after installation, the install succeeded; xxx corresponds to the package name.

[INFO] xxx install success

Next, pay attention to the environment variables. The following two commands must be run every time the CANN tools are used; if that is too much trouble, you can add them to ~/.bashrc yourself (see the snippet after the code block below).

# CANN_INSTALL_PATH is the directory where you installed the CANN toolkit; replace it with your own path

source CANN_INSTALL_PATH/ascend-toolkit/set_env.sh 
export LD_LIBRARY_PATH=CANN_INSTALL_PATH/ascend-toolkit/latest/x86_64-linux/devlib/:$LD_LIBRARY_PATH
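
If you prefer not to type the two commands every time, the same lines can simply be appended to ~/.bashrc (again replacing CANN_INSTALL_PATH with your actual path):

# Optional: make the CANN environment persistent for new shells
echo 'source CANN_INSTALL_PATH/ascend-toolkit/set_env.sh' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=CANN_INSTALL_PATH/ascend-toolkit/latest/x86_64-linux/devlib/:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc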

At this point, our CANN toolkit has been successfully installed.

   
Next, we convert the ONNX model into an OM model. Do not run the conversion inside a conda environment, otherwise an error may be reported; use the base/system environment directly. Reference links.

# model is the ONNX model to convert; framework=5 means the source model is ONNX; output is the output path prefix; soc_version identifies the processor on the development board
atc --model=handpose.onnx --framework=5 --output=handpose --soc_version=Ascend310B4 

   
The execution result is as follows.


Figure 1 Converting the ONNX model to an OM model with ATC

   
For demonstration purposes we also converted the official YOLOv8 pre-trained yolov8n-pose.pt into a yolov8n-pose.om file. The official YOLOv8 model performs human-body keypoint detection, while our own trained model detects hand keypoints. A quick sanity check of the converted model is sketched below.
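
Before writing the full application, it can be worth sanity-checking the converted OM file on the board with a dummy input. The sketch below assumes the ais_bench package (used later in test.py) is installed; for the human-body model the output shape should be (1, 56, 8400).

# Sketch: quick sanity check of the converted OM model on the board (ais_bench assumed installed)
import numpy as np
from ais_bench.infer.interface import InferSession

session = InferSession(0, "yolov8n-pose.om")           # device 0, converted model
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)   # dummy input matching the 640x640 input size
outputs = session.infer([dummy])
print([o.shape for o in outputs])                       # expect [(1, 56, 8400)] for the 17-keypoint model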

   
Next, copy the converted OM model to the development board and write a test routine. I develop on the board remotely over an SSH connection from VS Code (this is just the method I chose; you could also install an IDE such as Huawei's own MindStudio on the board, which is very powerful but takes some time to learn). A possible SSH configuration is sketched below, and the connection is shown in the figure that follows.
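
For reference, the VS Code Remote-SSH connection only needs a standard entry in the PC's ~/.ssh/config. The host alias, IP address and user name below are placeholders, not values from this article; replace them with your board's actual settings.

# ~/.ssh/config on the PC (alias, IP address and user name are placeholders)
Host atlas200dk
    HostName 192.168.137.100
    User HwHiAiUser
    Port 22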


Figure 2 Accessing the development board from the PC over a VS Code SSH connection

   
The first step is to get the official pre-trained yolov8n-pose model running. We build the application on top of the YOLOv5 sample that ships with the board: go to the /samples/notebooks/01-yolov5 directory, copy the contents of main.ipynb into a newly created test.py file, and modify it as shown below.

# test.py 

# Import dependencies
import cv2
import numpy as np
from ais_bench.infer.interface import InferSession

import time
from det_utils import NMS, postprocess, preprocess_warpAffine, random_color, draw_bbox

cfg = {
    'conf_thres': 0.25,  # confidence threshold; the lower it is, the more predicted boxes are kept
    'iou_thres': 0.45,   # IoU threshold; overlapping boxes above this value are removed by NMS
    'input_shape': [640, 640],  # model input size
}

# Model path
model_path = 'yolov8n-pose.om'

# Initialise the inference session
model = InferSession(0, model_path)

# Inference mode: 'image', 'video' or 'camera'
infer_mode = 'camera'

# Model inference
def model_infer(image, model, cfg):
    start_time = time.time()

    # Pre-processing
    img_pre, IM = preprocess_warpAffine(image)
    pre_time = time.time()
    print("pre:",pre_time - start_time)

    # Inference on the NPU
    output = model.infer([img_pre])[0].transpose(0,2,1)  #(1, 56, 8400) to (1,8400,56)
    infer_time = time.time()
    print("infer:",infer_time - pre_time)

    # Post-processing
    boxes = postprocess(output, IM, cfg['conf_thres'], cfg['iou_thres'])
    post_time = time.time()
    print("post:",post_time - infer_time)

    # Draw the results
    img = draw_bbox(boxes, image)
    draw_time = time.time()
    print("draw:",draw_time - post_time)

    return img

# Image inference
def infer_image(img_path, model, cfg):
    # Load the image with cv2.imread()
    img = model_infer(cv2.imread(img_path), model, cfg)

    # Save the result
    cv2.imwrite("infer-pose.jpg", img)
    print("save done")

# Video inference
def infer_video(video_path, model, cfg):
    """Frame-by-frame inference on a video file."""
    # Open the video
    cap = cv2.VideoCapture(video_path)
    while True:
        ret, img_frame = cap.read()
        if not ret:
            break
        # Run inference on the current frame and show it
        image_pred = model_infer(img_frame, model, cfg)
        cv2.imshow('keypoint', image_pred)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()

# Camera-stream inference
def infer_camera(model, cfg):
    """Real-time inference on an external camera."""
    def find_camera_index():
        max_index_to_check = 2  # Maximum index to check for camera

        for index in range(max_index_to_check):
            cap = cv2.VideoCapture(index)
            if cap.read()[0]:
                cap.release()
                return index

        # If no camera is found
        raise ValueError("No camera found.")

    # Open the camera
    camera_index = find_camera_index()
    cap = cv2.VideoCapture(camera_index)

    # Start time for the FPS counter
    start_time = time.time()
    counter = 0
    while True:
        # Read one frame from the camera
        ret, frame = cap.read()
        if not ret or frame is None:
            break
        image = model_infer(frame, model, cfg)
        counter += 1  # frame counter
        # Overlay the running FPS and show the annotated frame
        if (time.time() - start_time) != 0:
            fps = counter / (time.time() - start_time)
            cv2.putText(image, "FPS:{0}".format(float('%.1f' % fps)), (5, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 0, 255), 1)
            cv2.imshow('keypoint', image)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    # Release resources
    cap.release()
    cv2.destroyAllWindows()

# Select the inference mode
if infer_mode == 'image':
    img_path = 'bus.jpg'
    infer_image(img_path, model, cfg)
elif infer_mode == 'camera':
    infer_camera(model, cfg)
elif infer_mode == 'video':
    video_path = 'racing.mp4'
    infer_video(video_path, model, cfg)

Then copy det_utils.py from the /samples/notebooks/01-yolov5 directory; this module handles the model's pre-processing, post-processing, and drawing. Modify it as follows.

# det_utils.py

"""
Copyright 2022 Huawei Technologies Co., Ltd

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""

import cv2
import numpy as np
import torch
import torchvision

# Connection order of the 17 keypoints
skeleton = [[16, 14], [14, 12], [17, 15], [15, 13], [12, 13], [6, 12], [7, 13], [6, 7], [6, 8], 
            [7, 9], [8, 10], [9, 11], [2, 3], [1, 2], [1, 3], [2, 4], [3, 5], [4, 6], [5, 7]]

# Colour palette
pose_palette = np.array([[255, 128, 0], [255, 153, 51], [255, 178, 102], [230, 230, 0], [255, 153, 255],
                         [153, 204, 255], [255, 102, 255], [255, 51, 255], [102, 178, 255], [51, 153, 255],
                         [255, 153, 153], [255, 102, 102], [255, 51, 51], [153, 255, 153], [102, 255, 102],
                         [51, 255, 51], [0, 255, 0], [0, 0, 255], [255, 0, 0], [255, 255, 255]],dtype=np.uint8)
# Keypoint colours
kpt_color  = pose_palette[[16, 16, 16, 16, 16, 0, 0, 0, 0, 0, 0, 9, 9, 9, 9, 9, 9]]

# Limb (skeleton line) colours
limb_color = pose_palette[[9, 9, 9, 9, 7, 7, 7, 0, 0, 0, 0, 0, 16, 16, 16, 16, 16, 16, 16]]


def preprocess_warpAffine(image, dst_width=640, dst_height=640):
    scale = min((dst_width / image.shape[1], dst_height / image.shape[0]))
    ox = (dst_width  - scale * image.shape[1]) / 2
    oy = (dst_height - scale * image.shape[0]) / 2
    M = np.array([
        [scale, 0, ox],
        [0, scale, oy]
    ], dtype=np.float32)
    
    img_pre = cv2.warpAffine(image, M, (dst_width, dst_height), flags=cv2.INTER_LINEAR,
                             borderMode=cv2.BORDER_CONSTANT, borderValue=(114, 114, 114))
    IM = cv2.invertAffineTransform(M)

    img_pre = (img_pre[...,::-1] / 255.0).astype(np.float32)    # BGR to RGB, 0 - 255 to 0.0 - 1.0
    img_pre = img_pre.transpose(2, 0, 1)[None]  # BHWC to BCHW (n, 3, h, w)
    img_pre = torch.from_numpy(img_pre)
    return img_pre, IM


def iou(box1, box2):

    def area_box(box):
        return (box[2] - box[0]) * (box[3] - box[1])

    # Element-wise intersection rectangle of the two boxes
    left   = max(box1[0], box2[0])
    top    = max(box1[1], box2[1])
    right  = min(box1[2], box2[2])
    bottom = min(box1[3], box2[3])
    inter  = max(right - left, 0) * max(bottom - top, 0)
    union  = area_box(box1) + area_box(box2) - inter
    if inter == 0 or union == 0:
        return 0
    return inter / union


def NMS(boxes, iou_thres):

    remove_flags = [False] * len(boxes)

    keep_boxes = []
    for i, ibox in enumerate(boxes):
        if remove_flags[i]:
            continue

        keep_boxes.append(ibox)
        for j in range(i + 1, len(boxes)):
            if remove_flags[j]:
                continue

            jbox = boxes[j]
            if iou(ibox, jbox) > iou_thres:
                remove_flags[j] = True
    return keep_boxes


def postprocess(pred, IM=[], conf_thres=0.25, iou_thres=0.45):

    # The input is the raw model output, i.e. 8400 candidate boxes
    # shape (1, 8400, 56): [cx, cy, w, h, conf, 17*3 keypoint values]
    boxes = []
    for img_id, box_id in zip(*np.where(pred[...,4] > conf_thres)):
        item = pred[img_id, box_id]
        cx, cy, w, h, conf = item[:5]
        left    = cx - w * 0.5
        top     = cy - h * 0.5
        right   = cx + w * 0.5
        bottom  = cy + h * 0.5
        keypoints = item[5:].reshape(-1, 3)
        keypoints[:, 0] = keypoints[:, 0] * IM[0][0] + IM[0][2]
        keypoints[:, 1] = keypoints[:, 1] * IM[1][1] + IM[1][2]
        boxes.append([left, top, right, bottom, conf, *keypoints.reshape(-1).tolist()])

    boxes = np.array(boxes)
    lr = boxes[:,[0, 2]]
    tb = boxes[:,[1, 3]]
    boxes[:,[0,2]] = IM[0][0] * lr + IM[0][2]
    boxes[:,[1,3]] = IM[1][1] * tb + IM[1][2]
    boxes = sorted(boxes.tolist(), key=lambda x:x[4], reverse=True)
    
    return NMS(boxes, iou_thres)

def hsv2bgr(h, s, v):
    h_i = int(h * 6)
    f = h * 6 - h_i
    p = v * (1 - s)
    q = v * (1 - f * s)
    t = v * (1 - (1 - f) * s)
    
    r, g, b = 0, 0, 0

    if h_i == 0:
        r, g, b = v, t, p
    elif h_i == 1:
        r, g, b = q, v, p
    elif h_i == 2:
        r, g, b = p, v, t
    elif h_i == 3:
        r, g, b = p, q, v
    elif h_i == 4:
        r, g, b = t, p, v
    elif h_i == 5:
        r, g, b = v, p, q

    return int(b * 255), int(g * 255), int(r * 255)

def random_color(id):
    h_plane = (((id << 2) ^ 0x937151) % 100) / 100.0
    s_plane = (((id << 3) ^ 0x315793) % 100) / 100.0
    return hsv2bgr(h_plane, s_plane, 1)

def draw_bbox(boxes, img):
    for box in boxes:
        left, top, right, bottom = int(box[0]), int(box[1]), int(box[2]), int(box[3])
        confidence = box[4]
        label = 0
        color = random_color(label)
        cv2.rectangle(img, (left, top), (right, bottom), color, 2, cv2.LINE_AA)
        caption = f"Person {
      
      confidence:.2f}"
        w, h = cv2.getTextSize(caption, 0, 1, 2)[0]
        cv2.rectangle(img, (left - 3, top - 33), (left + w + 10, top), color, -1)
        cv2.putText(img, caption, (left, top - 5), 0, 1, (0, 0, 0), 2, 16)
        
        keypoints = box[5:]
        keypoints = np.array(keypoints).reshape(-1, 3)
        for i, keypoint in enumerate(keypoints):
            x, y, conf = keypoint
            color_k = [int(x) for x in kpt_color[i]]
            if conf < 0.5:
                continue
            if x != 0 and y != 0:
                cv2.circle(img, (int(x), int(y)), 5, color_k, -1, lineType=cv2.LINE_AA)
        
        for i, sk in enumerate(skeleton):
            pos1 = (int(keypoints[(sk[0] - 1), 0]), int(keypoints[(sk[0] - 1), 1]))
            pos2 = (int(keypoints[(sk[1] - 1), 0]), int(keypoints[(sk[1] - 1), 1]))

            conf1 = keypoints[(sk[0] - 1), 2]
            conf2 = keypoints[(sk[1] - 1), 2]
            if conf1 < 0.5 or conf2 < 0.5:
                continue
            if pos1[0] == 0 or pos1[1] == 0 or pos2[0] == 0 or pos2[1] == 0:
                continue
            cv2.line(img, pos1, pos2, [int(x) for x in limb_color[i]], thickness=2, lineType=cv2.LINE_AA)
    
    return img
    

After doing this, we can test it on the development board by running test.py (a minimal run command is sketched below); the results are shown in the figures that follow.
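
A minimal way to run it on the board, assuming the CANN environment is already sourced and test.py, det_utils.py and the OM file sit in the same directory:

# Run the demo on the development board; set infer_mode in test.py to 'image', 'video' or 'camera' first
python3 test.py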

Figure 3 Image prediction on the Atlas 200 DK A2 with the official YOLOv8 pre-trained human keypoint model

   

Figure 4 Camera-stream prediction on the Atlas 200 DK A2 with the official YOLOv8 pre-trained human keypoint model

   
The official YOLOv8 pre-trained model is now deployed successfully. Next, we deploy our own YOLOv8n-pose hand keypoint detection model trained on the self-made dataset. Create a new det_utils2.py based on det_utils.py and modify it as follows.

"""
Copyright 2022 Huawei Technologies Co., Ltd

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""

import cv2
import numpy as np
import torch
import torchvision

# Connection order of the 21 keypoints
skeleton = [[1, 2], [2, 3], [3, 4], [4, 5], [1, 6], [6, 7], [7, 8], [8, 9], [6, 10], 
            [10, 11], [11, 12], [12, 13], [10, 14], [14, 15], [15, 16], [16, 17], [14, 18], [18, 19], [19, 20],[20,21],[1,18]]

# Colour palette
pose_palette = np.array([[255, 128, 0], [255, 153, 51], [255, 178, 102], [230, 230, 0], [255, 153, 255],
                         [153, 204, 255], [255, 102, 255], [255, 51, 255], [102, 178, 255], [51, 153, 255],
                         [255, 153, 153], [255, 102, 102], [255, 51, 51], [153, 255, 153], [102, 255, 102],
                         [51, 255, 51], [0, 255, 0], [0, 0, 255], [255, 0, 0], [255, 255, 255]],dtype=np.uint8)

# Keypoint colours
kpt_color  = pose_palette[[0, 16, 16, 16, 16, 0, 16, 16, 16, 0, 16, 16, 16, 0, 16, 16, 16, 0, 16, 16, 16, 16, 16]]

# Limb (skeleton line) colours
limb_color = pose_palette[[8, 8, 8, 8, 0, 13, 13, 13, 0, 10, 10, 10, 0, 2, 2, 2, 0, 7, 7, 7, 0]]


def preprocess_warpAffine(image, dst_width=640, dst_height=640):
    scale = min((dst_width / image.shape[1], dst_height / image.shape[0]))
    ox = (dst_width  - scale * image.shape[1]) / 2
    oy = (dst_height - scale * image.shape[0]) / 2
    M = np.array([
        [scale, 0, ox],
        [0, scale, oy]
    ], dtype=np.float32)
    
    img_pre = cv2.warpAffine(image, M, (dst_width, dst_height), flags=cv2.INTER_LINEAR,
                             borderMode=cv2.BORDER_CONSTANT, borderValue=(114, 114, 114))
    IM = cv2.invertAffineTransform(M)

    img_pre = (img_pre[...,::-1] / 255.0).astype(np.float32)    # BGR to RGB, 0 - 255 to 0.0 - 1.0
    img_pre = img_pre.transpose(2, 0, 1)[None]  # BHWC to BCHW (n, 3, h, w)
    img_pre = torch.from_numpy(img_pre)
    return img_pre, IM

def iou(box1, box2):

    def area_box(box):
        return (box[2] - box[0]) * (box[3] - box[1])

    # Element-wise intersection rectangle of the two boxes
    left   = max(box1[0], box2[0])
    top    = max(box1[1], box2[1])
    right  = min(box1[2], box2[2])
    bottom = min(box1[3], box2[3])
    inter  = max(right - left, 0) * max(bottom - top, 0)
    union  = area_box(box1) + area_box(box2) - inter
    if inter == 0 or union == 0:
        return 0
    return inter / union


def NMS(boxes, iou_thres):

    remove_flags = [False] * len(boxes)

    keep_boxes = []
    for i, ibox in enumerate(boxes):
        if remove_flags[i]:
            continue

        keep_boxes.append(ibox)
        for j in range(i + 1, len(boxes)):
            if remove_flags[j]:
                continue

            jbox = boxes[j]
            if iou(ibox, jbox) > iou_thres:
                remove_flags[j] = True
    return keep_boxes


def postprocess(pred, IM=[], conf_thres=0.25, iou_thres=0.45):

    # The input is the raw model output, i.e. 8400 candidate boxes
    # shape (1, 8400, 47): [cx, cy, w, h, conf, 21*2 keypoint values]
    boxes = []
    for img_id, box_id in zip(*np.where(pred[...,4] > conf_thres)):
        item = pred[img_id, box_id]
        cx, cy, w, h, conf = item[:5]
        left    = cx - w * 0.5
        top     = cy - h * 0.5
        right   = cx + w * 0.5
        bottom  = cy + h * 0.5

        keypoints = item[5:].reshape(-1, 2)
        keypoints[:, 0] = keypoints[:, 0] * IM[0][0] + IM[0][2]
        keypoints[:, 1] = keypoints[:, 1] * IM[1][1] + IM[1][2]
        boxes.append([left, top, right, bottom, conf, *keypoints.reshape(-1).tolist()])

    # Guard against frames that contain no targets
    if boxes !=[]:
        boxes = np.array(boxes)
        lr = boxes[:,[0, 2]]
        tb = boxes[:,[1, 3]]
        boxes[:,[0,2]] = IM[0][0] * lr + IM[0][2]
        boxes[:,[1,3]] = IM[1][1] * tb + IM[1][2]
        boxes = sorted(boxes.tolist(), key=lambda x:x[4], reverse=True)
        
        return NMS(boxes, iou_thres)
    
    return []


def hsv2bgr(h, s, v):
    h_i = int(h * 6)
    f = h * 6 - h_i
    p = v * (1 - s)
    q = v * (1 - f * s)
    t = v * (1 - (1 - f) * s)
    
    r, g, b = 0, 0, 0

    if h_i == 0:
        r, g, b = v, t, p
    elif h_i == 1:
        r, g, b = q, v, p
    elif h_i == 2:
        r, g, b = p, v, t
    elif h_i == 3:
        r, g, b = p, q, v
    elif h_i == 4:
        r, g, b = t, p, v
    elif h_i == 5:
        r, g, b = v, p, q

    return int(b * 255), int(g * 255), int(r * 255)


def random_color(id):
    h_plane = (((id << 2) ^ 0x937151) % 100) / 100.0
    s_plane = (((id << 3) ^ 0x315793) % 100) / 100.0
    return hsv2bgr(h_plane, s_plane, 1)


def draw_bbox(boxes, img):
    for box in boxes:
        left, top, right, bottom = int(box[0]), int(box[1]), int(box[2]), int(box[3])
        confidence = box[4]
        label = 0
        color = random_color(label)
        cv2.rectangle(img, (left, top), (right, bottom), color, 2, cv2.LINE_AA)
        caption = f"Person {
      
      confidence:.2f}"
        w, h = cv2.getTextSize(caption, 0, 1, 2)[0]
        cv2.rectangle(img, (left - 3, top - 33), (left + w + 10, top), color, -1)
        cv2.putText(img, caption, (left, top - 5), 0, 1, (0, 0, 0), 2, 16)
        
        keypoints = box[5:]
        keypoints = np.array(keypoints).reshape(-1, 2)

        for i, keypoint in enumerate(keypoints):
            x, y = keypoint
            color_k = [int(x) for x in kpt_color[i]]

            if x != 0 and y != 0:
                cv2.circle(img, (int(x), int(y)), 5, color_k, -1, lineType=cv2.LINE_AA)
        
        for i, sk in enumerate(skeleton):
            pos1 = (int(keypoints[(sk[0] - 1), 0]), int(keypoints[(sk[0] - 1), 1]))
            pos2 = (int(keypoints[(sk[1] - 1), 0]), int(keypoints[(sk[1] - 1), 1]))

            if pos1[0] == 0 or pos1[1] == 0 or pos2[0] == 0 or pos2[1] == 0:
                continue
            cv2.line(img, pos1, pos2, [int(x) for x in limb_color[i]], thickness=2, lineType=cv2.LINE_AA)
    
    return img

   
Then change the import at the top of test.py from "from det_utils import xxx" to "from det_utils2 import xxx" and point model_path at the hand model's OM file, and hand keypoint inference can be run. A minimal sketch of the change is shown below, followed by the results.
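
Concretely, the change at the top of test.py is just the import line and the model path; this is a minimal sketch, assuming the hand model was exported as handpose.om as in the ATC step above.

# test.py, modified for the hand keypoint model (file name taken from the ATC example above)
from det_utils2 import NMS, postprocess, preprocess_warpAffine, random_color, draw_bbox

model_path = 'handpose.om'
model = InferSession(0, model_path)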

Figure 5 Image prediction on the Atlas 200 DK A2 with the self-trained hand keypoint model


Figure 6 Camera-stream prediction on the Atlas 200 DK A2 with the self-trained hand keypoint model

   
As the figures show, camera inference now runs at roughly 16-17 FPS. This is faster than two-stage top-down pipelines that run object detection and keypoint detection as separate models.

At this point, the deployment of the YOLOv8-pose model on the Atlas 200 DK A2 is complete. Operator-level and program-level optimizations will be added later to improve this example.

Origin: blog.csdn.net/lzq6261/article/details/134946348