AI Practice in August: Industrial Vision Defect Detection

– YOLOv8 model optimization and inference based on TFLite

For a video walkthrough, see the Bilibili link: "aidlux model optimization + industrial defect detection ~~ Perfectly using my Huawei mobile phone to achieve defect detection inference".

1 Model optimization

Convert the ONNX model to a TFLite model.

Open the website: http://aimo.aidlux.com/
Enter the trial account and password: account: AIMOTC001, password: AIMOTC001

Following the AI Model Optimizer prompts on the page, complete the steps: ① upload the model, ② select the target platform, ③ set the parameters, ④ get the conversion results.

The ONNX model can be converted into a TFLite model through steps ①–④ above.

The model conversion process produces the following log output:

2023-09-07 19:47:05,969 - INFO : Optimization started.
2023-09-07 19:47:05,970 - INFO : [ONNX-SIM] Clean ONNX Model input node.
2023-09-07 19:47:06,733 - INFO : [ONNX2TFLITE] Start converting to TFLITE.
2023-09-07 19:47:28,511 - INFO : Model optimization done.
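
Before moving on to the inference script, the converted model can be sanity-checked on a desktop with the standard TensorFlow Lite interpreter. This is a minimal sketch, assuming TensorFlow is installed locally; it reuses the model path from the next section, and the expected shapes follow from the input/output sizes used there (640*640*3 input, 8400*11 output).

import tensorflow as tf

# Load the converted model and inspect its input/output tensors.
interpreter = tf.lite.Interpreter(
    model_path="/home/lesson3/yolov8_slimneck_SIOU_tflite/yolov8_slimneck_SIOU_fp32.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]["shape"])   # expected: [1 640 640 3]
print(interpreter.get_output_details()[0]["shape"])  # expected to contain 8400 and 11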

2 Python inference script

The model used is yolov8_slimneck_SIOU.ONNX, provided in the course. After conversion, the model path and file name are as follows:

# Model path
model_path = "/home/lesson3/yolov8_slimneck_SIOU_tflite/yolov8_slimneck_SIOU_fp32.tflite"
# Test image directory
image_path = "/home/lesson3/test"
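
The snippets that follow assume a few imports; here is a minimal header, inferred from the calls used below (aidlite_gpu is the AidLux inference module used in the course environment):

import os
import time

import cv2
import numpy as np
import aidlite_gpu  # AidLux inference runtime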

The model inference process includes the following steps:

  1. Initialize the aidlite class and create the aidlite object
aidlite = aidlite_gpu.aidlite()
print("ok")
  2. Load the model; the sizes passed are byte counts (640*640*3 input values and 8400*11 output values, float32, 4 bytes each)
value = aidlite.ANNModel(model_path, [640 * 640 * 3 * 4], [8400 * 11 * 4], 4, 0)
print("gpu:", value)

Traverse each test image:

for root, dirs, files in os.walk(image_path):
    num = 0
    for file in files:
        file = os.path.join(root, file)
        frame = cv2.imread(file)
        x_scale = frame.shape[1] / 640
        y_scale = frame.shape[0] / 640

Resize the image to the 640*640 input size expected by the model:

img = cv2.resize(frame, (640, 640))
# img_copy=img.co
img = img / 255.0
img = np.expand_dims(img, axis=0)
img = img.astype(dtype=np.float32)
print(img.shape)
  3. Pass in the model input data
aidlite.setInput_Float32(img)
  4. Run inference and time it
start = time.time()
aidlite.invoke()
end = time.time()
timerValue = 1000 * (end - start)
print("infer time(ms): {0}".format(timerValue))
  5. Get the output
pred = aidlite.getOutput_Float32(0)
# print(pred.shape)
pred = np.array(pred)
print(pred.shape)
pred = np.reshape(pred, (8400, 11))
print(pred.shape)  # shape=(8400,11)
  6. Post-process and parse the output (a sketch of a possible postProcess follows this step)
boxes, scores, classes = postProcess(pred, confThresh, NmsThresh)
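
The postProcess implementation itself is provided by the course and not shown here. Below is a minimal sketch of a YOLOv8-style decode plus NMS, under the assumption that each of the 8400 rows is [cx, cy, w, h] followed by 7 class scores in 640*640 input coordinates; only the function name and argument order are taken from the call above, the rest is illustrative.

import numpy as np
import cv2

def postProcess(pred, confThresh, nmsThresh):
    # pred: (8400, 11) array; each row assumed to be [cx, cy, w, h, s0..s6].
    boxes, scores, classes = [], [], []
    for row in pred:
        cls_id = int(np.argmax(row[4:]))
        score = float(row[4 + cls_id])
        if score < confThresh:
            continue
        cx, cy, w, h = row[:4]
        # Convert center/size boxes to top-left x, y, width, height for NMS.
        boxes.append([float(cx - w / 2), float(cy - h / 2), float(w), float(h)])
        scores.append(score)
        classes.append(cls_id)
    if not boxes:
        return np.empty((0, 4)), np.array([]), np.array([])
    keep = np.array(cv2.dnn.NMSBoxes(boxes, scores, confThresh, nmsThresh)).flatten()
    return np.array(boxes)[keep], np.array(scores)[keep], np.array(classes)[keep]

The boxes returned here stay in 640*640 model coordinates; scaling back to the original image is left to draw via x_scale and y_scale.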
  7. Draw the detections and build the result file name (a sketch of draw follows this step)
ret_img = draw(frame, x_scale, y_scale, boxes, scores, classes)
ret_img = ret_img[:, :, ::-1]
num += 1
image_file_name = "/home/result/res" + str(num) + ".jpg"
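
The draw helper is also course-provided; a possible sketch is shown below, assuming postProcess returns boxes as [x, y, w, h] in 640*640 model coordinates and that x_scale / y_scale map them back onto the original frame. CLASS_NAMES is a placeholder; substitute the real defect class names from the course.

import cv2

CLASS_NAMES = ["class_{}".format(i) for i in range(7)]  # placeholder names

def draw(frame, x_scale, y_scale, boxes, scores, classes):
    # Draw detections on a copy of the original (non-resized) frame.
    img = frame.copy()
    for (x, y, w, h), score, cls in zip(boxes, scores, classes):
        x1, y1 = int(x * x_scale), int(y * y_scale)
        x2, y2 = int((x + w) * x_scale), int((y + h) * y_scale)
        cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
        label = "{} {:.2f}".format(CLASS_NAMES[int(cls)], score)
        cv2.putText(img, label, (x1, max(y1 - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return img

Note that frame comes from cv2.imread and is therefore BGR; whether the ret_img[:, :, ::-1] channel flip above is still needed depends on the colour order the actual draw works in.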

  8. Save the image (see the note on the output directory below)

cv2.imwrite(image_file_name, ret_img)
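
One practical note: cv2.imwrite typically fails silently (returns False) when the target directory does not exist, so it can help to create the output directory once before the loop. This is a small addition, not part of the original script:

import os

# Make sure the result directory used above exists before writing images.
os.makedirs("/home/result", exist_ok=True)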

Origin: blog.csdn.net/qq_42835363/article/details/132762395