Face recognition on RV1126: converting facenet into an RKNN model

Contents

1. Model download

Pre-trained models

2. Convert facenet to an RKNN model and run inference

3. View the network model


1. Model download

First, download the facenet model from GitHub: https://github.com/davidsandberg/facenet

Pre-trained models

Model name       LFW accuracy  Training dataset  Architecture
20180408-102900  0.9905        CASIA-WebFace     Inception ResNet v1
20180402-114759  0.9965        VGGFace2          Inception ResNet v1

NOTE: If you use any of the models, please do not forget to give proper credit to those providing the training dataset as well.
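Before converting, it helps to confirm the exact input and output node names inside the frozen graph. A minimal sketch, assuming TensorFlow 1.x (which rknn-toolkit and these .pb files target), that prints every node name so you can locate candidates such as input, embeddings, or InceptionResnetV1/Bottleneck/BatchNorm/Reshape_1:

import tensorflow as tf  # assumes TensorFlow 1.x

# Load the frozen graph shipped inside the 20180402-114759 archive
with tf.gfile.GFile('./20180402-114759.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Print every node name; search this list for the input and output tensors
for node in graph_def.node:
    print(node.name)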

2. Convert facenet to an RKNN model and run inference

import numpy as np
import cv2
from rknn.api import RKNN

INPUT_SIZE = 160

if __name__ == '__main__':
    # Create RKNN object
    rknn = RKNN(verbose=False, verbose_file='./test1.log')

    # Config for model input pre-processing: normalize pixels to [0, 1]
    # and keep the RGB channel order; quantize to asymmetric u8 for rv1126
    rknn.config(mean_values=[[0, 0, 0]],
                std_values=[[255, 255, 255]],
                reorder_channel='0 1 2',
                target_platform='rv1126',
                quantized_dtype='asymmetric_affine-u8',
                optimization_level=3,
                output_optimize=1)
    
    print('config done')

    # load tensorflow model
    print('--> Loading model')
    # The 'phase_train' placeholder and the final 'embeddings' node are left
    # out here; we take the Bottleneck output before L2 normalization instead
    rknn.load_tensorflow(tf_pb='./20180402-114759.pb',
                         # inputs=['input', 'phase_train'],
                         inputs=['input'],
                         outputs=['InceptionResnetV1/Bottleneck/BatchNorm/Reshape_1'],
                         # outputs=['embeddings'],
                         input_size_list=[[INPUT_SIZE, INPUT_SIZE, 3]])
    print('done')

    # Build Model
    print('--> Building model')
    # rknn.build(do_quantization=False)  # alternative: build without quantization
    rknn.build(do_quantization=True, dataset='dataset.txt')
    print('done')

    # Export RKNN Model
    rknn.export_rknn('./facenet_Reshape_1.rknn')


    # Set inputs
    img = cv2.imread('./ldh.jpg')
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # model expects RGB order
    img = cv2.resize(img, (INPUT_SIZE, INPUT_SIZE))

    # init runtime environment (PC simulator; pass target='rv1126' to run on the board)
    print('--> Init runtime environment')
    # ret = rknn.init_runtime(target='rv1126')
    ret = rknn.init_runtime()
    if ret != 0:
        print('Init runtime environment failed')
        exit(ret)
    print('done')

    # Inference
    print('--> Running model')
    outputs = rknn.inference(inputs=[img])
    print('len(outputs[0][0])::', len(outputs[0][0]))
    print("outputs::",  outputs)
    print('done')

    # perf
    print('--> Begin evaluate model performance')
    perf_results = rknn.eval_perf(inputs=[img])
    print('done')
    print("perf_results:", perf_results)

    
    rknn.release()
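The build step above reads dataset.txt as the quantization calibration list: a plain-text file with one image path per line. A minimal sketch for generating it from a folder of face images (the calib_images folder name is a placeholder, not from the original setup):

import glob

# Collect calibration images (folder name is hypothetical)
paths = sorted(glob.glob('./calib_images/*.jpg'))

# dataset.txt: one image path per line, consumed by rknn.build()
with open('dataset.txt', 'w') as f:
    for p in paths:
        f.write(p + '\n')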

The output layer taken here is outputs=['InceptionResnetV1/Bottleneck/BatchNorm/Reshape_1'], and the result is a vector of 512 floats.
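In the facenet graph, the embeddings tensor is simply the L2-normalized version of this Bottleneck output, so when exporting at Reshape_1 the normalization can be done on the CPU after inference. A minimal post-processing sketch (the distance comparison is illustrative, not part of the original script):

import numpy as np

def to_embedding(raw):
    # raw: the 512-float output taken at Reshape_1
    v = np.asarray(raw, dtype=np.float32).flatten()
    # L2-normalize, matching what the 'embeddings' node does in the graph
    return v / np.linalg.norm(v)

# emb1 = to_embedding(rknn.inference(inputs=[img1])[0])
# emb2 = to_embedding(rknn.inference(inputs=[img2])[0])
# dist = np.linalg.norm(emb1 - emb2)  # smaller distance => more similar faces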

If the output layer is instead set to outputs=['embeddings'], the output obtained is different.

3. View the network model

We use Netron to inspect the converted RKNN model and locate the output layer we selected.
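If Netron is installed as a Python package (pip install netron), the model can be opened straight from a script; the standalone netron application works the same way. A minimal sketch:

import netron

# Opens a local web page visualizing the converted model;
# scroll to the last layers to confirm the chosen output node
netron.start('./facenet_Reshape_1.rknn')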

