TensorFlow Lite: a lightweight framework for machine learning models on Android

Introduction to TensorFlow Lite

TensorFlow Lite is a lightweight framework for running machine learning models on mobile, embedded, and IoT devices. It is the mobile-oriented extension of TensorFlow, and it aims to solve the problem of limited computing resources for on-device machine learning on hardware such as mobile phones. TensorFlow Lite runs models efficiently by reducing model size, applying quantization, and shipping kernels optimized for specific devices.

TensorFlow Lite provides APIs in several languages, including Java, C++, and Python. It can convert TensorFlow models into the Lite model format and offers a rich set of API interfaces for developers. In addition, TensorFlow Lite supports hardware accelerators (such as GPUs and DSPs) to further speed up model inference.

TensorFlow Lite has a wide range of application scenarios: speech recognition, image classification, and object detection in smart homes; disease diagnosis and patient monitoring in smart healthcare; and vehicle control in autonomous driving. Thanks to its efficiency and portability, TensorFlow Lite has become one of the mainstream frameworks for running machine learning on mobile phones and other embedded devices.

The official TensorFlow Lite documentation is available at https://www.tensorflow.org/lite . There you can find usage guides, API documentation, sample code, and best practices for deploying machine learning models on mobile and embedded systems.

TensorFlow Lite integration

To integrate TensorFlow Lite into your Android application, follow these steps:

  1. Add the TensorFlow Lite library to your application's Gradle build file by adding the following dependency in the build.gradle (Module: app) file:
dependencies {
    implementation 'org.tensorflow:tensorflow-lite:2.5.0'
}
  2. Copy the model file (.tflite) into the application's "assets" directory.

  3. Load the model in the application with the following code:

private Interpreter tflite;

// Memory-map the model from assets and create the interpreter
tflite = new Interpreter(loadModelFile(), null);

private MappedByteBuffer loadModelFile() throws IOException {
    AssetFileDescriptor fileDescriptor = this.getAssets().openFd("model.tflite");
    FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
}
  4. Use the TensorFlow Lite interpreter to run inference. Refer to the TensorFlow Lite documentation for how to prepare inputs and read outputs.
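The input-preparation part of step 4 can be sketched in plain Java, without Android dependencies. Float image models conventionally take a direct ByteBuffer holding normalized RGB values; the 300-pixel input size and the 127.5 mean/std constants below are assumptions for illustration, not values fixed by TensorFlow Lite itself:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class InputBufferDemo {
    // Normalization constants commonly used by float image models
    // (an assumption, not specific to any particular .tflite file)
    static final float IMAGE_MEAN = 127.5f;
    static final float IMAGE_STD = 127.5f;

    // Allocate a direct buffer for a [1, size, size, 3] float32 input tensor:
    // 4 bytes per float, 3 channels per pixel
    static ByteBuffer allocateInputBuffer(int size) {
        ByteBuffer buffer = ByteBuffer.allocateDirect(4 * size * size * 3);
        buffer.order(ByteOrder.nativeOrder());
        return buffer;
    }

    // Write one ARGB pixel into the buffer as three normalized floats
    static void putPixel(ByteBuffer buffer, int argb) {
        buffer.putFloat((((argb >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD); // R
        buffer.putFloat((((argb >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);  // G
        buffer.putFloat(((argb & 0xFF) - IMAGE_MEAN) / IMAGE_STD);         // B
    }

    public static void main(String[] args) {
        ByteBuffer input = allocateInputBuffer(300);
        System.out.println(input.capacity()); // 4 * 300 * 300 * 3 = 1080000
        putPixel(input, 0xFFFFFFFF);          // one white pixel
        input.rewind();
        System.out.println(input.getFloat()); // (255 - 127.5) / 127.5 = 1.0
    }
}
```

On Android, the filled buffer would then be passed to `Interpreter.run(input, output)`; the normalization must match whatever the model was trained with.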

Training your own model for TensorFlow Lite

  1. First, select and train a machine learning model suited to your application's needs. Models can be trained with common deep learning libraries such as TensorFlow or PyTorch.

  2. After training, convert the model to a format supported by the TensorFlow Lite runtime. During conversion, the model can be optimized and its size reduced through techniques such as quantization, making it better suited to deployment on mobile devices. You can use the official TFLite Converter to convert your own model, or download already-converted models from TensorFlow Hub.

  3. After a successful conversion, you get a TensorFlow Lite model file (usually with a .tflite extension). You can save the file to local disk or package it directly into your application's assets directory.

Hopefully these steps will help you successfully obtain and use TensorFlow Lite model files.
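A converted file can be sanity-checked quickly: TensorFlow Lite models are FlatBuffers, and the format's file identifier "TFL3" sits at byte offsets 4 to 7, after the 4-byte root table offset. The following plain-Java check is a minimal sketch (the command-line path argument is a placeholder, not from the original article):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class TfliteCheck {
    // TensorFlow Lite FlatBuffer files carry the identifier "TFL3"
    // at byte offsets 4-7 (after the 4-byte root table offset)
    static boolean looksLikeTflite(byte[] header) {
        return header.length >= 8
                && header[4] == 'T' && header[5] == 'F'
                && header[6] == 'L' && header[7] == '3';
    }

    public static void main(String[] args) throws IOException {
        byte[] bytes = Files.readAllBytes(Paths.get(args[0]));
        System.out.println(looksLikeTflite(bytes) ? "looks like a .tflite file" : "not a .tflite file");
    }
}
```

This only verifies the container format; it does not validate the model graph itself.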

TensorFlow Lite model file

Google's official collection of TensorFlow Lite model files can be found on the TensorFlow Hub website. You can enter keywords in the search bar of this website, such as "TensorFlow Lite", and press Enter to find models related to your search.

From the search results page, you can browse and filter for different types of models, such as classification, object detection, or image segmentation. Each model has its own introduction and documentation, including information on how to use the model and its performance metrics. If you find a model you are interested in, you can click the link to go to the model details page, which may provide downloadable pre-trained weights or converted TensorFlow Lite model files.

Visit the TensorFlow Hub website: https://tfhub.dev/

TensorFlow Lite example

You can find the official Android example for TensorFlow Lite in the TensorFlow GitHub repository. The example demonstrates how to use TensorFlow Lite to recognize objects in images and display the results in the application.

The sample includes resources such as complete project code, Gradle files, and model files. You can directly download and run the sample application, or use it as a reference to build your own TensorFlow Lite Android application.

The following is the GitHub repository address of the example project:
https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android

The following is a sample code for object detection and recognition using the official TensorFlow Lite model file:

  1. Import the TensorFlow Lite library

    implementation 'org.tensorflow:tensorflow-lite:2.5.0' // prefer pinning a version rather than using '+'
    
  2. Load the model file

    private MappedByteBuffer loadModelFile(Activity activity, String modelPath) throws IOException {
        AssetFileDescriptor fileDescriptor = activity.getAssets().openFd(modelPath);
        FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
        FileChannel fileChannel = inputStream.getChannel();
        long startOffset = fileDescriptor.getStartOffset();
        long declaredLength = fileDescriptor.getDeclaredLength();
        return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
    }
    
  3. Preprocess the input image

    private Bitmap preprocess(Bitmap bitmap) {
        int width = bitmap.getWidth();
        int height = bitmap.getHeight();
        int inputSize = 300;

        Matrix matrix = new Matrix();
        float scaleWidth = ((float) inputSize) / width;
        float scaleHeight = ((float) inputSize) / height;
        matrix.postScale(scaleWidth, scaleHeight);

        Bitmap resizedBitmap = Bitmap.createBitmap(bitmap, 0, 0, width, height, matrix, false);

        return resizedBitmap;
    }
    
  4. Run inference

    private void runInference(Bitmap bitmap) {
        try {
            // Load the model file; this sketch assumes a float (non-quantized) model
            MappedByteBuffer modelFile = loadModelFile(this, "detect.tflite");

            // Initialize the interpreter
            Interpreter.Options options = new Interpreter.Options();
            options.setNumThreads(4);
            Interpreter tflite = new Interpreter(modelFile, options);

            // Read the input size from the input tensor shape (e.g. [1, 300, 300, 3])
            int inputSize = tflite.getInputTensor(0).shape()[1];

            // Preprocess the image
            Bitmap resizedBitmap = preprocess(bitmap);
            ByteBuffer inputBuffer = convertBitmapToByteBuffer(resizedBitmap, inputSize);

            // Run inference; the SSD detection model has four outputs, keyed by
            // output index: locations, classes, scores, and number of detections
            Object[] inputArray = {inputBuffer};
            Map<Integer, Object> outputMap = new HashMap<>();
            float[][][] locations = new float[1][100][4];
            float[][] classes = new float[1][100];
            float[][] scores = new float[1][100];
            float[] numDetections = new float[1];
            outputMap.put(0, locations);
            outputMap.put(1, classes);
            outputMap.put(2, scores);
            outputMap.put(3, numDetections);
            tflite.runForMultipleInputsOutputs(inputArray, outputMap);

            // Log the detection results; THRESHOLD and labels are assumed to be
            // defined elsewhere in the class
            for (int i = 0; i < 100; ++i) {
                if (scores[0][i] > THRESHOLD) {
                    int id = (int) classes[0][i];
                    String label = labels[id + 1];
                    float score = scores[0][i];
                    // Locations are normalized [ymin, xmin, ymax, xmax]
                    RectF location = new RectF(
                            locations[0][i][1] * bitmap.getWidth(),
                            locations[0][i][0] * bitmap.getHeight(),
                            locations[0][i][3] * bitmap.getWidth(),
                            locations[0][i][2] * bitmap.getHeight()
                    );
                    Log.d(TAG, "Label: " + label + ", Confidence: " + score + ", Location: " + location);
                }
            }

            // Release resources
            tflite.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private ByteBuffer convertBitmapToByteBuffer(Bitmap bitmap, int inputSize) {
        // Each pixel contributes three float channels of 4 bytes each
        ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * inputSize * inputSize * 3);
        byteBuffer.order(ByteOrder.nativeOrder());
        Bitmap resizedBitmap = Bitmap.createScaledBitmap(bitmap, inputSize, inputSize, true);
        for (int y = 0; y < inputSize; ++y) {
            for (int x = 0; x < inputSize; ++x) {
                int pixelValue = resizedBitmap.getPixel(x, y);
                // Extract R, G, B channels and normalize; IMAGE_MEAN and
                // IMAGE_STD are assumed to be defined elsewhere
                byteBuffer.putFloat((((pixelValue >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
                byteBuffer.putFloat((((pixelValue >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
                byteBuffer.putFloat(((pixelValue & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
            }
        }
        return byteBuffer;
    }
    

The code above targets the object detection model officially provided by TensorFlow Lite; adjust the model handling and the input and output tensors to fit your own model.
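The example above maps the model's normalized box coordinates, ordered [ymin, xmin, ymax, xmax] in the range [0, 1], back to pixel coordinates. That mapping can be isolated as a small plain-Java sketch (plain float arrays stand in for Android's RectF):

```java
public class BoxDenormalize {
    // Convert a normalized [ymin, xmin, ymax, xmax] box to pixel
    // coordinates [left, top, right, bottom] for a given image size
    static float[] toPixels(float[] box, int imageWidth, int imageHeight) {
        return new float[] {
                box[1] * imageWidth,   // left   = xmin * width
                box[0] * imageHeight,  // top    = ymin * height
                box[3] * imageWidth,   // right  = xmax * width
                box[2] * imageHeight   // bottom = ymax * height
        };
    }

    public static void main(String[] args) {
        // A detection covering the center half of a 640x480 image
        float[] box = {0.25f, 0.25f, 0.75f, 0.75f};
        float[] pixels = toPixels(box, 640, 480);
        System.out.println(pixels[0] + " " + pixels[1] + " " + pixels[2] + " " + pixels[3]);
        // 160.0 120.0 480.0 360.0
    }
}
```

Note the axis swap: the model emits y before x, while drawing APIs usually expect left/top/right/bottom.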

Origin blog.csdn.net/weixin_44008788/article/details/130286827