Launching the TF Lite Task Library to simplify ML mobile development


Overview

Running inference with a TensorFlow Lite model on a mobile device involves more than just interacting with the model: it also requires additional code to handle complex logic such as data transformation, pre-/post-processing, and loading associated files.

Today, we will introduce the TensorFlow Lite Task Library, a set of powerful and easy-to-use model interfaces that handle most pre- and post-processing and other complex logic on your behalf. The Task Library supports mainstream machine learning tasks, including image classification and segmentation, object detection, and natural language processing. Each model interface is designed specifically for its task for optimal performance and ease of use: you can now perform inference on pretrained and custom models for supported tasks in just five lines of code. The Task Library is already widely used in the production environments of many Google products.

Supported ML tasks

The TensorFlow Lite Task Library currently supports six ML tasks, covering vision and natural language processing use cases. Each is briefly introduced below.

  • ImageClassifier
    An image classifier is a common machine learning use case for identifying what an image represents, for example, which animal is present in a given picture. The ImageClassifier API supports common image processing and configuration options, and also allows displaying labels in specific supported locales and filtering results with label allowlists and denylists.
  • ObjectDetector
    An object detector can identify which known objects are present and provide information about their positions in a given image or video stream. The ObjectDetector API supports image processing options similar to ImageClassifier. The output lists the top-k detected objects with labels, bounding boxes, and probabilities.
  • ImageSegmenter
    An image segmenter predicts whether each pixel of an image is associated with a certain class. This contrasts with object detection (which detects objects in rectangular regions) and image classification (which classifies the image as a whole). In addition to image processing, ImageSegmenter supports two output mask types: category masks and confidence masks.
  • NLClassifier and BertNLClassifier
    • NLClassifier classifies input text into different categories. This generic API can be configured to load any TFLite model that takes text input and produces score output.
    • BertNLClassifier is similar to NLClassifier, except that it is tailored to BERT-related models that require Wordpiece or Sentencepiece tokenization outside the TFLite model.
  • BertQuestionAnswerer
    BertQuestionAnswerer loads a BERT model and answers questions based on the content of a given passage. It currently supports MobileBERT and ALBERT. Like BertNLClassifier, BertQuestionAnswerer encapsulates the complex tokenization of the input text; you simply pass the context and question to it as strings.
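To make the ImageClassifier's label filtering concrete, the allowlist/denylist and top-k logic can be sketched in plain Java. The class and method names below are made up for illustration; the real API applies these filters for you through its options builder:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class LabelFilterDemo {
    // A minimal stand-in for a classification result: a label plus a score.
    record Category(String label, float score) {}

    // Keep only allowed labels (if an allowlist is given), drop denied labels,
    // then return the top-k remaining categories sorted by descending score.
    static List<Category> filterTopK(Map<String, Float> scores,
                                     Set<String> allowlist,
                                     Set<String> denylist,
                                     int k) {
        List<Category> result = new ArrayList<>();
        for (Map.Entry<String, Float> e : scores.entrySet()) {
            String label = e.getKey();
            if (!allowlist.isEmpty() && !allowlist.contains(label)) continue;
            if (denylist.contains(label)) continue;
            result.add(new Category(label, e.getValue()));
        }
        result.sort(Comparator.comparingDouble((Category c) -> -c.score()));
        return result.subList(0, Math.min(k, result.size()));
    }

    public static void main(String[] args) {
        Map<String, Float> scores = Map.of("cat", 0.7f, "dog", 0.2f, "bird", 0.1f);
        List<Category> top = filterTopK(scores, Set.of(), Set.of("bird"), 2);
        System.out.println(top.get(0).label()); // prints "cat"
    }
}
```

With the real ImageClassifier, the equivalent behavior is configured once on the options object rather than applied by hand after inference.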

Supported models

The Task Library is compatible with a number of well-known model sources.

The Task Library also supports custom models that meet the model compatibility requirements of each Task API. Associated files (e.g., label maps and vocab files) and processing parameters, if applicable, should be correctly populated into the model metadata. For more details, see the documentation for each API on the TensorFlow website.

Run inference using Task Library

The Task Library works cross-platform and is supported in Java, C++ (experimental), and Swift (experimental). Running inference with the Task Library takes just a few lines of code. For example, you can use the DeepLab v3 TFLite model to segment an image of an aeroplane on Android (Figure 1) as follows:

// Create the API from a model file and options
String modelPath = "path/to/model.tflite";
ImageSegmenterOptions options =
    ImageSegmenterOptions.builder().setOutputType(OutputType.CONFIDENCE_MASK).build();
ImageSegmenter imageSegmenter =
    ImageSegmenter.createFromFileAndOptions(context, modelPath, options);

// Segment an image
TensorImage image = TensorImage.fromBitmap(bitmap);
List<Segmentation> results = imageSegmenter.segment(image);

Figure 1. ImageSegmenter input image

Figure 2. Segmentation mask

You can then use the colored labels and category mask in the results to construct a segmentation mask image, as shown in Figure 2.
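The mask-to-image step can be sketched in plain Java: map each pixel's category index to an ARGB color. The color table below is arbitrary and the helper is hypothetical; in practice the colored labels come back with the segmentation result:

```java
public class MaskToImageDemo {
    // Convert a per-pixel category mask into ARGB pixel values.
    // colors[i] is the ARGB color assigned to category i.
    static int[] maskToArgb(int[] categoryMask, int[] colors) {
        int[] pixels = new int[categoryMask.length];
        for (int i = 0; i < categoryMask.length; i++) {
            pixels[i] = colors[categoryMask[i]];
        }
        return pixels;
    }

    public static void main(String[] args) {
        int[] colors = {0xFF000000, 0xFFFF0000}; // background = black, aeroplane = red
        int[] mask = {0, 1, 1, 0};               // a tiny 2x2 category mask
        int[] pixels = maskToArgb(mask, colors);
        System.out.println(Integer.toHexString(pixels[1])); // prints "ffff0000"
    }
}
```

On Android, such a pixel array can be turned into an image with Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888) and overlaid on the input.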

All three text APIs support Swift. To perform question answering with the SQuAD v1 TFLite model on iOS for a given context and question, you can run:

let modelPath = "path/to/model.tflite"



// Create the API from a model file

let mobileBertAnswerer = TFLBertQuestionAnswerer.mobilebertQuestionAnswerer(modelPath: modelPath)



let context = """

The Amazon rainforest, alternatively, the Amazon Jungle, also known in \

English as Amazonia, is a moist broadleaf tropical rainforest in the \

Amazon biome that covers most of the Amazon basin of South America. This \

basin encompasses 7,000,000 square kilometers (2,700,000 square miles), of \

which 5,500,000 square kilometers (2,100,000 square miles) are covered by \

the rainforest. This region includes territory belonging to nine nations.

"""

let question = "Where is Amazon rainforest?"

// Answer a question

let answers = mobileBertAnswerer.answer(context: context, question: question)

// answers[0].text could be "South America."

Build a Task API for your use case

If your use case is not supported by the existing Task Library, you can leverage the Task API infrastructure to build a custom C++/Android/iOS inference API. See this guide for more details.

Future work

We will continue to improve the Task Library user experience. Our near-term roadmap includes:

  • Improve the ease of use of the C++ Task Library, such as providing pre-built binaries and creating user-friendly workflows for users who want to build from source.
  • Publish reference examples that use the Task Library.
  • Support more machine learning use cases via new task types.
  • Improve cross-platform support and enable more tasks on iOS.

Feedback

We welcome feedback and suggestions for new use cases to support in the Task Library. Please email [email protected] or raise an issue on GitHub.

Acknowledgements

This work would not have been possible without the joint efforts of:

  • Cédric Deltheil and Maxime Brénon, the main contributors to the Task Library vision APIs.
  • Chen Cen, a major contributor to the Task Library native/Android/iOS infrastructure and the text APIs.
  • Xunkai and YoungSeok Yoon, the main contributors to the development infrastructure and release process.

Thanks also to Tian Lin, Sijia Ma, YoungSeok Yoon, Yuqi Li, Hsiu Wang, Qifei Wang, Alec Go, Christine Kaeser-Chen, Yicheng Fan, Elizabeth Kemp, Willi Gierke, Arun Venkatesan, Amy Jang, Mike Liang, Denis Brulé, Gaurav Nemade, Khanh LeViet, Luiz Gustavo Martins, Shuangfeng Li, Jared Duke, Erik Vee, Sarah Sirajuddin, and Tim Davis for their strong support of this project.

Original text: Easy ML mobile development with TensorFlow Lite Task Library


Source: blog.csdn.net/qq_18256855/article/details/127354605