Google open-sources Visual Blocks for ML, a visual programming framework

Visual Blocks for ML is an open source visual programming framework developed by Google. It enables you to create ML pipelines in an easy-to-use no-code graphical editor.

To run Visual Blocks for ML, you only need to make sure your GPU is working; the rest is just cloning the code and running it. Here is a brief introduction:

Visual Blocks for ML runs in any web browser that supports JavaScript. It is built mainly on TensorFlow.js, which means inference uses your local GPU rather than a server's GPU; your data is never uploaded, so data privacy is protected. The trade-off is that other ML frameworks may not be supported.

But the biggest strength of Visual Blocks for ML is that it shows what happens at each step in a visual way, helping you iterate faster and release results sooner, which speeds up the whole design process!

In this article, I use an ML segmentation model to add a sticker and a virtual background to an existing photo, as a brief worked example.

Official DEMO

1. Image segmentation

The official demo is here: https://visualblocks.withgoogle.com/#/demo. Click on the "Demo: Create Your Own" tab.

Camera permission may be required to access this page.

To load an image, click Input in the component library on the left and drag it into the bottom panel of the project.

You can choose a preloaded stock image or upload your own photo.

Apply the Body segmentation model. There is no need to drag a node from the component library: just click and drag the small circle representing the output of the input image node, then select or search the list of candidate nodes that appears.

Add a Mask visualizer. To display the output of the segmentation model, a Mask visualizer node needs to be added to the workflow: drag from the output of the Body segmentation node and select the recommended node, Mask visualizer.
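If you are curious what these two nodes do under the hood, here is a minimal Python sketch of the same idea. It is an illustration only, under assumptions: the demo itself runs TensorFlow.js models in the browser, while this sketch uses MediaPipe's Selfie Segmentation as a stand-in, and the file names are hypothetical.

 import cv2
 import mediapipe as mp
 import numpy as np
 
 # Stand-in for the Body segmentation node: MediaPipe Selfie Segmentation.
 image = cv2.imread("photo.jpg")  # hypothetical input file
 with mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1) as seg:
     results = seg.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))  # expects RGB
 
 # Stand-in for the Mask visualizer node: segmentation_mask is a float map
 # in [0, 1]; threshold it and dim everything outside the person.
 mask = results.segmentation_mask > 0.5
 visualized = np.where(mask[..., None], image, (image * 0.3).astype(image.dtype))
 cv2.imwrite("mask_visualized.jpg", visualized)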

If you have followed the steps correctly so far, you should see something like the screenshot below:

Next, apply the Face landmark model. Our goal is to add a sticker on the head, so we need a model that locates the face region. The Face landmark model defines anchor points, such as "face top", so that our sticker can be placed in the correct position.
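To give a rough idea of what the Face landmark node computes, here is a hedged sketch, again with MediaPipe's Face Mesh standing in for the browser-side model. Approximating "face top" as the topmost detected landmark is my own simplification, not necessarily the demo's exact definition.

 import cv2
 import mediapipe as mp
 
 image = cv2.imread("photo.jpg")  # hypothetical input file
 h, w = image.shape[:2]
 with mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as mesh:
     results = mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
 
 if results.multi_face_landmarks:
     # Landmark coordinates are normalized to [0, 1]; convert to pixels and
     # take the point with the smallest y as a "face top" anchor.
     lms = results.multi_face_landmarks[0].landmark
     face_top = min(((lm.x * w, lm.y * h) for lm in lms), key=lambda p: p[1])
     print("face top anchor (px):", face_top)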

The last thing is to add the virtual sticker. First, drag a new input image node from the component library on the left; here I used an image of a light bulb. You can use any image you want as a sticker; just make sure it has a transparent background.

Then drag from the Face landmark output and select Virtual sticker. It needs two more inputs to work: the sticker image and the Mask visualizer output.

Finally, adjust the "Scale" and "OffsetX/Y" parameters to fine-tune the position; the result is shown in the figure below.

In the image above, a Landmark visualizer node is also used, which renders the detected face landmarks onto the picture.
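Conceptually, the Virtual sticker node scales the sticker image and alpha-composites it at the anchor point, shifted by the offsets. The Pillow sketch below shows that idea; the file names, anchor coordinates, and exact parameter semantics are assumptions for illustration.

 from PIL import Image
 
 photo = Image.open("photo.jpg").convert("RGBA")
 sticker = Image.open("light_bulb.png").convert("RGBA")  # transparent background
 
 scale, offset_x, offset_y = 0.4, 0, -20  # roughly the node's Scale / OffsetX/Y
 anchor_x, anchor_y = 320, 80             # e.g. a "face top" anchor
 
 # Resize the sticker, then paste it above the anchor point, using the
 # sticker's own alpha channel as the paste mask.
 size = (int(sticker.width * scale), int(sticker.height * scale))
 sticker = sticker.resize(size)
 pos = (anchor_x - size[0] // 2 + offset_x, anchor_y - size[1] + offset_y)
 photo.paste(sticker, pos, sticker)
 photo.convert("RGB").save("with_sticker.jpg")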

With the foreground in place, we can also add a background image using Image Mixer:

Drag a new input image node from the component library on the left; this will be the preloaded background.

Then drag an Image mixer node from the Effect tab of the component library on the left. Connect the output of the last node we configured above to the new node's Image1 input, and connect the background image to its Image2 input. Change the mode dropdown to "destination-over". The final result is shown in the figure below.
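"destination-over" is a term borrowed from HTML canvas compositing: new content is drawn behind whatever is already there, so the background ends up underneath the sticker result. Here is a minimal Pillow sketch of the equivalent operation, with assumed file names:

 from PIL import Image
 
 # Foreground: the segmented person plus sticker, transparent elsewhere.
 foreground = Image.open("with_sticker.png").convert("RGBA")
 background = Image.open("background.jpg").convert("RGBA").resize(foreground.size)
 
 # Drawing the background in "destination-over" mode is equivalent to
 # alpha-compositing the foreground over the background.
 result = Image.alpha_composite(background, foreground)
 result.convert("RGB").save("final.jpg")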

The tool also provides export and share functions that convert the pipeline into .js code, so that others can import it and recreate the workflow!

Above we used the demo on the official website; now let's see how to run Visual Blocks locally from a Jupyter Notebook.

Jupyter Notebook

We can also run Visual Blocks in our own environment; here I use Colab as a demonstration.

Install the necessary Python library for Visual Blocks:

 !pip install visualblocks

Start the Visual Blocks server:

 import visualblocks
 server = visualblocks.Server()

Then open the Visual Blocks UI:

 server.display()

Now you can create a workflow locally. After it is complete, you can click "Save to Colab" so that the workflow's .js is saved in the notebook for future runs.

If you want to try it yourself, you can use the file below.

https://avoid.overfit.cn/post/ed762d829e1d40d4968a1c4f24018663

Summary

Personally, I feel the Visual Blocks for ML that Google has just open sourced has limited value for practical applications; it may be mostly a demonstration of TensorFlow.js technology. But the direction it explores looks very promising: for example, feature extraction on camera frames can be performed locally without transmitting data over the network, saving bandwidth and server resources while also guaranteeing user privacy. Isn't that one of the ideas behind federated learning? It's worth a look if you're interested.
