Text-controlled 3D printing models: MIT's latest tool and published paper

Creating and editing 3D printing models is very difficult for ordinary people, because operating industrial software such as AutoCAD, SolidWorks, CATIA, and Inventor requires a professional technical background. To address this problem, the Massachusetts Institute of Technology (MIT) has announced a new tool, Style2Fab, on its official website.

According to the announcement, Style2Fab leverages the convenience of large language models and was trained on 1,000 3D models from Thingiverse, allowing users to automatically edit a 3D model's appearance, color, and other attributes through text. For example, a user can simply type "Help me change the color of the wooden pot to a colorful Tang Dynasty style".

MIT has already published the Style2Fab research paper, and the source code will be released at the UIST 2023 conference on October 29.

Paper address: https://hcie.csail.mit.edu/research/style2fab/style2fab.html


A brief introduction to Style2Fab

According to the researchers, Style2Fab uses deep learning to automatically segment a 3D model into aesthetic and functional parts, which simplifies the entire design process.

The aesthetic parts are mainly those the model presents to the outside world, such as color and style; the functional parts are those that must fit together with other components after fabrication, such as snap fits and meshing gears.

A design tool needs to preserve the geometry of exterior and interior functional segments while allowing for customization of non-functional, aesthetic segments.

But to do this, Style2Fab first needs to tell the functional and aesthetic parts of a 3D model apart.

The researchers used machine learning to analyze the model's topology, tracking how often the geometry changes, such as at a curve or at an angle where two planes join. Based on this, the model is divided into a number of segments.
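MIT has not yet released the source code, so the exact segmentation method is not public. As a minimal illustration of the general idea of splitting a mesh where the geometry changes sharply, here is a hypothetical sketch: it grows regions of mesh faces, merging neighbouring faces whose normals differ by less than a threshold angle. All names and the threshold are assumptions for illustration only.

```python
import numpy as np
from collections import defaultdict

def segment_by_dihedral(vertices, faces, angle_thresh_deg=30.0):
    """Group mesh faces into segments: neighbouring faces whose normals
    differ by less than the threshold angle share a segment (region growing)."""
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces)
    # Per-face unit normals from the two triangle edge vectors.
    n = np.cross(v[f[:, 1]] - v[f[:, 0]], v[f[:, 2]] - v[f[:, 0]])
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    # Face adjacency: two faces are neighbours if they share an edge.
    edge_to_faces = defaultdict(list)
    for i, tri in enumerate(f):
        for a, b in ((0, 1), (1, 2), (2, 0)):
            edge_to_faces[frozenset((tri[a], tri[b]))].append(i)
    adj = defaultdict(set)
    for fs in edge_to_faces.values():
        for i in fs:
            adj[i].update(j for j in fs if j != i)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    labels = [-1] * len(f)
    seg = 0
    for start in range(len(f)):
        if labels[start] != -1:
            continue
        stack, labels[start] = [start], seg
        while stack:
            cur = stack.pop()
            for nb in adj[cur]:
                # Merge only across smooth joins; sharp folds start a new segment.
                if labels[nb] == -1 and np.dot(n[cur], n[nb]) > cos_thresh:
                    labels[nb] = seg
                    stack.append(nb)
        seg += 1
    return labels

# Two flat squares meeting at a 90-degree fold: expect two segments.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (1, 0, 1), (1, 1, 1)]
tris = [(0, 1, 2), (0, 2, 3),   # square in the z=0 plane
        (1, 4, 5), (1, 5, 2)]   # square folded up into the x=1 plane
print(segment_by_dihedral(verts, tris))  # [0, 0, 1, 1]
```

The sharp 90-degree fold stops the region growing, so the two coplanar squares end up in different segments, which is the behaviour the paragraph above describes for curves and joined planes.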

Style2Fab then compares these segments against a dataset the researchers created, containing 294 3D object models in which each segment is annotated with a functional or aesthetic label. If a segment closely matches one of these annotated parts, it is marked as functional.
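The matching step described above can be pictured as a nearest-neighbour lookup against the annotated dataset. The sketch below is an assumption-laden stand-in: the feature vectors, labels, and distance threshold are all invented for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical per-segment feature vectors, e.g. (normalised surface area,
# mean curvature, bounding-box aspect ratio). The reference set stands in
# for the researchers' annotated dataset of 294 models.
reference_features = np.array([
    [0.10, 0.80, 1.0],   # snap-fit clip  -> functional
    [0.05, 0.90, 0.8],   # gear teeth     -> functional
    [0.60, 0.10, 2.5],   # vase body      -> aesthetic
    [0.55, 0.15, 3.0],   # planter shell  -> aesthetic
])
reference_labels = ["functional", "functional", "aesthetic", "aesthetic"]

def classify_segment(features, threshold=0.5):
    """Label a segment by its nearest annotated neighbour. When nothing in
    the reference set is close enough, default to 'aesthetic' so the user
    can still override the suggestion either way."""
    d = np.linalg.norm(reference_features - np.asarray(features), axis=1)
    i = int(np.argmin(d))
    return reference_labels[i] if d[i] <= threshold else "aesthetic"

print(classify_segment([0.07, 0.85, 0.9]))  # close to gear teeth -> functional
print(classify_segment([5.00, 5.00, 5.0]))  # far from everything -> aesthetic
```

Treating the output as a default rather than a verdict matches the article's point that these labels are only initial recommendations the user can flip.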

But classifying segments based solely on their geometry is a very difficult problem, because 3D models vary widely and can be complex. These labels are therefore only an initial set of recommendations presented to the user, and the classification of any segment as aesthetic or functional can easily be changed.

For training data, Style2Fab uses 1,000 3D printing models from the Thingiverse platform, covering categories such as fashion, art, gadgets, home, learning, and tools. This breadth helps it better understand the user's textual intent and achieve the desired model modifications.


Style2Fab usage demonstration

Users describe in text the design elements of the 3D model they want to change, for example: "Change the color of the back of this phone case to a Moroccan art style."

Style2Fab then automatically finds matching textures, colors, or shapes for the aesthetic parts to meet the user's request. The structure of the phone case is left unchanged, because the user did not ask for that.
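The end-to-end behaviour just described can be summarised in a few lines: apply the prompt to aesthetic segments only, and never touch functional geometry. The `Segment` class and `apply_prompt` function below are invented names for a minimal sketch, with the actual text-guided stylisation step reduced to a placeholder assignment.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    role: str            # "functional" or "aesthetic"
    style: str = "default"

def apply_prompt(segments, prompt):
    """Restyle only the aesthetic segments; functional geometry is left alone."""
    for seg in segments:
        if seg.role == "aesthetic":
            seg.style = prompt   # stand-in for the text-guided stylisation step
    return segments

case = [Segment("back panel", "aesthetic"),
        Segment("clip mount", "functional"),
        Segment("camera cutout", "functional")]
apply_prompt(case, "Moroccan art style")
for seg in case:
    print(seg.name, "->", seg.style)
# back panel -> Moroccan art style; the two functional segments stay "default"
```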

[Figure: the red box marks where text is entered to make changes to the 3D model]

According to the researchers, Style2Fab is not only suitable for design novices but is also useful for experienced users, because it learns from the number and content of the user's text inputs, refining its segmentation and stylization to capture the user's deeper needs.

In other words, the more you use it, the more Style2Fab learns, providing a service customized to your needs and making the editing process accurate and reliable.

Style2Fab is already proving useful in the medical and industrial fields, where it can help business staff without a technical background quickly design the 3D products they need. Going forward, MIT researchers will continue to iterate on Style2Fab and enhance its functionality to broaden its scope of use.


The material in this article comes from the official MIT website. If there is any infringement, please contact us for removal.

Origin: blog.csdn.net/weixin_57291105/article/details/133037425