Implementing a 3D model search service on AWS

3D models are widely used in computer games, movies, engineering, retail, advertising, and many other fields. There are many tools on the market for creating 3D models, but few can intuitively search a 3D model database to find similar models, because building a good 3D model search tool is challenging: it requires complex computation and AI/ML frameworks to create model descriptors and extract feature vectors, databases to hold and index large amounts of shape data, and near real-time pattern matching on large datasets.



1. The business problem to be solved

In this article, let's look at a real business problem in the 3D modeling space and see how to implement a solution on the AWS cloud.

Let's start with a hypothetical business problem. Engineering design firm X has a large number of 3D models stored in legacy data stores, and it wants to start a new business selling those models online. The company wants to provide a service that performs a visual search using photos, hand-drawn sketches, or 3D model objects and finds matching 3D models so that customers can easily select and purchase the models they want.

Here, Company X has a large number of 3D models in a legacy database. The first step is to download the models to cloud storage (preferably S3) and extract the shape and feature data of these models, then index the data so that similar models can be grouped together and searched efficiently.

2. Feature generation and indexing

The diagram below illustrates the architecture for shape and feature data generation and indexing.
(Figure: architecture for shape and feature data generation and indexing)

Here are the steps you need to take to implement the solution.

  • Configure AWS Batch, a fully managed batch computing service, to run a job that connects to the legacy database and downloads 3D model files to an S3 bucket. The job can be scheduled to run nightly.
  • Implement an AWS Lambda function to process a downloaded 3D model in the S3 bucket and generate shape data using a shape representation algorithm. The resulting shape data should be stored in Amazon DynamoDB. This Lambda function can be configured to be triggered by an S3 put event on the bucket (a minimal sketch appears after this list).
  • Implement another AWS Lambda function to create multiple snapshots of the 3D model at different angles and store them as images in an S3 bucket.
  • Extract features from the generated images using a convolutional neural network (CNN) model pre-trained on the well-known ImageNet dataset, or a model trained and deployed with Amazon SageMaker.
  • Amazon SageMaker is a fully managed machine learning platform for quickly building, training, and deploying machine learning models in the AWS cloud. Using this model, image textures, geometry data, and metadata can be extracted and stored in Amazon DynamoDB.
  • Create another Lambda function to enrich the shape data generated in step 2 with the feature data extracted in step 4. The enriched shape data is an array of floating-point numbers; the next step is to group similar shapes together.
  • Using an AWS Lambda function, build a reference k-NN index on Amazon OpenSearch Service, a fully managed service that makes it easy and cost-effective to deploy, secure, and run OpenSearch (and legacy Elasticsearch) at scale. Amazon OpenSearch Service provides k-Nearest Neighbor (k-NN) search, which can store shape data as vectors and use the k-NN algorithm to group similar shape data by Euclidean distance or cosine similarity (see the index-creation sketch after this list).
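
Below is a minimal sketch of the shape-data Lambda function from step 2. The bucket, table name, and the `compute_shape_descriptor` helper are placeholders for this article, not part of any real library; the helper stands in for whichever shape-representation algorithm (e.g. LFD) you choose.

```python
import json
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("ShapeData")  # placeholder table name

def compute_shape_descriptor(path):
    """Placeholder for the chosen shape-representation algorithm (e.g. LFD);
    it should return the model's shape data as a list of floats."""
    raise NotImplementedError

def handler(event, context):
    """Triggered by an S3 put event when a 3D model file lands in the bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Download the model file to Lambda's ephemeral /tmp storage.
        local_path = "/tmp/" + key.split("/")[-1]
        s3.download_file(bucket, key, local_path)

        # Generate the shape descriptor (an array of floating-point numbers).
        descriptor = compute_shape_descriptor(local_path)

        # Store the descriptor keyed by the model; floats are serialized as a
        # JSON string to sidestep DynamoDB's Decimal requirement.
        table.put_item(Item={
            "model_id": key,
            "s3_uri": f"s3://{bucket}/{key}",
            "shape_descriptor": json.dumps(descriptor),
        })
```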
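And here is a sketch of building the k-NN index from step 6 with the opensearch-py client. The domain endpoint, credentials, index name, and vector dimension (2048 here, matching a ResNet-50 feature vector) are assumptions; in a Lambda function you would typically sign requests with SigV4 rather than basic auth.

```python
from opensearchpy import OpenSearch

# Placeholder domain endpoint and credentials.
client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=("user", "password"),
    use_ssl=True,
)

INDEX = "shape-descriptors"

# A knn_vector mapping lets OpenSearch build an approximate k-NN index
# over the enriched shape vectors.
client.indices.create(
    index=INDEX,
    body={
        "settings": {"index": {"knn": True}},
        "mappings": {
            "properties": {
                "shape_vector": {"type": "knn_vector", "dimension": 2048},
                "model_s3_uri": {"type": "keyword"},
            }
        },
    },
)

def index_shape(model_id, vector, s3_uri):
    """Index one enriched shape descriptor so similar shapes can be grouped."""
    client.index(
        index=INDEX,
        id=model_id,
        body={"shape_vector": vector, "model_s3_uri": s3_uri},
    )
```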

We have now generated feature-rich shape descriptors and indexed them using the k-Nearest Neighbors (k-NN) algorithm. Next, a 3D model or 2D views of a model (you can use a tool to draw the front, top, and side views) can be presented to the query application to find similar models from the indexed data in Amazon OpenSearch Service.

3. 3D model search

The diagram below depicts the architecture of a real-time 3D model search to find similar models from a model repository.
(Figure: architecture of the real-time 3D model search)

  • Using the web application hosted on S3, you can upload a 3D model object (if available), or use the sketch application to draw top, front, and side views of the model and upload the views as images. More view images from different angles yield more accurate results.
  • Uploaded images are sent to AWS Lambda via Amazon API Gateway.
  • The AWS Lambda function will generate a shape descriptor for the uploaded model/images and then call the Amazon SageMaker real-time endpoint to extract the feature data (a sketch of the endpoint call appears after this list).
  • The AWS Lambda function will enrich the shape descriptor with feature data.
  • The AWS Lambda function sends a k-nearest neighbor (k-NN) query to the Amazon OpenSearch Service (formerly Amazon Elasticsearch Service) index, which returns the k most similar models along with their respective Amazon S3 URIs.
  • The AWS Lambda function generates pre-signed Amazon S3 URLs and returns them to the client web application so the similar models can be visualized (see the combined query-and-presign sketch after this list).
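
A sketch of the feature-extraction call from the search Lambda, assuming a SageMaker real-time endpoint named `shape-feature-extractor` that accepts an image payload and returns a JSON feature vector; the endpoint name and payload format are assumptions for this article.

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

def extract_features(image_bytes):
    """Call a (hypothetical) SageMaker real-time endpoint that returns a
    feature vector for one uploaded view image."""
    response = runtime.invoke_endpoint(
        EndpointName="shape-feature-extractor",  # placeholder endpoint name
        ContentType="application/x-image",
        Body=image_bytes,
    )
    return json.loads(response["Body"].read())
```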
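And a sketch of the last two steps: a k-NN query against the OpenSearch index followed by pre-signed S3 URLs for the matching models. The endpoint, index name, field name, and one-hour expiry are the same assumptions as in the indexing sketch above.

```python
import boto3
from opensearchpy import OpenSearch

s3 = boto3.client("s3")
client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=("user", "password"),
    use_ssl=True,
)

def find_similar_models(query_vector, k=5):
    """Return pre-signed URLs for the k models most similar to the query descriptor."""
    results = client.search(
        index="shape-descriptors",
        body={
            "size": k,
            "query": {"knn": {"shape_vector": {"vector": query_vector, "k": k}}},
        },
    )

    urls = []
    for hit in results["hits"]["hits"]:
        uri = hit["_source"]["model_s3_uri"]              # e.g. s3://bucket/key
        bucket, key = uri.replace("s3://", "").split("/", 1)
        # A pre-signed URL lets the web client download the model directly.
        urls.append(s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": bucket, "Key": key},
            ExpiresIn=3600,
        ))
    return urls
```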

The purpose of this article is to explain the architecture and high-level implementation details of a 3D model search service on the AWS cloud using AWS services. The FAQ section below provides more details.

4. Frequently Asked Questions

  • What are 3D shape descriptors?

A 3D shape descriptor is a set of numbers used to represent points on the surface of a 3D model to capture the geometric nature of the 3D object. It is a compact representation of 3D objects, and the descriptors form a vector space with a meaningful distance metric.

  • How to generate 3D shape descriptor?

There are many algorithms for generating 3D shape descriptors. A common approach is to render a set of 2D views by rotating the 3D model to different angles; more views yield higher accuracy. Popular algorithms include the Light Field Descriptor (LFD) and the Multi-View Convolutional Neural Network (MVCNN).
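A minimal sketch of the multi-view idea using the trimesh library (one option among many): rotate the camera around the model and save each view as a PNG. Rendering with `save_image` requires an offscreen OpenGL backend (e.g. pyglet), so treat this as illustrative only; file names and the number of views are arbitrary choices.

```python
import math
import trimesh

def render_views(model_path, n_views=12, out_prefix="view"):
    """Render n_views snapshots of a 3D model by rotating the camera around it."""
    loaded = trimesh.load(model_path)
    scene = loaded if isinstance(loaded, trimesh.Scene) else loaded.scene()

    for i in range(n_views):
        angle = 2.0 * math.pi * i / n_views
        # Rotate the camera around the vertical axis; more views -> higher accuracy.
        scene.set_camera(angles=(0.0, angle, 0.0), distance=scene.scale * 2.0)
        png = scene.save_image(resolution=(224, 224))  # needs an offscreen GL backend
        with open(f"{out_prefix}_{i:02d}.png", "wb") as f:
            f.write(png)
```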

  • What is a pretrained CNN model?

A pretrained model is a model that someone else has already created and trained to solve a problem similar to ours. In our case, we can use a ResNet-50 convolutional neural network pre-trained on more than a million images from the ImageNet database; a ResNet-based image classification algorithm is also available as a built-in algorithm in SageMaker.
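
A sketch of using a pre-trained ResNet-50 as a feature extractor with torchvision (shown locally; the same model could be trained and hosted on SageMaker): the final classification layer is dropped so the network outputs a 2048-dimensional feature vector per view image.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load ResNet-50 pre-trained on ImageNet and drop the final classification
# layer so the network outputs a 2048-dimensional feature vector per image.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def image_features(path):
    """Return a 2048-element feature vector for one rendered view image."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        features = feature_extractor(image)  # shape: (1, 2048, 1, 1)
    return features.flatten().tolist()
```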

  • What is SageMaker?

It is a fully managed machine learning service that makes it quick and easy to build and train machine learning models, then deploy them directly into a production-ready hosting environment.

  • What is the difference between Amazon Elasticsearch Service and Amazon OpenSearch Service?

Amazon Elasticsearch Service, now Amazon OpenSearch Service, offers the latest versions of OpenSearch and visualizations powered by OpenSearch Dashboards and Kibana. It lets you easily ingest, secure, search, aggregate, view, and analyze large volumes of data.

  • What is k-NN for Amazon OpenSearch Service?

It allows you to search for points in a vector space and find the "k nearest neighbors" of a query point by Euclidean distance or cosine similarity.


Original link: 3D model search based on AWS - BimAnt
