Digital twin construction based on photogrammetry

In this blog post, we'll look at how photos taken by drones can be used to create 3D models of real-world environments in digital twins.

A digital twin is a virtual representation of a physical system that is regularly updated to mimic the structure, state, and behavior of the asset it represents. Digital twins enable faster, better decision-making by connecting multiple data sources and providing actionable insights in a single pane of glass.

However, building and managing a digital twin from scratch is time-consuming, complex and costly. It requires development teams with different expertise to work together to build integrated solutions that combine data from different sources. Developers must generate real-time insights from streaming data and create contextualized visualizations to better connect end users to the data.

With AWS IoT TwinMaker, it's easy to create digital twins of physical environments and build applications that provide interactive 3D digital representations of large, complex physical structures through a browser.



1. Overview

One of the key features of AWS IoT TwinMaker is the ability to import existing 3D models, such as CAD and BIM models or point cloud scans, into an AWS IoT TwinMaker scene, and then overlay data from other systems on top of this visualization.

AWS IoT TwinMaker scenes use a real-time WebGL viewport and support the glTF format. While CAD and BIM models represent the designed asset structure, in some cases such models may not exist, or the asset may be constructed differently than designed. There is great value in providing a 3D model in the digital twin that reflects current reality as closely as possible. There are several mechanisms for creating 3D models of the real world, two popular methods being laser scanning and photogrammetry.

Laser scanning uses specialized and often expensive equipment to create highly accurate 3D models of the physical environment. In contrast, photogrammetry is the process of extracting 3D information from overlapping 2D photographs using computer vision techniques, including structure from motion (SfM).

This article focuses on using a low-cost aerial photography platform (consumer-grade quadcopter – DJI Phantom 4 Pro) combined with photogrammetry to create large-area photorealistic models representing assets modeled in AWS IoT TwinMaker.

With this approach, you can quickly build a 3D model of an asset that might be expensive or impossible to create using laser scanning. The model can be updated quickly and frequently with subsequent drone flights to ensure your digital twin closely mirrors reality. One thing to note is that this approach favors photorealism over the absolute accuracy of the generated model.

In this blog, we also describe how to capture geo-referenced photo datasets through automated flight planning and execution. You can then feed these photos through a photogrammetry processing pipeline that automatically creates a resulting 3D visualization scene in AWS IoT TwinMaker.

We processed the data into glTF format for import into AWS IoT TwinMaker using popular free and open-source photogrammetry software. The processing pipeline also supports OBJ files that can be exported from DroneDeploy or other photogrammetry engines.

2. Solution

2.1 Data collection

Photogrammetry relies on certain characteristics of source aerial photographs to create valid 3D models, including:

  • A high degree of overlap between images
  • The horizon not being visible in any photo
  • A mix of nadir and non-nadir photos
  • A capture altitude based on the required resolution of the model (see the sketch below)
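On that last point, the relationship between capture altitude and model resolution is expressed by the ground sample distance (GSD). The following Python sketch illustrates the standard GSD formula; the DJI Phantom 4 Pro camera values are approximate and used only for illustration.

```python
def ground_sample_distance(sensor_width_mm: float, focal_length_mm: float,
                           image_width_px: int, altitude_m: float) -> float:
    """Return the ground sample distance (cm/pixel) for a nadir photo.

    Standard photogrammetry formula:
        GSD = (sensor_width * altitude) / (focal_length * image_width)
    """
    gsd_m = ((sensor_width_mm / 1000) * altitude_m
             / ((focal_length_mm / 1000) * image_width_px))
    return gsd_m * 100  # metres -> centimetres per pixel


# Approximate DJI Phantom 4 Pro camera values: 1" sensor (13.2 mm wide),
# 8.8 mm lens, 5472 px image width.
altitude_m = 160 * 0.3048  # the 160 ft flight altitude used in this example
print(f"GSD at 160 ft: {ground_sample_distance(13.2, 8.8, 5472, altitude_m):.2f} cm/px")
```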

While skilled drone pilots can manually capture photos for photogrammetry, you can achieve more consistent results by automating the flight and capture. Flight planning tools create autonomous flight plans that capture images with the relative positioning, altitude, and degree of overlap needed for efficient photogrammetric processing.

Shown below is the flight planning interface of DroneDeploy, a popular reality capture platform for capturing internal and external aerial and ground visual data, which we used to capture the imagery in the example.


Figure 1 – DroneDeploy flight planning interface

We used the flight planning and autonomous operations capabilities of the DroneDeploy platform to capture data representing assets to be modeled in AWS IoT TwinMaker. The asset of interest is an abandoned power station in Fremantle, Western Australia.

As shown in the previous screenshot, the flight was flown at an altitude of 160 feet, covered 6 acres, and captured 149 images in under 9 minutes. Below are two examples of aerial photos captured during the drone flight and subsequently used to generate the 3D model, illustrating the high degree of overlap between images.


Figure 2 – High image overlap for efficient photogrammetry

2.2 Photogrammetry Processing Pipeline

After aerial imagery is captured, it must be fed into a photogrammetry engine to create a 3D model. DroneDeploy provides a powerful photogrammetry engine, and the 3D model created by the engine can be exported in OBJ format, as shown in the figure below.

Figure 3 – Exporting the 3D model from DroneDeploy in OBJ format

We created a photogrammetry processing pipeline that leverages the NodeODM component of the popular free and open source OpenDroneMap platform to process georeferenced images in a completely serverless fashion. The pipeline leverages AWS Fargate and AWS Lambda for computation, creating as output a scene in AWS IoT TwinMaker that includes a 3D model created by OpenDroneMap.
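To give a feel for what that interaction looks like, the sketch below shows how a processing job can be submitted to NodeODM over its HTTP API. The endpoint paths and options follow the published NodeODM/OpenDroneMap documentation, but the host name and option values here are assumptions for illustration, not the pipeline's exact implementation.

```python
import json
import requests

# Hypothetical address of the NodeODM task running behind the stack's
# Application Load Balancer; substitute your own endpoint.
NODEODM_URL = "http://nodeodm.example.internal:3000"


def start_nodeodm_job(image_paths: list[str]) -> str:
    """Create a NodeODM processing task from a set of georeferenced images."""
    files = [("images", (path.split("/")[-1], open(path, "rb"), "image/jpeg"))
             for path in image_paths]
    # Request a larger textured mesh; option names follow the ODM documentation.
    options = json.dumps([{"name": "mesh-size", "value": "300000"}])
    resp = requests.post(f"{NODEODM_URL}/task/new",
                         files=files, data={"options": options})
    resp.raise_for_status()
    return resp.json()["uuid"]  # polled later via /task/<uuid>/info
```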

The pipeline also supports processing 3D models created by DroneDeploy's photogrammetry engine to create scenes in AWS IoT TwinMaker from OBJ files exported from DroneDeploy.

The photogrammetry processing pipeline architecture is shown in the figure below.

Figure 4 – Processing pipeline architecture

The steps to execute the pipeline using the OpenDroneMap photogrammetry processing engine are as follows:

  1. A Fargate task is launched using OpenDroneMap's NodeODM image from the public docker.io registry.
  2. A set of georeferenced images from the drone flight is uploaded as a .zip file to the landing Amazon S3 bucket.
  3. The upload publishes an Amazon S3 event notification, which triggers execution of the Data Processor Lambda.
  4. The Data Processor Lambda unzips the file, starts a new processing job on the NodeODM task running in Fargate, and uploads all images to it.
  5. The Status Check Lambda periodically polls the NodeODM task to check whether the processing job is complete.
  6. When the NodeODM processing job completes, the output of the job is saved as a zip file in the processed S3 bucket.
  7. Saving the output zip file publishes an Amazon S3 event notification, which triggers the glTF Converter Lambda.
  8. The glTF Converter Lambda converts the OBJ output of the NodeODM processing job to a binary glTF file, uploads it to the S3 bucket associated with the AWS IoT TwinMaker workspace (created by the CloudFormation stack), and uses the glTF file to create a new scene in the workspace (a sketch of these steps follows the list).
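To make step 8 concrete, here is a minimal sketch of what the core of the glTF Converter Lambda might look like. It is an illustration rather than the pipeline's actual implementation: the OBJ-to-GLB conversion is done with the open-source trimesh library, and the workspace name, bucket name, and the deliberately minimal scene document are all assumptions.

```python
import json

import boto3
import trimesh

s3 = boto3.client("s3")
twinmaker = boto3.client("iottwinmaker")

WORKSPACE_ID = "photogrammetry-workspace"           # assumed workspace name
WORKSPACE_BUCKET = "my-twinmaker-workspace-bucket"  # assumed workspace bucket


def create_scene_from_obj(obj_path: str, scene_id: str) -> None:
    """Convert an OBJ model to GLB and register it as a TwinMaker scene."""
    # 1. Convert the OBJ output of the NodeODM job to binary glTF (GLB).
    trimesh.load(obj_path, force="scene").export("/tmp/model.glb")

    # 2. Upload the GLB and a minimal scene document to the workspace bucket.
    s3.upload_file("/tmp/model.glb", WORKSPACE_BUCKET, "model.glb")
    scene_doc = {
        # Deliberately minimal example of a scene document referencing the model.
        "specVersion": "1.0",
        "unit": "meters",
        "nodes": [{"name": "PhotogrammetryModel",
                   "components": [{"type": "ModelRef",
                                   "uri": f"s3://{WORKSPACE_BUCKET}/model.glb",
                                   "modelType": "GLB"}]}],
        "rootNodeIndexes": [0],
    }
    s3.put_object(Bucket=WORKSPACE_BUCKET, Key=f"{scene_id}.json",
                  Body=json.dumps(scene_doc))

    # 3. Register the scene with AWS IoT TwinMaker.
    twinmaker.create_scene(
        workspaceId=WORKSPACE_ID,
        sceneId=scene_id,
        contentLocation=f"s3://{WORKSPACE_BUCKET}/{scene_id}.json",
    )
```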

If you used the DroneDeploy photogrammetry engine to create the 3D model, you can upload the exported OBJ zip file directly to the "processed" bucket and steps 6-8 will complete normally.

When the photogrammetry processing pipeline finishes executing, a new scene is created in the AWS IoT TwinMaker workspace containing the resulting 3D model, shown below for the asset of interest.

Figure 5 – 3D scene generated in AWS IoT TwinMaker

3. Prerequisites

An AWS account is required to set up and follow the steps in this blog. The AWS CloudFormation template configures and installs the necessary VPC and network configuration, AWS Lambda functions, AWS Identity and Access Management (IAM) roles, Amazon S3 buckets, AWS Fargate tasks, Application Load Balancer, Amazon DynamoDB tables, and the AWS IoT TwinMaker workspace. This template is designed to run in the Northern Virginia Region (us-east-1). You may incur charges for some of the following services:

  • Amazon Simple Storage Service (Amazon S3)
  • Amazon DynamoDB
  • Amazon VPC
  • Amazon CloudWatch
  • AWS Lambda processing and conversion functions
  • AWS Fargate
  • AWS IoT TwinMaker

4. Deploy the photogrammetry processing pipeline

1. Download the sample Lambda deployment package. This package contains the code for the Data Processor Lambda, Status Check Lambda, and glTF Converter Lambda described above.
2. Navigate to the Amazon S3 console.
3. Create an S3 bucket.
4. Upload the downloaded Lambda deployment package to the S3 bucket created in the previous step. Leave the package zipped as it is.
5. With the Lambda deployment package in S3, launch the CloudFormation template.
6. On the Specify stack details screen, under the Parameters section, do the following:

  • Update the Prefix parameter value to a unique prefix for the bucket names. This prefix ensures that the stack's bucket names are globally unique.
  • Update the DeploymentBucket parameter value to the name of the bucket where you uploaded the Lambda deployment package.
  • If you are working with large datasets, increase the Fargate task's memory and CPU values according to the allowed values, as described here.

7. Select Create stack to create the resources for the photogrammetry processing pipeline.
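If you prefer to script the deployment instead of using the console, the same stack can be created with the AWS SDK. A minimal boto3 sketch follows; the stack name, template URL, and parameter values are placeholders to substitute with your own.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="photogrammetry-pipeline",
    # Placeholder template location; use the template linked in this post.
    TemplateURL="https://example-bucket.s3.amazonaws.com/pipeline-template.yaml",
    Parameters=[
        {"ParameterKey": "Prefix", "ParameterValue": "mycompany-demo"},
        {"ParameterKey": "DeploymentBucket", "ParameterValue": "my-deployment-bucket"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # the stack creates IAM roles
)
```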

Once stack creation is complete, navigate to the new S3 landing bucket. A link can be found in the Resources tab of the CloudFormation stack, as shown below.

Figure 6 – Upload bucket resource

Upload the zip file containing the georeferenced images to this landing bucket.
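For example, a minimal boto3 sketch (the bucket name is a placeholder for the landing bucket created by your stack):

```python
import boto3

s3 = boto3.client("s3")
# The landing bucket name can be copied from the CloudFormation Resources tab.
s3.upload_file("drone-images.zip", "mycompany-demo-landing-bucket", "drone-images.zip")
```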

5. Run the photogrammetry processing pipeline

The photogrammetry processing pipeline starts automatically after uploading a zip file containing georeferenced images.

Processing can take over an hour, depending on the number of images provided and the CPU and memory allocated to the Fargate processing task. You can track the progress of the job by viewing the status in the Amazon CloudWatch logs of the Status Check Lambda.

While a processing job is active, the Status Check Lambda outputs the status of the job on a 5-minute schedule. The output includes the progress of the processing job expressed as a percentage, as shown below.


Figure 7 – Photogrammetry job progress
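Under the hood, this status check amounts to calling NodeODM's task info endpoint. A simplified sketch of that polling call, assuming the same hypothetical NodeODM endpoint as earlier; the status codes follow the NodeODM API, where 20 means running and 40 means completed:

```python
import requests

NODEODM_URL = "http://nodeodm.example.internal:3000"  # hypothetical endpoint
RUNNING, COMPLETED = 20, 40                           # NodeODM status codes


def check_job(uuid: str) -> dict:
    """Fetch NodeODM task info, including job progress as a percentage."""
    info = requests.get(f"{NODEODM_URL}/task/{uuid}/info").json()
    code = info["status"]["code"]
    print(f"status={code} progress={info.get('progress', 0)}%")
    return info
```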

6. Build a digital twin based on the 3D model

Once the photogrammetry processing pipeline is complete and a new scene has been created in the AWS IoT TwinMaker workspace, you can start using the 3D model to connect components bound to data sources, provide visual context for the data, and surface data-driven visual cues.

You can configure dashboards using the AWS IoT TwinMaker application plugin for Grafana to share your digital twin with other users.

7. Clean up

Be sure to clean up the resources created in this blog to avoid charges. When you are finished, delete the following resources in this order:

  • Delete any created scenes from the AWS IoT TwinMaker workspace
  • Delete all files in Landing, Processed and Workspace S3 buckets
  • Delete the CloudFormation stack
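The cleanup can also be scripted; a sketch with boto3 follows, where the workspace, scene, stack, and bucket names are placeholders for the values from your deployment.

```python
import boto3

twinmaker = boto3.client("iottwinmaker")
s3 = boto3.resource("s3")
cfn = boto3.client("cloudformation")

# 1. Delete the generated scene(s) from the TwinMaker workspace (placeholder IDs).
twinmaker.delete_scene(workspaceId="photogrammetry-workspace", sceneId="my-scene")

# 2. Empty the landing, processed, and workspace buckets (placeholder names).
for name in ["landing-bucket", "processed-bucket", "workspace-bucket"]:
    s3.Bucket(name).objects.all().delete()

# 3. Delete the CloudFormation stack.
cfn.delete_stack(StackName="photogrammetry-pipeline")
```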

8. Conclusion

In this blog, we created a serverless photogrammetry processing pipeline that processes drone imagery into 3D models using open-source software and creates scenes in AWS IoT TwinMaker from the resulting models. The pipeline can also process 3D models created by other photogrammetry engines that export to OBJ format, such as DroneDeploy's.

Although this pipeline has been used to demonstrate the processing of drone imagery, any georeferenced imagery can be used. Using only consumer-grade hardware, you can rapidly create photorealistic 3D models of large real-world assets and keep them up to date, bind them to data sources, and share them with other users, enabling them to make data-driven decisions in a rich visual environment.

The pipeline described in this blog is available in this GitHub repository.

