Core Competencies of Digital Twin World Construction: Efficient Modeling Capabilities of Digital Twins

Creating a high-fidelity digital twin virtual model is one of the important steps in building a digital twin application, which requires a true representation of the geometry, attributes, behaviors, and rules of physical entities. The digital twin model must not only be consistent with the physical entity in terms of geometric structure, but more importantly, it must be able to simulate the space-time state, behavior, and function of the physical entity.

1. 3D digital twin model

The application of 3D digital twin model in the field of digital twin can be divided into the following aspects:

The 3D digital twin model can improve the accuracy and fidelity of the digital twin, so that the data model in the virtual world can better reflect the state and changes of physical objects or systems in the real world.

The 3D digital twin model can support the multi-dimensional display and interaction of the digital twin, enabling users to observe and operate the data model in the virtual world from different angles, levels, and scales, enhancing user experience and efficiency.

The 3D digital twin model can promote the cross-domain application and innovation of the digital twin, enabling different industries, different scenarios, and different needs to share and integrate data models in the virtual world to achieve collaborative innovation and value enhancement.

In actual digital twin project delivery, there are usually the following requirements for 3D digital models, which need to be paid attention to:

Precision. The model must accurately reflect the structure, properties, methods, and behaviors of the physical entity or system, as well as its interaction with the environment, and carry enough detail and precision to meet analysis and simulation requirements at different levels and for different objectives.

Standardization. Following a unified specification and format facilitates sharing and exchange of 3D digital models among different platforms and systems. The model should be readable, understandable, and extensible, so that model assets can be conveniently modified and updated later.

Lightweight. The model should reduce data volume and computation as much as possible to improve runtime efficiency and save resources, using appropriate abstraction and simplification to remove redundant, irrelevant information while retaining the model's core features.

Visualization. The model should be displayable intuitively through graphics, images, animation, and so on, so that users can observe, understand, and operate it, and should support switching among multiple viewing angles and scales to meet display and transformation needs in different scenarios.

If the delivered digital twin project uses a B/S (browser/server) architecture, achieving an efficient, high-quality digital twin scene usually requires attention to model size, format, and quality:

The size of the model. Model size determines loading and rendering speed: an oversized model drives up network transfer and memory usage and degrades the user experience. Models therefore need to be reasonably compressed and optimized to reduce their data volume and complexity.

The format of the model. Format determines compatibility and functionality, and each format has its own characteristics, strengths, and weaknesses. Generally speaking, web-side visualization scenes need a format that supports animation, textures, materials, and other attributes, such as glTF, FBX, or OBJ. Browser support for each format must also be considered, choosing the format best suited to the current environment and needs.

The quality of the model. Quality determines the visual effect and sense of realism, and quality that is either too low or too high hurts the user experience: too low yields distorted, rough, unnatural models, while too high causes excessive rendering pressure, stuttering, and latency. The model therefore needs to be subdivided or simplified appropriately for the target resolution and device performance, while maintaining reasonable scale and shape.
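As a concrete illustration of the format point, glTF stores its scene description as plain JSON, which makes assets easy to generate and sanity-check before delivery. A minimal sketch (the node name and the validation rule here are illustrative, not part of any official tooling):

```python
import json

# A minimal glTF 2.0 document: a JSON scene description referencing mesh data.
# (Skeleton only; real assets embed or reference binary buffers and textures.)
gltf = {
    "asset": {"version": "2.0"},                     # required by the glTF spec
    "scenes": [{"nodes": [0]}],
    "nodes": [{"mesh": 0, "name": "pump_station"}],  # hypothetical node name
    "meshes": [{"primitives": [{"attributes": {"POSITION": 0}}]}],
}

def validate_min_gltf(doc: dict) -> bool:
    """Sketch of a delivery check: the spec-required version field,
    plus mesh presence, which this example also expects."""
    return doc.get("asset", {}).get("version") == "2.0" and "meshes" in doc

text = json.dumps(gltf)                       # what a .gltf file would contain
print(validate_min_gltf(json.loads(text)))    # → True
```

A round-trip through `json.dumps`/`json.loads` as above is a cheap way to confirm an exported scene description is at least well-formed before it is shipped to the browser.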

2. Collection and processing of model data

In the early stage of project construction, developers need to comprehensively collect and process model data to lay a solid data foundation for creating digital twin scenarios. With the continuous advancement of science and technology and the continuous change of social needs, surveying and mapping technology is also constantly developing and innovating. The main technical trends are as follows:

Precision, intelligence and integration. With the continuous upgrading of surveying and mapping instruments and equipment, such as total stations, GPS receivers, digital aerial cameras, etc., as well as the wide application of information technologies such as computers, networks, and artificial intelligence, on-site surveying and mapping technology can achieve higher accuracy and efficiency. At the same time, it can realize the organic combination and synergy of multiple data acquisition methods and multiple data processing methods.

Multi-source, multi-scale, and multi-dimensional. With the emergence and development of new data acquisition platforms and sensors such as remote sensing satellites, drones, and LiDAR, and the promotion of new data management and analysis technologies such as big data and cloud computing, field surveying and mapping can obtain richer, more comprehensive, and more real-time geospatial information, which can be expressed and displayed from different angles, levels, and scales.

1) Manual surveying and mapping

In actual digital twin projects, manual surveying and mapping usually requires SLR cameras, mobile phones, 360° panoramic equipment, and photo-collection personnel. During collection, the survey route should be planned in advance and walked in an orderly way, following the basic logic of first positioning the space as a whole and then shooting local details. For a simple single model, manual surveying is flexible and cost-effective; for building large digital twin scenes, it is generally not recommended.

Manual surveying and mapping can generally be divided into the following steps:

① Determine the content, scope and accuracy of surveying and mapping, and formulate surveying and mapping plans and methods.

② Select appropriate surveying and mapping tools and equipment, such as a tripod, level, theodolite, laser plummet, total station, spirit level, and vernier calipers.

③ Carry out surveying and mapping on site, record or input relevant data information in accordance with the specified format and requirements, such as horizontal angle, vertical angle, elevation difference, etc.

④ Check, organize and store the surveyed and mapped data, delete or modify erroneous or repeated data, and ensure the integrity and accuracy of the data.

⑤ Analyze, process, and apply the surveyed data, using calculation methods, graphics tools, or professional software to process, display, or report the data according to different goals and needs.
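Step ④ above, checking and deduplicating the recorded measurements, is straightforward to sketch in code; the record layout used here is a made-up example, not a surveying standard:

```python
def clean_records(records, ):
    """Drop repeated survey points and flag incomplete rows (step 4).
    Each record: (point_id, horizontal_angle, vertical_angle, elevation_diff)."""
    seen, cleaned, errors = set(), [], []
    for rec in records:
        if any(v is None for v in rec):
            errors.append(rec)       # incomplete reading: send back for re-survey
            continue
        if rec[0] in seen:
            continue                 # repeated measurement, keep the first
        seen.add(rec[0])
        cleaned.append(rec)
    return cleaned, errors

data = [("P1", 35.2, 91.0, 1.42),   # good reading
        ("P1", 35.2, 91.0, 1.42),   # duplicate entry
        ("P2", None, 88.5, 0.97)]   # missing horizontal angle
cleaned, errors = clean_records(data)
print(len(cleaned), len(errors))    # → 1 1
```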

2) Oblique photography

Oblique photogrammetry collects image data from multiple angles, vertical and oblique, with a multi-lens camera mounted on a single drone, yielding complete, accurate texture data and positioning information. During acquisition, one lens captures vertical (nadir) images while the other four simultaneously capture side views of ground objects from four directions: front, back, left, and right. With camera tilt angles between 40° and 60°, the contours and textures of the sides of ground objects can be captured fairly completely.
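The achievable detail of such a flight can be estimated with the standard ground-sample-distance formula (GSD = flight height × sensor pixel pitch / focal length, for the nadir lens; oblique lenses vary with tilt and range). The rig parameters below are hypothetical:

```python
def ground_sample_distance(height_m, pixel_pitch_um, focal_mm):
    """GSD in metres per pixel for a nadir-pointing lens."""
    return height_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)

# Hypothetical five-lens rig: 100 m altitude, 4.4 um pixels, 35 mm lens.
gsd = ground_sample_distance(100, 4.4, 35)
print(round(gsd * 100, 2), "cm/px")   # → 1.26 cm/px: centimetre-level imagery
```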

Oblique photography technology greatly reduces the cost of 3D modeling and can make up for the shortcomings of traditional 3D modeling technology. It is one of the important choices for 3D modeling of large scenes.

Oblique photography technology mainly has the following technical advantages:

One is high resolution. Mounted on a low-altitude aircraft, the oblique photography platform can capture centimetre-level high-resolution vertical and oblique images.

Second, it can obtain rich texture information of ground objects. Oblique photography collects images from multiple different angles, which can obtain more realistic and rich texture information on the side of the ground object, making up for the deficiency that the orthographic image can only obtain the top surface texture of the ground object.

The third is efficient 3D model construction. Through fully automatic joint aerial triangulation of vertical and oblique images, texture mapping can be fully automated and 3D models constructed without manual intervention. The real 3D scene built from imagery not only carries accurate geographic coordinates for each object, but also finely expresses detailed features, from prominent roofs and exterior walls down to fine topography and landforms.

But oblique photography also has limitations. Because it measures with visible light, it is highly weather-dependent, cannot capture terrain beneath dense vegetation, and has limited modeling capability for small objects.

3) Laser radar mapping

LiDAR measurement technology is an emerging technology first developed and commercialized in Europe and the United States. It integrates three technologies: a laser ranging system, a global positioning system (GPS), and an inertial navigation system (INS). It represents a major breakthrough in real-time acquisition of three-dimensional spatial information, provides a new technical means of obtaining high-spatial-resolution geospatial information, and is among the most advanced surveying and mapping technologies available today.

LiDAR surveying and mapping technology mainly has the following technical advantages:

One is that digital elevation models can be obtained quickly. Laser point clouds are the most direct data product of LiDAR; point cloud density and accuracy are high, and the three-dimensional coordinates of each point are immediately available. Through alternating manual and automatic processing, returns from vegetation, buildings, and other above-ground objects are classified and filtered out, after which a triangulated irregular network (TIN) can be constructed to derive the DEM. Because laser points are dense and numerous, DEM generation is convenient and accurate.
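The point-cloud-to-DEM idea can be illustrated with a toy rasterisation: bin returns into grid cells and keep the lowest return per cell as a crude ground estimate. Real pipelines classify ground points properly and build a TIN rather than a grid, so this is a sketch of the concept only:

```python
from collections import defaultdict

def grid_dem(points, cell=1.0):
    """Bin (x, y, z) laser returns into square cells and keep the lowest z
    per cell as a crude ground estimate (stand-in for real classification)."""
    cells = defaultdict(lambda: float("inf"))
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        cells[key] = min(cells[key], z)   # lowest return ~ ground under canopy
    return dict(cells)

pts = [(0.2, 0.3, 12.5),   # canopy return
       (0.8, 0.1, 9.7),    # ground return, same cell
       (1.5, 0.4, 10.2)]   # neighbouring cell
dem = grid_dem(pts, cell=1.0)
print(dem[(0, 0)], dem[(1, 0)])  # → 9.7 10.2
```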

The second is a high degree of automation. From flight design through data acquisition to final data processing, automation is very high; GPS displays the flight trajectory in real time, so there are no missed passes and human error is avoided.

Third, information acquisition is sensitive. LiDAR can capture targets smaller than the resolution of remote sensing or radar imagery, and can obtain ground point data through vegetation cover.

Fourth, the sensor is little constrained by working conditions. As an active measurement system that emits and receives its own laser pulses, it can penetrate gaps in dense vegetation to reach the ground without being limited by light and shadow; the resulting digital elevation model is closer to the true surface shape and less affected by weather.

3. Common 3D modeling software

1) Blender

Blender is a free, open-source 3D creation suite maintained by the Blender Foundation. In digital twin work it can be used to turn entities from the physical world into digital models and to prepare them for simulation and analysis. Blender provides a visual interface that lets users easily create, edit, and manage digital twin models, and it supports import and export of a variety of data formats, including FBX, glTF, OBJ, and Collada.

2) Maya/3ds Max

Maya and 3ds Max are digital content creation packages mainly used for 3D animation, modeling, simulation, and rendering. In the digital twin field, they can be used to build virtual twin models.

3) Substance 3D Painter

Substance 3D Painter (SP) is a professional 3D texture painting application owned by Adobe. Powerful and widely regarded as innovative and user-friendly, it is used in game and film production as well as product design, fashion, and architecture. In digital twin work, Substance 3D Painter supports texture painting from scratch, making it easier than ever to create textures for 3D assets.

4. Manual modeling

In the field of digital twins, manual modeling is a commonly used technique for building digital models. It helps engineers better understand and master information about product design, manufacturing, and operation, so as to better optimize and improve products. In actual digital twin projects, manual modeling is also the method technicians use most: it adapts flexibly to the needs of different digital twin scenes, can be modified and iterated as project requirements change, and makes the digital twin easier to visualize and interact with, with advantages in operation and performance as well.

The digital twin modeling method based on manual model production mainly involves the following key technologies:

Oblique photography and other methods are used for data collection and processing. In the absence of BIM model support, it is necessary to use oblique photography, artificial photo collection, laser point cloud, etc. to collect and process model data.

BIM model lightweighting workflow. When a BIM model already exists, the original BIM model data needs cleaning, polygon reduction, compression, and similar operations to improve the BIM model's runtime efficiency in actual projects.

Manual reconstruction of the digital twin model. When field collection is impossible and no BIM model is available, the model must be rebuilt by hand in software such as Blender, using established knowledge or expert experience to manage the production process and its results.

1) Oblique photographic model data processing

Processing of oblique photography:

Modelers typically use oblique photography processing software and realistic references such as aerial photography data and video, combined with the terrain relationships in the twin scene, to perform data alignment, model cropping, and similar operations on the oblique photography model; to repair erroneous or low-quality parts; or to convert the model directly into other common formats, so that the oblique photography presents correct content that meets project needs.
Optimization for oblique photography:

It is usually necessary to rebuild the top-level tiles of the oblique photography model and establish multiple tile levels of detail with different data volumes, in order to address slow loading and loading freezes of oblique photography in the twin scene.
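Tile-level loading of this kind is usually driven by a screen-space-error test, as in 3D Tiles viewers: a tile's geometric error is projected onto the screen, and the tile is refined to a finer level when the projected error exceeds a pixel threshold. A sketch with made-up tile values:

```python
import math

def screen_space_error(geometric_error, distance, screen_height_px=1080,
                       fov_y=math.radians(60)):
    """Project a tile's geometric error (metres) into screen pixels."""
    return geometric_error * screen_height_px / (2 * distance * math.tan(fov_y / 2))

MAX_SSE = 16  # pixels; a typical default threshold in 3D Tiles viewers
for d in (200, 2000):
    sse = screen_space_error(geometric_error=8, distance=d)
    print(d, round(sse, 1), "refine" if sse > MAX_SSE else "keep coarse tile")
# → 200 37.4 refine
# → 2000 3.7 keep coarse tile
```

Near tiles get refined, distant ones stay coarse, which is exactly what keeps large oblique-photography scenes loadable in a browser.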

Release of oblique photography:

Upload the processed and optimized oblique photography data to the platform cloud through platform-side encryption. The data then need not be downloaded locally: the cloud-hosted oblique photography can be loaded online in real time by linking its URL directly in the twin scene.

Loading scheme for oblique photography:

The oblique photography data needs to be converted among OSGB, 3D Tiles, and URL-based formats to satisfy the loading conditions the digital twin application faces in different network environments (intranet versus public network). Loading the oblique photography locally can also improve loading speed to some extent.
2) PBR modeling process

The PBR (Physically Based Rendering) workflow is in fact a complex concept. It combines renderers that model real-world physics and lighting with texture sets that represent real material parameters in a standardized way. In essence, PBR is an overall system for texture creation and rendering; different tools and engines produce different results (generally referring to the input types of the renderer's models and textures).

As the industry has evolved, the next-generation PBR workflow has become popular: in both the game industry and the digital twin industry, the traditional workflow is slowly giving way to PBR. The "next-generation" or AAA games we often hear about are games made with the next-generation PBR workflow. The main reason for the shift is that PBR materials not only come closer to the look of real objects, but are also much faster to produce than with the traditional workflow.

The following compares the production process and results of the traditional workflow and the PBR workflow. In either workflow, the basic mid-poly model, high-poly model, low-poly model, and model baking steps are the same.

Traditional workflow: first bake the high- and low-poly models to obtain the normal map and AO map, then derive a cavity texture from the normal and AO maps, multiply the AO over the base color in Photoshop, and set the cavity layer to overlay mode, so that the object's broad color blocks can be distinguished.

PBR workflow: the final textures produced are four maps: AO, normal, metalness, and roughness. Compared with the traditional workflow, PBR drops the AO overlay step and keeps only a fixed AO map free of baked-in light and shadow; normal baking is the same as in the traditional workflow. The added metalness map controls how metallic a surface is, in black and white: white is metal, black is non-metal. The roughness map likewise controls surface roughness in black and white: whiter means rougher, darker means smoother. In addition, 3D texture-painting software in the PBR workflow can simulate scratches, chipped paint, grime, and so on through computation, making production more convenient and the results more realistic.
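The effect the roughness map controls can be seen in the GGX normal-distribution term that most metal/rough PBR renderers use; a minimal evaluation, assuming the common Disney remapping alpha = roughness²:

```python
import math

def ggx_ndf(n_dot_h, roughness):
    """GGX/Trowbridge-Reitz normal distribution term of a PBR specular lobe."""
    a2 = (roughness ** 2) ** 2          # alpha^2, with alpha = roughness^2
    denom = n_dot_h ** 2 * (a2 - 1) + 1
    return a2 / (math.pi * denom ** 2)

# Darker (smoother) roughness pixels concentrate the highlight into a sharp
# peak; whiter (rougher) pixels spread it out, exactly as described above.
print(ggx_ndf(1.0, 0.1) > ggx_ndf(1.0, 0.8))  # → True
```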

In the process of actual digital twin project delivery, the PBR modeling process can be summarized as the following steps:

① Create the mid-poly model in modeling software, i.e., the basic three-dimensional model;

② Sculpt the high-poly model in modeling software to produce a high-precision model with detail and surface texture;

③ Retopologize to a low-poly model in modeling software, forming a low-polygon model with an optimized mesh and unwrapped UV coordinates;

④ Bake textures in 3D texture-painting software, projecting high-poly information onto the low-poly model to generate normal maps, ambient occlusion maps, and so on;

⑤ Paint materials in 3D texture-painting software, creating material maps such as specular, roughness, and metalness maps;

⑥ Render in software such as Blender: set up lights and environment, adjust parameters and effects, and finally export.
3) BIM model lightweight processing

BIM (Building Information Modeling) is the second digital revolution in the field of engineering construction after CAD, which has had a profound impact on the production organization and management methods of the construction industry. The core of BIM is to establish a virtual three-dimensional model of architectural engineering and use digital technology to provide a complete architectural engineering information library for this model that is consistent with the actual situation. The information base not only contains geometric information, professional attributes and state information describing building components, but also contains state information of non-component objects (such as space and motion behavior). Based on BIM technology, various information of building facilities can be integrated on the model elements to construct a digital twin of the building.

In digital twin project delivery, lightweighting the BIM model mainly means reducing the model's size and complexity by cleaning, polygon reduction, and compression of the original BIM model data, improving the BIM model's runtime efficiency in actual projects and making it better suited to viewing and interaction on terminals such as computers and mobile phones. Digital twin modelers can use Blender or other modeling software to lighten the BIM model. Common operations include:

Use the blend-shape (BlendShape) function to combine multiple shapes into a deformation chain, reducing the model's vertex and polygon counts.

Use polygon reduction or mesh simplification (Polygon Reduction / Mesh Simplification) tools to automatically or manually delete or merge unnecessary vertices and faces according to set criteria and thresholds.

Use textures or materials (Texture / Material) to replace complex geometric detail, for example with texture maps and normal maps.
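The simplest of these reduction operations, merging near-duplicate vertices (vertex welding), can be sketched directly; real tools such as quadric decimation go much further, but the re-indexing idea is the same:

```python
def weld_vertices(vertices, faces, tol=1e-4):
    """Merge vertices closer than tol (snapped to a grid) and re-index
    faces -- a crude first step of mesh simplification."""
    remap, unique, index_of = {}, [], {}
    for i, (x, y, z) in enumerate(vertices):
        key = (round(x / tol), round(y / tol), round(z / tol))
        if key not in index_of:
            index_of[key] = len(unique)
            unique.append((x, y, z))
        remap[i] = index_of[key]
    new_faces = [tuple(remap[i] for i in f) for f in faces]
    return unique, new_faces

verts = [(0, 0, 0), (1, 0, 0), (0, 0, 0.00001)]  # last duplicates the first
v, f = weld_vertices(verts, [(0, 1, 2)])
print(len(v))  # → 2
```

The resulting degenerate face (two identical indices) would then be dropped by a cleanup pass, which is exactly what modeling packages do after a weld.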

Specifically, common BIM application software and formats include AutoCAD (.dwg/.dxf/.dwt/.dws), SketchUp (.skp/.skb), Autodesk Revit (.rte/.rvt/.rfa), SOLIDWORKS, etc. After lightweight processing, the BIM model needs to be exported to a general format such as FBX, OBJ, or Datasmith and imported into Blender or UE for further processing.

In the process of lightweighting the BIM model, modelers need to pay attention to the following issues:

UV level requirements and rules:

It is necessary to maintain the consistency of the UV coordinate system to avoid overlapping or dislocation of UV;

Use the appropriate UV unfolding method to select the optimal unfolding method according to the shape of the model and the characteristics of the texture;

Use efficient UV packaging tools to pack UVs of multiple models or materials into the same texture space to reduce the number of textures and memory usage;

Use techniques such as seamless tiling or three-way projection to avoid texture issues such as obvious seams or stretching;

UV needs to minimize blank areas and improve texture utilization and quality;

UVs need to be reasonably divided and grouped according to the complexity and details of the model to avoid too large or too small UV blocks.
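Some of the UV rules above (no out-of-range coordinates for an independent texture, minimal wasted space) can be checked automatically. The utilisation figure below is a crude bounding-box estimate for illustration, not what production tools compute:

```python
def check_uvs(uvs):
    """Flag UV coordinates outside [0, 1] (tiling textures aside, these break
    an independent-texture layout) and report a rough utilisation figure."""
    out_of_range = [uv for uv in uvs
                    if not (0 <= uv[0] <= 1 and 0 <= uv[1] <= 1)]
    if not uvs:
        return out_of_range, 0.0
    us, vs = zip(*uvs)
    coverage = (max(us) - min(us)) * (max(vs) - min(vs))  # bounding-box estimate
    return out_of_range, coverage

bad, cover = check_uvs([(0.1, 0.1), (0.9, 0.9), (1.2, 0.5)])
print(len(bad), round(cover, 2))  # → 1 0.88
```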

For an independent texture, follow the UV-level requirements above; for a tiling texture, follow the texture-level requirements as far as possible (see the texture-level breakdown below).

Texture-level requirements and rules:

Choose the appropriate texture format, and choose the optimal compression ratio and definition according to different platforms and needs;

Control the number and size of textures, minimize unnecessary textures, and pack multiple textures into the same image;

Use lossless or lossy compression tools to compress textures to reduce file size and memory usage;

Use efficient UV unwrapping and packing tools to avoid problems such as UV overlap or misalignment, and improve texture utilization.

Lightweight level requirements and rules:

Select the appropriate surface reduction tool, and select the optimized algorithm and parameters according to different model formats and requirements;

Maintain the integrity and topology of the model to avoid problems such as model damage or deformation; retain important features and details of the model to avoid problems such as model distortion or quality degradation;

Use efficient data compression and transfer techniques to further reduce model data volume and load time.

Model lightweighting usually uses a polygon-reduction tool for optimization; the result is then imported into Blender or similar software for further optimization and edge-flow cleanup. Models that cannot be processed this way must be retopologized and remodeled.

5. Procedural modeling

With the rapid development of digital twins and metaverse-related fields, demand for 3D models has risen sharply, and manual modeling, which takes a great deal of time and effort, cannot keep up. In response, efficient, high-quality procedural modeling methods have been promoted and developed. Procedural modeling means setting rules for model generation according to the principles of computer graphics and using programs to create models or textures. By adjusting parameters, modelers can quickly generate diverse, flexible models, greatly improving modeling efficiency.

Procedural modeling can be divided into three main stages. The first is modular modeling: splitting the model to be generated into different components or modules. The second is variable-driven modeling: adjusting component parameters such as length, width, and height to generate a large number of variants, providing richer material for subsequent combination and matching. The third is combining and arranging components according to rules to generate the final models.
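Stage two, sweeping module parameters to produce variants, is essentially a Cartesian product over parameter lists; a minimal sketch with made-up building dimensions:

```python
from itertools import product

def building_variants(lengths, widths, heights):
    """Sweep module parameters to produce many box-building variants
    from one template (stage two of the procedural pipeline)."""
    return [{"length": l, "width": w, "height": h}
            for l, w, h in product(lengths, widths, heights)]

variants = building_variants([10, 20], [8, 12], [3, 6, 9])
print(len(variants))  # → 12 variants from only 7 parameter values
```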

Compared with traditional modeling methods, the advantage of procedural modeling is not only to quickly generate various model results, but also to form an efficient production pipeline by connecting with game engines. For the production pipeline, the features of reusability and editability make procedural modeling have a higher error tolerance rate, and also reduce a lot of repetitive work.

At present, digital twin applications using procedural modeling are very common. 3D modeling software has introduced visual interfaces that let modelers implement procedural modeling quickly, for example Blender, Adobe Substance 3D, and CityEngine, as well as Houdini, which focuses specifically on procedural modeling. These tools provide great convenience for modelers. This section introduces two procedural modeling approaches: Houdini and CityEngine.

1) Houdini

Houdini was originally designed for the production of special effects and animated films. Its unique procedural generation technology can quickly create high-quality 3D models and special effects, providing a revolutionary tool for the film industry. With the continuous development and improvement of technology, Houdini's application fields have gradually expanded to various fields such as game development, virtual reality, and digital twins.

The core of Houdini's procedural modeling is to transform the production process of 3D content into the generation process of the program, and realize the precise control of the 3D content through the adjustment and modification of the program. Compared with traditional manual modeling methods, Houdini's procedural generation technology can quickly generate various complex 3D models, animations and special effects, and has a high degree of flexibility and programmability. It not only provides an efficient, fast and customizable solution for producing high-quality model content, but also provides strong support for digital twins, industrial design, architectural visualization and other fields.

Houdini has the following technical features, which can help technicians quickly produce high-quality digital twin models:

Parametric modeling. An important feature of Houdini's procedural modeling is parametric modeling, which can use parameters to control various attributes in the modeling process, thereby achieving fast and flexible modeling. This technique can greatly improve the efficiency of modeling, while also allowing technicians to easily modify and adjust.

Dataflow programming. Data flow programming is a programming paradigm that regards a program as a set of data streams, where each data stream represents a set of data or a set of operations. Houdini's procedural generation technology is based on this idea. It regards data and operations as a series of nodes, and implements complex program logic and generation processes by connecting inputs and outputs between nodes.

Nonlinear process control. The core of procedural modeling is the control of the process. Houdini provides a non-linear and visual process control method, namely "Node Network". Using the node graph, technicians can add, delete, and adjust nodes at different stages at any time to achieve flexible control of the modeling process, and at the same time it can be more convenient to iterate and debug.

Built-in geometric operations and algorithms. Houdini has built-in many commonly used geometric operations and algorithms, such as Boolean operations, subdivision, deformation, optimization, etc. These operations and algorithms can help technicians quickly complete complex geometric operations and modeling tasks.
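The dataflow idea behind a Houdini-style node network can be reduced to a few lines: each node pulls results from its upstream nodes and applies one operation. This is a toy illustration of the paradigm, not Houdini's actual API:

```python
class Node:
    """A toy dataflow node: pulls from upstream nodes, applies one operation."""
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def evaluate(self):
        return self.op(*(n.evaluate() for n in self.inputs))

# box -> subdivide -> transform, wired like a (much simplified) node network.
box = Node(lambda: 6)                    # a cube's face count
subdivide = Node(lambda f: f * 4, box)   # each face split into four
transform = Node(lambda f: f, subdivide) # pass-through transform node
print(transform.evaluate())  # → 24
```

Because only connections carry data, any node can be re-parameterised or rewired and the network simply re-evaluates, which is the non-linear, iterative control described above.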

Houdini procedural modeling technology has very high flexibility, repeatability, visualization and efficiency. These advantages allow technicians to better adapt to project needs and changes, improve model production efficiency and quality, and allow technicians to focus more on creativity and innovation.

In the field of digital twins, the application of Houdini technology can help manufacturers improve the production efficiency and quality of digital twin scenes, and solve problems such as scene modeling, data acquisition, and scene simulation:

Large-Scale Scene Modeling

The field of digital twins requires digital modeling of objects and scenes in the real world, which usually takes a lot of time and effort. Using procedural generation technology, you can automate the modeling process by writing scripts, greatly speeding up scene modeling.

City building scene generation. Houdini procedural generation can be used to quickly create highly realistic urban scenes, and urban scenes that meet planning standards can be generated through simple parameter control. In addition, Houdini can generate urban scenes automatically by making elements such as road networks and buildings programmable, using fractal and L-system algorithms, or working from GIS data.
Natural environment generation. Using Houdini procedural generation, artists and technicians can quickly create realistic natural environment scenes. With methods such as L-systems and fractal algorithms, Houdini can automatically generate trees and vegetation and assign realistic materials and textures to them. Houdini procedural generation can also be used to create natural landscape elements such as terrain, rivers, and lakes.
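The L-systems mentioned here work by repeatedly rewriting a string according to production rules; bracketed symbols mark branches that a geometry interpreter later turns into trunks and limbs. A minimal expansion sketch:

```python
def l_system(axiom, rules, depth):
    """Expand an L-system string by applying rewrite rules `depth` times."""
    s = axiom
    for _ in range(depth):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic branching grammar: F = grow forward, [ ] = push/pop a branch,
# +/- = turn. A turtle-graphics interpreter would draw this as a small tree.
rules = {"F": "F[+F]F[-F]F"}
print(l_system("F", rules, 1))       # → F[+F]F[-F]F
print(len(l_system("F", rules, 3)))  # string growth mirrors canopy complexity
```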
High-precision data acquisition

The field of digital twins requires high-precision data collection of objects and scenes in the real world, such as point cloud data collection using laser scanners and other devices. Using programmatic generation technology, the collected data can be processed and optimized quickly and accurately.

Efficient simulation

The field of digital twins needs to simulate various physical phenomena, such as mechanical motion, fluid motion, and so on. Using procedural generation technology, various complex simulation scenarios can be quickly generated by writing scripts, which greatly improves the simulation efficiency and accuracy.

Efficient data management

Digital twins usually require a large amount of data storage and management, including scene data, texture data, model data, etc. The amount of these data is usually very large and requires certain storage and management capabilities. Houdini's procedural generation technology can manage these data through efficient data nodes and cache mechanisms, so as to improve the efficiency of data reading and processing.

2) CityEngine

CityEngine is a software that uses a code-based procedural approach to efficiently generate 3D city models. Initially it was used in areas such as urban planning, architecture, visualization, game development, entertainment, GIS, archeology and cultural heritage.

As a 3D modeling software, CityEngine has the characteristics of being able to integrate with GIS data and provide easy-to-use functions such as editing tools, facade textures, report and dashboard generation, and 3D model creation. Supports import and export of multiple formats, and has good data compatibility, such as OBJ, Collada (DAE), DXF, etc. These unique capabilities allow the creation of large-scale, interactive and immersive urban environments in less time than traditional modeling techniques. In particular, it can generate a city model that matches reality based on real geographic information system (GIS) data, which satisfies the needs of a large-scale city model in a digital city and greatly improves the efficiency of twin scene modeling.

CityEngine's biggest feature is that it can use GIS data from the network to generate urban clusters or urban road networks from ShapeFile data carrying building footprints, heights, and facade materials. Because terrain matching the building footprints can be generated from DEM and imagery data, the accuracy of the generated model is better guaranteed. And because CityEngine is driven by rule files, users can freely adjust the shape, type, and texture of the generated buildings at a global level, improving scene modeling efficiency while keeping the logic consistent.

In addition, CityEngine also has urban planning functions, which can quickly create and modify urban layouts, and make adjustments based on elements such as roads, blocks, and parcels. CityEngine also supports the batch modeling function, which can apply CGA rule files to multiple parcels to realize batch generation of building models.
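The core of a CGA-style rule such as extrude can be sketched in a few lines: take a parcel footprint and an attribute-driven height and lift the footprint into 3D. A real CGA rule also builds wall faces, splits facades, and applies textures; this only shows the vertex generation, with made-up parcel data:

```python
def extrude_footprint(footprint, height):
    """CGA-style 'extrude' sketch: lift a 2D building footprint
    (list of (x, y) points) into ground and roof vertex rings."""
    ground = [(x, y, 0.0) for x, y in footprint]
    roof = [(x, y, float(height)) for x, y in footprint]
    return ground + roof

lot = [(0, 0), (20, 0), (20, 10), (0, 10)]   # parcel footprint from a ShapeFile, say
verts = extrude_footprint(lot, height=15)    # height attribute from GIS data
print(len(verts))  # → 8 vertices for a simple box building
```

Applying the same rule over thousands of parcels with per-parcel attributes is what makes rule-driven batch generation of city models so fast.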

Origin blog.csdn.net/amumuum/article/details/131374696