[Blender, from sculpting to slacking off] A first look at procedural content generation (PCG) with Geometry Nodes: an urban scene

The ratio of likes to views on the series I posted last time was honestly a little scary; I didn't expect everyone's enthusiasm for learning modeling to be this high.
I will keep writing the manual-modeling series in the future, but there is a limit to what can be done by hand.
"I'm done sculpting by hand, JOJO!"

This time I mainly followed julin's tutorial to learn how to procedurally generate an urban scene.
I come from a computer-science background, so the logic side is relatively familiar to me. Friends with an art background may find PCG harder to get into at first, but logically it is really not that difficult; the key is to sort the ideas out.

This is the longest article I have written so far, and the half-finished reproduction of the LearnOpenGL PBR chapter will have to be postponed...

1. Demonstration of the final effect

First, the finished result. There are not that many exposed parameters in the end, and I did not reproduce the tutorial 100%; I stopped once it reached the degree of completeness I needed.
[image]
This was the first demo version from when I had just started learning: simple random building generation.
[image]
Below is the finished version. Dragging the corresponding parameters randomly regenerates the plots and buildings.
[image]
Plot sizes can also vary randomly. My old laptop cannot handle the face count that larger blocks bring, so 4×4 is as far as I can push it. (A 40-series card and a 13900HX are on the way!!)
[image]
Building heights vary randomly too, with buildings randomly raised or lowered,
[image]
and building density can be changed as well.
[image]

2. Two points to pay attention to

2.1. Subdivision lines not displaying

When I got to this point I briefly suspected my Blender 3.4 was a broken pirated copy, but after switching to Edit Mode I found it was still displaying only the original cube.
[image]
The face count was correct, though, so I figured it had to be a display setting.
[image]

[image]
Go straight to the render settings and note this option:
[image]
enable it and the subdivision lines display normally.
[image]

2.2. Cross-project resource import

Friends who have used Unity for a while will be familiar with Unity's project file management; in Blender, however, an entire project is packed into a single .blend file.
At first I wanted to import the material ball made in the tutorial, but I searched Baidu for ages without finding a solution and had to turn to Google.
Cross-project asset import is done through "File" > "Append":
[image]
select the corresponding project file. Its internal layout is managed much like a Unity project, so just enter the corresponding folder and import the resources you need.
[image]

3. Organize ideas

The idea is as follows (just kidding):
[image]
Jokes aside, this kind of graphical programming easily turns into a giant pile of spaghetti that is hard to organize and maintain (and that screenshot is only the main graph, not counting the node groups I rolled myself), so it is still worth laying out the overall implementation approach first.

4. Main ideas

The main idea itself is fairly simple. Overall: divide the ground into plots according to the relevant parameters, split the plots by probability into a building set and a green-space set, subdivide each set again, and generate the corresponding building or green-space models on the results. To save effort, PCG work usually assembles things out of reusable components, and the process of assembling buildings from components is covered later.
Step one: divide the overall ground according to the input parameters into an x*y grid of base plots.
Step two: build a reusable node group for random splitting, so plots can be split randomly at different levels and different numbers of times; the plots are then randomly divided once more into two parts (two sets of faces), one for buildings and one for green space.

[image]

4.1 Plot division

First we create the plot grid with a Grid node, given the plot size and the number of vertices in the x and y directions. A Split Edges node then splits the shared edges, breaking the whole surface into a series of independent faces. Finally, a random selection of faces is split a second time, and the subdivided plots are output.
[image]

[image]
The effect after the first split (no middle cuts yet):
[image]
The fully divided ground:
[image]
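The grid-then-random-split flow above can be sketched in plain Python, with each independent face reduced to an axis-aligned rectangle (x, y, w, h). This is only an illustration of the logic, not the actual node graph; the function names and the rectangle representation are my own.

```python
import random

def make_grid_faces(size_x, size_y, nx, ny):
    # Grid node + Split Edges: an nx*ny sheet becomes nx*ny
    # independent rectangular faces, stored here as (x, y, w, h).
    w, h = size_x / nx, size_y / ny
    return [(ix * w, iy * h, w, h) for ix in range(nx) for iy in range(ny)]

def subdivide_some(faces, probability, rng):
    # Randomly select faces and cut each selected one down the middle.
    out = []
    for (x, y, w, h) in faces:
        if rng.random() < probability:
            out += [(x, y, w / 2, h), (x + w / 2, y, w / 2, h)]
        else:
            out.append((x, y, w, h))
    return out

rng = random.Random(7)
faces = subdivide_some(make_grid_faces(40, 40, 4, 4), 0.5, rng)
# Splitting never changes the total area, only the face count.
```

Because every cut only divides an existing face, the summed area of the output always equals the original ground area, which is a handy sanity check on the node graph too.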

4.1.1 Implementing the random plot-split function

When doing procedural generation, rolling your own node groups is basically inevitable: operations that are trivial in manual modeling can take fairly involved logic to reproduce, and plot splitting is exactly that. A middle cut that takes a single Ctrl+R by hand becomes a whole round of fiddling here, which is quite refreshing. A brief overview of the process:
The core is a function that splits a face across its longest side in the x or y direction. To get the side lengths, the vertices are subtracted and projected onto the x and y directions. Since there is no node that directly does a Ctrl+R-style middle cut along an arbitrary direction, we have to build it ourselves.
[image]
The node wiring for this function is shown below. The messiest part at the lower left just computes and compares the projected lengths along the x and y axes. Because node graphs are so unintuitive to read and the wires get tangled, redrawing the graph as a flowchart amounts to reorganizing the ideas; from here on I will not show too many local details of the graphs.
[image]

The geometry input (a face)
[image]
is basically a square, so an axis (x or y) is chosen at random for the split, and scaling is applied after splitting.
[image]
All faces are then duplicated and the inverse transform applied. Note that a specific batch of duplicated geometry is selected by its copy index.
[image]
[image]
The final split plots look like this:
[image]
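The longest-side middle cut described above is easy to state in code. A minimal sketch, again using my own (x, y, w, h) rectangle representation for a face:

```python
def split_longest_axis(face):
    # Hand-rolled "Ctrl+R down the middle": project the face onto
    # the x and y axes, compare the two lengths, and cut across
    # the longer one.
    x, y, w, h = face
    if w >= h:   # x side is longest: cut with a line parallel to y
        return [(x, y, w / 2, h), (x + w / 2, y, w / 2, h)]
    else:        # y side is longest: cut with a line parallel to x
        return [(x, y, w, h / 2), (x, y + h / 2, w, h / 2)]

# An 8x4 face is cut across its long x side into two 4x4 halves.
halves = split_longest_axis((0, 0, 8, 4))
```

Repeatedly applying this to the largest remaining face keeps plots close to square, which is why the node group compares the projected x and y lengths before choosing the cut axis.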

4.2 Separation of building plots and green plots

On top of the initial division, the plots are first scaled, then split a second and third time, then scaled again. Note that the scaling here is an equidistant inset of the face, similar to interpolating the surface inward, which is different from plain scaling.
[image]
First, based on the building-density parameter, plots are randomly separated into building plots and green plots, and both types are inset and split twice more. Note that after the inset, these are still only the building and green plots; beneath this level there are also road plots, which likewise come from the geometry output by the method in 4.1.
Building plots are further separated by area into large, medium, and small; the three sizes will later be used to generate different building types.
Green plots are only divided into largest and second-largest: the largest are used to generate playgrounds and ball courts, the second-largest to generate trees.
Both green plots and building plots need to be duplicated before generating their elements: one copy receives the base material (grass, brick), the other is used to generate the elements themselves.
[image]
Overview of the node graph:
[image]

The effect after generating trees and plots:
[image]
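The density split plus the area-based bucketing can be sketched as one classification pass. The area thresholds below are made-up illustration values, not the ones used in the node graph:

```python
import random

def classify_plots(plots, density, rng):
    # Split plots into building/green sets by the density parameter,
    # then bucket building plots by area (large/medium/small) and
    # green plots into largest/second-largest.
    buildings = {"large": [], "medium": [], "small": []}
    greens = {"large": [], "second": []}
    for plot in plots:
        x, y, w, h = plot
        area = w * h
        if rng.random() < density:          # building plot
            if area > 100:
                buildings["large"].append(plot)
            elif area > 50:
                buildings["medium"].append(plot)
            else:
                buildings["small"].append(plot)
        else:                               # green plot
            (greens["large"] if area > 100 else greens["second"]).append(plot)
    return buildings, greens

rng = random.Random(3)
plots = [(0, 0, 12, 12), (0, 0, 9, 7), (0, 0, 5, 5), (0, 0, 11, 10)]
b, g = classify_plots(plots, 0.7, rng)
total = sum(len(v) for v in b.values()) + sum(len(v) for v in g.values())
```

Every plot lands in exactly one bucket, so the bucket sizes always sum to the input count regardless of the random draws.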

4.2.1 Implementing equidistant scaling on the x and y axes

The principle of the equidistant inset is very simple: get the lengths of the x and y sides, compute a per-axis scale factor from the inset distance d, and apply two directional scales with those factors.
Comparison of plain scaling vs. the equidistant inset (what the inset effectively does is interpolate the face inward):
[image]
Obtaining the x and y lengths. Note that there is no explicit input here, because geometry nodes resemble ordinary programming logically but differ in places. My understanding is that the Vertex of Edge node reads the vertex data of every edge in the current geometry, while we only operate on part of that data; the vertex data of the other edges is read but simply goes unused. We can act as if only the edges and vertices we need are read.
[image]
The scaling part obtains the factor by subtracting the inset distance d from the x/y length and then dividing by the length, then adjusts via two Scale Elements nodes. The logic is straightforward.
[image]
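Here is a minimal numeric sketch of that inset. I am assuming the distance d is removed from both ends of each side (the text only says "subtract d then divide", so the exact convention is my guess); the rectangle representation is again my own:

```python
def inset_face(face, d):
    # Equidistant inset: every edge moves inward by d.
    # Per-axis scale factor = (length - 2*d) / length, applied as
    # two directional scales about the face centre.
    x, y, w, h = face
    sx = (w - 2 * d) / w
    sy = (h - 2 * d) / h
    # Scaling about the centre by these factors is equivalent to
    # translating each edge inward by d.
    return (x + d, y + d, w * sx, h * sy)

# A 10x4 face inset by 1 becomes an 8x2 face, still centred.
inner = inset_face((0, 0, 10, 4), 1)
```

The point of computing a separate factor per axis is exactly what the comparison image shows: a single uniform scale would leave a wider margin on the long side than on the short one, while the per-axis factors keep the margin constant.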

4.3 Generate buildings

The main idea for generating buildings is fairly simple, but many details need adjusting. Essentially: duplicate the floor footprint a number of times, translate the copies by the floor height, and finally distribute the corresponding components evenly along each side to build up the structure of every storey. Sometimes the ground-floor structure needs to be distinguished from the upper floors and the roof; those cases are handled by selecting the copy index.
[image]
In practice the generation logic for buildings is rather involved, and different building types need custom tweaks, so readability and maintainability are stretched to the limit. The figure below shows the node graph.
[image]
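The "duplicate, translate by floor height, pick special copies by index" step can be sketched like this (the dictionary representation and the ground/upper/roof labels are my own illustration):

```python
def stack_floors(footprint, n_floors, floor_height):
    # Duplicate the floor plot n_floors times, offsetting each copy
    # in z; copy index 0 is the ground floor and the last copy is
    # the roof, mirroring the copy-index selection in the node graph.
    floors = []
    for i in range(n_floors):
        kind = "ground" if i == 0 else ("roof" if i == n_floors - 1 else "upper")
        floors.append({"z": i * floor_height, "kind": kind, "face": footprint})
    return floors

floors = stack_floors((0, 0, 8, 8), 5, 3.0)
```

Selecting by copy index is what lets one graph place doors on the ground floor, windows on the middle floors, and a parapet on the roof without duplicating the whole pipeline.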

Duplicate the floor geometry and apply the per-floor transform:
[image]
Obtain the edge data:
[image]
Convert the mesh to a curve so it can be resampled:
[image]
Convert the resampled data back to a mesh, then to points,
[image]
and instantiate components on the points (the fiddly, tedious scaling and alignment steps are omitted; only the result is shown). The roof geometry and the other decorative components are not present yet.
[image]
Generation of the roof and all decorative components completed:
[image]
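The mesh-to-curve-to-points resampling above boils down to walking the floor outline and emitting evenly spaced instance points. A sketch for an axis-aligned rectangular footprint (the real Resample Curve node also handles arbitrary curves; this simplification is mine):

```python
def resample_perimeter(face, spacing):
    # Mesh -> curve -> Resample Curve -> points: walk the rectangle
    # outline and emit a point every `spacing` units along each edge.
    x, y, w, h = face
    corners = [(x, y), (x + w, y), (x + w, y + h), (x, y + h)]
    points = []
    for i in range(4):
        (x0, y0), (x1, y1) = corners[i], corners[(i + 1) % 4]
        length = abs(x1 - x0) + abs(y1 - y0)   # edges are axis-aligned
        n = max(1, int(length // spacing))
        for k in range(n):
            t = k / n                          # t < 1: corners not duplicated
            points.append((x0 + (x1 - x0) * t, y0 + (y1 - y0) * t))
    return points

pts = resample_perimeter((0, 0, 4, 4), 2.0)
```

Each point then becomes an instance location for a wall component; the spacing parameter is effectively the component width.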

4.3.1 Get surface normal

Our building components face only one direction after modeling. Here we need the distinct normals of the building's four sides so that the different components can be rotated to the correct orientation.
The mesh edges in the xy plane are extruded in the z direction into four faces; the normal of each face is sampled by index, the corresponding rotation vector is built relative to the y direction, and the result is fed into the Instance on Points node so the components rotate correctly.
[image]
The effect of extruding the edges:
[image]
The pre-made components (modeled facing the y axis):
[image]
Different buildings use different generation logic.
[image]
High-rise building generation:
[image]
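The normal-to-rotation step can be reduced to one formula: given a wall's outward normal in the xy plane, find the z rotation that turns a +Y-facing component onto it. A minimal sketch (the sign convention here is my assumption; in the graph it comes out of the rotation-vector construction):

```python
import math

def rotation_from_normal(nx, ny):
    # Components are modelled facing +Y. Given the outward normal
    # (nx, ny) of a wall face, return the z-axis rotation in
    # radians (counter-clockwise) that turns +Y onto that normal.
    return -math.atan2(nx, ny)

# The four axis-aligned walls of a building:
front = rotation_from_normal(0, 1)    # no rotation needed
right = rotation_from_normal(1, 0)    # quarter turn clockwise
```

With this, one set of components serves all four sides: only the per-face rotation fed into Instance on Points changes.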

That is the general idea, and road-surface generation works much the same way. Since my new computer arrived while I was writing this article, I can now freely show random generation at much larger plot sizes!
[image]

5. Postscript

My feeling is that Blender's geometry nodes lean more toward working at the level of underlying principles. I have occasionally glanced at Houdini's PCG nodes, which look much more streamlined. I hope to keep learning geometry-node generation in Blender, and as a follow-up I want to think more about Houdini's PCG workflow.

Origin blog.csdn.net/misaka12807/article/details/129453275