Detailed steps for converting mannequin (dress-form) photos into model images using Stable Diffusion (prompts are copy-and-paste ready)

Preconditions:

       ① The environment is already set up (configuration is simple and there are plenty of tutorial videos). ② You have a photorealistic base checkpoint; these are easy to find online, and I can share mine if you need it. ③ The better your computer hardware, the better.

Small Tips:

         Front-view photos can be reproduced the same way with even fewer steps, but controlling the face and skin tone requires training a model. I will put one together the next time a project needs it, so stay tuned!

        

  • Mannequin preparation:

Important points:

① Use a camera with good resolution (the quality of the source photo directly determines the quality of the output).

② The mannequin should have arms, ideally adjustable ones. With long-sleeved garments the pose cannot be set without arms, and 3D Openpose cannot help there, because the clothing itself is what gets masked and redrawn.

③ The camera angle of the photo becomes the angle of the final model image, and the same goes for the apparent size and distance of the garment (this becomes obvious once you try it; a well-framed shot saves a lot of work. Size and pose can be adjusted in 3D Openpose, but it is easy to introduce deformation there).

④ Smooth out the sleeves so they hang naturally, to avoid rework later.

  • The main workflow:

1. Start Stable Diffusion (Qiuye's WebUI launcher) and open the img2img tab.

2. Configure the basic prompts (to improve image quality and to avoid content unsuitable for browsing). Just copy and paste the prompts below; a small API-style sketch of the same setup follows them.

Positive prompt:

best quality,ultra high res,(photorealistic:1.4),1girl,

Negative prompt:
paintings,sketches,(worst quality:2),(low quality:2),(normal quality:2),lowres,normal quality,((monochrome)),((grayscale)),skin spots,skin blemishes,age spot,bikini,medium breast,Nevus,skin spots,nsfw,
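
For readers who prefer to drive the WebUI through its API rather than the browser, here is a minimal sketch of the same setup. It assumes the WebUI was started with the --api flag on the default local address; the payload fields follow the public /sdapi/v1/img2img endpoint and may differ between WebUI versions, and the file name is a placeholder.

```python
import base64
import requests

WEBUI_URL = "http://127.0.0.1:7860"  # default local address, adjust if needed

# The two prompts from above, verbatim.
POSITIVE = "best quality,ultra high res,(photorealistic:1.4),1girl,"
NEGATIVE = (
    "paintings,sketches,(worst quality:2),(low quality:2),(normal quality:2),"
    "lowres,normal quality,((monochrome)),((grayscale)),skin spots,"
    "skin blemishes,age spot,bikini,medium breast,Nevus,skin spots,nsfw,"
)

def encode_image(path: str) -> str:
    """Read an image file and return it base64-encoded, as the API expects."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

# Base img2img payload; "mannequin.jpg" stands in for your own photo.
payload = {
    "init_images": [encode_image("mannequin.jpg")],
    "prompt": POSITIVE,
    "negative_prompt": NEGATIVE,
    "width": 600,   # matches the 600x800 canvas used later in 3D Openpose
    "height": 800,
}
```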

3. Open the Inpaint (partial redraw) tab and load the photo you want to adjust. That is all here for now; we will come back to this module later.

4. Open the 3D Openpose workspace.

① Click File, then use the option below it to set the background image. (Do not choose "detect from image" here; it will throw an error, because 3D Openpose cannot recognize mannequins or anime characters. With a photo of a real person that option would work.) Then adjust the canvas size on the right; I used 600×800. Finally, adjust the figure in the scene into the pose we want.

② If the figure will not move, switch on Move mode; in Move mode you can drag the figure around. With it switched off, you can rotate joints and adjust individual limbs instead.

③ This is how my sample is set up. Since mine is a back view, I turn off Move mode after adjusting the pose and scale and rotate the figure to face away. (The current 3D Openpose algorithm is not great: even a back-facing pose is sometimes recognized as front-facing. When that happens, add keywords such as "back" to the positive prompt and raise their weight.)

Once the rough pose is done, turn to the parameter panel on the right. I usually reduce the bone width there, because female skeletons are smaller and slimmer; this keeps the structure of the image more feminine and reduces bulky muscles, hands and feet. I also slightly adjust the head height. This panel is for fine-tuning after the big adjustments, and it is much easier to control than dragging the joints directly. A usable pose rarely comes out in one pass, so expect to adjust repeatedly. With that, the initial 3D Openpose work is done.

④ When the initial adjustment is complete, click Generate; this jumps to the Send to ControlNet panel. The first slot is the pose control and the remaining ones are hand controls. You can disable the hand slots with the minus symbol (-1). If you do want the hands controlled, set those slots to 1 or another unit number, but not 0, because 0 is already occupied by the pose control. Then click Send to img2img.

5. Back in the img2img tab, scroll down to the ControlNet panel. The pose image generated by 3D Openpose should already be visible there. Tick Enable and the relevant options next to it so that the pose image from 3D Openpose actually intervenes in the generation. Set the Preprocessor to none and the Model to openpose. (We already have a pose map, so no preprocessing is needed; preprocessing is only required when the pose must be detected from a photo of a person, and 3D Openpose has already done that work, so the model can be used directly.) With that, the pose setup is complete.
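
The same ControlNet configuration can be expressed through the API by continuing the payload sketch from step 2. This assumes the ControlNet extension is installed; the field names ("module", "model", "weight") mirror its alwayson_scripts interface and can vary slightly between extension versions, and the model name must match one from your own dropdown.

```python
# One ControlNet unit: Enable ticked, preprocessor "none", model "openpose".
controlnet_unit = {
    "enabled": True,                                     # same as ticking Enable
    "image": encode_image("pose_from_3d_openpose.png"),  # placeholder for the sent pose map
    "module": "none",                                    # no preprocessing, the pose map is ready
    "model": "control_v11p_sd15_openpose",               # use the openpose model you actually have
    "weight": 1.0,
}

payload["alwayson_scripts"] = {"controlnet": {"args": [controlnet_unit]}}
```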

  6. Next comes the tedious stage. I said we would come back to step 3, so return to the Inpaint tab. ① First set the basic parameters; you can simply copy mine, and I will explain the small tweaks I make later. ② Once the parameters are set, scribble over the area that needs to be redrawn (in plain terms, the redraw region): click into the inpaint canvas to give it focus, then press the S key to bring up the round black brush and paint the mask with it. You can resize the brush with Ctrl + scroll to make masking easier.

③ A few tips: trace the edges with a small brush first, then enlarge the brush to fill in the background; this makes the mask easy to control. Be careful here: the quality of the mask directly affects the quality of the output, for example whether stray lace or similar artifacts appear, so take your time. If you can, a drawing tablet makes this much easier; if not, then like me you will just have to live with some trial and error. (A code sketch of the equivalent API call follows these tips.)
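
Continuing the API sketch: the scribbled mask from the Inpaint tab corresponds to a mask image in the payload, where white pixels get redrawn and black pixels are kept. The field names follow /sdapi/v1/img2img, and the concrete values here are examples rather than the exact parameters from the original post.

```python
# Inpaint-specific fields; "clothes_mask.png" stands in for the hand-drawn mask.
payload.update({
    "mask": encode_image("clothes_mask.png"),
    "mask_blur": 4,             # feathering of the mask edge (adjusted later)
    "inpainting_fill": 1,       # 1 = "original" masked-content mode
    "inpaint_full_res": False,  # redraw in the context of the whole picture
    "denoising_strength": 0.75,
})

result = requests.post(f"{WEBUI_URL}/sdapi/v1/img2img", json=payload)
result.raise_for_status()       # generated images come back base64-encoded
```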

  • Generating images and fine-tuning parameters:

  1.1 The first image came out quite poor, although the pose control clearly worked. As mentioned earlier, 3D Openpose cannot reliably tell front from back, so the figure came out the wrong way round, the head was visibly bowed, and the upper corners were noticeably blurred; the garment also grew extra pieces, with many added corners.

Fixes (a small parameter sketch follows this list):

① Add a back-view keyword to the positive prompt and increase its weight: (back:1.2).

② If the head is clearly bowed, go back to 3D Openpose, turn off Move mode, and raise the figure's head.

③ If the corners of the top are blurred, lower the mask edge blur from the default 4 to 2.

④ Add a bikini keyword to the positive prompt and increase its weight: (bikini:1.2).
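
As a sketch, this is how fixes ①, ③ and ④ map onto the payload from the earlier code; fix ② happens back in the 3D Openpose editor rather than in the payload.

```python
# ① and ④: append the weighted keywords to the positive prompt.
payload["prompt"] = POSITIVE + "(back:1.2),(bikini:1.2),"
# ③: reduce the mask edge blur from 4 to 2.
payload["mask_blur"] = 2
# ②: raise the head in 3D Openpose, re-export the pose map, then replace
# controlnet_unit["image"] with the new file.
```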

1.2 For the second run I applied fixes ①, ③ and ④, lowered the ControlNet weight, and leaned more heavily on the prompts. The result was slightly better, but the position of the legs was still wrong and the extra clothing pieces did not improve. The next step is to apply ② and fine-tune in 3D Openpose.

  1.3 The next run was again a clear improvement. I strengthened the back-view weight to (back:1.4). What remains are details; to my eye there are no serious problems left, only fine-tuning, so I will generate images in batches to see whether a better result appears. If not, I will keep improving on this basis. Keep checking whether the images meet the needs of the product.

2. Filtering the batch results (they match expectations):

As the pictures above show, we have in fact produced the kind of image we expected, but the details are still not good enough, so we want to keep generating. Under the 3D Openpose pose control, however, the yield is very low: the two pictures above were each picked out of a batch of 10, i.e. 20 images for 2 keepers. Since the hit rate is so low, we switch to more precise algorithmic control. Here is the technique.

When the clothes in a picture are badly distorted (or there are other problems) but the overall pose and gesture are already satisfactory and only the details need work, take that picture and lock its pose with an edge-based control such as canny or SoftEdge, then fine-tune the remaining issues. Detecting human pose with Openpose gives a low rate of usable output, so a more precise algorithm helps. The concrete steps are demonstrated below.

3. Select a reasonably satisfactory picture, control it with SoftEdge or canny, and reduce the weight appropriately, to around 0.8 (a sketch of the switched ControlNet unit follows the two result pictures below).

3.1 SoftEdge picture display:

3.2 Canny picture display:
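
Continuing the API sketch, switching the ControlNet unit from pose control to edge control looks roughly like this. The preprocessor and model names are examples and must match what is installed locally.

```python
# Swap the pose unit for an edge-based unit at a reduced weight.
controlnet_unit.update({
    "image": encode_image("best_pick.png"),  # placeholder: the picture you are happiest with
    "module": "canny",                       # or a SoftEdge preprocessor such as "softedge_pidinet"
    "model": "control_v11p_sd15_canny",      # pick the model matching the preprocessor
    "weight": 0.8,                           # reduce the weight to around 0.8
})
```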

4. Use a script to explore variations by adjusting the redraw range (denoising strength). Sweeping the redraw range adds more variety than before and may produce unexpectedly better images; a rough API equivalent is sketched below.
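
The post does this with a WebUI script; a rough equivalent through the API, continuing the sketch above, is to sweep the denoising strength and save every result for comparison. The value range and file names here are assumptions.

```python
import io
from PIL import Image  # pip install pillow

# Sweep the redraw range and keep every output so the best one can be picked.
for strength in (0.4, 0.5, 0.6, 0.7):
    payload["denoising_strength"] = strength
    resp = requests.post(f"{WEBUI_URL}/sdapi/v1/img2img", json=payload)
    resp.raise_for_status()
    first_image_b64 = resp.json()["images"][0]
    Image.open(io.BytesIO(base64.b64decode(first_image_b64))).save(f"variant_{strength}.png")
```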

5. Fine adjustments. If you are handy with photo editing you can get twice the result with half the effort here. The specific steps follow.

5.1 Open the Inpaint sketch (scribble redraw) module and load the picture that needs refining.

5.2 I use a free tool called FastStone: click its color-picker button on the right to sample a color from the screen, then select the area that needs processing. If you are good at photo editing, you can also handle this faster in an image editor.

5.3 Add the picked color to the Inpaint sketch palette and scribble over the area with it, as shown in Figure 2. (A scripted version of this color-patch trick is sketched after the step 5 results below.)

5.4 Then go back and set the redraw range (denoising strength). Generally it should not exceed 0.7; beyond that, the point of the scribble redraw is lost.

5.5 Result Viewing

5.6 Effect display:

5.7 Adjust the details of the right shoulder: pick the color and scribble over it; the effect is as shown in the figure.

5.8 Result View: All in line with expectations

5.9 Effect display:
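
The color-patch trick from steps 5.2 and 5.3 can also be done in code instead of FastStone. The sketch below samples a color from a clean part of the garment, paints it over the flawed region, and sends the edited picture back through inpaint at a redraw range of at most 0.7; the coordinates and file names are made up for illustration.

```python
from PIL import Image, ImageDraw

img = Image.open("best_pick.png").convert("RGB")
picked = img.getpixel((320, 400))                # sample a clean spot on the garment
draw = ImageDraw.Draw(img)
draw.ellipse((280, 520, 360, 600), fill=picked)  # rough "scribble" over the flaw
img.save("color_patched.png")

# Feed the patched picture back into the inpaint payload from earlier.
payload["init_images"] = [encode_image("color_patched.png")]
payload["denoising_strength"] = 0.6              # keep it at or below 0.7
```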

6.0 Fixing the lower right arm. (Tip: after each fix, remember to continue from the latest picture when tackling the next problem, otherwise the earlier work is wasted. Surely no one would be that careless.)

6.1 Result View:

6.2 Effect display:
