[Stable Diffusion] ControlNet Basic Tutorial (3)

Continuing from the previous article, [Stable Diffusion] ControlNet Basic Tutorial (2), this article introduces two more common basic uses of ControlNet. For more usage, feel free to follow the blogger, who will keep posting more interesting content.
3.3 Change the skin of the object
Sometimes we don't want to change the outline of an object, only the "skin" on its surface. For example, for the same shoe shape, we can generate different styles of the upper.
(1) In the "img2img" tab, upload (Drop Image Here or Click to Upload) the image of the object whose "skin" you want to replace.
[Image: uploading the image in img2img]
(2) Upload the same image to be processed in ControlNet (Drop Image Here or Click to Upload).
(3) Enable ControlNet. Under "Preprocessor", select "canny", "hed", or "pidinet", and select the corresponding "control_canny" or "control_hed" under "Model". It is worth noting that when "pidinet" is selected as the preprocessor, the corresponding model is still "control_hed". All three preprocessors detect the edges of the image, so choose whichever suits your needs. If you choose "hed", the "Annotator Resolution" slider below becomes "HED Resolution"; the function is the same, adjusting the edge lines. You can click "Preview Annotator Result" below to inspect the preprocessed output and decide whether further adjustment is needed. The parameter locations are shown in the figure:
[Image: ControlNet parameter settings]
An example of the effect after adjustment:
[Image: preview of the preprocessed edge map]
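To make step (3) more concrete, here is a minimal sketch of the kind of edge detection the "canny" preprocessor performs, done with OpenCV outside the WebUI. The file names and threshold values are illustrative assumptions, not the WebUI's exact defaults.

```python
import cv2

# Load the product photo (hypothetical file name).
image = cv2.imread("shoe.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Canny edge detection with illustrative hysteresis thresholds;
# lower values keep more (noisier) edges, higher values keep fewer.
edges = cv2.Canny(gray, threshold1=100, threshold2=200)

# An edge map like this is what ControlNet uses to lock the outline.
cv2.imwrite("shoe_edges.png", edges)
```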

(4) Select a large model (checkpoint). Different large models produce different image styles, so try a few; here I use "chilloutmix_NiPrunedFp32Fix" for testing.
(5) Enter the Prompt and Negative prompt according to your needs. For example, if you want "a pair of blue trendy shoes", you can enter "blue fashionable shoes". Of course, you can also enter other descriptors as needed, such as the color of the shoelaces or whether there is graffiti on the upper.
(6) If you have other needs, you can also combine this with a LoRA, etc.
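For readers who prefer scripting to the WebUI, the same workflow can be approximated with the diffusers library. This is a minimal sketch under stated assumptions: the Hugging Face model IDs ("runwayml/stable-diffusion-v1-5", "lllyasviel/sd-controlnet-canny") stand in for the WebUI's checkpoint and ControlNet model, and "shoe.png" / "shoe_edges.png" are the hypothetical files from the earlier sketch.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

# Load a canny ControlNet and attach it to an SD 1.5 img2img pipeline.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("shoe.png").convert("RGB")           # the original shoe photo
control_image = Image.open("shoe_edges.png").convert("RGB")  # the canny edge map

result = pipe(
    prompt="masterpiece, best quality, design sense, blue fashionable shoes",
    negative_prompt="lowres, bad anatomy, worst quality",
    image=init_image,             # img2img source
    control_image=control_image,  # edge map that locks the outline
    strength=0.8,                 # how much the source image may change
    num_inference_steps=20,
).images[0]
result.save("blue_shoes.png")
```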
The effect is as follows:
[Image: generated result]
(Large model: chilloutmix_NiPrunedFp32Fix; descriptors: masterpiece, best quality, design sense, blue fashionable shoes)
[Image: generated result]
(Large model: chilloutmix_NiPrunedFp32Fix; descriptors: masterpiece, best quality, design sense, green fashionable shoes)
The effect is pretty good! An inspiration library for designers, and good news for Taobao merchants!
3.5 Outdoor landscape generation
We want to generate a good-looking landscape image from just a few strokes of specific colors, using ControlNet's semantic segmentation model.
(1) Paint with specific colors. Because the model used by ControlNet is trained on ADE20K, each color corresponds to a semantic element, as shown in the following table:
[Image: ADE20K color-to-class mapping]
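If you want to paint such a map programmatically rather than in an image editor, here is a minimal sketch using Pillow. The RGB values are assumptions taken from the commonly circulated ADE20K palette; verify them against the color table above before relying on them.

```python
from PIL import Image, ImageDraw

# Assumed ADE20K palette colors (double-check against the table above).
SKY = (6, 230, 230)
TREE = (4, 200, 3)
GRASS = (4, 250, 7)

img = Image.new("RGB", (512, 512), SKY)          # start with a sky-filled canvas
draw = ImageDraw.Draw(img)
draw.rectangle((0, 320, 512, 512), fill=GRASS)   # lower band of grass
draw.ellipse((300, 160, 470, 330), fill=TREE)    # a rough tree shape
img.save("seg_map.png")                          # upload this to ControlNet
```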
Alternatively, use a real landscape photo: perform semantic segmentation first, and then process it further. It is worth noting that in ControlNet you can also select "segmentation" as the preprocessor and "control_seg" as the model to perform the segmentation, but it is relatively slow and the results are not great either. It is recommended to use https://huggingface.co/spaces/shi-labs/OneFormer instead. After opening it:
[Image: OneFormer demo page]

If you are interested in semantic segmentation and other computer vision topics, the blogger will write a dedicated post about them.
Here is a semantic segmentation image:
[Image: semantic segmentation map]

(2) Click Enable in ControlNet and upload the semantic segmentation image. Select "None" as the preprocessor and "control_seg" as the model, add some descriptors to the Prompt and Negative prompt, and then select a large model. If you are interested, you can also add a LoRA and other models or parameters. You can then generate outdoor landscape images.
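As a scripted counterpart to step (2), here is a minimal sketch with diffusers. The model IDs ("runwayml/stable-diffusion-v1-5", "lllyasviel/sd-controlnet-seg") and "seg_map.png" are assumptions standing in for the WebUI's checkpoint, the control_seg model, and your segmentation image.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Load the segmentation ControlNet and attach it to an SD 1.5 pipeline.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

seg_map = Image.open("seg_map.png").convert("RGB")  # the colored segmentation map

image = pipe(
    prompt="masterpiece, best quality, landscape, modern house, scenery, photorealistic, realistic",
    negative_prompt="lowres, worst quality",
    image=seg_map,               # ControlNet conditioning image
    num_inference_steps=20,
).images[0]
image.save("landscape.png")
```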
Image example:
[Image: generated result]
(Large model: realisticVisionV13_v13; descriptors: masterpiece, best quality, landscape, modern house, scenery, photorealistic, realistic)
[Image: generated result]
(Large model: chilloutmix_NiPrunedFp32Fix; descriptors: masterpiece, best quality, landscape, modern house, scenery, photorealistic, realistic)
Isn't it amazing? In the next section, we will use ControlNet to guide characters' poses and make them perform specific actions. This is ControlNet's most magical feature! It means poses can be customized, and that AI drawing has officially entered autonomous and controllable generation! In addition, the blogger will demonstrate the basic usage of ControlNet for architectural/interior image generation. Feel free to like, follow, and bookmark to show your support. Stay tuned for more AI painting tutorials!

Source: blog.csdn.net/weixin_47970003/article/details/130472057