[Stable Diffusion] ControlNet Basic Tutorial (2)

Continuing from the previous article, [Stable Diffusion] ControlNet Basic Tutorial (1), this article introduces two common basic usages of ControlNet. More usages will be covered in future posts, so stay tuned.
3. Basic usage of ControlNet
3.1 Manga line drawing coloring
(1) Upload the image to be processed (Drop Image Here or Click to Upload).
[image]
(2) Enable ControlNet, select "canny" under "Preprocessor", and select the matching "control_canny" under "Model". (If you are not sure where the Enable, Preprocessor, Model, and other ControlNet parameters mentioned later are located, please read the previous post [Stable Diffusion] ControlNet Basic Tutorial (1).)
canny detects the edge information of the input image and extracts a line draft from those edges, so the generated picture keeps the same composition as the original.
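To build intuition for what the canny preprocessor extracts, here is a toy gradient-based edge detector in numpy. This is only a conceptual sketch: the real Canny algorithm used by the preprocessor also applies Gaussian smoothing, non-maximum suppression, and hysteresis thresholding.

```python
import numpy as np

def toy_edge_map(image: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Toy edge detector: mark pixels where the intensity gradient is large.
    Illustrative only -- not the actual Canny implementation."""
    # Horizontal and vertical intensity differences (same shape as input)
    gx = np.abs(np.diff(image, axis=1, prepend=image[:, :1]))
    gy = np.abs(np.diff(image, axis=0, prepend=image[:1, :]))
    magnitude = np.hypot(gx, gy)
    return (magnitude > threshold).astype(np.uint8)

# Synthetic image: dark left half, bright right half -> one vertical edge
img = np.zeros((4, 6))
img[:, 3:] = 1.0
edges = toy_edge_map(img)  # edge pixels fall on the brightness boundary
```

The WebUI does all of this for you when "canny" is selected as the preprocessor; the resulting edge map is what conditions the generation.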
(3) Select a large model. Different large models produce different image styles, so it is worth trying a few; here I test with "dalcfoBm9V2_dalcefoBm9V2" and "chilloutmix_NiPrunedFp32Fix".
(4) Enter the Prompt and Negative prompt according to your needs. For example, if you want blue hair and a green background, enter "blue hair, green background"; you can of course add other descriptors, such as the color of the jacket or the skirt.
(5) ControlNet can be combined with a LoRA to generate images in different styles. For example, here I used the "MoXin" LoRA mentioned in the previous LoRA tutorial (see the post [Stable Diffusion] Basic usage skills of lora for details). The effects are as follows:
[image]
(Large model: dalcfoBm9V2_dalcefoBm9V2
Descriptors: masterpiece, cute girl, blue hair, green background, <lora:MoXinV1:0.7>)
[image]
(Large model: chilloutmix_NiPrunedFp32Fix
Descriptors: masterpiece, cute girl, green background, <lora:MoXinV1:0.7>)
The effect is quite good! You can color line art easily even if you don't know how to draw. This is just a simple demonstration; for more varied results, adjust the corresponding parameters.
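The `<lora:name:weight>` tag used in the descriptors above is the WebUI's inline LoRA syntax: the name must match the LoRA file and the weight (here 0.7) scales its influence. A minimal helper that assembles such a prompt string (the helper itself is hypothetical; the WebUI parses the tag, not this function):

```python
def build_prompt(tags, lora_name=None, lora_weight=0.7):
    """Join comma-separated prompt tags and optionally append a
    WebUI-style <lora:name:weight> tag. Illustrative helper only."""
    prompt = ", ".join(tags)
    if lora_name:
        prompt += f", <lora:{lora_name}:{lora_weight}>"
    return prompt

print(build_prompt(["masterpiece", "cute girl", "blue hair", "green background"],
                   lora_name="MoXinV1", lora_weight=0.7))
```

Lowering the weight (e.g. 0.4) reduces how strongly the LoRA's style overrides the base model.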
3.2 Random graffiti generation
(1) Enable ControlNet, select "scribble" under "Preprocessor", and select the matching "control_scribble" under "Model".
"scribble" means graffiti or doodle; it gives the model more freedom than canny.
(2) Click "Create Blank Canvas".
(3) Adjust the size of the blank canvas with "Canvas Width" and "Canvas Height".
(4) Click the pen icon next to the canvas to adjust the brush size.
[image]
(5) Scribble freely on the canvas.
[image]
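Conceptually, the scribble input is nothing more than a white canvas with black strokes on it. A minimal numpy sketch of steps (2)-(5) (this is not the WebUI's actual implementation, just an illustration of the data it produces):

```python
import numpy as np

def blank_canvas(width=512, height=512):
    """White RGB canvas, like the one 'Create Blank Canvas' gives you."""
    return np.full((height, width, 3), 255, dtype=np.uint8)

def draw_stroke(canvas, x0, y0, x1, y1):
    """Draw a crude 1-pixel black line by naive interpolation (sketch only)."""
    n = max(abs(x1 - x0), abs(y1 - y0)) + 1
    xs = np.linspace(x0, x1, n).round().astype(int)
    ys = np.linspace(y0, y1, n).round().astype(int)
    canvas[ys, xs] = 0  # black pixels where the "pen" passed
    return canvas

canvas = blank_canvas(64, 64)
draw_stroke(canvas, 5, 5, 50, 30)  # one random scribble stroke
```

ControlNet then uses this black-on-white doodle as a loose compositional guide for the generated image.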
(6) Choose a large model; again, it is worth experimenting with several, since different models give different effects.
(7) Enter the Prompt and Negative prompt according to your needs; here I entered "sunshine, beach, sea waves, cactus".
(8) It can still be combined with LoRA and other models to generate images with different effects.
Examples of the results are as follows:
[image]
(Large model: chilloutmix_NiPrunedFp32Fix
Descriptors: masterpiece, best quality, sun, waves, cacti, house on the beach)
[image]
(Large model: dalcfoBm9V2_dalcefoBm9V2
Descriptors: masterpiece, best quality, sun, waves, cacti, house on the beach, <lora:MoXinV1:0.7>)
These are just two simple usages of ControlNet; richer and more useful operations will be covered later. If you found this helpful, please like, follow, and bookmark. Stay tuned for more AI painting tutorials!

Origin blog.csdn.net/weixin_47970003/article/details/130465992