Today I want to show you how to use Stable Diffusion to add scenes to your products and turn them into advertising blockbusters in seconds.
Once you master this skill, you can place your product in any scene you like, saving yourself complicated shooting setups and shooting costs.
Without further ado, let's walk through the process in detail.
First, select a picture of your product. If it has a background, it is recommended to cut the product out with a cutout tool, because we will need the cutout later.
Then open Stable Diffusion. We need to create a realistic scene with text-to-image (txt2img), so choose a photorealistic model here.
Next, fill in what we want in the positive prompt and negative prompt boxes.
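As a side note, a positive prompt is usually just a comma-separated list of keywords, and the negative prompt works the same way. A minimal sketch (the keywords below are illustrative examples, not the exact prompts used in this tutorial):

```python
# Assemble comma-separated prompt strings from keyword lists.
# The keyword lists here are made-up examples for illustration.

def build_prompt(keywords):
    """Join a list of keywords into a comma-separated prompt string."""
    return ", ".join(k.strip() for k in keywords if k.strip())

positive = build_prompt([
    "product photography", "marble tabletop", "soft studio lighting",
    "photorealistic", "high detail",
])
negative = build_prompt([
    "blurry", "lowres", "watermark", "text", "deformed",
])

print(positive)
print(negative)
```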
Then come the sampling methods, of which there are many.
Before using them, you need to understand what sampling is. In short: before Stable Diffusion generates an image, the model starts from a picture full of noise.
The noise predictor then goes to work, subtracting the predicted noise from the image and repeating the process until a clear image is obtained.
This whole process of removing noise is called sampling. There are many sampling methods, and we need to choose a suitable one for pictures of different styles.
For our photorealistic model, the DPM++ samplers with the Karras schedule are a good fit; here you can choose DPM++ 2M SDE.
More iteration steps naturally mean a more precise result but a longer generation time, and once the sampler's step count passes a certain point, the clarity barely changes any more.
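The diminishing returns can be seen even in a toy model of sampling. The sketch below is not a real diffusion sampler; it just removes a fixed fraction of the "predicted" noise each step (with a perfect predictor), so the error shrinks quickly at first and then flattens:

```python
import numpy as np

# Toy illustration of sampling, NOT a real diffusion sampler: start from pure
# noise and repeatedly subtract a fraction of the "predicted" noise. Here the
# predictor is perfect: the difference between the image and a clean target.
rng = np.random.default_rng(0)
target = np.zeros((8, 8))           # stand-in for the clean image
image = rng.normal(size=(8, 8))     # start from pure noise

errors = []
for step in range(30):
    predicted_noise = image - target
    image = image - 0.3 * predicted_noise   # remove 30% of the noise per step
    errors.append(np.abs(image - target).mean())

# The error drops fast at first, then barely changes:
print(f"after  5 steps: {errors[4]:.4f}")
print(f"after 20 steps: {errors[19]:.6f}")
print(f"after 30 steps: {errors[29]:.6f}")
```

The later steps change the image very little, which is the intuition behind not cranking the step count endlessly.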
That is why the default of around 20 is the step count we commonly use. With the iteration steps and sampling method set, ControlNet is needed next.
To put it simply, its job is to use your image as a reference, letting Stable Diffusion optimize and create along the outline of the sample image, which is how we add a scene to the product. Select the product image you just cut out and drop it into ControlNet.
Tick Enable, select Canny as the preprocessor here, and select the corresponding Canny model as well.
What Canny does is turn the picture content into a line drawing and use that line drawing to constrain what SD generates, so that our output pictures do not change too much.
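For intuition, the core of Canny is finding strong brightness gradients and keeping them as lines. Here is a simplified numpy sketch of that core idea (real Canny also adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding):

```python
import numpy as np

# Simplified sketch of edge detection: mark pixels where the brightness
# gradient is strong. Real Canny adds smoothing, non-maximum suppression
# and hysteresis on top of this gradient-threshold idea.

def edge_map(img, threshold=0.5):
    """Return a binary edge map from horizontal/vertical gradients."""
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    return (np.hypot(gx, gy) > threshold).astype(np.uint8)

# A white square on a black background: edges appear only at its border.
img = np.zeros((10, 10))
img[3:7, 3:7] = 1.0
edges = edge_map(img)
print(edges.sum(), "edge pixels")
```

The real preprocessor in the web UI produces a similar white-on-black line drawing, which the Canny ControlNet model then uses as a structural constraint.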
After the steps above, you can generate the picture.
Of course, the picture may not be satisfactory at this point. You can adjust the ControlNet weight
to control how similar the output image is to the original: the higher the weight, the closer it stays to the original, and vice versa. At this step, we do not need the generated image to match the original exactly.
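One way to picture the weight: treat the output as pulled between the free generation and the control image. The toy model below is only an analogy, not ControlNet's actual mechanism:

```python
import numpy as np

# Toy analogy for the ControlNet weight (NOT the actual algorithm): the
# output is interpolated between an unconstrained generation and the
# control image. Higher weight -> closer to the original structure.
rng = np.random.default_rng(1)
control = np.ones((4, 4))           # stand-in for the product outline
free = rng.normal(size=(4, 4))      # stand-in for unconstrained generation

def generate(weight):
    return weight * control + (1 - weight) * free

for w in (0.2, 0.8):
    diff = np.abs(generate(w) - control).mean()
    print(f"weight={w}: mean deviation from control = {diff:.3f}")
```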
What matters here is that the bottom of the product and the area where it touches the ground stay intact, and that the overall picture has the style you want; don't worry about the details in the rest of the picture. If the elements and style are too far from what you want, adjust the keywords, regenerate repeatedly, and finally pick one generated picture.
If the product or the surrounding elements differ too much from what we expected, we can edit the image in Photoshop and paste the product cutout from earlier back in.
At the same time, you can add some elements to the surrounding space in the picture to dress it up.
As for the tricky relationship between light and shadow, Stable Diffusion can handle that too. Switch to img2img, open local redraw (inpainting), and drop the picture in.
Copy the positive and negative prompts from the txt2img step, then paint a mask over the product in the middle; the purpose is to protect the product from being redrawn by the software. When painting the mask, be careful not to go past the product's edges, so that the product blends better with the background.
Here we choose "redraw non-mask area", so that SD only redraws the areas outside the product.
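At the pixel level, "redraw the non-mask area" amounts to a masked composite: keep the product pixels, take everything else from the new generation. A minimal sketch with stand-in arrays:

```python
import numpy as np

# Masked composite: mask == 1 marks the protected product area, which is
# kept from the original; everything else comes from the new generation.

def composite(original, generated, mask):
    return mask * original + (1 - mask) * generated

original = np.full((6, 6), 0.9)     # stand-in: the product photo
generated = np.full((6, 6), 0.1)    # stand-in: the redrawn background
mask = np.zeros((6, 6))
mask[2:4, 2:4] = 1.0                # the product occupies the center

out = composite(original, generated, mask)
print(out[2, 2], out[0, 0])
```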
Of course, choose DPM++ 2S SDE as the sampling method, and keep the image size the same as the earlier txt2img image.
Continue by opening ControlNet and putting in the picture from just now.
This time I still choose Canny as the model; the principle is the same as before, to avoid big changes in the picture's structure. Click Generate to output the picture, regenerate a few times, and pick a result you are satisfied with.
It's not bad, right? Both the light-and-shadow relationship and the color tone are fairly consistent. Finally, I adjusted the colors in Photoshop, added some copy, and the product poster was done.
In conclusion