Exploring the Production Process of High-Definition Artistic QR Codes: AI Painting with Image-to-Image (img2img)

In the previous article, "AI Production of Artistic QR Codes: Text-to-Image (txt2img)", I introduced a method for generating high-quality QR codes directly from prompts. However, prompts alone give little control over the style of the generated picture, which is a problem for readers who want to blend their own logo or avatar into the QR code. To meet such needs, you need the image-to-image (img2img) approach introduced in this article.

Let's first look at a few QR codes I generated (due to platform restrictions, they have been mosaicked):

 

This article demonstrates the process by blending a portrait photo into a QR code. The results I produced are admittedly a bit hard to look at, so please focus on the workflow rather than the pictures themselves. With the tips below, you will certainly be able to synthesize better-looking QR codes.

Basic model settings

The tool we use is Stable Diffusion WebUI, and the base model is GuoFeng3, a model that is particularly well suited to Chinese-style portraits and has a 2.5D look.

Basic img2img settings

1. Open the img2img tab in Stable Diffusion WebUI and upload the picture you want to blend into the QR code. Here I selected a portrait image I generated previously.

2. Click "CLIP Reverse Derivation of Prompt Words" to derive the prompt words. You can search the reverse prompt words online according to the situation. Why do we need prompt words? Because this generation method collects the outline of the basic image, and we also need SD to add details.

Prompt: a girl with long hair and blue eyes, transparent background
Negative prompt: paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale))
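
If you drive the WebUI through its HTTP API instead of the browser, the same CLIP interrogation can be requested programmatically. This is only a minimal sketch assuming the AUTOMATIC1111 WebUI is running locally with the --api flag; the URL and file name are placeholders.

```python
import base64
import requests

WEBUI_URL = "http://127.0.0.1:7860"  # assumed local WebUI started with the --api flag

# Read the base picture and encode it as base64, which the API expects.
with open("base_portrait.png", "rb") as f:  # placeholder file name
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Ask the WebUI to interrogate the image with CLIP and return a caption.
resp = requests.post(
    f"{WEBUI_URL}/sdapi/v1/interrogate",
    json={"image": image_b64, "model": "clip"},
)
resp.raise_for_status()
print("Suggested prompt:", resp.json()["caption"])
```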

3. img2img parameter settings.

  • Sampler: DPM++ 2S a Karras
  • Sampling steps: 30
  • Size: 768×768
  • CFG Scale (prompt guidance): 7
  • Denoising strength: 0.75
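
If you prefer scripting to clicking through the web page, these parameters map onto an img2img API request. Below is a minimal sketch assuming the AUTOMATIC1111 WebUI is running locally with the --api flag; the file names, checkpoint name, and exact field strings are assumptions and should be matched to your own installation. The ControlNet units are added in the next section.

```python
import base64
import requests

WEBUI_URL = "http://127.0.0.1:7860"  # assumed local WebUI started with the --api flag

def b64(path: str) -> str:
    """Read an image file and return it as a base64 string for the API."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [b64("base_portrait.png")],  # picture to blend into the QR code (placeholder name)
    "prompt": "a girl with long hair and blue eyes, transparent background",
    "negative_prompt": (
        "paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), "
        "lowres, normal quality, ((monochrome)), ((grayscale))"
    ),
    "sampler_name": "DPM++ 2S a Karras",  # exact sampler string may differ between WebUI versions
    "steps": 30,                           # sampling steps
    "width": 768,
    "height": 768,
    "cfg_scale": 7,                        # prompt guidance scale
    "denoising_strength": 0.75,            # redraw strength
    "override_settings": {
        "sd_model_checkpoint": "GuoFeng3"  # assumed checkpoint name; use the name shown in your WebUI
    },
}

resp = requests.post(f"{WEBUI_URL}/sdapi/v1/img2img", json=payload)
resp.raise_for_status()
images = resp.json()["images"]  # list of base64-encoded result images
```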

ControlNet settings

Two ControlNet units are used here; their settings are described below.

1. ControlNet Unit0 settings

Upload the base image and enable this ControlNet unit. Its job is to control the character's pose.

Set the Control Type to OpenPose. Normally the preprocessor and model are selected automatically; if they are not, select them manually. Note that the Control Weight is set to 1 here.
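
For readers following the API sketch from the previous section, this unit corresponds roughly to the dictionary below. The field and model names follow the sd-webui-controlnet extension's API as I understand it and may differ between versions.

```python
import base64

def b64(path: str) -> str:
    """Encode an image file as base64 for the WebUI API."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

# ControlNet Unit 0: keeps the character's pose from the base portrait.
controlnet_unit0 = {
    "enabled": True,
    "input_image": b64("base_portrait.png"),  # same base image as in img2img (placeholder name)
    "module": "openpose",                     # OpenPose preprocessor
    "model": "control_v11p_sd15_openpose",    # assumed ControlNet OpenPose model name
    "weight": 1.0,                            # Control Weight = 1
}
```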

2. ControlNet Unit1 settings

This ControlNet unit's job is to draw the QR code, so upload the QR code image here.

Set the Control Type to Tile, which is good at enlarging images while controlling fine detail. Select the matching preprocessor and model.

Between the original picture and the QR code image, the QR code matters more, so this unit's weight should be set higher than Unit 0's; otherwise the result will be hard to scan.

We also need to control the starting and ending steps at which this unit intervenes in the drawing. The starting step cannot be 0, otherwise the picture itself will not be drawn.
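
Continuing the same API sketch, the Tile unit might look like the dictionary below: a higher weight than the pose unit, and an intervention window that starts after step 0. The concrete weight and start/end fractions are illustrative values, not settings from this article, and the field names again follow the sd-webui-controlnet extension as I understand it.

```python
import base64

def b64(path: str) -> str:
    """Encode an image file as base64 for the WebUI API."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

# ControlNet Unit 1: imprints the QR code pattern onto the picture.
controlnet_unit1 = {
    "enabled": True,
    "input_image": b64("qrcode.png"),      # the QR code image (placeholder name)
    "module": "tile_resample",             # Tile preprocessor (name may vary by version)
    "model": "control_v11f1e_sd15_tile",   # assumed ControlNet Tile model name
    "weight": 1.3,                         # higher than the pose unit so the code stays scannable (illustrative)
    "guidance_start": 0.35,                # must be greater than 0, or the picture is not drawn (illustrative)
    "guidance_end": 0.75,                  # stop intervening early so details can settle (illustrative)
}

# Both units attach to the img2img payload from the earlier sketch:
# payload["alwayson_scripts"] = {"controlnet": {"args": [controlnet_unit0, controlnet_unit1]}}
```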

Generate

Finally, click Generate and check the result (due to platform restrictions, mosaics have been added):

Notes

There needs to be a balance between the beauty of the picture and the recognizability of the QR code. Sometimes the generated code cannot be scanned, or cannot be recognized by long-pressing it in WeChat. You can regenerate a few times, or adjust the ControlNet weights and the starting and ending steps of their intervention.
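
If you generate many candidates, a quick automated check can filter out results that no longer decode. The snippet below is a minimal sketch using OpenCV's built-in QR detector; the file names are placeholders, and WeChat's long-press recognition may still behave differently from this detector.

```python
import cv2  # pip install opencv-python

def is_scannable(path: str) -> bool:
    """Return True if OpenCV's QR detector can decode the image at `path`."""
    img = cv2.imread(path)
    if img is None:
        return False
    data, points, _ = cv2.QRCodeDetector().detectAndDecode(img)
    return bool(data)

# Keep only the generated images that still decode as a QR code.
candidates = ["art_qr_01.png", "art_qr_02.png", "art_qr_03.png"]  # placeholder file names
good = [p for p in candidates if is_scannable(p)]
print("Scannable results:", good)
```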

The choice of model also has a large impact on the result. 2.5D or 3D models are recommended because they produce usable images more easily, and the ControlNet parameters may need adjusting for different models.

When blending facial photos, the colors of the QR code tend to make the result less attractive. You can try changing the QR code's colors, or blend in other pictures that do not need as much beautification.


That is the main content of this article. I will continue to share AIGC topics in the future; if you are interested, please follow me (WeChat public account: Yinghuo Walk AI) so you don't miss anything.
