Beginner-friendly stable-diffusion-webui tutorial


Configuration requirements


To run stable-diffusion-webui and its models smoothly, you need enough video memory: 4 GB VRAM is the bare minimum, 6 GB is a reasonable baseline, and 12 GB is recommended. System RAM should not be too small either, ideally more than 16 GB. In short, the more memory the better. The card in my case is an NVIDIA GeForce GTX 1060 Ti (5 GB); this ancient card really struggles with AI drawing, but it does work. Anything above this minimum configuration can run the project; it does not matter if generation is slow when you are just practicing.

1. Download the project

Pull the project source code with git:

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git 

Notice: this open-source project is updated very frequently; bugs are fixed and new features are added from time to time, so it is recommended to pull the latest code with git pull periodically. Experienced users can skip steps 1, 2, and 3.

Download and install git

Super detailed Git installation tutorial (Windows)

2. Python environment

> stable-diffusion-webui is developed mainly in Python, so to run the project you need to install Python and configure its environment variables.

Notice: the official recommendation is to install Python 3.10.6. Version 3.10.6 download address

Use python --version to check the current version.

In addition, it is recommended to use Anaconda to manage multiple Python environments; see:

Official conda environment installation instructions:
anaconda common commands
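
Since the webui is sensitive to the Python version, the version check above can be scripted. This is just a sketch of my own (the function name and messages are not part of the project); it restates the tutorial's recommendation of the 3.10 line:

```python
import sys

# The tutorial recommends Python 3.10.6; warn when the interpreter differs.
RECOMMENDED = (3, 10)

def check_python() -> str:
    found = f"{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}"
    if sys.version_info[:2] == RECOMMENDED:
        return f"OK: Python {found}"
    return f"Warning: Python {found} found, 3.10.x recommended for stable-diffusion-webui"

print(check_python())
```

Running this inside each conda environment is a quick way to confirm which interpreter the webui will pick up.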

3. CUDA environment

By default, stable-diffusion-webui uses the GPU for computation, which means it needs an NVIDIA graphics card (the higher the spec, the faster the drawing). AMD cards are not suitable; I will say it three times: AMD cards are not suitable, AMD cards are not suitable (CPU compute is vastly slower than GPU compute; the webui can fall back to CPU via a parameter, but if you have the option, get an NVIDIA card). Next we need to install CUDA. First confirm which CUDA version your computer supports: in the lower right corner of the desktop, right-click the NVIDIA settings icon and open the NVIDIA Control Panel:


In my case, the control panel shows driver support for NVIDIA CUDA 11.6.134, so the CUDA version I install cannot exceed 11.6.

Notice: a newer graphics card can run an older CUDA release. For example, I could also install the classic 10.2 version, but installing 11.6 yields better GPU efficiency, so it is generally recommended to install the highest CUDA version the graphics card supports.

Find the matching CUDA version at the following URL and install it:

CUDA official archive

Simply choose the "Lite" installation. After it completes, verify that CUDA installed successfully by checking the version with the following command:

nvcc --version
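
The same verification can be scripted. The sketch below (the function is my own, not part of the project) looks for nvcc on the PATH, which only appears after a successful CUDA toolkit install:

```python
import shutil
import subprocess
from typing import Optional

def cuda_toolkit_version() -> Optional[str]:
    """Return nvcc's version output if the CUDA toolkit is installed, else None."""
    nvcc = shutil.which("nvcc")
    if nvcc is None:
        return None  # toolkit not installed, or its bin directory is not on PATH
    result = subprocess.run([nvcc, "--version"], capture_output=True, text=True)
    return result.stdout

print(cuda_toolkit_version() or "CUDA toolkit not found on PATH")
```

If this returns None even after installing, the CUDA bin directory is probably missing from your PATH environment variable.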

Notice: if you don't have an NVIDIA graphics card, you can also pass the --use-cpu sd parameter to stable-diffusion-webui to make it run on CPU compute, but this is not recommended. The gap between CPU and GPU compute is huge: a picture the GPU draws in 10 seconds can take the CPU 10 minutes, and that is no joke. In addition, if your graphics card has little VRAM, add the --medvram startup parameter for 4 GB cards, or the --lowvram startup parameter for 2 GB cards.

Windows users: edit the webui-user.bat file and modify its sixth line:

set COMMANDLINE_ARGS=--lowvram --precision full --no-half --skip-torch-cuda-test

If you have a 16-series graphics card and the generated images come out black, modify the sixth line of webui-user.bat to:

set COMMANDLINE_ARGS=--lowvram --precision full --no-half
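
The VRAM advice above can be summarized in a small helper. This is only a sketch of my own: the flag names are real webui startup parameters, but the thresholds and the function simply restate the tutorial's recommendations:

```python
def vram_flags(vram_gb: int, is_gtx16xx: bool = False) -> str:
    """Suggest COMMANDLINE_ARGS flags per the tutorial's VRAM guidance."""
    flags = []
    if vram_gb <= 2:
        flags.append("--lowvram")   # 2 GB cards
    elif vram_gb <= 4:
        flags.append("--medvram")   # 4 GB cards
    if is_gtx16xx:
        # 16-series black-image workaround
        flags += ["--precision", "full", "--no-half"]
    return " ".join(flags)

print(vram_flags(4, is_gtx16xx=True))
```

Paste the resulting string after `set COMMANDLINE_ARGS=` in webui-user.bat.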

2. Download the weight file sd-v1-4.ckpt
This is a weight file required for Stable Diffusion to run, about 4 GB. Download it from Hugging Face and put it in the models/Stable-diffusion directory:

https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
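
As a sketch of where the file needs to end up (the helper and the root directory name are my own; adjust them to wherever you cloned the project):

```python
from pathlib import Path

def checkpoint_destination(webui_root: str, filename: str) -> Path:
    """Path where a downloaded checkpoint must live so the webui can find it."""
    return Path(webui_root) / "models" / "Stable-diffusion" / filename

print(checkpoint_destination("stable-diffusion-webui", "sd-v1-4.ckpt"))
```

Any .ckpt or .safetensors model file mentioned later in this article goes into the same directory.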

3. Download the model file used in this article
It is mainly for photorealistic-style image generation (download it and give it a try; pairing it with LoRA models is also recommended):

chilloutmix_NiPrunedFp32Fix.safetensors

You can download it from Civitai (3.97 GB); after the download completes, put it in the models/Stable-diffusion directory:

https://civitai.com/models/6424/chilloutmix


4. Start the project

After installing and configuring the environment, simply run the webui-user.bat file in the project directory (on Unix-like systems, run webui-user.sh).

The first startup automatically downloads the required Python libraries (see requirements.txt in the project), as well as configuration and model files the project needs (for example v1-5-pruned-emaonly.safetensors, nearly 4 GB). After this one-time initialization, subsequent startups are fast.

Launching Web UI with arguments: ...
Running on local URL:  http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.

Seeing this output means the program is running successfully. Open the URL to see its interface:

Reminder: the project's interface is in English; you can install an extension to localize it into Chinese.

Update (4-12):

Tips: how to install the webUI Simplified Chinese language pack
This extension can be installed directly from the official extension list on the Extensions tab

Official Download
Click the Extensions tab, then the Available sub-tab,
uncheck localization, keep the others checked, and click the orange button

Click Install to the right of zh_CN Localization

Install via URL
Click the Extensions tab, then the Install from URL sub-tab.
Copy this git repository URL:

https://github.com/dtlnor/stable-diffusion-webui-localization-zh_CN

Paste it into the URL field and click Install

The installation is complete.

Configuration
Restart the webUI to make sure the extension is loaded: on the Settings tab, click the orange Reload UI button in the upper right corner of the page to refresh the extension list

On the Extensions tab, make sure the extension is checked ☑️; if it is not checked, click the orange button to enable the extension.

Select the Simplified Chinese language pack (zh_CN)
On the Settings tab, find the User interface sub-section

Then scroll to the bottom of the page, find the Localization (requires restart) setting, and select zh_CN in the drop-down menu (if it does not appear, click the refresh button next to it)

Then click the orange Apply settings button at the top left of the page to save, and click the orange Reload UI button next to it to restart the webUI

The localization is now in effect.

5. Start using

stable-diffusion-webui has many functions, mainly as follows:

Text-to-image (txt2img): generates an image matching the description in the prompt (Prompt).

Image-to-image (img2img): generates a new image from an existing image, guided by the prompt (Prompt).

1. Text-to-image (txt2img)
Before using text-to-image, you should understand the meaning of its parameters. See the following articles for details:

https://zhuanlan.zhihu.com/p/574063064
https://baijiahao.baidu.com/s?id=1758865024644276830&wfr=spider&for=pc

Next, let's generate a cyberpunk-style cat picture. Configure the following parameters and click "Generate":

Prompt: a cute cat, cyberpunk art, by Adam Marczyński, cyber steampunk 8 k 3 d, kerem beyit, very cute robot zen, beeple
Negative prompt: (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation, flowers, human, man, woman
CFG scale: 6.5
Sampling method: Euler a
Sampling steps: 26
Seed: 1791574510

Notice: the more detailed the prompt (Prompt), the more accurate the AI drawing result. Also, Chinese prompts currently work poorly, so English prompts have to be used.
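
For reference, the same parameters can also be expressed as a payload for the webui's optional HTTP API, which becomes available at /sdapi/v1/txt2img when the webui is started with the --api flag. Treat this as a sketch rather than the article's workflow, which uses the browser interface:

```python
import json

# The cat example's parameters, shaped for the webui's txt2img API.
payload = {
    "prompt": ("a cute cat, cyberpunk art, by Adam Marczyński, cyber steampunk 8 k 3 d, "
               "kerem beyit, very cute robot zen, beeple"),
    "negative_prompt": ("(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, "
                        "wrong anatomy, extra limb, missing limb, floating limbs, "
                        "(mutated hands and fingers:1.4), disconnected limbs, mutation, "
                        "mutated, ugly, disgusting, blurry, amputation, flowers, human, "
                        "man, woman"),
    "cfg_scale": 6.5,
    "sampler_name": "Euler a",
    "steps": 26,
    "seed": 1791574510,
}

# To actually generate, POST this to a running instance, e.g.:
#   requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
print(json.dumps(payload, indent=2))
```

Fixing the seed as above is what makes a result reproducible across runs.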


2. Model file

Why is the Stable Diffusion checkpoint value in the upper left corner of the screenshot above different from the earlier one? Because I switched model files. Remember the nearly 4 GB model file (v1-5-pruned-emaonly.safetensors) mentioned earlier? That is the default model of stable-diffusion-webui; the pictures it generates are not great, so I switched to a different one. There are several websites for downloading model files; the best known is Civitai, where people share models they have trained.
Model file download addresses:

civitai: civitai.com/default
v1-5-pruned-emaonly: https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main

Choose a model to browse according to the image style you want to generate (e.g. animation, landscape). The text-to-image example earlier used the Deliberate model. Click "Download Latest" to download the model file.


Notice: model files come in two formats, .ckpt (Model PickleTensor) and .safetensors (Model SafeTensor). The .safetensors format is said to be safer. stable-diffusion-webui supports both formats, so download whichever you like.
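
The distinction can be illustrated with a tiny helper of my own (a sketch, not project code). The safety claim comes down to this: .ckpt files are Python pickles, which can in principle execute code when loaded, while .safetensors stores raw tensors only:

```python
from pathlib import Path

def model_format(path: str) -> str:
    """Classify a Stable Diffusion model file by its extension.

    .ckpt is a Python pickle (can run arbitrary code on load);
    .safetensors is a plain tensor container, hence considered safer.
    """
    ext = Path(path).suffix.lower()
    return {".ckpt": "PickleTensor", ".safetensors": "SafeTensor"}.get(ext, "unknown")

print(model_format("chilloutmix_NiPrunedFp32Fix.safetensors"))
```

Either format goes into the same models/Stable-diffusion directory described below.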

Put the downloaded model file in the stable-diffusion-webui\models\Stable-diffusion directory:

After placing the model file, restart stable-diffusion-webui (run webui-user.bat again) so the new model is recognized.

These model pages generally come with a set of sample images. Click any of them to see the parameter configuration that generated it:

Enter these parameters into stable-diffusion-webui and click "Generate" to produce a picture with a similar effect.
Note: because AI drawing is random, the generated picture may not match the sample exactly.

There is a lot to discover in the text-to-image feature; you can use it to generate one-of-a-kind pictures. To use it well, the most important thing to master is the prompt (Prompt), which follows its own grammar rules.

Reprinted from this Zhihu article: https://zhuanlan.zhihu.com/p/617997179

Origin blog.csdn.net/qq_47272950/article/details/130109318