A free alternative to Midjourney and Adobe Firefly AI drawing: quickly deploying and using Stable Diffusion locally on Windows (even beginners can learn it)


Recently my WeChat Moments have been full of dazzling AI-generated images of every kind: text-to-image, image-to-image and so on. I was tempted too, so I rushed to study Midjourney and Adobe Firefly, which are regarded as the strongest tools in the field, and wanted to try them out. But the whole world is just as enthusiastic: Midjourney has been swamped by people chasing freebies, and the 25 free generations you used to get just for registering have now been shut off as well.

So I decided to look for something open source and free: unlimited generations, fast generation, no queueing, far more freedom, no NSFW restrictions, and far more knobs to debug and personalize. After some exploration I found that most of the powerful AI image-generation tools are basically built on the Stable Diffusion framework and its models. It is fair to say that Stable Diffusion is currently one of the most widely used and most effective open-source AI drawing tools, and it is hugely popular right now.

Now let's see how to deploy the Stable Diffusion framework and model locally.

Foreword

If you can use the major online AI drawing platforms to generate images, that is roughly elementary-school level proficiency.

If you can run AI painting with a locally deployed Stable Diffusion, that is more like a high-school graduate: you already have some real drawing skills.

Of course, if you want to reach university-graduate level, that is, to make the AI draw whatever you want, you will need to understand more of the technical details and build up your proficiency step by step.


1. Requirements

There are some basic requirements for localized deployment to run Stable Diffusion:

(1) You need an NVIDIA graphics card, a GTX 1060 or better, with more than 4G of video memory. (An RTX 3080-class card is no longer required; the bar is quite affordable.)

(2) The operating system requires win10 or win11 system.

(3) The computer memory is 16G or above.

(4) It is best to have a working proxy or VPN (the "magic" for getting past network blocks); otherwise the connection may be flaky, some pages will not open, and downloads can be very slow.

(5) Be patient, try more, and search more.

My computer is an HP workstation with 16G of video memory and 64G of RAM; normally such a high configuration is not required. For reference, a friend's machine is Win11, an i5 CPU, an NVIDIA GTX 1060 with 5G of video memory, and 16G of RAM.

On my friend's computer a 20-step image takes about 20-30 seconds; on mine it takes about 5-6 seconds. A better computer really does generate images much faster.
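If you are not sure how much video memory your own card has, you can check before you start. A quick way (this assumes the NVIDIA driver is already installed; RAM can be checked in Task Manager) is to open the Windows command window and run:

nvidia-smi

The table it prints shows your GPU model and its total video memory.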

2. Stable diffusion WebUI project

If you deploy the stable diffusion project directly, you basically have to work through a pure code interface, which is not very friendly to non-programmers. So I chose stable diffusion webui, a visual front end built on top of the stable diffusion project. Operating it through a web page makes it much more convenient to tweak prompts and parameters, and it adds many extra features, such as the img2img function and the extras image-upscaling function.

Therefore, the stable diffusion webui project is the first choice for many people deploying locally, and this tutorial also uses it as the example.

3. Installation of basic computer tools and environment configuration

1. Install Anaconda

Download and install Anaconda from the Anaconda official website; see the official tutorial there for details.
If Anaconda feels too large, you can instead download and install Miniconda from https://docs.conda.io/en/latest/miniconda.html, again following the official installation steps.

2. Configure the conda environment

Open the Windows command window (type cmd in the search box and open it), then enter the following command:

conda config --set show_channel_urls yes 

Right-click the Anaconda icon and check its properties to find the root directory where Anaconda is installed, then open that directory and locate the .condarc file.
Open the .condarc file with Notepad, replace all of its contents with the block below, press Ctrl+S to save, and close the file.

channels:
 - defaults
show_channel_urls: true
default_channels:
 - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
 - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r
 - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2
custom_channels:
 conda-forge: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
 msys2: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
 bioconda: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
 menpo: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
 pytorch: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
 pytorch-lts: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
 simpleitk: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud

Run conda clean -i in the Windows command window to clear the index cache and make sure the mirror addresses are the ones being used.

conda clean -i 
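To confirm that conda has picked up the edited .condarc, you can also print the active configuration sources:

conda config --show-sources

The Tsinghua mirror addresses you pasted in should show up under your .condarc file.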

3. Create python environment

Run the following statement in the windows command window to create a python 3.10.6 environment.

conda create --name stable-diffusion-webui python=3.10.6

The system may prompt y/n, enter y and press Enter.

When done is displayed at the end, the environment has been created.
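You can double-check that it worked by listing all conda environments; stable-diffusion-webui should appear in the list:

conda env list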


4. Activate the python environment and upgrade pip

Continue to enter conda activate stable-diffusion-webui in the windows command window and press Enter.

conda activate stable-diffusion-webui

Then upgrade pip in the same window:

python -m pip install --upgrade pip

Then set pip's default package download address to the Tsinghua mirror:

pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
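At this point it is worth a quick sanity check that you are in the right environment and that the mirror was registered:

python --version
pip config list

The first command should print Python 3.10.6, and the second should show the Tsinghua index URL you just set.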

5. Install git

Installing git is not particularly difficult: basically just click Next all the way through. You can refer to a detailed git installation tutorial if needed, and download the installer from the official git website.

After the installation completes, run the following command in the Windows command window; if it prints the installed git version, the installation succeeded.

git -v


6. Install cuda

CUDA is NVIDIA's toolkit for running computations on the graphics card, and stable diffusion depends on it, so we must install it. Open the NVIDIA CUDA official website (if it will not open, go through your proxy).

Run the following command in the Windows command window to check the highest CUDA version your driver supports:

nvidia-smi


For example, if it shows 11.7, download the 11.7 release. My graphics card is fairly high-end and reports 12.0, but I still chose the lower CUDA 11.7, mainly because the torch used in the stable diffusion project does not necessarily support the newest CUDA versions.

Then, according to your own system, choose Win10 or 11, exe (local), and Download (if the download is slow, go through your proxy).


After downloading, run the installer. It is about 2.5G and can be installed on a drive other than C.
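Once the installer finishes, you can verify the CUDA toolkit from a new Windows command window (nvcc is only found if the installer was allowed to add CUDA to the PATH, which is the default):

nvcc --version

It should report the release you picked, for example 11.7.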

At this point the basic environment setup of the computer is finally finished. Next we start installing stable diffusion.

5. Stable diffusion environment configuration and installation

1. Download stable diffusion source code

Run the following in the Windows command window to switch to a drive with more free space (it is best not to use the C drive directly, mainly because stable diffusion will download quite a few models, which take up a lot of disk space, generally at least 10G). For example, for the E drive, type e: and press Enter.

e:


Next clone the stable diffusion webui project (hereinafter referred to as sd-webui)

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git

If done is displayed at the end, the project has been cloned to the drive you chose, for example e:\stable-diffusion-webui
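If you have cloned it before and just want to bring it up to date later, there is no need to clone again; run git pull inside the project directory instead:

cd stable-diffusion-webui
git pull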

If the clone is slow, it is recommended to turn on your proxy and then follow the steps below to make the Windows command window use it.

Configure the Windows command window to use the proxy

First, check which local port your proxy software listens on. In my case it is 10810; with other proxy tools it may be 1080, 10809 or similar. If you really cannot find out where to look, try them in order and you should find the right one.
Then enter the following two commands one after the other and press Enter. Here 10810 is the port found above; if yours is 1080, replace it with 1080.

set http_proxy=http://127.0.0.1:10810
set https_proxy=http://127.0.0.1:10810

Finally, check whether the proxy works with the command below. Note that you should use curl here, not ping; if a pile of HTML comes back, the connection went through.

curl https://www.google.com
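Note that the two set commands above only last for the current command window. If you prefer a setting that persists across windows, you can also point git itself at the same proxy (10810 is just the example port from above; replace it with yours, and remove the setting later with git config --global --unset http.proxy):

git config --global http.proxy http://127.0.0.1:10810
git config --global https.proxy http://127.0.0.1:10810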


2. Download the training model of stable diffusion

In the Stable Diffusion model repository, click the Files and versions tab and download the training model sd-v1-4.ckpt.


Note: this model is the base library of drawing elements that subsequent AI image generation builds on.

If you later want to use other models such as Waifu Diffusion or NovelAI, just download that model instead and put it in the model folder of the sd-webui project.

For now, we will carry on with the stable diffusion 1.4 model.

After downloading, rename the model to model.ckpt and place it in the models/Stable-diffusion directory of sd-webui. For example, my path is e:\stable-diffusion-webui\models\Stable-diffusion
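If you would rather do the renaming and moving from the command window, something like the following works (the paths are only an example: it assumes the file landed in your Downloads folder and the project is on the E drive, so adjust them to your own setup):

ren "%USERPROFILE%\Downloads\sd-v1-4.ckpt" model.ckpt
move "%USERPROFILE%\Downloads\model.ckpt" e:\stable-diffusion-webui\models\Stable-diffusion\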


3. Install GFPGAN

This is an open-source project from Tencent that repairs and restores faces, reducing the distortion and deformation of faces generated by stable diffusion.

Open the GFPGAN GitHub page.

Scroll down the page to the README section, find the V1.4 model, and click the blue 1.4 link to download it.


After downloading, simply put it in the root directory of the sd-webui project. For example, my root directory is e:\stable-diffusion-webui
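As before, you can move it from the command window if that is easier (again, adjust the example paths to where you actually downloaded the file and where your project lives):

move "%USERPROFILE%\Downloads\GFPGANv1.4.pth" e:\stable-diffusion-webui\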

4. Start the sd-webui project

4.1 Check the conda environment

In the Windows command window, check whether the conda environment is active. If you do not see (stable-diffusion-webui) in front of the prompt path, the environment is not active and you need to start it with the following command:

conda activate stable-diffusion-webui


4.2 Enter the root directory and start the sd-webui project

Note that you must first use cd in the Windows command window to enter the root directory of the stable-diffusion-webui project. If you have been following the steps above, you can run the command below directly (I originally downloaded the project from git and then moved it to another directory, so your path may differ from mine):

cd stable-diffusion-webui

Then run the following command:

webui-user.bat

Then press Enter and wait while everything runs automatically, until it prints: Running on local URL: http://127.0.0.1:7860

Notice:

This step may often report various errors, and requires patience and time to try multiple times.

Don't close the little black window, even if it hasn't changed for a few minutes.

If it reports a connection error, you may need to turn on your proxy and then re-run webui-user.bat (note that you need to configure the Windows command window to use the proxy first, as described above).
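One tweak that often helps if the start-up fails with out-of-memory errors, or if your card only has 4G of video memory, is to edit webui-user.bat (right-click it and choose Edit) and pass launch options through the COMMANDLINE_ARGS line. The flags below do exist in the AUTOMATIC1111 webui, but which ones you need depends on your machine, so treat this as a starting point rather than a required setting:

set COMMANDLINE_ARGS=--medvram --autolaunch

--medvram trades some speed for a lower video-memory footprint (--lowvram is even more aggressive), and --autolaunch opens the browser automatically once the service is running.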


6. Use stable diffusion

Open http://127.0.0.1:7860 in your browser (note: do not close the Windows command window that is still running).


1. Set Chinese interface

Note that the default display interface is English at the beginning. If you need to change it to a Chinese interface, you can follow the steps below.

First switch to the [Extensions] tab, then click [Available]. Under [Hide extensions with tags], uncheck "localization", then click [Load from:].
Find zh_CN Localization or zh_TW Localization and click the Install button.

Click the [Installed] tab, make sure the localization extension you just installed ("stable-diffusion-webui-localization-zh_CN" or the zh_TW one) is checked at the bottom of the page, then click [Apply and restart UI] and wait for the page to restart.

Switch to [Settings], find [User interface] on the left, scroll down to the end, and select the language you need in the drop-down box, for example zh_CN.
Go back to the top of the page, click [Apply settings] first, then click [Reload UI].
If everything went well, your interface is now in Chinese.


2. Basic usage

Enter a prompt in the prompt box, for example beautiful landscape, then click Generate on the right to produce your first image.
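For a slightly fuller first attempt, you could fill in the controls roughly like this (the wording of the prompts is only an illustration, not a recipe):

Prompt: beautiful landscape, mountains at sunset, lake reflection, highly detailed
Negative prompt: lowres, blurry, watermark
Sampling steps: 20, CFG Scale: 7, Width x Height: 512 x 512

Then click Generate. More sampling steps generally give a more refined image at the cost of time, which is where the 20-step timings from the requirements section come from.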

3. Advanced usage

Space is limited, so I will not go into the detailed operation of stable diffusion's AI drawing features here. Interested readers can find plenty of AI drawing tag/prompt guides online, or refer to this blog.

Summary

I have to say that the web interface of the sd-webui project is now really much easier to use, and many new features have been added.

For example, with the img2img function you can regenerate the parts of a generated image you are not happy with, such as the mouth, nose and eyes.

You can even change clothes directly by regenerating part of the content.

You can also use the extras function to enlarge a generated image by up to 4 times (512×512 enlarged 4× becomes 2048×2048).

It took me about an hour to get everything installed successfully, mainly because the proxy saved a lot of time. You may still run into all kinds of problems during installation; if you cannot solve them, leave a comment or contact me and I will help you debug for free.

Of course, if you still do not want to spend this much time installing, or your computer does not have a capable enough graphics card, I will set up a server later that everyone can test and play with for free.

Other resources

If you want to keep learning about artificial intelligence learning routes and knowledge systems, you are welcome to read my other blog, "Heavy | Complete artificial intelligence AI learning - basic knowledge learning route, all materials can be downloaded directly from the network disk without paying attention to routines".
That blog draws on well-known open-source platforms on GitHub, AI technology platforms and experts in related fields: Datawhale, ApacheCN, AI Youdao, Dr. Huang Haiguang and others. It collects roughly 100G of related material, which I hope will help everyone.
