How to build a Stable Diffusion AI painting tool locally?

Recently, AI tools have been all over the internet, and I couldn't resist trying them myself, so I used the open-source third-party Stable Diffusion code to set up AI painting locally. Now I have "AI painting freedom" and no longer need to envy what others can do.

First, take a look at what the finished interface looks like:

(screenshot)

Preparation

  1. Hardware: mine is a Mac Pro with an M2 chip, 16 GB of RAM and a 1 TB SSD (if your configuration is much lower, the model may not run).
  2. Environment: a Python 3 environment is required.
  3. Proxy: it is best to have a VPN/proxy so that downloads and installs go quickly; otherwise, with models that are several GB each, it is very painful.
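These prerequisites can be sanity-checked from the terminal before starting (standard Unix commands, not part of the official tutorial):

```shell
# Check that a Python 3 interpreter is available.
python3 --version
# Models are several GB each; check free space on the current disk.
df -h . | tail -1
```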

Install

Here is an official tutorial for everyone: github.com/AUTOMATIC11…


The tutorial offers two installation modes:

Existing Install: for machines that already have a Python 3 environment installed and have already pulled the Stable Diffusion project via git.

New Install: a fresh installation, for machines with nothing set up yet.

Personal recommendation: regardless of whether you already have a Python 3 environment on your machine, use New Install and install fresh. Don't ask why. (As far as I know, nobody has ever succeeded via Existing Install. It is a big pit, and this blogger has already stepped into it and filled it in for you.)

1. Install Homebrew

Following the official tutorial step by step, open the Mac terminal and install Homebrew from brew.sh (I already have it installed, so I won't post a download screenshot). Just paste this line into the terminal and press Enter:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"


2. Install Python3

Step two: following the official tutorial, once Homebrew is installed successfully, install the Python 3 environment:

brew install cmake protobuf rust python@3.10 git wget

Then configure your local Python environment with the following commands:

cd ~
vi .bash_profile
# add this line to .bash_profile, then save and exit:
alias python="/Library/Frameworks/Python.framework/Versions/3.10/bin/python3"
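If you would rather not edit the file in vi, the same change can be made non-interactively; the interpreter path below is the one from the tutorial, so adjust it if your Python lives elsewhere:

```shell
# Append the alias to .bash_profile and reload it in the current shell.
echo 'alias python="/Library/Frameworks/Python.framework/Versions/3.10/bin/python3"' >> ~/.bash_profile
. ~/.bash_profile   # same effect as "source ~/.bash_profile"
```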


Then run python3 to check whether the installation succeeded:

(screenshot)

3. Download Stable Diffusion web UI

Before downloading, it is best to cd into a folder you normally use, because git clones into the current directory; if you don't know where you are, you may have trouble finding the source afterwards.

Enter the following command in the terminal to pull the code:

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui

4. Download the model

Step four: download a base model from Hugging Face:

huggingface.co/CompVis/sta…

Pick one of the two, depending on how much memory your machine has.

After downloading, put the file into the Stable-diffusion folder under models in the stable-diffusion-webui repo you just cloned.
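As a sketch, here is where the checkpoint ends up, assuming the repo was cloned into your home directory and the file (for example sd-v1-4.ckpt) was saved to Downloads; both paths and the file name are assumptions, so adjust them to your setup:

```shell
SD_DIR="$HOME/stable-diffusion-webui"
# This folder already exists after the clone; -p makes the sketch safe to rerun.
mkdir -p "$SD_DIR/models/Stable-diffusion"
# mv ~/Downloads/sd-v1-4.ckpt "$SD_DIR/models/Stable-diffusion/"
ls "$SD_DIR/models/Stable-diffusion"
```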


5. Install GFPGAN (a huge pit)

Step five, and here comes the key point: another big pit!!! According to the official tutorial, this step is simply ./webui.sh. That is a trap: at this point the launch will fail no matter what, because GFPGAN is missing. So what is GFPGAN?

GFPGAN: a face restoration technique for blurry photos and for fine-tuning faces, which keeps generated faces from drifting too far from realistic.

So we need to download the file from the GFPGAN repository:

github.com/TencentARC/…

(I don't know why the official Stable Diffusion tutorial skips this step; maybe because GFPGAN is a Chinese project?)


Open the page, scroll to the very bottom, and click the V1.4 model to download it. Once downloaded, just put it in the root directory of the cloned stable-diffusion-webui.
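A quick sketch of the destination, assuming the same clone location as before and the GFPGANv1.4.pth file name from the release page (both are assumptions):

```shell
SD_DIR="$HOME/stable-diffusion-webui"
mkdir -p "$SD_DIR"
# The weights go in the repo root, not under models/:
# mv ~/Downloads/GFPGANv1.4.pth "$SD_DIR/"
ls "$SD_DIR"
```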

6. Run stable-diffusion-webui

Now we can start it from the terminal. Enter ./webui.sh; the first launch is slow and takes a few minutes.

When the following screen appears, the launch has succeeded.

(screenshot)
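From a second terminal you can check whether the server is actually listening; curl ships with macOS, and the URL is the default one the web UI prints (adjust the port if you changed it):

```shell
# Probe the default web UI port and report the result either way.
if curl -sf http://127.0.0.1:7860 >/dev/null; then
  echo "web UI is up"
else
  echo "web UI is not running yet"
fi
```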

How to use it

Now open http://127.0.0.1:7860 in a browser, and you will see the interface shown at the beginning of this article.

As for how to use it, I recommend a website to everyone: civitai.com/models/1417… , known as "Site C".

There we can pick an image we like and copy its generation parameters into our local web UI.
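On Civitai, an image's generation parameters are shown as a block of text roughly like the one below (the values here are made up for illustration); each field, such as Prompt, Steps, Sampler, CFG scale, Seed, and Size, corresponds to a control in the local web UI:

```text
Prompt: a watercolor painting of a lighthouse at dawn, highly detailed
Negative prompt: lowres, blurry, bad anatomy
Steps: 28, Sampler: Euler a, CFG scale: 7, Seed: 1234567890, Size: 512x512
```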


Results

Finally, here are a few images I generated myself, shared with everyone:

(generated images)


Origin juejin.im/post/7229178458071384121