Deploy Stable Diffusion web UI on a GPU cloud server

1. Introduction

Recently, I was researching how to use ControlNet for fine-grained control over Stable Diffusion to generate satisfactory images, but my local graphics card has only 6 GB of VRAM, while ControlNet's OpenPose function needs more than 10 GB to generate images properly. So the only option was to rent a GPU cloud server, deploy the SD model there, and access it through a local browser.

2. About the choice of cloud server

I recommend this article: Comparison of GPU cloud server platforms! Which is the most recommended?
I use AutoDL myself, but I did not use its bundled Stable Diffusion image and instead deployed from scratch. I don't particularly recommend AutoDL: with pay-as-you-go billing, there are often no cards available after shutdown, especially the popular RTX 3090.

3. About machine selection

Taking AutoDL as an example, after registration is complete, open the instance-creation page to select a machine.

Generally speaking, at this stage the RTX 3090 is the best choice in terms of price and VRAM size, though it is, of course, often sold out.

Regarding billing, only pay-as-you-go is recommended; many platforms offer discounts that bring a 3090 down to 1–2 yuan/hour. Unless you need to train a large model, daily/weekly/monthly packages are not worth it for personal use: a year's rent at package rates would roughly buy you a card of your own. The downside of pay-as-you-go is that after shutdown the instance may fail to boot because no free cards are available.

AutoDL comes with NovelAI, but in practice you may hit various errors, most of them caused by the Python version. At the time of writing, its machines all ship Python 3.8, while the latest stable-diffusion-webui requires Python 3.10, so rather than using the built-in Python 3.8, choose an image that provides Python 3.10 (here, one built on CUDA 11.8).

It is worth mentioning that some earlier graphics cards may not support a CUDA 11.8 environment (the RTX 3080, for example), so the RTX 3090 and later cards are recommended.
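If you do end up on an image that only ships Python 3.8, one workaround (a sketch, assuming Miniconda is available, as it typically is on AutoDL images) is to create a separate Python 3.10 environment:

conda create -n sd python=3.10 -y
conda activate sd
python --version    # should now print Python 3.10.x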

4. Deploy Stable Diffusion

Python and CUDA

GPU servers generally come with Python and GPU drivers preinstalled. It is recommended to use the ones provided by the platform; replacing them tends to cause many problems later.
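Before going further, it is worth confirming what the image actually provides; these standard checks work on any Linux GPU server:

nvidia-smi          # driver version and the highest CUDA version it supports
python --version    # interpreter version (the web UI wants 3.10)
pip --version       # confirms pip exists and which Python it belongs to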

Check the pip source

This step is very important. Stable Diffusion is updated very quickly, but the pip mirror configured on some providers' machines lags behind and lacks the latest versions of some dependencies SD requires, which leads to endless errors.

Taking AutoDL as an example, my machine used the Huawei mirror by default, and as a result I could not find the latest versions of facexlib and numpy when installing dependencies later; switching to the Aliyun mirror solved the problem. Also remember to upgrade pip after changing the mirror.

For detailed steps, see any guide on switching pip to a domestic mirror source; a minimal sketch follows.
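Assuming the Aliyun mirror (any up-to-date mirror works), the switch plus the pip upgrade looks like this:

# point pip at the Aliyun mirror
pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/
# upgrade pip itself so the resolver sees recent releases
python -m pip install --upgrade pip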

Download Stable Diffusion web UI

Enter the following command in the terminal. Deploying on the data disk rather than the system disk is recommended, because the models downloaded later take up a lot of space.

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git

If the connection times out, just retry a few times; git connections to GitHub are sometimes very unstable.
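On AutoDL, the data disk mentioned above is normally mounted at /root/autodl-tmp (an assumption worth verifying on your own platform), so cloning onto it looks like:

cd /root/autodl-tmp
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git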

Try running Stable Diffusion

After the download is complete, enter the project root directory and execute the command:

cd stable-diffusion-webui
COMMANDLINE_ARGS="--medvram --always-batch-cond-uncond --port 6006" REQS_FILE="requirements.txt" python launch.py

Here, launch.py is the startup script, while --medvram and --always-batch-cond-uncond are VRAM-optimization flags;

--port 6006 runs the process on port 6006 of the machine, because AutoDL's built-in externally exposed service uses port 6006. Other approaches exist as well; one is sketched after this explanation, and AutoDL's own custom service is covered later;

finally, REQS_FILE points to the requirements file; the dependencies it lists are installed automatically when the command runs.
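As one alternative to the built-in port-6006 service, a plain SSH tunnel from the local machine also works; the address and SSH port below are placeholders to be taken from the provider's console:

# run on the local machine: forward local port 6006 to the server's port 6006
ssh -CNg -L 6006:127.0.0.1:6006 root@<instance-address> -p <ssh-port>
# then open http://127.0.0.1:6006 in the local browser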

Manually download dependencies

If you rent a cloud server on a domestic node, you will very likely run into assorted connection failures and timeouts, for example:

The TLS connection was non-properly terminated

If this problem occurs, manual download is recommended.

Dependency repositories

First, create a repositories directory under the project root:

mkdir repositories

Stable Diffusion web UI pulls in four dependency repositories, which need to be cloned separately.

StableDiffusion:

git clone https://github.com/CompVis/stable-diffusion.git repositories/stable-diffusion

taming-transformers:

git clone https://github.com/CompVis/taming-transformers.git repositories/taming-transformers

CodeFormer:

git clone https://github.com/sczhou/CodeFormer.git repositories/CodeFormer

BLIP:

git clone https://github.com/salesforce/BLIP.git repositories/BLIP

Once these repositories are in place, execute the command again and the remaining dependencies will install automatically; connection timeouts may still trigger errors later:

COMMANDLINE_ARGS="--medvram --always-batch-cond-uncond --port 6006" REQS_FILE="requirements.txt" python launch.py

Python library dependencies

Dependency installation may also get stuck because of the network, especially for the gfpgan library. In that case, install the dependencies manually with pip install. If you get an error like:

No matching distribution found for facexlib>=0.2.5

this means pip cannot obtain the latest version of the library. Upgrade pip and check whether the pip mirror has gone stale.
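A sketch of the manual installation (package names are taken from the errors above; versions resolve from whichever mirror you configured):

# upgrade pip first so it can see recent releases
python -m pip install --upgrade pip
# install the libraries that commonly stall
pip install gfpgan facexlib
# or install everything the web UI lists in one go
pip install -r requirements.txt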

Download dependencies locally and upload them to the server

If the network is truly unusable, the only option left is to download the model on your local machine and upload it to the server.

Among Stable Diffusion's dependencies there is a v1-5-pruned-emaonly.safetensors checkpoint that must be downloaded from the Hugging Face site, and downloading it from the server's terminal is extremely slow. With pay-as-you-go billing, every minute costs money, so it is better to download it locally and upload it straight to the server. The AutoDL server is used as the example here.

Download the model

URL:

https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.safetensors
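If you still want to try a terminal download first, note that the direct-download link uses resolve/ instead of blob/; with wget, the -c flag resumes an interrupted download:

wget -c https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors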

Upload to the server

Generally, Xshell works; for details, refer to the document: AutoDL data upload.
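For the SSH route, a typical scp upload looks like the following (address, port, and target path are placeholders; on AutoDL the data disk is usually /root/autodl-tmp):

scp -P <ssh-port> v1-5-pruned-emaonly.safetensors root@<instance-address>:/root/autodl-tmp/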

Here I describe uploading to the AutoDL server via Alibaba Cloud Drive, since it has no speed limit.

  1. Upload the model to Alibaba Cloud Drive. If you do not have an account, register first.

  2. On the console page (with the instance powered on), open AutoPanel.

  3. Open the public network disk tab and select Alibaba Cloud Drive. A QR code will appear; scan and authorize it with the Alibaba Cloud Drive mobile app.

  4. Click Download to pull the model from the cloud drive onto the server.

The downloaded files are stored in the root directory of the data disk. Enter the folder where they landed, then move the model into the project's main directory:

mv v1-5-pruned-emaonly.safetensors stable-diffusion-webui/
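If your version of the web UI does not pick up a checkpoint placed in the project root, the conventional location in the AUTOMATIC1111 layout is the models/Stable-diffusion subdirectory:

mkdir -p stable-diffusion-webui/models/Stable-diffusion
mv stable-diffusion-webui/v1-5-pruned-emaonly.safetensors stable-diffusion-webui/models/Stable-diffusion/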

Other models can be uploaded to the server the same way later.

5. Run

After all the previous dependencies are installed, execute the command again in the main directory of the project:

COMMANDLINE_ARGS="--medvram --always-batch-cond-uncond --port 6006" REQS_FILE="requirements.txt" python launch.py

Startup has succeeded once the console prints the local URL the web UI is serving on.

When using AutoDL, open Custom Services in the console to expose port 6006.

A prompt may then ask for real-name verification, since regulation has been tightened further; if you would rather not verify, switch to another provider. After completing verification, the Stable Diffusion interface opens in the local browser and you can start generating images.

ControlNet works successfully as well!

6. References

  • https://zhuanlan.zhihu.com/p/386821676
  • https://zhuanlan.zhihu.com/p/574200991
