A step-by-step Stable Diffusion deployment tutorial to start your AI-art "alchemy" journey | JD Cloud technical team

ME1688463891876.jpg

There are many AI painting applications on the market, such as DALL-E, Midjourney, and NovelAI. Most of them run on cloud servers, and some require paid memberships to buy additional drawing quota. In August 2022, an application called Stable Diffusion raised the quality of AI painting to a new level through algorithmic iteration: it can finish an image in seconds, and it runs on computers with consumer-grade graphics cards.

With Stable Diffusion you can produce work in many styles, such as anime, illustration, Chinese-style ink painting, 3D modeling, and even photorealistic images. With derivative features such as LoRA and ControlNet, you can also precisely control art style, character details, poses, actions, and composition. More importantly, it is fully open source, which means you can deploy the entire program on your own computer and draw for free, without limits. Most commercial AI painting applications on the market are built on top of Stable Diffusion.

Although Stable Diffusion is very approachable, it still has hardware requirements: it needs a powerful enough discrete graphics card to supply the computing power for drawing. In practice, "being able to run it" and "running it well" are two different experiences, and the available computing power greatly affects drawing efficiency. Precisely because of this, many people have missed the chance to experience Stable Diffusion in depth because of their computer's configuration. This is where JD Cloud comes in: the JD Cloud GPU cloud host is an elastic computing service that provides GPU computing power. With strong parallel computing capabilities, it is widely used in scenarios such as deep learning, scientific computing, graphics and image processing, and video encoding/decoding. It puts computing power at your fingertips, relieves computing pressure, improves business efficiency, and can be elastically scaled to help you quickly build heterogeneous computing applications.

After a series of explorations, I have put together a from-scratch, easy-to-follow tutorial for deploying Stable Diffusion WebUI and related tools and plug-ins on a JD Cloud GPU cloud host. Here it is.

1. Create a GPU host instance

1.1 Create a GPU cloud host

The standard configuration of the JD Cloud GPU cloud host includes a Tesla P40 24G graphics card with 12 cores and 48G RAM, and the experience of running Stable Diffusion on it is very good. The recommended configuration is as follows:

Configuration    Recommended                      Notes
System           Ubuntu 20.04 64-bit
Specification    GPU Standard p.n1p40.3xlarge     12 cores, 48G RAM, Nvidia Tesla P40 with 24G video memory
System disk      100G                             100G recommended
Bandwidth        5M                               5M recommended

1.2 Create a security group and bind it

First, create a security group under [Security Group] in the left-hand menu, and open ports 7860, 7861, 8080, and 8888 in both [Inbound Rules] and [Outbound Rules].

image.png

Then, in the instance details, click [Security Group] - [Bind Security Group] to bind the newly created security group.

2. Environment installation

2.1 Install the GPU driver

Look up the driver version on the NVIDIA official website based on the graphics card model, operating system, and CUDA version. Official query link: https://www.nvidia.com/Download/index.aspx?lang=en-us
Pay attention to the CUDA version here. If CUDA is not installed yet, you can pick a version now and install CUDA later.
image.png

Click Search

image.png
As shown in the figure above, the appropriate driver version turns out to be 510. You can then install the corresponding driver version with apt, which is the most convenient method.

# Install the version 510 driver
apt install nvidia-driver-510
# Show driver information
nvidia-smi

If the installation succeeded, nvidia-smi will print information like the following:

image.png

2.2 Install CUDA

Visit the NVIDIA developer website and first select the CUDA version (it should match the CUDA version supported by the GPU driver from section 2.1), then select the installation commands for your operating system. Link: https://developer.nvidia.com/cuda-toolkit-archive

image.png

Per the above, the CUDA version corresponding to the selected driver is 11.6; install it with the commands below. The following commands apply to Ubuntu 20.04 x86_64 with GPU driver version 510:

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.6.2/local_installers/cuda-repo-ubuntu2004-11-6-local_11.6.2-510.47.03-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2004-11-6-local_11.6.2-510.47.03-1_amd64.deb
sudo apt-key add /var/cuda-repo-ubuntu2004-11-6-local/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda
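After installation, the toolkit lands in /usr/local/cuda-11.6 by default, which is not on the shell's search path. Assuming the default install location, you can expose it like this (add the lines to ~/.bashrc to make them permanent):

```shell
# Put the CUDA 11.6 toolchain on the search paths (default install location)
export PATH=/usr/local/cuda-11.6/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.6/lib64:$LD_LIBRARY_PATH
# After this, `nvcc --version` should report the 11.6 toolkit
```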

2.3 Install Python 3.10

Stable Diffusion WebUI currently requires at least Python 3.10, so install version 3.10 directly:

	apt install software-properties-common
	add-apt-repository ppa:deadsnakes/ppa
	apt update
	apt install python3.10
	python3.10 --version

Next, point pip at a domestic (Chinese) mirror. The default index is hosted overseas, so installations often time out; using a domestic mirror largely avoids download timeouts. Copy the following into ~/.pip/pip.conf; if the file does not exist, create it first with touch ~/.pip/pip.conf.

	[global]
	index-url = https://pypi.tuna.tsinghua.edu.cn/simple
	[install]
	trusted-host = pypi.tuna.tsinghua.edu.cn
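Equivalently, the whole file can be written in one step with a heredoc, a small sketch:

```shell
# Create ~/.pip/pip.conf pointing pip at the Tsinghua mirror
mkdir -p ~/.pip
cat > ~/.pip/pip.conf <<'EOF'
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
[install]
trusted-host = pypi.tuna.tsinghua.edu.cn
EOF
```

Note that trusted-host takes a bare hostname, not a URL.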

2.4 Install Anaconda

Anaconda is highly recommended: it makes packages easy to obtain and manage, and lets you manage Python environments and versions in a unified way. The installation commands are simple:

	wget https://repo.anaconda.com/archive/Anaconda3-2023.03-1-Linux-x86_64.sh
	bash ./Anaconda3-2023.03-1-Linux-x86_64.sh

Create a Python 3.10.9 environment and activate it:

	conda create -n python3.10.9 python==3.10.9
	conda activate python3.10.9

2.5 Install PyTorch

First, check the PyTorch version matching your CUDA version on the PyTorch official website. For the CUDA 11.6 installed in section 2.2, install PyTorch 1.13.1:

# Install with conda (choose one of the two methods)
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia

# Install with pip (choose one of the two methods)
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116
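A quick, optional sanity check (a sketch of my own; run it inside the activated conda environment) to confirm that PyTorch can actually see the GPU:

```shell
# Define a helper that prints the torch version and whether CUDA is usable.
# (A function, so you can rerun it after driver or toolkit changes.)
check_torch() {
  python - <<'PY'
import torch
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
PY
}
```

Call `check_torch` after the install; `CUDA available: True` means the driver, CUDA, and PyTorch versions line up.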

3. Deploy Stable Diffusion WebUI

3.1 Download stable-diffusion-webui

Note: activate the Python 3.10 environment first:

conda activate python3.10.9

Then download stable-diffusion-webui

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git

3.2 Install dependencies

cd into the stable-diffusion-webui directory and install its dependencies. If network access times out or fails, set the domestic source as described in section 2.3; if it still fails, retry a few times until installation completes.

cd stable-diffusion-webui
pip install -r requirements_versions.txt
pip install -r requirements.txt

3.3 Start stable-diffusion-webui

After the installation is complete, execute the following startup command:

python launch.py --listen --enable-insecure-extension-access

This step downloads several commonly used models. If a download fails, fetch the model from huggingface.co according to the error message and put it in the corresponding directory. For example, to download the stable-diffusion-v1-5 model, search for it and open https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main
image.png

Click the download button shown in the figure and save v1-5-pruned-emaonly.safetensors to the stable-diffusion-webui/models/Stable-diffusion directory; other models work the same way.
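For manual downloads from the command line, a small hypothetical helper can save typing; `wget -c` resumes an interrupted download instead of starting over, which matters for multi-gigabyte checkpoints (the URL below follows Hugging Face's standard `resolve/main` download path):

```shell
# Hypothetical helper: download a checkpoint into the WebUI models directory.
# -c resumes a partially-downloaded file; -P sets the destination directory.
fetch_model() {
  local url="$1"
  wget -c "$url" -P stable-diffusion-webui/models/Stable-diffusion/
}
# Example (not run here):
# fetch_model "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors"
```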

After the model downloads complete, run the startup command again. Once it reports that it is listening on port 7860, you can access the WebUI at http://<server-IP>:7860:
image.png

For a host exposed to the public network, it is recommended to set an access password; replace username:password in the following command with your own username and password.

python launch.py --listen --enable-insecure-extension-access --gradio-auth username:password

The commands above do not run in the background. If you need background execution, use nohup, tmux, or a similar method.
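For example, with nohup the WebUI keeps running after you log out. A sketch (the helper name and the log/pid file locations are my own choices, not part of the WebUI):

```shell
# Hypothetical helper: start the WebUI detached, logging to webui.log.
start_webui() {
  nohup python launch.py --listen --enable-insecure-extension-access "$@" \
    > webui.log 2>&1 &
  echo $! > webui.pid   # remember the PID so the process can be stopped later
}
# Usage:  start_webui --gradio-auth username:password
# Stop later with:  kill "$(cat webui.pid)"
```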

3.4 Generate images with Stable Diffusion

Download a model into the stable-diffusion-webui/models/Stable-diffusion directory; models can be found at https://civitai.com/. The figure below uses the majicMIX realistic model. After the download completes, click the refresh button in the upper left corner, select the newly downloaded model, enter the prompt and parameters, and generate an image.

image.png

The prompt and parameters used in the figure are attached below.

Prompt

1 girl a 24 y o woman, blonde, dark theme, soothing tones, muted colors, high contrast, look at at viewer, contrasty , vibrant , intense, stunning, captured in the late afternoon sunlight, using a Canon EOS R6 and a 16-35mm to capture every detail and angle, with emphasis on the lighting and shadows, late afternoon sunlight, 8K

Negative prompt

(deformed, distorted, disfigured, doll:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation, 3d, illustration, cartoon, flat , dull , soft, (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs,

Other parameters

image.png

4. Commonly used related tools and plug-ins

4.1 Install the LoRA plug-in Additional Networks

Additional Networks is the essential plug-in for using LoRA. It lets you combine a checkpoint with one or more LoRA models to generate mixed-style images, and lets you set each LoRA model's weight. Install it as follows:

Open stable-diffusion-webui, click [Extensions] - [Install from URL] and enter https://ghproxy.com/https://github.com/kohya-ss/sd-webui-additional-networks.git

Then click [Install] and wait until the plug-in appears under [Installed], then restart stable-diffusion-webui from the command line (do not just reload the WebUI). In general, it is strongly recommended to fully restart stable-diffusion-webui after installing plug-ins; it saves a lot of trouble.

Finally, click [Settings] - [Additional Networks] and enter the absolute path of your LoRA folder, e.g. /root/stable-diffusion-webui/models/Lora (adjust to your own system path), then [Reload UI] and wait for the restart to complete.

image.png

Then you can select a LoRA model in [txt2img] or [img2img] and set its weight.

image.png

4.2 Install ControlNet

ControlNet is a must-install plug-in for Stable Diffusion. It allows users to finely control the generated image and obtain much better visual results. ControlNet brought a qualitative change to the controllability of AI painting, making AIGC truly ready for production use.

Open stable-diffusion-webui, click [Extensions] - [Install from URL], and enter https://ghproxy.com/https://github.com/Mikubill/sd-webui-controlnet.git
Click [Install], wait until the plug-in appears under [Installed], then restart stable-diffusion-webui from the command line (do not just reload the WebUI).

ControlNet uses many models, which are downloaded automatically on restart. If a download fails or times out, download the models manually into the ControlNet directory.

Visit huggingface.co to find the ControlNet models: https://huggingface.co/lllyasviel/ControlNet-v1-1/tree/main

image.png

Manually download the model files above into the stable-diffusion-webui/extensions/sd-webui-controlnet/models directory, then check the downloaded ControlNet models:

image.png

After the download is complete, restart stable-diffusion-webui to use it in [txt2img] or [img2img].

image.png

4.3 Jupyter Notebook

Jupyter Notebook is a web-based interactive environment for editing and running Python code and visualizing the results; it also provides basic file-tree operations.

If you installed Anaconda in section 2.4, you can run the notebook directly with the following command:

	jupyter notebook --allow-root --NotebookApp.token='your-token-here'
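The token is what protects the notebook on the public network, so avoid a guessable one. One way to generate a random token, using only the Python standard library:

```shell
# Generate a 48-hex-character random token to pass to --NotebookApp.token
TOKEN=$(python3 -c "import secrets; print(secrets.token_hex(24))")
echo "$TOKEN"
```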

Then open http://<server-IP>:8888 and start using the notebook:

image.png

4.4 Model training tool Kohya_ss

Kohya_ss is a widely recommended visual tool for training Stable Diffusion models, especially on Windows. Running it directly on Linux tends to hit all kinds of environment problems; to avoid them, installing via Docker is strongly recommended.

First install Docker following the official Docker documentation; the Ubuntu guide is at https://docs.docker.com/engine/install/ubuntu/
Since the Docker container needs to use GPU resources, the NVIDIA Container Toolkit must be installed first:

sudo apt-get update \
    && sudo apt-get install -y nvidia-container-toolkit-base

# Check whether the installation succeeded
nvidia-ctk --version

Then download kohya_ss:

git clone https://github.com/bmaltais/kohya_ss.git

As shown in the figure below, change the port mapping in the kohya_ss/docker-compose.yaml file to 0.0.0.0:7861:7860 (mapping port 7860 inside the kohya_ss container to port 7861 on the host, because port 7860 is already occupied by the Stable Diffusion WebUI).

Set the startup parameters to "--username xxxx --password xxxx --headless", replacing xxxx with the account name and password you want.

image.png
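Putting those two edits together, the relevant section of docker-compose.yaml might look roughly like this (a sketch only; the field names follow Compose conventions, so check them against the actual file in your clone):

```yaml
services:
  kohya-ss-gui:
    ports:
      - "0.0.0.0:7861:7860"   # host port 7861 -> container port 7860
    # startup parameters; replace xxxx with your own credentials
    command: --username xxxx --password xxxx --headless
```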

then execute

docker compose build # required on first run

docker compose run --service-ports kohya-ss-gui

During this process, model files are downloaded from huggingface.co. If the download fails, you can download them manually into the directory kohya_ss/.cache/user/huggingface/hub/models--openai--clip-vit-large-patch14/snapshots/8d052a0f05efbaefbc9e8786ba291cfdf93e5bff, taking care to change the final hash value to the corresponding version.

Download address: https://huggingface.co/openai/clip-vit-large-patch14/tree/main. Make sure to download all the files.

image.png

Once the download completes, open http://<server-IP>:7861 and start training models with Kohya_ss.

image.png

5. Summary

With Stable Diffusion and the recommended plug-ins above installed, your setup already packs serious productivity. I will keep exploring and sharing more experience with you; stay tuned for the next installment in this series. New customers who purchase a JD Cloud GPU cloud host can currently enjoy a 7-day discounted trial for 99 yuan (0.59 yuan per hour) and start their alchemy journey right away.

ME1688521750020.png
ME1688520180384.png

Author: Jingdong Technology Wang Lei

Source: JD Cloud Developer Community
