(1) Deploy stable-diffusion-webui under Linux, install models and plug-ins

Materials

Install

Download stable-diffusion-webui, prepare Python 3.9, upgrade pip, and set the cloud vendor's pip mirror

It is best to use Python 3.10 or above with conda; this tutorial uses Python 3.9 in a venv, which works too. On paper that may be limiting, but so far I haven't hit any feature I couldn't use.

# Download stable-diffusion-webui
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
# Enter the directory
cd stable-diffusion-webui/
# Dependency required to fix the libGL.so error
yum install -y mesa-libGL.x86_64

# Installing as root is not allowed; create a regular user
useradd peter
chown -R peter:peter .
su peter

# Create a virtual environment (conda or any other tool works too); python3.9 here
python3.9 -m venv venv

# Activate the environment
source venv/bin/activate
# Set the cloud vendor's pip mirror
pip config set global.index-url 'http://mirrors.tencentyun.com/pypi/simple'
pip config set global.trusted-host 'mirrors.tencentyun.com'

python -m pip install --upgrade pip
# Headless OpenCV avoids the libGL.so error
pip install opencv-python-headless
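
A quick, optional sanity check that the venv and mirror are active:

# Both commands run inside the activated venv
python -V
pip config list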

Modify launch.py to pin the PyTorch install command so it is fetched through the vendor mirror configured above:

torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==1.13.1 torchvision==0.14.1")
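
Since launch.py reads TORCH_COMMAND from the environment (visible in the line above), an alternative sketch is to export the override instead of editing the file:

# Same effect as the edit above, without touching launch.py
export TORCH_COMMAND="pip install torch==1.13.1 torchvision==0.14.1"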

Install and launch

sh webui.sh --enable-insecure-extension-access --xformers --server-name 0.0.0.0

# The installation creates a repositories folder,
# which holds stable-diffusion-stability-ai, taming-transformers, and other dependencies
# Model download location: /data/stable-diffusion-webui
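
If the server should keep running after you log out, a minimal sketch (the log file name webui.log is my own choice):

# Run in the background and keep a log of the install/launch output
nohup sh webui.sh --enable-insecure-extension-access --xformers --server-name 0.0.0.0 > webui.log 2>&1 &
tail -f webui.log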

Plugin installation

  • controlnet
  • openpose

# git and git-lfs are needed to fetch the plugins and their models
yum -y install git git-lfs

GitHub acceleration: domestic (mainland China) access is relatively slow, so prefix URLs with https://ghproxy.com/. For example, the extension index:

https://ghproxy.com/https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui-extensions/master/index.json
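
The proxy works by prefixing the original GitHub URL. For example, cloning the webui itself through it (a sketch; any GitHub URL can be prefixed the same way):

git clone https://ghproxy.com/https://github.com/AUTOMATIC1111/stable-diffusion-webui.git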


sd-webui-controlnet plugin

If clicking the install button for sd-webui-controlnet fails, obtain the git URL and install it manually, or download the whole package via git clone (or from the GitHub web page) and place it under the stable-diffusion-webui/extensions folder, as sketched below. After installation, remember to click Apply and restart UI on the Installed tab.
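
A sketch of the manual route, assuming the extension lives in Mikubill's sd-webui-controlnet repository (the ghproxy prefix is optional):

# Clone the extension straight into the extensions folder
cd {project}/stable-diffusion-webui/extensions
git clone https://ghproxy.com/https://github.com/Mikubill/sd-webui-controlnet.git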


Install the model files the plugin requires (see the ControlNet model address), and put the models into "stable-diffusion-webui/extensions/sd-webui-controlnet/models". The plugin already includes all the "yaml" files; you only need to download the "pth" files.

# Enter the directory where the downloaded models are kept
cd {project}/sdmodels
git lfs install
git lfs clone https://huggingface.co/lllyasviel/ControlNet

The models can be downloaded individually; you do not need all of them (a single-file download sketch follows the list).

  • ControlNet/models/control_sd15_canny.pth , uses Canny edge detection to control SD.
  • ControlNet/models/control_sd15_depth.pth , uses Midas depth estimation to control SD.
  • ControlNet/models/control_sd15_hed.pth , controls SD using HED edge detection (soft edges).
  • ControlNet/models/control_sd15_mlsd.pth, uses M-LSD line detection to control SD (can also be used with traditional Hough transform).
  • ControlNet/models/control_sd15_normal.pth , the ControlNet+SD1.5 model controls SD with a normal map. It is best to use the normal map generated by the Gradio app, but other normal maps will also work as long as the orientation is correct (red for left, blue for right, green for top, purple for bottom).
  • ControlNet/models/control_sd15_openpose.pth , controls SD using OpenPose pose detection. Directly manipulating the pose skeleton should also work.
  • ControlNet/models/control_sd15_scribble.pth , the ControlNet+SD1.5 model controls SD using human scribbles. The model is trained on boundary edges with very strong data augmentation to simulate boundary lines similar to those drawn by humans.
  • ControlNet/models/control_sd15_seg.pth uses semantic segmentation to control SD. The protocol is ADE20k.
  • ControlNet/annotator/ckpts/body_pose_model.pth Third-party model: Openpose's pose detection model.
  • ControlNet/annotator/ckpts/hand_pose_model.pth Third-party model: Openpose's hand detection model.
  • ControlNet/annotator/ckpts/dpt_hybrid-midas-501f0c75.pt Third party model: Midas depth estimation model.
  • ControlNet/annotator/ckpts/mlsd_large_512_fp32.pth Third-party model: M-LSD detection model.
  • ControlNet/annotator/ckpts/mlsd_tiny_512_fp32.pth Third-party model: another, smaller M-LSD detection model (not used here).
  • ControlNet/annotator/ckpts/network-bsds500.pth Third-party model: HED boundary detection.
  • ControlNet/annotator/ckpts/upernet_global_small.pth Third-party model: Uniformer Semantic Segmentation.

Note that the first time you use the control_sd15_openpose model, it automatically downloads the related third-party models: body_pose_model, hand_pose_model, and facenet.
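
If you only need one or two of the models above, a sketch of fetching a single pth file directly (the URL pattern follows the Hugging Face repo cloned earlier; the target directory matches the symlink source used below):

# Download just the openpose model instead of cloning everything
cd {project}/sdmodels/ControlNet/models
curl -LO https://huggingface.co/lllyasviel/ControlNet/resolve/main/models/control_sd15_openpose.pth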

Link the models into place; replace {project} with your own path:

cd {project}/models/ControlNet
# Symlink the downloaded pth files into the ControlNet models directory
ln -s {project}/sdmodels/ControlNet/models/control_sd15_* .
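
A quick check that the links resolve (ls -L dereferences symlinks, so broken links show an error):

ls -lL control_sd15_*.pth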

Main models

Go to Hugging Face to find chilloutmix_Ni, GuoFeng3.3, and other main models; the installation method is the same for all of them.

Generally they are stored in /data/stable-diffusion-webui/models/Stable-diffusion, with the file suffix safetensors or ckpt.

How to load the chilloutmixni model (its character output needs no introduction):

# Adjust the paths to match where you actually store things
cd {project}/sdmodels
curl -Lo chilloutmixni.safetensors https://huggingface.co/nolanaatama/chomni/resolve/main/chomni.safetensors
ln -s {project}/sdmodels/chilloutmixni.safetensors {project}/stable-diffusion-webui/models/Stable-diffusion

As above, the model is downloaded and then linked into stable-diffusion-webui/models/Stable-diffusion.
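
The same download-and-link pattern works for any other main model. For example, a sketch for GuoFeng3.3 (the Hugging Face URL and filename here are assumptions; check the model page for the real ones):

# Hypothetical URL/filename; follow the pattern, not the literal address
cd {project}/sdmodels
curl -Lo GuoFeng3.3.safetensors https://huggingface.co/xiaolxl/GuoFeng3/resolve/main/GuoFeng3.3.safetensors
ln -s {project}/sdmodels/GuoFeng3.3.safetensors {project}/stable-diffusion-webui/models/Stable-diffusion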

Of course, if you have a LoRA model, it needs to be placed in the following folder:

stable-diffusion-webui/models/Lora
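
For example, a sketch with a placeholder file name:

# mylora.safetensors stands in for whatever LoRA you downloaded
cp mylora.safetensors {project}/stable-diffusion-webui/models/Lora/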

SD character solo prompts

Model: chilloutmixni. Choosing the right model matters more than anything else; with the right one you can get the picture you want.
Image size: 512x786. A vertical frame presents characters better, especially tall ones.
Prompts:

  1. High-quality image words: (masterpiece:1.0), (best quality:1.0), (ultra highres:1.0), (8k resolution:1.0), (realistic:1.0), (ultra detailed:1.0), (sharp focus:1.0), (RAW photo:1.0)
  2. Supplementary environment/character words: full body, simple background, beautiful girl, solo focus describe a solo subject,
  3. Scene words; delete these and replace them with whatever you need: tall, skirt, high heels, sea side, sky, tree

Template: in theory, when the model and parameters are identical, the output image is exactly the same; set Seed to -1 for random output (handy for generating batches from one prompt).

# Prompt
(masterpiece:1.0), (best quality:1.0), (ultra highres:1.0), (8k resolution:1.0), (realistic:1.0), (ultra detailed:1.0), (sharp focus:1.0), (RAW photo:1.0), full body, simple background, beautiful girl, solo focus,
tall, skirt, high heels, sea side, sky, tree

# Negative prompt
Negative prompt: (easynegative:1.2), (worst quality:1.2), (low quality:1.2), nsfw, by <bad-artist-anime:0.6>, by <bad-artist:0.6>, by <bad-hands-5:0.6>, by <bad_prompt_version2:0.8>

# Model parameters
Steps: 40, Sampler: DPM++ SDE Karras, 
CFG scale: 7, Seed: 1845414120, 
Face restoration: CodeFormer, 
Size: 512x786, 
Model hash: 7234b76e42, Model: chilloutmixni, 
Denoising strength: 0.45, Hires upscale: 1.4, Hires steps: 11, Hires upscaler: Latent (bicubic antialiased)

Result


GuoFeng3.3 model + ControlNet (learning a customer-service pose)
The prompt is the official demo one, and the result is very good:
Chinese (guofeng) style


Source: blog.csdn.net/q116975174/article/details/130402592