[Easier to Change Models] How to Use Serverless to Deploy Stable Diffusion with One Click

Author | Han Xie (Technical Expert, Alibaba Cloud Intelligence)

Previously in this series

AI Painting Is Popular: How to Deploy Stable Diffusion with Serverless Function Compute?
[Swap Models Yourself] How to Use Serverless to Deploy Stable Diffusion with One Click?

This is the third article in Alibaba Cloud's series on deploying Stable Diffusion (SD) with Function Compute. The first article focused on using cloud services to remove the obstacles of deploying SD locally (graphics card costs and deployment complexity); the second addressed, for technical readers, the practicality of a cloud-hosted SD (custom models and extensions). This article goes one step further and replaces the local machine in an even more accessible way, so that everyone, whether an ordinary user or a developer, can have a usable SD service of their own.

Prerequisites

You do not need to worry much about fees: Function Compute and NAS file storage are billed only when used, and the charges are relatively low.

Quick start

First, open the Application Center:
https://account.aliyun.com/login/login.htm?oauth_callback=https%3A%2F%2Ffcnext.console.aliyun.com%2Fapplications&lang=zh

Choose Create Application from Template -> the Artificial Intelligence tab -> the "AI Digital Painting stable-diffusion Custom Template" -> Create Now.

Fill in the form

Select Direct Deployment -> the Hangzhou region -> paste in the container image prepared by the developer, then click Create and Deploy Default Environment.

Application deployment

Next, there is nothing to operate; just wait for the application to finish deploying, which takes about 5-10 minutes. If you are a developer, you can expand the deployment log we provide and watch the deployment process.

Configure the management console

After the deployment succeeds, you get two domain names.

The domain name starting with sd is the main service; it cannot be accessed yet because no model has been uploaded. The one starting with admin is our management console. Next, we first configure the management console and then upload our model.

The management console uses kodbox, provided by KodCloud (Kedaoyun). You can simply click through the setup all the way.

After initialization, set your own login account and password

Then log in with the account you just created.

After logging in, enter /mnt/auto/sd in the path bar.

If you are familiar with the sd-webui directory structure, you will recognize the corresponding directories here; a rough layout is sketched below.
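For reference, based on the directory mappings in the entrypoint script later in this article, the layout under /mnt/auto/sd looks roughly like this (the models subfolder naming follows the usual sd-webui convention; exact contents depend on the image version):

/mnt/auto/sd
├── models/                  # model files
│   └── Stable-diffusion/    # base checkpoints (.ckpt / .safetensors)
├── embeddings/              # textual inversion embeddings
├── config/auto/
│   ├── extensions/          # installed extensions
│   ├── outputs/             # generated images
│   ├── config.json          # webui settings
│   └── ui-config.json       # UI defaults
└── .cache/                  # download and weights cache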

Next, open /mnt/auto/sd/models/Stable-diffusion/, then click Upload -> Offline Download.

Here we enter the download address of the SD 1.5 inpainting model:
https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt
Of course, you can also enter any address of your own. Besides offline download, you can also drag and drop a local model file to upload it directly.
Because the model is large, the download will take roughly 5-15 minutes, so feel free to take a break while you wait (if an error is reported along the way, it can usually be ignored).

One thing needs special attention: if the file was downloaded from the Hugging Face origin site, you may need to fix the file suffix. Make sure the file name is exactly right, for example sd-v1-5-inpainting.ckpt.
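If you prefer to check the file on your local machine before a drag-and-drop upload, here is a minimal sketch (the file name is just the example from above):

# check what the downloaded file is actually called
ls -lh
# if the download lost its .ckpt suffix, rename it (example name)
mv sd-v1-5-inpainting sd-v1-5-inpainting.ckpt
# optionally verify integrity against the checksum listed on the model page
sha256sum sd-v1-5-inpainting.ckpt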
Once the model has finished downloading, we can open the sd service.

Source code customization

The source used to build the image is also pasted here; if you are a developer, you can build your own image.

It is based on the https://github.com/AbdBarho/stable-diffusion-webui-docker/tree/master/services/AUTOMATIC1111 project, with its entrypoint.sh replaced by the following:

#!/bin/bash

set -Eeuo pipefail

# TODO: move all mkdir -p ?
mkdir -p /mnt/auto/sd/config/auto/scripts/
# mount scripts individually
find "${ROOT}/scripts/" -maxdepth 1 -type l -delete
cp -vrfTs /mnt/auto/sd/config/auto/scripts/ "${ROOT}/scripts/"

cp -n /docker/config.json /mnt/auto/sd/config/auto/config.json
jq '. * input' /mnt/auto/sd/config/auto/config.json /docker/config.json | sponge /mnt/auto/sd/config/auto/config.json

if [ ! -f /mnt/auto/sd/config/auto/ui-config.json ]; then
  echo '{}' >/mnt/auto/sd/config/auto/ui-config.json
fi

declare -A MOUNTS
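# MOUNTS maps paths inside the container (keys) to their locations on NAS (values)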

MOUNTS["/root/.cache"]="/mnt/auto/sd/.cache"

# main
MOUNTS["${ROOT}/models"]="/mnt/auto/sd/models"
MOUNTS["${ROOT}/embeddings"]="/mnt/auto/sd/embeddings"
MOUNTS["${ROOT}/config.json"]="/mnt/auto/sd/config/auto/config.json"
MOUNTS["${ROOT}/ui-config.json"]="/mnt/auto/sd/config/auto/ui-config.json"
MOUNTS["${ROOT}/extensions"]="/mnt/auto/sd/config/auto/extensions"
MOUNTS["${ROOT}/outputs"]="/mnt/auto/sd/config/auto/outputs"
MOUNTS["${ROOT}/extensions-builtin"]="/mnt/auto/sd/extensions-builtin"
MOUNTS["${ROOT}/configs"]="/mnt/auto/sd/configs"
MOUNTS["${ROOT}/localizations"]="/mnt/auto/sd/localizations"

# extra hacks
MOUNTS["${ROOT}/repositories/CodeFormer/weights/facelib"]="/mnt/auto/sd/.cache"

for to_path in "${!MOUNTS[@]}"; do
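  # replace each in-container path with a symlink to its NAS counterpart,
  # creating the NAS-side directory first if it does not already exist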
  set -Eeuo pipefail
  from_path="${MOUNTS[${to_path}]}"
  rm -rf "${to_path}"
  if [ ! -f "$from_path" ]; then
    mkdir -vp "$from_path"
  fi
  mkdir -vp "$(dirname "${to_path}")"
  ln -sT "${from_path}" "${to_path}"
  echo Mounted $(basename "${from_path}")
done

if [ -f "/mnt/auto/sd/config/auto/startup.sh" ]; then
  pushd ${ROOT}
  . /mnt/auto/sd/config/auto/startup.sh
  popd
fi

exec "$@"

After building your own image, use it in place of the image address filled in during the steps above, and make sure the image registry is in the matching region. A rough build-and-push sketch follows.
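A minimal sketch, assuming the Dockerfile in the upstream services/AUTOMATIC1111 directory builds standalone; the registry namespace and image name are placeholders, and the Hangzhou registry address matches the region chosen earlier:

# clone the upstream project and drop in the entrypoint.sh shown above
git clone https://github.com/AbdBarho/stable-diffusion-webui-docker.git
cd stable-diffusion-webui-docker/services/AUTOMATIC1111
cp /path/to/your/entrypoint.sh ./entrypoint.sh

# build the image and push it to your own Alibaba Cloud Container Registry
docker build -t registry.cn-hangzhou.aliyuncs.com/<your-namespace>/sd-webui:v1 .
docker login registry.cn-hangzhou.aliyuncs.com
docker push registry.cn-hangzhou.aliyuncs.com/<your-namespace>/sd-webui:v1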

Q&A

The downloaded model is not usable

Check whether the model's file name (including the suffix) is correct.

Extensions cannot be installed online

Container-image deployments have security restrictions. You can download the extension locally and then upload it to the extensions directory through the management console. If you want to support installing from a URL, you need to customize the Docker image and modify the relevant parameters. A sketch of the offline approach follows.
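For example, a minimal sketch of preparing an extension on your local machine; sd-webui-controlnet is only used here as an example:

# download the extension locally
git clone https://github.com/Mikubill/sd-webui-controlnet.git
# then drag and drop the whole folder into /mnt/auto/sd/config/auto/extensions
# via the management console (kodbox), and restart the sd service so it is picked up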

How to access the SD API

You need to customize the image and enable the --api flag; after that, visit /docs to see the callable API endpoints. An example call is sketched below.
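A minimal sketch of calling the standard AUTOMATIC1111 text-to-image endpoint once --api is enabled; <sd-domain> is a placeholder for the domain name starting with sd from your deployment:

# the response contains the generated images as base64 strings under "images"
curl -X POST "https://<sd-domain>/sdapi/v1/txt2img" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a cat sitting on a windowsill", "steps": 20}'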

Fees

This application relies on Function Compute and NAS file storage. Before using it, please claim the corresponding free quota or purchase the corresponding resource package. For detailed pricing, refer to the official website.

A more flexible customization scheme

You can try mapping the entire webui directory directly onto NAS, which makes it more convenient to modify the source code; a sketch of the idea follows.
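One way to do this, sketched below, is to replace the per-directory MOUNTS logic in entrypoint.sh with a single copy-then-symlink of the whole webui tree; the /mnt/auto/sd/webui path is an assumption, not part of the template:

# first run: copy the whole webui tree from the image onto NAS
if [ ! -d /mnt/auto/sd/webui ]; then
  cp -r "${ROOT}" /mnt/auto/sd/webui
fi
# then point the in-container path at the NAS copy
rm -rf "${ROOT}"
ln -sT /mnt/auto/sd/webui "${ROOT}"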

Other considerations

Please pay attention to the relevant open-source licenses to avoid possible risks if you commercialize the service.
