Building a Stable Diffusion Environment on Mac M1


Apple engineers have created a Stable Diffusion repository for ARM64 chips such as the M1 and M2:

Run Stable Diffusion on Apple Silicon with Core ML

The link is: https://github.com/apple/ml-stable-diffusion

To make full use of the M1's built-in AI accelerator (the Neural Engine), the PyTorch model must be converted to an Apple Core ML model.

This article is based on that repository.

Environment preparation

1. Hardware environment

  • Apple MacBook Pro with M1 chip
  • 16 GB of memory; 8 GB also works, but requires some additional configuration.

2. System environment

3. Basic software environment

  • git: used to download the repository source code; ideally updated to the latest version
  • conda: mainly used to create the Python environment
  • Python: version 3.8 is required; higher or lower versions do not work. Just install it with conda.

The conda download page is: https://docs.conda.io/en/latest/miniconda.html
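To confirm the basic tools are in place, a quick check in the terminal looks like this (version numbers will differ from machine to machine):

# Check that git and conda are installed and on the PATH
git --version
conda --version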

Main reference

I searched a lot of material on the Internet and went through many rounds of trial and error; the write-up that worked best for me is the column referenced in the steps below.

If you run into problems, open that page and check it.

Steps

1. Download git

Refer to the official website: https://git-scm.com/downloads

Just download and install it.

2. Download conda

Refer to the official website: https://docs.conda.io/en/latest/miniconda.html

Download Miniconda and install it.

Miniconda is a slimmed-down distribution that bundles only Python; toolchains for other languages such as C++ and Java are left out.
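For reference, the installer can also be fetched and run from the terminal; the file name below assumes the usual Apple Silicon (arm64) naming on the download page, so check that page for the current installer:

# Download the Apple Silicon (arm64) Miniconda installer and run it
curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh
sh Miniconda3-latest-MacOSX-arm64.sh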

3. Create the Python environment

Reference: https://zhuanlan.zhihu.com/p/590869015

The corresponding commands are:

# Create and prepare the Python environment
conda create -n coreml_stable_diffusion python=3.8 -y

# List the conda environments
conda env list

# Activate the target environment
conda activate coreml_stable_diffusion

# Check the Python version (note the capital V)
python -V

These conda environments are scoped to the operating-system user and are used mainly from the shell.
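For later reference, leaving or deleting the environment uses the standard conda commands:

# Leave the currently active environment
conda deactivate

# Delete the environment entirely once it is no longer needed
conda remove -n coreml_stable_diffusion --all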

4. Download the repository

The command used is:

git clone https://github.com/apple/ml-stable-diffusion.git

GitHub also supports downloading a zip archive, but from mainland China you may need some workarounds to download it successfully.

If the clone is too slow, you may also need some network tricks, such as a paid proxy or acceleration service.

5. Install dependencies

# Enter the repository directory
cd ml-stable-diffusion

# Activate the target environment
conda activate coreml_stable_diffusion

# Install the Python dependencies; pip is installed automatically along with the Python environment
pip install -r requirements.txt

Again, if the download is too slow, a network workaround such as a proxy or a domestic PyPI mirror may help (see the example below).

If the installation fails because of network problems, simply run the command again.
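One optional workaround (a suggestion, not part of the original steps) is to point pip at a domestic mirror, for example the Tsinghua PyPI mirror:

# Install the dependencies through a domestic PyPI mirror
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple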

6. Convert the model

To use the M1's built-in AI accelerator (the Neural Engine), the PyTorch model must be converted to an Apple Core ML model.

The commands for the model conversion are:

# Enter the repository directory
cd ml-stable-diffusion

# Activate the target environment
conda activate coreml_stable_diffusion

# Convert the model; this downloads several GB of files
# (the default is the v1.4 model built into the script)
python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --convert-text-encoder --convert-vae-decoder --convert-safety-checker -o ./models

If memory is insufficient, try closing some other programs first.

I hit an error during execution at this point:

RuntimeError: PyTorch convert function for op 'scaled_dot_product_attention' not implemented.

Solution (reference: https://blog.csdn.net/cainiao1412/article/details/131204867):

pip show torch              # check the installed torch version
pip uninstall torch         # uninstall the current torch
pip install torch==1.13.1   # install the specified version

If you see this error, switch the torch version as above and then run the model conversion command again.

7. Verification and testing

The command used is:

python -m python_coreml_stable_diffusion.pipeline --prompt "magic book on the table" -i ./models -o ./output --compute-unit ALL --seed 93

Because it has to initialize the environment, load the model, and then run inference, the process is relatively slow; it took several minutes on my machine.

8. Build the web interface

The advantage is that the environment and model do not need to be reinitialized every time a prompt is run.

Install gradio; refer to: https://www.gradio.app/quickstart/

The corresponding installation command is:

pip install gradio

Then prepare a web.py script following the one described in the https://zhuanlan.zhihu.com/p/590869015 column; a rough sketch follows.
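The script itself is not reproduced here. As a rough, illustrative sketch only, the minimal gradio wrapper below simply shells out to the pipeline CLI from step 7, so unlike the script from the column it still reloads the model on every request; it assumes the file is saved as python_coreml_stable_diffusion/web.py so that the startup command below resolves the module.

# web.py -- minimal gradio sketch (illustrative only; not the script from the column)
import argparse
import glob
import os
import subprocess
import tempfile

import gradio as gr


def build_app(models_dir, compute_unit):
    def generate(prompt, seed):
        out_dir = tempfile.mkdtemp()
        # Reuse the exact CLI invocation from step 7
        subprocess.run(
            ["python", "-m", "python_coreml_stable_diffusion.pipeline",
             "--prompt", prompt,
             "-i", models_dir, "-o", out_dir,
             "--compute-unit", compute_unit,
             "--seed", str(int(seed))],
            check=True,
        )
        # The pipeline writes a PNG into the output directory; return the newest one
        images = sorted(glob.glob(os.path.join(out_dir, "**", "*.png"), recursive=True),
                        key=os.path.getmtime)
        return images[-1] if images else None

    return gr.Interface(
        fn=generate,
        inputs=[gr.Textbox(label="Prompt"), gr.Number(value=93, label="Seed")],
        outputs=gr.Image(label="Result"),
        title="Core ML Stable Diffusion",
    )


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-i", dest="models_dir", required=True)
    parser.add_argument("--compute-unit", default="ALL")
    args = parser.parse_args()
    # 0.0.0.0:7860 matches the access URL mentioned below
    build_app(args.models_dir, args.compute_unit).launch(server_name="0.0.0.0", server_port=7860)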

Once the web.py file is ready, the startup commands are:

# Enter the repository directory
cd ml-stable-diffusion

# Activate the target environment
conda activate coreml_stable_diffusion

# Start the WebUI
python -m python_coreml_stable_diffusion.web -i ./models --compute-unit ALL

Startup has to load the environment and the model, which takes some time.

After startup completes, the command line prints an access URL, for example: http://0.0.0.0:7860

9. Test WebUI

Open the access URL, for example: http://0.0.0.0:7860

Find a prompt template that works and modify it, for example:

rabbit, anthro, very cute kid's film character, disney pixar zootopia character concept artwork, 3d concept, detailed fur, high detail iconic character for upcoming film, trending on artstation, character design, 3d artistic render, highly detailed, octane, blender, cartoon, shadows, lighting

After entering the prompt, click Generate and wait.


With this configuration, the WebUI takes only about 7 seconds to generate an image, and the output file is about 500 KB.

This WebUI still has some issues: sometimes it generates an all-black image. If that happens, just refresh the page and try again.

There are many prompt templates on the Internet; a well-known collection is: https://github.com/Dalabad/stable-diffusion-prompt-templates

Of course, the advantage of a template is that you can experiment by replacing words such as rabbit with tiger.

10. Close the environment

While the WebUI is running, Python takes up a lot of memory. When it is no longer needed, close it from the console with CTRL+C, or kill the process directly (for example, as shown below).
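As a convenience not from the original write-up, the process can be located and killed by name; check what matches before killing anything:

# Show the WebUI process, matching against the full command line
pgrep -fl python_coreml_stable_diffusion.web

# Kill it by name
pkill -f python_coreml_stable_diffusion.web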

11. Integrated App

On Windows there are many one-click installation scripts; just search for keywords such as: windows stable diffusion 一键安装 (one-click install)

A quick search shows that similar packages exist for macOS as well, supporting both Intel and M1/M2 chips.

Pitfalls encountered

1. brew update failed

The cause was that brew had been switched to a domestic (mainland China) mirror. The domestic mirrors are of uneven quality, however, and often cause incompatibilities or errors.

Reset the brew source; reference: Replace and reset the default source of Mac Homebrew.

Essentially, brew relies on several git repositories, so if anything goes wrong you can work on the corresponding directories directly with git, as sketched below.
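A rough sketch of pointing the remotes back at the official Homebrew repositories (the paths come from brew itself; the homebrew/core tap directory may not exist on newer, API-based installations):

# Point brew itself back at the official repository
cd "$(brew --repo)"
git remote set-url origin https://github.com/Homebrew/brew.git

# Do the same for the homebrew-core tap, if it is checked out locally
cd "$(brew --repo homebrew/core)"
git remote set-url origin https://github.com/Homebrew/homebrew-core.git

# Update again
brew update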

2. Model conversion error

The error message is:

RuntimeError: PyTorch convert function for op 'scaled_dot_product_attention' not implemented.

Solution (reference: https://blog.csdn.net/cainiao1412/article/details/131204867):

pip show torch              # check the installed torch version
pip uninstall torch         # uninstall the current torch
pip install torch==1.13.1   # install the specified version

With torch version 1.13.1 the conversion succeeds.

3. Network problem

The firewall is quite powerful: the network frequently times out, and at that point some workarounds are needed.

Related Links

Author: Iron Anchor
Date: June 20, 2023

Original article: blog.csdn.net/renfufei/article/details/131308782