Stable Diffusion: a full record of building the model and generating your own artistic images

Introduction

Recently, Silicon Star has covered AI image generation technology many times, mentioning well-known products such as DALL·E, Midjourney, DALL·E mini (now known as Craiyon), Imagen, and TikTok's AI green screen.

Stable Diffusion, in fact, combines powerful generation capabilities with broad usability: the model runs directly on consumer-grade graphics cards and generates images quickly. Its free and open-source release means AI image generation is no longer a plaything for a few insiders.

In a field of AI image generation crowded with strong players, and with the giants entering one after another, Stability AI, the "mysterious" organization behind Stable Diffusion, looks like an outsider. Its founders are not well known, and the details of its founding story and funding are not public. Combined with its generosity in releasing Stable Diffusion free and open source, interest in this mysterious AI research organization has only grown.

Introduction to Stable Diffusion

The project has two development leads: Patrick Esser of Runway, an AI video editing startup, and Robin Rombach of the Machine Vision & Learning Group at the University of Munich (LMU). The technical foundation of the project comes mainly from the latent diffusion model research the two published at the computer vision conference CVPR 2022.

For training, the model used a cluster of 4,000 A100 GPUs and took about a month. The training data comes from LAION-Aesthetics, an "aesthetics"-focused subset from LAION (the Large-scale Artificial Intelligence Open Network), whose full LAION-5B collection contains nearly 5.9 billion image-text pairs.

Although training demands enormous compute, Stable Diffusion is quite friendly to end users: it runs on an ordinary graphics card, and even with less than 10GB of video memory it can generate high-resolution images in a few seconds.

A diffusion model is trained to predict, at each step, how to slightly denoise a sample; after a number of such iterations it arrives at the final result. Diffusion models have been applied to a variety of generative tasks such as image, speech, 3D shape, and graph synthesis.

The diffusion model consists of two steps:

  • Forward diffusion - maps data to noise by gradually perturbing the input data. Formally, this is a simple stochastic process that starts from a data sample and iteratively adds noise using a simple Gaussian diffusion kernel. This process is only used during training, not at inference.
  • Parametric backward process - undoes the forward diffusion and performs iterative denoising. This is the data-synthesis half: it is trained to generate data by turning random noise into realistic samples. (A small toy sketch follows this list.)
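
To make these two steps concrete, here is a tiny toy sketch of my own (an illustration only, not code from the Stable Diffusion repository) of the closed-form forward step and what the backward process is trained to undo:

import torch

# toy illustration of the two steps above
T = 1000
betas = torch.linspace(1e-4, 0.02, T)                 # noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def forward_diffuse(x0, t):
    """Forward diffusion: produce the noisy sample x_t directly from clean data x0."""
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t]
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    return x_t, noise

# During training, a network eps_theta(x_t, t) learns to predict `noise`;
# at inference (the parametric backward process), sampling starts from pure Gaussian
# noise and repeatedly subtracts the predicted noise, one small denoising step at a time.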

Running this process directly in pixel space is computationally cumbersome, so Stable Diffusion builds its diffusion model in a more efficient way, as shown below (from the model paper):

[Figure: the latent diffusion architecture, from the paper]

Stable Diffusion model building record

stable-diffusion-v1-1 environment preparation

The reason I separate the v1.1 setup from the later v1.4 setup is that the v1.1 repository looks like little more than a test: it does not contain the full v1.4 code, and both the model weights and the installation effort are much smaller.

  • sd-v1-1.ckpt: 237k steps at resolution 256x256 on laion2B-en. 194k steps at resolution 512x512 on laion-high-resolution (170M examples from LAION-5B with resolution >= 1024x1024).
  • sd-v1-2.ckpt: Resumed from sd-v1-1.ckpt. 515k steps at resolution 512x512 on laion-aesthetics v2 5+ (a subset of laion2B-en with estimated aesthetics score > 5.0, and additionally filtered to images with an original size >= 512x512, and an estimated watermark probability < 0.5. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using the LAION-Aesthetics Predictor V2).
  • sd-v1-3.ckpt: Resumed from sd-v1-2.ckpt. 195k steps at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning to improve classifier-free guidance sampling.
  • sd-v1-4.ckpt: Resumed from sd-v1-2.ckpt. 225k steps at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning to improve classifier-free guidance sampling.

The above comes from GitHub. The short version: sd-v1-1.ckpt is about 1.3 GB, sd-v1-4.ckpt is about 4 GB, and full-v1.4 is 7.4 GB, so let's start with the v1.1 environment setup.

pip install --upgrade diffusers transformers scipy
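
A quick sanity check of my own to confirm the install worked:

python -c "import diffusers, transformers, scipy; print(diffusers.__version__, transformers.__version__)"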

That's right, the whole v1.1 setup is that single pip line. The v1.1 environment is essentially a slimmed-down version of the v1.4 one; v1.4 is the full setup.

stable-diffusion-v1-4 environment preparation

This one has more problems. Because of network restrictions, some packages are genuinely hard to install; a proxy would speed things up considerably, but since I am working on a server, here are some notes on the pitfalls I ran into.

git clone https://github.com/CompVis/stable-diffusion.git
cd stable-diffusion
conda env create -f environment.yaml
conda activate ldm

The problems here are mainly in the second step: the download is very slow. A few fixes follow. The channels the author set in the yaml are the default pytorch and conda sources, and without a proxy they are not only slow but also much more likely to time out. Consider changing the channels to:

name: ldm
channels:
  - http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
  - http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/
  - http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/msys2/
  - http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/bioconda/
  - http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
  # - defaults
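
Alternatively, instead of editing the yaml you can point conda at the mirrors globally via ~/.condarc; a sketch of my own using the same Tsinghua addresses:

# ~/.condarc
channels:
  - http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
  - http://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/
show_channel_urls: true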

I don't know whether it is just me, but I then hit the error Solving environment: failed, ResolvePackageNotFound, as shown below:
[Screenshot: conda's ResolvePackageNotFound error]
I did not dig into the exact meaning of the error, but it looked like some kind of dependency conflict, so I switched to doing it by hand: I created a Python 3.8 virtual environment (py38) myself and installed the packages manually. Apart from CLIP and taming-transformers, nothing else gave me trouble.

Those last two packages failed with error: RPC failed; curl 56 GnuTLS recv error (-54): Error in the pull function., and the only hint given was note: This error originates from a subprocess, and is likely not a problem with pip.:
[Screenshot: the GnuTLS/pip error during installation]
The cause is that the pip in the virtual environment I created by hand was the latest version, while these two packages expect pip==20.3; after rolling pip back, the installation succeeded.
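
For reference, the manual route above looks roughly like this (my own sketch; the two git package sources follow the pip section of the repo's environment.yaml, so double-check them there):

conda create -n py38 python=3.8 -y
conda activate py38
python -m pip install pip==20.3   # CLIP / taming-transformers expect this older pip
pip install -e "git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers"
pip install -e "git+https://github.com/openai/CLIP.git@main#egg=clip"
# then install the remaining dependencies listed under the pip: section of environment.yaml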

Applying for access to Stable Diffusion on Hugging Face

First of all, to download the Stable Diffusion weights you must go to Hugging Face and accept the download agreement. The specific links are:

stable-diffusion-v1-1:
https://huggingface.co/CompVis/stable-diffusion-v1-1

stable-diffusion-v1-4:
https://huggingface.co/CompVis/stable-diffusion-v1-4

Opening either of these pages first shows the relevant agreement, roughly: restrictions on commercial use, no illegal activities, and so on. That said, after reading the QbitAI article about Stable Diffusion becoming so popular that artists collectively reported it (the explainer of its mechanism that LeCun liked), I suspect commercial companies will use it commercially anyway, simply because it is too popular. Emmm... Back to the topic: only after you click to accept the agreement can you download the weights on the server side.

On the server side enter:

huggingface-cli login

The login interface will pop up:
[Screenshot: the huggingface-cli login prompt]
Then go to Settings on the Hugging Face website (the flow is similar to GitHub's), select User Access Tokens, copy a token, and paste it into the prompt shown above to log in. If you do not have a User Access Token yet, create one:
[Screenshot: creating and copying a User Access Token]

Once the token login succeeds, you can test the model.
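
If you later work from a Jupyter notebook, as I do further down, huggingface_hub also provides a login widget; just an alternative for reference:

from huggingface_hub import notebook_login

notebook_login()   # paste the same User Access Token into the widget it shows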

stable-diffusion-v1-1 test

import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

model_id = "CompVis/stable-diffusion-v1-1"
device = "cuda"


pipe = StableDiffusionPipeline.from_pretrained(model_id, use_auth_token=True)
pipe = pipe.to(device)

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=7.5)["sample"][0]

image.save("astronaut_rides_horse.png")

As expected, a scrolling progress bar appears while the model downloads, so I will not show it here. The model is only about 1.3 GB, but my connection is slow, and after already downloading v1.4 my patience was wearing thin.

Of course, the above is only the most basic way to download the model; there are other options that fetch different weights:

"""
如果您受到 GPU 内存的限制并且可用的 GPU RAM 少于 10GB,请确保以 float16 精度加载 StableDiffusionPipeline,而不是如上所述的默认 float32 精度。
"""
import torch

pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision="fp16", use_auth_token=True)
pipe = pipe.to(device)

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=7.5)["sample"][0]  
    
image.save("astronaut_rides_horse.png")

"""
要换出噪声调度程序,请将其传递给from_pretrained:
"""
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

model_id = "CompVis/stable-diffusion-v1-1"
# Use the K-LMS scheduler here instead
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, use_auth_token=True)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=7.5)["sample"][0]  
    
image.save("astronaut_rides_horse.png")

Finally, if your network speed is really too poor, you can download directly from the web page; the link is:
https://huggingface.co/CompVis/stable-diffusion-v-1-1-original
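
A caveat of my own: the -original page provides a single .ckpt file, which is meant for the CompVis repository scripts (used later with --ckpt), not for diffusers' from_pretrained. To use a local copy with diffusers you would instead clone the diffusers-format repository and point from_pretrained at that directory; a sketch under that assumption:

# assumes you cloned https://huggingface.co/CompVis/stable-diffusion-v1-1 locally (git lfs needed)
pipe = StableDiffusionPipeline.from_pretrained("./stable-diffusion-v1-1")
pipe = pipe.to("cuda")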

stable-diffusion-v1-4 test

As with 1.1, the first step is downloading the model. There are many options; I will not list them all:

# make sure you're logged in with `huggingface-cli login`
from torch import autocast
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",
        use_auth_token=True
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt)["sample"][0]

image.save("astronaut_rides_horse.png")


# device = "cuda"
# model_path = "CompVis/stable-diffusion-v1-4"
# 
# # Using DDIMScheduler as an example; this also works with PNDMScheduler
# # uncomment this line if you want to use it.
# 
# # scheduler = PNDMScheduler.from_config(model_path, subfolder="scheduler", use_auth_token=True)
# 
# scheduler = DDIMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", clip_sample=False, set_alpha_to_one=False)
# pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
#     model_path,
#     scheduler=scheduler,
#     revision="fp16", 
#     torch_dtype=torch.float16,
#     use_auth_token=True
# ).to(device)

I used the plain download method above, with the default 32-bit precision and no other parameters changed, which means pulling down more than 4 GB of model files. The connection dropped a few times along the way, each time looking much the same, which is quite painful on a poor network:
[Screenshot: the download being interrupted partway through]
Fortunately the download did eventually finish. Afterwards the weights are cached just like PyTorch's model hub. The storage path is:

[Screenshot: the local cache path of the downloaded model]
Running the test produces, in the current directory, an image matching the prompt:
[Generated image: an astronaut riding a horse on Mars]
It looks rather comical. While waiting, I also hedged my bets and downloaded the checkpoint directly from the official page, just in case. The address is: https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/blob/main/sd-v1-4.ckpt

Either way, once the model is usable you can try the text-to-image routine. I wrote two prompts of my own, and I also borrowed the prompt and run command from the "model methods – Stable Diffusion" write-up, since it looked quite complete. For example:

python txt2img.py --prompt "Asia girl, glossy eyes, face, long hair, fantasy, elegant, highly detailed, digital painting, artstation, concept art, smooth, illustration, renaissance, flowy, melting, round moons, rich clouds, very detailed, volumetric light, mist, fine art, textured oil over canvas, epic fantasy art, very colorful, ornate intricate scales, fractal gems, 8 k, hyper realistic, high contrast" \
                  --plms \
                  --outdir ./output/ \
                  --ckpt ./models/sd-v1-4.ckpt \
                  --ddim_steps 100 \
                  --H 512 \
                  --W 512 \
                  --seed 8

The command is wrapped across lines with shell continuations purely for readability; it can also be written on a single line. The parameters are all explained on GitHub and none of them are hard to set. When the script first runs, it also needs to download a HardNet model:
[Screenshot: an additional model being downloaded on first run]
After it finishes downloading, you get the result; the image is:
[Generated image for the prompt above]

Here are the other two prompts I wrote myself:

prompt = "women, pink hair, ArtStation, on the ground, open jacket, video game art, digital painting, digital art, video game girls, sitting, game art, artwork"

prompt = "fantasy art, women, ArtStation, fantasy girl, artwork, closed eyes, long hair. 4K, Alec Tucker, pipes, fantasy city, fantasy art, ArtStation"

[Generated images for the two prompts above]

It seems something strange got mixed in? Emmm, I have no idea why that came out...

That covers the text-to-image use case. There is also image + text to image (img2img), which is started like this:

python img2img.py --prompt "magic fashion girl portrait, glossy eyes, face, long hair, fantasy, intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, renaissance, flowy, melting, round moons, rich clouds, very detailed, volumetric light, mist, fine art, textured oil over canvas, epic fantasy art, very colorful, ornate intricate scales, fractal gems, 8 k, hyper realistic, high contrast" \
                          --init-img ./ceshi/33.jpg \
                          --strength 0.8 \
                          --outdir ./output/ \
                          --ckpt ./models/sd-v1-4.ckpt \
                          --ddim_steps 100

I had assumed running this demo would wrap things up smoothly, but sadly my GPU did not have the resources: only a few GB of free video memory, while v1.4 here wants more than 15 GB:

    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
RuntimeError: CUDA out of memory. Tried to allocate 2.44 GiB (GPU 0; 14.75 GiB total capacity; 11.46 GiB already allocated; 1.88 GiB free; 11.75 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

So instead of fighting it, I switched straight to FP16 precision, following the Colab experiment (someone had apparently succeeded on a T4), and moved over to a Jupyter notebook.
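
(Two other memory levers I could have tried first, both hedged suggestions of mine rather than something from the original experiment: the max_split_size_mb setting that the error message itself mentions, and attention slicing, which recent diffusers pipelines expose:)

import os
# must be set before the first CUDA allocation
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# recent diffusers pipelines can also trade a little speed for much lower peak memory:
# pipe.enable_attention_slicing()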

First, import the packages:

import inspect
import warnings
from typing import List, Optional, Union

import torch
from torch import autocast
from tqdm.auto import tqdm

from diffusers import (
    AutoencoderKL,
    DDIMScheduler,
    DiffusionPipeline,
    PNDMScheduler,
    UNet2DConditionModel,
)
from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker
from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer

Then add the pipeline source code, download the pretrained weights, and load the model in float16:

class StableDiffusionImg2ImgPipeline(DiffusionPipeline):
    def __init__(
        self,
        vae: AutoencoderKL,
        text_encoder: CLIPTextModel,
        tokenizer: CLIPTokenizer,
        unet: UNet2DConditionModel,
        scheduler: Union[DDIMScheduler, PNDMScheduler],
        safety_checker: StableDiffusionSafetyChecker,
        feature_extractor: CLIPFeatureExtractor,
    ):
        super().__init__()
        scheduler = scheduler.set_format("pt")
        self.register_modules(
            vae=vae,
            text_encoder=text_encoder,
            tokenizer=tokenizer,
            unet=unet,
            scheduler=scheduler,
            safety_checker=safety_checker,
            feature_extractor=feature_extractor,
        )

    @torch.no_grad()
    def __call__(
        self,
        prompt: Union[str, List[str]],
        init_image: torch.FloatTensor,
        strength: float = 0.8,
        num_inference_steps: Optional[int] = 50,
        guidance_scale: Optional[float] = 7.5,
        eta: Optional[float] = 0.0,
        generator: Optional[torch.Generator] = None,
        output_type: Optional[str] = "pil",
    ):

        if isinstance(prompt, str):
            batch_size = 1
        elif isinstance(prompt, list):
            batch_size = len(prompt)
        else:
            raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")

        if strength < 0 or strength > 1:
            raise ValueError(f"The value of strength should be in [0.0, 1.0] but is {strength}")

        # set timesteps
        accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys())
        extra_set_kwargs = {}
        offset = 0
        if accepts_offset:
            offset = 1
            extra_set_kwargs["offset"] = 1

        self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs)

        # encode the init image into latents and scale the latents
        init_latents = self.vae.encode(init_image.to(self.device)).sample()
        init_latents = 0.18215 * init_latents

        # prepare init_latents noise to latents
        init_latents = torch.cat([init_latents] * batch_size)
        
        # get the original timestep using init_timestep
        init_timestep = int(num_inference_steps * strength) + offset
        init_timestep = min(init_timestep, num_inference_steps)
        timesteps = self.scheduler.timesteps[-init_timestep]
        timesteps = torch.tensor([timesteps] * batch_size, dtype=torch.long, device=self.device)
        
        # add noise to latents using the timesteps
        noise = torch.randn(init_latents.shape, generator=generator, device=self.device)
        init_latents = self.scheduler.add_noise(init_latents, noise, timesteps)

        # get prompt text embeddings
        text_input = self.tokenizer(
            prompt,
            padding="max_length",
            max_length=self.tokenizer.model_max_length,
            truncation=True,
            return_tensors="pt",
        )
        text_embeddings = self.text_encoder(text_input.input_ids.to(self.device))[0]

        # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
        # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
        # corresponds to doing no classifier free guidance.
        do_classifier_free_guidance = guidance_scale > 1.0
        # get unconditional embeddings for classifier free guidance
        if do_classifier_free_guidance:
            max_length = text_input.input_ids.shape[-1]
            uncond_input = self.tokenizer(
                [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt"
            )
            uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]

            # For classifier free guidance, we need to do two forward passes.
            # Here we concatenate the unconditional and text embeddings into a single batch
            # to avoid doing two forward passes
            text_embeddings = torch.cat([uncond_embeddings, text_embeddings])


        # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
        # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
        # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
        # and should be between [0, 1]
        accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
        extra_step_kwargs = {}
        if accepts_eta:
            extra_step_kwargs["eta"] = eta

        latents = init_latents
        t_start = max(num_inference_steps - init_timestep + offset, 0)
        for i, t in tqdm(enumerate(self.scheduler.timesteps[t_start:])):
            # expand the latents if we are doing classifier free guidance
            latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents

            # predict the noise residual
            noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings)["sample"]

            # perform guidance
            if do_classifier_free_guidance:
                noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
                noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)

            # compute the previous noisy sample x_t -> x_t-1
            latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs)["prev_sample"]

        # scale and decode the image latents with vae
        latents = 1 / 0.18215 * latents
        image = self.vae.decode(latents)

        image = (image / 2 + 0.5).clamp(0, 1)
        image = image.cpu().permute(0, 2, 3, 1).numpy()

        # run safety checker
        safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(self.device)
        image, has_nsfw_concept = self.safety_checker(images=image, clip_input=safety_checker_input.pixel_values)

        if output_type == "pil":
            image = self.numpy_to_pil(image)

        return {"sample": image, "nsfw_content_detected": has_nsfw_concept}

device = "cuda"
model_path = "CompVis/stable-diffusion-v1-4"

# Using DDIMScheduler as an example; this also works with PNDMScheduler
# uncomment this line if you want to use it.

# scheduler = PNDMScheduler.from_config(model_path, subfolder="scheduler", use_auth_token=True)

scheduler = DDIMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", clip_sample=False, set_alpha_to_one=False)
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    model_path,
    scheduler=scheduler,
    revision="fp16", 
    torch_dtype=torch.float16,
    use_auth_token=True
).to(device)

This again downloads close to 3 GB of model files. Once it loads without errors, we load an image and preprocess it so it can be passed to the pipeline. You can start with the official sample image for testing:

Preprocessing:

import PIL
from PIL import Image
import numpy as np

def preprocess(image):
    w, h = image.size
    w, h = map(lambda x: x - x % 32, (w, h))  # resize to integer multiple of 32
    image = image.resize((w, h), resample=PIL.Image.LANCZOS)
    image = np.array(image).astype(np.float32) / 255.0
    image = image[None].transpose(0, 3, 1, 2)
    image = torch.from_numpy(image)
    return 2.*image - 1.

Load the official sample image; you can download and upload it manually, or fetch it directly over the network:

import requests
from io import BytesIO

url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

response = requests.get(url)
init_img = Image.open(BytesIO(response.content)).convert("RGB")
init_img = init_img.resize((768, 512))
init_img

[Input image: the official sketch-mountains sample]
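
As a quick sanity check of my own, preprocess turns this PIL image into a 1×3×H×W float tensor scaled to [-1, 1]:

x = preprocess(init_img)
print(x.shape, x.min().item(), x.max().item())   # torch.Size([1, 3, 512, 768]), values roughly in [-1, 1]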
Finally, set the prompt and run it through the pipeline to reproduce the effect shown on GitHub:

init_image = preprocess(init_img)

prompt = "A fantasy landscape, trending on artstation"

generator = torch.Generator(device=device).manual_seed(1024)
with autocast("cuda"):
    images = pipe(prompt=prompt, init_image=init_image, strength=0.75, guidance_scale=7.5, generator=generator)["sample"]
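
The pipeline again returns PIL images under the "sample" key, so saving the first result takes one more line (the file name is my own choice):

images[0].save("fantasy_landscape.png")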

But what I actually fed it here was a different prompt:

prompt = "Anime, Comic, pink hair, ArtStation, on the ground,cartoon, Game "

The result is:
[Generated image for the prompt above]

This looks okay, so I downloaded a few anime pictures and planned to reuse the prompts above, mainly the "pink hair" keyword, which immediately made me think of Kuriyama Mirai and Megumi (while checking I realized it is actually the Sakura + Megumi combination that stuck with me). As a result, the few code cells in the Jupyter notebook from the screenshot above ran nearly 80 times, with me fine-tuning the prompt for more than 60 of them... Like the proverbial donkey, I ran out of tricks; I suspect the prompt itself is the problem, but so be it. The better-tuned results are:

[Generated images: two of the better-tuned results]

Looking at what other people have produced online, though, their results are genuinely beautiful. Judging from mine, the first issue may be that I picked the lower-precision model, and the second that my prompt vocabulary is a bit lacking. I was tuning this use case while writing the post, and with other things keeping me busy the tweaking got a bit tedious, but the outcome is acceptable. (PS: what else can I do if I am not satisfied? emmm)

Everything above covers building the environment yourself and tuning things by hand, which effectively means adjusting the model's parameters yourself and steering the output in the direction you want. Below I introduce some online platforms I tried on Hugging Face, plus one commercial project.

Experience Stable Diffusion Online

Two addresses are worth recommending here. The first is the official demo:

https://huggingface.co/spaces/stabilityai/stable-diffusion

[Screenshot: the official Stable Diffusion demo on Hugging Face Spaces]

The result for the input "Anime, Comic, on the ground, cartoon, Game" feels indescribable. The official online deployment is presumably a smaller model, and generation is very slow.

The second is diffuse-the-rest:
https://huggingface.co/spaces/huggingface/diffuse-the-rest

[Screenshot: the diffuse-the-rest demo]
Yes, it seems my sketches are still very "realistic", emmm. Also, after a few tries I noticed that when specifying Asia, or China directly, the generated faces, for men and women alike, look a bit different from the European and American ones; perhaps there is not enough domestic data in the training set.

Finally, there is a non-open-source project called stable-diffusion-animation:

https://replicate.com/andreasjansson/stable-diffusion-animation

[Screenshot: the stable-diffusion-animation project on Replicate]
This one looks more convincing. I made a 20-second video out of 24 frames of images, and it happened to resemble that hugely popular "biological origins" video; I do not know whether that one was made with this project. And with that, this post comes to an end.


Original post: blog.csdn.net/submarineas/article/details/126634227