Python diffusers StableDiffusionXLPipeline for offline use

Download v1-5-pruned-emaonly.safetensors

https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main

I downloaded it and put it in the project's models directory.
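If you prefer to script the download, here is a minimal sketch using huggingface_hub, run on a machine that can reach Hugging Face (the repo id and filename are the ones linked above; local_dir assumes a reasonably recent huggingface_hub):

# Sketch: fetch the checkpoint into ./models on a machine with internet access.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="runwayml/stable-diffusion-v1-5",
    filename="v1-5-pruned-emaonly.safetensors",
    local_dir="./models",  # put the file where the loading code below expects it
)

Note that the loading code below also expects ./models/v1-inference.yaml, which comes from the CompVis stable-diffusion repository (the same file handled in the patch section further down).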

import torch
from diffusers import StableDiffusionPipeline

model_path = "./models/v1-5-pruned-emaonly.safetensors"
# model_path = "./models/v1-5-pruned.safetensors"
# modelId = "runwayml/stable-diffusion-v1-5"
if Util.isMac():  # Util is a project helper; isMac() returns True on macOS
    # from_single_file loads the checkpoint directly from the local .safetensors file
    pipe = StableDiffusionPipeline.from_single_file(model_path, original_config_file='./models/v1-inference.yaml', cache_dir='./cache/', use_safetensors=True)
    pipe = pipe.to("mps")
    pipe.enable_attention_slicing()
else:
    pipe = StableDiffusionPipeline.from_single_file(model_path, torch_dtype=torch.float16, use_safetensors=True)
    pipe.to("cuda")
    pipe.enable_model_cpu_offload()
    pipe.enable_attention_slicing()
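With the pipeline loaded from the local file, generation itself needs no network access. A minimal usage sketch (prompt, step count and output filename are arbitrary examples):

prompt = "a photograph of an astronaut riding a horse"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("output.png")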

Modify the diffusers package

Several URLs are hard-coded in the diffusers source, so the library will still try to reach Hugging Face for them; they have to be changed manually.



import sysconfig

def replaceStringInFile(srcStr, dstStr, filePath):
    # Read in the file
    with open(filePath, 'r') as file:
        filedata = file.read()

    # Replace the target string
    filedata = filedata.replace(srcStr, dstStr)

    # Write the file out again
    with open(filePath, 'w') as file:
        file.write(filedata)

sitedir=sysconfig.get_paths()["purelib"]
ckPtFilePath=f"{sitedir}/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py"
print(ckPtFilePath)

srcStr="https://raw.githubusercontent.com/CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml"
dstStr="http://xxxxxx/github/CompVis_stable-diffusion_main_configs_stable-diffusion_v1-inference.yaml"

replaceStringInFile(srcStr, dstStr, ckPtFilePath)

srcStr="https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml"
dstStr="http://xxxxxx/github/Stability-AI_stablediffusion_main_configs_stable-diffusion_v2-inference-v.yaml"
replaceStringInFile(srcStr, dstStr, ckPtFilePath)


srcStr="https://raw.githubusercontent.com/Stability-AI/generative-models/main/configs/inference/sd_xl_base.yaml"
dstStr="http://xxxxxx/github/Stability-AI_generative-models_main_configs_inference_sd_xl_base.yaml"
replaceStringInFile(srcStr, dstStr, ckPtFilePath)


srcStr="https://raw.githubusercontent.com/Stability-AI/generative-models/main/configs/inference/sd_xl_refiner.yaml"
dstStr="http://xxxxxx/github/Stability-AI_generative-models_main_configs_inference_sd_xl_refiner.yaml"
replaceStringInFile(srcStr, dstStr, ckPtFilePath)


ckPtFilePath=f"{sitedir}/diffusers/loaders.py"
print(ckPtFilePath)
srcStr="text_encoder=text_encoder,\n            vae=vae,"
dstStr="text_encoder=text_encoder,\n            local_files_only=True,\n            vae=vae,"
replaceStringInFile(srcStr, dstStr, ckPtFilePath)


ckPtFilePath=f"{sitedir}/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py"
print(ckPtFilePath)
srcStr="has_projection=True, **config_kwargs"
dstStr="has_projection=True, local_files_only=local_files_only, **config_kwargs"
replaceStringInFile(srcStr, dstStr, ckPtFilePath)

For http://xxxxxx, you have to download the file from GitHub yourself, host it somewhere you control, and then substitute that link.
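As an illustration, on a machine with internet access you could grab the four configs with a short script and then drop them into whatever web root sits behind http://xxxxxx (the output filenames simply mirror the replacement links above):

# Sketch: download the hard-coded config files so they can be re-hosted internally.
# Run on a machine that can reach raw.githubusercontent.com.
import urllib.request

configs = {
    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml":
        "CompVis_stable-diffusion_main_configs_stable-diffusion_v1-inference.yaml",
    "https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml":
        "Stability-AI_stablediffusion_main_configs_stable-diffusion_v2-inference-v.yaml",
    "https://raw.githubusercontent.com/Stability-AI/generative-models/main/configs/inference/sd_xl_base.yaml":
        "Stability-AI_generative-models_main_configs_inference_sd_xl_base.yaml",
    "https://raw.githubusercontent.com/Stability-AI/generative-models/main/configs/inference/sd_xl_refiner.yaml":
        "Stability-AI_generative-models_main_configs_inference_sd_xl_refiner.yaml",
}
for url, filename in configs.items():
    urllib.request.urlretrieve(url, filename)  # saves next to the script; move to the web root afterwards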

local_files_only makes the library resolve models from the local cache instead of downloading them. The cache lives under the current user's home directory:

Ubuntu: /home/fxbox/.cache/huggingface/hub

Mac: /Users/linzhiji/.cache/huggingface/hub
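Once the cache is populated, you can also tell diffusers/huggingface_hub explicitly to stay offline. A sketch, assuming the cached model is runwayml/stable-diffusion-v1-5 (HF_HUB_OFFLINE and local_files_only are standard huggingface_hub/diffusers switches):

# Sketch: force fully offline operation once the cache is in place.
import os
os.environ["HF_HUB_OFFLINE"] = "1"  # never attempt a network request

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    local_files_only=True,  # resolve everything from ~/.cache/huggingface/hub
)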

Copy the cache from the machine that can get past the firewall to the machine that cannot.
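One way to move it is through removable media or a shared mount; a sketch (the destination path is only an example, and scp or rsync over the network works just as well):

# Sketch: bundle the Hugging Face cache so it can be carried to the offline machine.
import shutil
from pathlib import Path

src = Path.home() / ".cache" / "huggingface" / "hub"
dst = Path("/mnt/usb/huggingface_hub")  # hypothetical destination: USB drive or network share
shutil.copytree(src, dst, dirs_exist_ok=True)
# On the offline machine, copy the contents back into ~/.cache/huggingface/hub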

Reference:

Stable Diffusion image generation, strategy three - Zhihu


Origin: blog.csdn.net/linzhiji/article/details/132852861