https://gitee.com/leeguandong/dreambooth-for-diffusion
https://zhuanlan.zhihu.com/p/584736850
This repository is built on the diffusers library; these days most people use kohya-ss/sd-scripts instead, or work with diffusers directly.
1. Install
torch
torchvision
huggingface_hub==0.14.1
tokenizers==0.13.3
transformers==4.25.1
diffusers==0.16.0
accelerate==0.15.0
Update libstdc++.so.6 and glibc if needed.
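If the environment misbehaves after installation, a quick way to confirm that the pinned versions above actually took effect is a small check like the following (a minimal sketch, not part of the repo; the version list simply mirrors the pins above):

# Sanity-check the pinned package versions (sketch).
from importlib.metadata import version, PackageNotFoundError

pinned = {
    "huggingface_hub": "0.14.1",
    "tokenizers": "0.13.3",
    "transformers": "4.25.1",
    "diffusers": "0.16.0",
    "accelerate": "0.15.0",
}

for package, expected in pinned.items():
    try:
        installed = version(package)
    except PackageNotFoundError:
        print(f"{package}: NOT INSTALLED (expected {expected})")
        continue
    status = "OK" if installed == expected else f"MISMATCH (expected {expected})"
    print(f"{package}: {installed} {status}")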
2. ckpt2diffusers (convert a .ckpt checkpoint to diffusers format)
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
Lines 705/817 set up the OpenAI CLIP weights; patch them to load clip-vit-large-patch14 from a local copy so the conversion does not need to reach the Hugging Face Hub:
if os.path.exists(default_model_path):
    # use the clip-vit-large-patch14 copy under default_model_path if it exists
    text_model = CLIPTextModel.from_pretrained(os.path.join(default_model_path, "clip-vit-large-patch14"))
else:
    # otherwise fall back to the copy bundled under tools/
    text_model = CLIPTextModel.from_pretrained("/home/imcs/local_disk/dreambooth-for-diffusion-main/tools/clip-vit-large-patch14")

if os.path.exists(default_model_path):
    tokenizer = CLIPTokenizer.from_pretrained(os.path.join(default_model_path, "clip-vit-large-patch14"))
else:
    tokenizer = CLIPTokenizer.from_pretrained("/home/imcs/local_disk/dreambooth-for-diffusion-main/tools/clip-vit-large-patch14")
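After the conversion finishes, the resulting diffusers-format directory can be loaded with StableDiffusionPipeline to verify that the weights and the patched CLIP paths resolve correctly (a minimal sketch; the output directory name ./model_output is an assumption):

# Sketch: smoke-test a converted diffusers-format model (path is hypothetical).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./model_output",              # directory produced by the ckpt2diffusers conversion (assumed name)
    torch_dtype=torch.float16,
    safety_checker=None,           # skip the safety checker for a quick smoke test
)
pipe = pipe.to("cuda")

image = pipe("a photo of a building", num_inference_steps=25).images[0]
image.save("smoke_test.png")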
3. train_object.sh
Training a specific person or object (it is recommended to prepare 3 to 5 images with a consistent style showing the specific subject):
# accelerate launch tools/train_dreambooth.py \
python -m torch.distributed.launch --nproc_per_node=4 --nnodes=1 --node_rank=0 --master_addr=localhost --master_port=22222 --use_env "tools/train_dreambooth.py" \
--train_text_encoder \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--instance_prompt="a photo of <xxx> building" \
--with_prior_preservation --prior_loss_weight=1.0 \
--class_prompt="a photo of building" \
--class_data_dir=$CLASS_DIR \
--num_class_images=200 \
--output_dir=$OUTPUT_DIR \
--logging_dir=$LOG_DIR \
--center_crop \
--resolution=512 \
--train_batch_size=1 \
--gradient_accumulation_steps=1 --gradient_checkpointing \
--use_8bit_adam \
--learning_rate=2e-6 \
--lr_scheduler="constant" \
--lr_warmup_steps=0 \
--auto_test_model \
--test_prompts_file=$TEST_PROMPTS_FILE \
--test_seed=123 \
--test_num_per_prompt=3 \
--max_train_steps=1000 \
--save_model_every_n_steps=500
# --mixed_precision="fp16" \
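Once training writes a checkpoint to $OUTPUT_DIR, the DreamBooth subject can be sampled with the instance token used in the command above (a minimal sketch; the output path is an assumption, and the <xxx> token and seed simply mirror the --instance_prompt and --test_seed flags):

# Sketch: sample from the DreamBooth-trained model with the instance prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./output_dir",                              # $OUTPUT_DIR from train_object.sh (assumed path)
    torch_dtype=torch.float16,
).to("cuda")

generator = torch.Generator("cuda").manual_seed(123)   # same seed as --test_seed
images = pipe(
    "a photo of <xxx> building",                 # must match --instance_prompt
    num_inference_steps=30,
    guidance_scale=7.5,
    num_images_per_prompt=3,                     # mirrors --test_num_per_prompt
    generator=generator,
).images
for i, img in enumerate(images):
    img.save(f"object_test_{i}.png")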
4. train_style.sh
Fine-tune your own full model (it is recommended to prepare more than 3,000 images with as much diversity as possible; the data determines the quality of the trained model):
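Before launching the fine-tune it is worth auditing the training folder to confirm the image count and that every file opens at a usable resolution (a minimal sketch; the folder name ./train_data is an assumption):

# Sketch: quick audit of a style fine-tuning dataset (folder name is hypothetical).
from pathlib import Path
from PIL import Image

data_dir = Path("./train_data")
extensions = {".jpg", ".jpeg", ".png", ".webp"}

files = [p for p in data_dir.rglob("*") if p.suffix.lower() in extensions]
print(f"found {len(files)} images (3000+ recommended for style fine-tuning)")

bad = []
for p in files:
    try:
        with Image.open(p) as im:
            if min(im.size) < 512:               # training resolution is 512
                print(f"small image {im.size}: {p}")
    except OSError:
        bad.append(p)

print(f"{len(bad)} unreadable files")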