A first look at the results
The four LoRA styles shown above are: outdoor garden wedding dress, winter snow-scene Hanfu, flame goddess, and fairy style.
Environment preparation
In the ModelScope community, select the PAI-DSW GPU environment.
After entering, open a terminal, confirm that about 20 GB of GPU memory is free (nvidia-smi), and then download the core repository:
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/modelscope/facechain.git
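The memory check can be scripted so that an undersized instance is caught before training starts. A minimal sketch: the ~20 GB threshold comes from the text above, while the graceful fallback when nvidia-smi is missing is my own addition.

```python
import shutil
import subprocess

def free_gpu_mem_mib():
    """Query free GPU memory in MiB via nvidia-smi; None if unavailable."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.free",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True).stdout
    return int(out.splitlines()[0])

mem = free_gpu_mem_mib()
if mem is None:
    print("nvidia-smi not found; cannot verify GPU memory")
elif mem < 20_000:  # the tutorial needs roughly 20 GB
    print(f"Warning: only {mem} MiB free; training may run out of memory")
else:
    print(f"GPU memory OK: {mem} MiB free")
```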
Now, to the main topic.
1. Fixed-character LoRA training:
1. Create a folder named imgs for the training images. Note that it must sit inside the facechain folder; put the face photos of the person whose identity you want to fix into it.
2. Character LoRA training
Then run the following command in the terminal to start training:
PYTHONPATH=. CUDA_VISIBLE_DEVICES="0" sh train_lora.sh "ly261666/cv_portrait_model" "v2.0" "film/film" "./imgs" "./processed" "./output"
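The invocation can also be wrapped in Python, which makes the positional arguments of train_lora.sh easier to keep straight. This is a sketch, not part of FaceChain; the helper names are mine, and the argument order simply mirrors the command above.

```python
import os
import subprocess

def build_train_command(base_model="ly261666/cv_portrait_model",
                        revision="v2.0",
                        sub_dir="film/film",
                        imgs_dir="./imgs",
                        processed_dir="./processed",
                        output_dir="./output"):
    """Assemble the train_lora.sh call used above as an argument list."""
    return ["sh", "train_lora.sh", base_model, revision, sub_dir,
            imgs_dir, processed_dir, output_dir]

def run_training(cmd):
    """Launch training with the same environment variables as the one-liner."""
    env = dict(os.environ, PYTHONPATH=".", CUDA_VISIBLE_DEVICES="0")
    subprocess.run(cmd, env=env, check=True)

cmd = build_train_command()
print(" ".join(cmd))
```

Calling run_training(cmd) from the facechain folder reproduces the shell command exactly.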
2. Style LoRA replacement:
1. Upload the style LoRA file and modify the parameters
Style source: outdoor photo | LiblibAI. We mainly migrate its outdoor floral style, renaming the file wedding.
2. Modify the constants.py file
Mainly point the file name at the newly uploaded style file, and add wedding-dress-related content to the prompt.
styles = [
    {'name': '默认风格(default style)'},
    {'name': '凤冠霞帔(Chinese traditional gorgeous suit)',
     'model_id': 'ly261666/civitai_xiapei_lora',
     'revision': 'v1.0.0',
     'bin_file': 'xiapei.safetensors',
     'multiplier_style': 0.35,
     'cloth_name': '汉服风(hanfu)',
     'add_prompt_style': 'red, hanfu, tiara, crown, '},
    {'name': '婚纱(wedding)',
     'model_id': 'ly261666/civitai_xiapei_lora',
     'revision': 'v1.0.0',
     'bin_file': 'wedding.safetensors',
     'multiplier_style': 0.35,
     'cloth_name': '婚纱(wedding)',
     'add_prompt_style': 'bride wearing a white wedding dress,simple and elegant style, <lora:outdoor photo_20230819231754:0.6> --ar 3:4'},
]
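A typo in a new entry (say, a missing bin_file) only surfaces at inference time, so a quick sanity check pays off. The required-key set below is an assumption based on which fields the entries above carry; the helper is mine, not part of FaceChain.

```python
# Keys a non-default style entry is expected to carry (assumption).
REQUIRED_KEYS = {'model_id', 'revision', 'bin_file',
                 'multiplier_style', 'cloth_name', 'add_prompt_style'}

def validate_styles(styles):
    """Return a list of (index, missing_keys) for incomplete style entries."""
    problems = []
    for i, entry in enumerate(styles):
        if len(entry) == 1:  # the default style carries only a name
            continue
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            problems.append((i, sorted(missing)))
    return problems

wedding = {'name': '婚纱(wedding)',
           'model_id': 'ly261666/civitai_xiapei_lora',
           'revision': 'v1.0.0',
           'bin_file': 'wedding.safetensors',
           'multiplier_style': 0.35,
           'cloth_name': '婚纱(wedding)',
           'add_prompt_style': 'bride wearing a white wedding dress'}
print(validate_styles([{'name': 'default'}, wedding]))  # → []
```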
3. Modify the key parameters of run_inference.py
Modify the style model folder path, set use_style to True, and change the style index to 2 (the wedding entry appended above).
use_main_model = True
use_face_swap = True
use_post_process = True  # set to False to leave the output count uncontrolled
use_stylization = False
processed_dir = './processed'
num_generate = 5
base_model = 'ly261666/cv_portrait_model'
revision = 'v2.0'
multiplier_style = 0.25
base_model_sub_dir = 'film/film'
train_output_dir = './output'
output_dir = './generated'
use_style = True
if not use_style:
    style_model_path = None
    pos_prompt = generate_pos_prompt(styles[0]['name'], cloth_prompt[0]['prompt'])
else:
    model_dir = '/mnt/workspace/wedding'
    style_model_path = os.path.join(model_dir, styles[2]['bin_file'])
    pos_prompt = generate_pos_prompt(styles[2]['name'], styles[2]['add_prompt_style'])  # the style carries its own prompt
gen_portrait = GenPortrait(pos_prompt, neg_prompt, style_model_path, multiplier_style,
                           use_main_model, use_face_swap, use_post_process,
                           use_stylization)
outputs = gen_portrait(processed_dir, num_generate, base_model,
                       train_output_dir, base_model_sub_dir, revision)
os.makedirs(output_dir, exist_ok=True)
for i, out_tmp in enumerate(outputs):
    cv2.imwrite(os.path.join(output_dir, f'{i}.png'), out_tmp)
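The if/else above can be read as a small pure function. A sketch with generate_pos_prompt stubbed out (the real one lives in FaceChain, so the stub here is only for illustration):

```python
import os

def select_style(styles, use_style, style_index, model_dir, make_prompt,
                 cloth_prompt=''):
    """Mirror the branch in run_inference.py: pick a LoRA file and a prompt."""
    if not use_style:
        return None, make_prompt(styles[0]['name'], cloth_prompt)
    entry = styles[style_index]
    path = os.path.join(model_dir, entry['bin_file'])
    return path, make_prompt(entry['name'], entry['add_prompt_style'])

# Stub for FaceChain's generate_pos_prompt; illustration only.
stub = lambda name, extra: f"{name}, {extra}"
styles = [{'name': 'default'},
          {'name': 'xiapei'},
          {'name': '婚纱(wedding)',
           'bin_file': 'wedding.safetensors',
           'add_prompt_style': 'bride wearing a white wedding dress'}]
path, prompt = select_style(styles, True, 2, '/mnt/workspace/wedding', stub)
print(path)  # /mnt/workspace/wedding/wedding.safetensors
```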
3. Run inference to generate images:
python run_inference.py
Fixed face + migrated outdoor floral style + prompt-controlled wedding dress = outdoor wedding photo
Appendix
1. Project address: GitHub - modelscope/facechain: FaceChain is a deep-learning toolchain for generating your Digital-Twin (currently over 4K stars on GitHub!)
2. Principle explanation with graphic and video explanation
3. Last weekly report
Kuwa FaceChain Open Source Project Iteration Weekly 2023-08-19
4. Global developer recruitment
The Kuwa FaceChain project has been open-sourced. We plan to keep polishing it with the strength of the open-source community, unlock more advanced features (such as character stickers, character comic stories, virtual fitting rooms...), pursue deeper algorithm innovation, and publish papers at top conferences. If you are interested in this project and believe in its future, you are welcome to sign up.