Train your own Stable Diffusion models locally - no code required

I've been playing with Stable Diffusion for a few weeks now. I started with the basics, running the base model on Hugging Face to test different prompts. Then I started reading tips and tricks, joined a few Discord servers, and got my hands dirty training and fine-tuning my own models. Now that I have a model trained on my own face, I can create custom avatars and more. It's fun.

Last week I was talking to a colleague who told me he was very interested in generative AI but lacked the technical background to train a custom model, so he was just running inference on the base model. It suddenly occurred to me: what if I built something for him to play with, no code at all? Something like "Hey, just run this script and it will build the model you want from your input images."

It sounded like an interesting project, so I started figuring out how to do it. And, well, I was late. A few months late, in fact: everything I wanted to build already exists, made by really smart people who are light years ahead of me in AI exposure and experience. So I humbly dropped my project and instead wrote a short tutorial for my friend explaining how to train your own model.

This is the extended version.

Requirements

While this should work on any OS, since all the repositories and web UIs we're going to use support Linux, Windows, and Mac, I've only tested it with my own and my friend's setup:

  • Windows 10
  • A GPU with at least 6-7 GB of VRAM; more precisely, this works well on an NVIDIA GeForce RTX 3060 or RTX 3060 Ti. That isn't enough to run full Dreambooth training, but we'll be using LoRA, so it's fine (don't worry if that doesn't mean anything yet, we'll come back to it later). Some steps may run on the CPU only, but I haven't tested that. If you want a quick way to check how much VRAM your GPU reports, see the sketch just after this list.
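
If you'd like to double-check your card before committing to a training run, here is a minimal sketch of a VRAM sanity check. It assumes you already have PyTorch with CUDA support installed; it is not part of the no-code workflow described in this tutorial, just an optional quick check.

```python
# Quick VRAM sanity check (assumes a PyTorch install with CUDA support).
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)  # first GPU
    vram_gb = props.total_memory / 1024 ** 3
    print(f"GPU: {props.name}, VRAM: {vram_gb:.1f} GB")
    if vram_gb >= 6:
        print("Should be enough for LoRA training.")
    else:
        print("Likely too little for LoRA training; full Dreambooth needs even more.")
else:
    print("No CUDA GPU detected; training would fall back to CPU and be very slow.")
```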
Apart from that, you need to install two things:

Source: blog.csdn.net/iCloudEnd/article/details/131729415