Great tips, thank you! It feels like I'm right behind you in terms of where I'm at, so your input is very much appreciated.
3. Train for only 1 epoch - interesting, any known rationale here?
5. I just read somewhere else that someone got good results from mixing their custom model with the original (60/40 in their case), so it's good to hear more anecdotes that this is pretty effective. The further training after merging sounds especially promising!
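In case it helps anyone following along: as I understand it, that kind of 60/40 mix is just a per-tensor weighted average of the two checkpoints. A minimal sketch of the idea (file names and the exact ratio are placeholders, and I'm assuming plain .safetensors checkpoints with matching keys, so double-check against whatever merge tool you actually use):

```python
# Rough sketch of a 60/40 weighted checkpoint merge (illustrative only).
# Assumes both files are .safetensors checkpoints with matching tensor keys.
from safetensors.torch import load_file, save_file

RATIO = 0.6  # 60% custom model, 40% original base

custom = load_file("custom_model.safetensors")  # hypothetical file name
base = load_file("base_model.safetensors")      # hypothetical file name

merged = {}
for key, tensor in custom.items():
    if key in base and base[key].shape == tensor.shape:
        # Linear interpolation between the two checkpoints, tensor by tensor
        merged[key] = RATIO * tensor + (1.0 - RATIO) * base[key]
    else:
        # Fall back to the custom model's tensor if the key/shape doesn't match
        merged[key] = tensor

save_file(merged, "merged_model.safetensors")
```

Dedicated merge tools (e.g. the checkpoint merger in the A1111 web UI) do essentially this, plus extra handling for mismatched keys and precision.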
I've also been using kohya_ss for training LoRAs, so it's great to hear it works for you on full models as well. On your point about the inference tricks, definitely noted, but I did notice that you can pass some params (number of samples, negative embeddings, etc.) to the sample images generated during training (check the textarea placeholder text). You still won't have all the usual tricks, but it gets you a little closer.
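For reference, those sample prompts take per-line flags; from what I remember of the placeholder text (worth double-checking in your version of kohya_ss), a line looks roughly like this:

```
a portrait photo of a woman, detailed skin --n lowres, bad anatomy, blurry --w 512 --h 768 --l 7.5 --s 28 --d 1
```

where --n is the negative prompt, --w/--h the resolution, --l the CFG scale, --s the step count, and --d the seed.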