Commit graph

129 commits

Author SHA1 Message Date
Lee Bousfield
f9706acf43 Support loading textual inversion embeddings from safetensors files 2023-01-10 18:40:34 -07:00
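In sketch, safetensors support is a second loading branch next to torch.load. A minimal illustration, assuming the file holds a flat dict of tensors (real embedding files may nest the data differently):

```python
import torch
from safetensors.torch import load_file

def load_embedding_tensors(path: str) -> dict:
    """Load an embedding file as a dict of tensors, whatever the container."""
    if path.lower().endswith(".safetensors"):
        return load_file(path, device="cpu")      # safetensors: dict[str, Tensor]
    return torch.load(path, map_location="cpu")   # legacy .pt/.bin pickle
```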
AUTOMATIC
1fbb6f9ebe make a dropdown for prompt template selection 2023-01-09 23:35:40 +03:00
AUTOMATIC
43bb5190fc remove/simplify some changes from #6481 2023-01-09 22:52:23 +03:00
AUTOMATIC1111
18c001792a Merge branch 'master' into varsize 2023-01-09 22:45:39 +03:00
AUTOMATIC
085427de0e make it possible for extensions/scripts to add their own embedding directories 2023-01-08 09:37:33 +03:00
AUTOMATIC
a0c87f1fdf skip images in embeddings dir if they have a second .preview extension 2023-01-08 08:52:26 +03:00
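The .preview skip amounts to a double-splitext test on the filename. A minimal sketch of that check:

```python
import os

def is_preview_image(filename: str) -> bool:
    # "style.preview.png" -> base "style.preview" -> second extension ".preview"
    base, _ = os.path.splitext(filename)
    return os.path.splitext(base)[1].lower() == ".preview"
```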
dan
669fb18d52 Add checkbox for variable training dims 2023-01-08 02:31:40 +08:00
dan
448b9cedab Allow variable img size 2023-01-08 02:14:36 +08:00
AUTOMATIC
79e39fae61 CLIP hijack rework 2023-01-07 01:46:13 +03:00
AUTOMATIC
683287d87f rework saving training params to file #6372 2023-01-06 08:52:06 +03:00
AUTOMATIC1111
88e01b237e Merge pull request #6372 from timntorres/save-ti-hypernet-settings-to-txt-revised
Save hypernet and textual inversion settings to text file, revised.
2023-01-06 07:59:44 +03:00
Faber
81133d4168 allow loading embeddings from subdirectories 2023-01-06 03:38:37 +07:00
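Loading from subdirectories is a switch from a flat directory listing to a recursive walk; a sketch, assuming nothing about webui's actual helper names:

```python
import os

def iter_embedding_files(root: str):
    # Recurse into subdirectories instead of listing only the top level.
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            yield os.path.join(dirpath, name)
```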
Kuma
fda04e620d typo in TI 2023-01-05 18:44:19 +01:00
timntorres
b6bab2f052 Include model in log file. Exclude directory. 2023-01-05 09:14:56 -08:00
timntorres
b85c2b5cf4 Clean up ti, add same behavior to hypernetwork. 2023-01-05 08:14:38 -08:00
timntorres
eea8fc40e1 Add option to save ti settings to file. 2023-01-05 07:24:22 -08:00
AUTOMATIC1111
eeb1de4388 Merge branch 'master' into gradient-clipping 2023-01-04 19:56:35 +03:00
AUTOMATIC
525cea9245 use shared function from processing for creating dummy mask when training inpainting model 2023-01-04 17:58:07 +03:00
AUTOMATIC
184e670126 fix the merge 2023-01-04 17:45:01 +03:00
AUTOMATIC1111
da5c1e8a73 Merge branch 'master' into inpaint_textual_inversion 2023-01-04 17:40:19 +03:00
AUTOMATIC1111
7bbd984dda Merge pull request #6253 from Shondoit/ti-optim
Save Optimizer next to TI embedding
2023-01-04 14:09:13 +03:00
Vladimir Mandic
192ddc04d6 add job info to modules 2023-01-03 10:34:51 -05:00
Shondoit
bddebe09ed Save Optimizer next to TI embedding
Also add check to load only .PT and .BIN files as embeddings. (since we add .optim files in the same directory)
2023-01-03 13:30:24 +01:00
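The idea in Shondoit's commit: persist optimizer state in a sibling .optim file, and tighten the loader so that file is never mistaken for an embedding. A hedged sketch (the exact naming scheme is an assumption):

```python
import os
import torch

def save_with_optimizer(embedding_path: str, embedding: dict, optimizer) -> None:
    torch.save(embedding, embedding_path)
    # Optimizer state lives next to the embedding, e.g. "style.pt" -> "style.optim".
    torch.save(optimizer.state_dict(), os.path.splitext(embedding_path)[0] + ".optim")

def is_embedding_file(filename: str) -> bool:
    # Only .pt and .bin count as embeddings, so the sibling .optim files are skipped.
    return os.path.splitext(filename)[1].lower() in (".pt", ".bin")
```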
Philpax
c65909ad16 feat(api): return more data for embeddings 2023-01-02 12:21:48 +11:00
AUTOMATIC
311354c0bb fix the issue with training on SD2.0 2023-01-02 00:38:09 +03:00
AUTOMATIC
bdbe09827b changed embedding accepted shape detection to use existing code and support the new alt-diffusion model, and reformatted messages a bit #6149 2022-12-31 22:49:09 +03:00
Vladimir Mandic
f55ac33d44 validate textual inversion embeddings 2022-12-31 11:27:02 -05:00
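Validation here boils down to checking each vector's width against the text encoder's embedding size (768 for SD 1.x, 1024 for SD 2.x). An illustrative check, not webui's actual function:

```python
import torch

def validate_embedding(vec: torch.Tensor, expected_dim: int) -> None:
    if vec.dim() != 2 or vec.shape[1] != expected_dim:
        raise ValueError(
            f"embedding shape {tuple(vec.shape)} does not match vector size {expected_dim}"
        )
```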
Yuval Aboulafia
3bf5591efe fix F541 f-string without any placeholders 2022-12-24 21:35:29 +02:00
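F541 is the flake8 rule against f-strings with no placeholders; the fix is simply dropping the f prefix:

```python
print(f"Training complete")  # flagged: F541, f-string without any placeholders
print("Training complete")   # fixed: plain string literal
```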
Jim Hays
c0355caefe Fix various typos 2022-12-14 21:01:32 -05:00
AUTOMATIC1111
c9a2cfdf2a Merge branch 'master' into racecond_fix 2022-12-03 10:19:51 +03:00
brkirch
4d5f1691dd Use devices.autocast instead of torch.autocast 2022-11-30 10:33:42 -05:00
AUTOMATIC
b48b7999c8 Merge remote-tracking branch 'flamelaw/master' 2022-11-27 12:19:59 +03:00
flamelaw
755df94b2a set TI AdamW default weight decay to 0 2022-11-27 00:35:44 +09:00
AUTOMATIC
ce6911158b Add support for Stable Diffusion 2.0 2022-11-26 16:10:46 +03:00
flamelaw
89d8ecff09 small fixes 2022-11-23 02:49:01 +09:00
flamelaw
5b57f61ba4 fix pin_memory with different latent sampling method 2022-11-21 10:15:46 +09:00
flamelaw
bd68e35de3 Gradient accumulation, autocast fix, new latent sampling method, etc 2022-11-20 12:35:26 +09:00
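flamelaw's accumulation change follows the standard PyTorch pattern: scale each micro-batch loss by the accumulation count and step the optimizer every N batches. A self-contained sketch with a toy model and data (the weight_decay=0 default matches the AdamW entry above):

```python
import torch
from torch import nn

model = nn.Linear(8, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0)
batches = [torch.randn(4, 8) for _ in range(8)]

accumulation_steps = 4
optimizer.zero_grad()
for step, batch in enumerate(batches):
    loss = model(batch).pow(2).mean() / accumulation_steps  # scale so summed grads average
    loss.backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```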
AUTOMATIC
cdc8020d13 change StableDiffusionProcessing to internally use sampler name instead of sampler index 2022-11-19 12:01:51 +03:00
Muhammad Rizqi Nur
bb832d7725 Simplify grad clip 2022-11-05 11:48:38 +07:00
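Simplified gradient clipping typically means one call to PyTorch's built-in utility between backward() and step(); a minimal sketch:

```python
import torch
from torch import nn

model = nn.Linear(8, 1)
loss = model(torch.randn(4, 8)).pow(2).mean()
loss.backward()
# Clip by global norm; clip_grad_value_ is the by-value alternative.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
```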
Fampai
39541d7725 Fixes race condition in training when VAE is unloaded
set_current_image can attempt to use the VAE when it is unloaded to the CPU while training
2022-11-04 04:50:22 -04:00
Muhammad Rizqi Nur
237e79c77d Merge branch 'master' into gradient-clipping 2022-11-02 20:48:58 +07:00
Nerogar
cffc240a73 fixed textual inversion training with inpainting models 2022-11-01 21:02:07 +01:00
Fampai
890e68aaf7 Fixed minor bug
when unloading vae during TI training, generating images after training will error out
2022-10-31 10:07:12 -04:00
Fampai
3b0127e698 Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into TI_optimizations 2022-10-31 09:54:51 -04:00
Fampai
006756f9cd Added TI training optimizations
option to use xattention optimizations when training
option to unload vae when training
2022-10-31 07:26:08 -04:00
Muhammad Rizqi Nur
cd4d59c0de Merge master 2022-10-30 18:57:51 +07:00
Muhammad Rizqi Nur
3d58510f21 Fix dataset still being loaded even when training will be skipped 2022-10-30 00:54:59 +07:00
Muhammad Rizqi Nur
a07f054c86 Add missing info on hypernetwork/embedding model log
Mentioned here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/1528#discussioncomment-3991513

Also group the saving into one
2022-10-30 00:49:29 +07:00
Muhammad Rizqi Nur
ab05a74ead Revert "Add cleanup after training"
This reverts commit 3ce2bfdf95.
2022-10-30 00:32:02 +07:00
Muhammad Rizqi Nur
3ce2bfdf95 Add cleanup after training 2022-10-29 19:43:21 +07:00