Commit graph

224 commits

Author SHA1 Message Date
AUTOMATIC
aa6e55e001 do not display the message for TI unless the list of loaded embeddings changed 2023-01-29 11:53:05 +03:00
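A minimal sketch of the idea behind this change (names are illustrative, not the webui's actual code): remember the previously reported set of embeddings and only print the message when that set changes.

```python
def report_loaded_embeddings(loaded_names, previously_reported):
    """Print the 'loaded embeddings' message only when the set of loaded
    textual inversion embeddings differs from the last reported set."""
    current = set(loaded_names)
    if current != previously_reported:
        names = ", ".join(sorted(current))
        print(f"Textual inversion embeddings loaded ({len(current)}): {names}")
    return current  # caller stores this for the next comparison
```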
Max Audron
5eee2ac398 add data-dir flag and set all user data directories based on it 2023-01-27 14:44:30 +01:00
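A hedged sketch of how a --data-dir style flag can drive the user data layout; the derived subdirectory names below are assumptions for illustration, not the project's actual paths.

```python
import argparse
import os

# Illustrative only: derive user data directories from a single --data-dir flag.
parser = argparse.ArgumentParser()
parser.add_argument("--data-dir", type=str, default=".", help="base directory for all user data")
args, _unknown = parser.parse_known_args()

data_path = os.path.abspath(args.data_dir)
embeddings_dir = os.path.join(data_path, "embeddings")   # assumed subdirectory name
models_dir = os.path.join(data_path, "models")           # assumed subdirectory name
os.makedirs(embeddings_dir, exist_ok=True)
os.makedirs(models_dir, exist_ok=True)
```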
Alex "mcmonkey" Goodwin
e179b6098a allow symlinks in the textual inversion embeddings folder 2023-01-25 08:48:40 -08:00
AUTOMATIC
40ff6db532 extra networks UI
rework of hypernets: rather than being enabled via settings, hypernets are added directly to the prompt as <hypernet:name:weight>
2023-01-21 08:36:07 +03:00
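The <hypernet:name:weight> syntax can be handled by scanning the prompt for such tags; the regex-based parser below is a simplified sketch, not the webui's actual extra-networks implementation.

```python
import re

# Illustrative regex for <hypernet:name> or <hypernet:name:weight> tags.
EXTRA_NETWORK_RE = re.compile(r"<hypernet:([^:>]+)(?::([\d.]+))?>")

def parse_hypernet_tags(prompt: str):
    """Return the prompt with hypernet tags stripped, plus (name, weight) pairs."""
    hypernets = []

    def _collect(match: re.Match) -> str:
        name = match.group(1)
        weight = float(match.group(2)) if match.group(2) else 1.0
        hypernets.append((name, weight))
        return ""  # remove the tag from the prompt text

    cleaned = EXTRA_NETWORK_RE.sub(_collect, prompt)
    return cleaned.strip(), hypernets

# parse_hypernet_tags("a castle at dusk <hypernet:anime:0.6>")
# -> ("a castle at dusk", [("anime", 0.6)])
```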
AUTOMATIC1111
0f9cacaa0e
Merge pull request #6844 from guaneec/crop-ui
Add auto-sized cropping UI
2023-01-19 13:11:05 +03:00
dan
2985b317d7 Fix of fix 2023-01-19 17:39:30 +08:00
dan
18a09c7e00 Simplification and bugfix 2023-01-19 17:36:23 +08:00
AUTOMATIC
924e222004 add option to show/hide warnings
removed the hiding of warnings from LDSR
fixed/reworked a few places that produced warnings
2023-01-18 23:04:24 +03:00
dan
4688bfff55 Add auto-sized cropping UI 2023-01-17 17:16:43 +08:00
Vladimir Mandic
110d1a2d59
add fields to settings file 2023-01-15 12:41:00 -05:00
AUTOMATIC
d8b90ac121 big rework of the progressbar/preview system to allow multiple users to prompt at the same time without getting each other's previews 2023-01-15 18:51:04 +03:00
AUTOMATIC
a95f135308 change hash to sha256 2023-01-14 09:56:59 +03:00
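Computing a SHA-256 digest of a large checkpoint is typically done by streaming the file through hashlib; this is a generic sketch, not the repository's exact hashing code.

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through hashlib.sha256 so large checkpoints are hashed
    without loading them fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```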
AUTOMATIC
82725f0ac4 fix a bug caused by merge 2023-01-13 15:04:37 +03:00
AUTOMATIC1111
9cd7716753
Merge branch 'master' into tensorboard 2023-01-13 14:57:38 +03:00
AUTOMATIC1111
544e7a233e
Merge pull request #6689 from Poktay/add_gradient_settings_to_logging_file
add gradient settings to training settings log files
2023-01-13 14:45:32 +03:00
AUTOMATIC
a176d89487 print bucket sizes for training without resizing images #6620
fix an error when generating a picture with an embedding in it
2023-01-13 14:32:15 +03:00
AUTOMATIC1111
486bda9b33
Merge pull request #6620 from guaneec/varsize_batch
Enable batch_size>1 for mixed-sized training
2023-01-13 14:03:31 +03:00
Josh R
0b262802b8 add gradient settings to training settings log files 2023-01-12 17:31:05 -08:00
Shondoit
d52a80f7f7 Allow creation of zero vectors for TI 2023-01-12 09:22:29 +01:00
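A sketch of what "zero vectors" means when creating a textual inversion embedding: instead of copying the token embeddings of an init text, the vectors start out as zeros. Function and parameter names are illustrative.

```python
from typing import Optional

import torch

def init_embedding_vectors(num_vectors: int, embedding_dim: int,
                           init_vectors: Optional[torch.Tensor] = None,
                           zero_init: bool = False) -> torch.Tensor:
    """Start a TI embedding either from zero vectors or from vectors copied
    from an init text's token embeddings (padding with zeros if it is short)."""
    if zero_init or init_vectors is None:
        return torch.zeros(num_vectors, embedding_dim)
    vectors = init_vectors[:num_vectors].clone()
    if vectors.shape[0] < num_vectors:
        pad = torch.zeros(num_vectors - vectors.shape[0], embedding_dim)
        vectors = torch.cat([vectors, pad], dim=0)
    return vectors
```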
Vladimir Mandic
3f43d8a966
set descriptions 2023-01-11 10:28:55 -05:00
Lee Bousfield
f9706acf43
Support loading textual inversion embeddings from safetensors files 2023-01-10 18:40:34 -07:00
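Loading embeddings from .safetensors files can be dispatched on the file extension; this sketch assumes the safetensors package is installed and is not the webui's exact loader.

```python
import os

import torch
from safetensors.torch import load_file  # assumes the safetensors package is installed

def load_embedding_tensors(path: str) -> dict:
    """Dispatch on the file extension: .safetensors via safetensors,
    everything else via torch.load."""
    if os.path.splitext(path)[1].lower() == ".safetensors":
        return load_file(path, device="cpu")
    return torch.load(path, map_location="cpu")
```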
dan
6be644fa04 Enable batch_size>1 for mixed-sized training 2023-01-11 05:31:58 +08:00
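For batch_size > 1 with mixed-sized training images, a common approach is to bucket images by resolution so every batch is homogeneous; the grouping below is a simplified sketch of that idea, not the PR's implementation.

```python
from collections import defaultdict

def bucket_by_size(image_sizes):
    """Group image indices by (width, height) so that batches with
    batch_size > 1 only ever contain images of identical dimensions."""
    buckets = defaultdict(list)
    for index, (w, h) in enumerate(image_sizes):
        buckets[(w, h)].append(index)
    return buckets

# bucket_by_size([(512, 512), (640, 512), (512, 512)])
# -> {(512, 512): [0, 2], (640, 512): [1]}
```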
AUTOMATIC
1fbb6f9ebe make a dropdown for prompt template selection 2023-01-09 23:35:40 +03:00
AUTOMATIC
43bb5190fc remove/simplify some changes from #6481 2023-01-09 22:52:23 +03:00
AUTOMATIC1111
18c001792a
Merge branch 'master' into varsize 2023-01-09 22:45:39 +03:00
AUTOMATIC
085427de0e make it possible for extensions/scripts to add their own embedding directories 2023-01-08 09:37:33 +03:00
AUTOMATIC
a0c87f1fdf skip images in embeddings dir if they have a second .preview extension 2023-01-08 08:52:26 +03:00
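The "second .preview extension" check can be implemented by splitting the filename twice; a small sketch with an illustrative function name:

```python
import os

def is_preview_image(filename: str) -> bool:
    """Treat files like 'style.preview.png' as preview images rather than
    embeddings, based on the second '.preview' extension."""
    base, _ext = os.path.splitext(filename)            # 'style.preview'
    return os.path.splitext(base)[1].lower() == ".preview"

# is_preview_image("style.preview.png") -> True
# is_preview_image("style.png")         -> False
```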
dan
72497895b9 Move batchsize check 2023-01-08 02:57:36 +08:00
dan
669fb18d52 Add checkbox for variable training dims 2023-01-08 02:31:40 +08:00
dan
448b9cedab Allow variable img size 2023-01-08 02:14:36 +08:00
AUTOMATIC
79e39fae61 CLIP hijack rework 2023-01-07 01:46:13 +03:00
AUTOMATIC
683287d87f rework saving training params to file #6372 2023-01-06 08:52:06 +03:00
AUTOMATIC1111
88e01b237e
Merge pull request #6372 from timntorres/save-ti-hypernet-settings-to-txt-revised
Save hypernet and textual inversion settings to text file, revised.
2023-01-06 07:59:44 +03:00
Faber
81133d4168
allow loading embeddings from subdirectories 2023-01-06 03:38:37 +07:00
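Recursively discovering embeddings in subdirectories can be done with os.walk; this sketch (with followlinks=True, which also covers the earlier symlink commit) uses an assumed extension list.

```python
import os

def list_embedding_files(root: str, extensions=(".pt", ".bin", ".safetensors")):
    """Walk the embeddings directory recursively (following symlinks) so that
    embeddings placed in subdirectories are picked up as well."""
    for dirpath, _dirnames, filenames in os.walk(root, followlinks=True):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in extensions:
                yield os.path.join(dirpath, name)
```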
Kuma
fda04e620d
typo in TI 2023-01-05 18:44:19 +01:00
timntorres
b6bab2f052 Include model in log file. Exclude directory. 2023-01-05 09:14:56 -08:00
timntorres
b85c2b5cf4 Clean up TI, add the same behavior to hypernetworks. 2023-01-05 08:14:38 -08:00
timntorres
eea8fc40e1 Add option to save ti settings to file. 2023-01-05 07:24:22 -08:00
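A hedged sketch of logging training settings to a text file in the run's log directory; the filename and format here are illustrative, not the PR's exact output.

```python
import datetime
import json
import os

def save_training_settings(log_directory: str, settings: dict) -> None:
    """Append the settings used for this training run to a text file in the
    log directory so that runs can be compared later."""
    os.makedirs(log_directory, exist_ok=True)
    with open(os.path.join(log_directory, "train_settings.txt"), "a", encoding="utf8") as f:
        f.write(datetime.datetime.now().isoformat() + "\n")
        f.write(json.dumps(settings, indent=4, default=str) + "\n\n")
```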
AUTOMATIC1111
eeb1de4388
Merge branch 'master' into gradient-clipping 2023-01-04 19:56:35 +03:00
AUTOMATIC
525cea9245 use shared function from processing for creating dummy mask when training inpainting model 2023-01-04 17:58:07 +03:00
AUTOMATIC
184e670126 fix the merge 2023-01-04 17:45:01 +03:00
AUTOMATIC1111
da5c1e8a73
Merge branch 'master' into inpaint_textual_inversion 2023-01-04 17:40:19 +03:00
AUTOMATIC1111
7bbd984dda
Merge pull request #6253 from Shondoit/ti-optim
Save Optimizer next to TI embedding
2023-01-04 14:09:13 +03:00
Vladimir Mandic
192ddc04d6
add job info to modules 2023-01-03 10:34:51 -05:00
Shondoit
bddebe09ed Save Optimizer next to TI embedding
Also add a check to load only .PT and .BIN files as embeddings (since .optim files are saved to the same directory)
2023-01-03 13:30:24 +01:00
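The idea of saving the optimizer next to the embedding, sketched with assumed file naming alongside the loader-side extension check the commit mentions:

```python
import os

import torch

def save_embedding_with_optimizer(embedding_state: dict,
                                  optimizer: torch.optim.Optimizer,
                                  path: str) -> None:
    """Store the optimizer state beside the embedding so training can resume
    later; '<name>.pt' gets a sibling '<name>.pt.optim' file."""
    torch.save(embedding_state, path)
    torch.save(optimizer.state_dict(), path + ".optim")

def is_embedding_file(filename: str) -> bool:
    """Loader-side counterpart: only .pt/.bin files count as embeddings, so
    the .optim files in the same directory are skipped."""
    return os.path.splitext(filename)[1].lower() in (".pt", ".bin")
```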
Philpax
c65909ad16 feat(api): return more data for embeddings 2023-01-02 12:21:48 +11:00
AUTOMATIC
311354c0bb fix the issue with training on SD2.0 2023-01-02 00:38:09 +03:00
AUTOMATIC
bdbe09827b changed detection of accepted embedding shapes to use existing code and support the new alt-diffusion model, and reformatted messages a bit #6149 2022-12-31 22:49:09 +03:00
Vladimir Mandic
f55ac33d44
validate textual inversion embeddings 2022-12-31 11:27:02 -05:00
Yuval Aboulafia
3bf5591efe fix F541 f-string without any placeholders 2022-12-24 21:35:29 +02:00
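For reference, flake8's F541 flags f-strings that contain no placeholders; the fix is simply dropping the needless f prefix. The string below is illustrative.

```python
# flake8 F541: f-string without any placeholders
print(f"Loading textual inversion embeddings")   # flagged
print("Loading textual inversion embeddings")    # fixed
```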