Commit graph

147 commits

Author SHA1 Message Date
brkirch
4738486d8f Support for hypernetworks with --upcast-sampling 2023-02-06 18:10:55 -05:00
AUTOMATIC
81823407d9 add --no-hashing 2023-02-04 11:38:56 +03:00
AUTOMATIC
78f59a4e01 enable compact view for train tab
prevent previews from ruining hypernetwork training
2023-01-22 00:02:51 +03:00
AUTOMATIC
40ff6db532 extra networks UI
rework of hypernets: rather than via settings, hypernets are added directly to prompt as <hypernet:name:weight>
2023-01-21 08:36:07 +03:00
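The `<hypernet:name:weight>` prompt syntax mentioned in this commit can be illustrated with a small regex-based parser. This is a minimal sketch only; the function name and the exact grammar handling are assumptions, and the webui's real extra-networks parser is more general.

```python
import re

# Hypothetical parser for tags of the form <hypernet:name:weight>;
# illustrative only, not the webui's actual implementation.
TAG_RE = re.compile(r"<hypernet:(?P<name>[^:>]+):(?P<weight>[0-9.]+)>")

def extract_hypernets(prompt):
    """Return (cleaned_prompt, [(name, weight), ...]) for hypernet tags."""
    found = [(m.group("name"), float(m.group("weight")))
             for m in TAG_RE.finditer(prompt)]
    cleaned = TAG_RE.sub("", prompt).strip()
    return cleaned, found
```

For example, `extract_hypernets("a photo <hypernet:anime:0.8>")` yields the prompt text with the tag stripped plus the `(name, weight)` pair.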
AUTOMATIC
924e222004 add option to show/hide warnings
removed hiding warnings from LDSR
fixed/reworked few places that produced warnings
2023-01-18 23:04:24 +03:00
aria1th
13445738d9 Fix tensorboard related functions 2023-01-16 03:02:54 +09:00
aria1th
598f7fcd84 Fix loss_dict problem 2023-01-16 02:46:21 +09:00
AngelBottomless
16f410893e fix missing 'mean loss' for tensorboard integration 2023-01-16 02:08:47 +09:00
AUTOMATIC
d8b90ac121 big rework of the progressbar/preview system to allow multiple users to run prompts at the same time without getting previews of each other 2023-01-15 18:51:04 +03:00
AUTOMATIC
f9ac3352cb change hypernets to use sha256 hashes 2023-01-14 10:25:37 +03:00
AUTOMATIC
a95f135308 change hash to sha256 2023-01-14 09:56:59 +03:00
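The two commits above switch model hashing to SHA-256, which in essence means streaming the file through `hashlib.sha256` in chunks so large checkpoints never have to fit in memory. A minimal sketch, with a hypothetical function name:

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks; sketch of the sha256-hashing idea,
    not the webui's actual hashing code."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```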
AUTOMATIC1111
9cd7716753 Merge branch 'master' into tensorboard 2023-01-13 14:57:38 +03:00
Vladimir Mandic
3f43d8a966 set descriptions 2023-01-11 10:28:55 -05:00
aria1th
a4a5475cfa Variable dropout rate
Implements variable dropout rate from #4549

Fixes the hypernetwork multiplier being able to be modified during training; also guards against user error by forcing the multiplier to lower values for training.

Changes function name to match torch.nn.module standard

Fixes RNG reset issue when generating previews by restoring RNG state
2023-01-10 14:56:57 +09:00
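The "restoring RNG state" fix in the commit above amounts to snapshotting the generator state before generating a preview and restoring it afterward, so previews do not perturb the training sequence. A stdlib-only sketch (the actual fix restores torch/CUDA generator state; the helper name is hypothetical):

```python
import random

def with_preserved_rng(fn):
    """Run fn() without disturbing the global RNG sequence.
    Sketch of the RNG-restore idea using Python's stdlib random."""
    state = random.getstate()
    try:
        return fn()
    finally:
        random.setstate(state)
```

Anything `fn` draws from the RNG is invisible to later callers, since the state is put back on exit.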
AUTOMATIC
1fbb6f9ebe make a dropdown for prompt template selection 2023-01-09 23:35:40 +03:00
dan
72497895b9 Move batchsize check 2023-01-08 02:57:36 +08:00
dan
669fb18d52 Add checkbox for variable training dims 2023-01-08 02:31:40 +08:00
AUTOMATIC
683287d87f rework saving training params to file #6372 2023-01-06 08:52:06 +03:00
timntorres
b6bab2f052 Include model in log file. Exclude directory. 2023-01-05 09:14:56 -08:00
timntorres
b85c2b5cf4 Clean up ti, add same behavior to hypernetwork. 2023-01-05 08:14:38 -08:00
AUTOMATIC1111
eeb1de4388 Merge branch 'master' into gradient-clipping 2023-01-04 19:56:35 +03:00
Vladimir Mandic
192ddc04d6 add job info to modules 2023-01-03 10:34:51 -05:00
AUTOMATIC1111
b12de850ae Merge pull request #5992 from yuvalabou/F541
Fix F541: f-string without any placeholders
2022-12-25 09:16:08 +03:00
Vladimir Mandic
5f1dfbbc95 implement train api 2022-12-24 18:02:22 -05:00
Yuval Aboulafia
3bf5591efe fix F541 f-string without any placeholders 2022-12-24 21:35:29 +02:00
AUTOMATIC1111
c9a2cfdf2a Merge branch 'master' into racecond_fix 2022-12-03 10:19:51 +03:00
brkirch
4d5f1691dd Use devices.autocast instead of torch.autocast 2022-11-30 10:33:42 -05:00
flamelaw
1bd57cc979 last_layer_dropout default to False 2022-11-23 20:21:52 +09:00
flamelaw
d2c97fc3fe fix dropout, implement train/eval mode 2022-11-23 20:00:00 +09:00
flamelaw
89d8ecff09 small fixes 2022-11-23 02:49:01 +09:00
flamelaw
5b57f61ba4 fix pin_memory with different latent sampling method 2022-11-21 10:15:46 +09:00
flamelaw
bd68e35de3 Gradient accumulation, autocast fix, new latent sampling method, etc 2022-11-20 12:35:26 +09:00
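Gradient accumulation, named in the commit above, sums gradients over several micro-batches and applies one optimizer step per group, emulating a larger batch size. A framework-free numeric sketch under assumed names, minimizing a simple squared error with SGD:

```python
def grad(w, x):
    # derivative of (w - x)^2 with respect to w
    return 2.0 * (w - x)

def train(data, accum_steps=4, lr=0.05, epochs=50):
    """Illustrative gradient-accumulation loop (not the webui's code):
    accumulate micro-batch gradients, step once every accum_steps."""
    w = 0.0
    acc, count = 0.0, 0
    for _ in range(epochs):
        for x in data:
            acc += grad(w, x)
            count += 1
            if count == accum_steps:
                w -= lr * acc / accum_steps  # averaged: acts like one big batch
                acc, count = 0.0, 0
    return w
```

With `data = [1, 2, 3, 4]` the loop converges toward the mean 2.5, exactly as a single batch of four samples would.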
AUTOMATIC
cdc8020d13 change StableDiffusionProcessing to internally use sampler name instead of sampler index 2022-11-19 12:01:51 +03:00
Muhammad Rizqi Nur
cabd4e3b3b Merge branch 'master' into gradient-clipping 2022-11-07 22:43:38 +07:00
AUTOMATIC
62e3d71aa7 rework the code to not use the walrus operator because colab's 3.7 does not support it 2022-11-05 17:09:42 +03:00
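Reworking code "to not use the walrus operator" for Python 3.7 means splitting `if (value := expr):` into a plain assignment followed by the test, since `:=` only exists from 3.8. A hedged, illustrative example (the cache helper is hypothetical, not the actual reworked code):

```python
def get_cached(cache, key, compute):
    """Python 3.7-compatible rework: instead of
        if (value := cache.get(key)) is not None: return value
    assign first, then test."""
    value = cache.get(key)
    if value is not None:
        return value
    value = compute(key)
    cache[key] = value
    return value
```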
AUTOMATIC1111
cb84a304f0 Merge pull request #4273 from Omegastick/ordered_hypernetworks
Sort hypernetworks list
2022-11-05 16:16:18 +03:00
Muhammad Rizqi Nur
bb832d7725 Simplify grad clip 2022-11-05 11:48:38 +07:00
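The "Simplify grad clip" commit concerns gradient clipping by global norm: if the gradient vector's L2 norm exceeds a threshold, scale it down proportionally. A framework-free sketch of the idea (in the webui this is done with torch's clipping utilities, not plain floats):

```python
import math

def clip_grad_norm(grads, max_norm):
    """Scale gradients down if their L2 norm exceeds max_norm;
    illustrative sketch of the clip-by-norm technique."""
    total = math.sqrt(sum(g * g for g in grads))
    if total > max_norm:
        scale = max_norm / total
        return [g * scale for g in grads]
    return list(grads)
```

For instance, a gradient `[3.0, 4.0]` (norm 5) clipped to `max_norm=1.0` is scaled to roughly `[0.6, 0.8]`, while gradients already within the norm pass through unchanged.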
Isaac Poulton
08feb4c364 Sort straight out of the glob 2022-11-04 20:53:11 +07:00
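"Sort straight out of the glob" means wrapping the `glob` call itself in `sorted()` rather than sorting the resulting list in a later step, so the hypernetwork list is deterministic at the point it is built. A sketch under assumed names (directory layout, extension, and case-insensitive ordering are all illustrative):

```python
import glob
import os

def list_hypernetworks(dirname):
    """Return .pt files in deterministic name order, sorted directly
    at the glob site; paths and sort key are assumptions."""
    return sorted(glob.glob(os.path.join(dirname, "*.pt")),
                  key=str.lower)
```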
Muhammad Rizqi Nur
3277f90e93 Merge branch 'master' into gradient-clipping 2022-11-04 18:47:28 +07:00
Isaac Poulton
fd62727893 Sort hypernetworks 2022-11-04 18:34:35 +07:00
Fampai
39541d7725 Fixes race condition in training when VAE is unloaded
set_current_image can attempt to use the VAE when it is unloaded to the CPU while training
2022-11-04 04:50:22 -04:00
aria1th
1ca0bcd3a7 only save if option is enabled 2022-11-04 16:09:19 +09:00
aria1th
f5d394214d split before declaring file name 2022-11-04 16:04:03 +09:00
aria1th
283249d239 apply 2022-11-04 15:57:17 +09:00
AngelBottomless
179702adc4 Merge branch 'AUTOMATIC1111:master' into force-push-patch-13 2022-11-04 15:51:09 +09:00
AngelBottomless
0d07cbfa15 I blame code autocomplete 2022-11-04 15:50:54 +09:00
aria1th
0abb39f461 resolve conflict - first revert 2022-11-04 15:47:19 +09:00
AUTOMATIC1111
4918eb6ce4 Merge branch 'master' into hn-activation 2022-11-04 09:02:15 +03:00
aria1th
1764ac3c8b use hash to check valid optim 2022-11-03 14:49:26 +09:00
aria1th
0b143c1163 Separate .optim file from model 2022-11-03 14:30:53 +09:00