AUTOMATIC1111
e00b4df7c6
Merge pull request #1752 from Greendayle/dev/deepdanbooru
...
Added DeepDanbooru interrogator
2022-10-09 10:52:21 +03:00
aoirusann
14192c5b20
Support Download for txt files.
2022-10-09 10:49:11 +03:00
aoirusann
5ab7e88d9b
Add Download & Download as zip
2022-10-09 10:49:11 +03:00
AUTOMATIC
4e569fd888
fixed incorrect message about loading config; thanks anon!
2022-10-09 10:31:47 +03:00
AUTOMATIC
c77c89cc83
make main model loading and model merger use the same code
2022-10-09 10:23:31 +03:00
DepFA
cd8673bd9b
add embed embedding to ui
2022-10-09 05:40:57 +01:00
DepFA
5841990b0d
Update textual_inversion.py
2022-10-09 05:38:38 +01:00
AUTOMATIC
050a6a798c
support loading .yaml config with same name as model
...
support EMA weights in processing (????)
2022-10-08 23:26:48 +03:00
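The first part of this commit, loading a .yaml config that sits next to the checkpoint under the same base name, amounts to a lookup like the sketch below. The function name and candidate handling are illustrative assumptions, not the project's actual model-loading code.

```python
# Sketch of the "config with the same name as the model" lookup described above
# (illustrative; the real lookup lives in the model-loading code).
import os

def find_checkpoint_config(checkpoint_path: str, default_config: str) -> str:
    candidate = os.path.splitext(checkpoint_path)[0] + ".yaml"
    if os.path.isfile(candidate):
        return candidate      # e.g. model.yaml sitting next to model.ckpt
    return default_config     # otherwise fall back to the stock inference config
```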
Aidan Holland
432782163a
chore: Fix typos
2022-10-08 22:42:30 +03:00
Edouard Leurent
610a7f4e14
Break after finding the local directory of stable diffusion
...
Otherwise, we may override it with one of the next two paths (. or ..) if it is present there, and then the local paths of other modules (taming transformers, codeformers, etc.) won't be found in sd_path/../.
Fix https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1085
2022-10-08 22:35:04 +03:00
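The commit body above describes a search over candidate locations that must stop at the first match. A minimal sketch of that search-and-break pattern, assuming an illustrative candidate list and marker file rather than the actual contents of modules/paths.py:

```python
# Minimal sketch of the search-and-break fix described above; the candidate list
# and the marker used to recognize the repo are assumptions for illustration.
import os

possible_sd_paths = ["repositories/stable-diffusion", ".", ".."]  # assumed search order
sd_path = None

for candidate in possible_sd_paths:
    # Identify the checkout by something only the stable-diffusion repo contains.
    if os.path.exists(os.path.join(candidate, "ldm")):
        sd_path = os.path.abspath(candidate)
        break  # stop at the first match so '.' or '..' cannot override it

# Sibling repositories (taming transformers, codeformers, ...) are then resolved
# relative to sd_path/../, which only works if sd_path points at the right checkout.
```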
AUTOMATIC
3b2141c5fb
add 'Ignore last layers of CLIP model' option as a parameter to the infotext
2022-10-08 22:21:15 +03:00
AUTOMATIC
e6e42f98df
make --force-enable-xformers work without needing --xformers
2022-10-08 22:12:23 +03:00
Fampai
1371d7608b
Added ability to ignore last n layers in FrozenCLIPEmbedder
2022-10-08 22:10:37 +03:00
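"Ignoring the last n layers" (the option the two commits above expose and record in the infotext) means taking the text-encoder output from an earlier hidden layer instead of the final one. A hedged sketch using Hugging Face transformers for illustration; the webui patches ldm's FrozenCLIPEmbedder directly, so this is not the project's actual code path:

```python
# Sketch of taking CLIP text features n layers before the last one ("CLIP skip").
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

def encode(prompt: str, skip_last_n: int = 0) -> torch.Tensor:
    tokens = tokenizer(prompt, return_tensors="pt", padding="max_length",
                       max_length=77, truncation=True)
    with torch.no_grad():
        out = text_model(**tokens, output_hidden_states=True)
    if skip_last_n > 0:
        # Step back n layers from the end and re-apply the final layer norm,
        # which the full forward pass would otherwise have applied.
        hidden = out.hidden_states[-(skip_last_n + 1)]
        return text_model.text_model.final_layer_norm(hidden)
    return out.last_hidden_state
```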
DepFA
b458fa48fe
Update ui.py
2022-10-08 20:38:35 +03:00
DepFA
15c4278f1a
TI preprocess wording
...
I had to check the code to work out what splitting was 🤷🏿
2022-10-08 20:38:35 +03:00
Greendayle
0ec80f0125
Merge branch 'master' into dev/deepdanbooru
2022-10-08 18:28:22 +02:00
AUTOMATIC
3061cdb7b6
add --force-enable-xformers option and also add messages to console regarding cross attention optimizations
2022-10-08 19:22:15 +03:00
AUTOMATIC
f9c5da1592
add fallback for xformers_attnblock_forward
2022-10-08 19:05:19 +03:00
Greendayle
01f8cb4447
made deepdanbooru optional, added to readme, automatic download of deepbooru model
2022-10-08 18:02:56 +02:00
Artem Zagidulin
a5550f0213
alternate prompt
2022-10-08 18:12:19 +03:00
C43H66N12O12S2
cc0258aea7
check for ampere without destroying the optimizations. again.
2022-10-08 17:54:16 +03:00
C43H66N12O12S2
017b6b8744
check for ampere
2022-10-08 17:54:16 +03:00
Greendayle
5329d0aba0
Merge branch 'master' into dev/deepdanbooru
2022-10-08 16:30:28 +02:00
AUTOMATIC
cfc33f99d4
why did you do this
2022-10-08 17:29:06 +03:00
Greendayle
2e8ba0fa47
fix conflicts
2022-10-08 16:27:48 +02:00
Milly
4f33289d0f
Fixed typo
2022-10-08 17:15:30 +03:00
AUTOMATIC
27032c47df
restore old opt_split_attention/disable_opt_split_attention logic
2022-10-08 17:10:05 +03:00
AUTOMATIC
dc1117233e
simplify xformers options: --xformers to enable and that's it
2022-10-08 17:02:18 +03:00
AUTOMATIC
7ff1170a2e
emergency fix for xformers (continue + shared)
2022-10-08 16:33:39 +03:00
AUTOMATIC1111
48feae37ff
Merge pull request #1851 from C43H66N12O12S2/flash
...
xformers attention
2022-10-08 16:29:59 +03:00
C43H66N12O12S2
970de9ee68
Update sd_hijack.py
2022-10-08 16:29:43 +03:00
C43H66N12O12S2
69d0053583
update sd_hijack_opt to respect new env variables
2022-10-08 16:21:40 +03:00
C43H66N12O12S2
ddfa9a9786
add xformers_available shared variable
2022-10-08 16:20:41 +03:00
C43H66N12O12S2
26b459a379
default to split attention if cuda is available and xformers is not
2022-10-08 16:20:04 +03:00
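Taken together, the xformers-related commits above (--xformers to opt in, --force-enable-xformers to override the availability check, split attention as the CUDA fallback) imply roughly the following selection logic. This is a hedged sketch with flag names taken from the commit messages, not the actual code in sd_hijack.py:

```python
# Rough sketch of cross-attention optimization selection implied by the commits above.
from types import SimpleNamespace

def choose_optimization(cmd_opts, xformers_works: bool, cuda_available: bool) -> str:
    if cmd_opts.force_enable_xformers:
        return "xformers"         # forced on, even without --xformers
    if cmd_opts.xformers and xformers_works:
        return "xformers"         # opted in and the library imported successfully
    if cuda_available and not cmd_opts.disable_opt_split_attention:
        return "split attention"  # default on CUDA when xformers is unavailable
    return "none"

# Example: plain --xformers on a CUDA machine where the import succeeded.
opts = SimpleNamespace(force_enable_xformers=False, xformers=True,
                       disable_opt_split_attention=False)
print(choose_optimization(opts, xformers_works=True, cuda_available=True))  # -> xformers
```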
MrCheeze
5f85a74b00
fix bug where, when using prompt composition, hijack_comments generated before the final AND are dropped
2022-10-08 15:48:04 +03:00
ddPn08
772db721a5
fix glob path in hypernetwork.py
2022-10-08 15:46:54 +03:00
AUTOMATIC
7001bffe02
fix AND broken for long prompts
2022-10-08 15:43:25 +03:00
AUTOMATIC
77f4237d1c
fix bugs related to variable prompt lengths
2022-10-08 15:25:59 +03:00
AUTOMATIC
4999eb2ef9
do not let user choose his own prompt token count limit
2022-10-08 14:25:47 +03:00
Trung Ngo
00117a07ef
check specifically for skipped
2022-10-08 13:40:39 +03:00
Trung Ngo
786d9f63aa
Add button to skip the current iteration
2022-10-08 13:40:39 +03:00
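The skip button and the "check specifically for skipped" follow-up suggest a shared state flag that the generation loop polls and resets so that only the current iteration is dropped. A minimal sketch of that pattern; the class, field, and function names here are assumptions, not the real shared.state API:

```python
# Illustrative sketch of a skip flag polled during generation.
class JobState:
    def __init__(self):
        self.skipped = False      # set by the new "Skip" button
        self.interrupted = False  # set by the existing "Interrupt" button

state = JobState()

def sample(steps: int) -> bool:
    """Stand-in denoising loop; returns False if sampling stopped early."""
    for _ in range(steps):
        if state.interrupted or state.skipped:
            return False          # stop sampling this image early
    return True

def process_images(n_images: int, steps: int = 20):
    results = []
    for i in range(n_images):
        if state.interrupted:
            break                 # interrupt aborts the whole job
        finished = sample(steps)
        if state.skipped:
            state.skipped = False # check specifically for skipped: reset the flag
            continue              # and drop only the current iteration's result
        if finished:
            results.append(f"image {i}")
    return results
```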
AUTOMATIC
45cc0ce3c4
Merge remote-tracking branch 'origin/master'
2022-10-08 13:39:08 +03:00
AUTOMATIC
706d5944a0
let user choose his own prompt token count limit
2022-10-08 13:38:57 +03:00
leko
616b7218f7
fix: handle the case when state_dict does not exist
2022-10-08 12:38:50 +03:00
C43H66N12O12S2
91d66f5520
use new attnblock for xformers path
2022-10-08 11:56:01 +03:00
C43H66N12O12S2
76a616fa6b
Update sd_hijack_optimizations.py
2022-10-08 11:55:38 +03:00
C43H66N12O12S2
5d54f35c58
add xformers attnblock and hypernetwork support
2022-10-08 11:55:02 +03:00
brkirch
f2055cb1d4
Add hypernetwork support to split cross attention v1
...
* Add hypernetwork support to split_cross_attention_forward_v1
* Fix device check in esrgan_model.py to use devices.device_esrgan instead of shared.device
2022-10-08 09:39:17 +03:00
C43H66N12O12S2
b70eaeb200
delete broken and unnecessary aliases
2022-10-08 04:10:35 +03:00
C43H66N12O12S2
c9cc65b201
switch to the proper way of calling xformers
2022-10-08 04:09:18 +03:00
Greendayle
5f12e7efd9
linux test
2022-10-07 20:58:30 +02:00
Greendayle
fa2ea648db
even more powerful fix
2022-10-07 20:46:38 +02:00
Greendayle
54fa613c83
loading tf only in interrogation process
2022-10-07 20:37:43 +02:00
Greendayle
537da7a304
Merge branch 'master' into dev/deepdanbooru
2022-10-07 18:31:49 +02:00
AUTOMATIC
f7c787eb7c
make it possible to use hypernetworks without opt split attention
2022-10-07 16:39:51 +03:00
AUTOMATIC
97bc0b9504
do not stop working on failed hypernetwork load
2022-10-07 13:22:50 +03:00
AUTOMATIC
d15b3ec001
support loading VAE
2022-10-07 10:40:22 +03:00
AUTOMATIC
bad7cb29ce
added support for hypernetworks (???)
2022-10-07 10:17:52 +03:00
C43H66N12O12S2
5e3ff846c5
Update sd_hijack.py
2022-10-07 06:38:01 +03:00
C43H66N12O12S2
5303df2428
Update sd_hijack.py
2022-10-07 06:01:14 +03:00
C43H66N12O12S2
35d6b23162
Update sd_hijack.py
2022-10-07 05:31:53 +03:00
C43H66N12O12S2
da4ab2707b
Update shared.py
2022-10-07 05:23:06 +03:00
C43H66N12O12S2
2eb911b056
Update sd_hijack.py
2022-10-07 05:22:28 +03:00
C43H66N12O12S2
f174fb2922
add xformers attention
2022-10-07 05:21:49 +03:00
AUTOMATIC
b34b25b4c9
karras samplers for img2img?
2022-10-06 23:27:01 +03:00
Milly
405c8171d1
Prefer using the Processed.sd_model_hash attribute in filename patterns
2022-10-06 20:41:23 +03:00
Milly
1cc36d170a
Added job_timestamp to Processed
...
So the `[job_timestamp]` pattern can be used in the image-saving UI.
2022-10-06 20:41:23 +03:00
Milly
070b7d60cf
Added styles to Processed
...
So the `[styles]` pattern can be used in the image-saving UI.
2022-10-06 20:41:23 +03:00
Milly
cf7c784fcc
Removed duplicate defined models_path
...
Use `modules.paths.models_path` instead of `modules.shared.model_path`.
2022-10-06 20:29:12 +03:00
AUTOMATIC
dbc8a4d351
add generation parameters to images shown in web ui
2022-10-06 20:27:50 +03:00
Milly
0bb458f0ca
Removed duplicate image-saving code
...
Use `modules.images.save_image()` instead.
2022-10-06 20:15:39 +03:00
Jairo Correa
b66aa334a9
Merge branch 'master' into fix-vram
2022-10-06 13:41:37 -03:00
DepFA
fec71e4de2
Enable window title progress updates by default
2022-10-06 17:58:52 +03:00
DepFA
be71115b1a
Update shared.py
2022-10-06 17:58:52 +03:00
AUTOMATIC
5993df24a1
integrate the new samplers PR
2022-10-06 14:12:52 +03:00
C43H66N12O12S2
3ddf80a9db
add variant setting
2022-10-06 13:42:21 +03:00
C43H66N12O12S2
71901b3d3b
add karras scheduling variants
2022-10-06 13:42:21 +03:00
AUTOMATIC
2d3ea42a2d
workaround for a mysterious bug where prompt weights can't be matched
2022-10-06 13:21:12 +03:00
AUTOMATIC
5f24b7bcf4
option to let users select which samplers they want to hide
2022-10-06 12:08:59 +03:00
Raphael Stoeckli
4288e53fc2
removed unused import, fixed typo
2022-10-06 08:52:29 +03:00
Raphael Stoeckli
2499fb4e19
Add sanitizer for captions in Textual inversion
2022-10-06 08:52:29 +03:00
AUTOMATIC1111
0e92c36707
Merge pull request #1755 from AUTOMATIC1111/use-typing-list
...
use typing.List in prompt_parser.py for wider python version support
2022-10-06 08:50:06 +03:00
DepFA
55400c981b
Set gradio-img2img-tool default to 'editor'
2022-10-06 08:46:32 +03:00
DepFA
af02ee1297
Merge branch 'master' into use-typing-list
2022-10-05 23:02:45 +01:00
DepFA
34c358d10d
use typing.List in prompt_parser.py for wider python version support
2022-10-05 22:11:30 +01:00
AUTOMATIC
20f8ec877a
remove type annotations in new code because presumably they don't work in 3.7
2022-10-06 00:09:32 +03:00
AUTOMATIC
f8e41a96bb
fix various float parsing errors
2022-10-05 23:52:05 +03:00
Greendayle
4320f386d9
removing underscores and colons
2022-10-05 22:39:32 +02:00
AUTOMATIC
c26732fbee
added support for AND from https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/
2022-10-05 23:16:27 +03:00
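The AND syntax comes from the Composable Diffusion work linked above, which combines several conditionings by summing their guidance directions with per-prompt weights. The general form from that line of work (not a transcription of the webui code) is:

```latex
% Combined noise prediction for "c_1 AND c_2 AND ... AND c_n" with weights w_i,
% following the composable-diffusion formulation.
\[
\hat{\epsilon}_\theta(x_t, t \mid c_1, \dots, c_n)
  = \epsilon_\theta(x_t, t)
  + \sum_{i=1}^{n} w_i \,\bigl(\epsilon_\theta(x_t, t \mid c_i) - \epsilon_\theta(x_t, t)\bigr)
\]
```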
Greendayle
17a99baf0c
better model search
2022-10-05 22:07:28 +02:00
Greendayle
1506fab29a
removing problematic tag
2022-10-05 21:15:08 +02:00
Greendayle
59a2b9e5af
deepdanbooru interrogator
2022-10-05 20:55:26 +02:00
DepFA
bbdbbd36ed
shared.state.interrupt when restart is requested
2022-10-05 11:37:18 +03:00
Jairo Correa
82380d9ac1
Removing parts no longer needed to fix vram
2022-10-04 22:31:40 -03:00
Jairo Correa
1f50971fb8
Merge branch 'master' into fix-vram
2022-10-04 19:53:52 -03:00
xpscyho
ef40e4cd4d
Display time taken in mins, secs when relevant
...
Fixes #1656
2022-10-04 23:41:42 +03:00
AUTOMATIC
b32852ef03
add editor to img2img
2022-10-04 20:49:54 +03:00
Rae Fu
90e911fd54
prompt_parser: allow spaces in schedules, add test, log/ignore errors
...
Only build the parser once (at import time) instead of for each step.
doctest is run by simply executing modules/prompt_parser.py
2022-10-04 20:26:15 +03:00
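The expanded message above notes that the parser is now constructed once at import time rather than rebuilt for every sampling step, and that parse errors are logged and ignored instead of aborting generation. A minimal sketch of that pattern, assuming a lark-based parser with a toy grammar (the real prompt_parser.py grammar is far larger):

```python
# Sketch of the "build the parser once at import time" pattern described above.
import lark

# Module-level: constructed a single time when the module is imported.
schedule_parser = lark.Lark(r"""
    start: WORD+
    %import common.WORD
    %import common.WS
    %ignore WS
""")

def parse_prompt(prompt: str):
    # Reuses the module-level parser instead of rebuilding it per step,
    # and logs/ignores parse errors rather than raising.
    try:
        return schedule_parser.parse(prompt)
    except lark.exceptions.LarkError as err:
        print(f"prompt parse error, using prompt as-is: {err}")
        return None
```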
AUTOMATIC
1eb588cbf1
remove functools.cache as some people are having issues with it
2022-10-04 18:02:01 +03:00
AUTOMATIC
e1b128d8e4
do not touch p.seed/p.subseed during processing #1181
2022-10-04 17:36:39 +03:00