AUTOMATIC
8f1efdc130
--no-half-vae pt2
2022-10-10 17:03:45 +03:00
AUTOMATIC
7349088d32
--no-half-vae
2022-10-10 16:16:29 +03:00
brkirch
8acc901ba3
Newer versions of PyTorch use TypedStorage instead
...
PyTorch 1.13 and later will rename _TypedStorage to TypedStorage, so check for TypedStorage and use _TypedStorage if it is not available. Currently this is needed so that nightly builds of PyTorch work correctly.
2022-10-10 08:04:52 +03:00
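A compatibility shim for the rename described in the commit above can be sketched as follows (a minimal illustration, not the commit's exact code):

```python
import torch

# PyTorch 1.13 renames _TypedStorage to TypedStorage; prefer the new
# public name and fall back to the old private one on earlier builds.
try:
    TypedStorage = torch.storage.TypedStorage
except AttributeError:
    TypedStorage = torch.storage._TypedStorage
```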
ssysm
6fdad291bd
Merge branch 'master' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into upstream-master
2022-10-09 23:20:39 -04:00
ssysm
cc92dc1f8d
add VAE path args
2022-10-09 23:17:29 -04:00
DepFA
4117afff11
Merge branch 'master' into embed-embeddings-in-images
2022-10-10 00:38:54 +01:00
DepFA
e2c2925eb4
remove braces from steps
2022-10-10 00:12:53 +01:00
DepFA
d6a599ef9b
change caption method
2022-10-10 00:07:52 +01:00
DepFA
0ac3a07eec
add caption image with overlay
2022-10-10 00:05:36 +01:00
DepFA
01fd9cf0d2
change source of step count
2022-10-09 22:17:02 +01:00
DepFA
96f1e6be59
source checkpoint hash from current checkpoint
2022-10-09 22:14:50 +01:00
DepFA
6684610510
correct case on embeddingFromB64
2022-10-09 22:06:42 +01:00
DepFA
d0184b8f76
change JSON tensor key name
2022-10-09 22:06:12 +01:00
DepFA
5d12ec82d3
add encoder and decoder classes
2022-10-09 22:05:09 +01:00
DepFA
969bd8256e
add alternate checkpoint hash source
2022-10-09 22:02:28 +01:00
DepFA
03694e1f99
add embedding load and save from b64 JSON
2022-10-09 21:58:14 +01:00
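The encoder/decoder pair referenced in the commits above can be sketched roughly like this (a minimal sketch; the sentinel key and function names are illustrative assumptions, not taken from the commits):

```python
import base64
import json

import torch


class EmbeddingEncoder(json.JSONEncoder):
    # Serialize tensors as nested lists under a sentinel key.
    def default(self, obj):
        if isinstance(obj, torch.Tensor):
            return {"TORCHTENSOR": obj.cpu().detach().tolist()}
        return super().default(obj)


class EmbeddingDecoder(json.JSONDecoder):
    def __init__(self, *args, **kwargs):
        super().__init__(object_hook=self.object_hook, *args, **kwargs)

    # Rebuild tensors from dicts carrying the sentinel key.
    def object_hook(self, d):
        if "TORCHTENSOR" in d:
            return torch.tensor(d["TORCHTENSOR"])
        return d


def embedding_to_b64(data: dict) -> str:
    return base64.b64encode(json.dumps(data, cls=EmbeddingEncoder).encode()).decode()


def embedding_from_b64(data: str) -> dict:
    return json.loads(base64.b64decode(data), cls=EmbeddingDecoder)
```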
AUTOMATIC
a65476718f
add DoubleStorage to list of allowed classes for pickle
2022-10-09 23:38:49 +03:00
DepFA
fa0c5eb81b
Add pretty image captioning functions
2022-10-09 20:41:22 +01:00
AUTOMATIC
8d340cfb88
do not add clip skip to parameters if it's 1 or 0
2022-10-09 22:31:35 +03:00
Fampai
1824e9ee3a
Removed unnecessary tmp variable
2022-10-09 22:31:23 +03:00
Fampai
ad3ae44108
Updated code for legibility
2022-10-09 22:31:23 +03:00
Fampai
ec2bd9be75
Fix issues with CLIP ignore option name change
2022-10-09 22:31:23 +03:00
Fampai
a14f7bf113
Corrected CLIP Layer Ignore description and updated its range to the max possible
2022-10-09 22:31:23 +03:00
Fampai
e59c66c008
Optimized code for ignoring last CLIP layers
2022-10-09 22:31:23 +03:00
AUTOMATIC
6c383d2e82
show model selection setting on top of page
2022-10-09 22:24:07 +03:00
Artem Zagidulin
9ecea0a8d6
fix missing PNG info when using Extras Batch Process
2022-10-09 18:35:25 +03:00
AUTOMATIC
875ddfeecf
added guard for torch.load to prevent loading pickles with unknown content
2022-10-09 17:58:43 +03:00
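The usual shape of such a guard is a restricted unpickler that whitelists only the globals a checkpoint legitimately needs (a sketch with an illustrative, incomplete whitelist; the "add DoubleStorage" commit above extends exactly this kind of list):

```python
import pickle

ALLOWED_GLOBALS = {
    ("collections", "OrderedDict"),
    ("torch._utils", "_rebuild_tensor_v2"),
    ("torch", "FloatStorage"),
    ("torch", "HalfStorage"),
    ("torch", "DoubleStorage"),
}


class RestrictedUnpickler(pickle.Unpickler):
    # Refuse to resolve any global not on the whitelist, so a malicious
    # pickle cannot reach arbitrary callables during load.
    def find_class(self, module, name):
        if (module, name) in ALLOWED_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"global '{module}.{name}' is forbidden")
```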
AUTOMATIC
9d1138e294
fix typo in filename for ESRGAN arch
2022-10-09 15:08:27 +03:00
AUTOMATIC
e6e8cabe0c
change up #2056 to make it work how I want it to, plus make XY plot write correct values to images
2022-10-09 14:57:48 +03:00
William Moorehouse
594cbfd8fb
Sanitize infotext output (for now)
2022-10-09 14:49:15 +03:00
William Moorehouse
006791c13d
Fix grabbing the model name for infotext
2022-10-09 14:49:15 +03:00
William Moorehouse
d6d10a37bf
Added extended model details to infotext
2022-10-09 14:49:15 +03:00
AUTOMATIC
542a3d3a4a
fix broken hypernetworks in XY plot
2022-10-09 14:33:22 +03:00
AUTOMATIC
77a719648d
fix logic error in #1832
2022-10-09 13:48:04 +03:00
AUTOMATIC
f4578b343d
fix model switching not working properly if there is a different yaml config
2022-10-09 13:23:30 +03:00
AUTOMATIC
bd833409ac
additional changes for saving pnginfo for #1803
2022-10-09 13:10:15 +03:00
Milly
0609ce06c0
Removed duplicate definition of model_path
2022-10-09 12:46:07 +03:00
AUTOMATIC
6f6798ddab
prevent a possible code execution error (thanks, RyotaK)
2022-10-09 12:33:37 +03:00
AUTOMATIC
0241d811d2
Revert "Fix for Prompts_from_file showing extra textbox."
...
This reverts commit e2930f9821.
2022-10-09 12:04:44 +03:00
AUTOMATIC
ab4fe4f44c
hide filenames for save button by default
2022-10-09 11:59:41 +03:00
Tony Beeman
cbf6dad02d
Handle case where on_show returns the wrong number of arguments
2022-10-09 11:16:38 +03:00
Tony Beeman
86cb16886f
Pull Request Code Review Fixes
2022-10-09 11:16:38 +03:00
Tony Beeman
e2930f9821
Fix for Prompts_from_file showing extra textbox.
2022-10-09 11:16:38 +03:00
Nicolas Noullet
1ffeb42d38
Fix typo
2022-10-09 11:10:13 +03:00
frostydad
ef93acdc73
remove line break
2022-10-09 11:09:17 +03:00
frostydad
03e570886f
Fix incorrect sampler name in output
2022-10-09 11:09:17 +03:00
Fampai
122d42687b
Fix VRAM issue by only loading in hypernetwork when selected in settings
2022-10-09 11:08:11 +03:00
AUTOMATIC1111
e00b4df7c6
Merge pull request #1752 from Greendayle/dev/deepdanbooru
...
Added DeepDanbooru interrogator
2022-10-09 10:52:21 +03:00
aoirusann
14192c5b20
Support Download for txt files.
2022-10-09 10:49:11 +03:00
aoirusann
5ab7e88d9b
Add Download & Download as zip
2022-10-09 10:49:11 +03:00
AUTOMATIC
4e569fd888
fixed incorrect message about loading config; thanks anon!
2022-10-09 10:31:47 +03:00
AUTOMATIC
c77c89cc83
make main model loading and model merger use the same code
2022-10-09 10:23:31 +03:00
DepFA
cd8673bd9b
add embed embedding to UI
2022-10-09 05:40:57 +01:00
DepFA
5841990b0d
Update textual_inversion.py
2022-10-09 05:38:38 +01:00
AUTOMATIC
050a6a798c
support loading .yaml config with same name as model
...
support EMA weights in processing (????)
2022-10-08 23:26:48 +03:00
Aidan Holland
432782163a
chore: Fix typos
2022-10-08 22:42:30 +03:00
Edouard Leurent
610a7f4e14
Break after finding the local directory of stable diffusion
...
Otherwise, we may override it with one of the next two paths (. or ..) if it is present there, and then the local paths of other modules (taming transformers, codeformers, etc.) won't be found in sd_path/../.
Fix https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1085
2022-10-08 22:35:04 +03:00
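A sketch of the loop being fixed, with path constants assumed for illustration:

```python
import os

script_path = os.path.dirname(os.path.abspath(__file__))

# Candidate locations, searched in order; '.' and '..' come last.
possible_sd_paths = [
    os.path.join(script_path, "repositories/stable-diffusion"),
    ".",
    "..",
]

sd_path = None
for possible_sd_path in possible_sd_paths:
    if os.path.exists(os.path.join(possible_sd_path, "ldm/models/diffusion/ddpm.py")):
        sd_path = os.path.abspath(possible_sd_path)
        break  # the fix: stop at the first match so later candidates cannot override it
```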
AUTOMATIC
3b2141c5fb
add 'Ignore last layers of CLIP model' option as a parameter to the infotext
2022-10-08 22:21:15 +03:00
AUTOMATIC
e6e42f98df
make --force-enable-xformers work without needing --xformers
2022-10-08 22:12:23 +03:00
Fampai
1371d7608b
Added ability to ignore last n layers in FrozenCLIPEmbedder
2022-10-08 22:10:37 +03:00
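The idea behind ignoring the last n layers can be illustrated with Hugging Face transformers (a sketch of the technique, not the webui's actual FrozenCLIPEmbedder hijack):

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")


def encode_with_clip_skip(prompt: str, clip_skip: int = 1) -> torch.Tensor:
    # clip_skip=1 uses the final layer; clip_skip=2 the penultimate one, etc.
    tokens = tokenizer(prompt, truncation=True, return_tensors="pt")
    with torch.no_grad():
        outputs = text_model(**tokens, output_hidden_states=True)
    hidden = outputs.hidden_states[-clip_skip]
    # Re-apply the final layer norm so downstream code sees normalized features.
    return text_model.text_model.final_layer_norm(hidden)
```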
DepFA
b458fa48fe
Update ui.py
2022-10-08 20:38:35 +03:00
DepFA
15c4278f1a
TI preprocess wording
...
I had to check the code to work out what splitting was 🤷🏿
2022-10-08 20:38:35 +03:00
Greendayle
0ec80f0125
Merge branch 'master' into dev/deepdanbooru
2022-10-08 18:28:22 +02:00
AUTOMATIC
3061cdb7b6
add --force-enable-xformers option and also add messages to console regarding cross attention optimizations
2022-10-08 19:22:15 +03:00
AUTOMATIC
f9c5da1592
add fallback for xformers_attnblock_forward
2022-10-08 19:05:19 +03:00
Greendayle
01f8cb4447
made deepdanbooru optional, added to readme, automatic download of deepbooru model
2022-10-08 18:02:56 +02:00
Artem Zagidulin
a5550f0213
alternate prompt
2022-10-08 18:12:19 +03:00
C43H66N12O12S2
cc0258aea7
check for Ampere without destroying the optimizations. again.
2022-10-08 17:54:16 +03:00
C43H66N12O12S2
017b6b8744
check for Ampere
2022-10-08 17:54:16 +03:00
Greendayle
5329d0aba0
Merge branch 'master' into dev/deepdanbooru
2022-10-08 16:30:28 +02:00
AUTOMATIC
cfc33f99d4
why did you do this
2022-10-08 17:29:06 +03:00
Greendayle
2e8ba0fa47
fix conflicts
2022-10-08 16:27:48 +02:00
Milly
4f33289d0f
Fixed typo
2022-10-08 17:15:30 +03:00
AUTOMATIC
27032c47df
restore old opt_split_attention/disable_opt_split_attention logic
2022-10-08 17:10:05 +03:00
AUTOMATIC
dc1117233e
simplify xformers options: --xformers to enable and that's it
2022-10-08 17:02:18 +03:00
AUTOMATIC
7ff1170a2e
emergency fix for xformers (continue + shared)
2022-10-08 16:33:39 +03:00
AUTOMATIC1111
48feae37ff
Merge pull request #1851 from C43H66N12O12S2/flash
...
xformers attention
2022-10-08 16:29:59 +03:00
C43H66N12O12S2
970de9ee68
Update sd_hijack.py
2022-10-08 16:29:43 +03:00
C43H66N12O12S2
69d0053583
update sd_hijack_opt to respect new env variables
2022-10-08 16:21:40 +03:00
C43H66N12O12S2
ddfa9a9786
add xformers_available shared variable
2022-10-08 16:20:41 +03:00
C43H66N12O12S2
26b459a379
default to split attention if CUDA is available and xformers is not
2022-10-08 16:20:04 +03:00
MrCheeze
5f85a74b00
fix bug where, when using prompt composition, hijack_comments generated before the final AND would be dropped
2022-10-08 15:48:04 +03:00
ddPn08
772db721a5
fix glob path in hypernetwork.py
2022-10-08 15:46:54 +03:00
AUTOMATIC
7001bffe02
fix AND broken for long prompts
2022-10-08 15:43:25 +03:00
AUTOMATIC
77f4237d1c
fix bugs related to variable prompt lengths
2022-10-08 15:25:59 +03:00
AUTOMATIC
4999eb2ef9
do not let user choose his own prompt token count limit
2022-10-08 14:25:47 +03:00
Trung Ngo
00117a07ef
check specifically for the skipped state
2022-10-08 13:40:39 +03:00
Trung Ngo
786d9f63aa
Add button to skip the current iteration
2022-10-08 13:40:39 +03:00
AUTOMATIC
45cc0ce3c4
Merge remote-tracking branch 'origin/master'
2022-10-08 13:39:08 +03:00
AUTOMATIC
706d5944a0
let user choose his own prompt token count limit
2022-10-08 13:38:57 +03:00
leko
616b7218f7
fix: handle case when state_dict does not exist
2022-10-08 12:38:50 +03:00
C43H66N12O12S2
91d66f5520
use new attnblock for xformers path
2022-10-08 11:56:01 +03:00
C43H66N12O12S2
76a616fa6b
Update sd_hijack_optimizations.py
2022-10-08 11:55:38 +03:00
C43H66N12O12S2
5d54f35c58
add xformers attnblock and hypernetwork support
2022-10-08 11:55:02 +03:00
brkirch
f2055cb1d4
Add hypernetwork support to split cross attention v1
...
* Add hypernetwork support to split_cross_attention_forward_v1
* Fix device check in esrgan_model.py to use devices.device_esrgan instead of shared.device
2022-10-08 09:39:17 +03:00
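In the webui's hypernetwork scheme, a pair of small networks rewrites the cross-attention context before the k and v projections; a sketch of how that slots into an attention forward (function and variable names are assumptions, not the commit's code):

```python
import torch


def apply_hypernetwork(context: torch.Tensor, hypernetwork_layers):
    # Hypernetworks transform the conditioning for keys and values separately;
    # with no hypernetwork selected, the context passes through unchanged.
    if hypernetwork_layers is None:
        return context, context
    return hypernetwork_layers[0](context), hypernetwork_layers[1](context)


# Inside a split cross-attention forward, the projections would then read:
#   q = self.to_q(x)
#   context_k, context_v = apply_hypernetwork(context, hypernetwork_layers)
#   k = self.to_k(context_k)
#   v = self.to_v(context_v)
```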
C43H66N12O12S2
b70eaeb200
delete broken and unnecessary aliases
2022-10-08 04:10:35 +03:00
C43H66N12O12S2
c9cc65b201
switch to the proper way of calling xformers
2022-10-08 04:09:18 +03:00
AUTOMATIC
12c4d5c6b5
hypernetwork training mk1
2022-10-07 23:22:22 +03:00
Greendayle
5f12e7efd9
Linux test
2022-10-07 20:58:30 +02:00
Greendayle
fa2ea648db
even more powerful fix
2022-10-07 20:46:38 +02:00