Commit graph

3103 commits

Author SHA1 Message Date
AUTOMATIC
10923f9b3a calculate dictionary for sampler names only once 2022-11-27 13:43:10 +03:00
AUTOMATIC
40ca34b837 fix for broken sampler selection in img2img and xy plot #4860 #4909 2022-11-27 13:17:39 +03:00
AUTOMATIC
5b2c316890 eliminate duplicated code from #5095 2022-11-27 13:08:54 +03:00
AUTOMATIC1111
997ac57020
Merge pull request #5095 from mlmcgoogan/master
torch.cuda.empty_cache() defaults to cuda:0 device unless explicitly …
2022-11-27 12:56:02 +03:00
AUTOMATIC1111
d860b56c21
Merge pull request #4961 from uservar/DPM++SDE
Add DPM++ SDE sampler
2022-11-27 12:55:03 +03:00
AUTOMATIC1111
6df4945718
Merge branch 'master' into DPM++SDE 2022-11-27 12:54:45 +03:00
AUTOMATIC
b48b7999c8 Merge remote-tracking branch 'flamelaw/master' 2022-11-27 12:19:59 +03:00
AUTOMATIC
b006382784 serve images from where they are saved instead of a temporary directory
add an option to choose a different temporary directory in the UI
add an option to clean up the selected temporary directory at startup
2022-11-27 11:52:53 +03:00
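A minimal sketch of the startup-cleanup behavior described in the commit above; the function name and wiring are illustrative, not the webui's actual code:

```python
import shutil
from pathlib import Path

def cleanup_tmpdir(tmpdir: str) -> None:
    """Remove and recreate the configured temporary directory at startup."""
    path = Path(tmpdir)
    if path.is_dir():
        shutil.rmtree(path, ignore_errors=True)  # drop any leftover files
    path.mkdir(parents=True, exist_ok=True)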
Billy Cao
349f0461ec
Merge branch 'master' into support_any_resolution 2022-11-27 12:39:31 +08:00
Matthew McGoogan
c67c40f983 torch.cuda.empty_cache() defaults to cuda:0 device unless explicitly set otherwise first. Updating torch_gc() to use the device set by --device-id if specified to avoid OOM edge cases on multi-GPU systems. 2022-11-26 23:25:16 +00:00
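A minimal sketch of the idea behind this fix, assuming the device index comes from the --device-id argument (torch_gc and its parameter name here are illustrative):

```python
import torch

def torch_gc(device_index: int = 0) -> None:
    # Scope the cache clear to the configured GPU instead of the implicit
    # cuda:0 default, so multi-GPU setups free memory on the right device.
    if torch.cuda.is_available():
        with torch.cuda.device(device_index):
            torch.cuda.empty_cache()
            torch.cuda.ipc_collect()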
MrCheeze
1e506657e1 no-half support for SD 2.0 2022-11-26 13:28:44 -05:00
AUTOMATIC
b5050ad207 make SD2 compatible with --medvram setting 2022-11-26 20:52:16 +03:00
flamelaw
755df94b2a set TI AdamW default weight decay to 0 2022-11-27 00:35:44 +09:00
AUTOMATIC
64c7b7975c restore hypernetworks to seemingly working state 2022-11-26 16:45:57 +03:00
AUTOMATIC
1123f52cad add 1024 module for hypernets for the new open clip 2022-11-26 16:37:37 +03:00
AUTOMATIC
ce6911158b Add support for Stable Diffusion 2.0 2022-11-26 16:10:46 +03:00
Jay Smith
c833d5bfaa fixes #3449 - VRAM leak when switching to/from inpainting model 2022-11-25 20:15:11 -06:00
xucj98
263b323de1
Merge branch 'AUTOMATIC1111:master' into draft 2022-11-25 17:07:00 +08:00
Tiago F. Santos
a2ae5a6555 [interrogator] mkdir check 2022-11-24 13:04:45 +00:00
Sena
fcd75bd874
Fix other APIs 2022-11-24 13:10:40 +08:00
Nandaka
904121fecc Support NAI exif for PNG Info 2022-11-24 02:39:09 +00:00
Alex "mcmonkey" Goodwin
ffcbbcf385 add filename sanitization
Probably redundant, considering the model name *is* a filename, but I suppose better safe than sorry.
2022-11-23 06:44:20 -08:00
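A sketch of the kind of sanitization meant here; the regex and function name are illustrative, not the exact patch:

```python
import re

def sanitize_filename_part(text: str) -> str:
    # Strip characters that are invalid or risky in filenames on common
    # filesystems before using the text in a saved-image filename pattern.
    return re.sub(r'[\\/:*?"<>|]', "", text).strip()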
Alex "mcmonkey" Goodwin
6001684be3 add model_name pattern for saving 2022-11-23 06:35:44 -08:00
flamelaw
1bd57cc979 last_layer_dropout default to False 2022-11-23 20:21:52 +09:00
flamelaw
d2c97fc3fe fix dropout, implement train/eval mode 2022-11-23 20:00:00 +09:00
Billy Cao
adb6cb7619 Patch UNet Forward to support resolutions that are not multiples of 64
Also modified the UI so it no longer steps in increments of 64
2022-11-23 18:11:24 +08:00
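A simplified illustration of the padding idea for arbitrary resolutions, assuming an NCHW tensor (this is not the actual patch, which adjusts the UNet's forward pass directly):

```python
import torch
import torch.nn.functional as F

def pad_to_multiple(x: torch.Tensor, multiple: int = 64):
    # Pad height/width up to the next multiple; the caller crops the output
    # back to (h, w) after the forward pass.
    h, w = x.shape[-2:]
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    return F.pad(x, (0, pad_w, 0, pad_h)), (h, w)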
Sena
75b67eebf2
Fix bare base64 not being accepted 2022-11-23 17:43:58 +08:00
flamelaw
89d8ecff09 small fixes 2022-11-23 02:49:01 +09:00
Tim Patton
ac90cf38c6 safetensors optional for now 2022-11-22 10:13:07 -05:00
uservar
45fd785436
Update launch.py 2022-11-22 14:52:16 +00:00
uservar
47ce73fbbf
Update requirements_versions.txt 2022-11-22 14:26:09 +00:00
uservar
3c3c46be5f
Update requirements.txt 2022-11-22 14:25:39 +00:00
uservar
0a01f50891
Add DPM++ SDE sampler 2022-11-22 14:24:50 +00:00
uservar
6ecf72b6f7
Update k-diffusion to Release 0.0.11 2022-11-22 14:24:10 +00:00
Rogerooo
c27a973c82 fix null negative_prompt on get requests
Small typo that caused a bug when returning negative prompts from the GET request.
2022-11-22 14:02:59 +00:00
Tiago F. Santos
745f1e8f80 [CLIP interrogator] use local file, if available 2022-11-22 12:48:25 +00:00
Tim Patton
210cb4c128 Use GPU for loading safetensors, disable export 2022-11-21 16:40:18 -05:00
Tim Patton
e134b74ce9 Ignore safetensor files 2022-11-21 10:58:57 -05:00
Tim Patton
162fef394f Patch UI line endings 2022-11-21 10:50:57 -05:00
Nicolas Patry
0efffbb407 Supporting *.safetensors format.
If a model file exists with extension `.safetensors` then we can load it
more safely than with PyTorch weights.
2022-11-21 14:04:25 +01:00
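A minimal sketch of extension-based loading (read_state_dict is an illustrative name; load_file is from the safetensors package):

```python
import torch
from safetensors.torch import load_file

def read_state_dict(checkpoint_path: str, device: str = "cpu") -> dict:
    # Prefer the safetensors loader when the extension indicates that format;
    # it avoids unpickling arbitrary code, unlike torch.load on .ckpt files.
    if checkpoint_path.endswith(".safetensors"):
        return load_file(checkpoint_path, device=device)
    return torch.load(checkpoint_path, map_location=device)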
brkirch
563ea3f6ff Change .cuda() to .to(devices.device) 2022-11-21 02:56:00 -05:00
brkirch
e247b7400a Add fixes for PyTorch 1.12.1
Fix typo "MasOS" -> "macOS"

If MPS is available and PyTorch is an earlier version than 1.13:
* Monkey patch torch.Tensor.to to ensure all tensors sent to MPS are contiguous
* Monkey patch torch.nn.functional.layer_norm to ensure input tensor is contiguous (required for this program to work with MPS on unmodified PyTorch 1.12.1)
2022-11-21 02:07:19 -05:00
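A rough sketch of the monkey-patch approach for the torch.Tensor.to case, simplified from the description above (the real fix also patches layer_norm and only applies on PyTorch older than 1.13):

```python
import torch

if hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
    _orig_to = torch.Tensor.to

    def _contiguous_to(self, *args, **kwargs):
        # Make the source tensor contiguous whenever the target is the MPS
        # device, working around non-contiguous-tensor bugs in old PyTorch.
        targets_mps = any(
            (isinstance(a, torch.device) and a.type == "mps")
            or (isinstance(a, str) and a == "mps")
            for a in list(args) + list(kwargs.values())
        )
        source = self.contiguous() if targets_mps else self
        return _orig_to(source, *args, **kwargs)

    torch.Tensor.to = _contiguous_to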
dtlnor
9ae30b3450 remove cmd args requirement for deepdanbooru 2022-11-21 12:53:55 +09:00
flamelaw
5b57f61ba4 fix pin_memory with different latent sampling method 2022-11-21 10:15:46 +09:00
Liam
927d24ef82 made selected_gallery_index query selectors more restrictive 2022-11-20 13:52:18 -05:00
Tim Patton
637815632f Generalize SD torch load/save to implement safetensor merging compat 2022-11-20 13:36:05 -05:00
Jonas Böer
471189743a
Move progress info to beginning of title
because who has so few tabs open that they can see the end of a tab name?
2022-11-20 15:57:43 +01:00
AUTOMATIC1111
828438b4a1
Merge pull request #4120 from aliencaocao/enable-override-hypernet
Enable override_settings to take effect for hypernetworks
2022-11-20 16:49:06 +03:00
AUTOMATIC
c81d440d87 moved deepdanbooru to pure pytorch implementation 2022-11-20 16:39:20 +03:00
flamelaw
2d22d72cda fix random sampling with pin_memory 2022-11-20 16:14:27 +09:00