Author | Commit | Message | Date
d8ahazard | 740070ea9c | Re-implement universal model loading | 2022-09-26 09:29:50 -05:00
AUTOMATIC | d4205e66fa | gfpgan: just download the damn model | 2022-09-23 10:26:00 +03:00
AUTOMATIC | 843b2b64fc | Instance of CUDA out of memory on a low-res batch, even with --opt-split-attention-v1 (found cause) #255 | 2022-09-12 18:40:06 +03:00
AUTOMATIC | 6a9b33c848 | codeformer support | 2022-09-07 12:32:28 +03:00
AUTOMATIC | 595c827bd3 | option to unload GFPGAN after using | 2022-09-03 17:28:30 +03:00
AUTOMATIC | 345028099d | split codebase into multiple files; to anyone this affects negatively: sorry | 2022-09-03 12:08:45 +03:00