Compare commits

...

919 Commits

Author SHA1 Message Date
AUTOMATIC1111 feee37d75f Merge branch 'dev' 2024-05-28 21:20:40 +03:00
AUTOMATIC1111 801b72b92b update changelog 2024-05-28 21:20:23 +03:00
AUTOMATIC1111 759f396a2e Merge pull request #15882 from AUTOMATIC1111/setuptools==69.5.1
Fix method 1: pin setuptools==69.5.1
2024-05-28 21:16:28 +03:00
w-e-w a63946233b setuptools==69.5.1 2024-05-25 14:19:19 +09:00
AUTOMATIC1111 ddb28b33a3 Merge branch 'master' into dev 2024-04-22 18:01:16 +03:00
AUTOMATIC1111 1c0a0c4c26 Merge branch 'dev' 2024-04-22 18:00:36 +03:00
AUTOMATIC1111 7dfe959f4b update changelog 2024-04-22 18:00:23 +03:00
AUTOMATIC1111 8f64dad282 Merge pull request #15594 from AUTOMATIC1111/fix-get_crop_region_v2
fix get_crop_region_v2
2024-04-22 17:57:39 +03:00
w-e-w 821adc3041 fix get_crop_region_v2
Co-Authored-By: Dowon <ks2515@naver.com>
2024-04-22 23:10:19 +09:00
AUTOMATIC1111 e2b177c508 Merge branch 'dev' 2024-04-22 12:26:24 +03:00
AUTOMATIC1111 e837124f4b changelog 2024-04-22 12:26:05 +03:00
AUTOMATIC1111 3fdc3cfbbf Merge pull request #15591 from AUTOMATIC1111/restore-1.8.0-style-naming-of-scripts
Restore 1.8.0 style naming of scripts
2024-04-22 12:24:06 +03:00
w-e-w e9809de651 restore 1.8.0-style naming of scripts 2024-04-22 18:23:58 +09:00
AUTOMATIC1111 61f6479ea9 restore 1.8.0-style naming of scripts 2024-04-22 12:19:30 +03:00
AUTOMATIC1111 e84703b253 update changelog 2024-04-22 11:59:54 +03:00
AUTOMATIC1111 e4aa0c362e Merge pull request #15587 from AUTOMATIC1111/fix-mistake-in-#15583
fix mistake in #15583
2024-04-22 11:50:34 +03:00
AUTOMATIC1111 a183ea4ba7 undo adding scripts to sys.modules 2024-04-22 11:49:55 +03:00
w-e-w 6c7c176dc9 fix mistake in #15583 2024-04-22 00:10:49 +09:00
AUTOMATIC1111 e6a8d0b4e6 Merge pull request #15583 from AUTOMATIC1111/get_crop_region_v2
get_crop_region_v2
2024-04-21 18:06:40 +03:00
w-e-w db263df5d5 get_crop_region_v2 2024-04-21 19:34:11 +09:00
AUTOMATIC1111 d1998d747d Merge pull request #15531 from thatfuckingbird/fix-mistyped-function-name
fix typo in function call (eror -> error)
2024-04-21 07:43:19 +03:00
AUTOMATIC1111 c0eaeb15af Merge pull request #15532 from huchenlei/fix_module
Fix cls.__module__ value in extension script
2024-04-21 07:42:57 +03:00
AUTOMATIC1111 9bcfb92a00 rename logging from textual inversion to not confuse it with global logging module 2024-04-21 07:41:28 +03:00
AUTOMATIC1111 d74fc56fa5 Merge pull request #15547 from AUTOMATIC1111/numpy-DeprecationWarning-product---prod
numpy DeprecationWarning product -> prod
2024-04-21 07:23:38 +03:00
AUTOMATIC1111 a44ed231c2 Merge pull request #15555 from light-and-ray/fix_x1_upscalers
fix x1 upscalers
2024-04-21 07:22:58 +03:00
AUTOMATIC1111 daae17851a Merge pull request #15560 from AUTOMATIC1111/api-downscale
Remove API upscaling factor limits
2024-04-21 07:22:30 +03:00
AUTOMATIC1111 ce19a7baef Merge pull request #15544 from cabelo/master
Compatibility with Debian 11, Fedora 34+ and openSUSE 15.4+
2024-04-21 07:22:04 +03:00
AUTOMATIC1111 8d6e72dbfa Merge pull request #15561 from speculativemoonstone/fix-launch-git-directories
Allow webui.sh to be runnable from arbitrary directories containing a .git file
2024-04-21 07:21:21 +03:00
AUTOMATIC1111 6f4f6bff6b add more info to the error message for #15567 2024-04-21 07:18:58 +03:00
AUTOMATIC1111 367b823466 Merge pull request #15567 from AUTOMATIC1111/no-image-data-blocks-debug
Hide 'No Image data blocks found.' message
2024-04-21 07:09:27 +03:00
AUTOMATIC1111 c8ac42aad1 Merge pull request #15533 from travisg/callback-fix
fix: remove_callbacks_for_function should also remove from the ordered map
2024-04-21 07:07:58 +03:00
AUTOMATIC1111 449bc7bcf3 Merge pull request #15534 from storyicon/fix-masking
Fix images do not match / Coordinate 'right' is less than 'left'
2024-04-21 07:06:45 +03:00
AUTOMATIC1111 3810413c00 Merge pull request #15581 from AUTOMATIC1111/FilenameGenerator-sampler-scheduler
FilenameGenerator Sampler Scheduler
2024-04-21 07:00:28 +03:00
AUTOMATIC1111 f8f5d6cea2 Merge pull request #15577 from AUTOMATIC1111/api-get-schedulers
Add schedulers API endpoint
2024-04-21 06:59:56 +03:00
AUTOMATIC1111 cde35bebe9 Merge pull request #15582 from kaanyalova/avif-support
Add avif support
2024-04-21 06:59:38 +03:00
kaanyalova 49fee7c8db Add avif support 2024-04-20 23:26:04 +03:00
w-e-w b5b1487f6a FilenameGenerator Sampler Scheduler 2024-04-21 04:52:03 +09:00
missionfloyd 5cb567c138 Add schedulers API endpoint 2024-04-19 20:32:09 -06:00
missionfloyd d212fb59fe Hide 'No Image data blocks found.' message 2024-04-18 20:56:51 -06:00
storyicon 71314e47b1 feat: compatible with inconsistent/empty mask
Signed-off-by: storyicon <storyicon@foxmail.com>
2024-04-18 11:59:25 +00:00
Speculative Moonstone ba2a737cce Allow webui.sh to be runnable from directories containing a .git file 2024-04-18 04:25:32 +00:00
missionfloyd 909c3dfe83 Remove API upscaling factor limits 2024-04-17 21:20:03 -06:00
Andray 9d4fdc45d3 fix x1 upscalers 2024-04-18 01:53:23 +04:00
w-e-w 63fd38a04f numpy DeprecationWarning product -> prod 2024-04-17 15:54:45 +09:00
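The rename above is the one NumPy's deprecation warning asks for; a minimal sketch of the supported spelling:

```python
import numpy as np

shape = (2, 3, 4)
# np.product was a deprecated alias (removed in NumPy 2.0);
# np.prod is the supported spelling.
print(np.prod(shape))  # 24
```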
Alessandro de Oliveira Faria (A.K.A. CABELO) 50190ca669 Compatibility with Debian 11, Fedora 34+ and openSUSE 15.4+ 2024-04-17 00:01:56 -03:00
storyicon 0980fdfe8c fix: images do not match
Signed-off-by: storyicon <storyicon@foxmail.com>
2024-04-16 07:35:33 +00:00
Travis Geiselbrecht bba306d414 fix: remove callbacks properly in remove_callbacks_for_function()
Like remove_current_script_callback just before, also remove from the
ordered_callbacks_map to keep the callback map and ordered callback map
in sync.
2024-04-15 21:10:11 -07:00
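A minimal sketch (with hypothetical, simplified names) of the invariant this commit restores: a callback registered in both a per-event list and an ordered map must be removed from both, or the two structures drift out of sync:

```python
callback_map = {"on_ui_settings": []}   # event -> list of callbacks
ordered_callbacks_map = {}              # name -> callback, in registration order

def register(event, func):
    callback_map[event].append(func)
    ordered_callbacks_map[func.__name__] = func

def remove_callbacks_for_function(func):
    for callbacks in callback_map.values():
        callbacks[:] = [c for c in callbacks if c is not func]
    # the fix: also purge the ordered map so both structures stay in sync
    for name in [n for n, c in ordered_callbacks_map.items() if c is func]:
        del ordered_callbacks_map[name]

def on_settings():  # example callback
    pass

register("on_ui_settings", on_settings)
remove_callbacks_for_function(on_settings)
assert not callback_map["on_ui_settings"] and not ordered_callbacks_map
```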
huchenlei a95326bec4 nit 2024-04-15 22:34:01 -04:00
huchenlei 0f82948e4f Fix cls.__module__ 2024-04-15 22:14:19 -04:00
thatfuckingbird 8e1c3561be fix typo in function call (eror -> error) 2024-04-15 21:17:24 +02:00
AUTOMATIC1111 ff6f4680c4 Merge branch 'master' into dev 2024-04-13 06:38:58 +03:00
AUTOMATIC1111 adadb4e3c7 Merge branch 'release_candidate' 2024-04-13 06:37:28 +03:00
AUTOMATIC1111 d282d24800 update changelog 2024-04-13 06:37:03 +03:00
AUTOMATIC1111 a196319edf Merge pull request #15492 from w-e-w/update-restricted_opts
update restricted_opts
2024-04-11 19:34:10 +03:00
AUTOMATIC1111 3fadb4fc85 Merge pull request #15492 from w-e-w/update-restricted_opts
update restricted_opts
2024-04-11 19:33:55 +03:00
w-e-w 592e40ebe9 update restricted_opts 2024-04-11 23:10:25 +09:00
storyicon 4068429ac7 fix: Coordinate 'right' is less than 'left'
Signed-off-by: storyicon <storyicon@foxmail.com>
2024-04-10 10:53:25 +00:00
AUTOMATIC1111 88f70ce63c Merge pull request #15470 from AUTOMATIC1111/read-infotext-Script-not-found
error handling paste_field callables
2024-04-09 16:01:13 +03:00
AUTOMATIC1111 ac8ffb34e3 Merge pull request #15470 from AUTOMATIC1111/read-infotext-Script-not-found
error handling paste_field callables
2024-04-09 16:00:56 +03:00
w-e-w ef83f6831f catch exceptions for all paste_fields callables 2024-04-09 21:32:58 +09:00
w-e-w 600f339c4c Warning when Script is not found 2024-04-09 21:32:41 +09:00
AUTOMATIC1111 7f691612ca Merge pull request #15460 from AUTOMATIC1111/create_infotext-index-and-callable
create_infotext allow index and callable, re-work Hires prompt infotext
2024-04-09 12:05:15 +03:00
AUTOMATIC1111 a976f4dff9 Merge pull request #15460 from AUTOMATIC1111/create_infotext-index-and-callable
create_infotext allow index and callable, re-work Hires prompt infotext
2024-04-09 12:05:02 +03:00
AUTOMATIC1111 696d6813e0 Merge pull request #15465 from jordenyt/fix-extras-api-upscale-enabled
Fix extra-single-image API failing to apply upscale
2024-04-09 11:00:49 +03:00
AUTOMATIC1111 c48b6bf6bd Merge pull request #15465 from jordenyt/fix-extras-api-upscale-enabled
Fix extra-single-image API failing to apply upscale
2024-04-09 11:00:30 +03:00
Jorden Tse 2580235c72 Fix extra-single-image API failing to apply upscale 2024-04-09 11:13:47 +08:00
AUTOMATIC1111 3786f3742f fix limited file write (thanks, Sylwia) 2024-04-08 16:15:55 +03:00
AUTOMATIC1111 d9708c92b4 fix limited file write (thanks, Sylwia) 2024-04-08 16:15:25 +03:00
w-e-w e3aabe6959 add documentation for create_infotext 2024-04-08 20:14:02 +09:00
w-e-w 1e1176b6eb non-serializable as None 2024-04-08 18:18:33 +09:00
w-e-w 219e64489c re-work extra_generation_params for Hires prompt 2024-04-08 18:18:13 +09:00
w-e-w 47ed9b2d39 allow list or callables in generation_params 2024-04-08 01:39:31 +09:00
w-e-w 6efdfe3234 if use_main_prompt, index = 0 2024-04-07 22:58:12 +09:00
AUTOMATIC1111 e1640314df 1.9.0 changelog 2024-04-06 21:46:56 +03:00
AUTOMATIC1111 c16a27caa9 Merge pull request #15446 from AUTOMATIC1111/re-add-update_file_entry
re-add update_file_entry
2024-04-06 10:02:45 +03:00
w-e-w 2ad17a6100 re-add update_file_entry
MassFileLister.update_file_entry was accidentally removed in https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15205/files#diff-c39b942d8f8620d46d314db8301189b8d6195fc97aedbeb124a33694b738d69cL151-R173
2024-04-06 15:56:57 +09:00
AUTOMATIC1111 23c06a51cc use 'scripts.' prefix for names of dynamically loaded modules 2024-04-06 09:05:04 +03:00
AUTOMATIC1111 badb70da48 Merge pull request #15423 from storyicon/master
feat: ensure the indexability of dynamically imported packages
2024-04-06 09:00:35 +03:00
AUTOMATIC1111 447198f21b Merge pull request #15442 from AUTOMATIC1111/open_folder-as-util
open_folder as util
2024-04-06 08:54:20 +03:00
AUTOMATIC1111 acb20338b1 put HF_ENDPOINT into shared for #15443 2024-04-06 08:53:21 +03:00
AUTOMATIC1111 73f7812045 Merge pull request #15443 from Satariall/add-hf_endpoint-variable
Use HF_ENDPOINT variable for HuggingFace domain with default
2024-04-06 08:48:55 +03:00
Marsel Markhabulin 989b89b12a Use HF_ENDPOINT variable for HuggingFace domain with default
Modified the list_models function to dynamically construct the model URL
by using an environment variable for the HuggingFace domain. This allows
for greater flexibility in specifying the domain and ensures that the
modification is also compatible with the Hub client library. By
supporting different environments or requirements without hardcoding the
domain name, this change facilitates the use of custom HuggingFace
domains not only within our code but also when interacting with the Hub
client library.
2024-04-05 13:02:49 +03:00
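A minimal sketch of the pattern described above, assuming only that HF_ENDPOINT holds a base URL (it is the same variable the huggingface_hub client library honors):

```python
import os

def model_url(repo_path: str) -> str:
    # Fall back to the public HuggingFace domain when HF_ENDPOINT is unset.
    endpoint = os.environ.get("HF_ENDPOINT", "https://huggingface.co")
    return f"{endpoint.rstrip('/')}/{repo_path}"

# e.g. export HF_ENDPOINT=https://my-hf-mirror.example to use a mirror
print(model_url("runwayml/stable-diffusion-v1-5"))
```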
w-e-w 20123d427b open_folder docstring 2024-04-05 16:19:20 +09:00
w-e-w a05d89b1e5 Merge branch 'dev' into open_folder-as-util 2024-04-05 15:14:38 +08:00
w-e-w 92e6aa3653 open_folder as util 2024-04-05 16:08:45 +09:00
AUTOMATIC1111 b372fb6165 fix API upscale 2024-04-01 23:33:45 +03:00
AUTOMATIC1111 0cb2bbd01a Merge pull request #15428 from v0xie/fix/remove-callbacks
Fix: Remove script callbacks in ordered_callbacks_map
2024-04-01 23:25:03 +03:00
v0xie a669b8a6bc fix: remove script callbacks in ordered_callbacks_map 2024-04-01 12:51:09 -07:00
AUTOMATIC1111 719296133d Merge pull request #15425 from light-and-ray/fix_upscaler_2_images_do_not_match
fix upscaler 2 images do not match
2024-04-01 13:40:09 +03:00
Andray 86861f8379 fix upscaler 2 images do not match 2024-04-01 13:58:45 +04:00
storyicon e73a7e4006 feat: ensure the indexability of dynamically imported packages
Signed-off-by: storyicon <storyicon@foxmail.com>
2024-04-01 09:13:07 +00:00
AUTOMATIC1111 aa4a45187e Merge pull request #15417 from light-and-ray/fix_upscaler_2
Fix upscaler 2: add missed max_side_length
2024-03-31 18:49:21 +03:00
Andray 0a7d1e756f fix upscaler 2 2024-03-31 19:34:58 +04:00
AUTOMATIC1111 859f0f6b19 Merge pull request #15415 from light-and-ray/fix_dcd4f880a86e500ec88ddf7eafe65894a24b85a3
fix dcd4f880a8
2024-03-31 16:55:47 +03:00
AUTOMATIC1111 23ef5027c6 Merge pull request #15414 from DrBiggusDickus/dev2
Fix CodeFormer weight
2024-03-31 16:53:23 +03:00
Andray 4ccbae320e fix dcd4f880a8 2024-03-31 17:05:15 +04:00
DrBiggusDickus ea83180761 fix CodeFormer weight 2024-03-31 14:41:06 +02:00
AUTOMATIC1111 f1a6c5fe17 add an option to hide postprocessing options in Extras tab 2024-03-31 08:30:00 +03:00
AUTOMATIC1111 bfa20d2758 resize Max side length field 2024-03-31 08:20:19 +03:00
AUTOMATIC1111 dcd4f880a8 rework code/UI for #15293 2024-03-31 08:17:22 +03:00
AUTOMATIC1111 7f3ce06de9 Merge pull request #15293 from light-and-ray/extras_upscaler_limit_target_resolution
Extras upscaler: option limit target resolution
2024-03-31 08:02:35 +03:00
AUTOMATIC1111 8bebfde701 Merge pull request #15350 from baseco/memory-bug-fix
minor bug fix of sd model memory management
2024-03-30 07:37:10 +03:00
AUTOMATIC1111 98096195dd Merge pull request #15382 from huaizong/fix/whz/model-loaded-remove-v3
fix: when an already_loaded model is found, remove it by array index
2024-03-30 07:36:33 +03:00
AUTOMATIC1111 642bca4c3d Merge pull request #15380 from light-and-ray/interrupt_upscale
interrupt upscale
2024-03-30 07:34:34 +03:00
AUTOMATIC1111 80b87107de Merge pull request #15386 from eltociear/patch-3
fix typo in call_queue.py
2024-03-30 07:34:10 +03:00
AUTOMATIC1111 c4c8a64111 restore the line lost in the merge 2024-03-30 07:33:39 +03:00
AUTOMATIC1111 470d402b17 Merge pull request #15390 from ochen1/patch-1
fix: Python version check for PyTorch installation compatibility
2024-03-30 07:32:29 +03:00
AUTOMATIC1111 1dc8cc1bce Merge branch 'dev' into patch-1 2024-03-30 07:31:08 +03:00
AUTOMATIC1111 8687163f7f Merge pull request #15394 from light-and-ray/fix_ui_config_for_hires_sampler_and_scheduler
fix ui_config for hires sampler and scheduler
2024-03-27 16:42:09 +03:00
Andray 4e2bb7250f fix_ui_config_for_hires_sampler_and_scheduler 2024-03-27 15:35:06 +04:00
ochen1 5461b00e89 fix: Python version check for PyTorch installation compatibility 2024-03-26 21:22:09 -06:00
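A sketch of the kind of guard this fix concerns, using the interpreter's version tuple rather than string comparison (which breaks on "3.10" < "3.9"); the bounds here are illustrative, not the launcher's actual ones:

```python
import sys

SUPPORTED = ((3, 8), (3, 12))  # illustrative bounds, not the real ones
if not (SUPPORTED[0] <= sys.version_info[:2] <= SUPPORTED[1]):
    print("Warning: this Python version may have no prebuilt PyTorch wheels.")
```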
Ikko Eltociear Ashimine 16522cb0e3 fix typo in call_queue.py
amout -> amount
2024-03-27 03:01:06 +09:00
Andray c321680b3d interrupt upscale 2024-03-26 14:53:38 +04:00
王怀宗 f4633cb9c0 fix: when an already_loaded model is found, remove it by array index 2024-03-26 18:29:51 +08:00
Boning f62217b65d minor bug fix of sd model memory management 2024-03-25 10:38:15 -07:00
AUTOMATIC1111 dfbdb5a135 put request: gr.Request at start of img2img function similar to txt2img 2024-03-25 18:00:58 +03:00
AUTOMATIC1111 b0b90dc0d7 Merge pull request #15319 from catboxanon/feat/ssmd_cover_images
Support cover images embedded in safetensors metadata
2024-03-24 13:43:37 +03:00
AUTOMATIC1111 9aa9e980a9 support scheduler selection in hires fix 2024-03-24 11:00:16 +03:00
catboxanon c4402500c7 Support ssmd_cover_images 2024-03-24 02:33:10 -04:00
AUTOMATIC1111 755d2cb2e5 Merge pull request #15343 from light-and-ray/escape_brackets_in_lora_random_prompt
escape brackets in lora random prompt generator
2024-03-24 05:31:33 +03:00
AUTOMATIC1111 0affa24ce2 Merge pull request #15354 from akx/xyz-script-size
Add Size as an XYZ Grid option
2024-03-24 05:27:00 +03:00
AUTOMATIC1111 bf2f7b3af4 Merge pull request #15333 from AUTOMATIC1111/scheduler_selection
Scheduler selection in main UI
2024-03-24 05:21:56 +03:00
AUTOMATIC1111 db61b876d6 Merge pull request #15361 from kaalibro/fix/scheduler_selection
Fix for "Scheduler selection" #15333
2024-03-24 05:21:40 +03:00
kaalibro f3ca6a92ad Fix for #15333
- Fix "X/Y/Z plot" not working with "Schedule type"
- Fix "Schedule type" not being saved to "params.txt"
2024-03-23 00:50:37 +05:00
Aarni Koskela 2941e1f1f3 Add Size as an XYZ Grid option 2024-03-22 12:04:03 +02:00
Andray 721c4309c2 escape brackets in lora random prompt generator 2024-03-21 16:29:51 +04:00
AUTOMATIC1111 57727e554d make #15334 work without making copies of images 2024-03-21 07:22:27 +03:00
AUTOMATIC1111 b80b1cf92c Merge pull request #15334 from Gourieff/extras--allow-png-rgba--dev
Allow PNG-RGBA for Extras Tab
2024-03-21 07:21:15 +03:00
AUTOMATIC1111 5c5594ff16 linter 2024-03-21 07:09:40 +03:00
AUTOMATIC1111 65075896f2 Merge pull request #15310 from Dalton-Murray/update-pytorch-lightning-utilities
Update pytorch lightning utilities
2024-03-21 07:08:43 +03:00
Dalton 32ba757501 Re-add import but after if check 2024-03-20 23:55:04 -04:00
Dalton 4e6e2574ab Cleanup ddpm_edit.py
Fully reverts this time
2024-03-20 23:36:35 -04:00
Dalton 41907b25f0 Cleanup sd_hijack_ddpm_v1.py
Forgot some things to revert
2024-03-20 23:35:32 -04:00
Dalton 4bc2963320 Remove unnecessary import 2024-03-20 23:33:15 -04:00
Dalton 4eb5e09873 Update initialize_util.py 2024-03-20 23:28:40 -04:00
Dalton b5b04912b5 Include running pytorch lightning check 2024-03-20 23:06:00 -04:00
Dalton f010dfffb9 Revert ddpm_edit.py 2024-03-20 23:02:30 -04:00
Dalton 5fd9a40b92 Revert sd_hijack_ddpm_v1.py 2024-03-20 23:01:50 -04:00
Art Gourieff e0cad0f87a Merge remote-tracking branch 'upstream/dev' into extras--allow-png-rgba--dev 2024-03-20 15:28:17 +07:00
Art Gourieff 8ec8901921 FIX: No specific type for 'image' arg
Roll back
2024-03-20 15:20:29 +07:00
Art Gourieff 702edb288e FIX: initial_pp RGBA right way 2024-03-20 15:14:28 +07:00
AUTOMATIC1111 31306ce672 change the behavior of discard_next_to_last_sigma for sgm_uniform to match other schedulers 2024-03-20 10:29:52 +03:00
AUTOMATIC1111 ac9aa44cb8 do not add 'Automatic' to infotext 2024-03-20 10:27:53 +03:00
AUTOMATIC1111 76f8436bfa add Uniform scheduler 2024-03-20 10:27:32 +03:00
Art Gourieff 61f488302f FIX: Allow PNG-RGBA for Extras Tab 2024-03-20 13:28:32 +07:00
AUTOMATIC1111 25cd53d775 scheduler selection in main UI 2024-03-20 09:17:11 +03:00
AUTOMATIC1111 060e55dfe3 Merge pull request #15331 from AUTOMATIC1111/extra-networks-buttons
Fix extra networks buttons when filename contains an apostrophe
2024-03-20 06:53:55 +03:00
missionfloyd b5c33341a1 Don't use quote_js on filename 2024-03-19 19:06:56 -06:00
missionfloyd 6e420c7be2 Merge branch 'dev' into extra-networks-buttons 2024-03-19 19:03:53 -06:00
missionfloyd d7f48472cc Fix extra networks buttons when filename contains an apostrophe 2024-03-19 18:50:25 -06:00
Dalton 49779413aa Formatting sd_hijack_ddpm_v1.py 2024-03-19 14:54:06 -04:00
Dalton 8f450321fe Formatting ddpm_edit 2024-03-19 14:53:30 -04:00
Dalton 86276832e0 Update sd_hijack_ddpm_v1.py 2024-03-19 14:45:07 -04:00
Dalton 61f321756f Update ddpm_edit.py 2024-03-19 14:44:31 -04:00
AUTOMATIC1111 d44b8aa8c1 Merge pull request #15325 from AUTOMATIC1111/sgm_uniform
Sgm uniform scheduler for SDXL-Lightning models
2024-03-19 21:37:16 +03:00
Kohaku-Blueleaf a6b5a513f9 Implementation for sgm_uniform branch 2024-03-19 20:05:54 +08:00
AUTOMATIC1111 c4a00affc5 use existing quote_js function for #15316 2024-03-19 08:10:27 +03:00
AUTOMATIC1111 522121be7e Merge pull request #15316 from AUTOMATIC1111/escape-filename
Escape btn_copy_path filename
2024-03-19 08:02:36 +03:00
missionfloyd 3fa1ebed62 Escape btn_copy_path filename 2024-03-18 21:47:52 -06:00
AUTOMATIC1111 7ac7600dc3 Merge pull request #15307 from AUTOMATIC1111/restore-outputs-path
restore outputs path
2024-03-18 19:33:48 +03:00
w-e-w e9d4da7b56 restore outputs path
output -> outputs
2024-03-19 00:54:56 +09:00
AUTOMATIC1111 c4664b5a9c fix for listing wrong requirements for extensions 2024-03-18 08:00:42 +03:00
Andray 203afa39c4 update tooltip 2024-03-18 06:52:46 +04:00
Dalton 51cb20ec39 Update ddpm_edit.py 2024-03-17 22:45:31 -04:00
Dalton 2a6054f836 Update sd_hijack_ddpm_v1.py 2024-03-17 22:37:19 -04:00
AUTOMATIC1111 8ac4a207f3 Merge pull request #15299 from AUTOMATIC1111/diskcache-bett
Tweak diskcache limits
2024-03-17 23:59:12 +03:00
Aarni Koskela df4da02ab0 Tweak diskcache limits 2024-03-17 20:25:25 +00:00
AUTOMATIC1111 f1b090e9e0 Merge pull request #15287 from AUTOMATIC1111/diskcache
use diskcache library for caching
2024-03-17 23:20:00 +03:00
AUTOMATIC1111 611faaddef change the default name for callback from None to "unnamed" 2024-03-17 23:19:24 +03:00
AUTOMATIC1111 daa1b33247 make reloading UI scripts optional when doing Reload UI, and off by default 2024-03-17 18:16:12 +03:00
Andray fd83d4eec3 add .needs_reload_ui() 2024-03-17 18:19:13 +04:00
Andray 81be357925 hide limit target resolution under option 2024-03-17 14:51:19 +04:00
AUTOMATIC1111 79cbc92abf change code for variant requirements in metadata.ini 2024-03-17 13:30:20 +03:00
Andray 06c5dd0907 maybe fix tests 2024-03-17 14:28:26 +04:00
AUTOMATIC1111 908d522057 update ruff to 0.3.3 2024-03-17 13:19:44 +03:00
AUTOMATIC1111 4ce2e25c0b Merge pull request #15290 from light-and-ray/allow_variants_for_extension_name_in_metadata.ini
allow variants for extension name in metadata.ini
2024-03-17 13:19:23 +03:00
Andray ef35619325 Extras upscaler: option limit target resolution 2024-03-17 14:14:12 +04:00
Andray b1cd0189bc allow variants for extension name in metadata.ini 2024-03-17 13:05:35 +04:00
AUTOMATIC1111 c95c46004a Merge pull request #15288 from light-and-ray/allow_use_zoom.js_outside_webui_context
little fixes zoom.js
2024-03-17 09:48:33 +03:00
Andray c3f75d1d85 little fixes zoom.js 2024-03-17 10:30:11 +04:00
AUTOMATIC1111 c12ba58433 Merge pull request #15286 from light-and-ray/allow_use_zoom.js_outside_webui_context
allow using zoom.js outside webui context [for extensions]
2024-03-17 09:20:51 +03:00
AUTOMATIC1111 66355b4775 use diskcache library for caching 2024-03-17 09:18:32 +03:00
Andray e9b8a89b3c allow using zoom.js outside webui context 2024-03-17 09:29:11 +04:00
AUTOMATIC1111 93c7b9d7fc linter for #15262 2024-03-17 07:02:31 +03:00
AUTOMATIC1111 6d8b7ec188 Merge pull request #15262 from catboxanon/feat/dragdrop-urls
Support dragdrop for URLs to read infotext
2024-03-17 07:02:08 +03:00
catboxanon 446cd5a58b dragdrop: add error handling for URLs 2024-03-16 20:19:12 -04:00
missionfloyd 83a9dd82db Download image client-side 2024-03-16 17:10:26 -06:00
missionfloyd 3da13f0cc9 Fix dragging to/from firefox 2024-03-16 15:46:29 -06:00
AUTOMATIC1111 df8c09bcb3 Merge pull request #15283 from AUTOMATIC1111/dora-weight-decompose
Use correct DoRA implementation
2024-03-16 20:20:08 +03:00
AUTOMATIC1111 8dcb8faf5d Merge branch 'dev' into dora-weight-decompose 2024-03-16 20:20:02 +03:00
Kohaku-Blueleaf 199c51d688 linter 2024-03-17 00:00:07 +08:00
Kohaku-Blueleaf 1792e193b1 Use correct implementation, fix device error 2024-03-16 23:52:29 +08:00
AUTOMATIC1111 bf35c66183 fix for #15179 2024-03-16 18:45:19 +03:00
AUTOMATIC1111 cb09e1ef7d Merge pull request #15179 from llnancy/master
fix: fix syntax errors
2024-03-16 18:45:01 +03:00
AUTOMATIC1111 0283826179 prevent alt key from opening main menu if it's also used for brush size 2024-03-16 18:44:36 +03:00
AUTOMATIC1111 2f9d1c33e2 Merge pull request #15267 from light-and-ray/prevent_alt_menu_on_firefox
prevent alt menu for firefox
2024-03-16 18:31:55 +03:00
AUTOMATIC1111 874809e0ca Merge pull request #15268 from light-and-ray/handle_0_wheel_deltaX
handle 0 wheel deltaY
2024-03-16 18:25:00 +03:00
Andray c364b60776 handle 0 wheel deltaX 2024-03-16 18:08:02 +04:00
Andray 7598a92436 use e.key instead of e.code 2024-03-16 17:49:05 +04:00
Andray eb2ea8df1d check e.key in up event 2024-03-16 17:42:25 +04:00
Andray 9142ce8188 fix linter and do not require reload page if option was changed 2024-03-16 16:14:57 +04:00
Andray 79514e5b8e prevent defaults for alt only if mouse inside image 2024-03-16 16:06:21 +04:00
AUTOMATIC1111 bb9df5cdc9 Merge pull request #15276 from AUTOMATIC1111/v180_hr_styles-actual-version-number
v180_hr_styles actual version number
2024-03-16 12:40:24 +03:00
AUTOMATIC1111 e8613dbc93 Merge pull request #15231 from light-and-ray/fix_ui-config_for_InputAccordion
fix ui-config for InputAccordion [custom_script_source]
2024-03-16 12:35:43 +03:00
Andray cc8ea32501 fix ui-config for InputAccordion 2024-03-16 12:32:39 +04:00
w-e-w 38a7dc5488 v180_hr_styles actual version number 2024-03-16 17:19:38 +09:00
AUTOMATIC1111 5bd2724765 Merge pull request #15205 from AUTOMATIC1111/callback_order
Callback order
2024-03-16 09:45:41 +03:00
AUTOMATIC1111 9fd693272f Merge pull request #15211 from light-and-ray/type_hintinh_in_shared.py
type hinting in shared.py
2024-03-16 09:45:30 +03:00
AUTOMATIC1111 f7bad19e00 Merge pull request #15221 from AUTOMATIC1111/fix-Restore-progress
fix "Restore progress" button
2024-03-16 09:44:50 +03:00
AUTOMATIC1111 03ea0f3bfc Merge pull request #15222 from light-and-ray/move_postprocessing-for-training_into_builtin_extensions
move postprocessing-for-training into builtin extensions
2024-03-16 09:43:01 +03:00
AUTOMATIC1111 2fc47b44c2 Merge pull request #15223 from light-and-ray/move_upscale_postprocessing_under_input_accordion
move upscale postprocessing under input accordion
2024-03-16 09:41:40 +03:00
AUTOMATIC1111 446e49d6db Merge branch 'dev' into move_upscale_postprocessing_under_input_accordion 2024-03-16 09:41:16 +03:00
AUTOMATIC1111 8bc9978909 Merge pull request #15228 from wangshuai09/ascend_npu_readme
Ascend NPU wiki page
2024-03-16 09:39:34 +03:00
AUTOMATIC1111 1282bceeba Merge pull request #15233 from light-and-ray/featch_only_active_branch_updates_for_extensions
fetch only active branch updates for extensions
2024-03-16 09:06:33 +03:00
AUTOMATIC1111 d38b390ed4 Merge pull request #15239 from AUTOMATIC1111/Fix-lora-bugs
Add missing .mean() back
2024-03-16 09:06:05 +03:00
AUTOMATIC1111 63c3c4dbc3 simplify code for #15244 2024-03-16 09:04:08 +03:00
AUTOMATIC1111 afb9296e0d Merge pull request #15244 from Haoming02/auto-scale-by
Automatically set the "Scale by" value when the user selects an Upscale Model
2024-03-16 08:49:32 +03:00
AUTOMATIC1111 c9244ef83a Merge pull request #15224 from DGdev91/dev
Better workaround for Navi1, removing --pre for Navi3
2024-03-16 08:45:02 +03:00
AUTOMATIC1111 a072c1997d Merge pull request #15259 from AUTOMATIC1111/PEP-604-annotations
PEP 604 annotations
2024-03-16 08:43:02 +03:00
AUTOMATIC1111 3cb698ac15 Merge pull request #15260 from v0xie/fix-OFT-MhA-AttributeError
Fix AttributeError in OFT when trying to get MultiheadAttention weight
2024-03-16 08:42:45 +03:00
AUTOMATIC1111 0cc3647c1c Merge pull request #15261 from catboxanon/fix/imageviewer-click
Make imageviewer event listeners browser consistent
2024-03-16 08:41:11 +03:00
AUTOMATIC1111 3ffe47c6b7 Merge pull request #15263 from AUTOMATIC1111/fix-hr-comments
Strip comments from hires fix prompt
2024-03-16 08:30:16 +03:00
AUTOMATIC1111 c5aa7b65f7 Merge pull request #15269 from AUTOMATIC1111/fix-Hires-prompt-Styles
fix issue with Styles when Hires prompt is used
2024-03-16 08:25:39 +03:00
AUTOMATIC1111 01ba5ad213 Merge pull request #15272 from AUTOMATIC1111/bump-action-version
bump action version
2024-03-16 08:22:48 +03:00
w-e-w a3a648bf6b bump action version 2024-03-16 05:57:23 +09:00
w-e-w 887a512208 fix issue with Styles when Hires prompt is used 2024-03-15 21:06:54 +09:00
Andray 6f51e05553 prevent alt menu for firefox 2024-03-15 12:12:37 +04:00
missionfloyd 5f4203bf9b Strip comments from hires fix prompt 2024-03-14 22:23:06 -06:00
catboxanon 8eaa7e9f04 Support dragdrop for URLs 2024-03-15 04:06:17 +00:00
catboxanon 76fd487818 Make imageviewer event listeners browser consistent 2024-03-14 21:59:53 -04:00
v0xie 07805cbeee fix: AttributeError when attempting to reshape rescale by org_module weight 2024-03-14 17:05:14 -07:00
w-e-w c40f33ca04 PEP 604 annotations 2024-03-15 08:22:36 +09:00
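PEP 604 is the union-operator syntax this commit migrates annotations to; a minimal before/after:

```python
from __future__ import annotations  # lets the new syntax run on Python < 3.10

from typing import Optional, Union

def old_style(name: Optional[str], count: Union[int, float]) -> Optional[str]:
    return name

def new_style(name: str | None, count: int | float) -> str | None:
    return name
```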
Haoming 4e17fc36d8 add user setting
Now this is disabled by default
2024-03-14 10:04:09 +08:00
Haoming fd71b761ff use re instead of hardcoding
Now supports all natively provided upscalers as well
2024-03-14 09:55:14 +08:00
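The commit text suggests the "Scale by" factor is read out of the upscaler's name with a regex instead of a hardcoded table; a hypothetical sketch of that idea (the actual pattern and fallback are assumptions):

```python
import re

def infer_scale(upscaler_name: str, default: float = 4.0) -> float:
    # Hypothetical helper: read a leading "<N>x" factor out of names like
    # "4x-UltraSharp" or "2x_ESRGAN"; otherwise keep the default.
    m = re.match(r"(\d+)[xX]", upscaler_name)
    return float(m.group(1)) if m else default

print(infer_scale("4x-UltraSharp"))  # 4.0
print(infer_scale("Lanczos"))        # 4.0 (falls back to the default)
```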
Haoming d18eb10ecd add hook 2024-03-13 21:15:52 +08:00
KohakuBlueleaf 9f2ae1cb85 Add missing .mean 2024-03-13 11:47:33 +08:00
DGdev91 32f0b5dbaf Merge branch 'dev' of https://github.com/DGdev91/stable-diffusion-webui into dev 2024-03-13 00:56:42 +01:00
DGdev91 2efc7c1b05 Better workaround for Navi1, removing --pre for Navi3 2024-03-13 00:54:32 +01:00
DGdev91 9fbfb8ad32 Better workaround for Navi1 - fix if 2024-03-13 00:43:01 +01:00
DGdev91 74e2e5279c Workaround for Navi1: pytorch nightly whl for 3.8 and 3.9 2024-03-13 00:17:24 +01:00
Andray b980c8140b fetch only active branch updates for extensions 2024-03-12 22:21:59 +04:00
wangshuai09 994e08aac1 ascend npu readme 2024-03-12 18:45:26 +08:00
DGdev91 8262cd71c4 Better workaround for Navi1, removing --pre for Navi3 2024-03-12 00:09:07 +01:00
Andray 2e3a0f39f6 move upscale postprocessing under input accordion 2024-03-12 02:28:15 +04:00
Andray 4079b17dd9 move postprocessing-for-training into builtin extensions 2024-03-12 01:50:57 +04:00
w-e-w 1a1205f601 fix Restore progress 2024-03-12 03:26:50 +09:00
Andray 2d57a2df66 Update modules/shared.py
Co-authored-by: catboxanon <122327233+catboxanon@users.noreply.github.com>
2024-03-11 07:40:15 +04:00
Andray eb10da8bb7 type hinting in shared.py 2024-03-11 05:15:09 +04:00
AUTOMATIC1111 3e0146f9bd restore the lost Uncategorized options section 2024-03-10 22:40:35 +03:00
AUTOMATIC1111 1bbc8a153b Merge branch 'dev' into callback_order 2024-03-10 16:15:09 +03:00
AUTOMATIC1111 3670b4f49e lint 2024-03-10 15:16:12 +03:00
AUTOMATIC1111 2f55d669a2 add support for specifying callback order in metadata 2024-03-10 15:14:04 +03:00
AUTOMATIC1111 edc56202c1 Merge pull request #15201 from AUTOMATIC1111/update-preview-on-Replace-Preview
update preview on Replace Preview
2024-03-10 14:11:26 +03:00
AUTOMATIC1111 7e5e67330b add UI for reordering callbacks 2024-03-10 14:09:48 +03:00
w-e-w 9fd0cd6a80 update preview on Replace Preview 2024-03-10 18:24:52 +09:00
SunChaser 9b842e9ec7 fix: resolve type annotation warnings 2024-03-10 16:19:59 +08:00
AUTOMATIC1111 0411eced89 add names to callbacks 2024-03-10 07:52:57 +03:00
AUTOMATIC1111 2e93bdce0c Merge pull request #15198 from zopieux/search-desc
Add model description to searched terms
2024-03-10 07:03:16 +03:00
AUTOMATIC1111 8076100e14 Merge pull request #15199 from AUTOMATIC1111/add-entry-to-MassFileLister-after-writing-metadata
Add entry to MassFileLister after writing metadata
2024-03-10 07:01:46 +03:00
w-e-w fb62f1fb40 add entry to MassFileLister after writing metadata
fix #15184
2024-03-10 06:07:16 +09:00
Alexandre Macabies 0085e719a9 Add model description to searched terms.
This adds the model description to the searchable terms.
This is particularly useful since the description can be used to store
arbitrary tags, independently from the filename, which is imposed by the
model publisher.
2024-03-09 21:53:38 +01:00
AUTOMATIC1111 6136db1409 linter 2024-03-09 12:21:46 +03:00
AUTOMATIC1111 110e3d7033 Merge pull request #15191 from AUTOMATIC1111/fix-default-in-get_learned_conditioning
Avoid error from None in get_learned_conditioning
2024-03-09 12:21:23 +03:00
Kohaku-Blueleaf 0dc179ee72 Avoid error from None 2024-03-09 17:12:54 +08:00
AUTOMATIC1111 4c9a7b8a75 Merge pull request #15190 from AUTOMATIC1111/dora-weight-decompose
Fix built-in lora system bugs caused by torch.nn.MultiheadAttention
2024-03-09 08:29:51 +03:00
AUTOMATIC1111 1770b887ec Merge pull request #15189 from 10sa/dev
Add '--no-prompt-history' cmd arg to disable last generation prompt history
2024-03-09 08:28:56 +03:00
AUTOMATIC1111 18d801a13d stylistic changes for extra network sorting/search controls 2024-03-09 08:25:01 +03:00
Kohaku-Blueleaf 851c3d51ed Fix bugs for torch.nn.MultiheadAttention 2024-03-09 12:31:32 +08:00
AUTOMATIC1111 5251733c0d use natural sort in extra networks when ordering by path 2024-03-09 07:24:51 +03:00
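Natural sort orders "model2" before "model10" by comparing digit runs numerically; a minimal sketch of such a key function:

```python
import re

def natural_sort_key(s: str):
    # Split into alternating text/digit runs; digit runs compare as integers.
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", s)]

paths = ["lora/model10.safetensors", "lora/model2.safetensors"]
print(sorted(paths, key=natural_sort_key))
# ['lora/model2.safetensors', 'lora/model10.safetensors']
```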
10sa c50b7e4eff Add '--no-prompt-history' cmd arg to disable last generation prompt history 2024-03-09 11:43:49 +09:00
AUTOMATIC1111 d318f1b5e1 Merge pull request #15183 from jim60105/master
chore: fix font not loaded
2024-03-08 21:58:41 +03:00
陳鈞 02a4ceabdd chore: fix font not loaded
fix #15182
2024-03-09 02:13:35 +08:00
AUTOMATIC1111 7d1368c51c lint 2024-03-08 17:11:56 +03:00
AUTOMATIC1111 758e8d7b41 undo unwanted change for extra networks 2024-03-08 17:11:42 +03:00
AUTOMATIC1111 530fea2bc4 optimization for extra networks sorting 2024-03-08 17:09:11 +03:00
AUTOMATIC1111 3bd75adb1c optimization for extra networks filtering 2024-03-08 16:54:39 +03:00
SunChaser 01f531e9b1 fix: fix syntax errors 2024-03-08 17:25:28 +08:00
AUTOMATIC1111 a551a43164 add an option to have old-style directory view instead of tree view 2024-03-08 09:52:25 +03:00
AUTOMATIC1111 a43ce7eabb fix broken resize handle on the train tab 2024-03-08 08:13:08 +03:00
AUTOMATIC1111 9409419afb Merge pull request #15160 from AUTOMATIC1111/dora-weight-decompose
Add DoRA (weight-decompose) support for LoRA/LoHa/LoKr
2024-03-08 08:02:17 +03:00
AUTOMATIC1111 e0c9361b7d performance optimization for extra networks 2024-03-08 07:51:31 +03:00
AUTOMATIC1111 8b96f3d036 Merge pull request #15178 from catboxanon/feat/edit-attention-whitespace-trim
edit-attention: deselect surrounding whitespace
2024-03-08 06:21:24 +03:00
catboxanon 5ab5405b6f Simpler comparison
Co-authored-by: missionfloyd <missionfloyd@users.noreply.github.com>
2024-03-07 21:30:05 -05:00
catboxanon 766f6e3eca edit-attention: deselect surrounding whitespace 2024-03-07 18:30:36 -05:00
Kohaku-Blueleaf 12bcacf413 Initial implementation 2024-03-07 13:29:40 +08:00
AUTOMATIC1111 58f7410c9d Merge pull request #14820 from alexhegit/master
Update to ROCm5.7 and PyTorch
2024-03-06 15:45:39 +03:00
AUTOMATIC1111 ea3aae9c39 Merge branch 'dev' into master 2024-03-06 15:44:55 +03:00
AUTOMATIC1111 8904e00842 Merge pull request #15148 from continue-revolution/conrevo/fix-soft-inpaint
Fix Soft Inpaint for AnimateDiff
2024-03-06 15:36:01 +03:00
continue-revolution 7d59b3b564 rm comment 2024-03-06 05:39:17 -06:00
continue-revolution 7f766cd762 Merge branch 'dev' into conrevo/fix-soft-inpaint 2024-03-06 05:33:30 -06:00
continue-revolution 73e635ce6e fix 2024-03-06 05:32:59 -06:00
AUTOMATIC1111 ecd5fa9c42 Merge pull request #15131 from catboxanon/feat/extra-network-metadata
Re-use profiler visualization for extra networks
2024-03-06 13:08:43 +03:00
AUTOMATIC1111 14215beb48 Merge pull request #15135 from AUTOMATIC1111/fix-extract_style_text_from_prompt
fix extract_style_text_from_prompt #15132
2024-03-06 13:07:43 +03:00
AUTOMATIC1111 11ef1a9302 Merge pull request #15142 from catboxanon/fix/emphasis-prompt-txt-write-order
Fix emphasis infotext missing from `params.txt`
2024-03-06 13:07:02 +03:00
AUTOMATIC1111 c1deec64cb lint 2024-03-06 13:06:13 +03:00
AUTOMATIC1111 2bb296531d Merge pull request #15141 from catboxanon/feat/emphasis-infotext-parse
Only override emphasis if actually used in prompt
2024-03-06 13:05:56 +03:00
catboxanon ed386c84b6 Fix emphasis infotext missing from params.txt 2024-03-05 11:53:36 -05:00
catboxanon 7785d484ae Only override emphasis if actually used in prompt 2024-03-05 11:50:53 -05:00
w-e-w 706f63adfa fix extract_style_text_from_prompt #15132 2024-03-05 12:35:46 +09:00
catboxanon ecffe8513e Lint 2024-03-04 18:46:25 -05:00
catboxanon 801461eea2 Re-use profiler visualization for extra networks 2024-03-04 18:33:22 -05:00
AUTOMATIC1111 eee46a5094 Merge pull request #14981 from wangshuai09/gpu_info_for_ascend
Add training support and change lspci for Ascend NPU
2024-03-04 20:06:54 +03:00
AUTOMATIC1111 09b5ce68a9 add images.read to automatically fix all jpeg/png weirdness 2024-03-04 19:14:53 +03:00
AUTOMATIC1111 5625ce1b1a Merge pull request #14958 from HTYISABUG/dev
Error handling for unsupported transparency
2024-03-04 18:40:16 +03:00
AUTOMATIC1111 58278aa71c Merge pull request #15121 from AUTOMATIC1111/fix-settings-in-ui
[alternative fix] webui can't load if a wrong extra option is selected in UI
2024-03-04 18:24:09 +03:00
AUTOMATIC1111 33fbe943e2 Merge pull request #15062 from astriaai/fix-exif-orientation-api
Fix EXIF orientation in API image loading
2024-03-04 16:26:53 +03:00
AUTOMATIC1111 0dc12861ef call script_callbacks.ui_settings_callback earlier; fix extra-options-section built-in extension killing the ui if using a setting that doesn't exist 2024-03-04 15:30:46 +03:00
Alon Burg 67d8dafe44 Fix EXIF orientation in API image loading 2024-03-04 12:23:14 +02:00
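Pillow has a one-call fix for this class of bug; a minimal sketch (not necessarily the PR's exact code):

```python
from PIL import Image, ImageOps

def load_upright(path: str) -> Image.Image:
    img = Image.open(path)
    # Apply the EXIF Orientation tag to the pixels and strip the tag, so
    # downstream code sees the image the way the camera displayed it.
    return ImageOps.exif_transpose(img)
```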
wangshuai09 3fb1c2e58d fix npu-smi command 2024-03-04 17:19:37 +08:00
AUTOMATIC1111 92d77e3fa8 Merge pull request #15102 from light-and-ray/fix_jpeg_live_preview
fix_jpeg_live_preview
2024-03-04 10:30:24 +03:00
AUTOMATIC1111 48a677c4ac Merge pull request #15116 from akx/typos
Fix various typos with crate-ci/typos
2024-03-04 10:28:42 +03:00
Aarni Koskela e3fa46f26f Fix various typos with crate-ci/typos 2024-03-04 08:42:07 +02:00
AUTOMATIC1111 e2a8745abc Merge pull request #15084 from clayne/1709395892-upscale-logging-reduce
upscaler_utils: Reduce logging
2024-03-04 06:50:03 +03:00
Christopher Layne 3c0177a24b upscaler_utils: Reduce logging
* upscale_with_model: Remove debugging logging occurring in loop as
  it's an excessive amount of noise when running w/ DEBUG log levels.
2024-03-03 11:41:32 -08:00
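A minimal sketch of the change described: keep debug logging out of the per-tile loop so DEBUG runs stay readable (names and the summary line are illustrative):

```python
import logging

logger = logging.getLogger("upscaler_utils")

def upscale_tiles(tiles):
    # One summary line per call instead of one line per tile.
    logger.debug("upscaling %d tiles", len(tiles))
    return [tile * 2 for tile in tiles]  # stand-in for the real per-tile work

print(upscale_tiles([1, 2, 3]))
```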
Andray 0103365697 fix_jpeg_live_preview 2024-03-03 16:54:58 +04:00
AUTOMATIC1111 45b8a499a7 fix wrong condition 2024-03-02 10:36:48 +03:00
AUTOMATIC1111 bb24c13ed7 infotext support for #14978 2024-03-02 07:39:59 +03:00
AUTOMATIC1111 aabedcbcc7 Merge pull request #14978 from drhead/refiner_fix
Make refiner switchover based on model timesteps instead of sampling steps
2024-03-02 07:24:44 +03:00
AUTOMATIC1111 f4cb21bb8a Merge pull request #15031 from light-and-ray/unix_filenames
cmd args: `--unix-filenames-sanitization` and `--filenames-max-length`
2024-03-02 07:16:06 +03:00
AUTOMATIC1111 6044a9723a Merge pull request #15059 from Dalton-Murray/direct-binary-release-link
Add a direct link to the binary release
2024-03-02 07:15:32 +03:00
AUTOMATIC1111 9189ea20b0 Merge pull request #15041 from light-and-ray/resize_handle_for_extra_networks
resize handle for extra networks
2024-03-02 07:13:00 +03:00
AUTOMATIC1111 95d143eafe Merge branch 'master' into dev 2024-03-02 07:04:33 +03:00
AUTOMATIC1111 bef51aed03 Merge branch 'release_candidate' 2024-03-02 07:03:13 +03:00
AUTOMATIC1111 1398485789 update changelog 2024-03-02 07:00:08 +03:00
AUTOMATIC1111 141a17e969 style changes for #14979 2024-03-02 06:55:03 +03:00
AUTOMATIC1111 da67afe5f6 call apply_alpha_schedule_override in load_model_weights for #14979 2024-03-02 06:54:58 +03:00
AUTOMATIC1111 28bc85a20a Merge pull request #14979 from drhead/refiner_cumprod_fix
Protect alphas_cumprod during refiner switchover
2024-03-02 06:54:55 +03:00
AUTOMATIC1111 978c7fadb3 Merge pull request #15039 from light-and-ray/dat_cmd_flag
dat cmd flag
2024-03-02 06:54:52 +03:00
AUTOMATIC1111 d558cb69b0 Merge pull request #15065 from light-and-ray/resizeHandle_handle_double_tap
resizeHandle handle double tap
2024-03-02 06:54:49 +03:00
AUTOMATIC1111 0b07a6cf26 Merge pull request #15035 from AUTOMATIC1111/fix/normalized-filepath-absolute
Use `absolute` path for normalized filepath
2024-03-02 06:54:48 +03:00
AUTOMATIC1111 024a32a09b Merge pull request #15012 from light-and-ray/register_tmp_file-also-with-mtime
register_tmp_file also for mtime
2024-03-02 06:54:46 +03:00
AUTOMATIC1111 b47756385d Merge pull request #15010 from light-and-ray/fix_resize-handle_for_vertical_layout
Fix resize-handle visibility for vertical layout (mobile)
2024-03-02 06:54:44 +03:00
AUTOMATIC1111 ee470cc6a3 style changes for #14979 2024-03-02 06:54:11 +03:00
AUTOMATIC1111 1a51b166a0 call apply_alpha_schedule_override in load_model_weights for #14979 2024-03-02 06:53:53 +03:00
AUTOMATIC1111 06b9200e91 Merge pull request #14979 from drhead/refiner_cumprod_fix
Protect alphas_cumprod during refiner switchover
2024-03-02 06:40:32 +03:00
AUTOMATIC1111 f04e76811a Merge pull request #15039 from light-and-ray/dat_cmd_flag
dat cmd flag
2024-03-02 06:38:50 +03:00
AUTOMATIC1111 817d9b15f7 Merge pull request #15065 from light-and-ray/resizeHandle_handle_double_tap
resizeHandle handle double tap
2024-03-02 06:37:19 +03:00
AUTOMATIC1111 150b603770 Merge pull request #15035 from AUTOMATIC1111/fix/normalized-filepath-absolute
Use `absolute` path for normalized filepath
2024-03-02 06:35:46 +03:00
Andray eb0b84c564 make minimal width 2 times smaller than default 2024-02-29 16:02:21 +04:00
Andray bb99f52712 resizeHandle handle double tap 2024-02-29 15:40:15 +04:00
Dalton bce09eb987 Add a direct link to the binary release 2024-02-29 01:04:46 -05:00
Andray 51cc1ff2c9 fix for mobile and allow collapse right column 2024-02-27 23:31:47 +04:00
Andray b4c44e659b fix on reload with changed show all loras setting 2024-02-27 23:17:52 +04:00
Andray de7604fa77 lint 2024-02-27 18:38:38 +04:00
Andray 44bce3c74e resize handle for extra networks 2024-02-27 18:31:36 +04:00
Andray 3ba575216a dat cmd flag 2024-02-27 15:10:51 +04:00
drhead 4dae91a1fe remove alphas cumprod fix from samplers_common 2024-02-26 23:46:10 -05:00
drhead 94f23d00a7 move alphas cumprod override out of processing 2024-02-26 23:44:58 -05:00
drhead e2cd92ea23 move refiner fix to sd_models.py 2024-02-26 23:43:27 -05:00
catboxanon 3a618e3d24 Fix normalized filepath, resolve -> absolute
https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/313
https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/14942#discussioncomment-8550050
2024-02-26 12:44:57 -05:00
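The resolve -> absolute rename matters because pathlib's two "make it absolute" methods differ; a minimal sketch of the distinction:

```python
from pathlib import Path

p = Path("models/Stable-diffusion")
print(p.absolute())  # anchors to the CWD without touching the filesystem
print(p.resolve())   # additionally normalizes ".." and follows symlinks,
                     # which can rewrite paths on symlinked or network setups
```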
Andray dd4b0b95d5 cmd args: allow unix filenames and filenames max length 2024-02-26 16:30:15 +04:00
AUTOMATIC1111 c8a5322d1f Merge pull request #15012 from light-and-ray/register_tmp_file-also-with-mtime
register_tmp_file also for mtime
2024-02-26 12:53:21 +03:00
AUTOMATIC1111 ca0308b60d Merge pull request #15010 from light-and-ray/fix_resize-handle_for_vertical_layout
Fix resize-handle visibility for vertical layout (mobile)
2024-02-26 12:52:34 +03:00
Andray 6e6cc2922d fix resize handle 2024-02-26 13:37:29 +04:00
AUTOMATIC1111 eaf5e0289a update changelog 2024-02-26 07:37:26 +03:00
AUTOMATIC1111 f97e3548e5 Merge pull request #15006 from imnodb/master
fix: the `split_threshold` parameter does not work when running Split oversized images
2024-02-26 07:35:42 +03:00
AUTOMATIC1111 e651ca8adb Merge pull request #15004 from light-and-ray/ResizeHandleRow_-_allow_overriden_column_scale_parametr
ResizeHandleRow - allow overridden column scale parameter
2024-02-26 07:35:36 +03:00
AUTOMATIC1111 78e421e4ea Merge pull request #14995 from dtlnor/14591-bug-the-categories-layout-is-different-when-localization-is-on
Fix #14591 using translated content to do categories mapping
2024-02-26 07:35:32 +03:00
AUTOMATIC1111 a10c8df876 Merge pull request #14973 from AUTOMATIC1111/Fix-new-oft-boft
Fix the OFT/BOFT bugs when using new LyCORIS implementation
2024-02-26 07:35:28 +03:00
drhead 648f6a8e0c dont need to preserve alphas_cumprod_original 2024-02-25 23:28:36 -05:00
AUTOMATIC1111 2b7ddcbb5c Merge pull request #15006 from imnodb/master
fix: the `split_threshold` parameter does not work when running Split oversized images
2024-02-26 07:16:42 +03:00
AUTOMATIC1111 e3a8dc6e23 Merge pull request #15004 from light-and-ray/ResizeHandleRow_-_allow_overriden_column_scale_parametr
ResizeHandleRow - allow overridden column scale parameter
2024-02-26 07:16:24 +03:00
AUTOMATIC1111 ca8dc2bde2 Merge pull request #14995 from dtlnor/14591-bug-the-categories-layout-is-different-when-localization-is-on
Fix #14591 using translated content to do categories mapping
2024-02-26 07:12:31 +03:00
AUTOMATIC1111 900419e85e Merge pull request #14973 from AUTOMATIC1111/Fix-new-oft-boft
Fix the OFT/BOFT bugs when using new LyCORIS implementation
2024-02-26 07:12:12 +03:00
Andray 3a99824638 register_tmp_file also with mtime 2024-02-23 20:26:56 +04:00
Andray bab918f049 fix resize-handle for vertical layout 2024-02-23 18:34:24 +04:00
DB Eriospermum ed594d7ba6 fix: the split_threshold parameter does not work when running Split oversized images 2024-02-23 13:37:37 +08:00
Andray 9211febbfc ResizeHandleRow - allow overridden column scale parameter 2024-02-23 02:32:12 +04:00
AUTOMATIC1111 3069716510 update changelog 2024-02-22 23:00:14 +03:00
AUTOMATIC1111 dfab42c949 Merge pull request #15002 from light-and-ray/support_resizable_columns_for_touch_(tablets)
support resizable columns for touch (tablets)
2024-02-22 22:59:44 +03:00
AUTOMATIC1111 18819723c1 Merge pull request #15002 from light-and-ray/support_resizable_columns_for_touch_(tablets)
support resizable columns for touch (tablets)
2024-02-22 22:59:26 +03:00
AUTOMATIC1111 6bc35be10a update changelog 2024-02-22 21:29:58 +03:00
AUTOMATIC1111 b53989b195 update changelog 2024-02-22 21:29:10 +03:00
AUTOMATIC1111 726aaea0fe make extra network card description plaintext by default, with an option to re-enable HTML as it was 2024-02-22 21:27:28 +03:00
AUTOMATIC1111 3f18a09c86 make extra network card description plaintext by default, with an option to re-enable HTML as it was 2024-02-22 21:27:10 +03:00
Andray 58985e6b37 fix lint 2 2024-02-22 17:22:00 +04:00
Andray ab1e0fa9bf fix lint and console warning 2024-02-22 17:16:16 +04:00
Andray 85abbbb8fa support resizable columns for touch (tablets) 2024-02-22 17:04:56 +04:00
wangshuai09 ba66cf8d69 update 2024-02-22 20:17:10 +08:00
AUTOMATIC1111 052fbde3ac possible fix for reload button not appearing in some cases for extra networks. 2024-02-22 10:48:30 +03:00
AUTOMATIC1111 1da05297ea possible fix for reload button not appearing in some cases for extra networks. 2024-02-22 10:28:03 +03:00
dtlnor f537e5a519 fix #14591 - using translated content to do categories mapping 2024-02-22 12:26:57 +09:00
Kohaku-Blueleaf c4afdb7895 For no constraint 2024-02-22 00:43:32 +08:00
Kohaku-Blueleaf 64179c3221 Update network_oft.py 2024-02-21 22:50:43 +08:00
wangshuai09 b7aa425344 del gpu_info for npu 2024-02-21 11:49:06 +08:00
drhead 9c1ece8978 Protect alphas_cumprod during refiner switchover 2024-02-20 19:23:21 -05:00
drhead bf348032bc fix missing arg 2024-02-20 16:59:28 -05:00
drhead 25eeeaa65f Allow refiner to be triggered by model timestep instead of sampling 2024-02-20 16:37:29 -05:00
drhead 09d2e58811 Pass sigma to apply_refiner 2024-02-20 16:22:40 -05:00
drhead f4869f8de3 Add compatibility option for refiner switching 2024-02-20 16:18:13 -05:00
Kohaku-Blueleaf 591470d86d linting 2024-02-20 17:21:34 +08:00
Kohaku-Blueleaf a5436a3ac0 Update network_oft.py 2024-02-20 17:20:14 +08:00
AUTOMATIC1111 140d58b512 Merge pull request #14966 from light-and-ray/avoid_doble_upscaling_in_inpaint
[bug] avoid double upscaling in inpaint
2024-02-19 18:06:16 +03:00
AUTOMATIC1111 0a271938d8 Merge pull request #14966 from light-and-ray/avoid_doble_upscaling_in_inpaint
[bug] avoid double upscaling in inpaint
2024-02-19 18:06:05 +03:00
Andray 33c8fe1221 avoid double upscaling in inpaint 2024-02-19 16:57:49 +04:00
AUTOMATIC1111 532c340829 update changelog 2024-02-19 10:07:57 +03:00
AUTOMATIC1111 92ab0ef7d6 Merge pull request #14871 from v0xie/boft
Support inference with LyCORIS BOFT networks
2024-02-19 10:05:44 +03:00
AUTOMATIC1111 6e4fc5e1a8 Merge pull request #14871 from v0xie/boft
Support inference with LyCORIS BOFT networks
2024-02-19 10:05:30 +03:00
Kohaku-Blueleaf 4eb949625c prevent undefined variable 2024-02-19 14:43:07 +08:00
HSIEH TSUNGYU 9d5dc582be Error handling for unsupported transparency
When input images (palette mode) have transparency (bytes) in info,
the output images (RGB mode) will inherit it,
causing ValueError in Pillow:PIL/PngImagePlugin.py#1364
when trying to unpack these bytes.

This commit checks the PNG mode and transparency info,
removing the transparency if the mode is RGB and the transparency is bytes
2024-02-18 19:27:33 +08:00
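A minimal sketch of the guard described above, assuming Pillow's behavior that convert() copies the info dict onto the new image:

```python
from PIL import Image

def to_rgb_safe(img: Image.Image) -> Image.Image:
    converted = img.convert("RGB")
    # A palette image's "transparency" info entry may be raw bytes; if an
    # RGB image carries it, Pillow's PNG writer raises ValueError, so drop it.
    if isinstance(converted.info.get("transparency"), bytes):
        converted.info.pop("transparency")
    return converted

src = Image.new("P", (8, 8))
src.info["transparency"] = b"\x00"  # simulate the problematic metadata
out = to_rgb_safe(src)
assert "transparency" not in out.info
```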
Kohaku-Blueleaf 5a8dd0c549 Fix rescale 2024-02-18 14:58:41 +08:00
AUTOMATIC1111 c7808825b1 update changelog 2024-02-17 21:38:19 +03:00
AUTOMATIC1111 cb52279c3e Merge pull request #14947 from AUTOMATIC1111/open-button
option "open image button" opens the actual dir
2024-02-17 21:31:22 +03:00
AUTOMATIC1111 9d5becb4de Merge pull request #14947 from AUTOMATIC1111/open-button
option "open image button" opens the actual dir
2024-02-17 21:30:21 +03:00
w-e-w 71072f5620 re-work open image button settings 2024-02-18 02:47:44 +09:00
w-e-w a18e54ecd7 option "open image button" opens the actual dir 2024-02-18 00:38:05 +09:00
AUTOMATIC1111 3345218439 Update comment for Pad prompt/negative prompt v0 to add a warning about truncation, make it override the v1 implementation 2024-02-17 13:21:37 +03:00
AUTOMATIC1111 4ff1fabc86 Update comment for Pad prompt/negative prompt v0 to add a warning about truncation, make it override the v1 implementation 2024-02-17 13:21:08 +03:00
AUTOMATIC1111 4652fc5ac3 prevent escape button causing an interrupt when no generation has been made yet 2024-02-17 11:41:30 +03:00
AUTOMATIC1111 4573195894 prevent escape button causing an interrupt when no generation has been made yet 2024-02-17 11:40:53 +03:00
AUTOMATIC1111 310d6b9075 update changelog 2024-02-17 11:35:36 +03:00
AUTOMATIC1111 db19c46d6d lint 2024-02-17 10:32:10 +03:00
AUTOMATIC1111 1466daeafc Disable prompt token counters option actually disables token counting rather than just hiding results.
Disable prompt token counters option does not require reload UI.
token counters do not become visible until they are positioned correctly.
2024-02-17 10:31:16 +03:00
AUTOMATIC1111 dd1641ecc4 fix an exception when filtering extra networks very early 2024-02-17 09:46:04 +03:00
AUTOMATIC1111 7dae6bb3b5 fix search UI invisible in an extra network tab that just loaded 2024-02-17 09:45:48 +03:00
AUTOMATIC1111 2e1b61e590 change condition for scheduleAfterScriptsCallbacks() to properly reflect the needed amount of search fields 2024-02-17 09:45:03 +03:00
AUTOMATIC1111 f293dbbf97 Merge pull request #14900 from AUTOMATIC1111/fix-css-color-extra-network-control--enabled
fix extra-network-control--enabled color
2024-02-17 09:00:06 +03:00
AUTOMATIC1111 bf08a5b75e Merge pull request #14910 from analysisjp/wsl2_flx
fixed webui.sh issue that occurred in WSL environment (fix: #14883)
2024-02-17 08:59:41 +03:00
AUTOMATIC1111 48ce0379bc Merge pull request #14916 from light-and-ray/use_original_document_title
Use original App Title in progress bar
2024-02-17 08:57:56 +03:00
AUTOMATIC1111 d235aa068d Merge pull request #14932 from AUTOMATIC1111/fix/esc-interrupt
Only trigger interrupt on `Esc` when interrupt button visible
2024-02-17 08:57:25 +03:00
AUTOMATIC1111 ce57a6c6db Merge pull request #14933 from AUTOMATIC1111/fix/graceful-mtime-hash-cache-exception
Gracefully handle mtime read exception from cache
2024-02-17 08:56:48 +03:00
AUTOMATIC1111 d70632a7cf Merge pull request #14934 from AUTOMATIC1111/fix/normalize-cmd-arg-paths
Normalize command-line argument paths
2024-02-17 08:54:06 +03:00
AUTOMATIC1111 4333ecc43f Merge pull request #14939 from AUTOMATIC1111/Fix-extranetworks-search-reload
Fix bugs where search/reload disappear when using other ExtraNetworks extensions
2024-02-17 08:51:11 +03:00
AUTOMATIC1111 a56125b0a8 Merge pull request #14930 from RedDeltas/feat/launch_utils/file_mode_for_clone
Added core.filemode=false so doesn't track changes in file permission…
2024-02-17 08:48:37 +03:00
Kohaku-Blueleaf 23f03d4796 Update extraNetworks.js 2024-02-16 16:43:43 +08:00
catboxanon 06ab10a1be Normalize cmd arg paths
In particular, this fixes an issue on Windows where some functions
will misbehave if forward slashes are provided rather than
double backslashes.
2024-02-15 14:22:13 -05:00
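A minimal sketch of the normalization, assuming os.path is the mechanism (on Windows, normpath/abspath also rewrite forward slashes to backslashes, which is what some APIs expect):

```python
import os.path

arg = "models/Stable-diffusion"
# abspath implies normpath: it collapses ".." segments, anchors relative
# paths to the CWD, and on Windows converts "/" separators to "\".
print(os.path.abspath(arg))
```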
catboxanon 6ee4012c0a Gracefully handle mtime read exception from cache 2024-02-15 13:31:44 -05:00
catboxanon 46988af636 Fix Esc interrupt when button not visible 2024-02-15 13:05:39 -05:00
RedDeltas 18ec22bffe Added core.filemode=false so git doesn't track changes in file permissions in more restrictive environments 2024-02-15 12:26:14 +00:00
Andray 1142201a3a Use original App Title in progress bar 2024-02-14 15:26:57 +04:00
analysisjp 69f9564a6d fixed webui.sh issue that occurred in WSL environment (fix: #14883) 2024-02-13 21:49:23 +09:00
Kohaku-Blueleaf 90441294db Add rescale mechanism
LyCORIS will support saving oft_blocks instead of oft_diag in the near future (for both OFT and BOFT)

But this means we need to store the rescale if the user enables it.
2024-02-12 14:25:09 +08:00
w-e-w 13fd466c18 fix extra-network-control--enabled color
also add forgotten semicolon
2024-02-12 04:07:14 +09:00
AUTOMATIC1111 b7f45e67dc add before_token_counter callback and use it for prompt comments 2024-02-11 12:56:53 +03:00
AUTOMATIC1111 02ab75b86a Count tokens of enabled styles 2024-02-11 12:40:27 +03:00
AUTOMATIC1111 f6e476d7a8 call the right function for token counter in img2img 2024-02-11 12:24:02 +03:00
AUTOMATIC1111 b531b0bbef add prompt comments support 2024-02-11 12:23:21 +03:00
AUTOMATIC1111 e2b19900ec add infotext entry for emphasis; put emphasis into a separate file, add an option to parse but still ignore emphasis 2024-02-11 09:39:51 +03:00
AUTOMATIC1111 3732cf2f97 Merge pull request #14874 from hako-mikan/master
Add option to disable normalizing embeddings after calculating emphasis.
2024-02-11 08:34:40 +03:00
AUTOMATIC1111 2f1e2c492f Merge pull request #14873 from AUTOMATIC1111/check_extensions_list_on_apply_js_method
if extensions page not loaded, prevent apply
2024-02-11 08:29:54 +03:00
AUTOMATIC1111 860534399b Merge pull request #14879 from AUTOMATIC1111/walk_files-extensions-case-insensitive
util.walk_files extensions case insensitive
2024-02-11 08:29:05 +03:00
AUTOMATIC1111 4d46f8c25c Merge pull request #14883 from analysisjp/dev_fix_memleak_new
fix prepare_tcmalloc (fix: #14227) (fixed memory leak issue on Ubuntu 22.04 and other modern Linux environments)
2024-02-11 08:28:42 +03:00
AUTOMATIC1111 5ddd5d29e5 Merge pull request #14884 from light-and-ray/ResizeHandleRow_png_info_and_train
ResizeHandleRow png_info and train
2024-02-11 08:26:20 +03:00
AUTOMATIC1111 440fff64a2 Merge pull request #14890 from AUTOMATIC1111/always-append-timestamp
Always add timestamp to displayed image
2024-02-11 08:26:00 +03:00
AUTOMATIC1111 d2246df160 Merge pull request #14885 from AUTOMATIC1111/extensions-tab-table-row-hover-highlight
extensions tab table row hover highlight
2024-02-11 08:24:41 +03:00
missionfloyd c04c4b95de Always add timestamp to displayed image 2024-02-10 14:49:08 -07:00
w-e-w 7583351760 extensions tab table row hover highlight 2024-02-10 18:09:10 +09:00
Andray 82e2e25325 ResizeHandleRow png_info and train 2024-02-10 13:00:16 +04:00
analysisjp 2ba0277b52 fix: prepare_tcmalloc (fixed memory leak issue on Ubuntu 22.04 and other modern Linux environments) 2024-02-10 10:09:19 +09:00
w-e-w 542611cce4 walk_files extensions case insensitive 2024-02-10 05:39:01 +09:00
hako-mikan c3c88ca8b4 Update sd_hijack_clip.py 2024-02-10 00:18:08 +09:00
hako-mikan 6b3f7039b6 add option 2024-02-09 23:57:46 +09:00
w-e-w 6b8458eb9f if extensions page not loaded, prevent apply
since there are built-in extensions, we can assume there will be at least one extension in the list

Co-Authored-By: Andray <33491867+light-and-ray@users.noreply.github.com>
2024-02-09 23:19:39 +09:00
hako-mikan 0bc7867ccd Merge branch 'AUTOMATIC1111:master' into master 2024-02-09 23:17:40 +09:00
AUTOMATIC1111 d69a7944c9 Merge pull request #14857 from light-and-ray/refresh_extensions_list
Button for refresh extensions list
2024-02-09 16:06:02 +03:00
v0xie eb6f2df826 Revert "fix: add butterfly_factor fn"
This reverts commit 81c16c965e.
2024-02-08 22:00:15 -08:00
v0xie 613b0d9548 doc: add boft comment 2024-02-08 21:58:59 -08:00
v0xie 325eaeb584 fix: get boft params from weight shape 2024-02-08 11:55:05 -08:00
v0xie 2f1073dc6e style: fix lint 2024-02-07 04:55:11 -08:00
v0xie 81c16c965e fix: add butterfly_factor fn 2024-02-07 04:54:14 -08:00
v0xie a4668a16b6 fix: calculate butterfly factor 2024-02-07 04:51:22 -08:00
v0xie 9588721197 feat: support LyCORIS BOFT 2024-02-07 04:49:17 -08:00
Andray 99c6c4a51b add button for refreshing extensions list 2024-02-07 16:06:17 +04:00
AUTOMATIC1111 321b2db067 fix extra networks metadata failing to work properly when you create the .json file with metadata for the first time. 2024-02-02 22:47:51 +03:00
AUTOMATIC1111 1ff1c5be64 fix refresh button forgetting sort order for extra networks #14588 2024-02-02 20:51:54 +03:00
AUTOMATIC1111 5084b39ea5 fix checkpoint selection not working for #14588 2024-02-02 19:41:07 +03:00
AUTOMATIC1111 5904e3f6b3 fix page refresh not re-applying sort/filter for #14588
fix path sortkey not including the filename for #14588
2024-02-02 19:30:59 +03:00
Alex He db4632f4ba Update to ROCm5.7 and PyTorch
webui.sh installs ROCm5.4.2 by default. The webui failed to run on an AMD
Radeon Pro W7900 with a **Segmentation Fault** on Ubuntu 22.04, possibly due to
an ABI compatibility issue.

ROCm5.7 is currently the latest version supported by PyTorch (https://pytorch.org/).
I tested it on an AMD Radeon Pro W7900 with PyTorch+ROCm5.7 and it passed.

Signed-off-by: Alex He <heye_dev@163.com>
2024-02-02 13:48:42 +08:00
AUTOMATIC1111 2600370659 fix error when editing extra networks card 2024-02-01 23:54:57 +03:00
AUTOMATIC1111 9f3ba38314 Add "Interrupting..." placeholder. 2024-02-01 22:34:29 +03:00
AUTOMATIC1111 b594f518b7 Merge pull request #14814 from AUTOMATIC1111/catch-load-style.csv-error
catch load style.csv error
2024-02-01 22:02:28 +03:00
w-e-w bbe8e02d74 catch load style.csv error 2024-02-01 15:40:15 +09:00
AUTOMATIC1111 652a7bbf80 Merge pull request #14809 from Cyberbeing/fix_upscaler_autocast_nans
Fix potential autocast NaNs in image upscale
2024-01-31 22:41:22 +03:00
AUTOMATIC1111 1b0931fd92 Merge pull request #14803 from AUTOMATIC1111/create_submit_box-tooltip
add tooltip create_submit_box
2024-01-31 22:39:50 +03:00
AUTOMATIC1111 96b550430a Merge pull request #14801 from wangshuai09/npu_support
Add NPU Support
2024-01-31 22:39:29 +03:00
Cyberbeing 74b214a92a Fix potential autocast NaNs in image upscale 2024-01-30 22:32:31 -08:00
wangshuai09 cc3f604310 Update 2024-01-31 12:29:58 +08:00
w-e-w c4255d12f7 add tooltip create_submit_box 2024-01-31 04:36:11 +09:00
wangshuai09 74ff85a1a1 Merge branch 'dev' into npu_support 2024-01-30 19:15:41 +08:00
AUTOMATIC1111 ce168ab5db Merge pull request #14791 from AUTOMATIC1111/fix-mha-manual-cast
Fix dtype error in MHA layer/change dtype checking mechanism for manual cast
2024-01-29 20:39:06 +03:00
Kohaku-Blueleaf f9ba7e648a Revert "Try to reverse the dtype checking mechanism"
This reverts commit d243e24f53.
2024-01-29 22:54:12 +08:00
Kohaku-Blueleaf d243e24f53 Try to reverse the dtype checking mechanism 2024-01-29 22:49:45 +08:00
Kohaku-Blueleaf 6e7f0860f7 linting 2024-01-29 22:46:43 +08:00
Kohaku-Blueleaf 750dd6014a Fix potential bugs 2024-01-29 22:27:53 +08:00
AUTOMATIC1111 6484053037 Merge pull request #14782 from AUTOMATIC1111/safetensors-0.4.2
Bump safetensors' version to 0.4.2
2024-01-29 16:35:07 +03:00
wangshuai09 ec124607f4 Add NPU Support 2024-01-29 19:25:06 +08:00
AUTOMATIC1111 baaf39b6f9 fix the typo -- thanks Cyberbeing 2024-01-29 10:20:27 +03:00
Kohaku-Blueleaf 2cb1b65309 Bump safetensors' version to 0.4.2 2024-01-28 22:18:46 +08:00
AUTOMATIC1111 757dda9ade Add Pad conds v0 option 2024-01-27 22:30:47 +03:00
AUTOMATIC1111 e717eaff86 Merge pull request #14773 from AUTOMATIC1111/rework-set_named_arg
rework set_named_arg
2024-01-27 20:54:47 +03:00
w-e-w eae0bb89fd set_named_arg fuzzy option 2024-01-27 21:55:52 +09:00
AUTOMATIC1111 0a3a83382f Merge pull request #14775 from AUTOMATIC1111/fix-CLIP-Interrogator-topN-regex
fix CLIP Interrogator topN regex
2024-01-27 14:36:27 +03:00
w-e-w 486aeda3a7 fix CLIP Interrogator topN regex
Co-Authored-By: Martin Rizzo <60830236+martin-rizzo@users.noreply.github.com>
2024-01-27 19:35:07 +09:00
AUTOMATIC1111 7ea0859362 Merge pull request #14728 from AUTOMATIC1111/another-Hr-button-fix-
Another hr button fix
2024-01-27 10:06:40 +03:00
w-e-w 2996e43ff7 fix txt2img_upscale
use force_enable_hr to set p.enable_hr = True
allow Script.setup() to have access to the correct value

add a comment for p.txt2img_upscale
2024-01-27 14:55:47 +09:00
w-e-w 36d1fefc19 rework set_named_arg
change how a script is identified, from the Script class name to the Script's internal name,
as not all Scripts have unique class names

raise RuntimeError when there's an issue
2024-01-27 14:42:52 +09:00
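A rough sketch of the reworked lookup, with assumed field names rather than the actual modules/scripts.py structures: the script is located by its internal name, and any problem raises RuntimeError instead of failing silently.

```
def set_named_arg(script_args, script_name, arg_index, value, script_runner):
    # `name` is the script's internal name, which is unique, unlike the class
    # name; args_from/args_to delimit the script's slice of the shared args list
    script = next((s for s in script_runner.scripts if s.name == script_name), None)
    if script is None:
        raise RuntimeError(f"script {script_name!r} not found")
    if not 0 <= arg_index < script.args_to - script.args_from:
        raise RuntimeError(f"argument index {arg_index} out of range for {script_name!r}")
    script_args = list(script_args)
    script_args[script.args_from + arg_index] = value
    return script_args
```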
AUTOMATIC1111 4d5db58a3e Merge pull request #14767 from AUTOMATIC1111/minor-fix-to-#14525
minor fix to #14525
2024-01-26 19:04:13 +03:00
w-e-w 47cf92039b minor fix to #14525 2024-01-26 17:16:53 +09:00
AUTOMATIC1111 c6c9d12275 Merge pull request #14754 from AUTOMATIC1111/xyz-filter-blank-for-number-axes
XYZ filter blank for number axes
2024-01-26 00:41:24 +03:00
w-e-w 0466ee2a83 xyz filter blank for number axes 2024-01-25 05:45:45 +09:00
AUTOMATIC1111 19c95de8eb Merge pull request #14715 from stefanbenten/sb/embedding-refresh
modules/api/api.py: add api endpoint to refresh embeddings list
2024-01-23 22:35:41 +03:00
AUTOMATIC1111 358e9e2847 Merge pull request #14726 from v0xie/fix-oft-device
Fix kohya-ss OFT network wrong device for eye and constraint
2024-01-23 22:34:24 +03:00
AUTOMATIC1111 c17f7ee694 Merge pull request #14707 from AUTOMATIC1111/multi-styles-base-styles-file
re-work multi --styles-file
2024-01-23 22:22:37 +03:00
AUTOMATIC1111 de5a8c5cb4 add an option to not overlay original image for inpainting for #14727 2024-01-23 22:19:38 +03:00
AUTOMATIC1111 bac30eb3e7 Merge pull request #14740 from light-and-ray/make_extras_tab_collumns_resizable
Make extras tab columns resizable
2024-01-23 22:00:56 +03:00
Andray 695e24dbce make extras tab columns resizable 2024-01-23 21:56:49 +04:00
AUTOMATIC1111 94d4b3c8e7 lint 2024-01-23 00:36:31 +03:00
AUTOMATIC1111 f4e931f18f put extra networks controls row into the tabs UI element for #14588 2024-01-22 23:20:30 +03:00
AUTOMATIC1111 569dc1919c Merge pull request #14588 from Sj-Si/feature/extra-networks-tree-view
Feature: Extra Networks Tree View
2024-01-22 22:24:06 +03:00
v0xie fd383140cf fix: wrong devices for eye and constraint 2024-01-22 02:52:34 -08:00
Sj-Si 26e1cd7ec4 Remove unnecessary template and simplify tree list. 2024-01-21 11:34:08 -05:00
Sj-Si d7d3166a27 Fix broken scrollbars 2024-01-21 11:27:24 -05:00
Stefan Benten 2974b9cee9 modules/api/api.py: add api endpoint to refresh embeddings list 2024-01-21 14:05:47 +01:00
AUTOMATIC1111 8a6a4ad894 Merge pull request #14709 from AUTOMATIC1111/improve-get_crop_region
improve get_crop_region
2024-01-21 16:01:44 +03:00
w-e-w e36827af32 improve get_crop_region 2024-01-21 09:02:18 +09:00
AUTOMATIC1111 3a5196de1b Merge pull request #14663 from aalbacetef/feature/add-model-to-log-file
feature/add-model-to-log-file
2024-01-21 02:05:18 +03:00
Arturo Albacete 4aa99f77ab add docstring 2024-01-20 22:04:53 +01:00
Arturo Albacete f190b85182 restore saving fields 2024-01-20 21:27:38 +01:00
Arturo Albacete 8459015017 skip if headers haven't changed 2024-01-20 21:19:53 +01:00
Arturo Albacete d0b65e148b merge dev 2024-01-20 21:15:57 +01:00
w-e-w 25e8273d2f re-work multi --styles-file
--styles-file changed to an appending str argument
if --styles-file is [] it defaults to [styles.csv]

--styles-file accepts paths, or paths with the wildcard "*"

the first `--styles-file` entry is used as the default styles file path
if the filename is a wildcard, the first matching file is used
if no match is found, a new "styles.csv" is created in the same dir as the first path

when saving a new style, it will be saved to the default styles file
when saving an existing style, it will be saved to the file it belongs to

the order of the styles files in the styles dropdown can be controlled to a certain degree by the order of --styles-file
2024-01-21 03:53:58 +09:00
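Those rules amount to a small piece of glob handling; a sketch with illustrative helper names (the real logic lives in the webui's styles code):

```
import glob
import os


def resolve_styles_files(styles_file_args):
    """Expand every --styles-file value; [] defaults to ["styles.csv"]."""
    resolved = []
    for pattern in styles_file_args or ["styles.csv"]:
        if "*" in os.path.basename(pattern):
            resolved.extend(sorted(glob.glob(pattern)))  # wildcard: all matches
        else:
            resolved.append(pattern)
    return resolved


def default_styles_file(styles_file_args):
    """The first entry is the default save target; an unmatched wildcard
    falls back to a new styles.csv in the same directory."""
    first = (styles_file_args or ["styles.csv"])[0]
    if "*" in os.path.basename(first):
        matches = sorted(glob.glob(first))
        return matches[0] if matches else os.path.join(os.path.dirname(first), "styles.csv")
    return first
```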
Sj-Si b67a49441f Add option in settings to enable/disable tree view by default. 2024-01-20 13:28:37 -05:00
Sj-Si 2310cd66e5 Add toggle button for tree view. Use default settings for sortmode and direction. 2024-01-20 11:43:45 -05:00
AUTOMATIC1111 f939bce845 Merge pull request #14702 from light-and-ray/keep_postprocessing_upscale_selected_tab_after_restart
[Bug] Keep postprocessing upscale selected tab after restart
2024-01-20 14:56:53 +03:00
Andray ed383eb5a0 keep postprocessing upscale selected tab after restart 2024-01-20 15:37:49 +04:00
AUTOMATIC1111 c1713bfeac Merge pull request #14689 from AUTOMATIC1111/fix-nested-manual-cast
Fix nested manual cast
2024-01-20 11:43:51 +03:00
Kohaku-Blueleaf 4a66d2fb22 Avoid exceptions being silenced 2024-01-20 16:33:59 +08:00
Kohaku-Blueleaf 81126027f5 Avoid early disable 2024-01-20 16:31:12 +08:00
AUTOMATIC1111 1dbee391b4 Merge pull request #14637 from light-and-ray/fix_tab_indexes_resets_after_restart_ui
[Bug] Fix tab indexes are reseted after restart UI
2024-01-20 11:00:29 +03:00
Andray 56676ff923 fix tab indexes reset after restart ui 2024-01-20 11:49:05 +04:00
AUTOMATIC1111 a06ae54a18 Merge pull request #14639 from light-and-ray/fix_extension_check_for_requirements
[Bug] Fix extension check for requirements
2024-01-20 10:30:28 +03:00
AUTOMATIC1111 58a142b56b Merge pull request #14640 from WebDev9000/replace-hashtags-in-filenames
Add # to the invalid_filename_chars list
2024-01-20 10:29:42 +03:00
AUTOMATIC1111 0f2de4cfbe Merge pull request #14655 from chi2nagisa/lora-alias-fix
[bug] fix using wrong model caused by alias
2024-01-20 10:27:54 +03:00
AUTOMATIC1111 cfa326bbcc Merge pull request #14659 from AUTOMATIC1111/immediately-stop-on-second-interrupt
immediately stop on second interrupt
2024-01-20 10:26:52 +03:00
AUTOMATIC1111 7581da5d92 Merge pull request #14645 from AUTOMATIC1111/infotexts
geninfo from Infotexts
2024-01-20 10:26:03 +03:00
AUTOMATIC1111 41c2121e51 Merge pull request #14657 from AUTOMATIC1111/callback-postprocess_image_after_composite
callback postprocess_image_after_composite
2024-01-20 10:25:29 +03:00
AUTOMATIC1111 5df56b3980 Merge pull request #14626 from AUTOMATIC1111/hr-button-fix
more Hr button fix
2024-01-20 10:24:59 +03:00
AUTOMATIC1111 9cdd161160 Merge pull request #14690 from n0kovo/dev
Add support for DAT upscaler models
2024-01-20 10:15:59 +03:00
AUTOMATIC1111 7c30c5eec4 Merge pull request #14699 from light-and-ray/fix_extras_big_batch_crashes
[Bug] Fix extras big batch crashes
2024-01-20 10:12:16 +03:00
AUTOMATIC1111 5cbda43d63 Merge pull request #14638 from WebDev9000/brush-size-adjust
Adjust brush size with hotkeys
2024-01-20 09:42:10 +03:00
WebDev 30f31d23c6 Update hotkey_config.py 2024-01-19 16:39:33 -08:00
WebDev bcdcc8be7d Update hotkey_config.py 2024-01-19 16:39:07 -08:00
WebDev 4dde96109b Update zoom.js 2024-01-19 16:38:34 -08:00
Andray d31dc7a739 fix extras big batch crashes 2024-01-20 00:40:03 +04:00
n0kovo 1ddb886a80 Fix wrong options value 2024-01-19 00:48:46 +01:00
n0kovo 2e7efe47b6 Minor cleanup 2024-01-19 00:39:14 +01:00
n0kovo a97147bc8a Add support for DAT upscaler models 2024-01-19 00:10:02 +01:00
Sj-Si 69f4f148dc Fix various bugs including refresh bug. 2024-01-18 12:13:33 -05:00
Sj-Si 50e444fa1d Fix missing important style. 2024-01-18 12:13:09 -05:00
Kohaku-Blueleaf 0181c1f76b Fix nested manual cast 2024-01-19 00:14:03 +08:00
Sj-Si f25c81a744 Fix embeddings add/remove to/from prompt on click bugs. 2024-01-17 22:38:51 -05:00
w-e-w 2cf23099eb fix console total progress bar when using txt2img_upscale
add p.txt2img_upscale as indicator
2024-01-18 04:44:21 +09:00
w-e-w e1dfd452c0 parse_generation_parameters with no skip_fields 2024-01-18 02:39:42 +09:00
w-e-w 6916de5c0b parse_generation_parameters skip_fields 2024-01-18 02:39:15 +09:00
w-e-w 45a51c07e2 parse_generation_parameters with no skip_fields 2024-01-18 02:36:19 +09:00
w-e-w d224fed0ce parse_generation_parameters skip_fields 2024-01-18 02:36:19 +09:00
w-e-w 6acd8e28fc save_files info base on infotexts 2024-01-18 02:33:43 +09:00
w-e-w 0b83f4c263 reuse seed from infotexts 2024-01-18 02:33:43 +09:00
Sj-Si ccee26b065 fix bugs 2024-01-16 14:54:07 -05:00
Sj-Si 4f96267033 Finish cleanup. 2024-01-16 13:35:01 -05:00
Arturo Albacete 315e40a49c reuse variable for log file path 2024-01-16 19:11:28 +01:00
Arturo Albacete a75dfe1c0d - expand fields to include model name and hash
- write these in the CSV log file
- ensure old log files are updated w.r.t delimiter count
2024-01-16 19:03:48 +01:00
w-e-w 14b9762bca immediately stop on second interrupt
Revert "immediately stop on second interrupt"

This reverts commit ab409072a1f2a9c911a63aee98a6b42081803cdc.

immediately stop on second interrupt
2024-01-16 17:18:05 +09:00
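The resulting behavior (first press interrupts gracefully, second press stops at once) can be sketched like this; the flag names are illustrative:

```
class State:
    interrupted = False          # set by the first Interrupt press
    stopping_generation = False  # set by a second press: bail out now


state = State()


def on_interrupt_clicked():
    if state.interrupted:
        state.stopping_generation = True  # already interrupting: stop immediately
    else:
        state.interrupted = True          # finish the current work, then stop
```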
w-e-w c1e04c63b3 callback postprocess_image_after_composite 2024-01-16 14:18:20 +09:00
Sj-Si 1fdc18e6a0 Run linting 2024-01-15 18:01:13 -05:00
Sj-Si f49c220c03 Move extra network tab buttons into tree view; 2024-01-15 17:34:44 -05:00
chi2nagisa e280eb4055 fix using wrong model caused by alias 2024-01-16 03:45:19 +08:00
Sj-Si d88424ef2a fix bugs. introduce new ones. 2024-01-15 13:40:47 -05:00
Sj-Si 019f6e3c5a fix linting 2024-01-14 13:49:39 -05:00
Sj-Si 261972f929 fix search box handling. sort maybe broken not sure. need to bug test and finish cleanup. 2024-01-14 13:39:01 -05:00
WebDev 39db1d09d1 Update zoom.js 2024-01-14 01:43:23 -08:00
WebDev 92032f2da8 Update hotkey_config.py 2024-01-14 01:42:03 -08:00
w-e-w 208ccfbe7c seed info from infotexts 2024-01-14 17:57:49 +09:00
w-e-w ee9d487081 fix gallery black image issue 2024-01-14 17:57:42 +09:00
w-e-w cfb90a938e allow hr pass to return multiple images 2024-01-14 17:57:26 +09:00
w-e-w 92501d4f80 disable saving images before highres fix 2024-01-14 17:56:34 +09:00
Sj-Si 02e6963325 continue cleanup and redesign. 2024-01-13 13:16:39 -05:00
Andray b6dc307c99 fix_extension_check_for_requirements 2024-01-13 14:45:15 +04:00
WebDev 47b52d9b28 Add # to the invalid_filename_chars list 2024-01-13 02:31:26 -08:00
WebDev 541881e318 Adjust brush size with hotkeys. 2024-01-13 02:11:06 -08:00
Sj-Si 036500223d Merge changes from dev 2024-01-11 16:37:35 -05:00
Sj-Si 0726a6e12e Finish base layout. Fix bugs. Need to test for stability and clean up. 2024-01-11 15:06:57 -05:00
AUTOMATIC1111 cb5b335acd Merge pull request #14618 from akx/log-fmt-fix
Logging: set formatter correctly for fallback logger too
2024-01-11 13:49:36 +03:00
Aarni Koskela 0011640ab1 Logging: set formatter correctly for fallback logger too 2024-01-11 08:29:42 +02:00
Sj-Si 3db6938caa begin redesign of tree module. 2024-01-10 18:11:48 -05:00
AUTOMATIC1111 85bf2eb441 Merge pull request #14598 from AUTOMATIC1111/fix-txt2img_upscale
hires button, fix seeds
2024-01-09 20:05:10 +03:00
w-e-w 4d9f2c3ec8 update p.seed and p.subseed 2024-01-10 01:56:44 +09:00
AUTOMATIC1111 abb630108e Merge pull request #14589 from Sj-Si/hotfix/remove-upscaler-size-limits
Increase Upscaler Limits
2024-01-09 19:52:54 +03:00
Sj-Si 9dd2534824 Restore scale factor limit changes to master branch. 2024-01-09 11:47:39 -05:00
Sj-Si 3cc7572f5d Restore scale factor limit changes to master branch. 2024-01-09 11:46:10 -05:00
AUTOMATIC1111 751d014cd6 Merge pull request #14583 from continue-revolution/conrevo/lcm-sampler
Official LCM Sampler Support
2024-01-09 19:37:35 +03:00
AUTOMATIC1111 639f22ea7c Merge pull request #14593 from papuSpartan/api_tls
Allow TLS with API only mode (--nowebui)
2024-01-09 19:33:31 +03:00
AUTOMATIC1111 905b14237f Merge pull request #14597 from AUTOMATIC1111/improved-manual-cast
Improve the implementation of Manual Cast and IPEX support
2024-01-09 19:33:00 +03:00
Kohaku-Blueleaf ca671e5d7b rearrange if-statements for cpu 2024-01-09 23:30:55 +08:00
Kohaku-Blueleaf 58d5b042cd Apply the correct behavior of precision='full' 2024-01-09 23:23:40 +08:00
Kohaku-Blueleaf 1fd69655fe Revert "Apply correct inference precision implementation"
This reverts commit e00365962b.
2024-01-09 23:15:05 +08:00
Kohaku-Blueleaf e00365962b Apply correct inference precision implementation 2024-01-09 23:13:34 +08:00
Kohaku-Blueleaf c2c05fcca8 linting and debugs 2024-01-09 22:53:58 +08:00
KohakuBlueleaf 42e6df723c Fix bugs when arg dtype doesn't match 2024-01-09 22:39:39 +08:00
Kohaku-Blueleaf 209c26a1cb improve efficiency and support more device 2024-01-09 22:11:44 +08:00
unknown 8d986727b3 include tls arguments in api uvicorn init 2024-01-09 03:01:20 -06:00
Sj-Si 46413b20a4 Increase limits for upscalers. 2024-01-08 16:23:35 -05:00
Sj-Si 34fc215249 fix linting 2024-01-08 14:23:01 -05:00
Sj-Si 67a70ad112 fix indentation 2024-01-08 14:11:24 -05:00
Sj-Si df8aa69a99 Add tree-view display for extra networks. 2024-01-08 14:10:03 -05:00
continue-revolution 8e292373ec lcm sampler 2024-01-08 06:43:39 -06:00
AUTOMATIC1111 6869d95890 Merge pull request #14573 from continue-revolution/conrevo/add-p-to-cfg
Add self to CFGDenoiserParams
2024-01-08 10:23:08 +03:00
Chengsong Zhang 37906e429a make denoiser None by default 2024-01-07 20:17:42 -06:00
continue-revolution f56cebf5ba add self instead 2024-01-07 12:35:35 -06:00
continue-revolution 425507bd10 add p to cfgdenoiserparams 2024-01-07 10:25:01 -06:00
AUTOMATIC1111 2f98a35fc4 add assets repo; serve fonts locally rather than from google's servers 2024-01-07 09:21:21 +03:00
AUTOMATIC1111 30aa5e0a7c Merge pull request #14560 from Nuullll/condfunc-warning
Handle CondFunc exception when resolving attributes
2024-01-07 08:23:08 +03:00
AUTOMATIC1111 cab1d839b5 Merge pull request #14563 from Nuullll/model-loaded-callback
Execute model_loaded_callback after moving to target device
2024-01-07 08:22:17 +03:00
AUTOMATIC1111 71e0057137 Merge pull request #14562 from Nuullll/fix-ipex-xpu-generator
[IPEX] Fix xpu generator
2024-01-07 08:21:43 +03:00
Nuullll a183de04e3 Execute model_loaded_callback after moving to target device 2024-01-06 20:03:33 +08:00
Nuullll 818d6a11e7 Fix format 2024-01-06 19:14:06 +08:00
Nuullll 73786c047f [IPEX] Fix torch.Generator hijack 2024-01-06 19:09:56 +08:00
AUTOMATIC1111 b00b429477 Merge pull request #14559 from Nuullll/ipex-sdpa-fix
[IPEX] Fix SDPA attn_mask dtype
2024-01-06 13:14:18 +03:00
Nuullll ec9acb3145 Handle CondFunc exception when resolving attributes 2024-01-06 17:18:38 +08:00
Nuullll 16b4d2cf3f [IPEX] Fix SDPA attn_mask dtype 2024-01-06 16:32:18 +08:00
AUTOMATIC1111 8b6848c6db Merge pull request #14546 from AUTOMATIC1111/fix-oft-dtype
Fix dtype casting in OFT module
2024-01-06 10:50:38 +03:00
AUTOMATIC1111 a4ee64050a Merge pull request #14547 from AUTOMATIC1111/lyco-forward
Implement a general forward method for all methods in the built-in lora ext
2024-01-06 10:50:06 +03:00
AUTOMATIC1111 942617f828 Merge pull request #14548 from keshav-nischal/patch-2
Update README.md
2024-01-06 10:49:45 +03:00
AUTOMATIC1111 233c66b36e Make the upscale button update the gallery with the new image rather than replace it. 2024-01-05 12:28:41 +03:00
Keshav Nischal 88ba095fd0 Update README.md 2024-01-05 14:15:58 +05:30
Kohaku-Blueleaf 44744d6005 linting 2024-01-05 16:38:05 +08:00
Kohaku-Blueleaf 18ca987c92 Add general forward method for all modules. 2024-01-05 16:32:19 +08:00
Kohaku-Blueleaf f8f38c7c28 Fix dtype casting for OFT module 2024-01-05 16:31:48 +08:00
AUTOMATIC1111 a06dab8d7a Merge pull request #14538 from akx/log-wut
Fix logging configuration again
2024-01-05 11:04:14 +03:00
AUTOMATIC1111 6ffbff0857 Merge pull request #14537 from akx/gradio-analytics-enabled-again
Ensure GRADIO_ANALYTICS_ENABLED is set early enough
2024-01-05 11:02:50 +03:00
Aarni Koskela 6fa42e919f Fix logging configuration again
* Only use `tqdm.write()` if `tqdm` is active; otherwise defer to stderr
* Correct log formatter for TqdmLoggingHandler
* If `rich` is installed and `SD_WEBUI_RICH_LOG` is set, use `rich`'s formatter
2024-01-04 19:32:03 +02:00
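A condensed sketch of the first bullet, with illustrative internals (note that `tqdm._instances` is a private attribute, used here only to show the idea):

```
import logging
import sys

from tqdm import tqdm


class TqdmLoggingHandler(logging.Handler):
    def emit(self, record):
        msg = self.format(record)
        # route through tqdm.write() only while a progress bar is alive,
        # so log lines don't tear an active bar; otherwise go to stderr
        if getattr(tqdm, "_instances", None):
            tqdm.write(msg)
        else:
            sys.stderr.write(msg + "\n")
```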
Aarni Koskela 9805f35c6f Ensure GRADIO_ANALYTICS_ENABLED is set early enough 2024-01-04 19:13:47 +02:00
AUTOMATIC1111 15ec54dd96 Have upscale button use the same seed as hires fix. 2024-01-04 19:47:00 +03:00
AUTOMATIC1111 f903b4dda3 Merge pull request #14523 from AUTOMATIC1111/paste-infotext-cast-int-as-float
paste infotext cast int as float
2024-01-04 11:19:18 +03:00
AUTOMATIC1111 3f7f61e541 Merge pull request #14524 from akx/fix-swinir-issues
Fix SwinIR issues
2024-01-04 11:17:20 +03:00
AUTOMATIC1111 1e7a8ce5e4 Merge pull request #14525 from AUTOMATIC1111/handle-config.json-failed-to-load
handle config.json failed to load
2024-01-04 11:16:37 +03:00
AUTOMATIC1111 397251ba0c Merge pull request #14527 from akx/avoid-isfiles
Avoid unnecessary `isfile`/`exists` calls
2024-01-04 11:15:56 +03:00
AUTOMATIC1111 df62ffbd25 Merge branch 'dev' into avoid-isfiles 2024-01-04 11:15:50 +03:00
AUTOMATIC1111 149c9d2234 Merge pull request #14528 from AUTOMATIC1111/mass-file-lister
mass file lister as an attempt to tackle #14507
2024-01-04 11:09:59 +03:00
AUTOMATIC1111 320a217b78 forgot something 2024-01-04 02:39:02 +03:00
AUTOMATIC1111 420f56c2e8 mass file lister as an attempt to tackle #14507 2024-01-04 02:28:05 +03:00
Aarni Koskela d9034b48a5 Avoid unnecessary isfile/exists calls 2024-01-04 00:26:30 +02:00
w-e-w 50158a1fc9 handle config.json failed to load 2024-01-04 06:30:52 +09:00
Aarni Koskela 62470ee234 upscale_2: cast image to model's dtype 2024-01-03 22:39:12 +02:00
Aarni Koskela 3d31d5c27b SwinIR: pass model.scale 2024-01-03 22:38:49 +02:00
Aarni Koskela dfdc51246c SwinIR: use prefer_half 2024-01-03 22:38:13 +02:00
w-e-w bfc48fbc24 paste infotext cast int as float 2024-01-04 03:46:05 +09:00
AUTOMATIC1111 04a005f0e9 Merge pull request #14512 from AUTOMATIC1111/remove-excessive-extra-networks-reload
reduce unnecessary re-indexing extra networks directory
2024-01-03 19:15:46 +03:00
w-e-w fccd0b00c2 reduce unnecessary re-indexing extra networks dir 2024-01-03 19:25:06 +09:00
AUTOMATIC1111 9c6ea5386b Merge pull request #14504 from akx/you-spin-me-round
torch_bgr_to_pil_image: round, don't truncate
2024-01-03 12:42:05 +03:00
Aarni Koskela 7ad6899bf9 torch_bgr_to_pil_image: round, don't truncate
This matches what `realesrgan` does.
2024-01-02 17:14:05 +02:00
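The difference is easy to demonstrate: casting float pixel values straight to uint8 truncates toward zero, while rounding first matches what `realesrgan` does:

```
import numpy as np

x = np.array([254.6, 127.5, 0.4], dtype=np.float32)
print(x.astype(np.uint8))           # [254 127   0]  truncation darkens the image
print(np.rint(x).astype(np.uint8))  # [255 128   0]  rounding
```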
AUTOMATIC1111 e4dcdcc955 Merge pull request #14501 from akx/credits-remove-copypaste
Remove licenses and README mentions for code that's no longer copy-pasted
2024-01-02 16:25:26 +03:00
Aarni Koskela 62bd7624d2 Remove licenses for code that's no longer copy-pasted; adjust README 2024-01-02 11:46:42 +02:00
AUTOMATIC1111 7c3ab416ad Merge pull request #14500 from akx/spandrel-prefer-half
Spandrel: "prefer half" instead of "force half"
2024-01-02 12:23:23 +03:00
Aarni Koskela 2cacbc124c load_spandrel_model: make half prefer_half
As discussed with the Spandrel folks, it's good to heed Spandrel's
"supports half precision" flag to avoid e.g. black blotches and what-not.
2024-01-02 10:44:38 +02:00
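In effect, fp16 becomes conditional on the architecture's capability flag; a minimal sketch, assuming Spandrel-style descriptors exposing `supports_half`:

```
def to_preferred_dtype(descriptor, prefer_half, device):
    model = descriptor.model.to(device)
    # go half only when the caller prefers it AND the architecture declares
    # it safe; forcing fp16 on unsafe models produces black blotches
    if prefer_half and getattr(descriptor, "supports_half", False):
        model = model.half()
    return model
```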
AUTOMATIC1111 51f1cca852 Merge pull request #14484 from akx/swinir-resample-for-div8
Refactor Torch-space upscale fully out of ScuNET/SwinIR
2024-01-02 10:56:37 +03:00
Aarni Koskela cf14a6a7aa Refactor upscale_2 helper out of ScuNET/SwinIR; make sure devices are right 2024-01-02 08:57:12 +02:00
AUTOMATIC1111 980970d390 final touches 2024-01-02 07:08:32 +03:00
AUTOMATIC1111 80873b1538 fix #14497 2024-01-02 07:05:05 +03:00
AUTOMATIC1111 1341b22081 add an option to hide upscaling progressbar 2024-01-02 06:47:26 +03:00
AUTOMATIC1111 6f9fcfdbb7 Merge pull request #14497 from Jibaku789/dev
Add inpaint arguments in .txt file
2024-01-02 06:42:21 +03:00
Jibaku789 a5b6a5a3ad Add inpaint options to img2img.py 2024-01-01 14:58:55 -06:00
Jibaku789 c2ea571005 Add inpaint options to paste fields 2024-01-01 14:57:41 -06:00
AUTOMATIC1111 ac3cc1adc5 Merge pull request #14495 from akx/fix-js-lint
Fix lint issue from 501993eb
2024-01-01 21:02:47 +03:00
Aarni Koskela c32c51a0fc Fix lint issue from 501993eb 2024-01-01 19:20:54 +02:00
AUTOMATIC1111 501993ebf2 added a button to run hires fix on selected image in the gallery 2024-01-01 19:31:06 +03:00
AUTOMATIC1111 5d7d1823af rename infotext.py again, this time to infotext_utils.py; I didn't realize infotext would be used for variable names in multiple places, which makes it awkward to import the module; also fix the bug I caused by this rename that breaks tests 2024-01-01 17:25:30 +03:00
AUTOMATIC1111 1ffdedc11d restore lines lost from #13789 merge 2024-01-01 17:03:08 +03:00
AUTOMATIC1111 c507d7b252 Merge pull request #13789 from nickpharrison/finer-settings-freezing-control
Finer settings freezing control
2024-01-01 17:01:28 +03:00
AUTOMATIC1111 7ba02e0b7c Merge branch 'dev' into finer-settings-freezing-control 2024-01-01 17:01:06 +03:00
AUTOMATIC1111 15156cde18 Merge pull request #14291 from AUTOMATIC1111/on-mouse-hover-show-hide-modal-image-viewer-icons
on mouse hover show / hide modal image viewer icons
2024-01-01 16:53:33 +03:00
AUTOMATIC1111 0aa7c53c0b fix borked merge, rename fields to better match what they do, change setting default to true for #13653 2024-01-01 16:50:59 +03:00
AUTOMATIC1111 2a7ad70db5 Merge pull request #13653 from antfu/feat/interrupted-end
Interrupt after current generation
2024-01-01 16:40:02 +03:00
AUTOMATIC1111 dfd6438221 Merge branch 'dev' into feat/interrupted-end 2024-01-01 16:39:51 +03:00
AUTOMATIC1111 0ce67cb618 Merge pull request #14352 from AUTOMATIC1111/reduce-unnecessary-ui-config-write
only rewrite ui-config when there is a change
2024-01-01 16:35:07 +03:00
AUTOMATIC1111 cba6fba123 Merge pull request #14353 from Nuullll/ipex-sdpa
[IPEX] Slice SDPA into smaller chunks
2024-01-01 16:33:55 +03:00
AUTOMATIC1111 ac0ecf3b4b option to convert VAE to bfloat16 (implementation of #9295) 2024-01-01 16:28:58 +03:00
AUTOMATIC1111 0743ee9b3e re-layout checkboxes for XYZ grid a bit 2024-01-01 15:50:47 +03:00
AUTOMATIC1111 c352008c95 Merge remote-tracking branch 'rubberbaron/xyz-grid-vary-seeds' into dev 2024-01-01 15:50:10 +03:00
AUTOMATIC1111 d8126be578 linter 2024-01-01 15:00:39 +03:00
AUTOMATIC1111 45b7bba3d0 add automatic version support for zero terminal SNR noise schedule option from #14145 2024-01-01 14:51:56 +03:00
AUTOMATIC1111 267fd5d76b Merge pull request #14145 from drhead/zero-terminal-snr
Implement zero terminal SNR noise schedule option
2024-01-01 14:45:12 +03:00
AUTOMATIC1111 d613cd17c7 add automatic backwards version compatibility 2024-01-01 14:38:29 +03:00
AUTOMATIC1111 d859cec696 infotext.py: rename usages in the codebase 2024-01-01 13:53:12 +03:00
AUTOMATIC1111 c5496c7646 infotext.py: add support for old modules.generation_parameters_copypaste name 2024-01-01 13:52:37 +03:00
AUTOMATIC1111 003b91f083 rename generation_parameters_copypaste module to infotext 2024-01-01 13:45:18 +03:00
AUTOMATIC1111 5692bf1517 add missing field for DDIM sampler that was breaking img2img 2024-01-01 11:11:14 +03:00
AUTOMATIC1111 e55fec9d9a Merge pull request #14487 from AUTOMATIC1111/handle-selectable-script_index-is-None
handle selectable script_index is None
2024-01-01 10:19:58 +03:00
w-e-w 00901bfbe0 handle selectable script_index is None 2024-01-01 15:47:57 +09:00
AUTOMATIC1111 a70dfb64a8 change import statements for #14478 2023-12-31 22:38:30 +03:00
AUTOMATIC1111 be5f1acc8f Merge pull request #14478 from akx/dtype-inspect
Add utility to inspect a model's dtype/device
2023-12-31 22:33:32 +03:00
AUTOMATIC1111 f3af8c8d04 Merge pull request #14475 from Learwin/negative_prompt
Adding negative prompts to Loras in extra networks
2023-12-31 22:32:28 +03:00
Learwin b6f74e936e Revert change from linting for unrelated file 2023-12-31 13:36:36 +01:00
Learwin d4945f4422 Removed weight slider for negative prompts 2023-12-31 13:22:30 +01:00
Aarni Koskela 5768afc776 Add utility to inspect a model's parameters (to get dtype/device) 2023-12-31 13:22:43 +02:00
AUTOMATIC1111 a84e842189 Merge pull request #14476 from akx/dedupe-tiled-weighted-inference
Deduplicate tiled inference code from SwinIR/ScuNET
2023-12-31 09:41:49 +03:00
Aarni Koskela 6f86b62a1b Deduplicate tiled inference code from SwinIR/ScuNET 2023-12-31 01:13:30 +02:00
AUTOMATIC1111 ce21840a04 Merge pull request #14477 from akx/spandrel-type-fix
Be more clear about Spandrel model nomenclature and types
2023-12-31 01:38:43 +03:00
AUTOMATIC1111 ae124439c4 Merge pull request #14471 from akx/bump-numpy
Bump numpy to 1.26.2
2023-12-31 01:37:56 +03:00
Aarni Koskela 777af661a2 Be more clear about Spandrel model nomenclature 2023-12-31 00:22:58 +02:00
Aarni Koskela c0ca6348e8 load_spandrel_model: always return a model descriptor 2023-12-31 00:04:47 +02:00
AUTOMATIC1111 3be9074031 fix for the previous fix. 2023-12-31 00:43:41 +03:00
Learwin a2f23f9d22 Code Style fixes 2023-12-30 22:16:51 +01:00
Learwin bc5ae74c7d Added negative prompts to extra networks lora 2023-12-30 21:52:27 +01:00
AUTOMATIC1111 8100e901ab fix error with RealESRGAN model failing to upscale fp32 image 2023-12-30 22:41:53 +03:00
AUTOMATIC1111 c2fd7c0344 Merge pull request #14474 from akx/realesrgan-is-esrgan
Correct RealESRGAN expected architecture type to ESRGAN
2023-12-30 22:40:11 +03:00
AUTOMATIC1111 7c13ffdbb1 Merge pull request #14472 from akx/drop-move-code
Remove `cleanup_models` code
2023-12-30 22:15:30 +03:00
AUTOMATIC1111 a86f4411cb Merge pull request #14473 from akx/soften-model-arch-check
Soften Spandrel model-architecture check to just a warning
2023-12-30 22:13:29 +03:00
Aarni Koskela 393a5b82ba Correct RealESRGAN expected architecture type to ESRGAN 2023-12-30 21:12:32 +02:00
Aarni Koskela af050dcaa7 Soften Spandrel model-architecture check to just a warning 2023-12-30 21:05:59 +02:00
Aarni Koskela 5fbb13e0da Remove cleanup_models code 2023-12-30 20:47:12 +02:00
AUTOMATIC1111 16848f950b Merge pull request #14467 from akx/drop-basicsr
Drop basicsr dependency
2023-12-30 21:27:33 +03:00
Aarni Koskela 48a2a1a437 Don't wait for 10 minutes for test server to come up 2023-12-30 19:44:38 +02:00
Aarni Koskela 1465dab715 Make Tensorboard a late import (it was implicitly installed by basicsr) 2023-12-30 19:44:05 +02:00
AUTOMATIC1111 79c9151802 Merge pull request #14421 from lanyeeee/api_thread_safe
fix API thread safe issues of txt2img and img2img
2023-12-30 20:21:13 +03:00
lanyeeee f651405427 remove locks, move init code to __init__ 2023-12-31 01:09:13 +08:00
Aarni Koskela b58ed1b243 Bump numpy to 1.26.2
This avoids it being downgraded during `launch.py`
2023-12-30 18:02:01 +02:00
Aarni Koskela c9174253fb Drop dependency on basicsr 2023-12-30 17:53:19 +02:00
lanyeeee 91560e98c4 fix format issue 2023-12-30 23:42:10 +08:00
Aarni Koskela f476649c02 Correct arg type for restore_face 2023-12-30 17:41:29 +02:00
AUTOMATIC1111 cd12c0e15c Merge pull request #14425 from akx/spandrel
Use Spandrel for upscaling and face restoration architectures
2023-12-30 18:06:31 +03:00
AUTOMATIC1111 05230c0260 fix img2img api that i broke when implementing infotext support 2023-12-30 18:02:51 +03:00
Aarni Koskela 4ad0c0c0a8 Verify architecture for loaded Spandrel models 2023-12-30 16:37:03 +02:00
Aarni Koskela c756133541 Add experimental HAT model 2023-12-30 16:30:49 +02:00
Aarni Koskela b621a63cf6 Unify CodeFormer and GFPGAN restoration backends, use Spandrel for GFPGAN 2023-12-30 16:30:49 +02:00
Aarni Koskela b0f5934234 Use Spandrel for upscaling and face restoration architectures (aside from GFPGAN and LDSR) 2023-12-30 16:24:01 +02:00
Aarni Koskela e472383acb Refactor esrgan_upscale to more generic upscale_with_model 2023-12-30 16:24:01 +02:00
Aarni Koskela 12c6f37f8e Add tile_count property to Grid 2023-12-30 16:24:01 +02:00
Aarni Koskela 7aa27b000a Add types to split_grid 2023-12-30 16:24:01 +02:00
AUTOMATIC1111 31992eff9b make it possible again to extract styles that have whitespace at the end. 2023-12-30 16:51:13 +03:00
kurisu_u d05f9e8124 Merge branch 'dev' into api_thread_safe 2023-12-30 21:47:59 +08:00
lanyeeee c069c2c562 add locks to ensure init args are thread-safe 2023-12-30 21:32:22 +08:00
AUTOMATIC1111 adcd65ba34 Merge pull request #14367 from AUTOMATIC1111/reorder-post-processing-modules
reorder training preprocessing modules in extras tab
2023-12-30 15:23:52 +03:00
AUTOMATIC1111 f0e2e8b930 update #14354 2023-12-30 15:12:48 +03:00
AUTOMATIC1111 83c0758d90 Merge pull request #14354 from ranareehanaslam/master
Add (fix) IPv6 functionality in webui.py when no webui argument is passed
2023-12-30 15:11:49 +03:00
AUTOMATIC1111 1d603eb5a8 Merge pull request #14394 from AUTOMATIC1111/minor-xyz-fix
xyz grid handle axis_type is None
2023-12-30 15:06:39 +03:00
AUTOMATIC1111 4b6eb8072b Merge pull request #14407 from AUTOMATIC1111/prevent-crash-due-to-Script-__init__-exception
prevent crash due to Script __init__ exception
2023-12-30 14:54:31 +03:00
AUTOMATIC1111 908fb4ea71 Merge pull request #14390 from wangqyqq/sdxl-inpaint
Supporting for SDXL-Inpaint Model
2023-12-30 14:49:52 +03:00
AUTOMATIC1111 c9c105c7db Merge pull request #14446 from AUTOMATIC1111/base-output-path-off-data_path
Base output path off data path
2023-12-30 14:45:28 +03:00
AUTOMATIC1111 a79890efd6 Merge pull request #14452 from AUTOMATIC1111/save-info-of-init-image
save info of init image
2023-12-30 14:41:39 +03:00
AUTOMATIC1111 32862e4379 Merge pull request #14464 from AUTOMATIC1111/more-lora-not-found-warning
More lora not found warning
2023-12-30 14:04:17 +03:00
AUTOMATIC1111 8f18263759 fix bad values read from infotext for API, add comment 2023-12-30 13:48:25 +03:00
AUTOMATIC1111 11a435b469 img2img support for infotext API 2023-12-30 13:34:46 +03:00
AUTOMATIC1111 0aacd4c72b add support for alwayson scripts for infotext API 2023-12-30 13:33:18 +03:00
AUTOMATIC1111 8b08b78c03 make it so that if an option from infotext conflicts with an argument from API, the latter overrides the former 2023-12-30 12:27:23 +03:00
AUTOMATIC1111 ba92135a2b add override_settings support for infotext API 2023-12-30 12:11:09 +03:00
w-e-w 59d060fd5e More lora not found warning 2023-12-30 17:11:03 +09:00
AUTOMATIC1111 bb07cb6a0d a 2023-12-30 10:42:42 +03:00
w-e-w dc57ec0296 save info of init image 2023-12-29 01:56:48 +09:00
w-e-w 892e703b59 webpath use truncate_path 2023-12-28 06:52:41 +09:00
w-e-w af2951ed53 base default image output on data_path
Co-Authored-By: Alberto Cano <34340962+canoalberto@users.noreply.github.com>
2023-12-28 06:52:33 +09:00
w-e-w de04573438 create utility truncate_path
util.truncate_path(target_path, base_path)
returns target_path relative to base_path if target_path is a sub-path of base_path, otherwise returns the absolute path
2023-12-28 06:22:51 +09:00
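That contract maps cleanly onto the standard library; a sketch (the real helper is in the webui's util module):

```
import os


def truncate_path(target_path, base_path):
    abs_target = os.path.abspath(target_path)
    abs_base = os.path.abspath(base_path)
    try:
        if os.path.commonpath([abs_target, abs_base]) == abs_base:
            return os.path.relpath(abs_target, abs_base)  # sub-path: make relative
    except ValueError:
        pass  # e.g. paths on different drives on Windows
    return abs_target  # not a sub-path: keep the absolute path
```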
wangqyqq bfe418a58d add some code for robustness 2023-12-27 10:20:56 +08:00
lanyeeee 00d4a4d4ac move thread-unsafe code to __init__ 2023-12-26 14:46:29 +08:00
w-e-w edfae95d90 prevent crash due to Script __init__ exception 2023-12-23 01:21:00 +09:00
w-e-w de1809bd14 handle axis_type is None 2023-12-22 00:37:30 +09:00
wangqyqq 9feb034e34 support for sdxl-inpaint model 2023-12-21 20:15:51 +08:00
w-e-w 3e068de0dc reorder training preprocessing modules in extras tab
using the order from before the rework
11d23e8ca5
2023-12-19 18:48:49 +09:00
Muhammad Rehan Aslam 0d5941edbc Update webui.py
Co-authored-by: Aarni Koskela <akx@iki.fi>
2023-12-19 09:50:38 +05:00
Muhammad Rehan Aslam fe4d084390 Update webui.py
Added (fixed) IPv6 functionality when no webui argument is passed
2023-12-18 17:50:00 +05:00
Nuullll f586f4973a Fix device id 2023-12-18 19:44:52 +08:00
Nuullll e4b4a9c4ac [IPEX] Slice SDPA into smaller chunks 2023-12-18 18:01:09 +08:00
w-e-w 10945aa41a only rewrite ui-config when there is a change
also fix a typo
2023-12-18 15:27:41 +09:00
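The change boils down to a read-compare-write guard; a minimal sketch with an assumed file name:

```
import json
import os


def save_ui_config(settings, path="ui-config.json"):
    new_text = json.dumps(settings, indent=4)
    if os.path.exists(path):
        with open(path, encoding="utf8") as file:
            if file.read() == new_text:
                return  # identical content: skip the write
    with open(path, "w", encoding="utf8") as file:
        file.write(new_text)
```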
AUTOMATIC1111 de03882d6c make task ids for API work without force_task_id 2023-12-17 08:55:35 +03:00
AUTOMATIC1111 3d9a0d9e4b Merge pull request #14330 from AUTOMATIC1111/fix-extras-caption-BLIP
fix extras caption BLIP
2023-12-16 16:41:40 +03:00
w-e-w 98c5fa9201 fix extras caption BLIP
#14328
2023-12-16 22:14:39 +09:00
AUTOMATIC1111 7428ce52ab Merge pull request #14327 from AUTOMATIC1111/fp8-cond-cache-fix
Fix FP8 non-reproducible problem
2023-12-16 14:59:35 +03:00
Kohaku-Blueleaf a978320334 Let fp8-related settings invalidate cond_cache 2023-12-16 19:39:43 +08:00
AUTOMATIC1111 4f5281a92e Merge pull request #14227 from kingljl/kingljl-patch-memory-leak
Long running memory leak problem
2023-12-16 11:24:07 +03:00
AUTOMATIC1111 86b3aa94e2 rename pending tasks api endpoint to be more in line with others 2023-12-16 11:04:59 +03:00
AUTOMATIC1111 5b7d86d42b Merge pull request #14314 from gayshub/master
Allow specifying the task id and getting a task's position in the pending-task queue
2023-12-16 11:01:42 +03:00
AUTOMATIC1111 93eae69895 move soft inpainting to a built-in extension 2023-12-16 11:00:42 +03:00
AUTOMATIC1111 cd9ce2e31c Use radio for FP8 mode selection 2023-12-16 10:40:20 +03:00
AUTOMATIC1111 c121f8c315 Merge pull request #14031 from AUTOMATIC1111/test-fp8
A big improvement for the dtype casting system, with fp8 storage type and manual cast
2023-12-16 10:22:51 +03:00
AUTOMATIC1111 8edb9144cc Merge branch 'dev' into test-fp8 2023-12-16 10:22:16 +03:00
AUTOMATIC1111 60186c7b9d Merge pull request #14107 from AUTOMATIC1111/torch210
Torch210
2023-12-16 10:15:55 +03:00
AUTOMATIC1111 7745db6fc0 torch 2.1.2 2023-12-16 10:15:08 +03:00
Kohaku-Blueleaf ea272152e0 Add FP8 settings into PNG info 2023-12-16 15:08:08 +08:00
AUTOMATIC1111 e9c6325fc6 Merge branch 'dev' into torch210 2023-12-16 10:05:10 +03:00
AUTOMATIC1111 7504f14503 Merge branch 'master' into dev 2023-12-16 09:59:47 +03:00
AUTOMATIC1111 c16fcb7f46 Merge pull request #14307 from AUTOMATIC1111/default-Falst-js_live_preview_in_modal_lightbox
default False js_live_preview_in_modal_lightbox
2023-12-16 09:25:08 +03:00
gayshub 6d7e57ba6a fix ruff lint errors flagged by GitHub CI 2023-12-15 18:03:14 +08:00
gayshub da45e73b4f fix ruff lint errors flagged by GitHub CI 2023-12-15 17:57:58 +08:00
gayshub d859de37d9 fix ruff lint errors flagged by GitHub CI 2023-12-15 17:48:20 +08:00
gayshub 1242ba08e1 allow specifying the task id and getting a task's position in the pending-task queue 2023-12-15 16:57:17 +08:00
w-e-w 0c5427960b make modal toolbar and icon opacity adjustable 2023-12-15 17:11:59 +09:00
w-e-w 3c0c277579 default False js_live_preview_in_modal_lightbox 2023-12-15 00:48:37 +09:00
Kohaku-Blueleaf 0fb34b57b8 Merge branch 'dev' into test-fp8 2023-12-14 16:54:45 +08:00
AUTOMATIC1111 aeaf1c510f Merge pull request #14293 from HinaHyugaHime/master
Bump torch-rocm to 5.6/5.7
2023-12-14 10:10:58 +03:00
AUTOMATIC1111 097140ac1a Merge branch 'dev' into master 2023-12-14 10:10:43 +03:00
AUTOMATIC1111 778a30a95e Merge pull request #14237 from ReneKroon/dev
#13354 : solve lora loading issue
2023-12-14 10:08:03 +03:00
AUTOMATIC1111 96c393a7a7 Merge pull request #14269 from kaalibro/skip-interrupt-keyb-shortcuts
Add keyboard shortcuts for generate/skip/interrupt
2023-12-14 10:04:17 +03:00
AUTOMATIC1111 09013b357c Merge pull request #14216 from wfjsw/state-dict-ref-comparison
change state dict comparison to ref compare
2023-12-14 10:03:14 +03:00
AUTOMATIC1111 d45f790f58 Merge pull request #14230 from AUTOMATIC1111/add-option-Live-preview-in-full-page-image-viewer
add option: Live preview in full page image viewer
2023-12-14 09:59:48 +03:00
AUTOMATIC1111 8c32594d3b Merge pull request #14208 from CodeHatchling/soft-inpainting
Soft Inpainting
2023-12-14 09:56:12 +03:00
AUTOMATIC1111 f3cc5f8382 Merge pull request #14229 from Nuullll/ipex-embedding
[IPEX] Fix embedding and ControlNet
2023-12-14 09:52:23 +03:00
AUTOMATIC1111 28bafffdc2 Merge pull request #14266 from kaalibro/dev
Re-add setting lost as part of e294e46
2023-12-14 09:48:36 +03:00
AUTOMATIC1111 5db09d1865 Merge pull request #14276 from AUTOMATIC1111/fix-styles
Fix styles
2023-12-14 09:48:14 +03:00
AUTOMATIC1111 c5631aa90d Merge pull request #14270 from kaalibro/extra-options-elem-id
Assign id for "extra_options". Replace numeric field with slider.
2023-12-14 09:46:05 +03:00
AUTOMATIC1111 206de1a6b0 Merge pull request #14296 from akx/paste-resolution
Allow pasting in WIDTHxHEIGHT strings into the width/height fields
2023-12-14 09:41:18 +03:00
AUTOMATIC1111 b943eebb1d Merge pull request #14300 from AUTOMATIC1111/oft_fixes
Fix wrong implementation in network_oft
2023-12-14 09:39:57 +03:00
Kohaku-Blueleaf 3772a82a70 better naming and correct order for device. 2023-12-14 01:47:13 +08:00
Kohaku-Blueleaf 8fc67f3851 remove debug print 2023-12-14 01:44:49 +08:00
Kohaku-Blueleaf 265bc26c21 Use self.scale instead of custom finalize 2023-12-14 01:43:24 +08:00
Kohaku-Blueleaf 735c9e8059 Fix network_oft 2023-12-14 01:38:32 +08:00
Aarni Koskela 89cfbc3bbe Allow pasting in WIDTHxHEIGHT strings into the width/height fields 2023-12-13 12:22:13 +02:00
Hina bda86f0fd9 Update webui.sh 2023-12-12 19:39:14 -06:00
w-e-w cc41cc4349 on mouse hover show / hide modal image viewer icons 2023-12-13 02:06:56 +09:00
kaalibro 6513470f0d Remove unnecessary 'else', add 'lightboxModal' check 2023-12-11 18:06:08 +06:00
kaalibro cee1a40651 Fix linter issues 2023-12-10 17:06:12 +06:00
kaalibro 1d42babd32 Replace Ctrl+Alt+Enter with Esc 2023-12-10 16:28:56 +06:00
kaalibro 6b8143a84e Number of columns slider: max count set to 20, add description info 2023-12-10 15:35:06 +06:00
w-e-w 8b74389e76 fix styles.csv filename 2023-12-10 15:48:16 +09:00
w-e-w 23a0e60b9b fix save styles 2023-12-10 15:48:00 +09:00
drhead 5381405eaa re-derive sqrt alpha bar and sqrt one minus alpha bar
This is the only place these values are ever referenced outside of training code, so this change is well justified and more consistent.
2023-12-09 14:09:28 -05:00
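In torch terms the re-derivation is just two square roots over the existing schedule, roughly:

```
import torch

# placeholder schedule; the real alphas_cumprod comes from the loaded model
alphas_cumprod = torch.linspace(0.9991, 0.0005, 1000)

sqrt_alphas_cumprod = torch.sqrt(alphas_cumprod)
sqrt_one_minus_alphas_cumprod = torch.sqrt(1.0 - alphas_cumprod)
```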
kaalibro 1a79a5049b Assign id for "extra_options". Replace numeric field with slider in Settings. 2023-12-09 22:35:31 +06:00
kaalibro 9c201550dd Add keyboard shortcuts for generation
(Removed Alt+Enter) Ctrl+Enter to start/restart generation
(New) Alt/Option+Enter to skip generation
(New) Ctrl+Alt/Option+Enter to interrupt generation
2023-12-09 21:04:45 +06:00
kaalibro 39ec4cfea9 Re-add setting lost as part of e294e46 2023-12-09 19:12:59 +06:00
Nuullll 049d5642e5 Fix format 2023-12-09 18:11:26 +08:00
Nuullll 5942979344 Fix ControlNet 2023-12-09 18:09:45 +08:00
CodeHatchling f1ff932caf Formatted soft_inpainting. 2023-12-08 17:33:11 -07:00
CodeHatchling b2414476ef soft_inpainting now appears in the "inpaint" section, and will not activate unless inpainting is activated. 2023-12-08 17:32:41 -07:00
Rene Kroon 16bdcce92d #13354: solve lora loading issue 2023-12-08 21:20:55 +01:00
CodeHatchling 659f62e120 Fixed grammar error. 2023-12-07 21:39:54 -07:00
CodeHatchling fc3e246c0f Fixed complaint about whitespace, updated help section for a parameter. 2023-12-07 20:28:38 -07:00
CodeHatchling f284ae23bc Added parameters for the composite stage, fixed batched generation. 2023-12-07 20:19:35 -07:00
CodeHatchling 0ef4a4cb23 Fixed error that occurs when using vanilla samplers (somehow). 2023-12-07 14:54:26 -07:00
CodeHatchling 56604f08a1 Moved image filters used by soft inpainting into soft_inpainting.py from images.py 2023-12-07 14:53:44 -07:00
CodeHatchling 8dbacc7d01 Fixed "No newline at end of file". 2023-12-07 14:30:30 -07:00
CodeHatchling 2abc417834 Re-implemented soft inpainting via a script. Also fixed some mistakes with the previous hooks, removed unnecessary formatting changes, and removed code that I had forgotten to remove. 2023-12-07 14:28:02 -07:00
Kohaku-Blueleaf 39ebd5684b Merge remote-tracking branch 'origin/release_candidate' into test-fp8 2023-12-07 20:48:59 +08:00
CodeHatchling ac45789123 Removed soft inpainting, added hooks for softpainting to work instead. 2023-12-06 21:16:27 -07:00
CodeHatchling 4608f6236f Removed changes in some scripts since the arguments for soft painting are no longer passed through the same path as "mask_blur". 2023-12-06 18:11:17 -07:00
CodeHatchling e90d4334ad A custom blending function can be provided by p, replacing the use of soft_inpainting. 2023-12-06 18:02:07 -07:00
w-e-w 9d2cbf8e97 add option: Live preview in full page image viewer
make #13459 "show the preview image in the modal view if available" optional
2023-12-06 23:06:32 +09:00
Nuullll 746783f7a4 [IPEX] Fix embedding
Cast `torch.bmm` args into same `dtype`.

Fixes the following error when using Text Inversion embedding (#14224):

```
RuntimeError: could not create a primitive descriptor for a matmul
primitive
```
2023-12-06 20:55:47 +08:00
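A condensed sketch of that cast; the real fix applies the hijack only on IPEX/XPU devices:

```
import torch

_original_bmm = torch.bmm


def bmm_same_dtype(input, mat2, *, out=None):
    if input.dtype != mat2.dtype:
        mat2 = mat2.to(input.dtype)  # cast both operands to one dtype
    return _original_bmm(input, mat2, out=out)

# torch.bmm = bmm_same_dtype  # installed only under IPEX in the actual fix
```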
fuchen.ljl c2bdbb67b6 Merge branch 'dev' into kingljl-patch-memory-leak 2023-12-06 20:42:04 +08:00
fuchen.ljl 4d56383025 Long-running memory overflow issue
Problem: memory slowly increases with each generation until the process is restarted.
Observation: GC analysis shows no lingering Python objects, so the underlying allocator is suspected.
Reason: under Linux, glibc allocates memory via brk and mmap, and memory allocated by brk cannot be released until the higher-address memory above it is released. That is, if you allocate two blocks A and B through brk, A cannot be released before B is released; it stays attributed to the process, which shows up as a suspected "memory leak" in TOP.
So I switched to TCMalloc, but found that libtcmalloc_minimal could not find pthread_key_create; analysis showed that pthread support had not been enabled during compilation.
2023-12-06 20:23:56 +08:00
Kohaku-Blueleaf 294ec5ac37 Merge branch 'dev' into test-fp8 2023-12-06 15:16:49 +08:00
Kohaku-Blueleaf 672dc4efa8 Fix forced reload 2023-12-06 15:16:10 +08:00
Jabasukuriputo Wang 895456c4a2 change state dict comparison to ref compare 2023-12-05 18:00:48 -06:00
AUTOMATIC1111 f92d61497a Merge pull request #14203 from AUTOMATIC1111/remove-clean_text()
remove clean_text()
2023-12-05 07:15:39 +03:00
CodeHatchling 38864816fa Merge remote-tracking branch 'origin2/dev' into soft-inpainting
# Conflicts:
#	modules/processing.py
2023-12-04 20:38:13 -07:00
CodeHatchling 49bbf11407 Fixed unused import. 2023-12-04 19:47:40 -07:00
CodeHatchling 6fc12428e3 Fixed issue where batched inpainting (batch size > 1) wouldn't work because of mismatched tensor sizes. The 'already_decoded' decoded case should also be handled correctly (tested indirectly). 2023-12-04 19:42:59 -07:00
CodeHatchling b32a334e3d Applies a convert('RGBA') operation early to mimic previous behaviour. 2023-12-04 17:57:10 -07:00
CodeHatchling 60c602232f Restored original formatting. 2023-12-04 17:55:14 -07:00
CodeHatchling 57f29bd61d Re-introduce latent blending step from the vanilla inpainting procedure. 2023-12-04 17:41:18 -07:00
CodeHatchling 1455159cf4 Fixed issue with whitespace, removed commented out code that was meant to be used as a reference. 2023-12-04 16:43:57 -07:00
CodeHatchling 976c1053ef Cleaned up code, moved main code contributions into soft_inpainting.py 2023-12-04 16:06:58 -07:00
w-e-w 854f8c318c remove clean_text() 2023-12-05 04:41:09 +09:00
AUTOMATIC1111 22e23dbf29 add hypertile infotext 2023-12-04 15:56:03 +03:00
AUTOMATIC1111 883d6a2b34 repair old handler for postprocessing API in a way that doesn't break interface 2023-12-04 13:11:00 +03:00
AUTOMATIC1111 15322e1b1a repair old handler for postprocessing API 2023-12-04 12:36:41 +03:00
CodeHatchling 259d33c3c8 Enables the original functionality to be toggled on and off. 2023-12-04 01:57:21 -07:00
Kohaku-Blueleaf f5f89780cc Merge branch 'dev' into test-fp8 2023-12-04 16:47:41 +08:00
CodeHatchling aaacf48232 Organized the settings and UI of soft inpainting to allow for toggling the feature, and centralizes default values to reduce the amount of copy-pasta. 2023-12-04 01:27:22 -07:00
CodeHatchling 552f8bc832 "Uncrop" the original denoised image for the composite step, fixing a "ValueError: Images do not match" *shudder* 2023-12-03 14:49:41 -07:00
CodeHatchling 28a2b5b4aa Fixed a math mistake. 2023-12-03 14:20:20 -07:00
CodeHatchling 3bd3a09160 Merge remote-tracking branch 'origin/dev' into soft-inpainting
# Conflicts:
#	modules/processing.py
2023-12-02 21:14:02 -07:00
CodeHatchling bb04d400c9 Rewrote latent_blend() to use in-place operations and to aggressively "del" references with the intention of minimizing allocations and easing garbage collection. 2023-12-02 21:08:26 -07:00
CodeHatchling 73ab982d1b Blend masks are now produced afterward, based on an estimate of the visual difference between the original and modified latent images. This should remove ghosting and clipping artifacts from masks, while preserving the details of largely unchanged content. 2023-12-02 21:07:02 -07:00
Kohaku-Blueleaf 9a15ae2a92 Merge branch 'dev' into test-fp8 2023-12-03 10:54:54 +08:00
CodeHatchling 609dea36ea Added utility functions related to processing masks. 2023-12-02 18:56:49 -07:00
drhead 78acdcf677 fix variable 2023-12-02 14:09:18 -05:00
drhead dc1adeecdd Create alphas_cumprod_original on full precision path 2023-12-02 14:06:56 -05:00
drhead 4a43334376 Revert 309a606c 2023-12-02 14:05:42 -05:00
drhead 81c4ddf6eb fix linting 2023-12-02 13:11:00 -05:00
drhead 309a606c2f ensure that original alpha bar always exists 2023-12-02 13:07:45 -05:00
Kohaku-Blueleaf 50a21cb09f Ensure the cached weight will not be affected 2023-12-02 22:06:47 +08:00
Kohaku-Blueleaf 110485d5bb Merge branch 'dev' into test-fp8 2023-12-02 17:00:09 +08:00
drhead 668ae34e21 remove debug print 2023-11-29 22:48:31 -05:00
catboxanon de79597ab9 Only apply ztSNR related code if alphas_cumprod exists 2023-11-29 18:33:32 -05:00
catboxanon ffa7f8201d Lint 2023-11-29 18:10:43 -05:00
catboxanon ec6ee5c13b Fix infotext for ztSNR 2023-11-29 18:10:27 -05:00
drhead 6d0a8dcd89 Implement zero terminal SNR schedule option 2023-11-29 17:42:07 -05:00
drhead 588a52891d Add options for zero terminal SNR 2023-11-29 17:40:23 -05:00
drhead b25c126ccd Protect alphas_cumprod from downcasting 2023-11-29 17:38:53 -05:00
CodeHatchling c7a1ff8720 Tweaked default values. 2023-11-28 23:31:10 -07:00
CodeHatchling 284fd8f415 Tweaked UI sliders and labels. 2023-11-28 23:03:50 -07:00
CodeHatchling c5c7fa06aa Added slider for detail preservation strength, removed largely needless offset parameter, changed labels in UI and for saving to/pasting data from PNG files. 2023-11-28 22:35:07 -07:00
CodeHatchling debf836fcc Added UI elements to control blending parameters. 2023-11-28 16:15:36 -07:00
CodeHatchling a6e5846453 Nerfs the aggressive post-processing step of overlaying the original image. 2023-11-28 16:13:42 -07:00
CodeHatchling e715e46b6a Implements "scheduling" for blending of the original latents and a latent blending formula that preserves details in blend transition areas. 2023-11-28 16:10:22 -07:00
CodeHatchling bbba133f05 Removed conflicting step that replaces the softly inpainted latents with a naive blend with the original latents. 2023-11-28 15:09:43 -07:00
CodeHatchling dec791d35d Removed code which forces the inpainting mask to be 0 or 1. Now fractional values (e.g. 0.5) are accepted. 2023-11-28 15:05:01 -07:00
Kohaku-Blueleaf 3d341ebc7d Merge branch 'dev' into test-fp8 2023-11-26 17:32:52 +08:00
AUTOMATIC1111 29f04149b6 update torch to 2.1.0 2023-11-26 12:07:33 +03:00
Kohaku-Blueleaf 40ac134c55 Fix pre-fp8 2023-11-25 12:35:09 +08:00
Kohaku-Blueleaf f5d719d1f1 Add forced reload for fp16 cache 2023-11-22 01:45:56 +08:00
Kohaku-Blueleaf 370a77f8e7 Option for using fp16 weight when applying lora 2023-11-21 19:59:34 +08:00
Kohaku-Blueleaf b2e039d07b Update webui-macos-env.sh 2023-11-20 14:05:32 +08:00
Kohaku-Blueleaf 043d2edcf6 Better naming 2023-11-19 15:56:31 +08:00
Kohaku-Blueleaf f383af2729 update xformers/torch versions 2023-11-19 15:56:23 +08:00
Kohaku-Blueleaf 890181e1d4 Update the xformers/torch versions 2023-11-19 15:54:39 +08:00
Kohaku-Blueleaf 598da5cd49 Use options instead of cmd_args 2023-11-19 15:50:06 +08:00
Kohaku-Blueleaf b60e1088db Merge branch 'dev' into test-fp8 2023-11-19 15:24:57 +08:00
Kohaku-Blueleaf cd12256575 Merge branch 'dev' into test-fp8 2023-11-16 21:53:13 +08:00
fuchen.ljl 42dbcad3ef Merge pull request #1 from kingljl/fix-dependency-address-patch-1
Update README.md
2023-11-10 14:38:26 +08:00
hako-mikan 816096e642 Merge branch 'dev' into master 2023-11-09 21:57:57 +09:00
hako-mikan 6b9795849d Fix model switch bug 2023-11-09 20:23:37 +09:00
Kohaku-Blueleaf c3facab495 Merge branch 'dev' into test-fp8 2023-11-04 12:56:58 +08:00
Nick Harrison be31e7e71a Remove blank line whitespace 2023-10-29 16:05:01 +00:00
Nick Harrison 844c23975f Add assertions for checking additional settings freezing parameters 2023-10-29 15:40:58 +00:00
Nick Harrison f2b83517aa Add new arguments to known command prompts 2023-10-29 15:40:13 +00:00
KohakuBlueleaf ddc2a3499b Add MPS manual cast 2023-10-28 16:52:35 +08:00
Kohaku-Blueleaf d4d3134f6d ManualCast for 10/16 series gpu 2023-10-28 15:24:26 +08:00
Kohaku-Blueleaf 0beb131c7f change torch version 2023-10-25 20:07:37 +08:00
Kohaku-Blueleaf dda067f64d ignore mps for fp8 2023-10-25 19:53:22 +08:00
Kohaku-Blueleaf bf5067f50c Fix alphas cumprod 2023-10-25 12:54:28 +08:00
Kohaku-Blueleaf 4830b25136 Fix alphas_cumprod dtype 2023-10-25 11:53:37 +08:00
Kohaku-Blueleaf 1df6c8bfec fp8 for TE 2023-10-25 11:36:43 +08:00
Kohaku-Blueleaf 9c1eba2af3 Fix lint 2023-10-24 02:11:27 +08:00
Kohaku-Blueleaf eaa9f5162f Add CPU fp8 support
Since norm layers need fp32, I only convert the linear-operation layers (conv2d/linear).

The TE also uses some pytorch functions that don't support bf16 amp on CPU, so I added a condition to indicate whether the autocast is for the unet.
2023-10-24 01:49:05 +08:00
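A sketch of that selective conversion, assuming a torch build with float8 storage (torch >= 2.1); fp8 here is storage only, with manual casting back up at compute time:

```
import torch


def convert_linear_ops_to_fp8(model: torch.nn.Module) -> torch.nn.Module:
    for module in model.modules():
        # norm layers stay in higher precision; only the linear-operation
        # layers are stored as fp8
        if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d)):
            module.to(torch.float8_e4m3fn)
    return model
```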
Kohaku-Blueleaf 5f9ddfa46f Add sdxl only arg 2023-10-19 23:57:22 +08:00
Kohaku-Blueleaf 7c128bbdac Add fp8 for sd unet 2023-10-19 13:56:17 +08:00
Anthony Fu 3d15e58b0a feat: refactor 2023-10-16 15:00:17 +08:00
Anthony Fu 8aa13d5dce Interrupt after current generation 2023-10-16 14:12:18 +08:00
Robert Barron c4ee6d9b73 xyz_grid: allow varying the seed along an axis along with the axis's other changes 2023-07-30 03:45:02 -07:00
181 changed files with 7765 additions and 6184 deletions
+2 -2
@@ -78,6 +78,8 @@ module.exports = {
         //extraNetworks.js
         requestGet: "readonly",
         popup: "readonly",
+        // profilerVisualization.js
+        createVisualizationTable: "readonly",
         // from python
         localization: "readonly",
         // progrssbar.js
@@ -86,8 +88,6 @@ module.exports = {
         // imageviewer.js
         modalPrevImage: "readonly",
         modalNextImage: "readonly",
-        // token-counters.js
-        setupTokenCounters: "readonly",
         // localStorage.js
         localSet: "readonly",
         localGet: "readonly",
+5 -5
@@ -11,8 +11,8 @@ jobs:
     if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name
     steps:
       - name: Checkout Code
-        uses: actions/checkout@v3
-      - uses: actions/setup-python@v4
+        uses: actions/checkout@v4
+      - uses: actions/setup-python@v5
         with:
           python-version: 3.11
           # NB: there's no cache: pip here since we're not installing anything
@@ -20,7 +20,7 @@ jobs:
           # not to have GHA download an (at the time of writing) 4 GB cache
           # of PyTorch and other dependencies.
       - name: Install Ruff
-        run: pip install ruff==0.1.6
+        run: pip install ruff==0.3.3
       - name: Run Ruff
         run: ruff .
   lint-js:
@@ -29,9 +29,9 @@ jobs:
     if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name
     steps:
       - name: Checkout Code
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
       - name: Install Node.js
-        uses: actions/setup-node@v3
+        uses: actions/setup-node@v4
         with:
           node-version: 18
       - run: npm i --ci
+13 -5
@@ -11,15 +11,21 @@ jobs:
     if: github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name
     steps:
       - name: Checkout Code
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
       - name: Set up Python 3.10
-        uses: actions/setup-python@v4
+        uses: actions/setup-python@v5
         with:
           python-version: 3.10.6
           cache: pip
           cache-dependency-path: |
             **/requirements*txt
             launch.py
+      - name: Cache models
+        id: cache-models
+        uses: actions/cache@v4
+        with:
+          path: models
+          key: "2023-12-30"
       - name: Install test dependencies
         run: pip install wait-for-it -r requirements-test.txt
         env:
@@ -33,6 +39,8 @@ jobs:
           TORCH_INDEX_URL: https://download.pytorch.org/whl/cpu
           WEBUI_LAUNCH_LIVE_OUTPUT: "1"
           PYTHONUNBUFFERED: "1"
+      - name: Print installed packages
+        run: pip freeze
       - name: Start test server
         run: >
           python -m coverage run
@@ -49,7 +57,7 @@ jobs:
           2>&1 | tee output.txt &
       - name: Run tests
         run: |
-          wait-for-it --service 127.0.0.1:7860 -t 600
+          wait-for-it --service 127.0.0.1:7860 -t 20
           python -m pytest -vv --junitxml=test/results.xml --cov . --cov-report=xml --verify-base-url test
       - name: Kill test server
         if: always()
@@ -60,13 +68,13 @@ jobs:
           python -m coverage report -i
           python -m coverage html -i
       - name: Upload main app output
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         if: always()
         with:
           name: output
           path: output.txt
       - name: Upload coverage HTML
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         if: always()
         with:
           name: htmlcov
+2
@@ -37,3 +37,5 @@ notification.mp3
 /node_modules
 /package-lock.json
 /.coverage*
+/test/test_outputs
+/cache
+300 -6
@@ -1,3 +1,296 @@
## 1.9.4
### Bug Fixes:
* pin setuptools version to fix the startup error ([#15883](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15883))
## 1.9.3
### Bug Fixes:
* fix get_crop_region_v2 ([#15594](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15594))
## 1.9.2
### Extensions and API:
* restore 1.8.0-style naming of scripts
## 1.9.1
### Minor:
* Add avif support ([#15582](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15582))
* Add filename patterns: `[sampler_scheduler]` and `[scheduler]` ([#15581](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15581))
### Extensions and API:
* undo adding scripts to sys.modules
* Add schedulers API endpoint ([#15577](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15577))
* Remove API upscaling factor limits ([#15560](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15560))
### Bug Fixes:
* Fix images do not match / Coordinate 'right' is less than 'left' ([#15534](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15534))
* fix: remove_callbacks_for_function should also remove from the ordered map ([#15533](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15533))
* fix x1 upscalers ([#15555](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15555))
* Fix cls.__module__ value in extension script ([#15532](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15532))
* fix typo in function call (eror -> error) ([#15531](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15531))
### Other:
* Hide 'No Image data blocks found.' message ([#15567](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15567))
* Allow webui.sh to be runnable from arbitrary directories containing a .git file ([#15561](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15561))
* Compatibility with Debian 11, Fedora 34+ and openSUSE 15.4+ ([#15544](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15544))
* numpy DeprecationWarning product -> prod ([#15547](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15547))
* get_crop_region_v2 ([#15583](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15583), [#15587](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15587)) (see the sketch below)
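For context on the get_crop_region entries above: the helper computes the padded bounding box of an inpainting mask, which is what both #15583 and the #15594 fix concern. A simplified sketch of the idea, assuming a PIL mask image (illustration only, not the repo's exact `get_crop_region_v2`):

```python
# Bounding box of the mask's non-zero pixels, expanded by `pad` pixels
# and clamped to the image bounds. Simplified illustration.
from PIL import Image

def crop_region(mask: Image.Image, pad: int = 0) -> tuple[int, int, int, int]:
    box = mask.getbbox()  # (x1, y1, x2, y2) of non-zero pixels, or None
    if box is None:
        return (0, 0, mask.width, mask.height)
    x1, y1, x2, y2 = box
    return (max(x1 - pad, 0), max(y1 - pad, 0),
            min(x2 + pad, mask.width), min(y2 + pad, mask.height))
```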
## 1.9.0
### Features:
* Make refiner switchover based on model timesteps instead of sampling steps ([#14978](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14978))
* add an option to have old-style directory view instead of tree view; stylistic changes for extra network sorting/search controls
* add UI for reordering callbacks, support for specifying callback order in extension metadata ([#15205](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15205))
* Sgm uniform scheduler for SDXL-Lightning models ([#15325](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15325))
* Scheduler selection in main UI ([#15333](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15333), [#15361](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15361), [#15394](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15394))
### Minor:
* "open images directory" button now opens the actual dir ([#14947](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14947))
* Support inference with LyCORIS BOFT networks ([#14871](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14871), [#14973](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14973))
* make extra network card description plaintext by default, with an option to re-enable HTML as it was
* resize handle for extra networks ([#15041](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15041))
* cmd args: `--unix-filenames-sanitization` and `--filenames-max-length` ([#15031](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15031))
* show extra networks parameters in HTML table rather than raw JSON ([#15131](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15131))
* Add DoRA (weight-decompose) support for LoRA/LoHa/LoKr ([#15160](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15160), [#15283](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15283))
* Add `--no-prompt-history` cmd arg to disable last generation prompt history ([#15189](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15189))
* update preview on Replace Preview ([#15201](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15201))
* only fetch updates for extensions' active git branches ([#15233](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15233))
* put upscale postprocessing UI into an accordion ([#15223](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15223))
* Support drag-and-drop for URLs to read infotext ([#15262](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15262))
* use diskcache library for caching ([#15287](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15287), [#15299](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15299)) (see the sketch after this list)
* Allow PNG-RGBA for Extras Tab ([#15334](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15334))
* Support cover images embedded in safetensors metadata ([#15319](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15319))
* faster interrupt when using NN upscale ([#15380](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15380))
* Extras upscaler: an input field to limit the maximum side length of the output image ([#15293](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15293), [#15415](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15415), [#15417](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15417), [#15425](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15425))
* add an option to hide postprocessing options in Extras tab
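On the diskcache item in the list above: `diskcache` is a persistent, process-safe on-disk key/value store, which suits caching values that are expensive to recompute (such as file hashes) across runs. A minimal usage sketch; the directory and key are illustrative, not the webui's actual cache layout:

```python
from diskcache import Cache

cache = Cache("cache/hashes")          # persists across processes and runs
key = "checkpoint/model.safetensors"   # hypothetical cache key
entry = cache.get(key)
if entry is None:
    entry = {"mtime": 1714000000.0, "sha256": "deadbeef"}  # expensive to compute
    cache.set(key, entry)
print(entry["sha256"])
```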
### Extensions and API:
* ResizeHandleRow - allow overridden column scale parameter ([#15004](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15004))
* call script_callbacks.ui_settings_callback earlier; fix extra-options-section built-in extension killing the ui if using a setting that doesn't exist
* make it possible to use zoom.js outside webui context ([#15286](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15286), [#15288](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15288))
* allow variants for extension name in metadata.ini ([#15290](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15290))
* make reloading UI scripts optional when doing Reload UI, and off by default
* put request: gr.Request at start of img2img function similar to txt2img
* open_folder as util ([#15442](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15442))
* make it possible to import extensions' script files as `import scripts.<filename>` ([#15423](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15423)) (see the sketch below)
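On the `import scripts.<filename>` item above: one generic way to make such imports resolve is to register dynamically loaded files under a synthetic package in `sys.modules`. The sketch below shows that general technique with hypothetical paths; it is not necessarily the webui's final mechanism (note the 1.9.1 entry further up undoing the `sys.modules` registration):

```python
# Expose a dynamically loaded file as `scripts.<name>`; illustration only.
import importlib.util
import sys
import types

pkg = sys.modules.setdefault("scripts", types.ModuleType("scripts"))
pkg.__path__ = []  # mark it as a package so submodule imports resolve

spec = importlib.util.spec_from_file_location(
    "scripts.my_script", "extensions/example/scripts/my_script.py")
module = importlib.util.module_from_spec(spec)
sys.modules["scripts.my_script"] = module
spec.loader.exec_module(module)  # afterwards `import scripts.my_script` works
```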
### Performance:
* performance optimization for extra networks HTML pages
* optimization for extra networks filtering
* optimization for extra networks sorting
### Bug Fixes:
* prevent escape button causing an interrupt when no generation has been made yet
* [bug] avoid double upscaling in inpaint ([#14966](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14966))
* possible fix for reload button not appearing in some cases for extra networks.
* fix: the `split_threshold` parameter does not work when running Split oversized images ([#15006](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15006))
* Fix resize-handle visibility for vertical layout (mobile) ([#15010](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15010))
* register_tmp_file also for mtime ([#15012](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15012))
* Protect alphas_cumprod during refiner switchover ([#14979](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14979))
* Fix EXIF orientation in API image loading ([#15062](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15062))
* Only override emphasis if actually used in prompt ([#15141](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15141))
* Fix emphasis infotext missing from `params.txt` ([#15142](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15142))
* fix extract_style_text_from_prompt #15132 ([#15135](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15135))
* Fix Soft Inpaint for AnimateDiff ([#15148](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15148))
* edit-attention: deselect surrounding whitespace ([#15178](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15178))
* chore: fix font not loaded ([#15183](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15183))
* use natural sort in extra networks when ordering by path
* Fix built-in lora system bugs caused by torch.nn.MultiheadAttention ([#15190](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15190))
* Avoid error from None in get_learned_conditioning ([#15191](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15191))
* Add entry to MassFileLister after writing metadata ([#15199](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15199))
* fix issue with Styles when Hires prompt is used ([#15269](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15269), [#15276](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15276))
* Strip comments from hires fix prompt ([#15263](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15263))
* Make imageviewer event listeners browser consistent ([#15261](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15261))
* Fix AttributeError in OFT when trying to get MultiheadAttention weight ([#15260](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15260))
* Add missing .mean() back ([#15239](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15239))
* fix "Restore progress" button ([#15221](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15221))
* fix ui-config for InputAccordion [custom_script_source] ([#15231](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15231))
* handle 0 wheel deltaY ([#15268](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15268))
* prevent alt menu for firefox ([#15267](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15267))
* fix: fix syntax errors ([#15179](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15179))
* restore outputs path ([#15307](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15307))
* Escape btn_copy_path filename ([#15316](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15316))
* Fix extra networks buttons when filename contains an apostrophe ([#15331](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15331))
* escape brackets in lora random prompt generator ([#15343](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15343))
* fix: Python version check for PyTorch installation compatibility ([#15390](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15390))
* fix typo in call_queue.py ([#15386](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15386))
* fix: when find already_loaded model, remove loaded by array index ([#15382](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15382))
* minor bug fix of sd model memory management ([#15350](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15350))
* Fix CodeFormer weight ([#15414](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15414))
* Fix: Remove script callbacks in ordered_callbacks_map ([#15428](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15428))
* fix limited file write (thanks, Sylwia)
* Fix extra-single-image API not doing upscale failed ([#15465](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15465))
* error handling paste_field callables ([#15470](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15470))
### Hardware:
* Add training support and change lspci for Ascend NPU ([#14981](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14981))
* Update to ROCm5.7 and PyTorch ([#14820](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14820))
* Better workaround for Navi1, removing --pre for Navi3 ([#15224](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15224))
* Ascend NPU wiki page ([#15228](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15228))
### Other:
* Update comment for Pad prompt/negative prompt v0 to add a warning about truncation, make it override the v1 implementation
* support resizable columns for touch (tablets) ([#15002](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15002))
* Fix #14591 using translated content to do categories mapping ([#14995](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14995))
* Use `absolute` path for normalized filepath ([#15035](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15035))
* resizeHandle handle double tap ([#15065](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15065))
* --dat-models-path cmd flag ([#15039](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15039))
* Add a direct link to the binary release ([#15059](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15059))
* upscaler_utils: Reduce logging ([#15084](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15084))
* Fix various typos with crate-ci/typos ([#15116](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15116))
* fix_jpeg_live_preview ([#15102](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15102))
* [alternative fix] can't load webui if selected wrong extra option in ui ([#15121](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15121))
* Error handling for unsupported transparency ([#14958](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14958))
* Add model description to searched terms ([#15198](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15198))
* bump action version ([#15272](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15272))
* PEP 604 annotations ([#15259](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15259))
* Automatically Set the Scale by value when user selects an Upscale Model ([#15244](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15244))
* move postprocessing-for-training into builtin extensions ([#15222](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15222))
* type hinting in shared.py ([#15211](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15211))
* update ruff to 0.3.3
* Update pytorch lightning utilities ([#15310](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15310))
* Add Size as an XYZ Grid option ([#15354](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15354))
* Use HF_ENDPOINT variable for HuggingFace domain with default ([#15443](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15443)) (see the sketch after this list)
* re-add update_file_entry ([#15446](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15446))
* create_infotext allow index and callable, re-work Hires prompt infotext ([#15460](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15460))
* update restricted_opts to include more options for --hide-ui-dir-config ([#15492](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15492))
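On the HF_ENDPOINT item in the list above: the variable lets users behind mirrors redirect HuggingFace downloads while keeping the public domain as the default. A minimal sketch of the pattern (the model path is hypothetical):

```python
import os

# Default to the public endpoint when HF_ENDPOINT is unset.
hf_endpoint = os.environ.get("HF_ENDPOINT", "https://huggingface.co")
url = f"{hf_endpoint}/example-org/example-model/resolve/main/model.safetensors"
print(url)
```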
## 1.8.0
### Features:
* Update torch to version 2.1.2
* Soft Inpainting ([#14208](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14208))
* FP8 support ([#14031](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14031), [#14327](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14327))
* Support for SDXL-Inpaint Model ([#14390](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14390))
* Use Spandrel for upscaling and face restoration architectures ([#14425](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14425), [#14467](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14467), [#14473](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14473), [#14474](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14474), [#14477](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14477), [#14476](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14476), [#14484](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14484), [#14500](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14500), [#14501](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14501), [#14504](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14504), [#14524](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14524), [#14809](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14809))
* Automatic backwards version compatibility (when loading infotexts from old images with program version specified, will add compatibility settings)
* Implement zero terminal SNR noise schedule option (**[SEED BREAKING CHANGE](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Seed-breaking-changes#180-dev-170-225-2024-01-01---zero-terminal-snr-noise-schedule-option)**, [#14145](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14145), [#14979](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14979))
* Add a [✨] button to run hires fix on selected image in the gallery (with help from [#14598](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14598), [#14626](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14626), [#14728](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14728))
* [Separate assets repository](https://github.com/AUTOMATIC1111/stable-diffusion-webui-assets); serve fonts locally rather than from google's servers
* Official LCM Sampler Support ([#14583](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14583))
* Add support for DAT upscaler models ([#14690](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14690), [#15039](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15039))
* Extra Networks Tree View ([#14588](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14588), [#14900](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14900))
* NPU Support ([#14801](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14801))
* Prompt comments support
### Minor:
* Allow pasting in WIDTHxHEIGHT strings into the width/height fields ([#14296](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14296)) (see the sketch after this list)
* add option: Live preview in full page image viewer ([#14230](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14230), [#14307](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14307))
* Add keyboard shortcuts for generate/skip/interrupt ([#14269](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14269))
* Better TCMALLOC support on different platforms ([#14227](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14227), [#14883](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14883), [#14910](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14910))
* Lora not found warning ([#14464](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14464))
* Adding negative prompts to Loras in extra networks ([#14475](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14475))
* xyz_grid: allow varying the seed along an axis separate from axis options ([#12180](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12180))
* option to convert VAE to bfloat16 (implementation of [#9295](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/9295))
* Better IPEX support ([#14229](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14229), [#14353](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14353), [#14559](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14559), [#14562](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14562), [#14597](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14597))
* Option to interrupt after current generation rather than immediately ([#13653](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13653), [#14659](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14659))
* Fullscreen Preview control fading/disable ([#14291](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14291))
* Finer settings freezing control ([#13789](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13789))
* Increase Upscaler Limits ([#14589](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14589))
* Adjust brush size with hotkeys ([#14638](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14638))
* Add checkpoint info to csv log file when saving images ([#14663](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14663))
* Make more columns resizable ([#14740](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14740), [#14884](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14884))
* Add an option to not overlay original image for inpainting for #14727
* Add Pad conds v0 option to support same generation with DDIM as before 1.6.0
* Add "Interrupting..." placeholder.
* Button for refresh extensions list ([#14857](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14857))
* Add an option to disable normalization after calculating emphasis. ([#14874](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14874))
* When counting tokens, also include enabled styles (can be disabled in settings to revert to previous behavior)
* Configuration for the [📂] button for image gallery ([#14947](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14947))
* Support inference with LyCORIS BOFT networks ([#14871](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14871), [#14973](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14973))
* support resizable columns for touch (tablets) ([#15002](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15002))
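On the WIDTHxHEIGHT pasting item above: the behavior amounts to parsing a `512x768`-style string into the two dimension fields. A rough sketch with an illustrative regex (not the webui's exact rule):

```python
import re

def parse_size(text: str) -> tuple[int, int] | None:
    # Accept "512x768" with optional whitespace around the separator.
    m = re.fullmatch(r"\s*(\d+)\s*[xX]\s*(\d+)\s*", text)
    return (int(m.group(1)), int(m.group(2))) if m else None

assert parse_size("512x768") == (512, 768)
assert parse_size("not a size") is None
```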
### Extensions and API:
* Removed packages from requirements: basicsr, gfpgan, realesrgan; as well as their dependencies: absl-py, addict, beautifulsoup4, future, gdown, grpcio, importlib-metadata, lmdb, lpips, Markdown, platformdirs, PySocks, soupsieve, tb-nightly, tensorboard-data-server, tomli, Werkzeug, yapf, zipp
* Enable task ids for API ([#14314](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14314))
* add override_settings support for infotext API
* rename generation_parameters_copypaste module to infotext_utils
* prevent crash due to Script __init__ exception ([#14407](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14407))
* Bump numpy to 1.26.2 ([#14471](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14471))
* Add utility to inspect a model's dtype/device ([#14478](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14478))
* Implement general forward method for all methods in built-in lora ext ([#14547](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14547))
* Execute model_loaded_callback after moving to target device ([#14563](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14563))
* Add self to CFGDenoiserParams ([#14573](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14573))
* Allow TLS with API only mode (--nowebui) ([#14593](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14593))
* New callback: postprocess_image_after_composite ([#14657](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14657))
* modules/api/api.py: add api endpoint to refresh embeddings list ([#14715](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14715))
* set_named_arg ([#14773](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14773))
* add before_token_counter callback and use it for prompt comments (see the sketch after this list)
* ResizeHandleRow - allow overridden column scale parameter ([#15004](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15004))
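On prompt comments and the `before_token_counter` callback above: comments are stripped before counting so the token count matches what the model actually receives. A rough sketch of comment stripping; the regex is illustrative and may not match the webui's exact grammar:

```python
import re

def strip_comments(prompt: str) -> str:
    # Drop `#`-to-end-of-line comments before tokenizing.
    return re.sub(r"#[^\n]*", "", prompt)

assert strip_comments("a cat # fluffy\nby the sea") == "a cat \nby the sea"
```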
### Performance:
* Massive performance improvement for extra networks directories with a huge number of files in them in an attempt to tackle #14507 ([#14528](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14528))
* Reduce unnecessary re-indexing extra networks directory ([#14512](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14512))
* Avoid unnecessary `isfile`/`exists` calls ([#14527](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14527))
### Bug Fixes:
* fix multiple bugs related to styles multi-file support ([#14203](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14203), [#14276](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14276), [#14707](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14707))
* Lora fixes ([#14300](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14300), [#14237](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14237), [#14546](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14546), [#14726](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14726))
* Re-add setting lost as part of e294e46 ([#14266](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14266))
* fix extras caption BLIP ([#14330](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14330))
* include infotext into saved init image for img2img ([#14452](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14452))
* xyz grid handle axis_type is None ([#14394](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14394))
* Fix IPv6 functionality in webui.py when no webui argument is passed ([#14354](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14354))
* fix API thread safe issues of txt2img and img2img ([#14421](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14421))
* handle selectable script_index is None ([#14487](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14487))
* handle config.json failed to load ([#14525](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14525), [#14767](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14767))
* paste infotext cast int as float ([#14523](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14523))
* Ensure GRADIO_ANALYTICS_ENABLED is set early enough ([#14537](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14537))
* Fix logging configuration again ([#14538](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14538))
* Handle CondFunc exception when resolving attributes ([#14560](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14560))
* Fix extras big batch crashes ([#14699](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14699))
* Fix using wrong model caused by alias ([#14655](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14655))
* Add # to the invalid_filename_chars list ([#14640](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14640))
* Fix extension check for requirements ([#14639](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14639))
* Fix tab indexes are reset after restart UI ([#14637](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14637))
* Fix nested manual cast ([#14689](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14689))
* Keep postprocessing upscale selected tab after restart ([#14702](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14702))
* XYZ grid: filter out blank vals when axis is int or float type (like int axis seed) ([#14754](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14754))
* fix CLIP Interrogator topN regex ([#14775](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14775))
* Fix dtype error in MHA layer/change dtype checking mechanism for manual cast ([#14791](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14791))
* catch load style.csv error ([#14814](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14814))
* fix error when editing extra networks card
* fix extra networks metadata failing to work properly when you create the .json file with metadata for the first time.
* util.walk_files extensions case insensitive ([#14879](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14879))
* if extensions page not loaded, prevent apply ([#14873](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14873))
* call the right function for token counter in img2img
* Fix the bugs that search/reload will disappear when using other ExtraNetworks extensions ([#14939](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14939))
* Gracefully handle mtime read exception from cache ([#14933](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14933))
* Only trigger interrupt on `Esc` when interrupt button visible ([#14932](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14932))
* Disable prompt token counters option actually disables token counting rather than just hiding results.
* avoid double upscaling in inpaint ([#14966](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14966))
* Fix #14591 using translated content to do categories mapping ([#14995](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14995))
* fix: the `split_threshold` parameter does not work when running Split oversized images ([#15006](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15006))
* Fix resize-handle for mobile ([#15010](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15010), [#15065](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15065))
### Other:
* Assign id for "extra_options". Replace numeric field with slider. ([#14270](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14270))
* change state dict comparison to ref compare ([#14216](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14216))
* Bump torch-rocm to 5.6/5.7 ([#14293](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14293))
* Base output path off data path ([#14446](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14446))
* reorder training preprocessing modules in extras tab ([#14367](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14367))
* Remove `cleanup_models` code ([#14472](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14472))
* only rewrite ui-config when there is change ([#14352](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14352))
* Fix lint issue from 501993eb ([#14495](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14495))
* Update README.md ([#14548](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14548))
* hires button, fix seeds
* Logging: set formatter correctly for fallback logger too ([#14618](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14618))
* Read generation info from infotexts rather than json for internal needs (save, extract seed from generated pic) ([#14645](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14645))
* improve get_crop_region ([#14709](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14709))
* Bump safetensors' version to 0.4.2 ([#14782](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14782))
* add tooltip create_submit_box ([#14803](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14803))
* extensions tab table row hover highlight ([#14885](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14885))
* Always add timestamp to displayed image ([#14890](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14890))
* Added `core.filemode=false` so git doesn't track changes in file permission… ([#14930](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14930))
* Normalize command-line argument paths ([#14934](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14934), [#15035](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15035))
* Use original App Title in progress bar ([#14916](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14916))
* register_tmp_file also for mtime ([#15012](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15012))
## 1.7.0
### Features:
@@ -40,7 +333,8 @@
* infotext updates: add option to disregard certain infotext fields, add option to not include VAE in infotext, add explanation to infotext settings page, move some options to infotext settings page
* add FP32 fallback support on sd_vae_approx ([#14046](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14046))
* support XYZ scripts / split hires path from unet ([#14126](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14126))
- * allow use of mutiple styles csv files ([#14125](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14125))
+ * allow use of multiple styles csv files ([#14125](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14125))
* make extra network card description plaintext by default, with an option (Treat card description as HTML) to re-enable HTML as it was (originally by [#13241](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/13241))
### Extensions and API:
* update gradio to 3.41.2
@@ -176,7 +470,7 @@
* new samplers: Restart, DPM++ 2M SDE Exponential, DPM++ 2M SDE Heun, DPM++ 2M SDE Heun Karras, DPM++ 2M SDE Heun Exponential, DPM++ 3M SDE, DPM++ 3M SDE Karras, DPM++ 3M SDE Exponential ([#12300](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12300), [#12519](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12519), [#12542](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12542))
* rework DDIM, PLMS, UniPC to use CFG denoiser same as in k-diffusion samplers:
* makes all of them work with img2img
- makes prompt composition posssible (AND)
+ makes prompt composition possible (AND)
* makes them available for SDXL
* always show extra networks tabs in the UI ([#11808](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/11808))
* use less RAM when creating models ([#11958](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/11958), [#12599](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12599))
@@ -352,7 +646,7 @@
* user metadata system for custom networks
* extended Lora metadata editor: set activation text, default weight, view tags, training info
* Lora extension rework to include other types of networks (all that were previously handled by LyCORIS extension)
- show github stars for extenstions
+ show github stars for extensions
* img2img batch mode can read extra stuff from png info
* img2img batch works with subdirectories
* hotkeys to move prompt elements: alt+left/right
@@ -571,7 +865,7 @@
* do not wait for Stable Diffusion model to load at startup
* add filename patterns: `[denoising]`
* directory hiding for extra networks: dirs starting with `.` will hide their cards on extra network tabs unless specifically searched for
- LoRA: for the `<...>` text in prompt, use name of LoRA that is in the metdata of the file, if present, instead of filename (both can be used to activate LoRA)
+ LoRA: for the `<...>` text in prompt, use name of LoRA that is in the metadata of the file, if present, instead of filename (both can be used to activate LoRA)
* LoRA: read infotext params from kohya-ss's extension parameters if they are present and if his extension is not active
* LoRA: fix some LoRAs not working (ones that have 3x3 convolution layer)
* LoRA: add an option to use old method of applying LoRAs (producing same results as with kohya-ss)
@@ -601,7 +895,7 @@
* fix gamepad navigation
* make the lightbox fullscreen image function properly
* fix squished thumbnails in extras tab
* keep "search" filter for extra networks when user refreshes the tab (previously it showed everthing after you refreshed)
* keep "search" filter for extra networks when user refreshes the tab (previously it showed everything after you refreshed)
* fix webui showing the same image if you configure the generation to always save results into same file
* fix bug with upscalers not working properly
* fix MPS on PyTorch 2.0.1, Intel Macs
@@ -619,7 +913,7 @@
* switch to PyTorch 2.0.0 (except for AMD GPUs)
* visual improvements to custom code scripts
* add filename patterns: `[clip_skip]`, `[hasprompt<>]`, `[batch_number]`, `[generation_number]`
- add support for saving init images in img2img, and record their hashes in infotext for reproducability
+ add support for saving init images in img2img, and record their hashes in infotext for reproducibility
* automatically select current word when adjusting weight with ctrl+up/down
* add dropdowns for X/Y/Z plot
* add setting: Stable Diffusion/Random number generator source: makes it possible to make images generated from a given manual seed consistent across different GPUs
+8 -6
@@ -1,5 +1,5 @@
# Stable Diffusion web UI
- A browser interface based on Gradio library for Stable Diffusion.
+ A web interface for Stable Diffusion, implemented using Gradio library.
![](screenshot.png)
@@ -98,6 +98,7 @@ Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-di
- [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended)
- [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
- [Intel CPUs, Intel GPUs (both integrated and discrete)](https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon) (external wiki page)
- [Ascend NPUs](https://github.com/wangshuai09/stable-diffusion-webui/wiki/Install-and-run-on-Ascend-NPUs) (external wiki page)
Alternatively, use online services (like Google Colab):
@@ -151,11 +152,12 @@ Licenses for borrowed code can be found in `Settings -> Licenses` screen, and al
- Stable Diffusion - https://github.com/Stability-AI/stablediffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- - GFPGAN - https://github.com/TencentARC/GFPGAN.git
- - CodeFormer - https://github.com/sczhou/CodeFormer
- - ESRGAN - https://github.com/xinntao/ESRGAN
- - SwinIR - https://github.com/JingyunLiang/SwinIR
- - Swin2SR - https://github.com/mv-lab/swin2sr
+ - Spandrel - https://github.com/chaiNNer-org/spandrel implementing
+   - GFPGAN - https://github.com/TencentARC/GFPGAN.git
+   - CodeFormer - https://github.com/sczhou/CodeFormer
+   - ESRGAN - https://github.com/xinntao/ESRGAN
+   - SwinIR - https://github.com/JingyunLiang/SwinIR
+   - Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
+5
@@ -0,0 +1,5 @@
[default.extend-words]
# Part of "RGBa" (Pillow's pre-multiplied alpha RGB mode)
Ba = "Ba"
# HSA is something AMD uses for their GPUs
HSA = "HSA"
+98
@@ -0,0 +1,98 @@
model:
target: sgm.models.diffusion.DiffusionEngine
params:
scale_factor: 0.13025
disable_first_stage_autocast: True
denoiser_config:
target: sgm.modules.diffusionmodules.denoiser.DiscreteDenoiser
params:
num_idx: 1000
weighting_config:
target: sgm.modules.diffusionmodules.denoiser_weighting.EpsWeighting
scaling_config:
target: sgm.modules.diffusionmodules.denoiser_scaling.EpsScaling
discretization_config:
target: sgm.modules.diffusionmodules.discretizer.LegacyDDPMDiscretization
network_config:
target: sgm.modules.diffusionmodules.openaimodel.UNetModel
params:
adm_in_channels: 2816
num_classes: sequential
use_checkpoint: True
in_channels: 9
out_channels: 4
model_channels: 320
attention_resolutions: [4, 2]
num_res_blocks: 2
channel_mult: [1, 2, 4]
num_head_channels: 64
use_spatial_transformer: True
use_linear_in_transformer: True
transformer_depth: [1, 2, 10] # note: the first is unused (due to attn_res starting at 2) 32, 16, 8 --> 64, 32, 16
context_dim: 2048
spatial_transformer_attn_type: softmax-xformers
legacy: False
conditioner_config:
target: sgm.modules.GeneralConditioner
params:
emb_models:
# crossattn cond
- is_trainable: False
input_key: txt
target: sgm.modules.encoders.modules.FrozenCLIPEmbedder
params:
layer: hidden
layer_idx: 11
# crossattn and vector cond
- is_trainable: False
input_key: txt
target: sgm.modules.encoders.modules.FrozenOpenCLIPEmbedder2
params:
arch: ViT-bigG-14
version: laion2b_s39b_b160k
freeze: True
layer: penultimate
always_return_pooled: True
legacy: False
# vector cond
- is_trainable: False
input_key: original_size_as_tuple
target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
params:
outdim: 256 # multiplied by two
# vector cond
- is_trainable: False
input_key: crop_coords_top_left
target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
params:
outdim: 256 # multiplied by two
# vector cond
- is_trainable: False
input_key: target_size_as_tuple
target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
params:
outdim: 256 # multiplied by two
first_stage_config:
target: sgm.models.autoencoder.AutoencoderKLInferenceWrapper
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
attn_type: vanilla-xformers
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult: [1, 2, 4, 4]
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
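The conditioner section above accounts for the UNet's `adm_in_channels`: the pooled ViT-bigG text embedding contributes 1280 dimensions, and each of the three `ConcatTimestepEmbedderND` entries embeds a 2-tuple at `outdim: 256` (hence the "multiplied by two" comments). Likewise, `in_channels: 9` is the usual 4 latent channels plus 4 masked-image latent channels plus a 1-channel mask for inpainting. A quick consistency check:

```python
# Consistency check for the config above (standard SDXL conditioning sizes).
pooled_text = 1280            # pooled ViT-bigG text embedding
size_conds = 3 * 2 * 256      # three 2-tuples, each value embedded at outdim=256
assert pooled_text + size_conds == 2816    # adm_in_channels

latent, masked_latent, mask = 4, 4, 1
assert latent + masked_latent + mask == 9  # in_channels of the inpaint UNet
```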
+4 -4
@@ -301,7 +301,7 @@ class DDPMV1(pl.LightningModule):
elif self.parameterization == "x0":
target = x_start
else:
raise NotImplementedError(f"Paramterization {self.parameterization} not yet supported")
raise NotImplementedError(f"Parameterization {self.parameterization} not yet supported")
loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3])
@@ -880,7 +880,7 @@ class LatentDiffusionV1(DDPMV1):
def apply_model(self, x_noisy, t, cond, return_ids=False):
if isinstance(cond, dict):
- # hybrid case, cond is exptected to be a dict
+ # hybrid case, cond is expected to be a dict
pass
else:
if not isinstance(cond, list):
@@ -916,7 +916,7 @@ class LatentDiffusionV1(DDPMV1):
cond_list = [{c_key: [c[:, :, :, :, i]]} for i in range(c.shape[-1])]
elif self.cond_stage_key == 'coordinates_bbox':
- assert 'original_image_size' in self.split_input_params, 'BoudingBoxRescaling is missing original_image_size'
+ assert 'original_image_size' in self.split_input_params, 'BoundingBoxRescaling is missing original_image_size'
# assuming padding of unfold is always 0 and its dilation is always 1
n_patches_per_row = int((w - ks[0]) / stride[0] + 1)
@@ -926,7 +926,7 @@ class LatentDiffusionV1(DDPMV1):
num_downs = self.first_stage_model.encoder.num_resolutions - 1
rescale_latent = 2 ** (num_downs)
- # get top left postions of patches as conforming for the bbbox tokenizer, therefore we
+ # get top left positions of patches as conforming for the bbbox tokenizer, therefore we
# need to rescale the tl patch coordinates to be in between (0,1)
tl_patch_coordinates = [(rescale_latent * stride[0] * (patch_nr % n_patches_per_row) / full_img_w,
rescale_latent * stride[1] * (patch_nr // n_patches_per_row) / full_img_h)
+1 -1
@@ -30,7 +30,7 @@ def factorization(dimension: int, factor:int=-1) -> tuple[int, int]:
In LoRA with Kronecker Product, first value is a value for weight scale.
second value is a value for weight.
- Becuase of non-commutative property, A⊗B ≠ B⊗A. Meaning of two matrices is slightly different.
+ Because of non-commutative property, A⊗B ≠ B⊗A. Meaning of two matrices is slightly different.
examples)
factor
+66 -3
@@ -3,6 +3,9 @@ import os
from collections import namedtuple
import enum
import torch.nn as nn
import torch.nn.functional as F
from modules import sd_models, cache, errors, hashes, shared
NetworkWeights = namedtuple('NetworkWeights', ['network_key', 'sd_key', 'w', 'sd_module'])
@@ -26,7 +29,6 @@ class NetworkOnDisk:
def read_metadata():
metadata = sd_models.read_metadata_from_safetensors(filename)
metadata.pop('ssmd_cover_images', None) # those are cover images, and they are too big to display in UI as text
return metadata
@@ -114,12 +116,44 @@ class NetworkModule:
if hasattr(self.sd_module, 'weight'):
self.shape = self.sd_module.weight.shape
elif isinstance(self.sd_module, nn.MultiheadAttention):
# For now, only self-attn uses PyTorch's MHA
# So assume all qkvo proj have same shape
self.shape = self.sd_module.out_proj.weight.shape
else:
self.shape = None
self.ops = None
self.extra_kwargs = {}
if isinstance(self.sd_module, nn.Conv2d):
self.ops = F.conv2d
self.extra_kwargs = {
'stride': self.sd_module.stride,
'padding': self.sd_module.padding
}
elif isinstance(self.sd_module, nn.Linear):
self.ops = F.linear
elif isinstance(self.sd_module, nn.LayerNorm):
self.ops = F.layer_norm
self.extra_kwargs = {
'normalized_shape': self.sd_module.normalized_shape,
'eps': self.sd_module.eps
}
elif isinstance(self.sd_module, nn.GroupNorm):
self.ops = F.group_norm
self.extra_kwargs = {
'num_groups': self.sd_module.num_groups,
'eps': self.sd_module.eps
}
self.dim = None
self.bias = weights.w.get("bias")
self.alpha = weights.w["alpha"].item() if "alpha" in weights.w else None
self.scale = weights.w["scale"].item() if "scale" in weights.w else None
self.dora_scale = weights.w.get("dora_scale", None)
self.dora_norm_dims = len(self.shape) - 1
def multiplier(self):
if 'transformer' in self.sd_key[:20]:
return self.network.te_multiplier
@@ -134,10 +168,31 @@ class NetworkModule:
return 1.0
def apply_weight_decompose(self, updown, orig_weight):
# Match the device/dtype
orig_weight = orig_weight.to(updown.dtype)
dora_scale = self.dora_scale.to(device=orig_weight.device, dtype=updown.dtype)
updown = updown.to(orig_weight.device)
merged_scale1 = updown + orig_weight
merged_scale1_norm = (
merged_scale1.transpose(0, 1)
.reshape(merged_scale1.shape[1], -1)
.norm(dim=1, keepdim=True)
.reshape(merged_scale1.shape[1], *[1] * self.dora_norm_dims)
.transpose(0, 1)
)
dora_merged = (
merged_scale1 * (dora_scale / merged_scale1_norm)
)
final_updown = dora_merged - orig_weight
return final_updown
def finalize_updown(self, updown, orig_weight, output_shape, ex_bias=None):
if self.bias is not None:
updown = updown.reshape(self.bias.shape)
- updown += self.bias.to(orig_weight.device, dtype=orig_weight.dtype)
+ updown += self.bias.to(orig_weight.device, dtype=updown.dtype)
updown = updown.reshape(output_shape)
if len(output_shape) == 4:
@@ -149,11 +204,19 @@ class NetworkModule:
if ex_bias is not None:
ex_bias = ex_bias * self.multiplier()
if self.dora_scale is not None:
updown = self.apply_weight_decompose(updown, orig_weight)
return updown * self.calc_scale() * self.multiplier(), ex_bias
def calc_updown(self, target):
raise NotImplementedError()
def forward(self, x, y):
- raise NotImplementedError()
+ """A general forward implementation for all modules"""
+ if self.ops is None:
+     raise NotImplementedError()
+ else:
+     updown, ex_bias = self.calc_updown(self.sd_module.weight)
+     return y + self.ops(x, weight=updown, bias=ex_bias, **self.extra_kwargs)
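The `apply_weight_decompose` hunk above implements DoRA-style weight decomposition: rescale each column of the merged weight `W + dW` to a learned magnitude, then return the difference from `W` as the effective update. A minimal sketch of the same math for a plain 2-D Linear weight (shapes simplified; illustration only):

```python
import torch

orig_weight = torch.randn(8, 4)          # frozen base weight W, shape (out, in)
updown = 0.1 * torch.randn(8, 4)         # low-rank update dW
dora_scale = torch.rand(1, 4) + 0.5      # learned per-column magnitude m

merged = orig_weight + updown
col_norm = merged.norm(dim=0, keepdim=True)     # (1, in) column norms
dora_merged = merged * (dora_scale / col_norm)  # renormalize columns to m
final_updown = dora_merged - orig_weight        # what gets added back onto W
print(final_updown.shape)                       # torch.Size([8, 4])
```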
+2 -2
@@ -18,9 +18,9 @@ class NetworkModuleFull(network.NetworkModule):
def calc_updown(self, orig_weight):
output_shape = self.weight.shape
- updown = self.weight.to(orig_weight.device, dtype=orig_weight.dtype)
+ updown = self.weight.to(orig_weight.device)
if self.ex_bias is not None:
- ex_bias = self.ex_bias.to(orig_weight.device, dtype=orig_weight.dtype)
+ ex_bias = self.ex_bias.to(orig_weight.device)
else:
ex_bias = None
+5 -5
@@ -22,12 +22,12 @@ class NetworkModuleGLora(network.NetworkModule):
self.w2b = weights.w["b2.weight"]
def calc_updown(self, orig_weight):
- w1a = self.w1a.to(orig_weight.device, dtype=orig_weight.dtype)
- w1b = self.w1b.to(orig_weight.device, dtype=orig_weight.dtype)
- w2a = self.w2a.to(orig_weight.device, dtype=orig_weight.dtype)
- w2b = self.w2b.to(orig_weight.device, dtype=orig_weight.dtype)
+ w1a = self.w1a.to(orig_weight.device)
+ w1b = self.w1b.to(orig_weight.device)
+ w2a = self.w2a.to(orig_weight.device)
+ w2b = self.w2b.to(orig_weight.device)
output_shape = [w1a.size(0), w1b.size(1)]
- updown = ((w2b @ w1b) + ((orig_weight @ w2a) @ w1a))
+ updown = ((w2b @ w1b) + ((orig_weight.to(dtype = w1a.dtype) @ w2a) @ w1a))
return self.finalize_updown(updown, orig_weight, output_shape)
+6 -6
@@ -27,16 +27,16 @@ class NetworkModuleHada(network.NetworkModule):
self.t2 = weights.w.get("hada_t2")
def calc_updown(self, orig_weight):
- w1a = self.w1a.to(orig_weight.device, dtype=orig_weight.dtype)
- w1b = self.w1b.to(orig_weight.device, dtype=orig_weight.dtype)
- w2a = self.w2a.to(orig_weight.device, dtype=orig_weight.dtype)
- w2b = self.w2b.to(orig_weight.device, dtype=orig_weight.dtype)
+ w1a = self.w1a.to(orig_weight.device)
+ w1b = self.w1b.to(orig_weight.device)
+ w2a = self.w2a.to(orig_weight.device)
+ w2b = self.w2b.to(orig_weight.device)
output_shape = [w1a.size(0), w1b.size(1)]
if self.t1 is not None:
output_shape = [w1a.size(1), w1b.size(1)]
- t1 = self.t1.to(orig_weight.device, dtype=orig_weight.dtype)
+ t1 = self.t1.to(orig_weight.device)
updown1 = lyco_helpers.make_weight_cp(t1, w1a, w1b)
output_shape += t1.shape[2:]
else:
@@ -45,7 +45,7 @@ class NetworkModuleHada(network.NetworkModule):
updown1 = lyco_helpers.rebuild_conventional(w1a, w1b, output_shape)
if self.t2 is not None:
- t2 = self.t2.to(orig_weight.device, dtype=orig_weight.dtype)
+ t2 = self.t2.to(orig_weight.device)
updown2 = lyco_helpers.make_weight_cp(t2, w2a, w2b)
else:
updown2 = lyco_helpers.rebuild_conventional(w2a, w2b, output_shape)
+1 -1
@@ -17,7 +17,7 @@ class NetworkModuleIa3(network.NetworkModule):
self.on_input = weights.w["on_input"].item()
def calc_updown(self, orig_weight):
- w = self.w.to(orig_weight.device, dtype=orig_weight.dtype)
+ w = self.w.to(orig_weight.device)
output_shape = [w.size(0), orig_weight.size(1)]
if self.on_input:
+9 -9
@@ -37,22 +37,22 @@ class NetworkModuleLokr(network.NetworkModule):
def calc_updown(self, orig_weight):
if self.w1 is not None:
- w1 = self.w1.to(orig_weight.device, dtype=orig_weight.dtype)
+ w1 = self.w1.to(orig_weight.device)
else:
- w1a = self.w1a.to(orig_weight.device, dtype=orig_weight.dtype)
- w1b = self.w1b.to(orig_weight.device, dtype=orig_weight.dtype)
+ w1a = self.w1a.to(orig_weight.device)
+ w1b = self.w1b.to(orig_weight.device)
w1 = w1a @ w1b
if self.w2 is not None:
- w2 = self.w2.to(orig_weight.device, dtype=orig_weight.dtype)
+ w2 = self.w2.to(orig_weight.device)
elif self.t2 is None:
- w2a = self.w2a.to(orig_weight.device, dtype=orig_weight.dtype)
- w2b = self.w2b.to(orig_weight.device, dtype=orig_weight.dtype)
+ w2a = self.w2a.to(orig_weight.device)
+ w2b = self.w2b.to(orig_weight.device)
w2 = w2a @ w2b
else:
- t2 = self.t2.to(orig_weight.device, dtype=orig_weight.dtype)
- w2a = self.w2a.to(orig_weight.device, dtype=orig_weight.dtype)
- w2b = self.w2b.to(orig_weight.device, dtype=orig_weight.dtype)
+ t2 = self.t2.to(orig_weight.device)
+ w2a = self.w2a.to(orig_weight.device)
+ w2b = self.w2b.to(orig_weight.device)
w2 = lyco_helpers.make_weight_cp(t2, w2a, w2b)
output_shape = [w1.size(0) * w2.size(0), w1.size(1) * w2.size(1)]
+3 -3
@@ -61,13 +61,13 @@ class NetworkModuleLora(network.NetworkModule):
return module
def calc_updown(self, orig_weight):
- up = self.up_model.weight.to(orig_weight.device, dtype=orig_weight.dtype)
- down = self.down_model.weight.to(orig_weight.device, dtype=orig_weight.dtype)
+ up = self.up_model.weight.to(orig_weight.device)
+ down = self.down_model.weight.to(orig_weight.device)
output_shape = [up.size(0), down.size(1)]
if self.mid_model is not None:
# cp-decomposition
- mid = self.mid_model.weight.to(orig_weight.device, dtype=orig_weight.dtype)
+ mid = self.mid_model.weight.to(orig_weight.device)
updown = lyco_helpers.rebuild_cp_decomposition(up, down, mid)
output_shape += mid.shape[2:]
else:
+2 -2
@@ -18,10 +18,10 @@ class NetworkModuleNorm(network.NetworkModule):
def calc_updown(self, orig_weight):
output_shape = self.w_norm.shape
- updown = self.w_norm.to(orig_weight.device, dtype=orig_weight.dtype)
+ updown = self.w_norm.to(orig_weight.device)
if self.b_norm is not None:
- ex_bias = self.b_norm.to(orig_weight.device, dtype=orig_weight.dtype)
+ ex_bias = self.b_norm.to(orig_weight.device)
else:
ex_bias = None
+65 -29
@@ -1,6 +1,5 @@
import torch
import network
from lyco_helpers import factorization
from einops import rearrange
@@ -22,16 +21,17 @@ class NetworkModuleOFT(network.NetworkModule):
self.org_module: list[torch.Module] = [self.sd_module]
self.scale = 1.0
self.is_R = False
self.is_boft = False
- # kohya-ss
+ # kohya-ss/New LyCORIS OFT/BOFT
if "oft_blocks" in weights.w.keys():
self.is_kohya = True
self.oft_blocks = weights.w["oft_blocks"] # (num_blocks, block_size, block_size)
self.alpha = weights.w["alpha"] # alpha is constraint
self.alpha = weights.w.get("alpha", None) # alpha is constraint
self.dim = self.oft_blocks.shape[0] # lora dim
- # LyCORIS
+ # Old LyCORIS OFT
elif "oft_diag" in weights.w.keys():
- self.is_kohya = False
+ self.is_R = True
self.oft_blocks = weights.w["oft_diag"]
- # self.alpha is unused
self.dim = self.oft_blocks.shape[1] # (num_blocks, block_size, block_size)
@@ -47,36 +47,72 @@ class NetworkModuleOFT(network.NetworkModule):
elif is_other_linear:
self.out_dim = self.sd_module.embed_dim
- if self.is_kohya:
-     self.constraint = self.alpha * self.out_dim
-     self.num_blocks = self.dim
-     self.block_size = self.out_dim // self.dim
- else:
-     self.constraint = None
-     self.block_size, self.num_blocks = factorization(self.out_dim, self.dim)
+ # LyCORIS BOFT
+ if self.oft_blocks.dim() == 4:
+     self.is_boft = True
+ self.rescale = weights.w.get('rescale', None)
+ if self.rescale is not None and not is_other_linear:
+     self.rescale = self.rescale.reshape(-1, *[1]*(self.org_module[0].weight.dim() - 1))
+ self.num_blocks = self.dim
+ self.block_size = self.out_dim // self.dim
+ self.constraint = (0 if self.alpha is None else self.alpha) * self.out_dim
+ if self.is_R:
+     self.constraint = None
+     self.block_size = self.dim
+     self.num_blocks = self.out_dim // self.dim
+ elif self.is_boft:
+     self.boft_m = self.oft_blocks.shape[0]
+     self.num_blocks = self.oft_blocks.shape[1]
+     self.block_size = self.oft_blocks.shape[2]
+     self.boft_b = self.block_size
def calc_updown(self, orig_weight):
- oft_blocks = self.oft_blocks.to(orig_weight.device, dtype=orig_weight.dtype)
- eye = torch.eye(self.block_size, device=self.oft_blocks.device)
+ oft_blocks = self.oft_blocks.to(orig_weight.device)
+ eye = torch.eye(self.block_size, device=oft_blocks.device)
- if self.is_kohya:
-     block_Q = oft_blocks - oft_blocks.transpose(1, 2) # ensure skew-symmetric orthogonal matrix
-     norm_Q = torch.norm(block_Q.flatten())
-     new_norm_Q = torch.clamp(norm_Q, max=self.constraint)
-     block_Q = block_Q * ((new_norm_Q + 1e-8) / (norm_Q + 1e-8))
+ if not self.is_R:
+     block_Q = oft_blocks - oft_blocks.transpose(-1, -2) # ensure skew-symmetric orthogonal matrix
+     if self.constraint != 0:
+         norm_Q = torch.norm(block_Q.flatten())
+         new_norm_Q = torch.clamp(norm_Q, max=self.constraint.to(oft_blocks.device))
+         block_Q = block_Q * ((new_norm_Q + 1e-8) / (norm_Q + 1e-8))
oft_blocks = torch.matmul(eye + block_Q, (eye - block_Q).float().inverse())
- R = oft_blocks.to(orig_weight.device, dtype=orig_weight.dtype)
+ R = oft_blocks.to(orig_weight.device)
- # This errors out for MultiheadAttention, might need to be handled up-stream
- merged_weight = rearrange(orig_weight, '(k n) ... -> k n ...', k=self.num_blocks, n=self.block_size)
- merged_weight = torch.einsum(
-     'k n m, k n ... -> k m ...',
-     R,
-     merged_weight
- )
- merged_weight = rearrange(merged_weight, 'k m ... -> (k m) ...')
+ if not self.is_boft:
+     # This errors out for MultiheadAttention, might need to be handled up-stream
+     merged_weight = rearrange(orig_weight, '(k n) ... -> k n ...', k=self.num_blocks, n=self.block_size)
+     merged_weight = torch.einsum(
+         'k n m, k n ... -> k m ...',
+         R,
+         merged_weight
+     )
+     merged_weight = rearrange(merged_weight, 'k m ... -> (k m) ...')
+ else:
+     # TODO: determine correct value for scale
+     scale = 1.0
+     m = self.boft_m
+     b = self.boft_b
+     r_b = b // 2
+     inp = orig_weight
+     for i in range(m):
+         bi = R[i] # b_num, b_size, b_size
+         if i == 0:
+             # Apply multiplier/scale and rescale into first weight
+             bi = bi * scale + (1 - scale) * eye
+         inp = rearrange(inp, "(c g k) ... -> (c k g) ...", g=2, k=2**i * r_b)
+         inp = rearrange(inp, "(d b) ... -> d b ...", b=b)
+         inp = torch.einsum("b i j, b j ... -> b i ...", bi, inp)
+         inp = rearrange(inp, "d b ... -> (d b) ...")
+         inp = rearrange(inp, "(c k g) ... -> (c g k) ...", g=2, k=2**i * r_b)
+     merged_weight = inp
- updown = merged_weight.to(orig_weight.device, dtype=orig_weight.dtype) - orig_weight
+ # Rescale mechanism
+ if self.rescale is not None:
+     merged_weight = self.rescale.to(merged_weight) * merged_weight
+ updown = merged_weight.to(orig_weight.device) - orig_weight.to(merged_weight.dtype)
output_shape = orig_weight.shape
return self.finalize_updown(updown, orig_weight, output_shape)
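The `calc_updown` above builds its orthogonal blocks with a Cayley transform: for a skew-symmetric `Q`, `R = (I + Q)(I - Q)^-1` is orthogonal, so multiplying weight blocks by `R` rotates them without changing their norms. A small self-contained check of that property:

```python
import torch

n = 4
A = torch.randn(n, n)
Q = A - A.T                                   # skew-symmetric: Q.T == -Q
I = torch.eye(n)
R = (I + Q) @ torch.linalg.inv(I - Q)         # Cayley transform
assert torch.allclose(R @ R.T, I, atol=1e-5)  # R is orthogonal
```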
+35 -18
@@ -1,3 +1,4 @@
import gradio as gr
import logging
import os
import re
@@ -259,11 +260,11 @@ def load_networks(names, te_multipliers=None, unet_multipliers=None, dyn_dims=No
loaded_networks.clear()
- networks_on_disk = [available_network_aliases.get(name, None) for name in names]
+ networks_on_disk = [available_networks.get(name, None) if name.lower() in forbidden_network_aliases else available_network_aliases.get(name, None) for name in names]
if any(x is None for x in networks_on_disk):
list_available_networks()
- networks_on_disk = [available_network_aliases.get(name, None) for name in names]
+ networks_on_disk = [available_networks.get(name, None) if name.lower() in forbidden_network_aliases else available_network_aliases.get(name, None) for name in names]
failed_to_load_networks = []
@@ -314,7 +315,12 @@ def load_networks(names, te_multipliers=None, unet_multipliers=None, dyn_dims=No
emb_db.skipped_embeddings[name] = embedding
if failed_to_load_networks:
- sd_hijack.model_hijack.comments.append("Networks not found: " + ", ".join(failed_to_load_networks))
+ lora_not_found_message = f'Lora not found: {", ".join(failed_to_load_networks)}'
+ sd_hijack.model_hijack.comments.append(lora_not_found_message)
+ if shared.opts.lora_not_found_warning_console:
+     print(f'\n{lora_not_found_message}\n')
+ if shared.opts.lora_not_found_gradio_warning:
+     gr.Warning(lora_not_found_message)
purge_networks_from_memory()
@@ -349,7 +355,7 @@ def network_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn
"""
Applies the currently selected set of networks to the weights of torch layer self.
If weights already have this particular set of networks applied, does nothing.
If not, restores orginal weights from backup and alters weights according to networks.
If not, restores original weights from backup and alters weights according to networks.
"""
network_layer_name = getattr(self, 'network_layer_name', None)
@@ -389,18 +395,26 @@ def network_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn
if module is not None and hasattr(self, 'weight'):
try:
with torch.no_grad():
updown, ex_bias = module.calc_updown(self.weight)
if getattr(self, 'fp16_weight', None) is None:
weight = self.weight
bias = self.bias
else:
weight = self.fp16_weight.clone().to(self.weight.device)
bias = getattr(self, 'fp16_bias', None)
if bias is not None:
bias = bias.clone().to(self.bias.device)
updown, ex_bias = module.calc_updown(weight)
if len(self.weight.shape) == 4 and self.weight.shape[1] == 9:
if len(weight.shape) == 4 and weight.shape[1] == 9:
# inpainting model: zero-pad updown to expand channel[1] from 4 to 9
updown = torch.nn.functional.pad(updown, (0, 0, 0, 0, 0, 5))
self.weight += updown
self.weight.copy_((weight.to(dtype=updown.dtype) + updown).to(dtype=self.weight.dtype))
if ex_bias is not None and hasattr(self, 'bias'):
if self.bias is None:
self.bias = torch.nn.Parameter(ex_bias)
self.bias = torch.nn.Parameter(ex_bias).to(self.weight.dtype)
else:
self.bias += ex_bias
self.bias.copy_((bias + ex_bias).to(dtype=self.bias.dtype))
except RuntimeError as e:
logging.debug(f"Network {net.name} layer {network_layer_name}: {e}")
extra_network_lora.errors[net.name] = extra_network_lora.errors.get(net.name, 0) + 1
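The hunk above switches from an in-place self.weight += updown to computing the delta against a retained higher-precision copy (fp16_weight, when present) and writing back through copy_ with one explicit dtype round trip. A hedged sketch of that write-back pattern; 'live' and 'master' are illustrative stand-ins for self.weight and self.fp16_weight:
import torch
live = torch.randn(4, 4).to(torch.float16)   # what the layer actually stores
master = live.clone().to(torch.float32)      # higher-precision reference copy
updown = torch.full_like(master, 1e-4)       # small LoRA delta
with torch.no_grad():
    # accumulate in the wider dtype, cast exactly once on write-back
    live.copy_((master + updown).to(live.dtype))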
@@ -415,9 +429,12 @@ def network_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn
if isinstance(self, torch.nn.MultiheadAttention) and module_q and module_k and module_v and module_out:
try:
with torch.no_grad():
updown_q, _ = module_q.calc_updown(self.in_proj_weight)
updown_k, _ = module_k.calc_updown(self.in_proj_weight)
updown_v, _ = module_v.calc_updown(self.in_proj_weight)
# Send "real" orig_weight into MHA's lora module
qw, kw, vw = self.in_proj_weight.chunk(3, 0)
updown_q, _ = module_q.calc_updown(qw)
updown_k, _ = module_k.calc_updown(kw)
updown_v, _ = module_v.calc_updown(vw)
del qw, kw, vw
updown_qkv = torch.vstack([updown_q, updown_k, updown_v])
updown_out, ex_bias = module_out.calc_updown(self.out_proj.weight)
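This works because MultiheadAttention stores the Q, K and V projections stacked along dim 0 of in_proj_weight, so chunk(3, 0) splits them apart and vstack reassembles the combined update with matching shape. A quick shape check with an invented embedding size:
import torch
embed_dim = 8  # illustrative
in_proj_weight = torch.randn(3 * embed_dim, embed_dim)
qw, kw, vw = in_proj_weight.chunk(3, 0)  # each (embed_dim, embed_dim)
updown_qkv = torch.vstack([torch.zeros_like(qw), torch.zeros_like(kw), torch.zeros_like(vw)])
assert updown_qkv.shape == in_proj_weight.shape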
@@ -444,23 +461,23 @@ def network_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn
self.network_current_names = wanted_names
def network_forward(module, input, original_forward):
def network_forward(org_module, input, original_forward):
"""
Old way of applying Lora by executing operations during layer's forward.
Stacking many loras this way results in severe performance degradation.
"""
if len(loaded_networks) == 0:
return original_forward(module, input)
return original_forward(org_module, input)
input = devices.cond_cast_unet(input)
network_restore_weights_from_backup(module)
network_reset_cached_weight(module)
network_restore_weights_from_backup(org_module)
network_reset_cached_weight(org_module)
y = original_forward(module, input)
y = original_forward(org_module, input)
network_layer_name = getattr(module, 'network_layer_name', None)
network_layer_name = getattr(org_module, 'network_layer_name', None)
for lora in loaded_networks:
module = lora.modules.get(network_layer_name, None)
if module is None:
+3 -2
@@ -1,7 +1,8 @@
import os
from modules import paths
from modules.paths_internal import normalized_filepath
def preload(parser):
parser.add_argument("--lora-dir", type=str, help="Path to directory with Lora networks.", default=os.path.join(paths.models_path, 'Lora'))
parser.add_argument("--lyco-dir-backcompat", type=str, help="Path to directory with LyCORIS networks (for backawards compatibility; can also use --lyco-dir).", default=os.path.join(paths.models_path, 'LyCORIS'))
parser.add_argument("--lora-dir", type=normalized_filepath, help="Path to directory with Lora networks.", default=os.path.join(paths.models_path, 'Lora'))
parser.add_argument("--lyco-dir-backcompat", type=normalized_filepath, help="Path to directory with LyCORIS networks (for backawards compatibility; can also use --lyco-dir).", default=os.path.join(paths.models_path, 'LyCORIS'))
@@ -39,6 +39,8 @@ shared.options_templates.update(shared.options_section(('extra_networks', "Extra
"lora_show_all": shared.OptionInfo(False, "Always show all networks on the Lora page").info("otherwise, those detected as for incompatible version of Stable Diffusion will be hidden"),
"lora_hide_unknown_for_versions": shared.OptionInfo([], "Hide networks of unknown versions for model versions", gr.CheckboxGroup, {"choices": ["SD1", "SD2", "SDXL"]}),
"lora_in_memory_limit": shared.OptionInfo(0, "Number of Lora networks to keep cached in memory", gr.Number, {"precision": 0}),
"lora_not_found_warning_console": shared.OptionInfo(False, "Lora not found warning in console"),
"lora_not_found_gradio_warning": shared.OptionInfo(False, "Lora not found warning popup in webui"),
}))
@@ -54,12 +54,13 @@ class LoraUserMetadataEditor(ui_extra_networks_user_metadata.UserMetadataEditor)
self.slider_preferred_weight = None
self.edit_notes = None
def save_lora_user_metadata(self, name, desc, sd_version, activation_text, preferred_weight, notes):
def save_lora_user_metadata(self, name, desc, sd_version, activation_text, preferred_weight, negative_text, notes):
user_metadata = self.get_user_metadata(name)
user_metadata["description"] = desc
user_metadata["sd version"] = sd_version
user_metadata["activation text"] = activation_text
user_metadata["preferred weight"] = preferred_weight
user_metadata["negative text"] = negative_text
user_metadata["notes"] = notes
self.write_user_metadata(name, user_metadata)
@@ -127,6 +128,7 @@ class LoraUserMetadataEditor(ui_extra_networks_user_metadata.UserMetadataEditor)
gr.HighlightedText.update(value=gradio_tags, visible=True if tags else False),
user_metadata.get('activation text', ''),
float(user_metadata.get('preferred weight', 0.0)),
user_metadata.get('negative text', ''),
gr.update(visible=True if tags else False),
gr.update(value=self.generate_random_prompt_from_tags(tags), visible=True if tags else False),
]
@@ -147,6 +149,8 @@ class LoraUserMetadataEditor(ui_extra_networks_user_metadata.UserMetadataEditor)
v = random.random() * max_count
if count > v:
for x in "({[]})":
tag = tag.replace(x, '\\' + x)
res.append(tag)
return ", ".join(sorted(res))
@@ -162,7 +166,7 @@ class LoraUserMetadataEditor(ui_extra_networks_user_metadata.UserMetadataEditor)
self.taginfo = gr.HighlightedText(label="Training dataset tags")
self.edit_activation_text = gr.Text(label='Activation text', info="Will be added to prompt along with Lora")
self.slider_preferred_weight = gr.Slider(label='Preferred weight', info="Set to 0 to disable", minimum=0.0, maximum=2.0, step=0.01)
self.edit_negative_text = gr.Text(label='Negative prompt', info="Will be added to negative prompts")
with gr.Row() as row_random_prompt:
with gr.Column(scale=8):
random_prompt = gr.Textbox(label='Random prompt', lines=4, max_lines=4, interactive=False)
@@ -198,6 +202,7 @@ class LoraUserMetadataEditor(ui_extra_networks_user_metadata.UserMetadataEditor)
self.taginfo,
self.edit_activation_text,
self.slider_preferred_weight,
self.edit_negative_text,
row_random_prompt,
random_prompt,
]
@@ -211,7 +216,9 @@ class LoraUserMetadataEditor(ui_extra_networks_user_metadata.UserMetadataEditor)
self.select_sd_version,
self.edit_activation_text,
self.slider_preferred_weight,
self.edit_negative_text,
self.edit_notes,
]
self.setup_save_handler(self.button_save, self.save_lora_user_metadata, edited_components)
@@ -24,13 +24,16 @@ class ExtraNetworksPageLora(ui_extra_networks.ExtraNetworksPage):
alias = lora_on_disk.get_alias()
search_terms = [self.search_terms_from_path(lora_on_disk.filename)]
if lora_on_disk.hash:
search_terms.append(lora_on_disk.hash)
item = {
"name": name,
"filename": lora_on_disk.filename,
"shorthash": lora_on_disk.shorthash,
"preview": self.find_preview(path),
"preview": self.find_preview(path) or self.find_embedded_preview(path, name, lora_on_disk.metadata),
"description": self.find_description(path),
"search_term": self.search_terms_from_path(lora_on_disk.filename) + " " + (lora_on_disk.hash or ""),
"search_terms": search_terms,
"local_preview": f"{path}.{shared.opts.samples_format}",
"metadata": lora_on_disk.metadata,
"sort_keys": {'default': index, **self.get_sort_keys(lora_on_disk.filename)},
@@ -45,6 +48,11 @@ class ExtraNetworksPageLora(ui_extra_networks.ExtraNetworksPage):
if activation_text:
item["prompt"] += " + " + quote_js(" " + activation_text)
negative_prompt = item["user_metadata"].get("negative text")
item["negative_prompt"] = quote_js("")
if negative_prompt:
item["negative_prompt"] = quote_js('(' + negative_prompt + ':1)')
sd_version = item["user_metadata"].get("sd version")
if sd_version in network.SdVersion.__members__:
item["sd_version"] = sd_version
@@ -1,16 +1,9 @@
import sys
import PIL.Image
import numpy as np
import torch
from tqdm import tqdm
import modules.upscaler
from modules import devices, modelloader, script_callbacks, errors
from scunet_model_arch import SCUNet
from modules.modelloader import load_file_from_url
from modules.shared import opts
from modules import devices, errors, modelloader, script_callbacks, shared, upscaler_utils
class UpscalerScuNET(modules.upscaler.Upscaler):
@@ -42,100 +35,37 @@ class UpscalerScuNET(modules.upscaler.Upscaler):
scalers.append(scaler_data2)
self.scalers = scalers
@staticmethod
@torch.no_grad()
def tiled_inference(img, model):
# test the image tile by tile
h, w = img.shape[2:]
tile = opts.SCUNET_tile
tile_overlap = opts.SCUNET_tile_overlap
if tile == 0:
return model(img)
device = devices.get_device_for('scunet')
assert tile % 8 == 0, "tile size should be a multiple of window_size"
sf = 1
stride = tile - tile_overlap
h_idx_list = list(range(0, h - tile, stride)) + [h - tile]
w_idx_list = list(range(0, w - tile, stride)) + [w - tile]
E = torch.zeros(1, 3, h * sf, w * sf, dtype=img.dtype, device=device)
W = torch.zeros_like(E, dtype=devices.dtype, device=device)
with tqdm(total=len(h_idx_list) * len(w_idx_list), desc="ScuNET tiles") as pbar:
for h_idx in h_idx_list:
for w_idx in w_idx_list:
in_patch = img[..., h_idx: h_idx + tile, w_idx: w_idx + tile]
out_patch = model(in_patch)
out_patch_mask = torch.ones_like(out_patch)
E[
..., h_idx * sf: (h_idx + tile) * sf, w_idx * sf: (w_idx + tile) * sf
].add_(out_patch)
W[
..., h_idx * sf: (h_idx + tile) * sf, w_idx * sf: (w_idx + tile) * sf
].add_(out_patch_mask)
pbar.update(1)
output = E.div_(W)
return output
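The removed tiled_inference is the standard overlap-add pattern: accumulate model outputs into E, accumulate a ones-mask into W over the same windows, and divide so overlapping regions average out instead of seaming. A stripped-down sketch of the same idea (an identity model stands in for SCUNet, sizes are illustrative):
import torch
def tiled_apply(img, model, tile=64, overlap=8):
    _, _, h, w = img.shape
    stride = tile - overlap
    h_idx = list(range(0, h - tile, stride)) + [h - tile]
    w_idx = list(range(0, w - tile, stride)) + [w - tile]
    E = torch.zeros_like(img)
    W = torch.zeros_like(img)
    for hi in h_idx:
        for wi in w_idx:
            out = model(img[..., hi:hi + tile, wi:wi + tile])
            E[..., hi:hi + tile, wi:wi + tile] += out
            W[..., hi:hi + tile, wi:wi + tile] += torch.ones_like(out)
    return E / W  # overlapping areas are averaged
x = torch.rand(1, 3, 128, 128)
print(torch.allclose(tiled_apply(x, lambda p: p), x))  # identity model round-trips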
def do_upscale(self, img: PIL.Image.Image, selected_file):
devices.torch_gc()
try:
model = self.load_model(selected_file)
except Exception as e:
print(f"ScuNET: Unable to load model from {selected_file}: {e}", file=sys.stderr)
return img
device = devices.get_device_for('scunet')
tile = opts.SCUNET_tile
h, w = img.height, img.width
np_img = np.array(img)
np_img = np_img[:, :, ::-1] # RGB to BGR
np_img = np_img.transpose((2, 0, 1)) / 255 # HWC to CHW
torch_img = torch.from_numpy(np_img).float().unsqueeze(0).to(device) # type: ignore
if tile > h or tile > w:
_img = torch.zeros(1, 3, max(h, tile), max(w, tile), dtype=torch_img.dtype, device=torch_img.device)
_img[:, :, :h, :w] = torch_img # pad image
torch_img = _img
torch_output = self.tiled_inference(torch_img, model).squeeze(0)
torch_output = torch_output[:, :h * 1, :w * 1] # remove padding, if any
np_output: np.ndarray = torch_output.float().cpu().clamp_(0, 1).numpy()
del torch_img, torch_output
img = upscaler_utils.upscale_2(
img,
model,
tile_size=shared.opts.SCUNET_tile,
tile_overlap=shared.opts.SCUNET_tile_overlap,
scale=1, # ScuNET is a denoising model, not an upscaler
desc='ScuNET',
)
devices.torch_gc()
output = np_output.transpose((1, 2, 0)) # CHW to HWC
output = output[:, :, ::-1] # BGR to RGB
return PIL.Image.fromarray((output * 255).astype(np.uint8))
return img
def load_model(self, path: str):
device = devices.get_device_for('scunet')
if path.startswith("http"):
# TODO: this doesn't use `path` at all?
filename = load_file_from_url(self.model_url, model_dir=self.model_download_path, file_name=f"{self.name}.pth")
filename = modelloader.load_file_from_url(self.model_url, model_dir=self.model_download_path, file_name=f"{self.name}.pth")
else:
filename = path
model = SCUNet(in_nc=3, config=[4, 4, 4, 4, 4, 4, 4], dim=64)
model.load_state_dict(torch.load(filename), strict=True)
model.eval()
for _, v in model.named_parameters():
v.requires_grad = False
model = model.to(device)
return model
return modelloader.load_spandrel_model(filename, device=device, expected_architecture='SCUNet')
def on_ui_settings():
import gradio as gr
from modules import shared
shared.opts.add_option("SCUNET_tile", shared.OptionInfo(256, "Tile size for SCUNET upscalers.", gr.Slider, {"minimum": 0, "maximum": 512, "step": 16}, section=('upscaling', "Upscaling")).info("0 = no tiling"))
shared.opts.add_option("SCUNET_tile_overlap", shared.OptionInfo(8, "Tile overlap for SCUNET upscalers.", gr.Slider, {"minimum": 0, "maximum": 64, "step": 1}, section=('upscaling', "Upscaling")).info("Low values = visible seam"))
@@ -1,268 +0,0 @@
# -*- coding: utf-8 -*-
import numpy as np
import torch
import torch.nn as nn
from einops import rearrange
from einops.layers.torch import Rearrange
from timm.models.layers import trunc_normal_, DropPath
class WMSA(nn.Module):
""" Self-attention module in Swin Transformer
"""
def __init__(self, input_dim, output_dim, head_dim, window_size, type):
super(WMSA, self).__init__()
self.input_dim = input_dim
self.output_dim = output_dim
self.head_dim = head_dim
self.scale = self.head_dim ** -0.5
self.n_heads = input_dim // head_dim
self.window_size = window_size
self.type = type
self.embedding_layer = nn.Linear(self.input_dim, 3 * self.input_dim, bias=True)
self.relative_position_params = nn.Parameter(
torch.zeros((2 * window_size - 1) * (2 * window_size - 1), self.n_heads))
self.linear = nn.Linear(self.input_dim, self.output_dim)
trunc_normal_(self.relative_position_params, std=.02)
self.relative_position_params = torch.nn.Parameter(
    self.relative_position_params.view(2 * window_size - 1, 2 * window_size - 1, self.n_heads).transpose(1, 2).transpose(0, 1))
def generate_mask(self, h, w, p, shift):
""" generating the mask of SW-MSA
Args:
shift: shift parameters in CyclicShift.
Returns:
attn_mask: should be (1 1 w p p),
"""
# supporting square.
attn_mask = torch.zeros(h, w, p, p, p, p, dtype=torch.bool, device=self.relative_position_params.device)
if self.type == 'W':
return attn_mask
s = p - shift
attn_mask[-1, :, :s, :, s:, :] = True
attn_mask[-1, :, s:, :, :s, :] = True
attn_mask[:, -1, :, :s, :, s:] = True
attn_mask[:, -1, :, s:, :, :s] = True
attn_mask = rearrange(attn_mask, 'w1 w2 p1 p2 p3 p4 -> 1 1 (w1 w2) (p1 p2) (p3 p4)')
return attn_mask
def forward(self, x):
""" Forward pass of Window Multi-head Self-attention module.
Args:
x: input tensor with shape of [b h w c];
attn_mask: attention mask, fill -inf where the value is True;
Returns:
output: tensor shape [b h w c]
"""
if self.type != 'W':
x = torch.roll(x, shifts=(-(self.window_size // 2), -(self.window_size // 2)), dims=(1, 2))
x = rearrange(x, 'b (w1 p1) (w2 p2) c -> b w1 w2 p1 p2 c', p1=self.window_size, p2=self.window_size)
h_windows = x.size(1)
w_windows = x.size(2)
# square validation
# assert h_windows == w_windows
x = rearrange(x, 'b w1 w2 p1 p2 c -> b (w1 w2) (p1 p2) c', p1=self.window_size, p2=self.window_size)
qkv = self.embedding_layer(x)
q, k, v = rearrange(qkv, 'b nw np (threeh c) -> threeh b nw np c', c=self.head_dim).chunk(3, dim=0)
sim = torch.einsum('hbwpc,hbwqc->hbwpq', q, k) * self.scale
# Adding learnable relative embedding
sim = sim + rearrange(self.relative_embedding(), 'h p q -> h 1 1 p q')
# Using Attn Mask to distinguish different subwindows.
if self.type != 'W':
attn_mask = self.generate_mask(h_windows, w_windows, self.window_size, shift=self.window_size // 2)
sim = sim.masked_fill_(attn_mask, float("-inf"))
probs = nn.functional.softmax(sim, dim=-1)
output = torch.einsum('hbwij,hbwjc->hbwic', probs, v)
output = rearrange(output, 'h b w p c -> b w p (h c)')
output = self.linear(output)
output = rearrange(output, 'b (w1 w2) (p1 p2) c -> b (w1 p1) (w2 p2) c', w1=h_windows, p1=self.window_size)
if self.type != 'W':
output = torch.roll(output, shifts=(self.window_size // 2, self.window_size // 2), dims=(1, 2))
return output
def relative_embedding(self):
cord = torch.tensor(np.array([[i, j] for i in range(self.window_size) for j in range(self.window_size)]))
relation = cord[:, None, :] - cord[None, :, :] + self.window_size - 1
# negative is allowed
return self.relative_position_params[:, relation[:, :, 0].long(), relation[:, :, 1].long()]
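relative_embedding shifts every pairwise 2-D offset into [0, 2*window_size - 2] so it can index the learned bias table directly. A tiny worked check of the index arithmetic for window_size=2, standalone rather than through the module:
import torch
window_size = 2
cord = torch.tensor([[i, j] for i in range(window_size) for j in range(window_size)])
relation = cord[:, None, :] - cord[None, :, :] + window_size - 1
print(relation.shape)                                 # torch.Size([4, 4, 2]): all position pairs
print(relation.min().item(), relation.max().item())   # 0 2, i.e. within [0, 2*window_size - 2]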
class Block(nn.Module):
def __init__(self, input_dim, output_dim, head_dim, window_size, drop_path, type='W', input_resolution=None):
""" SwinTransformer Block
"""
super(Block, self).__init__()
self.input_dim = input_dim
self.output_dim = output_dim
assert type in ['W', 'SW']
self.type = type
if input_resolution <= window_size:
self.type = 'W'
self.ln1 = nn.LayerNorm(input_dim)
self.msa = WMSA(input_dim, input_dim, head_dim, window_size, self.type)
self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.ln2 = nn.LayerNorm(input_dim)
self.mlp = nn.Sequential(
nn.Linear(input_dim, 4 * input_dim),
nn.GELU(),
nn.Linear(4 * input_dim, output_dim),
)
def forward(self, x):
x = x + self.drop_path(self.msa(self.ln1(x)))
x = x + self.drop_path(self.mlp(self.ln2(x)))
return x
class ConvTransBlock(nn.Module):
def __init__(self, conv_dim, trans_dim, head_dim, window_size, drop_path, type='W', input_resolution=None):
""" SwinTransformer and Conv Block
"""
super(ConvTransBlock, self).__init__()
self.conv_dim = conv_dim
self.trans_dim = trans_dim
self.head_dim = head_dim
self.window_size = window_size
self.drop_path = drop_path
self.type = type
self.input_resolution = input_resolution
assert self.type in ['W', 'SW']
if self.input_resolution <= self.window_size:
self.type = 'W'
self.trans_block = Block(self.trans_dim, self.trans_dim, self.head_dim, self.window_size, self.drop_path,
self.type, self.input_resolution)
self.conv1_1 = nn.Conv2d(self.conv_dim + self.trans_dim, self.conv_dim + self.trans_dim, 1, 1, 0, bias=True)
self.conv1_2 = nn.Conv2d(self.conv_dim + self.trans_dim, self.conv_dim + self.trans_dim, 1, 1, 0, bias=True)
self.conv_block = nn.Sequential(
nn.Conv2d(self.conv_dim, self.conv_dim, 3, 1, 1, bias=False),
nn.ReLU(True),
nn.Conv2d(self.conv_dim, self.conv_dim, 3, 1, 1, bias=False)
)
def forward(self, x):
conv_x, trans_x = torch.split(self.conv1_1(x), (self.conv_dim, self.trans_dim), dim=1)
conv_x = self.conv_block(conv_x) + conv_x
trans_x = Rearrange('b c h w -> b h w c')(trans_x)
trans_x = self.trans_block(trans_x)
trans_x = Rearrange('b h w c -> b c h w')(trans_x)
res = self.conv1_2(torch.cat((conv_x, trans_x), dim=1))
x = x + res
return x
class SCUNet(nn.Module):
# def __init__(self, in_nc=3, config=[2, 2, 2, 2, 2, 2, 2], dim=64, drop_path_rate=0.0, input_resolution=256):
def __init__(self, in_nc=3, config=None, dim=64, drop_path_rate=0.0, input_resolution=256):
super(SCUNet, self).__init__()
if config is None:
config = [2, 2, 2, 2, 2, 2, 2]
self.config = config
self.dim = dim
self.head_dim = 32
self.window_size = 8
# drop path rate for each layer
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(config))]
self.m_head = [nn.Conv2d(in_nc, dim, 3, 1, 1, bias=False)]
begin = 0
self.m_down1 = [ConvTransBlock(dim // 2, dim // 2, self.head_dim, self.window_size, dpr[i + begin],
'W' if not i % 2 else 'SW', input_resolution)
for i in range(config[0])] + \
[nn.Conv2d(dim, 2 * dim, 2, 2, 0, bias=False)]
begin += config[0]
self.m_down2 = [ConvTransBlock(dim, dim, self.head_dim, self.window_size, dpr[i + begin],
'W' if not i % 2 else 'SW', input_resolution // 2)
for i in range(config[1])] + \
[nn.Conv2d(2 * dim, 4 * dim, 2, 2, 0, bias=False)]
begin += config[1]
self.m_down3 = [ConvTransBlock(2 * dim, 2 * dim, self.head_dim, self.window_size, dpr[i + begin],
'W' if not i % 2 else 'SW', input_resolution // 4)
for i in range(config[2])] + \
[nn.Conv2d(4 * dim, 8 * dim, 2, 2, 0, bias=False)]
begin += config[2]
self.m_body = [ConvTransBlock(4 * dim, 4 * dim, self.head_dim, self.window_size, dpr[i + begin],
'W' if not i % 2 else 'SW', input_resolution // 8)
for i in range(config[3])]
begin += config[3]
self.m_up3 = [nn.ConvTranspose2d(8 * dim, 4 * dim, 2, 2, 0, bias=False), ] + \
[ConvTransBlock(2 * dim, 2 * dim, self.head_dim, self.window_size, dpr[i + begin],
'W' if not i % 2 else 'SW', input_resolution // 4)
for i in range(config[4])]
begin += config[4]
self.m_up2 = [nn.ConvTranspose2d(4 * dim, 2 * dim, 2, 2, 0, bias=False), ] + \
[ConvTransBlock(dim, dim, self.head_dim, self.window_size, dpr[i + begin],
'W' if not i % 2 else 'SW', input_resolution // 2)
for i in range(config[5])]
begin += config[5]
self.m_up1 = [nn.ConvTranspose2d(2 * dim, dim, 2, 2, 0, bias=False), ] + \
[ConvTransBlock(dim // 2, dim // 2, self.head_dim, self.window_size, dpr[i + begin],
'W' if not i % 2 else 'SW', input_resolution)
for i in range(config[6])]
self.m_tail = [nn.Conv2d(dim, in_nc, 3, 1, 1, bias=False)]
self.m_head = nn.Sequential(*self.m_head)
self.m_down1 = nn.Sequential(*self.m_down1)
self.m_down2 = nn.Sequential(*self.m_down2)
self.m_down3 = nn.Sequential(*self.m_down3)
self.m_body = nn.Sequential(*self.m_body)
self.m_up3 = nn.Sequential(*self.m_up3)
self.m_up2 = nn.Sequential(*self.m_up2)
self.m_up1 = nn.Sequential(*self.m_up1)
self.m_tail = nn.Sequential(*self.m_tail)
# self.apply(self._init_weights)
def forward(self, x0):
h, w = x0.size()[-2:]
paddingBottom = int(np.ceil(h / 64) * 64 - h)
paddingRight = int(np.ceil(w / 64) * 64 - w)
x0 = nn.ReplicationPad2d((0, paddingRight, 0, paddingBottom))(x0)
x1 = self.m_head(x0)
x2 = self.m_down1(x1)
x3 = self.m_down2(x2)
x4 = self.m_down3(x3)
x = self.m_body(x4)
x = self.m_up3(x + x4)
x = self.m_up2(x + x3)
x = self.m_up1(x + x2)
x = self.m_tail(x + x1)
x = x[..., :h, :w]
return x
def _init_weights(self, m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=.02)
if m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.bias, 0)
nn.init.constant_(m.weight, 1.0)
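For reference, a minimal smoke test of the architecture above, assuming this file is importable as scunet_model_arch: the forward pass pads to a multiple of 64 internally and crops back, and SCUNet is a 1x model, so output shape equals input shape.
import torch
from scunet_model_arch import SCUNet  # assumes this module is on sys.path
model = SCUNet(in_nc=3, config=[2, 2, 2, 2, 2, 2, 2], dim=64).eval()
with torch.no_grad():
    y = model(torch.randn(1, 3, 100, 100))  # non-multiple of 64 exercises the padding
print(y.shape)  # torch.Size([1, 3, 100, 100])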
+32 -129
@@ -1,20 +1,15 @@
import logging
import sys
import platform
import numpy as np
import torch
from PIL import Image
from tqdm import tqdm
from modules import modelloader, devices, script_callbacks, shared
from modules.shared import opts, state
from swinir_model_arch import SwinIR
from swinir_model_arch_v2 import Swin2SR
from modules import devices, modelloader, script_callbacks, shared, upscaler_utils
from modules.upscaler import Upscaler, UpscalerData
SWINIR_MODEL_URL = "https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/003_realSR_BSRGAN_DFOWMFC_s64w8_SwinIR-L_x4_GAN.pth"
device_swinir = devices.get_device_for('swinir')
logger = logging.getLogger(__name__)
class UpscalerSwinIR(Upscaler):
@@ -37,26 +32,28 @@ class UpscalerSwinIR(Upscaler):
scalers.append(model_data)
self.scalers = scalers
def do_upscale(self, img, model_file):
use_compile = hasattr(opts, 'SWIN_torch_compile') and opts.SWIN_torch_compile \
and int(torch.__version__.split('.')[0]) >= 2 and platform.system() != "Windows"
current_config = (model_file, opts.SWIN_tile)
def do_upscale(self, img: Image.Image, model_file: str) -> Image.Image:
current_config = (model_file, shared.opts.SWIN_tile)
if use_compile and self._cached_model_config == current_config:
if self._cached_model_config == current_config:
model = self._cached_model
else:
self._cached_model = None
try:
model = self.load_model(model_file)
except Exception as e:
print(f"Failed loading SwinIR model {model_file}: {e}", file=sys.stderr)
return img
model = model.to(device_swinir, dtype=devices.dtype)
if use_compile:
model = torch.compile(model)
self._cached_model = model
self._cached_model_config = current_config
img = upscale(img, model)
self._cached_model = model
self._cached_model_config = current_config
img = upscaler_utils.upscale_2(
img,
model,
tile_size=shared.opts.SWIN_tile,
tile_overlap=shared.opts.SWIN_tile_overlap,
scale=model.scale,
desc="SwinIR",
)
devices.torch_gc()
return img
@@ -69,115 +66,22 @@ class UpscalerSwinIR(Upscaler):
)
else:
filename = path
if filename.endswith(".v2.pth"):
model = Swin2SR(
upscale=scale,
in_chans=3,
img_size=64,
window_size=8,
img_range=1.0,
depths=[6, 6, 6, 6, 6, 6],
embed_dim=180,
num_heads=[6, 6, 6, 6, 6, 6],
mlp_ratio=2,
upsampler="nearest+conv",
resi_connection="1conv",
)
params = None
else:
model = SwinIR(
upscale=scale,
in_chans=3,
img_size=64,
window_size=8,
img_range=1.0,
depths=[6, 6, 6, 6, 6, 6, 6, 6, 6],
embed_dim=240,
num_heads=[8, 8, 8, 8, 8, 8, 8, 8, 8],
mlp_ratio=2,
upsampler="nearest+conv",
resi_connection="3conv",
)
params = "params_ema"
pretrained_model = torch.load(filename)
if params is not None:
model.load_state_dict(pretrained_model[params], strict=True)
else:
model.load_state_dict(pretrained_model, strict=True)
return model
model_descriptor = modelloader.load_spandrel_model(
filename,
device=self._get_device(),
prefer_half=(devices.dtype == torch.float16),
expected_architecture="SwinIR",
)
if getattr(shared.opts, 'SWIN_torch_compile', False):
try:
model_descriptor.model.compile()
except Exception:
logger.warning("Failed to compile SwinIR model, fallback to JIT", exc_info=True)
return model_descriptor
def upscale(
img,
model,
tile=None,
tile_overlap=None,
window_size=8,
scale=4,
):
tile = tile or opts.SWIN_tile
tile_overlap = tile_overlap or opts.SWIN_tile_overlap
img = np.array(img)
img = img[:, :, ::-1]
img = np.moveaxis(img, 2, 0) / 255
img = torch.from_numpy(img).float()
img = img.unsqueeze(0).to(device_swinir, dtype=devices.dtype)
with torch.no_grad(), devices.autocast():
_, _, h_old, w_old = img.size()
h_pad = (h_old // window_size + 1) * window_size - h_old
w_pad = (w_old // window_size + 1) * window_size - w_old
img = torch.cat([img, torch.flip(img, [2])], 2)[:, :, : h_old + h_pad, :]
img = torch.cat([img, torch.flip(img, [3])], 3)[:, :, :, : w_old + w_pad]
output = inference(img, model, tile, tile_overlap, window_size, scale)
output = output[..., : h_old * scale, : w_old * scale]
output = output.data.squeeze().float().cpu().clamp_(0, 1).numpy()
if output.ndim == 3:
output = np.transpose(
output[[2, 1, 0], :, :], (1, 2, 0)
) # CHW-RGB to HWC-BGR
output = (output * 255.0).round().astype(np.uint8) # float32 to uint8
return Image.fromarray(output, "RGB")
def inference(img, model, tile, tile_overlap, window_size, scale):
# test the image tile by tile
b, c, h, w = img.size()
tile = min(tile, h, w)
assert tile % window_size == 0, "tile size should be a multiple of window_size"
sf = scale
stride = tile - tile_overlap
h_idx_list = list(range(0, h - tile, stride)) + [h - tile]
w_idx_list = list(range(0, w - tile, stride)) + [w - tile]
E = torch.zeros(b, c, h * sf, w * sf, dtype=devices.dtype, device=device_swinir).type_as(img)
W = torch.zeros_like(E, dtype=devices.dtype, device=device_swinir)
with tqdm(total=len(h_idx_list) * len(w_idx_list), desc="SwinIR tiles") as pbar:
for h_idx in h_idx_list:
if state.interrupted or state.skipped:
break
for w_idx in w_idx_list:
if state.interrupted or state.skipped:
break
in_patch = img[..., h_idx: h_idx + tile, w_idx: w_idx + tile]
out_patch = model(in_patch)
out_patch_mask = torch.ones_like(out_patch)
E[
..., h_idx * sf: (h_idx + tile) * sf, w_idx * sf: (w_idx + tile) * sf
].add_(out_patch)
W[
..., h_idx * sf: (h_idx + tile) * sf, w_idx * sf: (w_idx + tile) * sf
].add_(out_patch_mask)
pbar.update(1)
output = E.div_(W)
return output
def _get_device(self):
return devices.get_device_for('swinir')
def on_ui_settings():
@@ -185,8 +89,7 @@ def on_ui_settings():
shared.opts.add_option("SWIN_tile", shared.OptionInfo(192, "Tile size for all SwinIR.", gr.Slider, {"minimum": 16, "maximum": 512, "step": 16}, section=('upscaling', "Upscaling")))
shared.opts.add_option("SWIN_tile_overlap", shared.OptionInfo(8, "Tile overlap, in pixels for SwinIR. Low values = visible seam.", gr.Slider, {"minimum": 0, "maximum": 48, "step": 1}, section=('upscaling', "Upscaling")))
if int(torch.__version__.split('.')[0]) >= 2 and platform.system() != "Windows": # torch.compile() require pytorch 2.0 or above, and not on Windows
shared.opts.add_option("SWIN_torch_compile", shared.OptionInfo(False, "Use torch.compile to accelerate SwinIR.", gr.Checkbox, {"interactive": True}, section=('upscaling', "Upscaling")).info("Takes longer on first run"))
shared.opts.add_option("SWIN_torch_compile", shared.OptionInfo(False, "Use torch.compile to accelerate SwinIR.", gr.Checkbox, {"interactive": True}, section=('upscaling', "Upscaling")).info("Takes longer on first run"))
script_callbacks.on_ui_settings(on_ui_settings)
@@ -1,867 +0,0 @@
# -----------------------------------------------------------------------------------
# SwinIR: Image Restoration Using Swin Transformer, https://arxiv.org/abs/2108.10257
# Originally Written by Ze Liu, Modified by Jingyun Liang.
# -----------------------------------------------------------------------------------
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.checkpoint as checkpoint
from timm.models.layers import DropPath, to_2tuple, trunc_normal_
class Mlp(nn.Module):
def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
super().__init__()
out_features = out_features or in_features
hidden_features = hidden_features or in_features
self.fc1 = nn.Linear(in_features, hidden_features)
self.act = act_layer()
self.fc2 = nn.Linear(hidden_features, out_features)
self.drop = nn.Dropout(drop)
def forward(self, x):
x = self.fc1(x)
x = self.act(x)
x = self.drop(x)
x = self.fc2(x)
x = self.drop(x)
return x
def window_partition(x, window_size):
"""
Args:
x: (B, H, W, C)
window_size (int): window size
Returns:
windows: (num_windows*B, window_size, window_size, C)
"""
B, H, W, C = x.shape
x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
return windows
def window_reverse(windows, window_size, H, W):
"""
Args:
windows: (num_windows*B, window_size, window_size, C)
window_size (int): Window size
H (int): Height of image
W (int): Width of image
Returns:
x: (B, H, W, C)
"""
B = int(windows.shape[0] / (H * W / window_size / window_size))
x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
return x
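window_partition and window_reverse are exact inverses whenever H and W are multiples of window_size, which the reflect-padding in check_image_size further down guarantees. A quick round-trip check, assuming the two functions above are in scope:
import torch
x = torch.randn(2, 16, 16, 3)     # B, H, W, C
windows = window_partition(x, 8)  # (2 * 2 * 2, 8, 8, 3)
assert torch.equal(window_reverse(windows, 8, 16, 16), x)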
class WindowAttention(nn.Module):
r""" Window based multi-head self attention (W-MSA) module with relative position bias.
It supports both of shifted and non-shifted window.
Args:
dim (int): Number of input channels.
window_size (tuple[int]): The height and width of the window.
num_heads (int): Number of attention heads.
qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
proj_drop (float, optional): Dropout ratio of output. Default: 0.0
"""
def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
super().__init__()
self.dim = dim
self.window_size = window_size # Wh, Ww
self.num_heads = num_heads
head_dim = dim // num_heads
self.scale = qk_scale or head_dim ** -0.5
# define a parameter table of relative position bias
self.relative_position_bias_table = nn.Parameter(
torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH
# get pair-wise relative position index for each token inside the window
coords_h = torch.arange(self.window_size[0])
coords_w = torch.arange(self.window_size[1])
coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
relative_coords[:, :, 1] += self.window_size[1] - 1
relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
self.register_buffer("relative_position_index", relative_position_index)
self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
self.attn_drop = nn.Dropout(attn_drop)
self.proj = nn.Linear(dim, dim)
self.proj_drop = nn.Dropout(proj_drop)
trunc_normal_(self.relative_position_bias_table, std=.02)
self.softmax = nn.Softmax(dim=-1)
def forward(self, x, mask=None):
"""
Args:
x: input features with shape of (num_windows*B, N, C)
mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
"""
B_, N, C = x.shape
qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
q = q * self.scale
attn = (q @ k.transpose(-2, -1))
relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
attn = attn + relative_position_bias.unsqueeze(0)
if mask is not None:
nW = mask.shape[0]
attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
attn = attn.view(-1, self.num_heads, N, N)
attn = self.softmax(attn)
else:
attn = self.softmax(attn)
attn = self.attn_drop(attn)
x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
x = self.proj(x)
x = self.proj_drop(x)
return x
def extra_repr(self) -> str:
return f'dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}'
def flops(self, N):
# calculate flops for 1 window with token length of N
flops = 0
# qkv = self.qkv(x)
flops += N * self.dim * 3 * self.dim
# attn = (q @ k.transpose(-2, -1))
flops += self.num_heads * N * (self.dim // self.num_heads) * N
# x = (attn @ v)
flops += self.num_heads * N * N * (self.dim // self.num_heads)
# x = self.proj(x)
flops += N * self.dim * self.dim
return flops
class SwinTransformerBlock(nn.Module):
r""" Swin Transformer Block.
Args:
dim (int): Number of input channels.
input_resolution (tuple[int]): Input resolution.
num_heads (int): Number of attention heads.
window_size (int): Window size.
shift_size (int): Shift size for SW-MSA.
mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
drop (float, optional): Dropout rate. Default: 0.0
attn_drop (float, optional): Attention dropout rate. Default: 0.0
drop_path (float, optional): Stochastic depth rate. Default: 0.0
act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
"""
def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0,
mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
act_layer=nn.GELU, norm_layer=nn.LayerNorm):
super().__init__()
self.dim = dim
self.input_resolution = input_resolution
self.num_heads = num_heads
self.window_size = window_size
self.shift_size = shift_size
self.mlp_ratio = mlp_ratio
if min(self.input_resolution) <= self.window_size:
# if window size is larger than input resolution, we don't partition windows
self.shift_size = 0
self.window_size = min(self.input_resolution)
assert 0 <= self.shift_size < self.window_size, "shift_size must be in 0-window_size"
self.norm1 = norm_layer(dim)
self.attn = WindowAttention(
dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
self.norm2 = norm_layer(dim)
mlp_hidden_dim = int(dim * mlp_ratio)
self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
if self.shift_size > 0:
attn_mask = self.calculate_mask(self.input_resolution)
else:
attn_mask = None
self.register_buffer("attn_mask", attn_mask)
def calculate_mask(self, x_size):
# calculate attention mask for SW-MSA
H, W = x_size
img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1
h_slices = (slice(0, -self.window_size),
slice(-self.window_size, -self.shift_size),
slice(-self.shift_size, None))
w_slices = (slice(0, -self.window_size),
slice(-self.window_size, -self.shift_size),
slice(-self.shift_size, None))
cnt = 0
for h in h_slices:
for w in w_slices:
img_mask[:, h, w, :] = cnt
cnt += 1
mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
return attn_mask
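calculate_mask labels each cell of the shifted feature map with its region, then penalizes any window-internal pair drawn from different regions with -100 so softmax drives their attention to zero. Rerunning just the region-labelling loop standalone for an 8x8 map (window 4, shift 2) makes the 3x3 region grid visible:
import torch
window_size, shift_size, H, W = 4, 2, 8, 8
img_mask = torch.zeros((1, H, W, 1))
cnt = 0
for h in (slice(0, -window_size), slice(-window_size, -shift_size), slice(-shift_size, None)):
    for w in (slice(0, -window_size), slice(-window_size, -shift_size), slice(-shift_size, None)):
        img_mask[:, h, w, :] = cnt
        cnt += 1
print(img_mask[0, :, :, 0])  # nine region labels; cross-region pairs get masked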
def forward(self, x, x_size):
H, W = x_size
B, L, C = x.shape
# assert L == H * W, "input feature has wrong size"
shortcut = x
x = self.norm1(x)
x = x.view(B, H, W, C)
# cyclic shift
if self.shift_size > 0:
shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
else:
shifted_x = x
# partition windows
x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
# W-MSA/SW-MSA (to be compatible with testing on images whose shapes are multiples of the window size)
if self.input_resolution == x_size:
attn_windows = self.attn(x_windows, mask=self.attn_mask) # nW*B, window_size*window_size, C
else:
attn_windows = self.attn(x_windows, mask=self.calculate_mask(x_size).to(x.device))
# merge windows
attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C
# reverse cyclic shift
if self.shift_size > 0:
x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
else:
x = shifted_x
x = x.view(B, H * W, C)
# FFN
x = shortcut + self.drop_path(x)
x = x + self.drop_path(self.mlp(self.norm2(x)))
return x
def extra_repr(self) -> str:
return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \
f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}"
def flops(self):
flops = 0
H, W = self.input_resolution
# norm1
flops += self.dim * H * W
# W-MSA/SW-MSA
nW = H * W / self.window_size / self.window_size
flops += nW * self.attn.flops(self.window_size * self.window_size)
# mlp
flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio
# norm2
flops += self.dim * H * W
return flops
class PatchMerging(nn.Module):
r""" Patch Merging Layer.
Args:
input_resolution (tuple[int]): Resolution of input feature.
dim (int): Number of input channels.
norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
"""
def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm):
super().__init__()
self.input_resolution = input_resolution
self.dim = dim
self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
self.norm = norm_layer(4 * dim)
def forward(self, x):
"""
x: B, H*W, C
"""
H, W = self.input_resolution
B, L, C = x.shape
assert L == H * W, "input feature has wrong size"
assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) are not even."
x = x.view(B, H, W, C)
x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
x = self.norm(x)
x = self.reduction(x)
return x
def extra_repr(self) -> str:
return f"input_resolution={self.input_resolution}, dim={self.dim}"
def flops(self):
H, W = self.input_resolution
flops = H * W * self.dim
flops += (H // 2) * (W // 2) * 4 * self.dim * 2 * self.dim
return flops
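PatchMerging halves the spatial resolution by folding each 2x2 neighborhood into channels (C -> 4C) and projecting back down to 2C. A shape check, assuming the class above is in scope:
import torch
merge = PatchMerging(input_resolution=(8, 8), dim=32)
x = torch.randn(2, 8 * 8, 32)
print(merge(x).shape)  # torch.Size([2, 16, 64]): (H/2)*(W/2) tokens, 2*dim channels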
class BasicLayer(nn.Module):
""" A basic Swin Transformer layer for one stage.
Args:
dim (int): Number of input channels.
input_resolution (tuple[int]): Input resolution.
depth (int): Number of blocks.
num_heads (int): Number of attention heads.
window_size (int): Local window size.
mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
drop (float, optional): Dropout rate. Default: 0.0
attn_drop (float, optional): Attention dropout rate. Default: 0.0
drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
"""
def __init__(self, dim, input_resolution, depth, num_heads, window_size,
mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0.,
drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False):
super().__init__()
self.dim = dim
self.input_resolution = input_resolution
self.depth = depth
self.use_checkpoint = use_checkpoint
# build blocks
self.blocks = nn.ModuleList([
SwinTransformerBlock(dim=dim, input_resolution=input_resolution,
num_heads=num_heads, window_size=window_size,
shift_size=0 if (i % 2 == 0) else window_size // 2,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias, qk_scale=qk_scale,
drop=drop, attn_drop=attn_drop,
drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
norm_layer=norm_layer)
for i in range(depth)])
# patch merging layer
if downsample is not None:
self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer)
else:
self.downsample = None
def forward(self, x, x_size):
for blk in self.blocks:
if self.use_checkpoint:
x = checkpoint.checkpoint(blk, x, x_size)
else:
x = blk(x, x_size)
if self.downsample is not None:
x = self.downsample(x)
return x
def extra_repr(self) -> str:
return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}"
def flops(self):
flops = 0
for blk in self.blocks:
flops += blk.flops()
if self.downsample is not None:
flops += self.downsample.flops()
return flops
class RSTB(nn.Module):
"""Residual Swin Transformer Block (RSTB).
Args:
dim (int): Number of input channels.
input_resolution (tuple[int]): Input resolution.
depth (int): Number of blocks.
num_heads (int): Number of attention heads.
window_size (int): Local window size.
mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
drop (float, optional): Dropout rate. Default: 0.0
attn_drop (float, optional): Attention dropout rate. Default: 0.0
drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
img_size: Input image size.
patch_size: Patch size.
resi_connection: The convolutional block before residual connection.
"""
def __init__(self, dim, input_resolution, depth, num_heads, window_size,
mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0.,
drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False,
img_size=224, patch_size=4, resi_connection='1conv'):
super(RSTB, self).__init__()
self.dim = dim
self.input_resolution = input_resolution
self.residual_group = BasicLayer(dim=dim,
input_resolution=input_resolution,
depth=depth,
num_heads=num_heads,
window_size=window_size,
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias, qk_scale=qk_scale,
drop=drop, attn_drop=attn_drop,
drop_path=drop_path,
norm_layer=norm_layer,
downsample=downsample,
use_checkpoint=use_checkpoint)
if resi_connection == '1conv':
self.conv = nn.Conv2d(dim, dim, 3, 1, 1)
elif resi_connection == '3conv':
# to save parameters and memory
self.conv = nn.Sequential(nn.Conv2d(dim, dim // 4, 3, 1, 1), nn.LeakyReLU(negative_slope=0.2, inplace=True),
nn.Conv2d(dim // 4, dim // 4, 1, 1, 0),
nn.LeakyReLU(negative_slope=0.2, inplace=True),
nn.Conv2d(dim // 4, dim, 3, 1, 1))
self.patch_embed = PatchEmbed(
img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim,
norm_layer=None)
self.patch_unembed = PatchUnEmbed(
img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim,
norm_layer=None)
def forward(self, x, x_size):
return self.patch_embed(self.conv(self.patch_unembed(self.residual_group(x, x_size), x_size))) + x
def flops(self):
flops = 0
flops += self.residual_group.flops()
H, W = self.input_resolution
flops += H * W * self.dim * self.dim * 9
flops += self.patch_embed.flops()
flops += self.patch_unembed.flops()
return flops
class PatchEmbed(nn.Module):
r""" Image to Patch Embedding
Args:
img_size (int): Image size. Default: 224.
patch_size (int): Patch token size. Default: 4.
in_chans (int): Number of input image channels. Default: 3.
embed_dim (int): Number of linear projection output channels. Default: 96.
norm_layer (nn.Module, optional): Normalization layer. Default: None
"""
def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
super().__init__()
img_size = to_2tuple(img_size)
patch_size = to_2tuple(patch_size)
patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
self.img_size = img_size
self.patch_size = patch_size
self.patches_resolution = patches_resolution
self.num_patches = patches_resolution[0] * patches_resolution[1]
self.in_chans = in_chans
self.embed_dim = embed_dim
if norm_layer is not None:
self.norm = norm_layer(embed_dim)
else:
self.norm = None
def forward(self, x):
x = x.flatten(2).transpose(1, 2) # B Ph*Pw C
if self.norm is not None:
x = self.norm(x)
return x
def flops(self):
flops = 0
H, W = self.img_size
if self.norm is not None:
flops += H * W * self.embed_dim
return flops
class PatchUnEmbed(nn.Module):
r""" Image to Patch Unembedding
Args:
img_size (int): Image size. Default: 224.
patch_size (int): Patch token size. Default: 4.
in_chans (int): Number of input image channels. Default: 3.
embed_dim (int): Number of linear projection output channels. Default: 96.
norm_layer (nn.Module, optional): Normalization layer. Default: None
"""
def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
super().__init__()
img_size = to_2tuple(img_size)
patch_size = to_2tuple(patch_size)
patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
self.img_size = img_size
self.patch_size = patch_size
self.patches_resolution = patches_resolution
self.num_patches = patches_resolution[0] * patches_resolution[1]
self.in_chans = in_chans
self.embed_dim = embed_dim
def forward(self, x, x_size):
B, HW, C = x.shape
x = x.transpose(1, 2).view(B, self.embed_dim, x_size[0], x_size[1]) # B Ph*Pw C
return x
def flops(self):
flops = 0
return flops
class Upsample(nn.Sequential):
"""Upsample module.
Args:
scale (int): Scale factor. Supported scales: 2^n and 3.
num_feat (int): Channel number of intermediate features.
"""
def __init__(self, scale, num_feat):
m = []
if (scale & (scale - 1)) == 0: # scale = 2^n
for _ in range(int(math.log(scale, 2))):
m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1))
m.append(nn.PixelShuffle(2))
elif scale == 3:
m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1))
m.append(nn.PixelShuffle(3))
else:
raise ValueError(f'scale {scale} is not supported. ' 'Supported scales: 2^n and 3.')
super(Upsample, self).__init__(*m)
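Upsample reaches a power-of-two scale by stacking log2(scale) rounds of conv + PixelShuffle(2), each trading 4x channels for 2x resolution. A shape check, assuming the class above is in scope:
import torch
up = Upsample(scale=4, num_feat=16)
x = torch.randn(1, 16, 8, 8)
print(up(x).shape)  # torch.Size([1, 16, 32, 32]): two PixelShuffle(2) stages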
class UpsampleOneStep(nn.Sequential):
"""UpsampleOneStep module (the difference with Upsample is that it always only has 1conv + 1pixelshuffle)
Used in lightweight SR to save parameters.
Args:
scale (int): Scale factor. Supported scales: 2^n and 3.
num_feat (int): Channel number of intermediate features.
"""
def __init__(self, scale, num_feat, num_out_ch, input_resolution=None):
self.num_feat = num_feat
self.input_resolution = input_resolution
m = []
m.append(nn.Conv2d(num_feat, (scale ** 2) * num_out_ch, 3, 1, 1))
m.append(nn.PixelShuffle(scale))
super(UpsampleOneStep, self).__init__(*m)
def flops(self):
H, W = self.input_resolution
flops = H * W * self.num_feat * 3 * 9
return flops
class SwinIR(nn.Module):
r""" SwinIR
A PyTorch impl of : `SwinIR: Image Restoration Using Swin Transformer`, based on Swin Transformer.
Args:
img_size (int | tuple(int)): Input image size. Default 64
patch_size (int | tuple(int)): Patch size. Default: 1
in_chans (int): Number of input image channels. Default: 3
embed_dim (int): Patch embedding dimension. Default: 96
depths (tuple(int)): Depth of each Swin Transformer layer.
num_heads (tuple(int)): Number of attention heads in different layers.
window_size (int): Window size. Default: 7
mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None
drop_rate (float): Dropout rate. Default: 0
attn_drop_rate (float): Attention dropout rate. Default: 0
drop_path_rate (float): Stochastic depth rate. Default: 0.1
norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
ape (bool): If True, add absolute position embedding to the patch embedding. Default: False
patch_norm (bool): If True, add normalization after patch embedding. Default: True
use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False
upscale: Upscale factor. 2/3/4/8 for image SR, 1 for denoising and compression artifact reduction
img_range: Image range. 1. or 255.
upsampler: The reconstruction module. 'pixelshuffle'/'pixelshuffledirect'/'nearest+conv'/None
resi_connection: The convolutional block before residual connection. '1conv'/'3conv'
"""
def __init__(self, img_size=64, patch_size=1, in_chans=3,
embed_dim=96, depths=(6, 6, 6, 6), num_heads=(6, 6, 6, 6),
window_size=7, mlp_ratio=4., qkv_bias=True, qk_scale=None,
drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1,
norm_layer=nn.LayerNorm, ape=False, patch_norm=True,
use_checkpoint=False, upscale=2, img_range=1., upsampler='', resi_connection='1conv',
**kwargs):
super(SwinIR, self).__init__()
num_in_ch = in_chans
num_out_ch = in_chans
num_feat = 64
self.img_range = img_range
if in_chans == 3:
rgb_mean = (0.4488, 0.4371, 0.4040)
self.mean = torch.Tensor(rgb_mean).view(1, 3, 1, 1)
else:
self.mean = torch.zeros(1, 1, 1, 1)
self.upscale = upscale
self.upsampler = upsampler
self.window_size = window_size
#####################################################################################################
################################### 1, shallow feature extraction ###################################
self.conv_first = nn.Conv2d(num_in_ch, embed_dim, 3, 1, 1)
#####################################################################################################
################################### 2, deep feature extraction ######################################
self.num_layers = len(depths)
self.embed_dim = embed_dim
self.ape = ape
self.patch_norm = patch_norm
self.num_features = embed_dim
self.mlp_ratio = mlp_ratio
# split image into non-overlapping patches
self.patch_embed = PatchEmbed(
img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim,
norm_layer=norm_layer if self.patch_norm else None)
num_patches = self.patch_embed.num_patches
patches_resolution = self.patch_embed.patches_resolution
self.patches_resolution = patches_resolution
# merge non-overlapping patches into image
self.patch_unembed = PatchUnEmbed(
img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim,
norm_layer=norm_layer if self.patch_norm else None)
# absolute position embedding
if self.ape:
self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
trunc_normal_(self.absolute_pos_embed, std=.02)
self.pos_drop = nn.Dropout(p=drop_rate)
# stochastic depth
dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
# build Residual Swin Transformer blocks (RSTB)
self.layers = nn.ModuleList()
for i_layer in range(self.num_layers):
layer = RSTB(dim=embed_dim,
input_resolution=(patches_resolution[0],
patches_resolution[1]),
depth=depths[i_layer],
num_heads=num_heads[i_layer],
window_size=window_size,
mlp_ratio=self.mlp_ratio,
qkv_bias=qkv_bias, qk_scale=qk_scale,
drop=drop_rate, attn_drop=attn_drop_rate,
drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], # no impact on SR results
norm_layer=norm_layer,
downsample=None,
use_checkpoint=use_checkpoint,
img_size=img_size,
patch_size=patch_size,
resi_connection=resi_connection
)
self.layers.append(layer)
self.norm = norm_layer(self.num_features)
# build the last conv layer in deep feature extraction
if resi_connection == '1conv':
self.conv_after_body = nn.Conv2d(embed_dim, embed_dim, 3, 1, 1)
elif resi_connection == '3conv':
# to save parameters and memory
self.conv_after_body = nn.Sequential(nn.Conv2d(embed_dim, embed_dim // 4, 3, 1, 1),
nn.LeakyReLU(negative_slope=0.2, inplace=True),
nn.Conv2d(embed_dim // 4, embed_dim // 4, 1, 1, 0),
nn.LeakyReLU(negative_slope=0.2, inplace=True),
nn.Conv2d(embed_dim // 4, embed_dim, 3, 1, 1))
#####################################################################################################
################################ 3, high quality image reconstruction ################################
if self.upsampler == 'pixelshuffle':
# for classical SR
self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1),
nn.LeakyReLU(inplace=True))
self.upsample = Upsample(upscale, num_feat)
self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
elif self.upsampler == 'pixelshuffledirect':
# for lightweight SR (to save parameters)
self.upsample = UpsampleOneStep(upscale, embed_dim, num_out_ch,
(patches_resolution[0], patches_resolution[1]))
elif self.upsampler == 'nearest+conv':
# for real-world SR (less artifacts)
self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1),
nn.LeakyReLU(inplace=True))
self.conv_up1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
if self.upscale == 4:
self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
self.conv_hr = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
else:
# for image denoising and JPEG compression artifact reduction
self.conv_last = nn.Conv2d(embed_dim, num_out_ch, 3, 1, 1)
self.apply(self._init_weights)
def _init_weights(self, m):
if isinstance(m, nn.Linear):
trunc_normal_(m.weight, std=.02)
if isinstance(m, nn.Linear) and m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.LayerNorm):
nn.init.constant_(m.bias, 0)
nn.init.constant_(m.weight, 1.0)
@torch.jit.ignore
def no_weight_decay(self):
return {'absolute_pos_embed'}
@torch.jit.ignore
def no_weight_decay_keywords(self):
return {'relative_position_bias_table'}
def check_image_size(self, x):
_, _, h, w = x.size()
mod_pad_h = (self.window_size - h % self.window_size) % self.window_size
mod_pad_w = (self.window_size - w % self.window_size) % self.window_size
x = F.pad(x, (0, mod_pad_w, 0, mod_pad_h), 'reflect')
return x
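check_image_size reflect-pads the input up to the next multiple of window_size; forward later crops the result back to H*upscale x W*upscale. The padding arithmetic in isolation, with illustrative sizes:
import torch
import torch.nn.functional as F
window_size = 8
x = torch.randn(1, 3, 60, 70)
pad_h = (window_size - x.shape[2] % window_size) % window_size
pad_w = (window_size - x.shape[3] % window_size) % window_size
print(F.pad(x, (0, pad_w, 0, pad_h), 'reflect').shape)  # torch.Size([1, 3, 64, 72])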
def forward_features(self, x):
x_size = (x.shape[2], x.shape[3])
x = self.patch_embed(x)
if self.ape:
x = x + self.absolute_pos_embed
x = self.pos_drop(x)
for layer in self.layers:
x = layer(x, x_size)
x = self.norm(x) # B L C
x = self.patch_unembed(x, x_size)
return x
def forward(self, x):
H, W = x.shape[2:]
x = self.check_image_size(x)
self.mean = self.mean.type_as(x)
x = (x - self.mean) * self.img_range
if self.upsampler == 'pixelshuffle':
# for classical SR
x = self.conv_first(x)
x = self.conv_after_body(self.forward_features(x)) + x
x = self.conv_before_upsample(x)
x = self.conv_last(self.upsample(x))
elif self.upsampler == 'pixelshuffledirect':
# for lightweight SR
x = self.conv_first(x)
x = self.conv_after_body(self.forward_features(x)) + x
x = self.upsample(x)
elif self.upsampler == 'nearest+conv':
# for real-world SR
x = self.conv_first(x)
x = self.conv_after_body(self.forward_features(x)) + x
x = self.conv_before_upsample(x)
x = self.lrelu(self.conv_up1(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
if self.upscale == 4:
x = self.lrelu(self.conv_up2(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
x = self.conv_last(self.lrelu(self.conv_hr(x)))
else:
# for image denoising and JPEG compression artifact reduction
x_first = self.conv_first(x)
res = self.conv_after_body(self.forward_features(x_first)) + x_first
x = x + self.conv_last(res)
x = x / self.img_range + self.mean
return x[:, :, :H*self.upscale, :W*self.upscale]
def flops(self):
flops = 0
H, W = self.patches_resolution
flops += H * W * 3 * self.embed_dim * 9
flops += self.patch_embed.flops()
for layer in self.layers:
flops += layer.flops()
flops += H * W * 3 * self.embed_dim * self.embed_dim
flops += self.upsample.flops()
return flops
if __name__ == '__main__':
upscale = 4
window_size = 8
height = (1024 // upscale // window_size + 1) * window_size
width = (720 // upscale // window_size + 1) * window_size
model = SwinIR(upscale=upscale, img_size=(height, width),
window_size=window_size, img_range=1., depths=[6, 6, 6, 6],
embed_dim=60, num_heads=[6, 6, 6, 6], mlp_ratio=2, upsampler='pixelshuffledirect')
print(model)
print(height, width, model.flops() / 1e9)
x = torch.randn((1, 3, height, width))
x = model(x)
print(x.shape)
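As a side note on the geometry above: forward() first pads inputs up to a multiple of window_size via check_image_size, then crops the output back to H*upscale x W*upscale. A minimal standalone sketch of that padding arithmetic (illustrative only; assumes just torch is installed):
import torch
import torch.nn.functional as F

window_size = 8
x = torch.randn(1, 3, 30, 30)  # spatial dims not divisible by window_size
mod_pad_h = (window_size - x.shape[2] % window_size) % window_size  # -> 2
mod_pad_w = (window_size - x.shape[3] % window_size) % window_size  # -> 2
x = F.pad(x, (0, mod_pad_w, 0, mod_pad_h), 'reflect')  # pad right/bottom only
print(x.shape)  # torch.Size([1, 3, 32, 32]); forward() later crops back to H*upscale, W*upscale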
File diff suppressed because it is too large
@@ -29,6 +29,7 @@ onUiLoaded(async() => {
});
function getActiveTab(elements, all = false) {
if (!elements.img2imgTabs) return null;
const tabs = elements.img2imgTabs.querySelectorAll("button");
if (all) return tabs;
@@ -43,6 +44,7 @@ onUiLoaded(async() => {
// Get tab ID
function getTabId(elements) {
const activeTab = getActiveTab(elements);
if (!activeTab) return null;
return tabNameToElementId[activeTab.innerText];
}
@@ -218,6 +220,8 @@ onUiLoaded(async() => {
canvas_hotkey_fullscreen: "KeyS",
canvas_hotkey_move: "KeyF",
canvas_hotkey_overlap: "KeyO",
canvas_hotkey_shrink_brush: "KeyQ",
canvas_hotkey_grow_brush: "KeyW",
canvas_disabled_functions: [],
canvas_show_tooltip: true,
canvas_auto_expand: true,
@@ -227,6 +231,8 @@ onUiLoaded(async() => {
const functionMap = {
"Zoom": "canvas_hotkey_zoom",
"Adjust brush size": "canvas_hotkey_adjust",
"Hotkey shrink brush": "canvas_hotkey_shrink_brush",
"Hotkey enlarge brush": "canvas_hotkey_grow_brush",
"Moving canvas": "canvas_hotkey_move",
"Fullscreen": "canvas_hotkey_fullscreen",
"Reset Zoom": "canvas_hotkey_reset",
@@ -248,6 +254,7 @@ onUiLoaded(async() => {
let isMoving = false;
let mouseX, mouseY;
let activeElement;
let interactedWithAltKey = false;
const elements = Object.fromEntries(
Object.keys(elementIDs).map(id => [
@@ -273,7 +280,7 @@ onUiLoaded(async() => {
const targetElement = gradioApp().querySelector(elemId);
if (!targetElement) {
console.log("Element not found");
console.log("Element not found", elemId);
return;
}
@@ -288,7 +295,7 @@ onUiLoaded(async() => {
// Create tooltip
function createTooltip() {
const toolTipElemnt =
const toolTipElement =
targetElement.querySelector(".image-container");
const tooltip = document.createElement("div");
tooltip.className = "canvas-tooltip";
@@ -351,7 +358,7 @@ onUiLoaded(async() => {
tooltip.appendChild(tooltipContent);
// Add a hint element to the target element
toolTipElemnt.appendChild(tooltip);
toolTipElement.appendChild(tooltip);
}
// Show tooltip if the setting is enabled
@@ -361,9 +368,9 @@ onUiLoaded(async() => {
// In testing, the img tag proved harmful when zooming, creating white canvases. This hack works around the problem almost entirely and has no effect on the webui.
function fixCanvas() {
const activeTab = getActiveTab(elements).textContent.trim();
const activeTab = getActiveTab(elements)?.textContent.trim();
if (activeTab !== "img2img") {
if (activeTab && activeTab !== "img2img") {
const img = targetElement.querySelector(`${elemId} img`);
if (img && img.style.display !== "none") {
@@ -504,6 +511,10 @@ onUiLoaded(async() => {
if (isModifierKey(e, hotkeysConfig.canvas_hotkey_zoom)) {
e.preventDefault();
if (hotkeysConfig.canvas_hotkey_zoom === "Alt") {
interactedWithAltKey = true;
}
let zoomPosX, zoomPosY;
let delta = 0.2;
if (elemData[elemId].zoomLevel > 7) {
@@ -686,7 +697,9 @@ onUiLoaded(async() => {
const hotkeyActions = {
[hotkeysConfig.canvas_hotkey_reset]: resetZoom,
[hotkeysConfig.canvas_hotkey_overlap]: toggleOverlap,
[hotkeysConfig.canvas_hotkey_fullscreen]: fitToScreen
[hotkeysConfig.canvas_hotkey_fullscreen]: fitToScreen,
[hotkeysConfig.canvas_hotkey_shrink_brush]: () => adjustBrushSize(elemId, 10),
[hotkeysConfig.canvas_hotkey_grow_brush]: () => adjustBrushSize(elemId, -10)
};
const action = hotkeyActions[event.code];
@@ -777,23 +790,29 @@ onUiLoaded(async() => {
targetElement.addEventListener("mouseleave", handleMouseLeave);
// Reset zoom when click on another tab
elements.img2imgTabs.addEventListener("click", resetZoom);
elements.img2imgTabs.addEventListener("click", () => {
// targetElement.style.width = "";
if (parseInt(targetElement.style.width) > 865) {
setTimeout(fitToElement, 0);
}
});
if (elements.img2imgTabs) {
elements.img2imgTabs.addEventListener("click", resetZoom);
elements.img2imgTabs.addEventListener("click", () => {
// targetElement.style.width = "";
if (parseInt(targetElement.style.width) > 865) {
setTimeout(fitToElement, 0);
}
});
}
targetElement.addEventListener("wheel", e => {
// change zoom level
const operation = e.deltaY > 0 ? "-" : "+";
const operation = (e.deltaY || -e.wheelDelta) > 0 ? "-" : "+";
changeZoomLevel(operation, e);
// Handle brush size adjustment with ctrl key pressed
if (isModifierKey(e, hotkeysConfig.canvas_hotkey_adjust)) {
e.preventDefault();
if (hotkeysConfig.canvas_hotkey_adjust === "Alt") {
interactedWithAltKey = true;
}
// Increase or decrease brush size based on scroll direction
adjustBrushSize(elemId, e.deltaY);
}
@@ -833,6 +852,20 @@ onUiLoaded(async() => {
document.addEventListener("keydown", handleMoveKeyDown);
document.addEventListener("keyup", handleMoveKeyUp);
// Prevent firefox from opening main menu when alt is used as a hotkey for zoom or brush size
function handleAltKeyUp(e) {
if (e.key !== "Alt" || !interactedWithAltKey) {
return;
}
e.preventDefault();
interactedWithAltKey = false;
}
document.addEventListener("keyup", handleAltKeyUp);
// Detect zoom level and update the pan speed.
function updatePanPosition(movementX, movementY) {
let panSpeed = 2;
@@ -4,12 +4,14 @@ from modules import shared
shared.options_templates.update(shared.options_section(('canvas_hotkey', "Canvas Hotkeys"), {
"canvas_hotkey_zoom": shared.OptionInfo("Alt", "Zoom canvas", gr.Radio, {"choices": ["Shift","Ctrl", "Alt"]}).info("If you choose 'Shift' you cannot scroll horizontally, 'Alt' can cause a little trouble in firefox"),
"canvas_hotkey_adjust": shared.OptionInfo("Ctrl", "Adjust brush size", gr.Radio, {"choices": ["Shift","Ctrl", "Alt"]}).info("If you choose 'Shift' you cannot scroll horizontally, 'Alt' can cause a little trouble in firefox"),
"canvas_hotkey_shrink_brush": shared.OptionInfo("Q", "Shrink the brush size"),
"canvas_hotkey_grow_brush": shared.OptionInfo("W", "Enlarge the brush size"),
"canvas_hotkey_move": shared.OptionInfo("F", "Moving the canvas").info("To work correctly in firefox, turn off 'Automatically search the page text when typing' in the browser settings"),
"canvas_hotkey_fullscreen": shared.OptionInfo("S", "Fullscreen Mode, maximizes the picture so that it fits into the screen and stretches it to its full width "),
"canvas_hotkey_reset": shared.OptionInfo("R", "Reset zoom and canvas positon"),
"canvas_hotkey_overlap": shared.OptionInfo("O", "Toggle overlap").info("Technical button, neededs for testing"),
"canvas_hotkey_reset": shared.OptionInfo("R", "Reset zoom and canvas position"),
"canvas_hotkey_overlap": shared.OptionInfo("O", "Toggle overlap").info("Technical button, needed for testing"),
"canvas_show_tooltip": shared.OptionInfo(True, "Enable tooltip on the canvas"),
"canvas_auto_expand": shared.OptionInfo(True, "Automatically expands an image that does not fit completely in the canvas area, similar to manually pressing the S and R buttons"),
"canvas_blur_prompt": shared.OptionInfo(False, "Take the focus off the prompt when working with a canvas"),
"canvas_disabled_functions": shared.OptionInfo(["Overlap"], "Disable function that you don't use", gr.CheckboxGroup, {"choices": ["Zoom","Adjust brush size", "Moving canvas","Fullscreen","Reset Zoom","Overlap"]}),
"canvas_disabled_functions": shared.OptionInfo(["Overlap"], "Disable function that you don't use", gr.CheckboxGroup, {"choices": ["Zoom","Adjust brush size","Hotkey enlarge brush","Hotkey shrink brush","Moving canvas","Fullscreen","Reset Zoom","Overlap"]}),
}))
@@ -1,7 +1,7 @@
import math
import gradio as gr
from modules import scripts, shared, ui_components, ui_settings, generation_parameters_copypaste
from modules import scripts, shared, ui_components, ui_settings, infotext_utils, errors
from modules.ui_components import FormColumn
@@ -25,7 +25,7 @@ class ExtraOptionsSection(scripts.Script):
extra_options = shared.opts.extra_options_img2img if is_img2img else shared.opts.extra_options_txt2img
elem_id_tabname = "extra_options_" + ("img2img" if is_img2img else "txt2img")
mapping = {k: v for v, k in generation_parameters_copypaste.infotext_to_setting_name_mapping}
mapping = {k: v for v, k in infotext_utils.infotext_to_setting_name_mapping}
with gr.Blocks() as interface:
with gr.Accordion("Options", open=False, elem_id=elem_id_tabname) if shared.opts.extra_options_accordion and extra_options else gr.Group(elem_id=elem_id_tabname):
@@ -42,7 +42,11 @@ class ExtraOptionsSection(scripts.Script):
setting_name = extra_options[index]
with FormColumn():
comp = ui_settings.create_setting_component(setting_name)
try:
comp = ui_settings.create_setting_component(setting_name)
except KeyError:
errors.report(f"Can't add extra options for {setting_name} in ui")
continue
self.comps.append(comp)
self.setting_names.append(setting_name)
@@ -28,7 +28,7 @@ def multicrop_pic(image: Image, mindim, maxdim, minarea, maxarea, objective, thr
class ScriptPostprocessingAutosizedCrop(scripts_postprocessing.ScriptPostprocessing):
name = "Auto-sized crop"
order = 4000
order = 4020
def ui(self):
with ui_components.InputAccordion(False, label="Auto-sized crop") as enable:
@@ -4,7 +4,7 @@ import gradio as gr
class ScriptPostprocessingCeption(scripts_postprocessing.ScriptPostprocessing):
name = "Caption"
order = 4000
order = 4040
def ui(self):
with ui_components.InputAccordion(False, label="Caption") as enable:
@@ -25,6 +25,6 @@ class ScriptPostprocessingCeption(scripts_postprocessing.ScriptPostprocessing):
captions.append(deepbooru.model.tag(pp.image))
if "BLIP" in option:
captions.append(shared.interrogator.generate_caption(pp.image))
captions.append(shared.interrogator.interrogate(pp.image.convert("RGB")))
pp.caption = ", ".join([x for x in captions if x])
@@ -6,7 +6,7 @@ import gradio as gr
class ScriptPostprocessingCreateFlippedCopies(scripts_postprocessing.ScriptPostprocessing):
name = "Create flipped copies"
order = 4000
order = 4030
def ui(self):
with ui_components.InputAccordion(False, label="Create flipped copies") as enable:
@@ -7,7 +7,7 @@ from modules.textual_inversion import autocrop
class ScriptPostprocessingFocalCrop(scripts_postprocessing.ScriptPostprocessing):
name = "Auto focal point crop"
order = 4000
order = 4010
def ui(self):
with ui_components.InputAccordion(False, label="Auto focal point crop") as enable:
@@ -61,7 +61,7 @@ class ScriptPostprocessingSplitOversized(scripts_postprocessing.ScriptPostproces
ratio = (pp.image.height * width) / (pp.image.width * height)
inverse_xy = True
if ratio >= 1.0 and ratio > split_threshold:
if ratio >= 1.0 or ratio > split_threshold:
return
result, *others = split_pic(pp.image, inverse_xy, width, height, overlap_ratio)
@@ -0,0 +1,761 @@
import numpy as np
import gradio as gr
import math
from modules.ui_components import InputAccordion
import modules.scripts as scripts
class SoftInpaintingSettings:
def __init__(self,
mask_blend_power,
mask_blend_scale,
inpaint_detail_preservation,
composite_mask_influence,
composite_difference_threshold,
composite_difference_contrast):
self.mask_blend_power = mask_blend_power
self.mask_blend_scale = mask_blend_scale
self.inpaint_detail_preservation = inpaint_detail_preservation
self.composite_mask_influence = composite_mask_influence
self.composite_difference_threshold = composite_difference_threshold
self.composite_difference_contrast = composite_difference_contrast
def add_generation_params(self, dest):
dest[enabled_gen_param_label] = True
dest[gen_param_labels.mask_blend_power] = self.mask_blend_power
dest[gen_param_labels.mask_blend_scale] = self.mask_blend_scale
dest[gen_param_labels.inpaint_detail_preservation] = self.inpaint_detail_preservation
dest[gen_param_labels.composite_mask_influence] = self.composite_mask_influence
dest[gen_param_labels.composite_difference_threshold] = self.composite_difference_threshold
dest[gen_param_labels.composite_difference_contrast] = self.composite_difference_contrast
# ------------------- Methods -------------------
def processing_uses_inpainting(p):
# TODO: Figure out a better way to determine if inpainting is being used by p
if getattr(p, "image_mask", None) is not None:
return True
if getattr(p, "mask", None) is not None:
return True
if getattr(p, "nmask", None) is not None:
return True
return False
def latent_blend(settings, a, b, t):
"""
Interpolates two latent image representations according to the parameter t,
where the interpolated vectors' magnitudes are also interpolated separately.
The "detail_preservation" factor biases the magnitude interpolation towards
the larger of the two magnitudes.
"""
import torch
# NOTE: We use inplace operations wherever possible.
if len(t.shape) == 3:
# [4][w][h] to [1][4][w][h]
t2 = t.unsqueeze(0)
# [4][w][h] to [1][1][w][h] - the [4] seems redundant.
t3 = t[0].unsqueeze(0).unsqueeze(0)
else:
t2 = t
t3 = t[:, 0][:, None]
one_minus_t2 = 1 - t2
one_minus_t3 = 1 - t3
# Linearly interpolate the image vectors.
a_scaled = a * one_minus_t2
b_scaled = b * t2
image_interp = a_scaled
image_interp.add_(b_scaled)
result_type = image_interp.dtype
del a_scaled, b_scaled, t2, one_minus_t2
# Calculate the magnitude of the interpolated vectors. (We will remove this magnitude.)
# 64-bit operations are used here to allow large exponents.
current_magnitude = torch.norm(image_interp, p=2, dim=1, keepdim=True).to(torch.float64).add_(0.00001)
# Interpolate the powered magnitudes, then un-power them (bring them back to a power of 1).
a_magnitude = torch.norm(a, p=2, dim=1, keepdim=True).to(torch.float64).pow_(
settings.inpaint_detail_preservation) * one_minus_t3
b_magnitude = torch.norm(b, p=2, dim=1, keepdim=True).to(torch.float64).pow_(
settings.inpaint_detail_preservation) * t3
desired_magnitude = a_magnitude
desired_magnitude.add_(b_magnitude).pow_(1 / settings.inpaint_detail_preservation)
del a_magnitude, b_magnitude, t3, one_minus_t3
# Change the linearly interpolated image vectors' magnitudes to the value we want.
# This is the last 64-bit operation.
image_interp_scaling_factor = desired_magnitude
image_interp_scaling_factor.div_(current_magnitude)
image_interp_scaling_factor = image_interp_scaling_factor.to(result_type)
image_interp_scaled = image_interp
image_interp_scaled.mul_(image_interp_scaling_factor)
del current_magnitude
del desired_magnitude
del image_interp
del image_interp_scaling_factor
del result_type
return image_interp_scaled
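A minimal usage sketch for latent_blend (not part of the module; dummy tensors and values chosen only for illustration):
import torch

_settings = SoftInpaintingSettings(1, 0.5, 4, 0, 0.5, 2)
_a = torch.randn(1, 4, 64, 64)  # original latents
_b = torch.randn(1, 4, 64, 64)  # denoised latents
_t = torch.rand(1, 4, 64, 64)   # per-pixel blend factor: 0 = keep _a, 1 = take _b
print(latent_blend(_settings, _a, _b, _t).shape)  # torch.Size([1, 4, 64, 64])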
def get_modified_nmask(settings, nmask, sigma):
"""
Converts a negative mask representing the transparency of the original latent vectors being overlaid
to a mask that is scaled according to the denoising strength for this step.
Where:
0 = fully opaque, infinite density, fully masked
1 = fully transparent, zero density, fully unmasked
We raise this transparency to a power, which allows simulating any positive real number N
of successive blending operations. This gives control over the balance of influence between
the denoiser and the original latents according to the sigma value.
"""
import torch
return torch.pow(nmask, (sigma ** settings.mask_blend_power) * settings.mask_blend_scale)
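A quick numeric illustration (hypothetical values) of how sigma reshapes the mask under a power of 1 and scale of 0.5:
import torch

_settings = SoftInpaintingSettings(1, 0.5, 4, 0, 0.5, 2)
_nmask = torch.tensor([0.25, 0.5, 0.75])
for _sigma in (0.5, 1.0, 4.0):
    print(_sigma, get_modified_nmask(_settings, _nmask, _sigma))
# larger sigma -> larger exponent -> values pushed toward 0, i.e. more of the original kept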
def apply_adaptive_masks(
settings: SoftInpaintingSettings,
nmask,
latent_orig,
latent_processed,
overlay_images,
width, height,
paste_to):
import torch
import modules.processing as proc
import modules.images as images
from PIL import Image, ImageOps, ImageFilter
# TODO: Bias the blending according to the latent mask, add adjustable parameter for bias control.
if len(nmask.shape) == 3:
latent_mask = nmask[0].float()
else:
latent_mask = nmask[:, 0].float()
# convert the original mask into a form we use to scale distances for thresholding
mask_scalar = 1 - (torch.clamp(latent_mask, min=0, max=1) ** (settings.mask_blend_scale / 2))
mask_scalar = (0.5 * (1 - settings.composite_mask_influence)
+ mask_scalar * settings.composite_mask_influence)
mask_scalar = mask_scalar / (1.00001 - mask_scalar)
mask_scalar = mask_scalar.cpu().numpy()
latent_distance = torch.norm(latent_processed - latent_orig, p=2, dim=1)
kernel, kernel_center = get_gaussian_kernel(stddev_radius=1.5, max_radius=2)
masks_for_overlay = []
for i, (distance_map, overlay_image) in enumerate(zip(latent_distance, overlay_images)):
converted_mask = distance_map.float().cpu().numpy()
converted_mask = weighted_histogram_filter(converted_mask, kernel, kernel_center,
percentile_min=0.9, percentile_max=1, min_width=1)
converted_mask = weighted_histogram_filter(converted_mask, kernel, kernel_center,
percentile_min=0.25, percentile_max=0.75, min_width=1)
# The distance at which opacity of original decreases to 50%
if len(mask_scalar.shape) == 3:
if mask_scalar.shape[0] > i:
half_weighted_distance = settings.composite_difference_threshold * mask_scalar[i]
else:
half_weighted_distance = settings.composite_difference_threshold * mask_scalar[0]
else:
half_weighted_distance = settings.composite_difference_threshold * mask_scalar
converted_mask = converted_mask / half_weighted_distance
converted_mask = 1 / (1 + converted_mask ** settings.composite_difference_contrast)
converted_mask = smootherstep(converted_mask)
converted_mask = 1 - converted_mask
converted_mask = 255. * converted_mask
converted_mask = converted_mask.astype(np.uint8)
converted_mask = Image.fromarray(converted_mask)
converted_mask = images.resize_image(2, converted_mask, width, height)
converted_mask = proc.create_binary_mask(converted_mask, round=False)
# Remove aliasing artifacts using a gaussian blur.
converted_mask = converted_mask.filter(ImageFilter.GaussianBlur(radius=4))
# Expand the mask to fit the whole image if needed.
if paste_to is not None:
converted_mask = proc.uncrop(converted_mask,
(overlay_image.width, overlay_image.height),
paste_to)
masks_for_overlay.append(converted_mask)
image_masked = Image.new('RGBa', (overlay_image.width, overlay_image.height))
image_masked.paste(overlay_image.convert("RGBA").convert("RGBa"),
mask=ImageOps.invert(converted_mask.convert('L')))
overlay_images[i] = image_masked.convert('RGBA')
return masks_for_overlay
def apply_masks(
settings,
nmask,
overlay_images,
width, height,
paste_to):
import torch
import modules.processing as proc
import modules.images as images
from PIL import Image, ImageOps, ImageFilter
converted_mask = nmask[0].float()
converted_mask = torch.clamp(converted_mask, min=0, max=1).pow_(settings.mask_blend_scale / 2)
converted_mask = 255. * converted_mask
converted_mask = converted_mask.cpu().numpy().astype(np.uint8)
converted_mask = Image.fromarray(converted_mask)
converted_mask = images.resize_image(2, converted_mask, width, height)
converted_mask = proc.create_binary_mask(converted_mask, round=False)
# Remove aliasing artifacts using a gaussian blur.
converted_mask = converted_mask.filter(ImageFilter.GaussianBlur(radius=4))
# Expand the mask to fit the whole image if needed.
if paste_to is not None:
converted_mask = proc.uncrop(converted_mask,
(width, height),
paste_to)
masks_for_overlay = []
for i, overlay_image in enumerate(overlay_images):
masks_for_overlay.append(converted_mask)
image_masked = Image.new('RGBa', (overlay_image.width, overlay_image.height))
image_masked.paste(overlay_image.convert("RGBA").convert("RGBa"),
mask=ImageOps.invert(converted_mask.convert('L')))
overlay_images[i] = image_masked.convert('RGBA')
return masks_for_overlay
def weighted_histogram_filter(img, kernel, kernel_center, percentile_min=0.0, percentile_max=1.0, min_width=1.0):
"""
Generalized convolution filter capable of applying
weighted mean, median, maximum, and minimum filters
parametrically using an arbitrary kernel.
Args:
img (nparray):
The image, a 2-D array of floats, to which the filter is being applied.
kernel (nparray):
The kernel, a 2-D array of floats.
kernel_center (nparray):
The kernel center coordinate, a 1-D array with two elements.
percentile_min (float):
The lower bound of the histogram window used by the filter,
from 0 to 1.
percentile_max (float):
The upper bound of the histogram window used by the filter,
from 0 to 1.
min_width (float):
The minimum size of the histogram window bounds, in weight units.
Must be greater than 0.
Returns:
(nparray): A filtered copy of the input image "img", a 2-D array of floats.
"""
# Converts an index tuple into a vector.
def vec(x):
return np.array(x)
kernel_min = -kernel_center
kernel_max = vec(kernel.shape) - kernel_center
def weighted_histogram_filter_single(idx):
idx = vec(idx)
min_index = np.maximum(0, idx + kernel_min)
max_index = np.minimum(vec(img.shape), idx + kernel_max)
window_shape = max_index - min_index
class WeightedElement:
"""
An element of the histogram, its weight
and bounds.
"""
def __init__(self, value, weight):
self.value: float = value
self.weight: float = weight
self.window_min: float = 0.0
self.window_max: float = 1.0
# Collect the values in the image as WeightedElements,
# weighted by their corresponding kernel values.
values = []
for window_tup in np.ndindex(tuple(window_shape)):
window_index = vec(window_tup)
image_index = window_index + min_index
centered_kernel_index = image_index - idx
kernel_index = centered_kernel_index + kernel_center
element = WeightedElement(img[tuple(image_index)], kernel[tuple(kernel_index)])
values.append(element)
def sort_key(x: WeightedElement):
return x.value
values.sort(key=sort_key)
# Calculate the height of the stack (sum)
# and each sample's range they occupy in the stack
sum = 0
for i in range(len(values)):
values[i].window_min = sum
sum += values[i].weight
values[i].window_max = sum
# Calculate what range of this stack ("window")
# we want to get the weighted average across.
window_min = sum * percentile_min
window_max = sum * percentile_max
window_width = window_max - window_min
# Ensure the window is within the stack and at least a certain size.
if window_width < min_width:
window_center = (window_min + window_max) / 2
window_min = window_center - min_width / 2
window_max = window_center + min_width / 2
if window_max > sum:
window_max = sum
window_min = sum - min_width
if window_min < 0:
window_min = 0
window_max = min_width
value = 0
value_weight = 0
# Get the weighted average of all the samples
# that overlap with the window, weighted
# by the size of their overlap.
for i in range(len(values)):
if window_min >= values[i].window_max:
continue
if window_max <= values[i].window_min:
break
s = max(window_min, values[i].window_min)
e = min(window_max, values[i].window_max)
w = e - s
value += values[i].value * w
value_weight += w
return value / value_weight if value_weight != 0 else 0
img_out = img.copy()
# Apply the kernel operation over each pixel.
for index in np.ndindex(img.shape):
img_out[index] = weighted_histogram_filter_single(index)
return img_out
def smoothstep(x):
"""
The smoothstep function, input should be clamped to 0-1 range.
Turns a diagonal line (f(x) = x) into a sigmoid-like curve.
"""
return x * x * (3 - 2 * x)
def smootherstep(x):
"""
The smootherstep function, input should be clamped to 0-1 range.
Turns a diagonal line (f(x) = x) into a sigmoid-like curve.
"""
return x * x * x * (x * (6 * x - 15) + 10)
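Both easing curves fix the endpoints and flatten the slope there; a quick check (illustrative values):
for _x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(_x, smoothstep(_x), smootherstep(_x))  # 0->0, 0.5->0.5, 1->1; smootherstep is flatter at the ends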
def get_gaussian_kernel(stddev_radius=1.0, max_radius=2):
"""
Creates a Gaussian kernel with thresholded edges.
Args:
stddev_radius (float):
Standard deviation of the gaussian kernel, in pixels.
max_radius (int):
The size of the filter kernel. The number of pixels is (max_radius*2+1) ** 2.
The kernel is thresholded so that any values one pixel beyond this radius
are weighted at 0.
Returns:
(nparray, nparray): A kernel array (shape: (N, N)), its center coordinate (shape: (2))
"""
# Evaluates a 0-1 normalized gaussian function for a given square distance from the mean.
def gaussian(sqr_mag):
return math.exp(-sqr_mag / (stddev_radius * stddev_radius))
# Helper function for converting a tuple to an array.
def vec(x):
return np.array(x)
"""
Since a gaussian is unbounded, we need to limit ourselves
to a finite range.
We taper the ends off at the end of that range so they equal zero
while preserving the maximum value of 1 at the mean.
"""
zero_radius = max_radius + 1.0
gauss_zero = gaussian(zero_radius * zero_radius)
gauss_kernel_scale = 1 / (1 - gauss_zero)
def gaussian_kernel_func(coordinate):
x = coordinate[0] ** 2.0 + coordinate[1] ** 2.0
x = gaussian(x)
x -= gauss_zero
x *= gauss_kernel_scale
x = max(0.0, x)
return x
size = max_radius * 2 + 1
kernel_center = max_radius
kernel = np.zeros((size, size))
for index in np.ndindex(kernel.shape):
kernel[index] = gaussian_kernel_func(vec(index) - kernel_center)
return kernel, kernel_center
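A small usage sketch combining the two helpers above (illustrative sizes): with a percentile window of 0.25-0.75, the histogram filter behaves like a gaussian-weighted median:
import numpy as np

_kernel, _center = get_gaussian_kernel(stddev_radius=1.5, max_radius=2)
_img = np.random.rand(16, 16)
_filtered = weighted_histogram_filter(_img, _kernel, _center,
                                      percentile_min=0.25, percentile_max=0.75, min_width=1)
print(_filtered.shape)  # (16, 16)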
# ------------------- Constants -------------------
default = SoftInpaintingSettings(1, 0.5, 4, 0, 0.5, 2)
enabled_ui_label = "Soft inpainting"
enabled_gen_param_label = "Soft inpainting enabled"
enabled_el_id = "soft_inpainting_enabled"
ui_labels = SoftInpaintingSettings(
"Schedule bias",
"Preservation strength",
"Transition contrast boost",
"Mask influence",
"Difference threshold",
"Difference contrast")
ui_info = SoftInpaintingSettings(
"Shifts when preservation of original content occurs during denoising.",
"How strongly partially masked content should be preserved.",
"Amplifies the contrast that may be lost in partially masked regions.",
"How strongly the original mask should bias the difference threshold.",
"How much an image region can change before the original pixels are not blended in anymore.",
"How sharp the transition should be between blended and not blended.")
gen_param_labels = SoftInpaintingSettings(
"Soft inpainting schedule bias",
"Soft inpainting preservation strength",
"Soft inpainting transition contrast boost",
"Soft inpainting mask influence",
"Soft inpainting difference threshold",
"Soft inpainting difference contrast")
el_ids = SoftInpaintingSettings(
"mask_blend_power",
"mask_blend_scale",
"inpaint_detail_preservation",
"composite_mask_influence",
"composite_difference_threshold",
"composite_difference_contrast")
# ------------------- Script -------------------
class Script(scripts.Script):
def __init__(self):
self.section = "inpaint"
self.masks_for_overlay = None
self.overlay_images = None
def title(self):
return "Soft Inpainting"
def show(self, is_img2img):
return scripts.AlwaysVisible if is_img2img else False
def ui(self, is_img2img):
if not is_img2img:
return
with InputAccordion(False, label=enabled_ui_label, elem_id=enabled_el_id) as soft_inpainting_enabled:
with gr.Group():
gr.Markdown(
"""
Soft inpainting allows you to **seamlessly blend original content with inpainted content** according to the mask opacity.
**High _Mask blur_** values are recommended!
""")
power = \
gr.Slider(label=ui_labels.mask_blend_power,
info=ui_info.mask_blend_power,
minimum=0,
maximum=8,
step=0.1,
value=default.mask_blend_power,
elem_id=el_ids.mask_blend_power)
scale = \
gr.Slider(label=ui_labels.mask_blend_scale,
info=ui_info.mask_blend_scale,
minimum=0,
maximum=8,
step=0.05,
value=default.mask_blend_scale,
elem_id=el_ids.mask_blend_scale)
detail = \
gr.Slider(label=ui_labels.inpaint_detail_preservation,
info=ui_info.inpaint_detail_preservation,
minimum=1,
maximum=32,
step=0.5,
value=default.inpaint_detail_preservation,
elem_id=el_ids.inpaint_detail_preservation)
gr.Markdown(
"""
### Pixel Composite Settings
""")
mask_inf = \
gr.Slider(label=ui_labels.composite_mask_influence,
info=ui_info.composite_mask_influence,
minimum=0,
maximum=1,
step=0.05,
value=default.composite_mask_influence,
elem_id=el_ids.composite_mask_influence)
dif_thresh = \
gr.Slider(label=ui_labels.composite_difference_threshold,
info=ui_info.composite_difference_threshold,
minimum=0,
maximum=8,
step=0.25,
value=default.composite_difference_threshold,
elem_id=el_ids.composite_difference_threshold)
dif_contr = \
gr.Slider(label=ui_labels.composite_difference_contrast,
info=ui_info.composite_difference_contrast,
minimum=0,
maximum=8,
step=0.25,
value=default.composite_difference_contrast,
elem_id=el_ids.composite_difference_contrast)
with gr.Accordion("Help", open=False):
gr.Markdown(
f"""
### {ui_labels.mask_blend_power}
The blending strength of original content is scaled proportionally with the decreasing noise level values at each step (sigmas).
This ensures that the influence of the denoiser and original content preservation is roughly balanced at each step.
This balance can be shifted using this parameter, controlling whether earlier or later steps have stronger preservation.
- **Below 1**: Stronger preservation near the end (with low sigma)
- **1**: Balanced (proportional to sigma)
- **Above 1**: Stronger preservation in the beginning (with high sigma)
""")
gr.Markdown(
f"""
### {ui_labels.mask_blend_scale}
Skews whether partially masked image regions should be more likely to preserve the original content or favor inpainted content.
This may need to be adjusted depending on the {ui_labels.mask_blend_power}, CFG Scale, prompt and Denoising strength.
- **Low values**: Favors generated content.
- **High values**: Favors original content.
""")
gr.Markdown(
f"""
### {ui_labels.inpaint_detail_preservation}
This parameter controls how the original latent vectors and denoised latent vectors are interpolated.
With higher values, the magnitude of the resulting blended vector will be closer to the maximum of the two interpolated vectors.
This can prevent the loss of contrast that occurs with linear interpolation.
- **Low values**: Softer blending, details may fade.
- **High values**: Stronger contrast, may over-saturate colors.
""")
gr.Markdown(
"""
## Pixel Composite Settings
Masks are generated based on how much a part of the image changed after denoising.
These masks are used to blend the original and final images together.
If the difference is low, the original pixels are used instead of the pixels returned by the inpainting process.
""")
gr.Markdown(
f"""
### {ui_labels.composite_mask_influence}
This parameter controls how much the mask should bias this sensitivity to difference.
- **0**: Ignore the mask, only consider differences in image content.
- **1**: Follow the mask closely despite image content changes.
""")
gr.Markdown(
f"""
### {ui_labels.composite_difference_threshold}
This value represents the difference at which the original pixels will have less than 50% opacity.
- **Low values**: Two image patches must be almost the same in order to retain original pixels.
- **High values**: Two image patches can be very different and still retain original pixels.
""")
gr.Markdown(
f"""
### {ui_labels.composite_difference_contrast}
This value represents the contrast between the opacity of the original and inpainted content.
- **Low values**: The blend will be more gradual and have longer transitions, but may cause ghosting.
- **High values**: Ghosting will be less common, but transitions may be very sudden.
""")
self.infotext_fields = [(soft_inpainting_enabled, enabled_gen_param_label),
(power, gen_param_labels.mask_blend_power),
(scale, gen_param_labels.mask_blend_scale),
(detail, gen_param_labels.inpaint_detail_preservation),
(mask_inf, gen_param_labels.composite_mask_influence),
(dif_thresh, gen_param_labels.composite_difference_threshold),
(dif_contr, gen_param_labels.composite_difference_contrast)]
self.paste_field_names = []
for _, field_name in self.infotext_fields:
self.paste_field_names.append(field_name)
return [soft_inpainting_enabled,
power,
scale,
detail,
mask_inf,
dif_thresh,
dif_contr]
def process(self, p, enabled, power, scale, detail_preservation, mask_inf, dif_thresh, dif_contr):
if not enabled:
return
if not processing_uses_inpainting(p):
return
# Shut off the rounding it normally does.
p.mask_round = False
settings = SoftInpaintingSettings(power, scale, detail_preservation, mask_inf, dif_thresh, dif_contr)
# p.extra_generation_params["Mask rounding"] = False
settings.add_generation_params(p.extra_generation_params)
def on_mask_blend(self, p, mba: scripts.MaskBlendArgs, enabled, power, scale, detail_preservation, mask_inf,
dif_thresh, dif_contr):
if not enabled:
return
if not processing_uses_inpainting(p):
return
if mba.is_final_blend:
mba.blended_latent = mba.current_latent
return
settings = SoftInpaintingSettings(power, scale, detail_preservation, mask_inf, dif_thresh, dif_contr)
# todo: Why is sigma 2D? Both values are the same.
mba.blended_latent = latent_blend(settings,
mba.init_latent,
mba.current_latent,
get_modified_nmask(settings, mba.nmask, mba.sigma[0]))
def post_sample(self, p, ps: scripts.PostSampleArgs, enabled, power, scale, detail_preservation, mask_inf,
dif_thresh, dif_contr):
if not enabled:
return
if not processing_uses_inpainting(p):
return
nmask = getattr(p, "nmask", None)
if nmask is None:
return
from modules import images
from modules.shared import opts
settings = SoftInpaintingSettings(power, scale, detail_preservation, mask_inf, dif_thresh, dif_contr)
# since the original code puts holes in the existing overlay images,
# we have to rebuild them.
self.overlay_images = []
for img in p.init_images:
image = images.flatten(img, opts.img2img_background_color)
if p.paste_to is None and p.resize_mode != 3:
image = images.resize_image(p.resize_mode, image, p.width, p.height)
self.overlay_images.append(image.convert('RGBA'))
if len(p.init_images) == 1:
self.overlay_images = self.overlay_images * p.batch_size
if getattr(ps.samples, 'already_decoded', False):
self.masks_for_overlay = apply_masks(settings=settings,
nmask=nmask,
overlay_images=self.overlay_images,
width=p.width,
height=p.height,
paste_to=p.paste_to)
else:
self.masks_for_overlay = apply_adaptive_masks(settings=settings,
nmask=nmask,
latent_orig=p.init_latent,
latent_processed=ps.samples,
overlay_images=self.overlay_images,
width=p.width,
height=p.height,
paste_to=p.paste_to)
def postprocess_maskoverlay(self, p, ppmo: scripts.PostProcessMaskOverlayArgs, enabled, power, scale,
detail_preservation, mask_inf, dif_thresh, dif_contr):
if not enabled:
return
if not processing_uses_inpainting(p):
return
if self.masks_for_overlay is None:
return
if self.overlay_images is None:
return
ppmo.mask_for_overlay = self.masks_for_overlay[ppmo.index]
ppmo.overlay_image = self.overlay_images[ppmo.index]
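For intuition, a hedged standalone sketch (dummy tensors, no webui required) of the per-step blending that on_mask_blend performs: at every denoising step the original latents are re-blended in with a strength set by the current sigma:
import torch

_settings = SoftInpaintingSettings(1, 0.5, 4, 0, 0.5, 2)
_init = torch.randn(1, 4, 8, 8)          # original image latents
_nmask = torch.full((1, 4, 8, 8), 0.5)   # half-transparent mask everywhere
_current = torch.randn(1, 4, 8, 8)       # what the denoiser produced this step
for _sigma in (14.6, 7.0, 2.0, 0.5):     # a typical descending sigma schedule
    _current = latent_blend(_settings, _init, _current,
                            get_modified_nmask(_settings, _nmask, _sigma))
print(_current.shape)  # torch.Size([1, 4, 8, 8])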
+6 -11
@@ -1,14 +1,9 @@
<div class='card' style={style} onclick={card_clicked} data-name="{name}" {sort_keys}>
<div class="card" style="{style}" onclick="{card_clicked}" data-name="{name}" {sort_keys}>
{background_image}
<div class="button-row">
{metadata_button}
{edit_button}
</div>
<div class='actions'>
<div class='additional'>
<span style="display:none" class='search_term{search_only}'>{search_term}</span>
</div>
<span class='name'>{name}</span>
<span class='description'>{description}</span>
<div class="button-row">{copy_path_button}{metadata_button}{edit_button}</div>
<div class="actions">
<div class="additional">{search_terms}</div>
<span class="name">{name}</span>
<span class="description">{description}</span>
</div>
</div>
@@ -0,0 +1,5 @@
<div class="copy-path-button card-button"
title="Copy path to clipboard"
onclick="extraNetworksCopyCardPath(event)"
data-clipboard-text="{filename}">
</div>
@@ -0,0 +1,4 @@
<div class="edit-button card-button"
title="Edit metadata"
onclick="extraNetworksEditUserMetadata(event, '{tabname}', '{extra_networks_tabname}')">
</div>
+4
@@ -0,0 +1,4 @@
<div class="metadata-button card-button"
title="Show internal metadata"
onclick="extraNetworksRequestMetadata(event, '{extra_networks_tabname}')">
</div>
+8
@@ -0,0 +1,8 @@
<div class="extra-network-pane-content-dirs">
<div id='{tabname}_{extra_networks_tabname}_dirs' class='extra-network-dirs'>
{dirs_html}
</div>
<div id='{tabname}_{extra_networks_tabname}_cards' class='extra-network-cards'>
{items_html}
</div>
</div>
+8
@@ -0,0 +1,8 @@
<div class="extra-network-pane-content-tree resize-handle-row">
<div id='{tabname}_{extra_networks_tabname}_tree' class='extra-network-tree' style='flex-basis: {extra_networks_tree_view_default_width}px'>
{tree_html}
</div>
<div id='{tabname}_{extra_networks_tabname}_cards' class='extra-network-cards' style='flex-grow: 1;'>
{items_html}
</div>
</div>
+81
@@ -0,0 +1,81 @@
<div id='{tabname}_{extra_networks_tabname}_pane' class='extra-network-pane {tree_view_div_default_display_class}'>
<div class="extra-network-control" id="{tabname}_{extra_networks_tabname}_controls" style="display:none" >
<div class="extra-network-control--search">
<input
id="{tabname}_{extra_networks_tabname}_extra_search"
class="extra-network-control--search-text"
type="search"
placeholder="Search"
>
</div>
<small>Sort: </small>
<div
id="{tabname}_{extra_networks_tabname}_extra_sort_path"
class="extra-network-control--sort{sort_path_active}"
data-sortkey="default"
title="Sort by path"
onclick="extraNetworksControlSortOnClick(event, '{tabname}', '{extra_networks_tabname}');"
>
<i class="extra-network-control--icon extra-network-control--sort-icon"></i>
</div>
<div
id="{tabname}_{extra_networks_tabname}_extra_sort_name"
class="extra-network-control--sort{sort_name_active}"
data-sortkey="name"
title="Sort by name"
onclick="extraNetworksControlSortOnClick(event, '{tabname}', '{extra_networks_tabname}');"
>
<i class="extra-network-control--icon extra-network-control--sort-icon"></i>
</div>
<div
id="{tabname}_{extra_networks_tabname}_extra_sort_date_created"
class="extra-network-control--sort{sort_date_created_active}"
data-sortkey="date_created"
title="Sort by date created"
onclick="extraNetworksControlSortOnClick(event, '{tabname}', '{extra_networks_tabname}');"
>
<i class="extra-network-control--icon extra-network-control--sort-icon"></i>
</div>
<div
id="{tabname}_{extra_networks_tabname}_extra_sort_date_modified"
class="extra-network-control--sort{sort_date_modified_active}"
data-sortkey="date_modified"
title="Sort by date modified"
onclick="extraNetworksControlSortOnClick(event, '{tabname}', '{extra_networks_tabname}');"
>
<i class="extra-network-control--icon extra-network-control--sort-icon"></i>
</div>
<small> </small>
<div
id="{tabname}_{extra_networks_tabname}_extra_sort_dir"
class="extra-network-control--sort-dir"
data-sortdir="{data_sortdir}"
title="Sort ascending"
onclick="extraNetworksControlSortDirOnClick(event, '{tabname}', '{extra_networks_tabname}');"
>
<i class="extra-network-control--icon extra-network-control--sort-dir-icon"></i>
</div>
<small> </small>
<div
id="{tabname}_{extra_networks_tabname}_extra_tree_view"
class="extra-network-control--tree-view {tree_view_btn_extra_class}"
title="Enable Tree View"
onclick="extraNetworksControlTreeViewOnClick(event, '{tabname}', '{extra_networks_tabname}');"
>
<i class="extra-network-control--icon extra-network-control--tree-view-icon"></i>
</div>
<div
id="{tabname}_{extra_networks_tabname}_extra_refresh"
class="extra-network-control--refresh"
title="Refresh page"
onclick="extraNetworksControlRefreshOnClick(event, '{tabname}', '{extra_networks_tabname}');"
>
<i class="extra-network-control--icon extra-network-control--refresh-icon"></i>
</div>
</div>
{pane_content}
</div>
+23
@@ -0,0 +1,23 @@
<span data-filterable-item-text hidden>{search_terms}</span>
<div class="tree-list-content {subclass}"
type="button"
onclick="extraNetworksTreeOnClick(event, '{tabname}', '{extra_networks_tabname}');{onclick_extra}"
data-path="{data_path}"
data-hash="{data_hash}"
>
<span class='tree-list-item-action tree-list-item-action--leading'>
{action_list_item_action_leading}
</span>
<span class="tree-list-item-visual tree-list-item-visual--leading">
{action_list_item_visual_leading}
</span>
<span class="tree-list-item-label tree-list-item-label--truncate">
{action_list_item_label}
</span>
<span class="tree-list-item-visual tree-list-item-visual--trailing">
{action_list_item_visual_trailing}
</span>
<span class="tree-list-item-action tree-list-item-action--trailing">
{action_list_item_action_trailing}
</span>
</div>
+1 -309
@@ -4,107 +4,6 @@
#licenses pre { margin: 1em 0 2em 0;}
</style>
<h2><a href="https://github.com/sczhou/CodeFormer/blob/master/LICENSE">CodeFormer</a></h2>
<small>Parts of CodeFormer code had to be copied to be compatible with GFPGAN.</small>
<pre>
S-Lab License 1.0
Copyright 2022 S-Lab
Redistribution and use for non-commercial purpose in source and
binary forms, with or without modification, are permitted provided
that the following conditions are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
In the event that redistribution and/or use for commercial purpose in
source or binary forms, with or without modification is required,
please contact the contributor(s) of the work.
</pre>
<h2><a href="https://github.com/victorca25/iNNfer/blob/main/LICENSE">ESRGAN</a></h2>
<small>Code for architecture and reading models copied.</small>
<pre>
MIT License
Copyright (c) 2021 victorca25
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
</pre>
<h2><a href="https://github.com/xinntao/Real-ESRGAN/blob/master/LICENSE">Real-ESRGAN</a></h2>
<small>Some code is copied to support ESRGAN models.</small>
<pre>
BSD 3-Clause License
Copyright (c) 2021, Xintao Wang
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
</pre>
<h2><a href="https://github.com/invoke-ai/InvokeAI/blob/main/LICENSE">InvokeAI</a></h2>
<small>Some code for compatibility with OSX is taken from lstein's repository.</small>
<pre>
@@ -183,213 +82,6 @@ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
</pre>
<h2><a href="https://github.com/JingyunLiang/SwinIR/blob/main/LICENSE">SwinIR</a></h2>
<small>Code added by contributors, most likely copied from this repository.</small>
<pre>
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [2021] [SwinIR Authors]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
</pre>
<h2><a href="https://github.com/AminRezaei0x443/memory-efficient-attention/blob/main/LICENSE">Memory Efficient Attention</a></h2>
<small>The sub-quadratic cross attention optimization uses modified code from the Memory Efficient Attention package that Alex Birch optimized for 3D tensors. This license is updated to reflect that.</small>
<pre>
@@ -687,4 +379,4 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
</pre>
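For context on the Memory Efficient Attention note above: the sub-quadratic optimization never materialises the full query-key score matrix. Keys and values are visited in chunks while a per-query softmax accumulator is renormalised on the fly, so peak memory grows linearly with sequence length. A minimal plain-JavaScript sketch of that streaming-softmax idea (illustrative nested arrays only, not the package's actual tensor API):
function chunkedAttention(q, k, v, chunkSize) {
    // q: [Nq][d], k: [Nk][d], v: [Nk][dv]; returns [Nq][dv].
    var out = [];
    for (var qi = 0; qi < q.length; qi++) {
        var qRow = q[qi];
        var runningMax = -Infinity; // max score seen so far for this query
        var runningSum = 0;         // running softmax denominator
        var acc = new Array(v[0].length).fill(0); // running weighted sum of v rows
        for (var start = 0; start < k.length; start += chunkSize) {
            var end = Math.min(start + chunkSize, k.length);
            for (var i = start; i < end; i++) {
                var score = qRow.reduce(function(s, x, d) { return s + x * k[i][d]; }, 0);
                var newMax = Math.max(runningMax, score);
                var scale = Math.exp(runningMax - newMax); // rescale old accumulator
                var w = Math.exp(score - newMax);
                runningSum = runningSum * scale + w;
                acc = acc.map(function(a, d) { return a * scale + w * v[i][d]; });
                runningMax = newMax;
            }
        }
        out.push(acc.map(function(a) { return a / runningSum; }));
    }
    return out;
}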
+6 -6
@@ -50,17 +50,17 @@ function dimensionChange(e, is_width, is_height) {
var scaledx = targetElement.naturalWidth * viewportscale;
var scaledy = targetElement.naturalHeight * viewportscale;
var cleintRectTop = (viewportOffset.top + window.scrollY);
var cleintRectLeft = (viewportOffset.left + window.scrollX);
var cleintRectCentreY = cleintRectTop + (targetElement.clientHeight / 2);
var cleintRectCentreX = cleintRectLeft + (targetElement.clientWidth / 2);
var clientRectTop = (viewportOffset.top + window.scrollY);
var clientRectLeft = (viewportOffset.left + window.scrollX);
var clientRectCentreY = clientRectTop + (targetElement.clientHeight / 2);
var clientRectCentreX = clientRectLeft + (targetElement.clientWidth / 2);
var arscale = Math.min(scaledx / currentWidth, scaledy / currentHeight);
var arscaledx = currentWidth * arscale;
var arscaledy = currentHeight * arscale;
var arRectTop = cleintRectCentreY - (arscaledy / 2);
var arRectLeft = cleintRectCentreX - (arscaledx / 2);
var arRectTop = clientRectCentreY - (arscaledy / 2);
var arRectLeft = clientRectCentreX - (arscaledx / 2);
var arRectWidth = arscaledx;
var arRectHeight = arscaledy;
+22 -5
@@ -74,22 +74,39 @@ window.document.addEventListener('dragover', e => {
e.dataTransfer.dropEffect = 'copy';
});
window.document.addEventListener('drop', e => {
window.document.addEventListener('drop', async e => {
const target = e.composedPath()[0];
if (!eventHasFiles(e)) return;
const url = e.dataTransfer.getData('text/uri-list') || e.dataTransfer.getData('text/plain');
if (!eventHasFiles(e) && !url) return;
if (dragDropTargetIsPrompt(target)) {
e.stopPropagation();
e.preventDefault();
let prompt_target = get_tab_index('tabs') == 1 ? "img2img_prompt_image" : "txt2img_prompt_image";
const isImg2img = get_tab_index('tabs') == 1;
let prompt_image_target = isImg2img ? "img2img_prompt_image" : "txt2img_prompt_image";
const imgParent = gradioApp().getElementById(prompt_target);
const imgParent = gradioApp().getElementById(prompt_image_target);
const files = e.dataTransfer.files;
const fileInput = imgParent.querySelector('input[type="file"]');
if (fileInput) {
if (eventHasFiles(e) && fileInput) {
fileInput.files = files;
fileInput.dispatchEvent(new Event('change'));
} else if (url) {
try {
const request = await fetch(url);
if (!request.ok) {
console.error('Error fetching URL:', url, request.status);
return;
}
const data = new DataTransfer();
data.items.add(new File([await request.blob()], 'image.png'));
fileInput.files = data.files;
fileInput.dispatchEvent(new Event('change'));
} catch (error) {
console.error('Error fetching URL:', url, error);
return;
}
}
}
+8
@@ -64,6 +64,14 @@ function keyupEditAttention(event) {
selectionEnd++;
}
// deselect surrounding whitespace
while (text[selectionStart] == " " && selectionStart < selectionEnd) {
selectionStart++;
}
while (text[selectionEnd - 1] == " " && selectionEnd > selectionStart) {
selectionEnd--;
}
target.setSelectionRange(selectionStart, selectionEnd);
return true;
}
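A hypothetical before/after for that whitespace trim (values invented for illustration):
var text = "a, masterpiece , best";
var selectionStart = 2, selectionEnd = 15; // selection is " masterpiece " with padding
while (text[selectionStart] == " " && selectionStart < selectionEnd) selectionStart++;
while (text[selectionEnd - 1] == " " && selectionEnd > selectionStart) selectionEnd--;
// selectionStart is now 3 and selectionEnd is 14: exactly "masterpiece"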
+5 -2
@@ -2,8 +2,11 @@
function extensions_apply(_disabled_list, _update_list, disable_all) {
var disable = [];
var update = [];
gradioApp().querySelectorAll('#extensions input[type="checkbox"]').forEach(function(x) {
const extensions_input = gradioApp().querySelectorAll('#extensions input[type="checkbox"]');
if (extensions_input.length == 0) {
throw Error("Extensions page not yet loaded.");
}
extensions_input.forEach(function(x) {
if (x.name.startsWith("enable_") && !x.checked) {
disable.push(x.name.substring(7));
}
+440 -128
@@ -16,99 +16,116 @@ function toggleCss(key, css, enable) {
}
function setupExtraNetworksForTab(tabname) {
gradioApp().querySelector('#' + tabname + '_extra_tabs').classList.add('extra-networks');
function registerPrompt(tabname, id) {
var textarea = gradioApp().querySelector("#" + id + " > label > textarea");
var tabs = gradioApp().querySelector('#' + tabname + '_extra_tabs > div');
var searchDiv = gradioApp().getElementById(tabname + '_extra_search');
var search = searchDiv.querySelector('textarea');
var sort = gradioApp().getElementById(tabname + '_extra_sort');
var sortOrder = gradioApp().getElementById(tabname + '_extra_sortorder');
var refresh = gradioApp().getElementById(tabname + '_extra_refresh');
var showDirsDiv = gradioApp().getElementById(tabname + '_extra_show_dirs');
var showDirs = gradioApp().querySelector('#' + tabname + '_extra_show_dirs input');
var promptContainer = gradioApp().querySelector('.prompt-container-compact#' + tabname + '_prompt_container');
var negativePrompt = gradioApp().querySelector('#' + tabname + '_neg_prompt');
if (!activePromptTextarea[tabname]) {
activePromptTextarea[tabname] = textarea;
}
tabs.appendChild(searchDiv);
tabs.appendChild(sort);
tabs.appendChild(sortOrder);
tabs.appendChild(refresh);
tabs.appendChild(showDirsDiv);
textarea.addEventListener("focus", function() {
activePromptTextarea[tabname] = textarea;
});
}
var applyFilter = function() {
var searchTerm = search.value.toLowerCase();
var tabnav = gradioApp().querySelector('#' + tabname + '_extra_tabs > div.tab-nav');
var controlsDiv = document.createElement('DIV');
controlsDiv.classList.add('extra-networks-controls-div');
tabnav.appendChild(controlsDiv);
tabnav.insertBefore(controlsDiv, null);
gradioApp().querySelectorAll('#' + tabname + '_extra_tabs div.card').forEach(function(elem) {
var searchOnly = elem.querySelector('.search_only');
var text = elem.querySelector('.name').textContent.toLowerCase() + " " + elem.querySelector('.search_term').textContent.toLowerCase();
var this_tab = gradioApp().querySelector('#' + tabname + '_extra_tabs');
this_tab.querySelectorAll(":scope > [id^='" + tabname + "_']").forEach(function(elem) {
// tabname_full = {tabname}_{extra_networks_tabname}
var tabname_full = elem.id;
var search = gradioApp().querySelector("#" + tabname_full + "_extra_search");
var sort_dir = gradioApp().querySelector("#" + tabname_full + "_extra_sort_dir");
var refresh = gradioApp().querySelector("#" + tabname_full + "_extra_refresh");
var currentSort = '';
var visible = text.indexOf(searchTerm) != -1;
// If any of the buttons above don't exist, we want to skip this iteration of the loop.
if (!search || !sort_dir || !refresh) {
return; // `return` is the equivalent of `continue` in a forEach loop.
}
if (searchOnly && searchTerm.length < 4) {
visible = false;
var applyFilter = function(force) {
var searchTerm = search.value.toLowerCase();
gradioApp().querySelectorAll('#' + tabname + '_extra_tabs div.card').forEach(function(elem) {
var searchOnly = elem.querySelector('.search_only');
var text = Array.prototype.map.call(elem.querySelectorAll('.search_terms, .description'), function(t) {
return t.textContent.toLowerCase();
}).join(" ");
var visible = text.indexOf(searchTerm) != -1;
if (searchOnly && searchTerm.length < 4) {
visible = false;
}
if (visible) {
elem.classList.remove("hidden");
} else {
elem.classList.add("hidden");
}
});
applySort(force);
};
var applySort = function(force) {
var cards = gradioApp().querySelectorAll('#' + tabname_full + ' div.card');
var parent = gradioApp().querySelector('#' + tabname_full + "_cards");
var reverse = sort_dir.dataset.sortdir == "Descending";
var activeSearchElem = gradioApp().querySelector('#' + tabname_full + "_controls .extra-network-control--sort.extra-network-control--enabled");
var sortKey = activeSearchElem ? activeSearchElem.dataset.sortkey : "default";
var sortKeyDataField = "sort" + sortKey.charAt(0).toUpperCase() + sortKey.slice(1);
var sortKeyStore = sortKey + "-" + sort_dir.dataset.sortdir + "-" + cards.length;
if (sortKeyStore == currentSort && !force) {
return;
}
currentSort = sortKeyStore;
var sortedCards = Array.from(cards);
sortedCards.sort(function(cardA, cardB) {
var a = cardA.dataset[sortKeyDataField];
var b = cardB.dataset[sortKeyDataField];
if (!isNaN(a) && !isNaN(b)) {
return parseInt(a) - parseInt(b);
}
return (a < b ? -1 : (a > b ? 1 : 0));
});
if (reverse) {
sortedCards.reverse();
}
elem.style.display = visible ? "" : "none";
});
parent.innerHTML = '';
var frag = document.createDocumentFragment();
sortedCards.forEach(function(card) {
frag.appendChild(card);
});
parent.appendChild(frag);
};
search.addEventListener("input", function() {
applyFilter();
});
applySort();
};
applyFilter();
extraNetworksApplySort[tabname_full] = applySort;
extraNetworksApplyFilter[tabname_full] = applyFilter;
var applySort = function() {
var cards = gradioApp().querySelectorAll('#' + tabname + '_extra_tabs div.card');
var controls = gradioApp().querySelector("#" + tabname_full + "_controls");
controlsDiv.insertBefore(controls, null);
var reverse = sortOrder.classList.contains("sortReverse");
var sortKey = sort.querySelector("input").value.toLowerCase().replace("sort", "").replaceAll(" ", "_").replace(/_+$/, "").trim() || "name";
sortKey = "sort" + sortKey.charAt(0).toUpperCase() + sortKey.slice(1);
var sortKeyStore = sortKey + "-" + (reverse ? "Descending" : "Ascending") + "-" + cards.length;
if (sortKeyStore == sort.dataset.sortkey) {
return;
if (elem.style.display != "none") {
extraNetworksShowControlsForPage(tabname, tabname_full);
}
sort.dataset.sortkey = sortKeyStore;
cards.forEach(function(card) {
card.originalParentElement = card.parentElement;
});
var sortedCards = Array.from(cards);
sortedCards.sort(function(cardA, cardB) {
var a = cardA.dataset[sortKey];
var b = cardB.dataset[sortKey];
if (!isNaN(a) && !isNaN(b)) {
return parseInt(a) - parseInt(b);
}
return (a < b ? -1 : (a > b ? 1 : 0));
});
if (reverse) {
sortedCards.reverse();
}
cards.forEach(function(card) {
card.remove();
});
sortedCards.forEach(function(card) {
card.originalParentElement.appendChild(card);
});
};
search.addEventListener("input", applyFilter);
sortOrder.addEventListener("click", function() {
sortOrder.classList.toggle("sortReverse");
applySort();
});
applyFilter();
extraNetworksApplySort[tabname] = applySort;
extraNetworksApplyFilter[tabname] = applyFilter;
var showDirsUpdate = function() {
var css = '#' + tabname + '_extra_tabs .extra-network-subdirs { display: none; }';
toggleCss(tabname + '_extra_show_dirs_style', css, !showDirs.checked);
localSet('extra-networks-show-dirs', showDirs.checked ? 1 : 0);
};
showDirs.checked = localGet('extra-networks-show-dirs', 1) == 1;
showDirs.addEventListener("change", showDirsUpdate);
showDirsUpdate();
registerPrompt(tabname, tabname + "_prompt");
registerPrompt(tabname, tabname + "_neg_prompt");
}
function extraNetworksMovePromptToTab(tabname, id, showPrompt, showNegativePrompt) {
@@ -137,21 +154,42 @@ function extraNetworksMovePromptToTab(tabname, id, showPrompt, showNegativePromp
}
function extraNetworksUrelatedTabSelected(tabname) { // called from python when user selects an unrelated tab (generate)
extraNetworksMovePromptToTab(tabname, '', false, false);
function extraNetworksShowControlsForPage(tabname, tabname_full) {
gradioApp().querySelectorAll('#' + tabname + '_extra_tabs .extra-networks-controls-div > div').forEach(function(elem) {
var targetId = tabname_full + "_controls";
elem.style.display = elem.id == targetId ? "" : "none";
});
}
function extraNetworksTabSelected(tabname, id, showPrompt, showNegativePrompt) { // called from python when user selects an extra networks tab
function extraNetworksUnrelatedTabSelected(tabname) { // called from python when user selects an unrelated tab (generate)
extraNetworksMovePromptToTab(tabname, '', false, false);
extraNetworksShowControlsForPage(tabname, null);
}
function extraNetworksTabSelected(tabname, id, showPrompt, showNegativePrompt, tabname_full) { // called from python when user selects an extra networks tab
extraNetworksMovePromptToTab(tabname, id, showPrompt, showNegativePrompt);
extraNetworksShowControlsForPage(tabname, tabname_full);
}
function applyExtraNetworkFilter(tabname) {
setTimeout(extraNetworksApplyFilter[tabname], 1);
function applyExtraNetworkFilter(tabname_full) {
var doFilter = function() {
var applyFunction = extraNetworksApplyFilter[tabname_full];
if (applyFunction) {
applyFunction(true);
}
};
setTimeout(doFilter, 1);
}
function applyExtraNetworkSort(tabname) {
setTimeout(extraNetworksApplySort[tabname], 1);
function applyExtraNetworkSort(tabname_full) {
var doSort = function() {
extraNetworksApplySort[tabname_full](true);
};
setTimeout(doSort, 1);
}
var extraNetworksApplyFilter = {};
@@ -161,41 +199,24 @@ var activePromptTextarea = {};
function setupExtraNetworks() {
setupExtraNetworksForTab('txt2img');
setupExtraNetworksForTab('img2img');
function registerPrompt(tabname, id) {
var textarea = gradioApp().querySelector("#" + id + " > label > textarea");
if (!activePromptTextarea[tabname]) {
activePromptTextarea[tabname] = textarea;
}
textarea.addEventListener("focus", function() {
activePromptTextarea[tabname] = textarea;
});
}
registerPrompt('txt2img', 'txt2img_prompt');
registerPrompt('txt2img', 'txt2img_neg_prompt');
registerPrompt('img2img', 'img2img_prompt');
registerPrompt('img2img', 'img2img_neg_prompt');
}
onUiLoaded(setupExtraNetworks);
var re_extranet = /<([^:^>]+:[^:]+):[\d.]+>(.*)/;
var re_extranet_g = /<([^:^>]+:[^:]+):[\d.]+>/g;
function tryToRemoveExtraNetworkFromPrompt(textarea, text) {
var m = text.match(re_extranet);
var re_extranet_neg = /\(([^:^>]+:[\d.]+)\)/;
var re_extranet_g_neg = /\(([^:^>]+:[\d.]+)\)/g;
function tryToRemoveExtraNetworkFromPrompt(textarea, text, isNeg) {
var m = text.match(isNeg ? re_extranet_neg : re_extranet);
var replaced = false;
var newTextareaText;
var extraTextBeforeNet = opts.extra_networks_add_text_separator;
if (m) {
var extraTextBeforeNet = opts.extra_networks_add_text_separator;
var extraTextAfterNet = m[2];
var partToSearch = m[1];
var foundAtPosition = -1;
newTextareaText = textarea.value.replaceAll(re_extranet_g, function(found, net, pos) {
m = found.match(re_extranet);
newTextareaText = textarea.value.replaceAll(isNeg ? re_extranet_g_neg : re_extranet_g, function(found, net, pos) {
m = found.match(isNeg ? re_extranet_neg : re_extranet);
if (m[1] == partToSearch) {
replaced = true;
foundAtPosition = pos;
@@ -203,9 +224,8 @@ function tryToRemoveExtraNetworkFromPrompt(textarea, text) {
}
return found;
});
if (foundAtPosition >= 0) {
if (newTextareaText.substr(foundAtPosition, extraTextAfterNet.length) == extraTextAfterNet) {
if (extraTextAfterNet && newTextareaText.substr(foundAtPosition, extraTextAfterNet.length) == extraTextAfterNet) {
newTextareaText = newTextareaText.substr(0, foundAtPosition) + newTextareaText.substr(foundAtPosition + extraTextAfterNet.length);
}
if (newTextareaText.substr(foundAtPosition - extraTextBeforeNet.length, extraTextBeforeNet.length) == extraTextBeforeNet) {
@@ -213,13 +233,8 @@ function tryToRemoveExtraNetworkFromPrompt(textarea, text) {
}
}
} else {
newTextareaText = textarea.value.replaceAll(new RegExp(text, "g"), function(found) {
if (found == text) {
replaced = true;
return "";
}
return found;
});
newTextareaText = textarea.value.replaceAll(new RegExp(`((?:${extraTextBeforeNet})?${text})`, "g"), "");
replaced = (newTextareaText != textarea.value);
}
if (replaced) {
@@ -230,14 +245,22 @@ function tryToRemoveExtraNetworkFromPrompt(textarea, text) {
return false;
}
function cardClicked(tabname, textToAdd, allowNegativePrompt) {
var textarea = allowNegativePrompt ? activePromptTextarea[tabname] : gradioApp().querySelector("#" + tabname + "_prompt > label > textarea");
if (!tryToRemoveExtraNetworkFromPrompt(textarea, textToAdd)) {
textarea.value = textarea.value + opts.extra_networks_add_text_separator + textToAdd;
function updatePromptArea(text, textArea, isNeg) {
if (!tryToRemoveExtraNetworkFromPrompt(textArea, text, isNeg)) {
textArea.value = textArea.value + opts.extra_networks_add_text_separator + text;
}
updateInput(textarea);
updateInput(textArea);
}
function cardClicked(tabname, textToAdd, textToAddNegative, allowNegativePrompt) {
if (textToAddNegative.length > 0) {
updatePromptArea(textToAdd, gradioApp().querySelector("#" + tabname + "_prompt > label > textarea"));
updatePromptArea(textToAddNegative, gradioApp().querySelector("#" + tabname + "_neg_prompt > label > textarea"), true);
} else {
var textarea = allowNegativePrompt ? activePromptTextarea[tabname] : gradioApp().querySelector("#" + tabname + "_prompt > label > textarea");
updatePromptArea(textToAdd, textarea);
}
}
function saveCardPreview(event, tabname, filename) {
@@ -253,8 +276,8 @@ function saveCardPreview(event, tabname, filename) {
event.preventDefault();
}
function extraNetworksSearchButton(tabs_id, event) {
var searchTextarea = gradioApp().querySelector("#" + tabs_id + ' > label > textarea');
function extraNetworksSearchButton(tabname, extra_networks_tabname, event) {
var searchTextarea = gradioApp().querySelector("#" + tabname + "_" + extra_networks_tabname + "_extra_search");
var button = event.target;
var text = button.classList.contains("search-all") ? "" : button.textContent.trim();
@@ -262,6 +285,187 @@ function extraNetworksSearchButton(tabs_id, event) {
updateInput(searchTextarea);
}
function extraNetworksTreeProcessFileClick(event, btn, tabname, extra_networks_tabname) {
/**
* Processes `onclick` events when user clicks on files in tree.
*
* @param event The generated event.
* @param btn The clicked `tree-list-item` button.
* @param tabname The name of the active tab in the sd webui. Ex: txt2img, img2img, etc.
* @param extra_networks_tabname The id of the active extraNetworks tab. Ex: lora, checkpoints, etc.
*/
// NOTE: Currently unused.
return;
}
function extraNetworksTreeProcessDirectoryClick(event, btn, tabname, extra_networks_tabname) {
/**
* Processes `onclick` events when user clicks on directories in tree.
*
* Here is how the tree reacts to clicks for various states:
* unselected unopened directory: Directory is selected and expanded.
* unselected opened directory: Directory is selected.
* selected opened directory: Directory is collapsed and deselected.
* chevron is clicked: Directory is expanded or collapsed. Selected state unchanged.
*
* @param event The generated event.
* @param btn The clicked `tree-list-item` button.
* @param tabname The name of the active tab in the sd webui. Ex: txt2img, img2img, etc.
* @param extra_networks_tabname The id of the active extraNetworks tab. Ex: lora, checkpoints, etc.
*/
var ul = btn.nextElementSibling;
// This is the actual target that the user clicked on within the target button.
// We use this to detect if the chevron was clicked.
var true_targ = event.target;
function _expand_or_collapse(_ul, _btn) {
// Expands <ul> if it is collapsed, collapses otherwise. Updates button attributes.
if (_ul.hasAttribute("hidden")) {
_ul.removeAttribute("hidden");
_btn.dataset.expanded = "";
} else {
_ul.setAttribute("hidden", "");
delete _btn.dataset.expanded;
}
}
function _remove_selected_from_all() {
// Removes the `selected` attribute from all buttons.
var sels = document.querySelectorAll("div.tree-list-content");
[...sels].forEach(el => {
delete el.dataset.selected;
});
}
function _select_button(_btn) {
// Removes `data-selected` attribute from all buttons then adds to passed button.
_remove_selected_from_all();
_btn.dataset.selected = "";
}
function _update_search(_tabname, _extra_networks_tabname, _search_text) {
// Update search input with select button's path.
var search_input_elem = gradioApp().querySelector("#" + tabname + "_" + extra_networks_tabname + "_extra_search");
search_input_elem.value = _search_text;
updateInput(search_input_elem);
}
// If user clicks on the chevron, then we do not select the folder.
if (true_targ.matches(".tree-list-item-action--leading, .tree-list-item-action-chevron")) {
_expand_or_collapse(ul, btn);
} else {
// User clicked anywhere else on the button.
if ("selected" in btn.dataset && !(ul.hasAttribute("hidden"))) {
// If folder is select and open, collapse and deselect button.
_expand_or_collapse(ul, btn);
delete btn.dataset.selected;
_update_search(tabname, extra_networks_tabname, "");
} else if (!(!("selected" in btn.dataset) && !(ul.hasAttribute("hidden")))) {
// If the folder is closed, expand and select it. The double-negated condition
// reads as `selected || hidden`, i.e. every case except open-and-unselected.
_expand_or_collapse(ul, btn);
_select_button(btn, tabname, extra_networks_tabname);
_update_search(tabname, extra_networks_tabname, btn.dataset.path);
} else {
// All other cases, just select the button.
_select_button(btn, tabname, extra_networks_tabname);
_update_search(tabname, extra_networks_tabname, btn.dataset.path);
}
}
}
function extraNetworksTreeOnClick(event, tabname, extra_networks_tabname) {
/**
* Handles `onclick` events for buttons within an `extra-network-tree .tree-list--tree`.
*
* Determines whether the clicked button in the tree is for a file entry or a directory
* then calls the appropriate function.
*
* @param event The generated event.
* @param tabname The name of the active tab in the sd webui. Ex: txt2img, img2img, etc.
* @param extra_networks_tabname The id of the active extraNetworks tab. Ex: lora, checkpoints, etc.
*/
var btn = event.currentTarget;
var par = btn.parentElement;
if (par.dataset.treeEntryType === "file") {
extraNetworksTreeProcessFileClick(event, btn, tabname, extra_networks_tabname);
} else {
extraNetworksTreeProcessDirectoryClick(event, btn, tabname, extra_networks_tabname);
}
}
function extraNetworksControlSortOnClick(event, tabname, extra_networks_tabname) {
/** Handles `onclick` events for Sort Mode buttons. */
var self = event.currentTarget;
var parent = event.currentTarget.parentElement;
parent.querySelectorAll('.extra-network-control--sort').forEach(function(x) {
x.classList.remove('extra-network-control--enabled');
});
self.classList.add('extra-network-control--enabled');
applyExtraNetworkSort(tabname + "_" + extra_networks_tabname);
}
function extraNetworksControlSortDirOnClick(event, tabname, extra_networks_tabname) {
/**
* Handles `onclick` events for the Sort Direction button.
*
* Modifies the data attributes of the Sort Direction button to cycle between
* ascending and descending sort directions.
*
* @param event The generated event.
* @param tabname The name of the active tab in the sd webui. Ex: txt2img, img2img, etc.
* @param extra_networks_tabname The id of the active extraNetworks tab. Ex: lora, checkpoints, etc.
*/
if (event.currentTarget.dataset.sortdir == "Ascending") {
event.currentTarget.dataset.sortdir = "Descending";
event.currentTarget.setAttribute("title", "Sort descending");
} else {
event.currentTarget.dataset.sortdir = "Ascending";
event.currentTarget.setAttribute("title", "Sort ascending");
}
applyExtraNetworkSort(tabname + "_" + extra_networks_tabname);
}
function extraNetworksControlTreeViewOnClick(event, tabname, extra_networks_tabname) {
/**
* Handles `onclick` events for the Tree View button.
*
* Toggles the tree view in the extra networks pane.
*
* @param event The generated event.
* @param tabname The name of the active tab in the sd webui. Ex: txt2img, img2img, etc.
* @param extra_networks_tabname The id of the active extraNetworks tab. Ex: lora, checkpoints, etc.
*/
var button = event.currentTarget;
button.classList.toggle("extra-network-control--enabled");
var show = !button.classList.contains("extra-network-control--enabled");
var pane = gradioApp().getElementById(tabname + "_" + extra_networks_tabname + "_pane");
pane.classList.toggle("extra-network-dirs-hidden", show);
}
function extraNetworksControlRefreshOnClick(event, tabname, extra_networks_tabname) {
/**
* Handles `onclick` events for the Refresh Page button.
*
* In order to actually call the python functions in `ui_extra_networks.py`
* to refresh the page, we created an empty gradio button in that file with an
* event handler that refreshes the page. This function simply raises a
* `click` event on that button.
*
* @param event The generated event.
* @param tabname The name of the active tab in the sd webui. Ex: txt2img, img2img, etc.
* @param extra_networks_tabname The id of the active extraNetworks tab. Ex: lora, checkpoints, etc.
*/
var btn_refresh_internal = gradioApp().getElementById(tabname + "_" + extra_networks_tabname + "_extra_refresh_internal");
btn_refresh_internal.dispatchEvent(new Event("click"));
}
var globalPopup = null;
var globalPopupInner = null;
@@ -303,12 +507,76 @@ function popupId(id) {
popup(storedPopupIds[id]);
}
function extraNetworksFlattenMetadata(obj) {
const result = {};
// Convert any stringified JSON objects to actual objects
for (const key of Object.keys(obj)) {
if (typeof obj[key] === 'string') {
try {
const parsed = JSON.parse(obj[key]);
if (parsed && typeof parsed === 'object') {
obj[key] = parsed;
}
} catch (error) {
continue;
}
}
}
// Flatten the object
for (const key of Object.keys(obj)) {
if (typeof obj[key] === 'object' && obj[key] !== null) {
const nested = extraNetworksFlattenMetadata(obj[key]);
for (const nestedKey of Object.keys(nested)) {
result[`${key}/${nestedKey}`] = nested[nestedKey];
}
} else {
result[key] = obj[key];
}
}
// Special case for handling modelspec keys
for (const key of Object.keys(result)) {
if (key.startsWith("modelspec.")) {
result[key.replaceAll(".", "/")] = result[key];
delete result[key];
}
}
// Add empty keys to designate hierarchy
for (const key of Object.keys(result)) {
const parts = key.split("/");
for (let i = 1; i < parts.length; i++) {
const parent = parts.slice(0, i).join("/");
if (!result[parent]) {
result[parent] = "";
}
}
}
return result;
}
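To make the flattening concrete, a hypothetical input/output pair (key names invented for illustration):
// extraNetworksFlattenMetadata({"modelspec.title": "v1", "training": '{"steps": 1000}'})
// returns:
// {"modelspec/title": "v1", "training/steps": 1000, "modelspec": "", "training": ""}
// The stringified JSON is parsed, nested keys are joined with "/", "modelspec."
// keys have their dots converted to slashes, and empty parent keys mark the hierarchy.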
function extraNetworksShowMetadata(text) {
try {
let parsed = JSON.parse(text);
if (parsed && typeof parsed === 'object') {
parsed = extraNetworksFlattenMetadata(parsed);
const table = createVisualizationTable(parsed, 0);
popup(table);
return;
}
} catch (error) {
console.error(error);
}
var elem = document.createElement('pre');
elem.classList.add('popup-metadata');
elem.textContent = text;
popup(elem);
return;
}
function requestGet(url, data, handler, errorHandler) {
@@ -337,11 +605,18 @@ function requestGet(url, data, handler, errorHandler) {
xhr.send(js);
}
function extraNetworksRequestMetadata(event, extraPage, cardName) {
function extraNetworksCopyCardPath(event) {
navigator.clipboard.writeText(event.target.getAttribute("data-clipboard-text"));
event.stopPropagation();
}
function extraNetworksRequestMetadata(event, extraPage) {
var showError = function() {
extraNetworksShowMetadata("there was an error getting metadata");
};
var cardName = event.target.parentElement.parentElement.getAttribute("data-name");
requestGet("./sd_extra_networks/metadata", {page: extraPage, item: cardName}, function(data) {
if (data && data.metadata) {
extraNetworksShowMetadata(data.metadata);
@@ -355,7 +630,7 @@ function extraNetworksRequestMetadata(event, extraPage, cardName) {
var extraPageUserMetadataEditors = {};
function extraNetworksEditUserMetadata(event, tabname, extraPage, cardName) {
function extraNetworksEditUserMetadata(event, tabname, extraPage) {
var id = tabname + '_' + extraPage + '_edit_user_metadata';
var editor = extraPageUserMetadataEditors[id];
@@ -367,6 +642,7 @@ function extraNetworksEditUserMetadata(event, tabname, extraPage, cardName) {
extraPageUserMetadataEditors[id] = editor;
}
var cardName = event.target.parentElement.parentElement.getAttribute("data-name");
editor.nameTextarea.value = cardName;
updateInput(editor.nameTextarea);
@@ -398,3 +674,39 @@ window.addEventListener("keydown", function(event) {
closePopup();
}
});
/**
* Setup custom loading for this script.
* We need to wait for all of our HTML to be generated in the extra networks tabs
* before we can actually run the `setupExtraNetworks` function.
* The `onUiLoaded` function actually runs before all of our extra network tabs are
* finished generating. Thus we needed this new method.
*
*/
var uiAfterScriptsCallbacks = [];
var uiAfterScriptsTimeout = null;
var executedAfterScripts = false;
function scheduleAfterScriptsCallbacks() {
clearTimeout(uiAfterScriptsTimeout);
uiAfterScriptsTimeout = setTimeout(function() {
executeCallbacks(uiAfterScriptsCallbacks);
}, 200);
}
onUiLoaded(function() {
var mutationObserver = new MutationObserver(function(m) {
let existingSearchfields = gradioApp().querySelectorAll("[id$='_extra_search']").length;
let neededSearchfields = gradioApp().querySelectorAll("[id$='_extra_tabs'] > .tab-nav > button").length - 2;
if (!executedAfterScripts && existingSearchfields >= neededSearchfields) {
mutationObserver.disconnect();
executedAfterScripts = true;
scheduleAfterScriptsCallbacks();
}
});
mutationObserver.observe(gradioApp(), {childList: true, subtree: true});
});
uiAfterScriptsCallbacks.push(setupExtraNetworks);
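Since `uiAfterScriptsCallbacks` is an ordinary array drained by `executeCallbacks`, other scripts that need the fully generated extra networks DOM can presumably register the same way; a hedged sketch:
uiAfterScriptsCallbacks.push(function() {
    // Runs once, after every per-tab search field exists in the DOM.
    console.log("extra networks tabs are ready");
});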
+4 -8
@@ -131,19 +131,15 @@ function setupImageForLightbox(e) {
e.style.cursor = 'pointer';
e.style.userSelect = 'none';
var isFirefox = navigator.userAgent.toLowerCase().indexOf('firefox') > -1;
// For Firefox, listening on click first switches to the next image and then shows the lightbox.
// If you know how to fix this without switching to the mousedown event, please do.
// For other browsers the event is click, to make it possible to drag the picture.
var event = isFirefox ? 'mousedown' : 'click';
e.addEventListener(event, function(evt) {
e.addEventListener('mousedown', function(evt) {
if (evt.button == 1) {
open(evt.target.src);
evt.preventDefault();
return;
}
}, true);
e.addEventListener('click', function(evt) {
if (!opts.js_modal_lightbox || evt.button != 0) return;
modalZoomSet(gradioApp().getElementById('modalImage'), opts.js_modal_lightbox_initially_zoomed);
+113 -92
@@ -33,120 +33,141 @@ function createRow(table, cellName, items) {
return res;
}
function showProfile(path, cutoff = 0.05) {
requestGet(path, {}, function(data) {
var table = document.createElement('table');
table.className = 'popup-table';
function createVisualizationTable(data, cutoff = 0, sort = "") {
var table = document.createElement('table');
table.className = 'popup-table';
data.records['total'] = data.total;
var keys = Object.keys(data.records).sort(function(a, b) {
return data.records[b] - data.records[a];
var keys = Object.keys(data);
if (sort === "number") {
keys = keys.sort(function(a, b) {
return data[b] - data[a];
});
var items = keys.map(function(x) {
return {key: x, parts: x.split('/'), time: data.records[x]};
} else {
keys = keys.sort();
}
var items = keys.map(function(x) {
return {key: x, parts: x.split('/'), value: data[x]};
});
var maxLength = items.reduce(function(a, b) {
return Math.max(a, b.parts.length);
}, 0);
var cols = createRow(
table,
'th',
[
cutoff === 0 ? 'key' : 'record',
cutoff === 0 ? 'value' : 'seconds'
]
);
cols[0].colSpan = maxLength;
function arraysEqual(a, b) {
return !(a < b || b < a);
}
var addLevel = function(level, parent, hide) {
var matching = items.filter(function(x) {
return x.parts[level] && !x.parts[level + 1] && arraysEqual(x.parts.slice(0, level), parent);
});
var maxLength = items.reduce(function(a, b) {
return Math.max(a, b.parts.length);
}, 0);
var cols = createRow(table, 'th', ['record', 'seconds']);
cols[0].colSpan = maxLength;
function arraysEqual(a, b) {
return !(a < b || b < a);
if (sort === "number") {
matching = matching.sort(function(a, b) {
return b.value - a.value;
});
} else {
matching = matching.sort();
}
var othersTime = 0;
var othersList = [];
var othersRows = [];
var childrenRows = [];
matching.forEach(function(x) {
var visible = (cutoff === 0 && !hide) || (x.value >= cutoff && !hide);
var addLevel = function(level, parent, hide) {
var matching = items.filter(function(x) {
return x.parts[level] && !x.parts[level + 1] && arraysEqual(x.parts.slice(0, level), parent);
});
var sorted = matching.sort(function(a, b) {
return b.time - a.time;
});
var othersTime = 0;
var othersList = [];
var othersRows = [];
var childrenRows = [];
sorted.forEach(function(x) {
var visible = x.time >= cutoff && !hide;
var cells = [];
for (var i = 0; i < maxLength; i++) {
cells.push(x.parts[i]);
}
cells.push(cutoff === 0 ? x.value : x.value.toFixed(3));
var cols = createRow(table, 'td', cells);
for (i = 0; i < level; i++) {
cols[i].className = 'muted';
}
var cells = [];
for (var i = 0; i < maxLength; i++) {
cells.push(x.parts[i]);
}
cells.push(x.time.toFixed(3));
var cols = createRow(table, 'td', cells);
for (i = 0; i < level; i++) {
cols[i].className = 'muted';
}
var tr = cols[0].parentNode;
if (!visible) {
tr.classList.add("hidden");
}
var tr = cols[0].parentNode;
if (!visible) {
tr.classList.add("hidden");
}
if (x.time >= cutoff) {
childrenRows.push(tr);
} else {
othersTime += x.time;
othersList.push(x.parts[level]);
othersRows.push(tr);
}
var children = addLevel(level + 1, parent.concat([x.parts[level]]), true);
if (children.length > 0) {
var cell = cols[level];
var onclick = function() {
cell.classList.remove("link");
cell.removeEventListener("click", onclick);
children.forEach(function(x) {
x.classList.remove("hidden");
});
};
cell.classList.add("link");
cell.addEventListener("click", onclick);
}
});
if (othersTime > 0) {
var cells = [];
for (var i = 0; i < maxLength; i++) {
cells.push(parent[i]);
}
cells.push(othersTime.toFixed(3));
cells[level] = 'others';
var cols = createRow(table, 'td', cells);
for (i = 0; i < level; i++) {
cols[i].className = 'muted';
}
if (cutoff === 0 || x.value >= cutoff) {
childrenRows.push(tr);
} else {
othersTime += x.value;
othersList.push(x.parts[level]);
othersRows.push(tr);
}
var children = addLevel(level + 1, parent.concat([x.parts[level]]), true);
if (children.length > 0) {
var cell = cols[level];
var tr = cell.parentNode;
var onclick = function() {
tr.classList.add("hidden");
cell.classList.remove("link");
cell.removeEventListener("click", onclick);
othersRows.forEach(function(x) {
children.forEach(function(x) {
x.classList.remove("hidden");
});
};
cell.title = othersList.join(", ");
cell.classList.add("link");
cell.addEventListener("click", onclick);
}
});
if (hide) {
tr.classList.add("hidden");
}
childrenRows.push(tr);
if (othersTime > 0) {
var cells = [];
for (var i = 0; i < maxLength; i++) {
cells.push(parent[i]);
}
cells.push(othersTime.toFixed(3));
cells[level] = 'others';
var cols = createRow(table, 'td', cells);
for (i = 0; i < level; i++) {
cols[i].className = 'muted';
}
return childrenRows;
};
var cell = cols[level];
var tr = cell.parentNode;
var onclick = function() {
tr.classList.add("hidden");
cell.classList.remove("link");
cell.removeEventListener("click", onclick);
othersRows.forEach(function(x) {
x.classList.remove("hidden");
});
};
addLevel(0, []);
cell.title = othersList.join(", ");
cell.classList.add("link");
cell.addEventListener("click", onclick);
if (hide) {
tr.classList.add("hidden");
}
childrenRows.push(tr);
}
return childrenRows;
};
addLevel(0, []);
return table;
}
function showProfile(path, cutoff = 0.05) {
requestGet(path, {}, function(data) {
data.records['total'] = data.total;
const table = createVisualizationTable(data.records, cutoff, "number");
popup(table);
});
}
+8 -1
@@ -45,8 +45,15 @@ function formatTime(secs) {
}
}
var originalAppTitle = undefined;
onUiLoaded(function() {
originalAppTitle = document.title;
});
function setTitle(progress) {
var title = 'Stable Diffusion';
var title = originalAppTitle;
if (opts.show_progress_in_title && progress) {
title = '[' + progress.trim() + '] ' + title;
+112 -48
@@ -1,8 +1,8 @@
(function() {
const GRADIO_MIN_WIDTH = 320;
const GRID_TEMPLATE_COLUMNS = '1fr 16px 1fr';
const PAD = 16;
const DEBOUNCE_TIME = 100;
const DOUBLE_TAP_DELAY = 200; //ms
const R = {
tracking: false,
@@ -11,6 +11,7 @@
leftCol: null,
leftColStartWidth: null,
screenX: null,
lastTapTime: null,
};
let resizeTimer;
@@ -21,30 +22,29 @@
}
function displayResizeHandle(parent) {
if (!parent.needHideOnMoblie) {
return true;
}
if (window.innerWidth < GRADIO_MIN_WIDTH * 2 + PAD * 4) {
parent.style.display = 'flex';
if (R.handle != null) {
R.handle.style.opacity = '0';
}
parent.resizeHandle.style.display = "none";
return false;
} else {
parent.style.display = 'grid';
if (R.handle != null) {
R.handle.style.opacity = '100';
}
parent.resizeHandle.style.display = "block";
return true;
}
}
function afterResize(parent) {
if (displayResizeHandle(parent) && parent.style.gridTemplateColumns != GRID_TEMPLATE_COLUMNS) {
if (displayResizeHandle(parent) && parent.style.gridTemplateColumns != parent.style.originalGridTemplateColumns) {
const oldParentWidth = R.parentWidth;
const newParentWidth = parent.offsetWidth;
const widthL = parseInt(parent.style.gridTemplateColumns.split(' ')[0]);
const ratio = newParentWidth / oldParentWidth;
const newWidthL = Math.max(Math.floor(ratio * widthL), GRADIO_MIN_WIDTH);
const newWidthL = Math.max(Math.floor(ratio * widthL), parent.minLeftColWidth);
setLeftColGridTemplate(parent, newWidthL);
R.parentWidth = newParentWidth;
@@ -52,6 +52,14 @@
}
function setup(parent) {
function onDoubleClick(evt) {
evt.preventDefault();
evt.stopPropagation();
parent.style.gridTemplateColumns = parent.style.originalGridTemplateColumns;
}
const leftCol = parent.firstElementChild;
const rightCol = parent.lastElementChild;
@@ -59,63 +67,114 @@
parent.style.display = 'grid';
parent.style.gap = '0';
parent.style.gridTemplateColumns = GRID_TEMPLATE_COLUMNS;
let leftColTemplate = "";
if (parent.children[0].style.flexGrow) {
leftColTemplate = `${parent.children[0].style.flexGrow}fr`;
parent.minLeftColWidth = GRADIO_MIN_WIDTH;
parent.minRightColWidth = GRADIO_MIN_WIDTH;
parent.needHideOnMoblie = true;
} else {
leftColTemplate = parent.children[0].style.flexBasis;
parent.minLeftColWidth = parent.children[0].style.flexBasis.slice(0, -2) / 2;
parent.minRightColWidth = 0;
parent.needHideOnMoblie = false;
}
if (!leftColTemplate) {
leftColTemplate = '1fr';
}
const gridTemplateColumns = `${leftColTemplate} ${PAD}px ${parent.children[1].style.flexGrow}fr`;
parent.style.gridTemplateColumns = gridTemplateColumns;
parent.style.originalGridTemplateColumns = gridTemplateColumns;
const resizeHandle = document.createElement('div');
resizeHandle.classList.add('resize-handle');
parent.insertBefore(resizeHandle, rightCol);
parent.resizeHandle = resizeHandle;
resizeHandle.addEventListener('mousedown', (evt) => {
if (evt.button !== 0) return;
['mousedown', 'touchstart'].forEach((eventType) => {
resizeHandle.addEventListener(eventType, (evt) => {
if (eventType.startsWith('mouse')) {
if (evt.button !== 0) return;
} else {
if (evt.changedTouches.length !== 1) return;
evt.preventDefault();
evt.stopPropagation();
const currentTime = new Date().getTime();
if (R.lastTapTime && currentTime - R.lastTapTime <= DOUBLE_TAP_DELAY) {
onDoubleClick(evt);
return;
}
document.body.classList.add('resizing');
R.lastTapTime = currentTime;
}
R.tracking = true;
R.parent = parent;
R.parentWidth = parent.offsetWidth;
R.handle = resizeHandle;
R.leftCol = leftCol;
R.leftColStartWidth = leftCol.offsetWidth;
R.screenX = evt.screenX;
evt.preventDefault();
evt.stopPropagation();
document.body.classList.add('resizing');
R.tracking = true;
R.parent = parent;
R.parentWidth = parent.offsetWidth;
R.leftCol = leftCol;
R.leftColStartWidth = leftCol.offsetWidth;
if (eventType.startsWith('mouse')) {
R.screenX = evt.screenX;
} else {
R.screenX = evt.changedTouches[0].screenX;
}
});
});
resizeHandle.addEventListener('dblclick', (evt) => {
evt.preventDefault();
evt.stopPropagation();
parent.style.gridTemplateColumns = GRID_TEMPLATE_COLUMNS;
});
resizeHandle.addEventListener('dblclick', onDoubleClick);
afterResize(parent);
}
window.addEventListener('mousemove', (evt) => {
if (evt.button !== 0) return;
['mousemove', 'touchmove'].forEach((eventType) => {
window.addEventListener(eventType, (evt) => {
if (eventType.startsWith('mouse')) {
if (evt.button !== 0) return;
} else {
if (evt.changedTouches.length !== 1) return;
}
if (R.tracking) {
evt.preventDefault();
evt.stopPropagation();
if (R.tracking) {
if (eventType.startsWith('mouse')) {
evt.preventDefault();
}
evt.stopPropagation();
const delta = R.screenX - evt.screenX;
const leftColWidth = Math.max(Math.min(R.leftColStartWidth - delta, R.parent.offsetWidth - GRADIO_MIN_WIDTH - PAD), GRADIO_MIN_WIDTH);
setLeftColGridTemplate(R.parent, leftColWidth);
}
let delta = 0;
if (eventType.startsWith('mouse')) {
delta = R.screenX - evt.screenX;
} else {
delta = R.screenX - evt.changedTouches[0].screenX;
}
const leftColWidth = Math.max(Math.min(R.leftColStartWidth - delta, R.parent.offsetWidth - R.parent.minRightColWidth - PAD), R.parent.minLeftColWidth);
setLeftColGridTemplate(R.parent, leftColWidth);
}
});
});
window.addEventListener('mouseup', (evt) => {
if (evt.button !== 0) return;
['mouseup', 'touchend'].forEach((eventType) => {
window.addEventListener(eventType, (evt) => {
if (eventType.startsWith('mouse')) {
if (evt.button !== 0) return;
} else {
if (evt.changedTouches.length !== 1) return;
}
if (R.tracking) {
evt.preventDefault();
evt.stopPropagation();
if (R.tracking) {
evt.preventDefault();
evt.stopPropagation();
R.tracking = false;
R.tracking = false;
document.body.classList.remove('resizing');
}
document.body.classList.remove('resizing');
}
});
});
@@ -132,10 +191,15 @@
setupResizeHandle = setup;
})();
onUiLoaded(function() {
function setupAllResizeHandles() {
for (var elem of gradioApp().querySelectorAll('.resize-handle-row')) {
if (!elem.querySelector('.resize-handle')) {
if (!elem.querySelector('.resize-handle') && !elem.children[0].classList.contains("hidden")) {
setupResizeHandle(elem);
}
}
});
}
onUiLoaded(setupAllResizeHandles);
+2 -2
@@ -55,8 +55,8 @@ onOptionsChanged(function() {
});
opts._categories.forEach(function(x) {
var section = x[0];
var category = x[1];
var section = localization[x[0]] ?? x[0];
var category = localization[x[1]] ?? x[1];
var span = document.createElement('SPAN');
span.textContent = category;
+23 -11
@@ -48,11 +48,6 @@ function setupTokenCounting(id, id_counter, id_button) {
var counter = gradioApp().getElementById(id_counter);
var textarea = gradioApp().querySelector(`#${id} > label > textarea`);
if (opts.disable_token_counters) {
counter.style.display = "none";
return;
}
if (counter.parentElement == prompt.parentElement) {
return;
}
@@ -61,15 +56,32 @@ function setupTokenCounting(id, id_counter, id_button) {
prompt.parentElement.style.position = "relative";
var func = onEdit(id, textarea, 800, function() {
gradioApp().getElementById(id_button)?.click();
if (counter.classList.contains("token-counter-visible")) {
gradioApp().getElementById(id_button)?.click();
}
});
promptTokenCountUpdateFunctions[id] = func;
promptTokenCountUpdateFunctions[id_button] = func;
}
function setupTokenCounters() {
setupTokenCounting('txt2img_prompt', 'txt2img_token_counter', 'txt2img_token_button');
setupTokenCounting('txt2img_neg_prompt', 'txt2img_negative_token_counter', 'txt2img_negative_token_button');
setupTokenCounting('img2img_prompt', 'img2img_token_counter', 'img2img_token_button');
setupTokenCounting('img2img_neg_prompt', 'img2img_negative_token_counter', 'img2img_negative_token_button');
function toggleTokenCountingVisibility(id, id_counter, id_button) {
var counter = gradioApp().getElementById(id_counter);
counter.style.display = opts.disable_token_counters ? "none" : "block";
counter.classList.toggle("token-counter-visible", !opts.disable_token_counters);
}
function runCodeForTokenCounters(fun) {
fun('txt2img_prompt', 'txt2img_token_counter', 'txt2img_token_button');
fun('txt2img_neg_prompt', 'txt2img_negative_token_counter', 'txt2img_negative_token_button');
fun('img2img_prompt', 'img2img_token_counter', 'img2img_token_button');
fun('img2img_neg_prompt', 'img2img_negative_token_counter', 'img2img_negative_token_button');
}
onUiLoaded(function() {
runCodeForTokenCounters(setupTokenCounting);
});
onOptionsChanged(function() {
runCodeForTokenCounters(toggleTokenCountingVisibility);
});
+23 -7
@@ -119,16 +119,24 @@ function create_submit_args(args) {
return res;
}
function setSubmitButtonsVisibility(tabname, showInterrupt, showSkip, showInterrupting) {
gradioApp().getElementById(tabname + '_interrupt').style.display = showInterrupt ? "block" : "none";
gradioApp().getElementById(tabname + '_skip').style.display = showSkip ? "block" : "none";
gradioApp().getElementById(tabname + '_interrupting').style.display = showInterrupting ? "block" : "none";
}
function showSubmitButtons(tabname, show) {
gradioApp().getElementById(tabname + '_interrupt').style.display = show ? "none" : "block";
gradioApp().getElementById(tabname + '_skip').style.display = show ? "none" : "block";
setSubmitButtonsVisibility(tabname, !show, !show, false);
}
function showSubmitInterruptingPlaceholder(tabname) {
setSubmitButtonsVisibility(tabname, false, true, true);
}
function showRestoreProgressButton(tabname, show) {
var button = gradioApp().getElementById(tabname + "_restore_progress");
if (!button) return;
button.style.display = show ? "flex" : "none";
button.style.setProperty('display', show ? 'flex' : 'none', 'important');
}
function submit() {
@@ -150,6 +158,14 @@ function submit() {
return res;
}
function submit_txt2img_upscale() {
var res = submit(...arguments);
res[2] = selected_gallery_index();
return res;
}
function submit_img2img() {
showSubmitButtons('img2img', false);
@@ -192,6 +208,7 @@ function restoreProgressTxt2img() {
var id = localGet("txt2img_task_id");
if (id) {
showSubmitInterruptingPlaceholder('txt2img');
requestProgress(id, gradioApp().getElementById('txt2img_gallery_container'), gradioApp().getElementById('txt2img_gallery'), function() {
showSubmitButtons('txt2img', true);
}, null, 0);
@@ -206,6 +223,7 @@ function restoreProgressImg2img() {
var id = localGet("img2img_task_id");
if (id) {
showSubmitInterruptingPlaceholder('img2img');
requestProgress(id, gradioApp().getElementById('img2img_gallery_container'), gradioApp().getElementById('img2img_gallery'), function() {
showSubmitButtons('img2img', true);
}, null, 0);
@@ -302,8 +320,6 @@ onAfterUiUpdate(function() {
});
json_elem.parentElement.style.display = "none";
setupTokenCounters();
});
onOptionsChanged(function() {
@@ -396,7 +412,7 @@ function switchWidthHeight(tabname) {
var onEditTimers = {};
// calls func after afterMs milliseconds has passed since the input elem has beed enited by user
// calls func after afterMs milliseconds has passed since the input elem has been edited by user
function onEdit(editId, elem, afterMs, func) {
var edited = function() {
var existingTimer = onEditTimers[editId];
+150 -21
@@ -17,13 +17,13 @@ from fastapi.encoders import jsonable_encoder
from secrets import compare_digest
import modules.shared as shared
from modules import sd_samplers, deepbooru, sd_hijack, images, scripts, ui, postprocessing, errors, restart, shared_items, script_callbacks, generation_parameters_copypaste, sd_models
from modules import sd_samplers, deepbooru, sd_hijack, images, scripts, ui, postprocessing, errors, restart, shared_items, script_callbacks, infotext_utils, sd_models, sd_schedulers
from modules.api import models
from modules.shared import opts
from modules.processing import StableDiffusionProcessingTxt2Img, StableDiffusionProcessingImg2Img, process_images
from modules.textual_inversion.textual_inversion import create_embedding, train_embedding
from modules.hypernetworks.hypernetwork import create_hypernetwork, train_hypernetwork
from PIL import PngImagePlugin, Image
from PIL import PngImagePlugin
from modules.sd_models_config import find_checkpoint_config_near_filename
from modules.realesrgan_model import get_realesrgan_models
from modules import devices
@@ -31,7 +31,7 @@ from typing import Any
import piexif
import piexif.helper
from contextlib import closing
from modules.progress import create_task_id, add_task_to_queue, start_task, finish_task, current_task
def script_name_to_index(name, scripts):
try:
@@ -85,7 +85,7 @@ def decode_base64_to_image(encoding):
headers = {'user-agent': opts.api_useragent} if opts.api_useragent else {}
response = requests.get(encoding, timeout=30, headers=headers)
try:
image = Image.open(BytesIO(response.content))
image = images.read(BytesIO(response.content))
return image
except Exception as e:
raise HTTPException(status_code=500, detail="Invalid image url") from e
@@ -93,7 +93,7 @@ def decode_base64_to_image(encoding):
if encoding.startswith("data:image/"):
encoding = encoding.split(";")[1].split(",")[1]
try:
image = Image.open(BytesIO(base64.b64decode(encoding)))
image = images.read(BytesIO(base64.b64decode(encoding)))
return image
except Exception as e:
raise HTTPException(status_code=500, detail="Invalid encoded image") from e
@@ -221,6 +221,7 @@ class Api:
self.add_api_route("/sdapi/v1/options", self.set_config, methods=["POST"])
self.add_api_route("/sdapi/v1/cmd-flags", self.get_cmd_flags, methods=["GET"], response_model=models.FlagsModel)
self.add_api_route("/sdapi/v1/samplers", self.get_samplers, methods=["GET"], response_model=list[models.SamplerItem])
self.add_api_route("/sdapi/v1/schedulers", self.get_schedulers, methods=["GET"], response_model=list[models.SchedulerItem])
self.add_api_route("/sdapi/v1/upscalers", self.get_upscalers, methods=["GET"], response_model=list[models.UpscalerItem])
self.add_api_route("/sdapi/v1/latent-upscale-modes", self.get_latent_upscale_modes, methods=["GET"], response_model=list[models.LatentUpscalerModeItem])
self.add_api_route("/sdapi/v1/sd-models", self.get_sd_models, methods=["GET"], response_model=list[models.SDModelItem])
@@ -230,6 +231,7 @@ class Api:
self.add_api_route("/sdapi/v1/realesrgan-models", self.get_realesrgan_models, methods=["GET"], response_model=list[models.RealesrganItem])
self.add_api_route("/sdapi/v1/prompt-styles", self.get_prompt_styles, methods=["GET"], response_model=list[models.PromptStyleItem])
self.add_api_route("/sdapi/v1/embeddings", self.get_embeddings, methods=["GET"], response_model=models.EmbeddingsResponse)
self.add_api_route("/sdapi/v1/refresh-embeddings", self.refresh_embeddings, methods=["POST"])
self.add_api_route("/sdapi/v1/refresh-checkpoints", self.refresh_checkpoints, methods=["POST"])
self.add_api_route("/sdapi/v1/refresh-vae", self.refresh_vae, methods=["POST"])
self.add_api_route("/sdapi/v1/create/embedding", self.create_embedding, methods=["POST"], response_model=models.CreateResponse)
@@ -251,6 +253,24 @@ class Api:
self.default_script_arg_txt2img = []
self.default_script_arg_img2img = []
txt2img_script_runner = scripts.scripts_txt2img
img2img_script_runner = scripts.scripts_img2img
if not txt2img_script_runner.scripts or not img2img_script_runner.scripts:
ui.create_ui()
if not txt2img_script_runner.scripts:
txt2img_script_runner.initialize_scripts(False)
if not self.default_script_arg_txt2img:
self.default_script_arg_txt2img = self.init_default_script_args(txt2img_script_runner)
if not img2img_script_runner.scripts:
img2img_script_runner.initialize_scripts(True)
if not self.default_script_arg_img2img:
self.default_script_arg_img2img = self.init_default_script_args(img2img_script_runner)
def add_api_route(self, path: str, endpoint, **kwargs):
if shared.cmd_opts.api_auth:
return self.app.add_api_route(path, endpoint, dependencies=[Depends(self.auth)], **kwargs)
@@ -312,8 +332,13 @@ class Api:
script_args[script.args_from:script.args_to] = ui_default_values
return script_args
def init_script_args(self, request, default_script_args, selectable_scripts, selectable_idx, script_runner):
def init_script_args(self, request, default_script_args, selectable_scripts, selectable_idx, script_runner, *, input_script_args=None):
script_args = default_script_args.copy()
if input_script_args is not None:
for index, value in input_script_args.items():
script_args[index] = value
# position 0 in script_arg is the idx+1 of the selectable script that is going to be run when using scripts.scripts_*2img.run()
if selectable_scripts:
script_args[selectable_scripts.args_from:selectable_scripts.args_to] = request.script_args
@@ -335,13 +360,83 @@ class Api:
script_args[alwayson_script.args_from + idx] = request.alwayson_scripts[alwayson_script_name]["args"][idx]
return script_args
def apply_infotext(self, request, tabname, *, script_runner=None, mentioned_script_args=None):
"""Processes `infotext` field from the `request`, and sets other fields of the `request` according to what's in infotext.
If request already has a field set, and that field is encountered in infotext too, the value from infotext is ignored.
Additionally, fills `mentioned_script_args` dict with index: value pairs for script arguments read from infotext.
"""
if not request.infotext:
return {}
possible_fields = infotext_utils.paste_fields[tabname]["fields"]
set_fields = request.model_dump(exclude_unset=True) if hasattr(request, "request") else request.dict(exclude_unset=True) # pydantic v1/v2 have different names for this
params = infotext_utils.parse_generation_parameters(request.infotext)
def get_field_value(field, params):
value = field.function(params) if field.function else params.get(field.label)
if value is None:
return None
if field.api in request.__fields__:
target_type = request.__fields__[field.api].type_
else:
target_type = type(field.component.value)
if target_type == type(None):
return None
if isinstance(value, dict) and value.get('__type__') == 'generic_update': # this is a gradio.update rather than a value
value = value.get('value')
if value is not None and not isinstance(value, target_type):
value = target_type(value)
return value
for field in possible_fields:
if not field.api:
continue
if field.api in set_fields:
continue
value = get_field_value(field, params)
if value is not None:
setattr(request, field.api, value)
if request.override_settings is None:
request.override_settings = {}
overridden_settings = infotext_utils.get_override_settings(params)
for _, setting_name, value in overridden_settings:
if setting_name not in request.override_settings:
request.override_settings[setting_name] = value
if script_runner is not None and mentioned_script_args is not None:
indexes = {v: i for i, v in enumerate(script_runner.inputs)}
script_fields = ((field, indexes[field.component]) for field in possible_fields if field.component in indexes)
for field, index in script_fields:
value = get_field_value(field, params)
if value is None:
continue
mentioned_script_args[index] = value
return params
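A hedged client-side sketch of the new `infotext` request field in action, following the precedence rule in the docstring above (explicitly set request fields win over values parsed from infotext; the host/port assume a default local instance started with --api):
var payload = {
    infotext: "a cat\nNegative prompt: blurry\nSteps: 20, Sampler: Euler a, Seed: 1",
    prompt: "a dog" // explicitly set, so the "a cat" prompt from infotext is ignored
};
fetch("http://127.0.0.1:7860/sdapi/v1/txt2img", {
    method: "POST",
    headers: {"Content-Type": "application/json"},
    body: JSON.stringify(payload)
}).then(function(r) { return r.json(); }).then(function(data) {
    console.log(data.info); // steps, sampler and seed were taken from the infotext
});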
def text2imgapi(self, txt2imgreq: models.StableDiffusionTxt2ImgProcessingAPI):
task_id = txt2imgreq.force_task_id or create_task_id("txt2img")
script_runner = scripts.scripts_txt2img
if not script_runner.scripts:
script_runner.initialize_scripts(False)
ui.create_ui()
if not self.default_script_arg_txt2img:
self.default_script_arg_txt2img = self.init_default_script_args(script_runner)
infotext_script_args = {}
self.apply_infotext(txt2imgreq, "txt2img", script_runner=script_runner, mentioned_script_args=infotext_script_args)
selectable_scripts, selectable_script_idx = self.get_selectable_script(txt2imgreq.script_name, script_runner)
populate = txt2imgreq.copy(update={ # Override __init__ params
@@ -356,12 +451,15 @@ class Api:
args.pop('script_name', None)
args.pop('script_args', None) # will refeed them to the pipeline directly after initializing them
args.pop('alwayson_scripts', None)
args.pop('infotext', None)
script_args = self.init_script_args(txt2imgreq, self.default_script_arg_txt2img, selectable_scripts, selectable_script_idx, script_runner)
script_args = self.init_script_args(txt2imgreq, self.default_script_arg_txt2img, selectable_scripts, selectable_script_idx, script_runner, input_script_args=infotext_script_args)
send_images = args.pop('send_images', True)
args.pop('save_images', None)
add_task_to_queue(task_id)
with self.queue_lock:
with closing(StableDiffusionProcessingTxt2Img(sd_model=shared.sd_model, **args)) as p:
p.is_api = True
@@ -371,12 +469,14 @@ class Api:
try:
shared.state.begin(job="scripts_txt2img")
start_task(task_id)
if selectable_scripts is not None:
p.script_args = script_args
processed = scripts.scripts_txt2img.run(p, *p.script_args) # Need to pass args as list here
else:
p.script_args = tuple(script_args) # Need to pass args as tuple here
processed = process_images(p)
finish_task(task_id)
finally:
shared.state.end()
shared.total_tqdm.clear()
@@ -386,6 +486,8 @@ class Api:
return models.TextToImageResponse(images=b64images, parameters=vars(txt2imgreq), info=processed.js())
def img2imgapi(self, img2imgreq: models.StableDiffusionImg2ImgProcessingAPI):
task_id = img2imgreq.force_task_id or create_task_id("img2img")
init_images = img2imgreq.init_images
if init_images is None:
raise HTTPException(status_code=404, detail="Init image not found")
@@ -395,11 +497,10 @@ class Api:
mask = decode_base64_to_image(mask)
script_runner = scripts.scripts_img2img
if not script_runner.scripts:
script_runner.initialize_scripts(True)
ui.create_ui()
if not self.default_script_arg_img2img:
self.default_script_arg_img2img = self.init_default_script_args(script_runner)
infotext_script_args = {}
self.apply_infotext(img2imgreq, "img2img", script_runner=script_runner, mentioned_script_args=infotext_script_args)
selectable_scripts, selectable_script_idx = self.get_selectable_script(img2imgreq.script_name, script_runner)
populate = img2imgreq.copy(update={ # Override __init__ params
@@ -416,12 +517,15 @@ class Api:
args.pop('script_name', None)
args.pop('script_args', None) # will refeed them to the pipeline directly after initializing them
args.pop('alwayson_scripts', None)
args.pop('infotext', None)
script_args = self.init_script_args(img2imgreq, self.default_script_arg_img2img, selectable_scripts, selectable_script_idx, script_runner)
script_args = self.init_script_args(img2imgreq, self.default_script_arg_img2img, selectable_scripts, selectable_script_idx, script_runner, input_script_args=infotext_script_args)
send_images = args.pop('send_images', True)
args.pop('save_images', None)
add_task_to_queue(task_id)
with self.queue_lock:
with closing(StableDiffusionProcessingImg2Img(sd_model=shared.sd_model, **args)) as p:
p.init_images = [decode_base64_to_image(x) for x in init_images]
@@ -432,12 +536,14 @@ class Api:
try:
shared.state.begin(job="scripts_img2img")
start_task(task_id)
if selectable_scripts is not None:
p.script_args = script_args
processed = scripts.scripts_img2img.run(p, *p.script_args) # Need to pass args as list here
else:
p.script_args = tuple(script_args) # Need to pass args as tuple here
processed = process_images(p)
finish_task(task_id)
finally:
shared.state.end()
shared.total_tqdm.clear()
@@ -480,7 +586,7 @@ class Api:
if geninfo is None:
geninfo = ""
params = generation_parameters_copypaste.parse_generation_parameters(geninfo)
params = infotext_utils.parse_generation_parameters(geninfo)
script_callbacks.infotext_pasted_callback(geninfo, params)
return models.PNGInfoResponse(info=geninfo, items=items, parameters=params)
@@ -511,7 +617,7 @@ class Api:
if shared.state.current_image and not req.skip_current_image:
current_image = encode_pil_to_base64(shared.state.current_image)
return models.ProgressResponse(progress=progress, eta_relative=eta_relative, state=shared.state.dict(), current_image=current_image, textinfo=shared.state.textinfo)
return models.ProgressResponse(progress=progress, eta_relative=eta_relative, state=shared.state.dict(), current_image=current_image, textinfo=shared.state.textinfo, current_task=current_task)
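The progress response now carries `current_task`, so a polling client can correlate progress with the task id returned when the job was queued. A hedged polling sketch (default address assumed):

import time
import requests

while True:
    r = requests.get("http://127.0.0.1:7860/sdapi/v1/progress?skip_current_image=true", timeout=30)
    data = r.json()
    print(f"{data['progress'] * 100:.1f}% current_task={data.get('current_task')}")
    if data["progress"] == 0:  # 0 means idle: the job finished or none is running
        break
    time.sleep(1)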
def interrogateapi(self, interrogatereq: models.InterrogateRequest):
image_b64 = interrogatereq.image
@@ -578,6 +684,17 @@ class Api:
def get_samplers(self):
return [{"name": sampler[0], "aliases":sampler[2], "options":sampler[3]} for sampler in sd_samplers.all_samplers]
def get_schedulers(self):
return [
{
"name": scheduler.name,
"label": scheduler.label,
"aliases": scheduler.aliases,
"default_rho": scheduler.default_rho,
"need_inner_model": scheduler.need_inner_model,
}
for scheduler in sd_schedulers.schedulers]
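A hedged sketch of querying the new schedulers endpoint (default address assumed):

import requests

for s in requests.get("http://127.0.0.1:7860/sdapi/v1/schedulers", timeout=30).json():
    print(s["name"], s["label"], s["aliases"], s["default_rho"], s["need_inner_model"])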
def get_upscalers(self):
return [
{
@@ -643,6 +760,10 @@ class Api:
"skipped": convert_embeddings(db.skipped_embeddings),
}
def refresh_embeddings(self):
with self.queue_lock:
sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True)
def refresh_checkpoints(self):
with self.queue_lock:
shared.refresh_checkpoints()
@@ -775,7 +896,15 @@ class Api:
def launch(self, server_name, port, root_path):
self.app.include_router(self.router)
uvicorn.run(self.app, host=server_name, port=port, timeout_keep_alive=shared.cmd_opts.timeout_keep_alive, root_path=root_path)
uvicorn.run(
self.app,
host=server_name,
port=port,
timeout_keep_alive=shared.cmd_opts.timeout_keep_alive,
root_path=root_path,
ssl_keyfile=shared.cmd_opts.tls_keyfile,
ssl_certfile=shared.cmd_opts.tls_certfile
)
def kill_webui(self):
restart.stop_program()
+12 -1
@@ -107,6 +107,8 @@ StableDiffusionTxt2ImgProcessingAPI = PydanticModelGenerator(
{"key": "send_images", "type": bool, "default": True},
{"key": "save_images", "type": bool, "default": False},
{"key": "alwayson_scripts", "type": dict, "default": {}},
{"key": "force_task_id", "type": str, "default": None},
{"key": "infotext", "type": str, "default": None},
]
).generate_model()
@@ -124,6 +126,8 @@ StableDiffusionImg2ImgProcessingAPI = PydanticModelGenerator(
{"key": "send_images", "type": bool, "default": True},
{"key": "save_images", "type": bool, "default": False},
{"key": "alwayson_scripts", "type": dict, "default": {}},
{"key": "force_task_id", "type": str, "default": None},
{"key": "infotext", "type": str, "default": None},
]
).generate_model()
@@ -143,7 +147,7 @@ class ExtrasBaseRequest(BaseModel):
gfpgan_visibility: float = Field(default=0, title="GFPGAN Visibility", ge=0, le=1, allow_inf_nan=False, description="Sets the visibility of GFPGAN, values should be between 0 and 1.")
codeformer_visibility: float = Field(default=0, title="CodeFormer Visibility", ge=0, le=1, allow_inf_nan=False, description="Sets the visibility of CodeFormer, values should be between 0 and 1.")
codeformer_weight: float = Field(default=0, title="CodeFormer Weight", ge=0, le=1, allow_inf_nan=False, description="Sets the weight of CodeFormer, values should be between 0 and 1.")
upscaling_resize: float = Field(default=2, title="Upscaling Factor", ge=1, le=8, description="By how much to upscale the image, only used when resize_mode=0.")
upscaling_resize: float = Field(default=2, title="Upscaling Factor", gt=0, description="By how much to upscale the image, only used when resize_mode=0.")
upscaling_resize_w: int = Field(default=512, title="Target Width", ge=1, description="Target width for the upscaler to hit. Only used when resize_mode=1.")
upscaling_resize_h: int = Field(default=512, title="Target Height", ge=1, description="Target height for the upscaler to hit. Only used when resize_mode=1.")
upscaling_crop: bool = Field(default=True, title="Crop to fit", description="Should the upscaler crop the image to fit in the chosen size?")
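Relaxing `upscaling_resize` from ge=1, le=8 to gt=0 lets API clients downscale as well as upscale. A hedged sketch of a half-size request through the extras endpoint (file name and address are assumptions):

import base64
import requests

with open("input.png", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode()

payload = {
    "image": img_b64,
    "resize_mode": 0,          # scale by factor
    "upscaling_resize": 0.5,   # below 1 is now allowed: downscale to half size
    "upscaler_1": "Lanczos",
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/extra-single-image", json=payload, timeout=300)
r.raise_for_status()
result_b64 = r.json()["image"]  # base64 of the resized result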
@@ -231,6 +235,13 @@ class SamplerItem(BaseModel):
aliases: list[str] = Field(title="Aliases")
options: dict[str, str] = Field(title="Options")
class SchedulerItem(BaseModel):
name: str = Field(title="Name")
label: str = Field(title="Label")
aliases: Optional[list[str]] = Field(title="Aliases")
default_rho: Optional[float] = Field(title="Default Rho")
need_inner_model: Optional[bool] = Field(title="Needs Inner Model")
class UpscalerItem(BaseModel):
name: str = Field(title="Name")
model_name: Optional[str] = Field(title="Model Name")
+44 -45
@@ -2,48 +2,55 @@ import json
import os
import os.path
import threading
import time
import diskcache
import tqdm
from modules.paths import data_path, script_path
cache_filename = os.environ.get('SD_WEBUI_CACHE_FILE', os.path.join(data_path, "cache.json"))
cache_data = None
cache_dir = os.environ.get('SD_WEBUI_CACHE_DIR', os.path.join(data_path, "cache"))
caches = {}
cache_lock = threading.Lock()
dump_cache_after = None
dump_cache_thread = None
def dump_cache():
"""
Marks cache for writing to disk. 5 seconds after no one else flags the cache for writing, it is written.
"""
"""old function for dumping cache to disk; does nothing since diskcache."""
global dump_cache_after
global dump_cache_thread
pass
def thread_func():
global dump_cache_after
global dump_cache_thread
while dump_cache_after is not None and time.time() < dump_cache_after:
time.sleep(1)
def make_cache(subsection: str) -> diskcache.Cache:
return diskcache.Cache(
os.path.join(cache_dir, subsection),
size_limit=2**32, # 4 GB, culling oldest first
disk_min_file_size=2**18, # keep up to 256KB in Sqlite
)
with cache_lock:
cache_filename_tmp = cache_filename + "-"
with open(cache_filename_tmp, "w", encoding="utf8") as file:
json.dump(cache_data, file, indent=4, ensure_ascii=False)
os.replace(cache_filename_tmp, cache_filename)
def convert_old_cached_data():
try:
with open(cache_filename, "r", encoding="utf8") as file:
data = json.load(file)
except FileNotFoundError:
return
except Exception:
os.replace(cache_filename, os.path.join(script_path, "tmp", "cache.json"))
print('[ERROR] issue occurred while trying to read cache.json; old cache has been moved to tmp/cache.json')
return
dump_cache_after = None
dump_cache_thread = None
total_count = sum(len(keyvalues) for keyvalues in data.values())
with cache_lock:
dump_cache_after = time.time() + 5
if dump_cache_thread is None:
dump_cache_thread = threading.Thread(name='cache-writer', target=thread_func)
dump_cache_thread.start()
with tqdm.tqdm(total=total_count, desc="converting cache") as progress:
for subsection, keyvalues in data.items():
cache_obj = caches.get(subsection)
if cache_obj is None:
cache_obj = make_cache(subsection)
caches[subsection] = cache_obj
for key, value in keyvalues.items():
cache_obj[key] = value
progress.update(1)
def cache(subsection):
@@ -54,29 +61,21 @@ def cache(subsection):
subsection (str): The subsection identifier for the cache.
Returns:
dict: The cache data for the specified subsection.
diskcache.Cache: The cache data for the specified subsection.
"""
global cache_data
if cache_data is None:
cache_obj = caches.get(subsection)
if not cache_obj:
with cache_lock:
if cache_data is None:
if not os.path.isfile(cache_filename):
cache_data = {}
else:
try:
with open(cache_filename, "r", encoding="utf8") as file:
cache_data = json.load(file)
except Exception:
os.replace(cache_filename, os.path.join(script_path, "tmp", "cache.json"))
print('[ERROR] issue occurred while trying to read cache.json, move current cache to tmp/cache.json and create new cache')
cache_data = {}
if not os.path.exists(cache_dir) and os.path.isfile(cache_filename):
convert_old_cached_data()
s = cache_data.get(subsection, {})
cache_data[subsection] = s
cache_obj = caches.get(subsection)
if not cache_obj:
cache_obj = make_cache(subsection)
caches[subsection] = cache_obj
return s
return cache_obj
def cached_data_for_file(subsection, title, filename, func):
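The rewrite swaps the single cache.json file for one diskcache.Cache per subsection, which behaves like a persistent dict backed by SQLite plus loose files for large values. A minimal standalone sketch of that API (directory name is an example):

import diskcache

cache = diskcache.Cache(
    "/tmp/sd-webui-cache-demo",
    size_limit=2**32,          # 4 GB, oldest entries culled first
    disk_min_file_size=2**18,  # values up to 256KB stay inside SQLite
)
cache["some-model.safetensors"] = {"sha256": "0123abcd"}  # written through to disk
print(cache.get("some-model.safetensors"))
cache.close()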
+3 -2
@@ -78,6 +78,7 @@ def wrap_gradio_call(func, extra_outputs=None, add_stats=False):
shared.state.skipped = False
shared.state.interrupted = False
shared.state.stopping_generation = False
shared.state.job_count = 0
if not add_stats:
@@ -99,8 +100,8 @@ def wrap_gradio_call(func, extra_outputs=None, add_stats=False):
sys_pct = sys_peak/max(sys_total, 1) * 100
toltip_a = "Active: peak amount of video memory used during generation (excluding cached data)"
toltip_r = "Reserved: total amout of video memory allocated by the Torch library "
toltip_sys = "System: peak amout of video memory allocated by all running programs, out of total capacity"
toltip_r = "Reserved: total amount of video memory allocated by the Torch library "
toltip_sys = "System: peak amount of video memory allocated by all running programs, out of total capacity"
text_a = f"<abbr title='{toltip_a}'>A</abbr>: <span class='measurement'>{active_peak/1024:.2f} GB</span>"
text_r = f"<abbr title='{toltip_r}'>R</abbr>: <span class='measurement'>{reserved_peak/1024:.2f} GB</span>"
+29 -23
@@ -1,7 +1,7 @@
import argparse
import json
import os
from modules.paths_internal import models_path, script_path, data_path, extensions_dir, extensions_builtin_dir, sd_default_config, sd_model_file # noqa: F401
from modules.paths_internal import normalized_filepath, models_path, script_path, data_path, extensions_dir, extensions_builtin_dir, sd_default_config, sd_model_file # noqa: F401
parser = argparse.ArgumentParser()
@@ -19,21 +19,21 @@ parser.add_argument("--skip-install", action='store_true', help="launch.py argum
parser.add_argument("--dump-sysinfo", action='store_true', help="launch.py argument: dump limited sysinfo file (without information about extensions, options) to disk and quit")
parser.add_argument("--loglevel", type=str, help="log level; one of: CRITICAL, ERROR, WARNING, INFO, DEBUG", default=None)
parser.add_argument("--do-not-download-clip", action='store_true', help="do not download CLIP model even if it's not included in the checkpoint")
parser.add_argument("--data-dir", type=str, default=os.path.dirname(os.path.dirname(os.path.realpath(__file__))), help="base path where all user data is stored")
parser.add_argument("--config", type=str, default=sd_default_config, help="path to config which constructs model",)
parser.add_argument("--ckpt", type=str, default=sd_model_file, help="path to checkpoint of stable diffusion model; if specified, this checkpoint will be added to the list of checkpoints and loaded",)
parser.add_argument("--ckpt-dir", type=str, default=None, help="Path to directory with stable diffusion checkpoints")
parser.add_argument("--vae-dir", type=str, default=None, help="Path to directory with VAE files")
parser.add_argument("--gfpgan-dir", type=str, help="GFPGAN directory", default=('./src/gfpgan' if os.path.exists('./src/gfpgan') else './GFPGAN'))
parser.add_argument("--gfpgan-model", type=str, help="GFPGAN model file name", default=None)
parser.add_argument("--data-dir", type=normalized_filepath, default=os.path.dirname(os.path.dirname(os.path.realpath(__file__))), help="base path where all user data is stored")
parser.add_argument("--config", type=normalized_filepath, default=sd_default_config, help="path to config which constructs model",)
parser.add_argument("--ckpt", type=normalized_filepath, default=sd_model_file, help="path to checkpoint of stable diffusion model; if specified, this checkpoint will be added to the list of checkpoints and loaded",)
parser.add_argument("--ckpt-dir", type=normalized_filepath, default=None, help="Path to directory with stable diffusion checkpoints")
parser.add_argument("--vae-dir", type=normalized_filepath, default=None, help="Path to directory with VAE files")
parser.add_argument("--gfpgan-dir", type=normalized_filepath, help="GFPGAN directory", default=('./src/gfpgan' if os.path.exists('./src/gfpgan') else './GFPGAN'))
parser.add_argument("--gfpgan-model", type=normalized_filepath, help="GFPGAN model file name", default=None)
parser.add_argument("--no-half", action='store_true', help="do not switch the model to 16-bit floats")
parser.add_argument("--no-half-vae", action='store_true', help="do not switch the VAE model to 16-bit floats")
parser.add_argument("--no-progressbar-hiding", action='store_true', help="do not hide progressbar in gradio UI (we hide it because it slows down ML if you have hardware acceleration in browser)")
parser.add_argument("--max-batch-count", type=int, default=16, help="maximum batch count value for the UI")
parser.add_argument("--embeddings-dir", type=str, default=os.path.join(data_path, 'embeddings'), help="embeddings directory for textual inversion (default: embeddings)")
parser.add_argument("--textual-inversion-templates-dir", type=str, default=os.path.join(script_path, 'textual_inversion_templates'), help="directory with textual inversion templates")
parser.add_argument("--hypernetwork-dir", type=str, default=os.path.join(models_path, 'hypernetworks'), help="hypernetwork directory")
parser.add_argument("--localizations-dir", type=str, default=os.path.join(script_path, 'localizations'), help="localizations directory")
parser.add_argument("--embeddings-dir", type=normalized_filepath, default=os.path.join(data_path, 'embeddings'), help="embeddings directory for textual inversion (default: embeddings)")
parser.add_argument("--textual-inversion-templates-dir", type=normalized_filepath, default=os.path.join(script_path, 'textual_inversion_templates'), help="directory with textual inversion templates")
parser.add_argument("--hypernetwork-dir", type=normalized_filepath, default=os.path.join(models_path, 'hypernetworks'), help="hypernetwork directory")
parser.add_argument("--localizations-dir", type=normalized_filepath, default=os.path.join(script_path, 'localizations'), help="localizations directory")
parser.add_argument("--allow-code", action='store_true', help="allow custom script execution from webui")
parser.add_argument("--medvram", action='store_true', help="enable stable diffusion model optimizations for sacrificing a little speed for low VRM usage")
parser.add_argument("--medvram-sdxl", action='store_true', help="enable --medvram optimization just for SDXL models")
@@ -48,12 +48,13 @@ parser.add_argument("--ngrok", type=str, help="ngrok authtoken, alternative to g
parser.add_argument("--ngrok-region", type=str, help="does not do anything.", default="")
parser.add_argument("--ngrok-options", type=json.loads, help='The options to pass to ngrok in JSON format, e.g.: \'{"authtoken_from_env":true, "basic_auth":"user:password", "oauth_provider":"google", "oauth_allow_emails":"user@asdf.com"}\'', default=dict())
parser.add_argument("--enable-insecure-extension-access", action='store_true', help="enable extensions tab regardless of other options")
parser.add_argument("--codeformer-models-path", type=str, help="Path to directory with codeformer model file(s).", default=os.path.join(models_path, 'Codeformer'))
parser.add_argument("--gfpgan-models-path", type=str, help="Path to directory with GFPGAN model file(s).", default=os.path.join(models_path, 'GFPGAN'))
parser.add_argument("--esrgan-models-path", type=str, help="Path to directory with ESRGAN model file(s).", default=os.path.join(models_path, 'ESRGAN'))
parser.add_argument("--bsrgan-models-path", type=str, help="Path to directory with BSRGAN model file(s).", default=os.path.join(models_path, 'BSRGAN'))
parser.add_argument("--realesrgan-models-path", type=str, help="Path to directory with RealESRGAN model file(s).", default=os.path.join(models_path, 'RealESRGAN'))
parser.add_argument("--clip-models-path", type=str, help="Path to directory with CLIP model file(s).", default=None)
parser.add_argument("--codeformer-models-path", type=normalized_filepath, help="Path to directory with codeformer model file(s).", default=os.path.join(models_path, 'Codeformer'))
parser.add_argument("--gfpgan-models-path", type=normalized_filepath, help="Path to directory with GFPGAN model file(s).", default=os.path.join(models_path, 'GFPGAN'))
parser.add_argument("--esrgan-models-path", type=normalized_filepath, help="Path to directory with ESRGAN model file(s).", default=os.path.join(models_path, 'ESRGAN'))
parser.add_argument("--bsrgan-models-path", type=normalized_filepath, help="Path to directory with BSRGAN model file(s).", default=os.path.join(models_path, 'BSRGAN'))
parser.add_argument("--realesrgan-models-path", type=normalized_filepath, help="Path to directory with RealESRGAN model file(s).", default=os.path.join(models_path, 'RealESRGAN'))
parser.add_argument("--dat-models-path", type=normalized_filepath, help="Path to directory with DAT model file(s).", default=os.path.join(models_path, 'DAT'))
parser.add_argument("--clip-models-path", type=normalized_filepath, help="Path to directory with CLIP model file(s).", default=None)
parser.add_argument("--xformers", action='store_true', help="enable xformers for cross attention layers")
parser.add_argument("--force-enable-xformers", action='store_true', help="enable xformers for cross attention layers regardless of whether the checking code thinks you can run it; do not make bug reports if this fails to work")
parser.add_argument("--xformers-flash-attention", action='store_true', help="enable xformers with Flash Attention to improve reproducibility (supported for SD2.x or variant only)")
@@ -77,22 +78,24 @@ parser.add_argument("--port", type=int, help="launch gradio with given server po
parser.add_argument("--show-negative-prompt", action='store_true', help="does not do anything", default=False)
parser.add_argument("--ui-config-file", type=str, help="filename to use for ui configuration", default=os.path.join(data_path, 'ui-config.json'))
parser.add_argument("--hide-ui-dir-config", action='store_true', help="hide directory configuration from webui", default=False)
parser.add_argument("--freeze-settings", action='store_true', help="disable editing settings", default=False)
parser.add_argument("--freeze-settings", action='store_true', help="disable editing of all settings globally", default=False)
parser.add_argument("--freeze-settings-in-sections", type=str, help='disable editing settings in specific sections of the settings page by specifying a comma-delimited list such like "saving-images,upscaling". The list of setting names can be found in the modules/shared_options.py file', default=None)
parser.add_argument("--freeze-specific-settings", type=str, help='disable editing of individual settings by specifying a comma-delimited list like "samples_save,samples_format". The list of setting names can be found in the config.json file', default=None)
parser.add_argument("--ui-settings-file", type=str, help="filename to use for ui settings", default=os.path.join(data_path, 'config.json'))
parser.add_argument("--gradio-debug", action='store_true', help="launch gradio with --debug option")
parser.add_argument("--gradio-auth", type=str, help='set gradio authentication like "username:password"; or comma-delimit multiple like "u1:p1,u2:p2,u3:p3"', default=None)
parser.add_argument("--gradio-auth-path", type=str, help='set gradio authentication file path ex. "/path/to/auth/file" same auth format as --gradio-auth', default=None)
parser.add_argument("--gradio-auth-path", type=normalized_filepath, help='set gradio authentication file path ex. "/path/to/auth/file" same auth format as --gradio-auth', default=None)
parser.add_argument("--gradio-img2img-tool", type=str, help='does not do anything')
parser.add_argument("--gradio-inpaint-tool", type=str, help="does not do anything")
parser.add_argument("--gradio-allowed-path", action='append', help="add path to gradio's allowed_paths, make it possible to serve files from it", default=[data_path])
parser.add_argument("--opt-channelslast", action='store_true', help="change memory type for stable diffusion to channels last")
parser.add_argument("--styles-file", type=str, help="filename to use for styles", default=os.path.join(data_path, 'styles.csv'))
parser.add_argument("--styles-file", type=str, action='append', help="path or wildcard path of styles files, allow multiple entries.", default=[])
parser.add_argument("--autolaunch", action='store_true', help="open the webui URL in the system's default browser upon launch", default=False)
parser.add_argument("--theme", type=str, help="launches the UI with light or dark theme", default=None)
parser.add_argument("--use-textbox-seed", action='store_true', help="use textbox for seeds in UI (no up/down, but possible to input long seeds)", default=False)
parser.add_argument("--disable-console-progressbars", action='store_true', help="do not output progressbars to console", default=False)
parser.add_argument("--enable-console-prompts", action='store_true', help="does not do anything", default=False) # Legacy compatibility, use as default value shared.opts.enable_console_prompts
parser.add_argument('--vae-path', type=str, help='Checkpoint to use as VAE; setting this argument disables all settings related to VAE', default=None)
parser.add_argument('--vae-path', type=normalized_filepath, help='Checkpoint to use as VAE; setting this argument disables all settings related to VAE', default=None)
parser.add_argument("--disable-safe-unpickle", action='store_true', help="disable checking pytorch models for malicious code", default=False)
parser.add_argument("--api", action='store_true', help="use api=True to launch the API together with the webui (use --nowebui instead for only the API)")
parser.add_argument("--api-auth", type=str, help='Set authentication for API like "username:password"; or comma-delimit multiple like "u1:p1,u2:p2,u3:p3"', default=None)
@@ -118,4 +121,7 @@ parser.add_argument('--api-server-stop', action='store_true', help='enable serve
parser.add_argument('--timeout-keep-alive', type=int, default=30, help='set timeout_keep_alive for uvicorn')
parser.add_argument("--disable-all-extensions", action='store_true', help="prevent all extensions from running regardless of any other settings", default=False)
parser.add_argument("--disable-extra-extensions", action='store_true', help="prevent all extensions except built-in from running regardless of any other settings", default=False)
parser.add_argument("--skip-load-model-at-start", action='store_true', help="if load a model at web start, only take effect when --nowebui", )
parser.add_argument("--skip-load-model-at-start", action='store_true', help="if load a model at web start, only take effect when --nowebui")
parser.add_argument("--unix-filenames-sanitization", action='store_true', help="allow any symbols except '/' in filenames. May conflict with your browser and file system")
parser.add_argument("--filenames-max-length", type=int, default=128, help='maximal length of filenames of saved images. If you override it, it can conflict with your file system')
parser.add_argument("--no-prompt-history", action='store_true', help="disable read prompt from last generation feature; settings this argument will not create '--data_path/params.txt' file")
-276
@@ -1,276 +0,0 @@
# this file is copied from CodeFormer repository. Please see comment in modules/codeformer_model.py
import math
import torch
from torch import nn, Tensor
import torch.nn.functional as F
from typing import Optional
from modules.codeformer.vqgan_arch import VQAutoEncoder, ResBlock
from basicsr.utils.registry import ARCH_REGISTRY
def calc_mean_std(feat, eps=1e-5):
"""Calculate mean and std for adaptive_instance_normalization.
Args:
feat (Tensor): 4D tensor.
eps (float): A small value added to the variance to avoid
divide-by-zero. Default: 1e-5.
"""
size = feat.size()
assert len(size) == 4, 'The input feature should be 4D tensor.'
b, c = size[:2]
feat_var = feat.view(b, c, -1).var(dim=2) + eps
feat_std = feat_var.sqrt().view(b, c, 1, 1)
feat_mean = feat.view(b, c, -1).mean(dim=2).view(b, c, 1, 1)
return feat_mean, feat_std
def adaptive_instance_normalization(content_feat, style_feat):
"""Adaptive instance normalization.
Adjust the reference features to have a similar color and illumination
to those in the degraded features.
Args:
content_feat (Tensor): The reference feature.
style_feat (Tensor): The degradate features.
"""
size = content_feat.size()
style_mean, style_std = calc_mean_std(style_feat)
content_mean, content_std = calc_mean_std(content_feat)
normalized_feat = (content_feat - content_mean.expand(size)) / content_std.expand(size)
return normalized_feat * style_std.expand(size) + style_mean.expand(size)
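A quick numeric check of the two helpers above: after AdaIN, the content features take on the style features' per-channel mean and std.

import torch

content = torch.randn(1, 3, 8, 8)
style = torch.randn(1, 3, 8, 8) * 2 + 5  # different statistics: std ≈ 2, mean ≈ 5

out = adaptive_instance_normalization(content, style)
mean, std = calc_mean_std(out)
print(mean.flatten(), std.flatten())  # approximately the style statistics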
class PositionEmbeddingSine(nn.Module):
"""
This is a more standard version of the position embedding, very similar to the one
used by the Attention is all you need paper, generalized to work on images.
"""
def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None):
super().__init__()
self.num_pos_feats = num_pos_feats
self.temperature = temperature
self.normalize = normalize
if scale is not None and normalize is False:
raise ValueError("normalize should be True if scale is passed")
if scale is None:
scale = 2 * math.pi
self.scale = scale
def forward(self, x, mask=None):
if mask is None:
mask = torch.zeros((x.size(0), x.size(2), x.size(3)), device=x.device, dtype=torch.bool)
not_mask = ~mask
y_embed = not_mask.cumsum(1, dtype=torch.float32)
x_embed = not_mask.cumsum(2, dtype=torch.float32)
if self.normalize:
eps = 1e-6
y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale
x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale
dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)
pos_x = x_embed[:, :, :, None] / dim_t
pos_y = y_embed[:, :, :, None] / dim_t
pos_x = torch.stack(
(pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4
).flatten(3)
pos_y = torch.stack(
(pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4
).flatten(3)
pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
return pos
def _get_activation_fn(activation):
"""Return an activation function given a string"""
if activation == "relu":
return F.relu
if activation == "gelu":
return F.gelu
if activation == "glu":
return F.glu
raise RuntimeError(F"activation should be relu/gelu, not {activation}.")
class TransformerSALayer(nn.Module):
def __init__(self, embed_dim, nhead=8, dim_mlp=2048, dropout=0.0, activation="gelu"):
super().__init__()
self.self_attn = nn.MultiheadAttention(embed_dim, nhead, dropout=dropout)
# Implementation of Feedforward model - MLP
self.linear1 = nn.Linear(embed_dim, dim_mlp)
self.dropout = nn.Dropout(dropout)
self.linear2 = nn.Linear(dim_mlp, embed_dim)
self.norm1 = nn.LayerNorm(embed_dim)
self.norm2 = nn.LayerNorm(embed_dim)
self.dropout1 = nn.Dropout(dropout)
self.dropout2 = nn.Dropout(dropout)
self.activation = _get_activation_fn(activation)
def with_pos_embed(self, tensor, pos: Optional[Tensor]):
return tensor if pos is None else tensor + pos
def forward(self, tgt,
tgt_mask: Optional[Tensor] = None,
tgt_key_padding_mask: Optional[Tensor] = None,
query_pos: Optional[Tensor] = None):
# self attention
tgt2 = self.norm1(tgt)
q = k = self.with_pos_embed(tgt2, query_pos)
tgt2 = self.self_attn(q, k, value=tgt2, attn_mask=tgt_mask,
key_padding_mask=tgt_key_padding_mask)[0]
tgt = tgt + self.dropout1(tgt2)
# ffn
tgt2 = self.norm2(tgt)
tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2))))
tgt = tgt + self.dropout2(tgt2)
return tgt
class Fuse_sft_block(nn.Module):
def __init__(self, in_ch, out_ch):
super().__init__()
self.encode_enc = ResBlock(2*in_ch, out_ch)
self.scale = nn.Sequential(
nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
nn.LeakyReLU(0.2, True),
nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1))
self.shift = nn.Sequential(
nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
nn.LeakyReLU(0.2, True),
nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1))
def forward(self, enc_feat, dec_feat, w=1):
enc_feat = self.encode_enc(torch.cat([enc_feat, dec_feat], dim=1))
scale = self.scale(enc_feat)
shift = self.shift(enc_feat)
residual = w * (dec_feat * scale + shift)
out = dec_feat + residual
return out
@ARCH_REGISTRY.register()
class CodeFormer(VQAutoEncoder):
def __init__(self, dim_embd=512, n_head=8, n_layers=9,
codebook_size=1024, latent_size=256,
connect_list=('32', '64', '128', '256'),
fix_modules=('quantize', 'generator')):
super(CodeFormer, self).__init__(512, 64, [1, 2, 2, 4, 4, 8], 'nearest',2, [16], codebook_size)
if fix_modules is not None:
for module in fix_modules:
for param in getattr(self, module).parameters():
param.requires_grad = False
self.connect_list = connect_list
self.n_layers = n_layers
self.dim_embd = dim_embd
self.dim_mlp = dim_embd*2
self.position_emb = nn.Parameter(torch.zeros(latent_size, self.dim_embd))
self.feat_emb = nn.Linear(256, self.dim_embd)
# transformer
self.ft_layers = nn.Sequential(*[TransformerSALayer(embed_dim=dim_embd, nhead=n_head, dim_mlp=self.dim_mlp, dropout=0.0)
for _ in range(self.n_layers)])
# logits_predict head
self.idx_pred_layer = nn.Sequential(
nn.LayerNorm(dim_embd),
nn.Linear(dim_embd, codebook_size, bias=False))
self.channels = {
'16': 512,
'32': 256,
'64': 256,
'128': 128,
'256': 128,
'512': 64,
}
# after second residual block for > 16, before attn layer for ==16
self.fuse_encoder_block = {'512':2, '256':5, '128':8, '64':11, '32':14, '16':18}
# after first residual block for > 16, before attn layer for ==16
self.fuse_generator_block = {'16':6, '32': 9, '64':12, '128':15, '256':18, '512':21}
# fuse_convs_dict
self.fuse_convs_dict = nn.ModuleDict()
for f_size in self.connect_list:
in_ch = self.channels[f_size]
self.fuse_convs_dict[f_size] = Fuse_sft_block(in_ch, in_ch)
def _init_weights(self, module):
if isinstance(module, (nn.Linear, nn.Embedding)):
module.weight.data.normal_(mean=0.0, std=0.02)
if isinstance(module, nn.Linear) and module.bias is not None:
module.bias.data.zero_()
elif isinstance(module, nn.LayerNorm):
module.bias.data.zero_()
module.weight.data.fill_(1.0)
def forward(self, x, w=0, detach_16=True, code_only=False, adain=False):
# ################### Encoder #####################
enc_feat_dict = {}
out_list = [self.fuse_encoder_block[f_size] for f_size in self.connect_list]
for i, block in enumerate(self.encoder.blocks):
x = block(x)
if i in out_list:
enc_feat_dict[str(x.shape[-1])] = x.clone()
lq_feat = x
# ################# Transformer ###################
# quant_feat, codebook_loss, quant_stats = self.quantize(lq_feat)
pos_emb = self.position_emb.unsqueeze(1).repeat(1,x.shape[0],1)
# BCHW -> BC(HW) -> (HW)BC
feat_emb = self.feat_emb(lq_feat.flatten(2).permute(2,0,1))
query_emb = feat_emb
# Transformer encoder
for layer in self.ft_layers:
query_emb = layer(query_emb, query_pos=pos_emb)
# output logits
logits = self.idx_pred_layer(query_emb) # (hw)bn
logits = logits.permute(1,0,2) # (hw)bn -> b(hw)n
if code_only: # for training stage II
# logits doesn't need softmax before cross_entropy loss
return logits, lq_feat
# ################# Quantization ###################
# if self.training:
# quant_feat = torch.einsum('btn,nc->btc', [soft_one_hot, self.quantize.embedding.weight])
# # b(hw)c -> bc(hw) -> bchw
# quant_feat = quant_feat.permute(0,2,1).view(lq_feat.shape)
# ------------
soft_one_hot = F.softmax(logits, dim=2)
_, top_idx = torch.topk(soft_one_hot, 1, dim=2)
quant_feat = self.quantize.get_codebook_feat(top_idx, shape=[x.shape[0],16,16,256])
# preserve gradients
# quant_feat = lq_feat + (quant_feat - lq_feat).detach()
if detach_16:
quant_feat = quant_feat.detach() # for training stage III
if adain:
quant_feat = adaptive_instance_normalization(quant_feat, lq_feat)
# ################## Generator ####################
x = quant_feat
fuse_list = [self.fuse_generator_block[f_size] for f_size in self.connect_list]
for i, block in enumerate(self.generator.blocks):
x = block(x)
if i in fuse_list: # fuse after i-th block
f_size = str(x.shape[-1])
if w>0:
x = self.fuse_convs_dict[f_size](enc_feat_dict[f_size].detach(), x, w)
out = x
# logits doesn't need softmax before cross_entropy loss
return out, logits, lq_feat
-435
@@ -1,435 +0,0 @@
# this file is copied from CodeFormer repository. Please see comment in modules/codeformer_model.py
'''
VQGAN code, adapted from the original created by the Unleashing Transformers authors:
https://github.com/samb-t/unleashing-transformers/blob/master/models/vqgan.py
'''
import torch
import torch.nn as nn
import torch.nn.functional as F
from basicsr.utils import get_root_logger
from basicsr.utils.registry import ARCH_REGISTRY
def normalize(in_channels):
return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True)
@torch.jit.script
def swish(x):
return x*torch.sigmoid(x)
# Define VQVAE classes
class VectorQuantizer(nn.Module):
def __init__(self, codebook_size, emb_dim, beta):
super(VectorQuantizer, self).__init__()
self.codebook_size = codebook_size # number of embeddings
self.emb_dim = emb_dim # dimension of embedding
self.beta = beta # commitment cost used in loss term, beta * ||z_e(x)-sg[e]||^2
self.embedding = nn.Embedding(self.codebook_size, self.emb_dim)
self.embedding.weight.data.uniform_(-1.0 / self.codebook_size, 1.0 / self.codebook_size)
def forward(self, z):
# reshape z -> (batch, height, width, channel) and flatten
z = z.permute(0, 2, 3, 1).contiguous()
z_flattened = z.view(-1, self.emb_dim)
# distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z
d = (z_flattened ** 2).sum(dim=1, keepdim=True) + (self.embedding.weight**2).sum(1) - \
2 * torch.matmul(z_flattened, self.embedding.weight.t())
mean_distance = torch.mean(d)
# find closest encodings
# min_encoding_indices = torch.argmin(d, dim=1).unsqueeze(1)
min_encoding_scores, min_encoding_indices = torch.topk(d, 1, dim=1, largest=False)
# [0-1], higher score, higher confidence
min_encoding_scores = torch.exp(-min_encoding_scores/10)
min_encodings = torch.zeros(min_encoding_indices.shape[0], self.codebook_size).to(z)
min_encodings.scatter_(1, min_encoding_indices, 1)
# get quantized latent vectors
z_q = torch.matmul(min_encodings, self.embedding.weight).view(z.shape)
# compute loss for embedding
loss = torch.mean((z_q.detach()-z)**2) + self.beta * torch.mean((z_q - z.detach()) ** 2)
# preserve gradients
z_q = z + (z_q - z).detach()
# perplexity
e_mean = torch.mean(min_encodings, dim=0)
perplexity = torch.exp(-torch.sum(e_mean * torch.log(e_mean + 1e-10)))
# reshape back to match original input shape
z_q = z_q.permute(0, 3, 1, 2).contiguous()
return z_q, loss, {
"perplexity": perplexity,
"min_encodings": min_encodings,
"min_encoding_indices": min_encoding_indices,
"min_encoding_scores": min_encoding_scores,
"mean_distance": mean_distance
}
def get_codebook_feat(self, indices, shape):
# input indices: batch*token_num -> (batch*token_num)*1
# shape: batch, height, width, channel
indices = indices.view(-1,1)
min_encodings = torch.zeros(indices.shape[0], self.codebook_size).to(indices)
min_encodings.scatter_(1, indices, 1)
# get quantized latent vectors
z_q = torch.matmul(min_encodings.float(), self.embedding.weight)
if shape is not None: # reshape back to match original input shape
z_q = z_q.view(shape).permute(0, 3, 1, 2).contiguous()
return z_q
class GumbelQuantizer(nn.Module):
def __init__(self, codebook_size, emb_dim, num_hiddens, straight_through=False, kl_weight=5e-4, temp_init=1.0):
super().__init__()
self.codebook_size = codebook_size # number of embeddings
self.emb_dim = emb_dim # dimension of embedding
self.straight_through = straight_through
self.temperature = temp_init
self.kl_weight = kl_weight
self.proj = nn.Conv2d(num_hiddens, codebook_size, 1) # projects last encoder layer to quantized logits
self.embed = nn.Embedding(codebook_size, emb_dim)
def forward(self, z):
hard = self.straight_through if self.training else True
logits = self.proj(z)
soft_one_hot = F.gumbel_softmax(logits, tau=self.temperature, dim=1, hard=hard)
z_q = torch.einsum("b n h w, n d -> b d h w", soft_one_hot, self.embed.weight)
# + kl divergence to the prior loss
qy = F.softmax(logits, dim=1)
diff = self.kl_weight * torch.sum(qy * torch.log(qy * self.codebook_size + 1e-10), dim=1).mean()
min_encoding_indices = soft_one_hot.argmax(dim=1)
return z_q, diff, {
"min_encoding_indices": min_encoding_indices
}
class Downsample(nn.Module):
def __init__(self, in_channels):
super().__init__()
self.conv = torch.nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=2, padding=0)
def forward(self, x):
pad = (0, 1, 0, 1)
x = torch.nn.functional.pad(x, pad, mode="constant", value=0)
x = self.conv(x)
return x
class Upsample(nn.Module):
def __init__(self, in_channels):
super().__init__()
self.conv = nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1)
def forward(self, x):
x = F.interpolate(x, scale_factor=2.0, mode="nearest")
x = self.conv(x)
return x
class ResBlock(nn.Module):
def __init__(self, in_channels, out_channels=None):
super(ResBlock, self).__init__()
self.in_channels = in_channels
self.out_channels = in_channels if out_channels is None else out_channels
self.norm1 = normalize(in_channels)
self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1)
self.norm2 = normalize(out_channels)
self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1)
if self.in_channels != self.out_channels:
self.conv_out = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0)
def forward(self, x_in):
x = x_in
x = self.norm1(x)
x = swish(x)
x = self.conv1(x)
x = self.norm2(x)
x = swish(x)
x = self.conv2(x)
if self.in_channels != self.out_channels:
x_in = self.conv_out(x_in)
return x + x_in
class AttnBlock(nn.Module):
def __init__(self, in_channels):
super().__init__()
self.in_channels = in_channels
self.norm = normalize(in_channels)
self.q = torch.nn.Conv2d(
in_channels,
in_channels,
kernel_size=1,
stride=1,
padding=0
)
self.k = torch.nn.Conv2d(
in_channels,
in_channels,
kernel_size=1,
stride=1,
padding=0
)
self.v = torch.nn.Conv2d(
in_channels,
in_channels,
kernel_size=1,
stride=1,
padding=0
)
self.proj_out = torch.nn.Conv2d(
in_channels,
in_channels,
kernel_size=1,
stride=1,
padding=0
)
def forward(self, x):
h_ = x
h_ = self.norm(h_)
q = self.q(h_)
k = self.k(h_)
v = self.v(h_)
# compute attention
b, c, h, w = q.shape
q = q.reshape(b, c, h*w)
q = q.permute(0, 2, 1)
k = k.reshape(b, c, h*w)
w_ = torch.bmm(q, k)
w_ = w_ * (int(c)**(-0.5))
w_ = F.softmax(w_, dim=2)
# attend to values
v = v.reshape(b, c, h*w)
w_ = w_.permute(0, 2, 1)
h_ = torch.bmm(v, w_)
h_ = h_.reshape(b, c, h, w)
h_ = self.proj_out(h_)
return x+h_
class Encoder(nn.Module):
def __init__(self, in_channels, nf, emb_dim, ch_mult, num_res_blocks, resolution, attn_resolutions):
super().__init__()
self.nf = nf
self.num_resolutions = len(ch_mult)
self.num_res_blocks = num_res_blocks
self.resolution = resolution
self.attn_resolutions = attn_resolutions
curr_res = self.resolution
in_ch_mult = (1,)+tuple(ch_mult)
blocks = []
# initial convolution
blocks.append(nn.Conv2d(in_channels, nf, kernel_size=3, stride=1, padding=1))
# residual and downsampling blocks, with attention on smaller res (16x16)
for i in range(self.num_resolutions):
block_in_ch = nf * in_ch_mult[i]
block_out_ch = nf * ch_mult[i]
for _ in range(self.num_res_blocks):
blocks.append(ResBlock(block_in_ch, block_out_ch))
block_in_ch = block_out_ch
if curr_res in attn_resolutions:
blocks.append(AttnBlock(block_in_ch))
if i != self.num_resolutions - 1:
blocks.append(Downsample(block_in_ch))
curr_res = curr_res // 2
# non-local attention block
blocks.append(ResBlock(block_in_ch, block_in_ch))
blocks.append(AttnBlock(block_in_ch))
blocks.append(ResBlock(block_in_ch, block_in_ch))
# normalise and convert to latent size
blocks.append(normalize(block_in_ch))
blocks.append(nn.Conv2d(block_in_ch, emb_dim, kernel_size=3, stride=1, padding=1))
self.blocks = nn.ModuleList(blocks)
def forward(self, x):
for block in self.blocks:
x = block(x)
return x
class Generator(nn.Module):
def __init__(self, nf, emb_dim, ch_mult, res_blocks, img_size, attn_resolutions):
super().__init__()
self.nf = nf
self.ch_mult = ch_mult
self.num_resolutions = len(self.ch_mult)
self.num_res_blocks = res_blocks
self.resolution = img_size
self.attn_resolutions = attn_resolutions
self.in_channels = emb_dim
self.out_channels = 3
block_in_ch = self.nf * self.ch_mult[-1]
curr_res = self.resolution // 2 ** (self.num_resolutions-1)
blocks = []
# initial conv
blocks.append(nn.Conv2d(self.in_channels, block_in_ch, kernel_size=3, stride=1, padding=1))
# non-local attention block
blocks.append(ResBlock(block_in_ch, block_in_ch))
blocks.append(AttnBlock(block_in_ch))
blocks.append(ResBlock(block_in_ch, block_in_ch))
for i in reversed(range(self.num_resolutions)):
block_out_ch = self.nf * self.ch_mult[i]
for _ in range(self.num_res_blocks):
blocks.append(ResBlock(block_in_ch, block_out_ch))
block_in_ch = block_out_ch
if curr_res in self.attn_resolutions:
blocks.append(AttnBlock(block_in_ch))
if i != 0:
blocks.append(Upsample(block_in_ch))
curr_res = curr_res * 2
blocks.append(normalize(block_in_ch))
blocks.append(nn.Conv2d(block_in_ch, self.out_channels, kernel_size=3, stride=1, padding=1))
self.blocks = nn.ModuleList(blocks)
def forward(self, x):
for block in self.blocks:
x = block(x)
return x
@ARCH_REGISTRY.register()
class VQAutoEncoder(nn.Module):
def __init__(self, img_size, nf, ch_mult, quantizer="nearest", res_blocks=2, attn_resolutions=None, codebook_size=1024, emb_dim=256,
beta=0.25, gumbel_straight_through=False, gumbel_kl_weight=1e-8, model_path=None):
super().__init__()
logger = get_root_logger()
self.in_channels = 3
self.nf = nf
self.n_blocks = res_blocks
self.codebook_size = codebook_size
self.embed_dim = emb_dim
self.ch_mult = ch_mult
self.resolution = img_size
self.attn_resolutions = attn_resolutions or [16]
self.quantizer_type = quantizer
self.encoder = Encoder(
self.in_channels,
self.nf,
self.embed_dim,
self.ch_mult,
self.n_blocks,
self.resolution,
self.attn_resolutions
)
if self.quantizer_type == "nearest":
self.beta = beta #0.25
self.quantize = VectorQuantizer(self.codebook_size, self.embed_dim, self.beta)
elif self.quantizer_type == "gumbel":
self.gumbel_num_hiddens = emb_dim
self.straight_through = gumbel_straight_through
self.kl_weight = gumbel_kl_weight
self.quantize = GumbelQuantizer(
self.codebook_size,
self.embed_dim,
self.gumbel_num_hiddens,
self.straight_through,
self.kl_weight
)
self.generator = Generator(
self.nf,
self.embed_dim,
self.ch_mult,
self.n_blocks,
self.resolution,
self.attn_resolutions
)
if model_path is not None:
chkpt = torch.load(model_path, map_location='cpu')
if 'params_ema' in chkpt:
self.load_state_dict(torch.load(model_path, map_location='cpu')['params_ema'])
logger.info(f'vqgan is loaded from: {model_path} [params_ema]')
elif 'params' in chkpt:
self.load_state_dict(torch.load(model_path, map_location='cpu')['params'])
logger.info(f'vqgan is loaded from: {model_path} [params]')
else:
raise ValueError('Wrong params!')
def forward(self, x):
x = self.encoder(x)
quant, codebook_loss, quant_stats = self.quantize(x)
x = self.generator(quant)
return x, codebook_loss, quant_stats
# patch based discriminator
@ARCH_REGISTRY.register()
class VQGANDiscriminator(nn.Module):
def __init__(self, nc=3, ndf=64, n_layers=4, model_path=None):
super().__init__()
layers = [nn.Conv2d(nc, ndf, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2, True)]
ndf_mult = 1
ndf_mult_prev = 1
for n in range(1, n_layers): # gradually increase the number of filters
ndf_mult_prev = ndf_mult
ndf_mult = min(2 ** n, 8)
layers += [
nn.Conv2d(ndf * ndf_mult_prev, ndf * ndf_mult, kernel_size=4, stride=2, padding=1, bias=False),
nn.BatchNorm2d(ndf * ndf_mult),
nn.LeakyReLU(0.2, True)
]
ndf_mult_prev = ndf_mult
ndf_mult = min(2 ** n_layers, 8)
layers += [
nn.Conv2d(ndf * ndf_mult_prev, ndf * ndf_mult, kernel_size=4, stride=1, padding=1, bias=False),
nn.BatchNorm2d(ndf * ndf_mult),
nn.LeakyReLU(0.2, True)
]
layers += [
nn.Conv2d(ndf * ndf_mult, 1, kernel_size=4, stride=1, padding=1)] # output 1 channel prediction map
self.main = nn.Sequential(*layers)
if model_path is not None:
chkpt = torch.load(model_path, map_location='cpu')
if 'params_d' in chkpt:
self.load_state_dict(torch.load(model_path, map_location='cpu')['params_d'])
elif 'params' in chkpt:
self.load_state_dict(torch.load(model_path, map_location='cpu')['params'])
else:
raise ValueError('Wrong params!')
def forward(self, x):
return self.main(x)
+49 -117
@@ -1,132 +1,64 @@
import os
from __future__ import annotations
import logging
import cv2
import torch
import modules.face_restoration
import modules.shared
from modules import shared, devices, modelloader, errors
from modules.paths import models_path
from modules import (
devices,
errors,
face_restoration,
face_restoration_utils,
modelloader,
shared,
)
logger = logging.getLogger(__name__)
# codeformer people made a choice to include a modified basicsr library in their project, which makes
# it utterly impossible to use alongside other libraries that also use basicsr, like GFPGAN.
# I am making a choice to include some files from codeformer to work around this issue.
model_dir = "Codeformer"
model_path = os.path.join(models_path, model_dir)
model_url = 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth'
model_download_name = 'codeformer-v0.1.0.pth'
codeformer = None
# used by e.g. postprocessing_codeformer.py
codeformer: face_restoration.FaceRestoration | None = None
def setup_model(dirname):
os.makedirs(model_path, exist_ok=True)
class FaceRestorerCodeFormer(face_restoration_utils.CommonFaceRestoration):
def name(self):
return "CodeFormer"
path = modules.paths.paths.get("CodeFormer", None)
if path is None:
return
def load_net(self) -> torch.Module:
for model_path in modelloader.load_models(
model_path=self.model_path,
model_url=model_url,
command_path=self.model_path,
download_name=model_download_name,
ext_filter=['.pth'],
):
return modelloader.load_spandrel_model(
model_path,
device=devices.device_codeformer,
expected_architecture='CodeFormer',
).model
raise ValueError("No codeformer model found")
def get_device(self):
return devices.device_codeformer
def restore(self, np_image, w: float | None = None):
if w is None:
w = getattr(shared.opts, "code_former_weight", 0.5)
def restore_face(cropped_face_t):
assert self.net is not None
return self.net(cropped_face_t, weight=w, adain=True)[0]
return self.restore_with_helper(np_image, restore_face)
def setup_model(dirname: str) -> None:
global codeformer
try:
from torchvision.transforms.functional import normalize
from modules.codeformer.codeformer_arch import CodeFormer
from basicsr.utils import img2tensor, tensor2img
from facelib.utils.face_restoration_helper import FaceRestoreHelper
from facelib.detection.retinaface import retinaface
net_class = CodeFormer
class FaceRestorerCodeFormer(modules.face_restoration.FaceRestoration):
def name(self):
return "CodeFormer"
def __init__(self, dirname):
self.net = None
self.face_helper = None
self.cmd_dir = dirname
def create_models(self):
if self.net is not None and self.face_helper is not None:
self.net.to(devices.device_codeformer)
return self.net, self.face_helper
model_paths = modelloader.load_models(model_path, model_url, self.cmd_dir, download_name='codeformer-v0.1.0.pth', ext_filter=['.pth'])
if len(model_paths) != 0:
ckpt_path = model_paths[0]
else:
print("Unable to load codeformer model.")
return None, None
net = net_class(dim_embd=512, codebook_size=1024, n_head=8, n_layers=9, connect_list=['32', '64', '128', '256']).to(devices.device_codeformer)
checkpoint = torch.load(ckpt_path)['params_ema']
net.load_state_dict(checkpoint)
net.eval()
if hasattr(retinaface, 'device'):
retinaface.device = devices.device_codeformer
face_helper = FaceRestoreHelper(1, face_size=512, crop_ratio=(1, 1), det_model='retinaface_resnet50', save_ext='png', use_parse=True, device=devices.device_codeformer)
self.net = net
self.face_helper = face_helper
return net, face_helper
def send_model_to(self, device):
self.net.to(device)
self.face_helper.face_det.to(device)
self.face_helper.face_parse.to(device)
def restore(self, np_image, w=None):
np_image = np_image[:, :, ::-1]
original_resolution = np_image.shape[0:2]
self.create_models()
if self.net is None or self.face_helper is None:
return np_image
self.send_model_to(devices.device_codeformer)
self.face_helper.clean_all()
self.face_helper.read_image(np_image)
self.face_helper.get_face_landmarks_5(only_center_face=False, resize=640, eye_dist_threshold=5)
self.face_helper.align_warp_face()
for cropped_face in self.face_helper.cropped_faces:
cropped_face_t = img2tensor(cropped_face / 255., bgr2rgb=True, float32=True)
normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True)
cropped_face_t = cropped_face_t.unsqueeze(0).to(devices.device_codeformer)
try:
with torch.no_grad():
output = self.net(cropped_face_t, w=w if w is not None else shared.opts.code_former_weight, adain=True)[0]
restored_face = tensor2img(output, rgb2bgr=True, min_max=(-1, 1))
del output
devices.torch_gc()
except Exception:
errors.report('Failed inference for CodeFormer', exc_info=True)
restored_face = tensor2img(cropped_face_t, rgb2bgr=True, min_max=(-1, 1))
restored_face = restored_face.astype('uint8')
self.face_helper.add_restored_face(restored_face)
self.face_helper.get_inverse_affine(None)
restored_img = self.face_helper.paste_faces_to_input_image()
restored_img = restored_img[:, :, ::-1]
if original_resolution != restored_img.shape[0:2]:
restored_img = cv2.resize(restored_img, (0, 0), fx=original_resolution[1]/restored_img.shape[1], fy=original_resolution[0]/restored_img.shape[0], interpolation=cv2.INTER_LINEAR)
self.face_helper.clean_all()
if shared.opts.face_restoration_unload:
self.send_model_to(devices.cpu)
return restored_img
global codeformer
codeformer = FaceRestorerCodeFormer(dirname)
shared.face_restorers.append(codeformer)
except Exception:
errors.report("Error setting up CodeFormer", exc_info=True)
# sys.path = stored_sys_path
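A hedged sketch of how the rewritten restorer is driven once setup succeeds (model directory and weight here are assumptions; load_net downloads the checkpoint on first use):

import numpy as np
from modules import codeformer_model

codeformer_model.setup_model("models/Codeformer")
if codeformer_model.codeformer is not None:          # None if setup failed
    image = np.zeros((512, 512, 3), dtype=np.uint8)  # placeholder BGR image
    restored = codeformer_model.codeformer.restore(image, w=0.5)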
+79
@@ -0,0 +1,79 @@
import os
from modules import modelloader, errors
from modules.shared import cmd_opts, opts
from modules.upscaler import Upscaler, UpscalerData
from modules.upscaler_utils import upscale_with_model
class UpscalerDAT(Upscaler):
def __init__(self, user_path):
self.name = "DAT"
self.user_path = user_path
self.scalers = []
super().__init__()
for file in self.find_models(ext_filter=[".pt", ".pth"]):
name = modelloader.friendly_name(file)
scaler_data = UpscalerData(name, file, upscaler=self, scale=None)
self.scalers.append(scaler_data)
for model in get_dat_models(self):
if model.name in opts.dat_enabled_models:
self.scalers.append(model)
def do_upscale(self, img, path):
try:
info = self.load_model(path)
except Exception:
errors.report(f"Unable to load DAT model {path}", exc_info=True)
return img
model_descriptor = modelloader.load_spandrel_model(
info.local_data_path,
device=self.device,
prefer_half=(not cmd_opts.no_half and not cmd_opts.upcast_sampling),
expected_architecture="DAT",
)
return upscale_with_model(
model_descriptor,
img,
tile_size=opts.DAT_tile,
tile_overlap=opts.DAT_tile_overlap,
)
def load_model(self, path):
for scaler in self.scalers:
if scaler.data_path == path:
if scaler.local_data_path.startswith("http"):
scaler.local_data_path = modelloader.load_file_from_url(
scaler.data_path,
model_dir=self.model_download_path,
)
if not os.path.exists(scaler.local_data_path):
raise FileNotFoundError(f"DAT data missing: {scaler.local_data_path}")
return scaler
raise ValueError(f"Unable to find model info: {path}")
def get_dat_models(scaler):
return [
UpscalerData(
name="DAT x2",
path="https://github.com/n0kovo/dat_upscaler_models/raw/main/DAT/DAT_x2.pth",
scale=2,
upscaler=scaler,
),
UpscalerData(
name="DAT x3",
path="https://github.com/n0kovo/dat_upscaler_models/raw/main/DAT/DAT_x3.pth",
scale=3,
upscaler=scaler,
),
UpscalerData(
name="DAT x4",
path="https://github.com/n0kovo/dat_upscaler_models/raw/main/DAT/DAT_x4.pth",
scale=4,
upscaler=scaler,
),
]
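A hedged sketch confirming the bundled DAT scalers are exposed alongside the other upscalers (default address assumed):

import requests

upscalers = requests.get("http://127.0.0.1:7860/sdapi/v1/upscalers", timeout=30).json()
print([u["name"] for u in upscalers if u["name"].startswith("DAT")])
# expected: ['DAT x2', 'DAT x3', 'DAT x4'] when enabled in settings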
+110 -6
@@ -3,7 +3,7 @@ import contextlib
from functools import lru_cache
import torch
from modules import errors, shared
from modules import errors, shared, npu_specific
if sys.platform == "darwin":
from modules import mac_specific
@@ -23,6 +23,23 @@ def has_mps() -> bool:
return mac_specific.has_mps
def cuda_no_autocast(device_id=None) -> bool:
if device_id is None:
device_id = get_cuda_device_id()
return (
torch.cuda.get_device_capability(device_id) == (7, 5)
and torch.cuda.get_device_name(device_id).startswith("NVIDIA GeForce GTX 16")
)
def get_cuda_device_id():
return (
int(shared.cmd_opts.device_id)
if shared.cmd_opts.device_id is not None and shared.cmd_opts.device_id.isdigit()
else 0
) or torch.cuda.current_device()
def get_cuda_device_string():
if shared.cmd_opts.device_id is not None:
return f"cuda:{shared.cmd_opts.device_id}"
@@ -40,6 +57,9 @@ def get_optimal_device_name():
if has_xpu():
return xpu_specific.get_xpu_device_string()
if npu_specific.has_npu:
return npu_specific.get_npu_device_string()
return "cpu"
@@ -67,14 +87,23 @@ def torch_gc():
if has_xpu():
xpu_specific.torch_xpu_gc()
if npu_specific.has_npu:
torch_npu_set_device()
npu_specific.torch_npu_gc()
def torch_npu_set_device():
# Workaround for a bug in torch_npu; revert after it is fixed, @see https://gitee.com/ascend/pytorch/issues/I8KECW?from=project-issue
if npu_specific.has_npu:
torch.npu.set_device(0)
def enable_tf32():
if torch.cuda.is_available():
# enabling benchmark option seems to enable a range of cards to do fp16 when they otherwise can't
# see https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4407
device_id = (int(shared.cmd_opts.device_id) if shared.cmd_opts.device_id is not None and shared.cmd_opts.device_id.isdigit() else 0) or torch.cuda.current_device()
if torch.cuda.get_device_capability(device_id) == (7, 5) and torch.cuda.get_device_name(device_id).startswith("NVIDIA GeForce GTX 16"):
if cuda_no_autocast():
torch.backends.cudnn.benchmark = True
torch.backends.cuda.matmul.allow_tf32 = True
@@ -84,6 +113,7 @@ def enable_tf32():
errors.run(enable_tf32, "Enabling TF32")
cpu: torch.device = torch.device("cpu")
fp8: bool = False
device: torch.device = None
device_interrogate: torch.device = None
device_gfpgan: torch.device = None
@@ -92,6 +122,7 @@ device_codeformer: torch.device = None
dtype: torch.dtype = torch.float16
dtype_vae: torch.dtype = torch.float16
dtype_unet: torch.dtype = torch.float16
dtype_inference: torch.dtype = torch.float16
unet_needs_upcast = False
@@ -104,15 +135,89 @@ def cond_cast_float(input):
nv_rng = None
patch_module_list = [
torch.nn.Linear,
torch.nn.Conv2d,
torch.nn.MultiheadAttention,
torch.nn.GroupNorm,
torch.nn.LayerNorm,
]
def manual_cast_forward(target_dtype):
def forward_wrapper(self, *args, **kwargs):
if any(
isinstance(arg, torch.Tensor) and arg.dtype != target_dtype
for arg in args
):
args = [arg.to(target_dtype) if isinstance(arg, torch.Tensor) else arg for arg in args]
kwargs = {k: v.to(target_dtype) if isinstance(v, torch.Tensor) else v for k, v in kwargs.items()}
org_dtype = target_dtype
for param in self.parameters():
if param.dtype != target_dtype:
org_dtype = param.dtype
break
if org_dtype != target_dtype:
self.to(target_dtype)
result = self.org_forward(*args, **kwargs)
if org_dtype != target_dtype:
self.to(org_dtype)
if target_dtype != dtype_inference:
if isinstance(result, tuple):
result = tuple(
i.to(dtype_inference)
if isinstance(i, torch.Tensor)
else i
for i in result
)
elif isinstance(result, torch.Tensor):
result = result.to(dtype_inference)
return result
return forward_wrapper
@contextlib.contextmanager
def manual_cast(target_dtype):
applied = False
for module_type in patch_module_list:
if hasattr(module_type, "org_forward"):
continue
applied = True
org_forward = module_type.forward
if module_type == torch.nn.MultiheadAttention:
module_type.forward = manual_cast_forward(torch.float32)
else:
module_type.forward = manual_cast_forward(target_dtype)
module_type.org_forward = org_forward
try:
yield None
finally:
if applied:
for module_type in patch_module_list:
if hasattr(module_type, "org_forward"):
module_type.forward = module_type.org_forward
delattr(module_type, "org_forward")
def autocast(disable=False):
if disable:
return contextlib.nullcontext()
if dtype == torch.float32 or shared.cmd_opts.precision == "full":
if fp8 and device==cpu:
return torch.autocast("cpu", dtype=torch.bfloat16, enabled=True)
if fp8 and dtype_inference == torch.float32:
return manual_cast(dtype)
if dtype == torch.float32 or dtype_inference == torch.float32:
return contextlib.nullcontext()
if has_xpu() or has_mps() or cuda_no_autocast():
return manual_cast(dtype)
return torch.autocast("cuda")
@@ -154,7 +259,7 @@ def test_for_nans(x, where):
def first_time_calculation():
"""
just do any calculation with pytorch layers - the first time this is done it allocates about 700MB of memory and
spends about 2.7 seconds doing that, at least wih NVidia.
spends about 2.7 seconds doing that, at least with NVidia.
"""
x = torch.zeros((1, 1)).to(device, dtype)
@@ -164,4 +269,3 @@ def first_time_calculation():
x = torch.zeros((1, 1, 3, 3)).to(device, dtype)
conv2d = torch.nn.Conv2d(1, 1, (3, 3)).to(device, dtype)
conv2d(x)
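Note: manual_cast above works by temporarily monkey-patching forward on a list of layer types and restoring the originals on exit. A minimal standalone sketch of the same pattern (toy helper, a single layer type, not the webui API):

import contextlib
import torch

@contextlib.contextmanager
def cast_inputs(module_type, target_dtype):
    org_forward = module_type.forward

    def wrapper(self, x):
        # cast tensor inputs to the target dtype before calling the original forward
        return org_forward(self, x.to(target_dtype))

    module_type.forward = wrapper
    try:
        yield
    finally:
        module_type.forward = org_forward  # always restore the original

layer = torch.nn.Linear(4, 4).double()
with cast_inputs(torch.nn.Linear, torch.float64):
    out = layer(torch.zeros(1, 4))  # float32 input is cast before the matmul
assert out.dtype == torch.float64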
+2 -2
@@ -107,8 +107,8 @@ def check_versions():
import torch
import gradio
expected_torch_version = "2.0.0"
expected_xformers_version = "0.0.20"
expected_torch_version = "2.1.2"
expected_xformers_version = "0.0.23.post1"
expected_gradio_version = "3.41.2"
if version.parse(torch.__version__) < version.parse(expected_torch_version):
+16 -183
@@ -1,121 +1,7 @@
import sys
import numpy as np
import torch
from PIL import Image
import modules.esrgan_model_arch as arch
from modules import modelloader, images, devices
from modules import modelloader, devices, errors
from modules.shared import opts
from modules.upscaler import Upscaler, UpscalerData
def mod2normal(state_dict):
# this code is copied from https://github.com/victorca25/iNNfer
if 'conv_first.weight' in state_dict:
crt_net = {}
items = list(state_dict)
crt_net['model.0.weight'] = state_dict['conv_first.weight']
crt_net['model.0.bias'] = state_dict['conv_first.bias']
for k in items.copy():
if 'RDB' in k:
ori_k = k.replace('RRDB_trunk.', 'model.1.sub.')
if '.weight' in k:
ori_k = ori_k.replace('.weight', '.0.weight')
elif '.bias' in k:
ori_k = ori_k.replace('.bias', '.0.bias')
crt_net[ori_k] = state_dict[k]
items.remove(k)
crt_net['model.1.sub.23.weight'] = state_dict['trunk_conv.weight']
crt_net['model.1.sub.23.bias'] = state_dict['trunk_conv.bias']
crt_net['model.3.weight'] = state_dict['upconv1.weight']
crt_net['model.3.bias'] = state_dict['upconv1.bias']
crt_net['model.6.weight'] = state_dict['upconv2.weight']
crt_net['model.6.bias'] = state_dict['upconv2.bias']
crt_net['model.8.weight'] = state_dict['HRconv.weight']
crt_net['model.8.bias'] = state_dict['HRconv.bias']
crt_net['model.10.weight'] = state_dict['conv_last.weight']
crt_net['model.10.bias'] = state_dict['conv_last.bias']
state_dict = crt_net
return state_dict
def resrgan2normal(state_dict, nb=23):
# this code is copied from https://github.com/victorca25/iNNfer
if "conv_first.weight" in state_dict and "body.0.rdb1.conv1.weight" in state_dict:
re8x = 0
crt_net = {}
items = list(state_dict)
crt_net['model.0.weight'] = state_dict['conv_first.weight']
crt_net['model.0.bias'] = state_dict['conv_first.bias']
for k in items.copy():
if "rdb" in k:
ori_k = k.replace('body.', 'model.1.sub.')
ori_k = ori_k.replace('.rdb', '.RDB')
if '.weight' in k:
ori_k = ori_k.replace('.weight', '.0.weight')
elif '.bias' in k:
ori_k = ori_k.replace('.bias', '.0.bias')
crt_net[ori_k] = state_dict[k]
items.remove(k)
crt_net[f'model.1.sub.{nb}.weight'] = state_dict['conv_body.weight']
crt_net[f'model.1.sub.{nb}.bias'] = state_dict['conv_body.bias']
crt_net['model.3.weight'] = state_dict['conv_up1.weight']
crt_net['model.3.bias'] = state_dict['conv_up1.bias']
crt_net['model.6.weight'] = state_dict['conv_up2.weight']
crt_net['model.6.bias'] = state_dict['conv_up2.bias']
if 'conv_up3.weight' in state_dict:
# modification supporting: https://github.com/ai-forever/Real-ESRGAN/blob/main/RealESRGAN/rrdbnet_arch.py
re8x = 3
crt_net['model.9.weight'] = state_dict['conv_up3.weight']
crt_net['model.9.bias'] = state_dict['conv_up3.bias']
crt_net[f'model.{8+re8x}.weight'] = state_dict['conv_hr.weight']
crt_net[f'model.{8+re8x}.bias'] = state_dict['conv_hr.bias']
crt_net[f'model.{10+re8x}.weight'] = state_dict['conv_last.weight']
crt_net[f'model.{10+re8x}.bias'] = state_dict['conv_last.bias']
state_dict = crt_net
return state_dict
def infer_params(state_dict):
# this code is copied from https://github.com/victorca25/iNNfer
scale2x = 0
scalemin = 6
n_uplayer = 0
plus = False
for block in list(state_dict):
parts = block.split(".")
n_parts = len(parts)
if n_parts == 5 and parts[2] == "sub":
nb = int(parts[3])
elif n_parts == 3:
part_num = int(parts[1])
if (part_num > scalemin
and parts[0] == "model"
and parts[2] == "weight"):
scale2x += 1
if part_num > n_uplayer:
n_uplayer = part_num
out_nc = state_dict[block].shape[0]
if not plus and "conv1x1" in block:
plus = True
nf = state_dict["model.0.weight"].shape[0]
in_nc = state_dict["model.0.weight"].shape[1]
out_nc = out_nc
scale = 2 ** scale2x
return in_nc, out_nc, nf, nb, plus, scale
from modules.upscaler_utils import upscale_with_model
class UpscalerESRGAN(Upscaler):
@@ -143,12 +29,11 @@ class UpscalerESRGAN(Upscaler):
def do_upscale(self, img, selected_model):
try:
model = self.load_model(selected_model)
except Exception as e:
print(f"Unable to load ESRGAN model {selected_model}: {e}", file=sys.stderr)
except Exception:
errors.report(f"Unable to load ESRGAN model {selected_model}", exc_info=True)
return img
model.to(devices.device_esrgan)
img = esrgan_upscale(model, img)
return img
return esrgan_upscale(model, img)
def load_model(self, path: str):
if path.startswith("http"):
@@ -161,69 +46,17 @@ class UpscalerESRGAN(Upscaler):
else:
filename = path
state_dict = torch.load(filename, map_location='cpu' if devices.device_esrgan.type == 'mps' else None)
if "params_ema" in state_dict:
state_dict = state_dict["params_ema"]
elif "params" in state_dict:
state_dict = state_dict["params"]
num_conv = 16 if "realesr-animevideov3" in filename else 32
model = arch.SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=num_conv, upscale=4, act_type='prelu')
model.load_state_dict(state_dict)
model.eval()
return model
if "body.0.rdb1.conv1.weight" in state_dict and "conv_first.weight" in state_dict:
nb = 6 if "RealESRGAN_x4plus_anime_6B" in filename else 23
state_dict = resrgan2normal(state_dict, nb)
elif "conv_first.weight" in state_dict:
state_dict = mod2normal(state_dict)
elif "model.0.weight" not in state_dict:
raise Exception("The file is not a recognized ESRGAN model.")
in_nc, out_nc, nf, nb, plus, mscale = infer_params(state_dict)
model = arch.RRDBNet(in_nc=in_nc, out_nc=out_nc, nf=nf, nb=nb, upscale=mscale, plus=plus)
model.load_state_dict(state_dict)
model.eval()
return model
def upscale_without_tiling(model, img):
img = np.array(img)
img = img[:, :, ::-1]
img = np.ascontiguousarray(np.transpose(img, (2, 0, 1))) / 255
img = torch.from_numpy(img).float()
img = img.unsqueeze(0).to(devices.device_esrgan)
with torch.no_grad():
output = model(img)
output = output.squeeze().float().cpu().clamp_(0, 1).numpy()
output = 255. * np.moveaxis(output, 0, 2)
output = output.astype(np.uint8)
output = output[:, :, ::-1]
return Image.fromarray(output, 'RGB')
return modelloader.load_spandrel_model(
filename,
device=('cpu' if devices.device_esrgan.type == 'mps' else None),
expected_architecture='ESRGAN',
)
def esrgan_upscale(model, img):
if opts.ESRGAN_tile == 0:
return upscale_without_tiling(model, img)
grid = images.split_grid(img, opts.ESRGAN_tile, opts.ESRGAN_tile, opts.ESRGAN_tile_overlap)
newtiles = []
scale_factor = 1
for y, h, row in grid.tiles:
newrow = []
for tiledata in row:
x, w, tile = tiledata
output = upscale_without_tiling(model, tile)
scale_factor = output.width // tile.width
newrow.append([x * scale_factor, w * scale_factor, output])
newtiles.append([y * scale_factor, h * scale_factor, newrow])
newgrid = images.Grid(newtiles, grid.tile_w * scale_factor, grid.tile_h * scale_factor, grid.image_w * scale_factor, grid.image_h * scale_factor, grid.overlap * scale_factor)
output = images.combine_grid(newgrid)
return output
return upscale_with_model(
model,
img,
tile_size=opts.ESRGAN_tile,
tile_overlap=opts.ESRGAN_tile_overlap,
)
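Note: after this rewrite the ESRGAN upscaler delegates both model loading and tiling to shared helpers. A hedged usage sketch based only on the call sites visible above (the path and tile numbers are illustrative):

from PIL import Image
from modules import modelloader
from modules.upscaler_utils import upscale_with_model

model = modelloader.load_spandrel_model(
    "models/ESRGAN/4x_model.pth",
    device=None,  # None lets the loader pick, mirroring the mps special case above
    expected_architecture="ESRGAN",
)
img = Image.open("input.png")
result = upscale_with_model(model, img, tile_size=192, tile_overlap=8)
result.save("output.png")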
-465
@@ -1,465 +0,0 @@
# this file is adapted from https://github.com/victorca25/iNNfer
from collections import OrderedDict
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
####################
# RRDBNet Generator
####################
class RRDBNet(nn.Module):
def __init__(self, in_nc, out_nc, nf, nb, nr=3, gc=32, upscale=4, norm_type=None,
act_type='leakyrelu', mode='CNA', upsample_mode='upconv', convtype='Conv2D',
finalact=None, gaussian_noise=False, plus=False):
super(RRDBNet, self).__init__()
n_upscale = int(math.log(upscale, 2))
if upscale == 3:
n_upscale = 1
self.resrgan_scale = 0
if in_nc % 16 == 0:
self.resrgan_scale = 1
elif in_nc != 4 and in_nc % 4 == 0:
self.resrgan_scale = 2
fea_conv = conv_block(in_nc, nf, kernel_size=3, norm_type=None, act_type=None, convtype=convtype)
rb_blocks = [RRDB(nf, nr, kernel_size=3, gc=32, stride=1, bias=1, pad_type='zero',
norm_type=norm_type, act_type=act_type, mode='CNA', convtype=convtype,
gaussian_noise=gaussian_noise, plus=plus) for _ in range(nb)]
LR_conv = conv_block(nf, nf, kernel_size=3, norm_type=norm_type, act_type=None, mode=mode, convtype=convtype)
if upsample_mode == 'upconv':
upsample_block = upconv_block
elif upsample_mode == 'pixelshuffle':
upsample_block = pixelshuffle_block
else:
raise NotImplementedError(f'upsample mode [{upsample_mode}] is not found')
if upscale == 3:
upsampler = upsample_block(nf, nf, 3, act_type=act_type, convtype=convtype)
else:
upsampler = [upsample_block(nf, nf, act_type=act_type, convtype=convtype) for _ in range(n_upscale)]
HR_conv0 = conv_block(nf, nf, kernel_size=3, norm_type=None, act_type=act_type, convtype=convtype)
HR_conv1 = conv_block(nf, out_nc, kernel_size=3, norm_type=None, act_type=None, convtype=convtype)
outact = act(finalact) if finalact else None
self.model = sequential(fea_conv, ShortcutBlock(sequential(*rb_blocks, LR_conv)),
*upsampler, HR_conv0, HR_conv1, outact)
def forward(self, x, outm=None):
if self.resrgan_scale == 1:
feat = pixel_unshuffle(x, scale=4)
elif self.resrgan_scale == 2:
feat = pixel_unshuffle(x, scale=2)
else:
feat = x
return self.model(feat)
class RRDB(nn.Module):
"""
Residual in Residual Dense Block
(ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks)
"""
def __init__(self, nf, nr=3, kernel_size=3, gc=32, stride=1, bias=1, pad_type='zero',
norm_type=None, act_type='leakyrelu', mode='CNA', convtype='Conv2D',
spectral_norm=False, gaussian_noise=False, plus=False):
super(RRDB, self).__init__()
# This is for backwards compatibility with existing models
if nr == 3:
self.RDB1 = ResidualDenseBlock_5C(nf, kernel_size, gc, stride, bias, pad_type,
norm_type, act_type, mode, convtype, spectral_norm=spectral_norm,
gaussian_noise=gaussian_noise, plus=plus)
self.RDB2 = ResidualDenseBlock_5C(nf, kernel_size, gc, stride, bias, pad_type,
norm_type, act_type, mode, convtype, spectral_norm=spectral_norm,
gaussian_noise=gaussian_noise, plus=plus)
self.RDB3 = ResidualDenseBlock_5C(nf, kernel_size, gc, stride, bias, pad_type,
norm_type, act_type, mode, convtype, spectral_norm=spectral_norm,
gaussian_noise=gaussian_noise, plus=plus)
else:
RDB_list = [ResidualDenseBlock_5C(nf, kernel_size, gc, stride, bias, pad_type,
norm_type, act_type, mode, convtype, spectral_norm=spectral_norm,
gaussian_noise=gaussian_noise, plus=plus) for _ in range(nr)]
self.RDBs = nn.Sequential(*RDB_list)
def forward(self, x):
if hasattr(self, 'RDB1'):
out = self.RDB1(x)
out = self.RDB2(out)
out = self.RDB3(out)
else:
out = self.RDBs(x)
return out * 0.2 + x
class ResidualDenseBlock_5C(nn.Module):
"""
Residual Dense Block
The core module of paper: (Residual Dense Network for Image Super-Resolution, CVPR 18)
Modified options that can be used:
- "Partial Convolution based Padding" arXiv:1811.11718
- "Spectral normalization" arXiv:1802.05957
- "ICASSP 2020 - ESRGAN+ : Further Improving ESRGAN" N. C.
{Rakotonirina} and A. {Rasoanaivo}
"""
def __init__(self, nf=64, kernel_size=3, gc=32, stride=1, bias=1, pad_type='zero',
norm_type=None, act_type='leakyrelu', mode='CNA', convtype='Conv2D',
spectral_norm=False, gaussian_noise=False, plus=False):
super(ResidualDenseBlock_5C, self).__init__()
self.noise = GaussianNoise() if gaussian_noise else None
self.conv1x1 = conv1x1(nf, gc) if plus else None
self.conv1 = conv_block(nf, gc, kernel_size, stride, bias=bias, pad_type=pad_type,
norm_type=norm_type, act_type=act_type, mode=mode, convtype=convtype,
spectral_norm=spectral_norm)
self.conv2 = conv_block(nf+gc, gc, kernel_size, stride, bias=bias, pad_type=pad_type,
norm_type=norm_type, act_type=act_type, mode=mode, convtype=convtype,
spectral_norm=spectral_norm)
self.conv3 = conv_block(nf+2*gc, gc, kernel_size, stride, bias=bias, pad_type=pad_type,
norm_type=norm_type, act_type=act_type, mode=mode, convtype=convtype,
spectral_norm=spectral_norm)
self.conv4 = conv_block(nf+3*gc, gc, kernel_size, stride, bias=bias, pad_type=pad_type,
norm_type=norm_type, act_type=act_type, mode=mode, convtype=convtype,
spectral_norm=spectral_norm)
if mode == 'CNA':
last_act = None
else:
last_act = act_type
self.conv5 = conv_block(nf+4*gc, nf, 3, stride, bias=bias, pad_type=pad_type,
norm_type=norm_type, act_type=last_act, mode=mode, convtype=convtype,
spectral_norm=spectral_norm)
def forward(self, x):
x1 = self.conv1(x)
x2 = self.conv2(torch.cat((x, x1), 1))
if self.conv1x1:
x2 = x2 + self.conv1x1(x)
x3 = self.conv3(torch.cat((x, x1, x2), 1))
x4 = self.conv4(torch.cat((x, x1, x2, x3), 1))
if self.conv1x1:
x4 = x4 + x2
x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1))
if self.noise:
return self.noise(x5.mul(0.2) + x)
else:
return x5 * 0.2 + x
####################
# ESRGANplus
####################
class GaussianNoise(nn.Module):
def __init__(self, sigma=0.1, is_relative_detach=False):
super().__init__()
self.sigma = sigma
self.is_relative_detach = is_relative_detach
self.noise = torch.tensor(0, dtype=torch.float)
def forward(self, x):
if self.training and self.sigma != 0:
self.noise = self.noise.to(x.device)
scale = self.sigma * x.detach() if self.is_relative_detach else self.sigma * x
sampled_noise = self.noise.repeat(*x.size()).normal_() * scale
x = x + sampled_noise
return x
def conv1x1(in_planes, out_planes, stride=1):
return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
####################
# SRVGGNetCompact
####################
class SRVGGNetCompact(nn.Module):
"""A compact VGG-style network structure for super-resolution.
This class is copied from https://github.com/xinntao/Real-ESRGAN
"""
def __init__(self, num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu'):
super(SRVGGNetCompact, self).__init__()
self.num_in_ch = num_in_ch
self.num_out_ch = num_out_ch
self.num_feat = num_feat
self.num_conv = num_conv
self.upscale = upscale
self.act_type = act_type
self.body = nn.ModuleList()
# the first conv
self.body.append(nn.Conv2d(num_in_ch, num_feat, 3, 1, 1))
# the first activation
if act_type == 'relu':
activation = nn.ReLU(inplace=True)
elif act_type == 'prelu':
activation = nn.PReLU(num_parameters=num_feat)
elif act_type == 'leakyrelu':
activation = nn.LeakyReLU(negative_slope=0.1, inplace=True)
self.body.append(activation)
# the body structure
for _ in range(num_conv):
self.body.append(nn.Conv2d(num_feat, num_feat, 3, 1, 1))
# activation
if act_type == 'relu':
activation = nn.ReLU(inplace=True)
elif act_type == 'prelu':
activation = nn.PReLU(num_parameters=num_feat)
elif act_type == 'leakyrelu':
activation = nn.LeakyReLU(negative_slope=0.1, inplace=True)
self.body.append(activation)
# the last conv
self.body.append(nn.Conv2d(num_feat, num_out_ch * upscale * upscale, 3, 1, 1))
# upsample
self.upsampler = nn.PixelShuffle(upscale)
def forward(self, x):
out = x
for i in range(0, len(self.body)):
out = self.body[i](out)
out = self.upsampler(out)
# add the nearest upsampled image, so that the network learns the residual
base = F.interpolate(x, scale_factor=self.upscale, mode='nearest')
out += base
return out
####################
# Upsampler
####################
class Upsample(nn.Module):
r"""Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data.
The input data is assumed to be of the form
`minibatch x channels x [optional depth] x [optional height] x width`.
"""
def __init__(self, size=None, scale_factor=None, mode="nearest", align_corners=None):
super(Upsample, self).__init__()
if isinstance(scale_factor, tuple):
self.scale_factor = tuple(float(factor) for factor in scale_factor)
else:
self.scale_factor = float(scale_factor) if scale_factor else None
self.mode = mode
self.size = size
self.align_corners = align_corners
def forward(self, x):
return nn.functional.interpolate(x, size=self.size, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners)
def extra_repr(self):
if self.scale_factor is not None:
info = f'scale_factor={self.scale_factor}'
else:
info = f'size={self.size}'
info += f', mode={self.mode}'
return info
def pixel_unshuffle(x, scale):
""" Pixel unshuffle.
Args:
x (Tensor): Input feature with shape (b, c, hh, hw).
scale (int): Downsample ratio.
Returns:
Tensor: the pixel unshuffled feature.
"""
b, c, hh, hw = x.size()
out_channel = c * (scale**2)
assert hh % scale == 0 and hw % scale == 0
h = hh // scale
w = hw // scale
x_view = x.view(b, c, h, scale, w, scale)
return x_view.permute(0, 1, 3, 5, 2, 4).reshape(b, out_channel, h, w)
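Note: to make the shape contract concrete, with scale 2 a (b, c, hh, hw) input becomes (b, c*4, hh/2, hw/2). A quick check, assuming the pixel_unshuffle above is in scope:

import torch

x = torch.arange(16.0).view(1, 1, 4, 4)
y = pixel_unshuffle(x, scale=2)
assert y.shape == (1, 4, 2, 2)  # channels * scale**2, spatial dims / scale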
def pixelshuffle_block(in_nc, out_nc, upscale_factor=2, kernel_size=3, stride=1, bias=True,
pad_type='zero', norm_type=None, act_type='relu', convtype='Conv2D'):
"""
Pixel shuffle layer
(Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional
Neural Network, CVPR17)
"""
conv = conv_block(in_nc, out_nc * (upscale_factor ** 2), kernel_size, stride, bias=bias,
pad_type=pad_type, norm_type=None, act_type=None, convtype=convtype)
pixel_shuffle = nn.PixelShuffle(upscale_factor)
n = norm(norm_type, out_nc) if norm_type else None
a = act(act_type) if act_type else None
return sequential(conv, pixel_shuffle, n, a)
def upconv_block(in_nc, out_nc, upscale_factor=2, kernel_size=3, stride=1, bias=True,
pad_type='zero', norm_type=None, act_type='relu', mode='nearest', convtype='Conv2D'):
""" Upconv layer """
upscale_factor = (1, upscale_factor, upscale_factor) if convtype == 'Conv3D' else upscale_factor
upsample = Upsample(scale_factor=upscale_factor, mode=mode)
conv = conv_block(in_nc, out_nc, kernel_size, stride, bias=bias,
pad_type=pad_type, norm_type=norm_type, act_type=act_type, convtype=convtype)
return sequential(upsample, conv)
####################
# Basic blocks
####################
def make_layer(basic_block, num_basic_block, **kwarg):
"""Make layers by stacking the same blocks.
Args:
basic_block (nn.module): nn.module class for basic block. (block)
num_basic_block (int): number of blocks. (n_layers)
Returns:
nn.Sequential: Stacked blocks in nn.Sequential.
"""
layers = []
for _ in range(num_basic_block):
layers.append(basic_block(**kwarg))
return nn.Sequential(*layers)
def act(act_type, inplace=True, neg_slope=0.2, n_prelu=1, beta=1.0):
""" activation helper """
act_type = act_type.lower()
if act_type == 'relu':
layer = nn.ReLU(inplace)
elif act_type in ('leakyrelu', 'lrelu'):
layer = nn.LeakyReLU(neg_slope, inplace)
elif act_type == 'prelu':
layer = nn.PReLU(num_parameters=n_prelu, init=neg_slope)
elif act_type == 'tanh': # [-1, 1] range output
layer = nn.Tanh()
elif act_type == 'sigmoid': # [0, 1] range output
layer = nn.Sigmoid()
else:
raise NotImplementedError(f'activation layer [{act_type}] is not found')
return layer
class Identity(nn.Module):
def __init__(self, *kwargs):
super(Identity, self).__init__()
def forward(self, x, *kwargs):
return x
def norm(norm_type, nc):
""" Return a normalization layer """
norm_type = norm_type.lower()
if norm_type == 'batch':
layer = nn.BatchNorm2d(nc, affine=True)
elif norm_type == 'instance':
layer = nn.InstanceNorm2d(nc, affine=False)
elif norm_type == 'none':
def norm_layer(x): return Identity()
else:
raise NotImplementedError(f'normalization layer [{norm_type}] is not found')
return layer
def pad(pad_type, padding):
""" padding layer helper """
pad_type = pad_type.lower()
if padding == 0:
return None
if pad_type == 'reflect':
layer = nn.ReflectionPad2d(padding)
elif pad_type == 'replicate':
layer = nn.ReplicationPad2d(padding)
elif pad_type == 'zero':
layer = nn.ZeroPad2d(padding)
else:
raise NotImplementedError(f'padding layer [{pad_type}] is not implemented')
return layer
def get_valid_padding(kernel_size, dilation):
kernel_size = kernel_size + (kernel_size - 1) * (dilation - 1)
padding = (kernel_size - 1) // 2
return padding
class ShortcutBlock(nn.Module):
""" Elementwise sum the output of a submodule to its input """
def __init__(self, submodule):
super(ShortcutBlock, self).__init__()
self.sub = submodule
def forward(self, x):
output = x + self.sub(x)
return output
def __repr__(self):
return 'Identity + \n|' + self.sub.__repr__().replace('\n', '\n|')
def sequential(*args):
""" Flatten Sequential. It unwraps nn.Sequential. """
if len(args) == 1:
if isinstance(args[0], OrderedDict):
raise NotImplementedError('sequential does not support OrderedDict input.')
return args[0] # No sequential is needed.
modules = []
for module in args:
if isinstance(module, nn.Sequential):
for submodule in module.children():
modules.append(submodule)
elif isinstance(module, nn.Module):
modules.append(module)
return nn.Sequential(*modules)
def conv_block(in_nc, out_nc, kernel_size, stride=1, dilation=1, groups=1, bias=True,
pad_type='zero', norm_type=None, act_type='relu', mode='CNA', convtype='Conv2D',
spectral_norm=False):
""" Conv layer with padding, normalization, activation """
assert mode in ['CNA', 'NAC', 'CNAC'], f'Wrong conv mode [{mode}]'
padding = get_valid_padding(kernel_size, dilation)
p = pad(pad_type, padding) if pad_type and pad_type != 'zero' else None
padding = padding if pad_type == 'zero' else 0
if convtype=='PartialConv2D':
from torchvision.ops import PartialConv2d # this is definitely not going to work, but PartialConv2d doesn't work anyway and this shuts up static analyzer
c = PartialConv2d(in_nc, out_nc, kernel_size=kernel_size, stride=stride, padding=padding,
dilation=dilation, bias=bias, groups=groups)
elif convtype=='DeformConv2D':
from torchvision.ops import DeformConv2d # not tested
c = DeformConv2d(in_nc, out_nc, kernel_size=kernel_size, stride=stride, padding=padding,
dilation=dilation, bias=bias, groups=groups)
elif convtype=='Conv3D':
c = nn.Conv3d(in_nc, out_nc, kernel_size=kernel_size, stride=stride, padding=padding,
dilation=dilation, bias=bias, groups=groups)
else:
c = nn.Conv2d(in_nc, out_nc, kernel_size=kernel_size, stride=stride, padding=padding,
dilation=dilation, bias=bias, groups=groups)
if spectral_norm:
c = nn.utils.spectral_norm(c)
a = act(act_type) if act_type else None
if 'CNA' in mode:
n = norm(norm_type, out_nc) if norm_type else None
return sequential(p, c, n, a)
elif mode == 'NAC':
if norm_type is None and act_type is not None:
a = act(act_type, inplace=False)
n = norm(norm_type, in_nc) if norm_type else None
return sequential(n, a, p, c)
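Note: in 'CNA' mode conv_block assembles Conv -> Norm -> Act, with zero-padding folded into the conv itself. A quick check, assuming the helpers above are in scope:

block = conv_block(3, 64, kernel_size=3, norm_type='batch', act_type='leakyrelu', mode='CNA')
print(block)  # Sequential(Conv2d(3, 64, ...), BatchNorm2d(64), LeakyReLU(0.2, inplace=True))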
+69 -10
@@ -1,6 +1,7 @@
from __future__ import annotations
import configparser
import dataclasses
import os
import threading
import re
@@ -9,6 +10,10 @@ from modules import shared, errors, cache, scripts
from modules.gitpython_hack import Repo
from modules.paths_internal import extensions_dir, extensions_builtin_dir, script_path # noqa: F401
extensions: list[Extension] = []
extension_paths: dict[str, Extension] = {}
loaded_extensions: dict[str, Exception] = {}
os.makedirs(extensions_dir, exist_ok=True)
@@ -22,6 +27,13 @@ def active():
return [x for x in extensions if x.enabled]
@dataclasses.dataclass
class CallbackOrderInfo:
name: str
before: list
after: list
class ExtensionMetadata:
filename = "metadata.ini"
config: configparser.ConfigParser
@@ -32,16 +44,17 @@ class ExtensionMetadata:
self.config = configparser.ConfigParser()
filepath = os.path.join(path, self.filename)
if os.path.isfile(filepath):
try:
self.config.read(filepath)
except Exception:
errors.report(f"Error reading {self.filename} for extension {canonical_name}.", exc_info=True)
# `self.config.read()` will quietly swallow OSErrors (which FileNotFoundError is),
# so no need to check whether the file exists beforehand.
try:
self.config.read(filepath)
except Exception:
errors.report(f"Error reading {self.filename} for extension {canonical_name}.", exc_info=True)
self.canonical_name = self.config.get("Extension", "Name", fallback=canonical_name)
self.canonical_name = canonical_name.lower().strip()
self.requires = self.get_script_requirements("Requires", "Extension")
self.requires = None
def get_script_requirements(self, field, section, extra_section=None):
"""reads a list of requirements from the config; field is the name of the field in the ini file,
@@ -53,7 +66,15 @@ class ExtensionMetadata:
if extra_section:
x = x + ', ' + self.config.get(extra_section, field, fallback='')
return self.parse_list(x.lower())
listed_requirements = self.parse_list(x.lower())
res = []
for requirement in listed_requirements:
loaded_requirements = (x for x in requirement.split("|") if x in loaded_extensions)
relevant_requirement = next(loaded_requirements, requirement)
res.append(relevant_requirement)
return res
def parse_list(self, text):
"""converts a line from config ("ext1 ext2, ext3 ") into a python list (["ext1", "ext2", "ext3"])"""
@@ -64,6 +85,22 @@ class ExtensionMetadata:
# both "," and " " are accepted as separator
return [x for x in re.split(r"[,\s]+", text.strip()) if x]
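Note: the separator behavior is easy to verify in isolation (parse_list re-implemented exactly as above):

import re

def parse_list(text):
    # both "," and " " are accepted as separators; empty entries are dropped
    return [x for x in re.split(r"[,\s]+", text.strip()) if x]

assert parse_list("ext1 ext2, ext3 ") == ["ext1", "ext2", "ext3"]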
def list_callback_order_instructions(self):
for section in self.config.sections():
if not section.startswith("callbacks/"):
continue
callback_name = section[10:]
if not callback_name.startswith(self.canonical_name):
errors.report(f"Callback order section for extension {self.canonical_name} is referencing the wrong extension: {section}")
continue
before = self.parse_list(self.config.get(section, 'Before', fallback=''))
after = self.parse_list(self.config.get(section, 'After', fallback=''))
yield CallbackOrderInfo(callback_name, before, after)
class Extension:
lock = threading.Lock()
@@ -155,6 +192,8 @@ class Extension:
def check_updates(self):
repo = Repo(self.path)
for fetch in repo.remote().fetch(dry_run=True):
if self.branch and fetch.name != f'{repo.remote().name}/{self.branch}':
continue
if fetch.flags != fetch.HEAD_UPTODATE:
self.can_update = True
self.status = "new commits"
@@ -185,6 +224,8 @@ class Extension:
def list_extensions():
extensions.clear()
extension_paths.clear()
loaded_extensions.clear()
if shared.cmd_opts.disable_all_extensions:
print("*** \"--disable-all-extensions\" arg was used, will not load any extensions ***")
@@ -195,7 +236,6 @@ def list_extensions():
elif shared.opts.disable_all_extensions == "extra":
print("*** \"Disable all extensions\" option was set, will only load built-in extensions ***")
loaded_extensions = {}
# scan through extensions directory and load metadata
for dirname in [extensions_builtin_dir, extensions_dir]:
@@ -219,19 +259,38 @@ def list_extensions():
is_builtin = dirname == extensions_builtin_dir
extension = Extension(name=extension_dirname, path=path, enabled=extension_dirname not in shared.opts.disabled_extensions, is_builtin=is_builtin, metadata=metadata)
extensions.append(extension)
extension_paths[extension.path] = extension
loaded_extensions[canonical_name] = extension
for extension in extensions:
extension.metadata.requires = extension.metadata.get_script_requirements("Requires", "Extension")
# check for requirements
for extension in extensions:
if not extension.enabled:
continue
for req in extension.metadata.requires:
required_extension = loaded_extensions.get(req)
if required_extension is None:
errors.report(f'Extension "{extension.name}" requires "{req}" which is not installed.', exc_info=False)
continue
if not extension.enabled:
if not required_extension.enabled:
errors.report(f'Extension "{extension.name}" requires "{required_extension.name}" which is disabled.', exc_info=False)
continue
extensions: list[Extension] = []
def find_extension(filename):
parentdir = os.path.dirname(os.path.realpath(filename))
while parentdir != filename:
extension = extension_paths.get(parentdir)
if extension is not None:
return extension
filename = parentdir
parentdir = os.path.dirname(filename)
return None
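Note: get_script_requirements above resolves "|"-separated alternatives against the loaded-extension registry, falling back to the raw requirement so the missing-dependency report below still names something. The selection rule in isolation (loaded stands in for loaded_extensions):

def resolve(requirement: str, loaded: dict) -> str:
    # the first alternative that is actually loaded wins; else keep the raw string
    candidates = (x for x in requirement.split("|") if x in loaded)
    return next(candidates, requirement)

loaded = {"ext-b": object()}
assert resolve("ext-a|ext-b", loaded) == "ext-b"
assert resolve("ext-c", loaded) == "ext-c"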
+4 -3
@@ -60,7 +60,7 @@ class ExtraNetwork:
Where name matches the name of this ExtraNetwork object, and arg1:arg2:arg3 are any natural number of text arguments
separated by colon.
Even if the user does not mention this ExtraNetwork in his prompt, the call will stil be made, with empty params_list -
Even if the user does not mention this ExtraNetwork in his prompt, the call will still be made, with empty params_list -
in this case, all effects of this extra networks should be disabled.
Can be called multiple times before deactivate() - each new call should override the previous call completely.
@@ -206,7 +206,7 @@ def parse_prompts(prompts):
return res, extra_data
def get_user_metadata(filename):
def get_user_metadata(filename, lister=None):
if filename is None:
return {}
@@ -215,7 +215,8 @@ def get_user_metadata(filename):
metadata = {}
try:
if os.path.isfile(metadata_filename):
exists = lister.exists(metadata_filename) if lister else os.path.exists(metadata_filename)
if exists:
with open(metadata_filename, "r", encoding="utf8") as file:
metadata = json.load(file)
except Exception as e:
+180
@@ -0,0 +1,180 @@
from __future__ import annotations
import logging
import os
from functools import cached_property
from typing import TYPE_CHECKING, Callable
import cv2
import numpy as np
import torch
from modules import devices, errors, face_restoration, shared
if TYPE_CHECKING:
from facexlib.utils.face_restoration_helper import FaceRestoreHelper
logger = logging.getLogger(__name__)
def bgr_image_to_rgb_tensor(img: np.ndarray) -> torch.Tensor:
"""Convert a BGR NumPy image in [0..1] range to a PyTorch RGB float32 tensor."""
assert img.shape[2] == 3, "image must have 3 channels"
if img.dtype == "float64":
img = img.astype("float32")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
return torch.from_numpy(img.transpose(2, 0, 1)).float()
def rgb_tensor_to_bgr_image(tensor: torch.Tensor, *, min_max=(0.0, 1.0)) -> np.ndarray:
"""
Convert a PyTorch RGB tensor in range `min_max` to a BGR NumPy image in [0..1] range.
"""
tensor = tensor.squeeze(0).float().detach().cpu().clamp_(*min_max)
tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0])
assert tensor.dim() == 3, "tensor must be RGB"
img_np = tensor.numpy().transpose(1, 2, 0)
if img_np.shape[2] == 1: # gray image, no RGB/BGR required
return np.squeeze(img_np, axis=2)
return cv2.cvtColor(img_np, cv2.COLOR_BGR2RGB)
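Note: a round-trip sanity check for the two converters above (assumes both helpers are in scope; exact because the channel swap and the [0..1] normalization are lossless here):

import numpy as np

bgr = np.random.rand(8, 8, 3).astype("float32")
tensor = bgr_image_to_rgb_tensor(bgr)                # (3, 8, 8), RGB channel order
back = rgb_tensor_to_bgr_image(tensor.unsqueeze(0))  # (8, 8, 3), BGR again
assert np.allclose(bgr, back, atol=1e-6)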
def create_face_helper(device) -> FaceRestoreHelper:
from facexlib.detection import retinaface
from facexlib.utils.face_restoration_helper import FaceRestoreHelper
if hasattr(retinaface, 'device'):
retinaface.device = device
return FaceRestoreHelper(
upscale_factor=1,
face_size=512,
crop_ratio=(1, 1),
det_model='retinaface_resnet50',
save_ext='png',
use_parse=True,
device=device,
)
def restore_with_face_helper(
np_image: np.ndarray,
face_helper: FaceRestoreHelper,
restore_face: Callable[[torch.Tensor], torch.Tensor],
) -> np.ndarray:
"""
Find faces in the image using face_helper, restore them using restore_face, and paste them back into the image.
`restore_face` should take a cropped face image and return a restored face image.
"""
from torchvision.transforms.functional import normalize
np_image = np_image[:, :, ::-1]
original_resolution = np_image.shape[0:2]
try:
logger.debug("Detecting faces...")
face_helper.clean_all()
face_helper.read_image(np_image)
face_helper.get_face_landmarks_5(only_center_face=False, resize=640, eye_dist_threshold=5)
face_helper.align_warp_face()
logger.debug("Found %d faces, restoring", len(face_helper.cropped_faces))
for cropped_face in face_helper.cropped_faces:
cropped_face_t = bgr_image_to_rgb_tensor(cropped_face / 255.0)
normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True)
cropped_face_t = cropped_face_t.unsqueeze(0).to(devices.device_codeformer)
try:
with torch.no_grad():
cropped_face_t = restore_face(cropped_face_t)
devices.torch_gc()
except Exception:
errors.report('Failed face-restoration inference', exc_info=True)
restored_face = rgb_tensor_to_bgr_image(cropped_face_t, min_max=(-1, 1))
restored_face = (restored_face * 255.0).astype('uint8')
face_helper.add_restored_face(restored_face)
logger.debug("Merging restored faces into image")
face_helper.get_inverse_affine(None)
img = face_helper.paste_faces_to_input_image()
img = img[:, :, ::-1]
if original_resolution != img.shape[0:2]:
img = cv2.resize(
img,
(0, 0),
fx=original_resolution[1] / img.shape[1],
fy=original_resolution[0] / img.shape[0],
interpolation=cv2.INTER_LINEAR,
)
logger.debug("Face restoration complete")
finally:
face_helper.clean_all()
return img
class CommonFaceRestoration(face_restoration.FaceRestoration):
net: torch.nn.Module | None
model_url: str
model_download_name: str
def __init__(self, model_path: str):
super().__init__()
self.net = None
self.model_path = model_path
os.makedirs(model_path, exist_ok=True)
@cached_property
def face_helper(self) -> FaceRestoreHelper:
return create_face_helper(self.get_device())
def send_model_to(self, device):
if self.net:
logger.debug("Sending %s to %s", self.net, device)
self.net.to(device)
if self.face_helper:
logger.debug("Sending face helper to %s", device)
self.face_helper.face_det.to(device)
self.face_helper.face_parse.to(device)
def get_device(self):
raise NotImplementedError("get_device must be implemented by subclasses")
def load_net(self) -> torch.nn.Module:
raise NotImplementedError("load_net must be implemented by subclasses")
def restore_with_helper(
self,
np_image: np.ndarray,
restore_face: Callable[[torch.Tensor], torch.Tensor],
) -> np.ndarray:
try:
if self.net is None:
self.net = self.load_net()
except Exception:
logger.warning("Unable to load face-restoration model", exc_info=True)
return np_image
try:
self.send_model_to(self.get_device())
return restore_with_face_helper(np_image, self.face_helper, restore_face)
finally:
if shared.opts.face_restoration_unload:
self.send_model_to(devices.cpu)
def patch_facexlib(dirname: str) -> None:
import facexlib.detection
import facexlib.parsing
det_facex_load_file_from_url = facexlib.detection.load_file_from_url
par_facex_load_file_from_url = facexlib.parsing.load_file_from_url
def update_kwargs(kwargs):
return dict(kwargs, save_dir=dirname, model_dir=None)
def facex_load_file_from_url(**kwargs):
return det_facex_load_file_from_url(**update_kwargs(kwargs))
def facex_load_file_from_url2(**kwargs):
return par_facex_load_file_from_url(**update_kwargs(kwargs))
facexlib.detection.load_file_from_url = facex_load_file_from_url
facexlib.parsing.load_file_from_url = facex_load_file_from_url2
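Note: patch_facexlib pins facexlib's download directory by wrapping its loader functions; dict(kwargs, ...) overrides whatever the caller passed. The same idiom on a toy function (not the facexlib API):

def load_file(url, model_dir="weights"):
    return f"{model_dir}/{url.rsplit('/', 1)[-1]}"

orig_load_file = load_file

def patched_load_file(**kwargs):
    # force every call to use our directory, overriding the caller's value
    return orig_load_file(**dict(kwargs, model_dir="models/facexlib"))

load_file = patched_load_file
assert load_file(url="https://example.com/det.pth") == "models/facexlib/det.pth"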
+50 -104
View File
@@ -1,125 +1,71 @@
from __future__ import annotations
import logging
import os
import facexlib
import gfpgan
import torch
import modules.face_restoration
from modules import paths, shared, devices, modelloader, errors
from modules import (
devices,
errors,
face_restoration,
face_restoration_utils,
modelloader,
shared,
)
model_dir = "GFPGAN"
user_path = None
model_path = os.path.join(paths.models_path, model_dir)
model_file_path = None
logger = logging.getLogger(__name__)
model_url = "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth"
have_gfpgan = False
loaded_gfpgan_model = None
model_download_name = "GFPGANv1.4.pth"
gfpgan_face_restorer: face_restoration.FaceRestoration | None = None
def gfpgann():
global loaded_gfpgan_model
global model_path
global model_file_path
if loaded_gfpgan_model is not None:
loaded_gfpgan_model.gfpgan.to(devices.device_gfpgan)
return loaded_gfpgan_model
class FaceRestorerGFPGAN(face_restoration_utils.CommonFaceRestoration):
def name(self):
return "GFPGAN"
if gfpgan_constructor is None:
return None
def get_device(self):
return devices.device_gfpgan
models = modelloader.load_models(model_path, model_url, user_path, ext_filter=['.pth'])
def load_net(self) -> torch.nn.Module:
for model_path in modelloader.load_models(
model_path=self.model_path,
model_url=model_url,
command_path=self.model_path,
download_name=model_download_name,
ext_filter=['.pth'],
):
if 'GFPGAN' in os.path.basename(model_path):
model = modelloader.load_spandrel_model(
model_path,
device=self.get_device(),
expected_architecture='GFPGAN',
).model
model.different_w = True # see https://github.com/chaiNNer-org/spandrel/pull/81
return model
raise ValueError("No GFPGAN model found")
if len(models) == 1 and models[0].startswith("http"):
model_file = models[0]
elif len(models) != 0:
gfp_models = []
for item in models:
if 'GFPGAN' in os.path.basename(item):
gfp_models.append(item)
latest_file = max(gfp_models, key=os.path.getctime)
model_file = latest_file
else:
print("Unable to load gfpgan model!")
return None
def restore(self, np_image):
def restore_face(cropped_face_t):
assert self.net is not None
return self.net(cropped_face_t, return_rgb=False)[0]
if hasattr(facexlib.detection.retinaface, 'device'):
facexlib.detection.retinaface.device = devices.device_gfpgan
model_file_path = model_file
model = gfpgan_constructor(model_path=model_file, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None, device=devices.device_gfpgan)
loaded_gfpgan_model = model
return model
def send_model_to(model, device):
model.gfpgan.to(device)
model.face_helper.face_det.to(device)
model.face_helper.face_parse.to(device)
return self.restore_with_helper(np_image, restore_face)
def gfpgan_fix_faces(np_image):
model = gfpgann()
if model is None:
return np_image
send_model_to(model, devices.device_gfpgan)
np_image_bgr = np_image[:, :, ::-1]
cropped_faces, restored_faces, gfpgan_output_bgr = model.enhance(np_image_bgr, has_aligned=False, only_center_face=False, paste_back=True)
np_image = gfpgan_output_bgr[:, :, ::-1]
model.face_helper.clean_all()
if shared.opts.face_restoration_unload:
send_model_to(model, devices.cpu)
if gfpgan_face_restorer:
return gfpgan_face_restorer.restore(np_image)
logger.warning("GFPGAN face restorer not set up")
return np_image
gfpgan_constructor = None
def setup_model(dirname: str) -> None:
global gfpgan_face_restorer
def setup_model(dirname):
try:
os.makedirs(model_path, exist_ok=True)
from gfpgan import GFPGANer
from facexlib import detection, parsing # noqa: F401
global user_path
global have_gfpgan
global gfpgan_constructor
global model_file_path
facexlib_path = model_path
if dirname is not None:
facexlib_path = dirname
load_file_from_url_orig = gfpgan.utils.load_file_from_url
facex_load_file_from_url_orig = facexlib.detection.load_file_from_url
facex_load_file_from_url_orig2 = facexlib.parsing.load_file_from_url
def my_load_file_from_url(**kwargs):
return load_file_from_url_orig(**dict(kwargs, model_dir=model_file_path))
def facex_load_file_from_url(**kwargs):
return facex_load_file_from_url_orig(**dict(kwargs, save_dir=facexlib_path, model_dir=None))
def facex_load_file_from_url2(**kwargs):
return facex_load_file_from_url_orig2(**dict(kwargs, save_dir=facexlib_path, model_dir=None))
gfpgan.utils.load_file_from_url = my_load_file_from_url
facexlib.detection.load_file_from_url = facex_load_file_from_url
facexlib.parsing.load_file_from_url = facex_load_file_from_url2
user_path = dirname
have_gfpgan = True
gfpgan_constructor = GFPGANer
class FaceRestorerGFPGAN(modules.face_restoration.FaceRestoration):
def name(self):
return "GFPGAN"
def restore(self, np_image):
return gfpgan_fix_faces(np_image)
shared.face_restorers.append(FaceRestorerGFPGAN())
face_restoration_utils.patch_facexlib(dirname)
gfpgan_face_restorer = FaceRestorerGFPGAN(model_path=dirname)
shared.face_restorers.append(gfpgan_face_restorer)
except Exception:
errors.report("Error setting up GFPGAN", exc_info=True)
+4 -1
@@ -21,7 +21,10 @@ def calculate_sha256(filename):
def sha256_from_cache(filename, title, use_addnet_hash=False):
hashes = cache("hashes-addnet") if use_addnet_hash else cache("hashes")
ondisk_mtime = os.path.getmtime(filename)
try:
ondisk_mtime = os.path.getmtime(filename)
except FileNotFoundError:
return None
if title not in hashes:
return None
+43
@@ -0,0 +1,43 @@
import os
import sys
from modules import modelloader, devices
from modules.shared import opts
from modules.upscaler import Upscaler, UpscalerData
from modules.upscaler_utils import upscale_with_model
class UpscalerHAT(Upscaler):
def __init__(self, dirname):
self.name = "HAT"
self.scalers = []
self.user_path = dirname
super().__init__()
for file in self.find_models(ext_filter=[".pt", ".pth"]):
name = modelloader.friendly_name(file)
scale = 4 # TODO: scale might not be 4, but we can't know without loading the model
scaler_data = UpscalerData(name, file, upscaler=self, scale=scale)
self.scalers.append(scaler_data)
def do_upscale(self, img, selected_model):
try:
model = self.load_model(selected_model)
except Exception as e:
print(f"Unable to load HAT model {selected_model}: {e}", file=sys.stderr)
return img
model.to(devices.device_esrgan) # TODO: should probably be device_hat
return upscale_with_model(
model,
img,
tile_size=opts.ESRGAN_tile, # TODO: should probably be HAT_tile
tile_overlap=opts.ESRGAN_tile_overlap, # TODO: should probably be HAT_tile_overlap
)
def load_model(self, path: str):
if not os.path.isfile(path):
raise FileNotFoundError(f"Model file {path} not found")
return modelloader.load_spandrel_model(
path,
device=devices.device_esrgan, # TODO: should probably be device_hat
expected_architecture='HAT',
)
+3 -2
@@ -11,7 +11,7 @@ import tqdm
from einops import rearrange, repeat
from ldm.util import default
from modules import devices, sd_models, shared, sd_samplers, hashes, sd_hijack_checkpoint, errors
from modules.textual_inversion import textual_inversion, logging
from modules.textual_inversion import textual_inversion, saving_settings
from modules.textual_inversion.learn_schedule import LearnRateScheduler
from torch import einsum
from torch.nn.init import normal_, xavier_normal_, xavier_uniform_, kaiming_normal_, kaiming_uniform_, zeros_
@@ -95,6 +95,7 @@ class HypernetworkModule(torch.nn.Module):
zeros_(b)
else:
raise KeyError(f"Key {weight_init} is not defined as initialization!")
devices.torch_npu_set_device()
self.to(devices.device)
def fix_old_state_dict(self, state_dict):
@@ -532,7 +533,7 @@ def train_hypernetwork(id_task, hypernetwork_name: str, learn_rate: float, batch
model_name=checkpoint.model_name, model_hash=checkpoint.shorthash, num_of_dataset_images=len(ds),
**{field: getattr(hypernetwork, field) for field in ['layer_structure', 'activation_func', 'weight_init', 'add_layer_norm', 'use_dropout', ]}
)
logging.save_settings_to_file(log_directory, {**saved_params, **locals()})
saving_settings.save_settings_to_file(log_directory, {**saved_params, **locals()})
latent_sampling_method = ds.latent_sampling_method
+85 -10
@@ -1,7 +1,7 @@
from __future__ import annotations
import datetime
import functools
import pytz
import io
import math
@@ -12,7 +12,9 @@ import re
import numpy as np
import piexif
import piexif.helper
from PIL import Image, ImageFont, ImageDraw, ImageColor, PngImagePlugin
from PIL import Image, ImageFont, ImageDraw, ImageColor, PngImagePlugin, ImageOps
# pillow_avif needs to be imported somewhere in code for it to work
import pillow_avif # noqa: F401
import string
import json
import hashlib
@@ -61,12 +63,17 @@ def image_grid(imgs, batch_size=1, rows=None):
return grid
Grid = namedtuple("Grid", ["tiles", "tile_w", "tile_h", "image_w", "image_h", "overlap"])
class Grid(namedtuple("_Grid", ["tiles", "tile_w", "tile_h", "image_w", "image_h", "overlap"])):
@property
def tile_count(self) -> int:
"""
The total number of tiles in the grid.
"""
return sum(len(row[2]) for row in self.tiles)
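Note: tile_count just sums the row lengths; a tiny check assuming the Grid class above (tile entries abbreviated to None):

grid = Grid(tiles=[(0, 64, [None, None]), (64, 64, [None])],
            tile_w=64, tile_h=64, image_w=128, image_h=128, overlap=8)
assert grid.tile_count == 3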
def split_grid(image, tile_w=512, tile_h=512, overlap=64):
w = image.width
h = image.height
def split_grid(image: Image.Image, tile_w: int = 512, tile_h: int = 512, overlap: int = 64) -> Grid:
w, h = image.size
non_overlap_width = tile_w - overlap
non_overlap_height = tile_h - overlap
@@ -316,13 +323,16 @@ def resize_image(resize_mode, im, width, height, upscaler_name=None):
return res
invalid_filename_chars = '<>:"/\\|?*\n\r\t'
if not shared.cmd_opts.unix_filenames_sanitization:
invalid_filename_chars = '#<>:"/\\|?*\n\r\t'
else:
invalid_filename_chars = '/'
invalid_filename_prefix = ' '
invalid_filename_postfix = ' .'
re_nonletters = re.compile(r'[\s' + string.punctuation + ']+')
re_pattern = re.compile(r"(.*?)(?:\[([^\[\]]+)\]|$)")
re_pattern_arg = re.compile(r"(.*)<([^>]*)>$")
max_filename_part_length = 128
max_filename_part_length = shared.cmd_opts.filenames_max_length
NOTHING_AND_SKIP_PREVIOUS_TEXT = object()
@@ -339,6 +349,32 @@ def sanitize_filename_part(text, replace_spaces=True):
return text
@functools.cache
def get_scheduler_str(sampler_name, scheduler_name):
"""Returns {Scheduler} if the scheduler is applicable to the sampler"""
if scheduler_name == 'Automatic':
config = sd_samplers.find_sampler_config(sampler_name)
scheduler_name = config.options.get('scheduler', 'Automatic')
return scheduler_name.capitalize()
@functools.cache
def get_sampler_scheduler_str(sampler_name, scheduler_name):
"""Returns the '{Sampler} {Scheduler}' if the scheduler is applicable to the sampler"""
return f'{sampler_name} {get_scheduler_str(sampler_name, scheduler_name)}'
def get_sampler_scheduler(p, sampler):
"""Returns '{Sampler} {Scheduler}' / '{Scheduler}' / 'NOTHING_AND_SKIP_PREVIOUS_TEXT'"""
if hasattr(p, 'scheduler') and hasattr(p, 'sampler_name'):
if sampler:
sampler_scheduler = get_sampler_scheduler_str(p.sampler_name, p.scheduler)
else:
sampler_scheduler = get_scheduler_str(p.sampler_name, p.scheduler)
return sanitize_filename_part(sampler_scheduler, replace_spaces=False)
return NOTHING_AND_SKIP_PREVIOUS_TEXT
class FilenameGenerator:
replacements = {
'seed': lambda self: self.seed if self.seed is not None else '',
@@ -350,6 +386,8 @@ class FilenameGenerator:
'height': lambda self: self.image.height,
'styles': lambda self: self.p and sanitize_filename_part(", ".join([style for style in self.p.styles if not style == "None"]) or "None", replace_spaces=False),
'sampler': lambda self: self.p and sanitize_filename_part(self.p.sampler_name, replace_spaces=False),
'sampler_scheduler': lambda self: self.p and get_sampler_scheduler(self.p, True),
'scheduler': lambda self: self.p and get_sampler_scheduler(self.p, False),
'model_hash': lambda self: getattr(self.p, "sd_model_hash", shared.sd_model.sd_model_hash),
'model_name': lambda self: sanitize_filename_part(shared.sd_model.sd_checkpoint_info.name_for_extra, replace_spaces=False),
'date': lambda self: datetime.datetime.now().strftime('%Y-%m-%d'),
@@ -561,6 +599,16 @@ def save_image_with_geninfo(image, geninfo, filename, extension=None, existing_p
})
piexif.insert(exif_bytes, filename)
elif extension.lower() == '.avif':
if opts.enable_pnginfo and geninfo is not None:
exif_bytes = piexif.dump({
"Exif": {
piexif.ExifIFD.UserComment: piexif.helper.UserComment.dump(geninfo or "", encoding="unicode")
},
})
image.save(filename, format=image_format, exif=exif_bytes)
elif extension.lower() == ".gif":
image.save(filename, format=image_format, comment=geninfo)
else:
@@ -739,7 +787,6 @@ def read_info_from_image(image: Image.Image) -> tuple[str | None, dict]:
exif_comment = exif_comment.decode('utf8', errors="ignore")
if exif_comment:
items['exif comment'] = exif_comment
geninfo = exif_comment
elif "comment" in items: # for gif
geninfo = items["comment"].decode('utf8', errors="ignore")
@@ -765,7 +812,7 @@ def image_data(data):
import gradio as gr
try:
image = Image.open(io.BytesIO(data))
image = read(io.BytesIO(data))
textinfo, _ = read_info_from_image(image)
return textinfo, None
except Exception:
@@ -791,3 +838,31 @@ def flatten(img, bgcolor):
img = background
return img.convert('RGB')
def read(fp, **kwargs):
image = Image.open(fp, **kwargs)
image = fix_image(image)
return image
def fix_image(image: Image.Image):
if image is None:
return None
try:
image = ImageOps.exif_transpose(image)
image = fix_png_transparency(image)
except Exception:
pass
return image
def fix_png_transparency(image: Image.Image):
if image.mode not in ("RGB", "P") or not isinstance(image.info.get("transparency"), bytes):
return image
image = image.convert("RGBA")
return image
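Note: read and fix_image become the preferred entry points wherever the code previously called Image.open directly, as the img2img diff below shows. Typical use (path illustrative):

from modules import images

img = images.read("photo.jpg")  # EXIF orientation and PNG transparency fixed up front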
+15 -21
@@ -6,8 +6,8 @@ import numpy as np
from PIL import Image, ImageOps, ImageFilter, ImageEnhance, UnidentifiedImageError
import gradio as gr
from modules import images as imgutil
from modules.generation_parameters_copypaste import create_override_settings_dict, parse_generation_parameters
from modules import images
from modules.infotext_utils import create_override_settings_dict, parse_generation_parameters
from modules.processing import Processed, StableDiffusionProcessingImg2Img, process_images
from modules.shared import opts, state
from modules.sd_models import get_closet_checkpoint_match
@@ -21,7 +21,7 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=Fal
output_dir = output_dir.strip()
processing.fix_seed(p)
images = list(shared.walk_files(input_dir, allowed_extensions=(".png", ".jpg", ".jpeg", ".webp", ".tif", ".tiff")))
batch_images = list(shared.walk_files(input_dir, allowed_extensions=(".png", ".jpg", ".jpeg", ".webp", ".tif", ".tiff")))
is_inpaint_batch = False
if inpaint_mask_dir:
@@ -31,9 +31,9 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=Fal
if is_inpaint_batch:
print(f"\nInpaint batch is enabled. {len(inpaint_masks)} masks found.")
print(f"Will process {len(images)} images, creating {p.n_iter * p.batch_size} new images for each.")
print(f"Will process {len(batch_images)} images, creating {p.n_iter * p.batch_size} new images for each.")
state.job_count = len(images) * p.n_iter
state.job_count = len(batch_images) * p.n_iter
# extract "default" params to use in case getting png info fails
prompt = p.prompt
@@ -46,16 +46,16 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=Fal
sd_model_checkpoint_override = get_closet_checkpoint_match(override_settings.get("sd_model_checkpoint", None))
batch_results = None
discard_further_results = False
for i, image in enumerate(images):
state.job = f"{i+1} out of {len(images)}"
for i, image in enumerate(batch_images):
state.job = f"{i+1} out of {len(batch_images)}"
if state.skipped:
state.skipped = False
if state.interrupted:
if state.interrupted or state.stopping_generation:
break
try:
img = Image.open(image)
img = images.read(image)
except UnidentifiedImageError as e:
print(e)
continue
@@ -86,7 +86,7 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=Fal
# otherwise user has many masks with the same name but different extensions
mask_image_path = masks_found[0]
mask_image = Image.open(mask_image_path)
mask_image = images.read(mask_image_path)
p.image_mask = mask_image
if use_png_info:
@@ -94,8 +94,8 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=Fal
info_img = img
if png_info_dir:
info_img_path = os.path.join(png_info_dir, os.path.basename(image))
info_img = Image.open(info_img_path)
geninfo, _ = imgutil.read_info_from_image(info_img)
info_img = images.read(info_img_path)
geninfo, _ = images.read_info_from_image(info_img)
parsed_parameters = parse_generation_parameters(geninfo)
parsed_parameters = {k: v for k, v in parsed_parameters.items() if k in (png_info_props or {})}
except Exception:
@@ -146,7 +146,7 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=Fal
return batch_results
def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_styles, init_img, sketch, init_img_with_mask, inpaint_color_sketch, inpaint_color_sketch_orig, init_img_inpaint, init_mask_inpaint, steps: int, sampler_name: str, mask_blur: int, mask_alpha: float, inpainting_fill: int, n_iter: int, batch_size: int, cfg_scale: float, image_cfg_scale: float, denoising_strength: float, selected_scale_tab: int, height: int, width: int, scale_by: float, resize_mode: int, inpaint_full_res: bool, inpaint_full_res_padding: int, inpainting_mask_invert: int, img2img_batch_input_dir: str, img2img_batch_output_dir: str, img2img_batch_inpaint_mask_dir: str, override_settings_texts, img2img_batch_use_png_info: bool, img2img_batch_png_info_props: list, img2img_batch_png_info_dir: str, request: gr.Request, *args):
def img2img(id_task: str, request: gr.Request, mode: int, prompt: str, negative_prompt: str, prompt_styles, init_img, sketch, init_img_with_mask, inpaint_color_sketch, inpaint_color_sketch_orig, init_img_inpaint, init_mask_inpaint, mask_blur: int, mask_alpha: float, inpainting_fill: int, n_iter: int, batch_size: int, cfg_scale: float, image_cfg_scale: float, denoising_strength: float, selected_scale_tab: int, height: int, width: int, scale_by: float, resize_mode: int, inpaint_full_res: bool, inpaint_full_res_padding: int, inpainting_mask_invert: int, img2img_batch_input_dir: str, img2img_batch_output_dir: str, img2img_batch_inpaint_mask_dir: str, override_settings_texts, img2img_batch_use_png_info: bool, img2img_batch_png_info_props: list, img2img_batch_png_info_dir: str, *args):
override_settings = create_override_settings_dict(override_settings_texts)
is_batch = mode == 5
@@ -175,9 +175,8 @@ def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_s
image = None
mask = None
# Use the EXIF orientation of photos taken by smartphones.
if image is not None:
image = ImageOps.exif_transpose(image)
image = images.fix_image(image)
mask = images.fix_image(mask)
if selected_scale_tab == 1 and not is_batch:
assert image, "Can't scale by because no image is selected"
@@ -194,10 +193,8 @@ def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_s
prompt=prompt,
negative_prompt=negative_prompt,
styles=prompt_styles,
sampler_name=sampler_name,
batch_size=batch_size,
n_iter=n_iter,
steps=steps,
cfg_scale=cfg_scale,
width=width,
height=height,
@@ -222,9 +219,6 @@ def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_s
if shared.opts.enable_console_prompts:
print(f"\nimg2img: {prompt}", file=shared.progress_print_out)
if mask:
p.extra_generation_params["Mask blur"] = mask_blur
with closing(p):
if is_batch:
assert not shared.cmd_opts.hide_ui_dir_config, "Launched with --hide-ui-dir-config, batch img2img disabled"
@@ -4,12 +4,15 @@ import io
import json
import os
import re
import sys
import gradio as gr
from modules.paths import data_path
from modules import shared, ui_tempdir, script_callbacks, processing
from modules import shared, ui_tempdir, script_callbacks, processing, infotext_versions, images, prompt_parser, errors
from PIL import Image
sys.modules['modules.generation_parameters_copypaste'] = sys.modules[__name__] # alias for old name
re_param_code = r'\s*(\w[\w \-/]+):\s*("(?:\\.|[^\\"])+"|[^,]*)(?:,|$)'
re_param = re.compile(re_param_code)
re_imagesize = re.compile(r"^(\d+)x(\d+)$")
@@ -28,6 +31,19 @@ class ParamBinding:
self.paste_field_names = paste_field_names or []
class PasteField(tuple):
def __new__(cls, component, target, *, api=None):
return super().__new__(cls, (component, target))
def __init__(self, component, target, *, api=None):
super().__init__()
self.api = api
self.component = component
self.label = target if isinstance(target, str) else None
self.function = target if callable(target) else None
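
For illustration, a hedged sketch of how a PasteField might be constructed; the Gradio textbox and the "Prompt" key are examples, not taken from this diff:

# Hypothetical usage sketch (assumes a webui checkout on sys.path and gradio installed):
import gradio as gr
from modules.infotext_utils import PasteField

prompt_box = gr.Textbox(label="Prompt")
fields = [
    PasteField(prompt_box, "Prompt", api="prompt"),                       # string target: an infotext key
    PasteField(prompt_box, lambda d: d.get("Prompt", ""), api="prompt"),  # callable target: a computed value
]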
paste_fields: dict[str, dict] = {}
registered_param_bindings: list[ParamBinding] = []
@@ -67,7 +83,7 @@ def image_from_url_text(filedata):
assert is_in_right_dir, 'trying to open image file outside of allowed directories'
filename = filename.rsplit('?', 1)[0]
return Image.open(filename)
return images.read(filename)
if type(filedata) == list:
if len(filedata) == 0:
@@ -79,11 +95,17 @@ def image_from_url_text(filedata):
filedata = filedata[len("data:image/png;base64,"):]
filedata = base64.decodebytes(filedata.encode('utf-8'))
image = Image.open(io.BytesIO(filedata))
image = images.read(io.BytesIO(filedata))
return image
def add_paste_fields(tabname, init_img, fields, override_settings_component=None):
if fields:
for i in range(len(fields)):
if not isinstance(fields[i], PasteField):
fields[i] = PasteField(*fields[i])
paste_fields[tabname] = {"init_img": init_img, "fields": fields, "override_settings_component": override_settings_component}
# backwards compatibility for existing extensions
@@ -208,7 +230,7 @@ def restore_old_hires_fix_params(res):
res['Hires resize-2'] = height
def parse_generation_parameters(x: str):
def parse_generation_parameters(x: str, skip_fields: list[str] | None = None):
"""parses generation parameters string, the one you see in text field under the picture in UI:
```
girl with an artist's beret, determined, blue eyes, desert scene, computer monitors, heavy makeup, by Alphonse Mucha and Charlie Bowater, ((eyeshadow)), (coquettish), detailed, intricate
@@ -218,6 +240,8 @@ Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 965400086, Size: 512x512, Model
returns a dict with field values
"""
if skip_fields is None:
skip_fields = shared.opts.infotext_skip_pasting
res = {}
@@ -241,17 +265,6 @@ Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 965400086, Size: 512x512, Model
else:
prompt += ("" if prompt == "" else "\n") + line
if shared.opts.infotext_styles != "Ignore":
found_styles, prompt, negative_prompt = shared.prompt_styles.extract_styles_from_prompt(prompt, negative_prompt)
if shared.opts.infotext_styles == "Apply":
res["Styles array"] = found_styles
elif shared.opts.infotext_styles == "Apply if any" and found_styles:
res["Styles array"] = found_styles
res["Prompt"] = prompt
res["Negative prompt"] = negative_prompt
for k, v in re_param.findall(lastline):
try:
if v[0] == '"' and v[-1] == '"':
@@ -266,6 +279,26 @@ Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 965400086, Size: 512x512, Model
except Exception:
print(f"Error parsing \"{k}: {v}\"")
# Extract styles from prompt
if shared.opts.infotext_styles != "Ignore":
found_styles, prompt_no_styles, negative_prompt_no_styles = shared.prompt_styles.extract_styles_from_prompt(prompt, negative_prompt)
same_hr_styles = True
if ("Hires prompt" in res or "Hires negative prompt" in res) and (infotext_ver > infotext_versions.v180_hr_styles if (infotext_ver := infotext_versions.parse_version(res.get("Version"))) else True):
hr_prompt, hr_negative_prompt = res.get("Hires prompt", prompt), res.get("Hires negative prompt", negative_prompt)
hr_found_styles, hr_prompt_no_styles, hr_negative_prompt_no_styles = shared.prompt_styles.extract_styles_from_prompt(hr_prompt, hr_negative_prompt)
if same_hr_styles := found_styles == hr_found_styles:
res["Hires prompt"] = '' if hr_prompt_no_styles == prompt_no_styles else hr_prompt_no_styles
res['Hires negative prompt'] = '' if hr_negative_prompt_no_styles == negative_prompt_no_styles else hr_negative_prompt_no_styles
if same_hr_styles:
prompt, negative_prompt = prompt_no_styles, negative_prompt_no_styles
if (shared.opts.infotext_styles == "Apply if any" and found_styles) or shared.opts.infotext_styles == "Apply":
res['Styles array'] = found_styles
res["Prompt"] = prompt
res["Negative prompt"] = negative_prompt
# Missing CLIP skip means it was set to 1 (the default)
if "Clip skip" not in res:
res["Clip skip"] = "1"
@@ -281,6 +314,9 @@ Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 965400086, Size: 512x512, Model
if "Hires sampler" not in res:
res["Hires sampler"] = "Use same sampler"
if "Hires schedule type" not in res:
res["Hires schedule type"] = "Use same scheduler"
if "Hires checkpoint" not in res:
res["Hires checkpoint"] = "Use same checkpoint"
@@ -290,6 +326,18 @@ Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 965400086, Size: 512x512, Model
if "Hires negative prompt" not in res:
res["Hires negative prompt"] = ""
if "Mask mode" not in res:
res["Mask mode"] = "Inpaint masked"
if "Masked content" not in res:
res["Masked content"] = 'original'
if "Inpaint area" not in res:
res["Inpaint area"] = "Whole picture"
if "Masked area padding" not in res:
res["Masked area padding"] = 32
restore_old_hires_fix_params(res)
# Missing RNG means the default was set, which is GPU RNG
@@ -314,8 +362,25 @@ Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 965400086, Size: 512x512, Model
if "VAE Decoder" not in res:
res["VAE Decoder"] = "Full"
skip = set(shared.opts.infotext_skip_pasting)
res = {k: v for k, v in res.items() if k not in skip}
if "FP8 weight" not in res:
res["FP8 weight"] = "Disable"
if "Cache FP16 weight for LoRA" not in res and res["FP8 weight"] != "Disable":
res["Cache FP16 weight for LoRA"] = False
prompt_attention = prompt_parser.parse_prompt_attention(prompt)
prompt_attention += prompt_parser.parse_prompt_attention(negative_prompt)
prompt_uses_emphasis = len(prompt_attention) != len([p for p in prompt_attention if p[1] == 1.0 or p[0] == 'BREAK'])
if "Emphasis" not in res and prompt_uses_emphasis:
res["Emphasis"] = "Original"
if "Refiner switch by sampling steps" not in res:
res["Refiner switch by sampling steps"] = False
infotext_versions.backcompat(res)
for key in skip_fields:
res.pop(key, None)
return res
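
A hedged usage sketch of the parser; the sample infotext is made up, and running it assumes an initialized webui environment (the function reads shared.opts and shared.prompt_styles):

# Illustrative only; values come back as strings, and missing keys get defaults filled in.
from modules.infotext_utils import parse_generation_parameters

sample = (
    "a photo of a cat\n"
    "Negative prompt: blurry\n"
    "Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 12345, Size: 512x512"
)
params = parse_generation_parameters(sample, skip_fields=[])
print(params["Prompt"])     # "a photo of a cat"
print(params["Steps"])      # "20"
print(params["Clip skip"])  # "1" - default filled in for the missing key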
@@ -365,13 +430,57 @@ def create_override_settings_dict(text_pairs):
return res
def get_override_settings(params, *, skip_fields=None):
"""Returns a list of settings overrides from the infotext parameters dictionary.
This function checks the `params` dictionary for any keys that correspond to settings in `shared.opts` and returns
a list of tuples containing the parameter name, setting name, and new value cast to correct type.
It checks for conditions before adding an override:
- ignores settings that match the current value
- ignores parameter keys present in skip_fields argument.
Example input:
{"Clip skip": "2"}
Example output:
[("Clip skip", "CLIP_stop_at_last_layers", 2)]
"""
res = []
mapping = [(info.infotext, k) for k, info in shared.opts.data_labels.items() if info.infotext]
for param_name, setting_name in mapping + infotext_to_setting_name_mapping:
if param_name in (skip_fields or {}):
continue
v = params.get(param_name, None)
if v is None:
continue
if setting_name == "sd_model_checkpoint" and shared.opts.disable_weights_auto_swap:
continue
v = shared.opts.cast_value(setting_name, v)
current_value = getattr(shared.opts, setting_name, None)
if v == current_value:
continue
res.append((param_name, setting_name, v))
return res
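
A minimal sketch matching the docstring's example, assuming shared.opts has been initialized by the webui and the current Clip skip setting is not already 2:

from modules.infotext_utils import get_override_settings

overrides = get_override_settings({"Clip skip": "2"})
# expected, per the docstring above: [("Clip skip", "CLIP_stop_at_last_layers", 2)]
for param_name, setting_name, value in overrides:
    print(param_name, setting_name, value)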
def connect_paste(button, paste_fields, input_comp, override_settings_component, tabname):
def paste_func(prompt):
if not prompt and not shared.cmd_opts.hide_ui_dir_config:
if not prompt and not shared.cmd_opts.hide_ui_dir_config and not shared.cmd_opts.no_prompt_history:
filename = os.path.join(data_path, "params.txt")
if os.path.exists(filename):
try:
with open(filename, "r", encoding="utf8") as file:
prompt = file.read()
except OSError:
pass
params = parse_generation_parameters(prompt)
script_callbacks.infotext_pasted_callback(prompt, params)
@@ -379,7 +488,11 @@ def connect_paste(button, paste_fields, input_comp, override_settings_component,
for output, key in paste_fields:
if callable(key):
v = key(params)
try:
v = key(params)
except Exception:
errors.report(f"Error executing {key}", exc_info=True)
v = None
else:
v = params.get(key, None)
@@ -393,6 +506,8 @@ def connect_paste(button, paste_fields, input_comp, override_settings_component,
if valtype == bool and v == "False":
val = False
elif valtype == int:
val = float(v)
else:
val = valtype(v)
@@ -406,29 +521,9 @@ def connect_paste(button, paste_fields, input_comp, override_settings_component,
already_handled_fields = {key: 1 for _, key in paste_fields}
def paste_settings(params):
vals = {}
vals = get_override_settings(params, skip_fields=already_handled_fields)
mapping = [(info.infotext, k) for k, info in shared.opts.data_labels.items() if info.infotext]
for param_name, setting_name in mapping + infotext_to_setting_name_mapping:
if param_name in already_handled_fields:
continue
v = params.get(param_name, None)
if v is None:
continue
if setting_name == "sd_model_checkpoint" and shared.opts.disable_weights_auto_swap:
continue
v = shared.opts.cast_value(setting_name, v)
current_value = getattr(shared.opts, setting_name, None)
if v == current_value:
continue
vals[param_name] = v
vals_pairs = [f"{k}: {v}" for k, v in vals.items()]
vals_pairs = [f"{infotext_text}: {value}" for infotext_text, setting_name, value in vals]
return gr.Dropdown.update(value=vals_pairs, choices=vals_pairs, visible=bool(vals_pairs))
+46
@@ -0,0 +1,46 @@
from modules import shared
from packaging import version
import re
v160 = version.parse("1.6.0")
v170_tsnr = version.parse("v1.7.0-225")
v180 = version.parse("1.8.0")
v180_hr_styles = version.parse("1.8.0-139")
def parse_version(text):
if text is None:
return None
m = re.match(r'([^-]+-[^-]+)-.*', text)
if m:
text = m.group(1)
try:
return version.parse(text)
except Exception:
return None
def backcompat(d):
"""Checks infotext Version field, and enables backwards compatibility options according to it."""
if not shared.opts.auto_backcompat:
return
ver = parse_version(d.get("Version"))
if ver is None:
return
if ver < v160 and '[' in d.get('Prompt', ''):
d["Old prompt editing timelines"] = True
if ver < v160 and d.get('Sampler', '') in ('DDIM', 'PLMS'):
d["Pad conds v0"] = True
if ver < v170_tsnr:
d["Downcast alphas_cumprod"] = True
if ver < v180 and d.get('Refiner'):
d["Refiner switch by sampling steps"] = True
+7 -6
@@ -1,5 +1,6 @@
import importlib
import logging
import os
import sys
import warnings
from threading import Thread
@@ -18,6 +19,7 @@ def imports():
warnings.filterwarnings(action="ignore", category=DeprecationWarning, module="pytorch_lightning")
warnings.filterwarnings(action="ignore", category=UserWarning, module="torchvision")
os.environ.setdefault('GRADIO_ANALYTICS_ENABLED', 'False')
import gradio # noqa: F401
startup_timer.record("import gradio")
@@ -49,14 +51,12 @@ def check_versions():
def initialize():
from modules import initialize_util
initialize_util.fix_torch_version()
initialize_util.fix_pytorch_lightning()
initialize_util.fix_asyncio_event_loop_policy()
initialize_util.validate_tls_options()
initialize_util.configure_sigint_handler()
initialize_util.configure_opts_onchange()
from modules import modelloader
modelloader.cleanup_models()
from modules import sd_models
sd_models.setup_model()
startup_timer.record("setup SD model")
@@ -110,7 +110,7 @@ def initialize_rest(*, reload_script_modules=False):
with startup_timer.subcategory("load scripts"):
scripts.load_scripts()
if reload_script_modules:
if reload_script_modules and shared.opts.enable_reloading_ui_scripts:
for module in [module for name, module in sys.modules.items() if name.startswith("modules.ui")]:
importlib.reload(module)
startup_timer.record("reload script modules")
@@ -140,16 +140,17 @@ def initialize_rest(*, reload_script_modules=False):
"""
Accesses shared.sd_model property to load model.
After it's available, if it has been loaded before this access by some extension,
its optimization may be None because the list of optimizaers has neet been filled
its optimization may be None because the list of optimizers has not been filled
by that time, so we apply optimization again.
"""
from modules import devices
devices.torch_npu_set_device()
shared.sd_model # noqa: B018
if sd_hijack.current_optimizer is None:
sd_hijack.apply_optimizations()
from modules import devices
devices.first_time_calculation()
if not shared.cmd_opts.skip_load_model_at_start:
Thread(target=load_model).start()
+9
@@ -24,6 +24,13 @@ def fix_torch_version():
torch.__long_version__ = torch.__version__
torch.__version__ = re.search(r'[\d.]+[\d]', torch.__version__).group(0)
def fix_pytorch_lightning():
# Checks if pytorch_lightning.utilities.distributed already exists in the sys.modules cache
if 'pytorch_lightning.utilities.distributed' not in sys.modules:
import pytorch_lightning
# Let the user know the module was not found, then alias it to pytorch_lightning.utilities.rank_zero
print("Pytorch_lightning.distributed not found, attempting pytorch_lightning.rank_zero")
sys.modules["pytorch_lightning.utilities.distributed"] = pytorch_lightning.utilities.rank_zero
def fix_asyncio_event_loop_policy():
"""
@@ -177,6 +184,8 @@ def configure_opts_onchange():
shared.opts.onchange("temp_dir", ui_tempdir.on_tmpdir_changed)
shared.opts.onchange("gradio_theme", shared.reload_gradio_theme)
shared.opts.onchange("cross_attention_optimization", wrap_queued_call(lambda: sd_hijack.model_hijack.redo_hijack(shared.sd_model)), call=False)
shared.opts.onchange("fp8_storage", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
shared.opts.onchange("cache_fp16_weight", wrap_queued_call(lambda: sd_models.reload_model_weights(forced_reload=True)), call=False)
startup_timer.record("opts onchange")
+3 -3
@@ -10,14 +10,14 @@ import torch.hub
from torchvision import transforms
from torchvision.transforms.functional import InterpolationMode
from modules import devices, paths, shared, lowvram, modelloader, errors
from modules import devices, paths, shared, lowvram, modelloader, errors, torch_utils
blip_image_eval_size = 384
clip_model_name = 'ViT-L/14'
Category = namedtuple("Category", ["name", "topn", "items"])
re_topn = re.compile(r"\.top(\d+)\.")
re_topn = re.compile(r"\.top(\d+)$")
def category_types():
return [f.stem for f in Path(shared.interrogator.content_dir).glob('*.txt')]
@@ -131,7 +131,7 @@ class InterrogateModels:
self.clip_model = self.clip_model.to(devices.device_interrogate)
self.dtype = next(self.clip_model.parameters()).dtype
self.dtype = torch_utils.get_param(self.clip_model).dtype
def send_clip_to_ram(self):
if not shared.opts.interrogate_keep_models_in_memory:
+23 -18
@@ -27,8 +27,7 @@ dir_repos = "repositories"
# Whether to default to printing command output
default_command_live = (os.environ.get('WEBUI_LAUNCH_LIVE_OUTPUT') == "1")
if 'GRADIO_ANALYTICS_ENABLED' not in os.environ:
os.environ['GRADIO_ANALYTICS_ENABLED'] = 'False'
os.environ.setdefault('GRADIO_ANALYTICS_ENABLED', 'False')
def check_python_version():
@@ -56,7 +55,7 @@ and delete current Python and "venv" folder in WebUI's directory.
You can download 3.10 Python from here: https://www.python.org/downloads/release/python-3106/
{"Alternatively, use a binary release of WebUI: https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases" if is_windows else ""}
{"Alternatively, use a binary release of WebUI: https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre" if is_windows else ""}
Use --skip-python-version-check to suppress this warning.
""")
@@ -189,7 +188,7 @@ def git_clone(url, dir, name, commithash=None):
return
try:
run(f'"{git}" clone "{url}" "{dir}"', f"Cloning {name} into {dir}...", f"Couldn't clone {name}", live=True)
run(f'"{git}" clone --config core.filemode=false "{url}" "{dir}"', f"Cloning {name} into {dir}...", f"Couldn't clone {name}", live=True)
except RuntimeError:
shutil.rmtree(dir, ignore_errors=True)
raise
@@ -245,11 +244,13 @@ def list_extensions(settings_file):
settings = {}
try:
if os.path.isfile(settings_file):
with open(settings_file, "r", encoding="utf8") as file:
settings = json.load(file)
with open(settings_file, "r", encoding="utf8") as file:
settings = json.load(file)
except FileNotFoundError:
pass
except Exception:
errors.report("Could not load settings", exc_info=True)
errors.report(f'\nCould not load settings\nThe config file "{settings_file}" is likely corrupted\nIt has been moved to the "tmp/config.json"\nReverting config to default\n\n''', exc_info=True)
os.replace(settings_file, os.path.join(script_path, "tmp", "config.json"))
disabled_extensions = set(settings.get('disabled_extensions', []))
disable_all_extensions = settings.get('disable_all_extensions', 'none')
@@ -314,8 +315,8 @@ def requirements_met(requirements_file):
def prepare_environment():
torch_index_url = os.environ.get('TORCH_INDEX_URL', "https://download.pytorch.org/whl/cu118")
torch_command = os.environ.get('TORCH_COMMAND', f"pip install torch==2.0.1 torchvision==0.15.2 --extra-index-url {torch_index_url}")
torch_index_url = os.environ.get('TORCH_INDEX_URL', "https://download.pytorch.org/whl/cu121")
torch_command = os.environ.get('TORCH_COMMAND', f"pip install torch==2.1.2 torchvision==0.16.2 --extra-index-url {torch_index_url}")
if args.use_ipex:
if platform.system() == "Windows":
# The "Nuullll/intel-extension-for-pytorch" wheels were built from IPEX source for Intel Arc GPU: https://github.com/intel/intel-extension-for-pytorch/tree/xpu-main
@@ -337,21 +338,22 @@ def prepare_environment():
torch_index_url = os.environ.get('TORCH_INDEX_URL', "https://pytorch-extension.intel.com/release-whl/stable/xpu/us/")
torch_command = os.environ.get('TORCH_COMMAND', f"pip install torch==2.0.0a0 intel-extension-for-pytorch==2.0.110+gitba7f6c1 --extra-index-url {torch_index_url}")
requirements_file = os.environ.get('REQS_FILE', "requirements_versions.txt")
requirements_file_for_npu = os.environ.get('REQS_FILE_FOR_NPU', "requirements_npu.txt")
xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.20')
xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.23.post1')
clip_package = os.environ.get('CLIP_PACKAGE', "https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip")
openclip_package = os.environ.get('OPENCLIP_PACKAGE', "https://github.com/mlfoundations/open_clip/archive/bb6e834e9c70d9c27d0dc3ecedeebeaeb1ffad6b.zip")
assets_repo = os.environ.get('ASSETS_REPO', "https://github.com/AUTOMATIC1111/stable-diffusion-webui-assets.git")
stable_diffusion_repo = os.environ.get('STABLE_DIFFUSION_REPO', "https://github.com/Stability-AI/stablediffusion.git")
stable_diffusion_xl_repo = os.environ.get('STABLE_DIFFUSION_XL_REPO', "https://github.com/Stability-AI/generative-models.git")
k_diffusion_repo = os.environ.get('K_DIFFUSION_REPO', 'https://github.com/crowsonkb/k-diffusion.git')
codeformer_repo = os.environ.get('CODEFORMER_REPO', 'https://github.com/sczhou/CodeFormer.git')
blip_repo = os.environ.get('BLIP_REPO', 'https://github.com/salesforce/BLIP.git')
assets_commit_hash = os.environ.get('ASSETS_COMMIT_HASH', "6f7db241d2f8ba7457bac5ca9753331f0c266917")
stable_diffusion_commit_hash = os.environ.get('STABLE_DIFFUSION_COMMIT_HASH', "cf1d67a6fd5ea1aa600c4df58e5b47da45f6bdbf")
stable_diffusion_xl_commit_hash = os.environ.get('STABLE_DIFFUSION_XL_COMMIT_HASH', "45c443b316737a4ab6e40413d7794a7f5657c19f")
k_diffusion_commit_hash = os.environ.get('K_DIFFUSION_COMMIT_HASH', "ab527a9a6d347f364e3d185ba6d714e22d80cb3c")
codeformer_commit_hash = os.environ.get('CODEFORMER_COMMIT_HASH', "c5b4593074ba6214284d6acd5f1719b6c5d739af")
blip_commit_hash = os.environ.get('BLIP_COMMIT_HASH', "48211a1594f1321b00f14c9f7a5b4813144b2fb9")
try:
@@ -405,18 +407,14 @@ def prepare_environment():
os.makedirs(os.path.join(script_path, dir_repos), exist_ok=True)
git_clone(assets_repo, repo_dir('stable-diffusion-webui-assets'), "assets", assets_commit_hash)
git_clone(stable_diffusion_repo, repo_dir('stable-diffusion-stability-ai'), "Stable Diffusion", stable_diffusion_commit_hash)
git_clone(stable_diffusion_xl_repo, repo_dir('generative-models'), "Stable Diffusion XL", stable_diffusion_xl_commit_hash)
git_clone(k_diffusion_repo, repo_dir('k-diffusion'), "K-diffusion", k_diffusion_commit_hash)
git_clone(codeformer_repo, repo_dir('CodeFormer'), "CodeFormer", codeformer_commit_hash)
git_clone(blip_repo, repo_dir('BLIP'), "BLIP", blip_commit_hash)
startup_timer.record("clone repositores")
if not is_installed("lpips"):
run_pip(f"install -r \"{os.path.join(repo_dir('CodeFormer'), 'requirements.txt')}\"", "requirements for CodeFormer")
startup_timer.record("install CodeFormer requirements")
if not os.path.isfile(requirements_file):
requirements_file = os.path.join(script_path, requirements_file)
@@ -424,6 +422,13 @@ def prepare_environment():
run_pip(f"install -r \"{requirements_file}\"", "requirements")
startup_timer.record("install requirements")
if not os.path.isfile(requirements_file_for_npu):
requirements_file_for_npu = os.path.join(script_path, requirements_file_for_npu)
if "torch_npu" in torch_command and not requirements_met(requirements_file_for_npu):
run_pip(f"install -r \"{requirements_file_for_npu}\"", "requirements_for_npu")
startup_timer.record("install requirements_for_npu")
if not args.skip_install:
run_extensions_installers(settings_file=args.ui_settings_file)
+40 -23
@@ -1,41 +1,58 @@
import os
import logging
import os
try:
from tqdm.auto import tqdm
from tqdm import tqdm
class TqdmLoggingHandler(logging.Handler):
def __init__(self, level=logging.INFO):
super().__init__(level)
def __init__(self, fallback_handler: logging.Handler):
super().__init__()
self.fallback_handler = fallback_handler
def emit(self, record):
try:
msg = self.format(record)
tqdm.write(msg)
self.flush()
# If there are active tqdm progress bars,
# attempt to not interfere with them.
if tqdm._instances:
tqdm.write(self.format(record))
else:
self.fallback_handler.emit(record)
except Exception:
self.handleError(record)
self.fallback_handler.emit(record)
TQDM_IMPORTED = True
except ImportError:
# tqdm does not exist before first launch
# It will be imported once the UI finishes setting up the environment and reloads.
TQDM_IMPORTED = False
TqdmLoggingHandler = None
def setup_logging(loglevel):
if loglevel is None:
loglevel = os.environ.get("SD_WEBUI_LOG_LEVEL")
loghandlers = []
if not loglevel:
return
if TQDM_IMPORTED:
loghandlers.append(TqdmLoggingHandler())
if logging.root.handlers:
# Already configured, do not interfere
return
if loglevel:
log_level = getattr(logging, loglevel.upper(), None) or logging.INFO
logging.basicConfig(
level=log_level,
format='%(asctime)s %(levelname)s [%(name)s] %(message)s',
datefmt='%Y-%m-%d %H:%M:%S',
handlers=loghandlers
)
formatter = logging.Formatter(
'%(asctime)s %(levelname)s [%(name)s] %(message)s',
'%Y-%m-%d %H:%M:%S',
)
if os.environ.get("SD_WEBUI_RICH_LOG"):
from rich.logging import RichHandler
handler = RichHandler()
else:
handler = logging.StreamHandler()
handler.setFormatter(formatter)
if TqdmLoggingHandler:
handler = TqdmLoggingHandler(handler)
handler.setFormatter(formatter)
log_level = getattr(logging, loglevel.upper(), None) or logging.INFO
logging.root.setLevel(log_level)
logging.root.addHandler(handler)
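
A hedged usage sketch; the level string is an example, and SD_WEBUI_LOG_LEVEL / SD_WEBUI_RICH_LOG can be set in the environment instead:

import logging
from modules import logging_config

logging_config.setup_logging("DEBUG")
# while tqdm progress bars are active, records are routed through tqdm.write()
logging.getLogger("sd").info("hello from webui-style logging")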
+1 -1
@@ -12,7 +12,7 @@ log = logging.getLogger(__name__)
# before torch version 1.13, has_mps is only available in nightly pytorch and macOS 12.3+,
# use check `getattr` and try it for compatibility.
# in torch version 1.13, backends.mps.is_available() and backends.mps.is_built() are introduced in to check mps availabilty,
# in torch version 1.13, backends.mps.is_available() and backends.mps.is_built() are introduced in to check mps availability,
# since torch 2.0.1+ nightly build, getattr(torch, 'has_mps', False) was deprecated, see https://github.com/pytorch/pytorch/pull/103279
def check_for_mps() -> bool:
if version.parse(torch.__version__) <= version.parse("2.0.1"):
+31 -34
@@ -1,42 +1,39 @@
from PIL import Image, ImageFilter, ImageOps
def get_crop_region_v2(mask, pad=0):
"""
Finds a rectangular region that contains all masked areas in a mask.
Returns None if the mask is completely black (all 0).
Parameters:
mask: PIL.Image.Image in L mode, or a 2D numpy array
pad: int, number of pixels by which the region is extended on all sides
Returns: (x1, y1, x2, y2) | None
Introduced post 1.9.0
"""
mask = mask if isinstance(mask, Image.Image) else Image.fromarray(mask)
if box := mask.getbbox():
x1, y1, x2, y2 = box
return (max(x1 - pad, 0), max(y1 - pad, 0), min(x2 + pad, mask.size[0]), min(y2 + pad, mask.size[1])) if pad else box
def get_crop_region(mask, pad=0):
"""finds a rectangular region that contains all masked ares in an image. Returns (x1, y1, x2, y2) coordinates of the rectangle.
For example, if a user has painted the top-right part of a 512x512 image", the result may be (256, 0, 512, 256)"""
"""
Same function as get_crop_region_v2, but handles a completely black mask (all 0) differently:
when the mask is all black it still returns coordinates, but they may be invalid, i.e. x1 > x2 or y1 > y2.
Note: it is possible for the coordinates to become "valid" again if the pad size is sufficiently large, since the return is
(mask_size.x - pad, mask_size.y - pad, pad, pad)
h, w = mask.shape
crop_left = 0
for i in range(w):
if not (mask[:, i] == 0).all():
break
crop_left += 1
crop_right = 0
for i in reversed(range(w)):
if not (mask[:, i] == 0).all():
break
crop_right += 1
crop_top = 0
for i in range(h):
if not (mask[i] == 0).all():
break
crop_top += 1
crop_bottom = 0
for i in reversed(range(h)):
if not (mask[i] == 0).all():
break
crop_bottom += 1
return (
int(max(crop_left-pad, 0)),
int(max(crop_top-pad, 0)),
int(min(w - crop_right + pad, w)),
int(min(h - crop_bottom + pad, h))
)
Extension developers should use get_crop_region_v2 instead, unless compatibility considerations require otherwise.
"""
mask = mask if isinstance(mask, Image.Image) else Image.fromarray(mask)
if box := get_crop_region_v2(mask, pad):
return box
x1, y1 = mask.size
x2 = y2 = 0
return max(x1 - pad, 0), max(y1 - pad, 0), min(x2 + pad, mask.size[0]), min(y2 + pad, mask.size[1])
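
A behavioral sketch of the two crop-region functions with a tiny synthetic mask (sizes chosen for illustration):

import numpy as np
from modules.masking import get_crop_region, get_crop_region_v2

mask = np.zeros((64, 64), dtype=np.uint8)
mask[10:20, 30:40] = 255                # a 10x10 masked patch
print(get_crop_region_v2(mask, pad=4))  # (26, 6, 44, 24), clamped to the mask bounds
empty = np.zeros((64, 64), dtype=np.uint8)
print(get_crop_region_v2(empty))        # None for an all-black mask
print(get_crop_region(empty))           # (64, 64, 0, 0) - possibly invalid coordinates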
def expand_crop_region(crop_region, processing_width, processing_height, image_width, image_height):
+44 -54
@@ -1,13 +1,20 @@
from __future__ import annotations
import os
import shutil
import importlib
import logging
import os
from typing import TYPE_CHECKING
from urllib.parse import urlparse
import torch
from modules import shared
from modules.upscaler import Upscaler, UpscalerLanczos, UpscalerNearest, UpscalerNone
from modules.paths import script_path, models_path
if TYPE_CHECKING:
import spandrel
logger = logging.getLogger(__name__)
def load_file_from_url(
@@ -90,54 +97,6 @@ def friendly_name(file: str):
return model_name
def cleanup_models():
# This code could probably be more efficient if we used a tuple list or something to store the src/destinations
# and then enumerate that, but this works for now. In the future, it'd be nice to just have every "model" scaler
# somehow auto-register and just do these things...
root_path = script_path
src_path = models_path
dest_path = os.path.join(models_path, "Stable-diffusion")
move_files(src_path, dest_path, ".ckpt")
move_files(src_path, dest_path, ".safetensors")
src_path = os.path.join(root_path, "ESRGAN")
dest_path = os.path.join(models_path, "ESRGAN")
move_files(src_path, dest_path)
src_path = os.path.join(models_path, "BSRGAN")
dest_path = os.path.join(models_path, "ESRGAN")
move_files(src_path, dest_path, ".pth")
src_path = os.path.join(root_path, "gfpgan")
dest_path = os.path.join(models_path, "GFPGAN")
move_files(src_path, dest_path)
src_path = os.path.join(root_path, "SwinIR")
dest_path = os.path.join(models_path, "SwinIR")
move_files(src_path, dest_path)
src_path = os.path.join(root_path, "repositories/latent-diffusion/experiments/pretrained_models/")
dest_path = os.path.join(models_path, "LDSR")
move_files(src_path, dest_path)
def move_files(src_path: str, dest_path: str, ext_filter: str = None):
try:
os.makedirs(dest_path, exist_ok=True)
if os.path.exists(src_path):
for file in os.listdir(src_path):
fullpath = os.path.join(src_path, file)
if os.path.isfile(fullpath):
if ext_filter is not None:
if ext_filter not in file:
continue
print(f"Moving {file} from {src_path} to {dest_path}.")
try:
shutil.move(fullpath, dest_path)
except Exception:
pass
if len(os.listdir(src_path)) == 0:
print(f"Removing empty folder: {src_path}")
shutil.rmtree(src_path, True)
except Exception:
pass
def load_upscalers():
# We can only do this 'magic' method to dynamically load upscalers if they are referenced,
# so we'll try to import any _model.py files before looking in __subclasses__
@@ -151,7 +110,7 @@ def load_upscalers():
except Exception:
pass
datas = []
data = []
commandline_options = vars(shared.cmd_opts)
# some of upscaler classes will not go away after reloading their modules, and we'll end
@@ -170,10 +129,41 @@ def load_upscalers():
scaler = cls(commandline_model_path)
scaler.user_path = commandline_model_path
scaler.model_download_path = commandline_model_path or scaler.model_path
datas += scaler.scalers
data += scaler.scalers
shared.sd_upscalers = sorted(
datas,
data,
# Special case for UpscalerNone keeps it at the beginning of the list.
key=lambda x: x.name.lower() if not isinstance(x.scaler, (UpscalerNone, UpscalerLanczos, UpscalerNearest)) else ""
)
def load_spandrel_model(
path: str | os.PathLike,
*,
device: str | torch.device | None,
prefer_half: bool = False,
dtype: str | torch.dtype | None = None,
expected_architecture: str | None = None,
) -> spandrel.ModelDescriptor:
import spandrel
model_descriptor = spandrel.ModelLoader(device=device).load_from_file(str(path))
if expected_architecture and model_descriptor.architecture != expected_architecture:
logger.warning(
f"Model {path!r} is not a {expected_architecture!r} model (got {model_descriptor.architecture!r})",
)
half = False
if prefer_half:
if model_descriptor.supports_half:
model_descriptor.model.half()
half = True
else:
logger.info("Model %s does not support half precision, ignoring --half", path)
if dtype:
model_descriptor.model.to(dtype=dtype)
model_descriptor.model.eval()
logger.debug(
"Loaded %s from %s (device=%s, half=%s, dtype=%s)",
model_descriptor, path, device, half, dtype,
)
return model_descriptor
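
A hypothetical usage sketch of load_spandrel_model; the model path is a placeholder, and the spandrel package must be installed:

from modules.modelloader import load_spandrel_model

descriptor = load_spandrel_model(
    "models/ESRGAN/4x_example.pth",  # placeholder path, not from this diff
    device="cpu",
    prefer_half=False,
    expected_architecture="ESRGAN",  # mismatch logs a warning rather than raising
)
model = descriptor.model  # an eval()-mode torch module ready for inference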
+4 -4
@@ -341,7 +341,7 @@ class DDPM(pl.LightningModule):
elif self.parameterization == "x0":
target = x_start
else:
raise NotImplementedError(f"Paramterization {self.parameterization} not yet supported")
raise NotImplementedError(f"Parameterization {self.parameterization} not yet supported")
loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3])
@@ -901,7 +901,7 @@ class LatentDiffusion(DDPM):
def apply_model(self, x_noisy, t, cond, return_ids=False):
if isinstance(cond, dict):
# hybrid case, cond is exptected to be a dict
# hybrid case, cond is expected to be a dict
pass
else:
if not isinstance(cond, list):
@@ -937,7 +937,7 @@ class LatentDiffusion(DDPM):
cond_list = [{c_key: [c[:, :, :, :, i]]} for i in range(c.shape[-1])]
elif self.cond_stage_key == 'coordinates_bbox':
assert 'original_image_size' in self.split_input_params, 'BoudingBoxRescaling is missing original_image_size'
assert 'original_image_size' in self.split_input_params, 'BoundingBoxRescaling is missing original_image_size'
# assuming padding of unfold is always 0 and its dilation is always 1
n_patches_per_row = int((w - ks[0]) / stride[0] + 1)
@@ -947,7 +947,7 @@ class LatentDiffusion(DDPM):
num_downs = self.first_stage_model.encoder.num_resolutions - 1
rescale_latent = 2 ** (num_downs)
# get top left postions of patches as conforming for the bbbox tokenizer, therefore we
# get top left positions of patches as conforming for the bbbox tokenizer, therefore we
# need to rescale the tl patch coordinates to be in between (0,1)
tl_patch_coordinates = [(rescale_latent * stride[0] * (patch_nr % n_patches_per_row) / full_img_w,
rescale_latent * stride[1] * (patch_nr // n_patches_per_row) / full_img_h)
+31
@@ -0,0 +1,31 @@
import importlib
import torch
from modules import shared
def check_for_npu():
if importlib.util.find_spec("torch_npu") is None:
return False
import torch_npu
try:
# Will raise a RuntimeError if no NPU is found
_ = torch_npu.npu.device_count()
return torch.npu.is_available()
except RuntimeError:
return False
def get_npu_device_string():
if shared.cmd_opts.device_id is not None:
return f"npu:{shared.cmd_opts.device_id}"
return "npu:0"
def torch_npu_gc():
with torch.npu.device(get_npu_device_string()):
torch.npu.empty_cache()
has_npu = check_for_npu()
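
A hedged sketch of how the helpers above might be used to pick a device; has_npu is only True when torch_npu is installed and an NPU is detected:

import torch
from modules import npu_specific

if npu_specific.has_npu:
    device = torch.device(npu_specific.get_npu_device_string())  # e.g. "npu:0"
    npu_specific.torch_npu_gc()  # free cached NPU memory
else:
    device = torch.device("cpu")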
+33 -5
@@ -1,3 +1,4 @@
import os
import json
import sys
from dataclasses import dataclass
@@ -6,6 +7,7 @@ import gradio as gr
from modules import errors
from modules.shared_cmd_options import cmd_opts
from modules.paths_internal import script_path
class OptionInfo:
@@ -91,18 +93,35 @@ class Options:
if self.data is not None:
if key in self.data or key in self.data_labels:
# Check that settings aren't globally frozen
assert not cmd_opts.freeze_settings, "changing settings is disabled"
# Get the info related to the setting being changed
info = self.data_labels.get(key, None)
if info.do_not_save:
return
# Restrict component arguments
comp_args = info.component_args if info else None
if isinstance(comp_args, dict) and comp_args.get('visible', True) is False:
raise RuntimeError(f"not possible to set {key} because it is restricted")
raise RuntimeError(f"not possible to set '{key}' because it is restricted")
# Check that this section isn't frozen
if cmd_opts.freeze_settings_in_sections is not None:
frozen_sections = list(map(str.strip, cmd_opts.freeze_settings_in_sections.split(','))) # Trim whitespace from section names
section_key = info.section[0]
section_name = info.section[1]
assert section_key not in frozen_sections, f"not possible to set '{key}' because settings in section '{section_name}' ({section_key}) are frozen with --freeze-settings-in-sections"
# Check that this section of the settings isn't frozen
if cmd_opts.freeze_specific_settings is not None:
frozen_keys = list(map(str.strip, cmd_opts.freeze_specific_settings.split(','))) # Trim whitespace from setting keys
assert key not in frozen_keys, f"not possible to set '{key}' because this setting is frozen with --freeze-specific-settings"
# Check shorthand option which disables editing options in "saving-paths"
if cmd_opts.hide_ui_dir_config and key in self.restricted_opts:
raise RuntimeError(f"not possible to set {key} because it is restricted")
raise RuntimeError(f"not possible to set '{key}' because it is restricted with --hide_ui_dir_config")
self.data[key] = value
return
@@ -176,9 +195,15 @@ class Options:
return type_x == type_y
def load(self, filename):
with open(filename, "r", encoding="utf8") as file:
self.data = json.load(file)
try:
with open(filename, "r", encoding="utf8") as file:
self.data = json.load(file)
except FileNotFoundError:
self.data = {}
except Exception:
errors.report(f'\nCould not load settings\nThe config file "{filename}" is likely corrupted\nIt has been moved to the "tmp/config.json"\nReverting config to default\n\n''', exc_info=True)
os.replace(filename, os.path.join(script_path, "tmp", "config.json"))
self.data = {}
# 1.6.0 VAE defaults
if self.data.get('sd_vae_as_default') is not None and self.data.get('sd_vae_overrides_per_model_preferences') is None:
self.data['sd_vae_overrides_per_model_preferences'] = not self.data.get('sd_vae_as_default')
@@ -215,6 +240,9 @@ class Options:
item_categories = {}
for item in self.data_labels.values():
if item.section[0] is None:
continue
category = categories.mapping.get(item.category_id)
category = "Uncategorized" if category is None else category.label
if category not in item_categories:
-1
@@ -38,7 +38,6 @@ mute_sdxl_imports()
path_dirs = [
(sd_path, 'ldm', 'Stable Diffusion', []),
(os.path.join(sd_path, '../generative-models'), 'sgm', 'Stable Diffusion XL', ["sgm"]),
(os.path.join(sd_path, '../CodeFormer'), 'inference_codeformer.py', 'CodeFormer', []),
(os.path.join(sd_path, '../BLIP'), 'models/blip.py', 'BLIP', []),
(os.path.join(sd_path, '../k-diffusion'), 'k_diffusion/sampling.py', 'k_diffusion', ["atstart"]),
]
+5
@@ -4,6 +4,10 @@ import argparse
import os
import sys
import shlex
from pathlib import Path
normalized_filepath = lambda filepath: str(Path(filepath).absolute())
commandline_args = os.environ.get('COMMANDLINE_ARGS', "")
sys.argv += shlex.split(commandline_args)
@@ -28,5 +32,6 @@ models_path = os.path.join(data_path, "models")
extensions_dir = os.path.join(data_path, "extensions")
extensions_builtin_dir = os.path.join(script_path, "extensions-builtin")
config_states_dir = os.path.join(script_path, "config_states")
default_output_dir = os.path.join(data_path, "outputs")
roboto_ttf_file = os.path.join(modules_path, 'Roboto-Regular.ttf')
+15 -14
@@ -2,7 +2,7 @@ import os
from PIL import Image
from modules import shared, images, devices, scripts, scripts_postprocessing, ui_common, generation_parameters_copypaste
from modules import shared, images, devices, scripts, scripts_postprocessing, ui_common, infotext_utils
from modules.shared import opts
@@ -17,10 +17,10 @@ def run_postprocessing(extras_mode, image, image_folder, input_dir, output_dir,
if extras_mode == 1:
for img in image_folder:
if isinstance(img, Image.Image):
image = img
image = images.fix_image(img)
fn = ''
else:
image = Image.open(os.path.abspath(img.name))
image = images.read(os.path.abspath(img.name))
fn = os.path.splitext(img.orig_name)[0]
yield image, fn
elif extras_mode == 2:
@@ -56,19 +56,17 @@ def run_postprocessing(extras_mode, image, image_folder, input_dir, output_dir,
if isinstance(image_placeholder, str):
try:
image_data = Image.open(image_placeholder)
image_data = images.read(image_placeholder)
except Exception:
continue
else:
image_data = image_placeholder
shared.state.assign_current_image(image_data)
parameters, existing_pnginfo = images.read_info_from_image(image_data)
if parameters:
existing_pnginfo["parameters"] = parameters
initial_pp = scripts_postprocessing.PostprocessedImage(image_data.convert("RGB"))
initial_pp = scripts_postprocessing.PostprocessedImage(image_data if image_data.mode in ("RGBA", "RGB") else image_data.convert("RGB"))
scripts.scripts_postproc.run(initial_pp, args)
@@ -86,22 +84,25 @@ def run_postprocessing(extras_mode, image, image_folder, input_dir, output_dir,
basename = ''
forced_filename = None
infotext = ", ".join([k if k == v else f'{k}: {generation_parameters_copypaste.quote(v)}' for k, v in pp.info.items() if v is not None])
infotext = ", ".join([k if k == v else f'{k}: {infotext_utils.quote(v)}' for k, v in pp.info.items() if v is not None])
if opts.enable_pnginfo:
pp.image.info = existing_pnginfo
pp.image.info["postprocessing"] = infotext
shared.state.assign_current_image(pp.image)
if save_output:
fullfn, _ = images.save_image(pp.image, path=outpath, basename=basename, extension=opts.samples_format, info=infotext, short_filename=True, no_prompt=True, grid=False, pnginfo_section_name="extras", existing_info=existing_pnginfo, forced_filename=forced_filename, suffix=suffix)
if pp.caption:
caption_filename = os.path.splitext(fullfn)[0] + ".txt"
if os.path.isfile(caption_filename):
existing_caption = ""
try:
with open(caption_filename, encoding="utf8") as file:
existing_caption = file.read().strip()
else:
existing_caption = ""
except FileNotFoundError:
pass
action = shared.opts.postprocessing_existing_caption_action
if action == 'Prepend' and existing_caption:
@@ -121,8 +122,6 @@ def run_postprocessing(extras_mode, image, image_folder, input_dir, output_dir,
if extras_mode != 2 or show_extras_results:
outputs.append(pp.image)
image_data.close()
devices.torch_gc()
shared.state.end()
return outputs, ui_common.plaintext_to_html(infotext), ''
@@ -132,13 +131,15 @@ def run_postprocessing_webui(id_task, *args, **kwargs):
return run_postprocessing(*args, **kwargs)
def run_extras(extras_mode, resize_mode, image, image_folder, input_dir, output_dir, show_extras_results, gfpgan_visibility, codeformer_visibility, codeformer_weight, upscaling_resize, upscaling_resize_w, upscaling_resize_h, upscaling_crop, extras_upscaler_1, extras_upscaler_2, extras_upscaler_2_visibility, upscale_first: bool, save_output: bool = True):
def run_extras(extras_mode, resize_mode, image, image_folder, input_dir, output_dir, show_extras_results, gfpgan_visibility, codeformer_visibility, codeformer_weight, upscaling_resize, upscaling_resize_w, upscaling_resize_h, upscaling_crop, extras_upscaler_1, extras_upscaler_2, extras_upscaler_2_visibility, upscale_first: bool, save_output: bool = True, max_side_length: int = 0):
"""old handler for API"""
args = scripts.scripts_postproc.create_args_for_run({
"Upscale": {
"upscale_enabled": True,
"upscale_mode": resize_mode,
"upscale_by": upscaling_resize,
"max_side_length": max_side_length,
"upscale_to_width": upscaling_resize_w,
"upscale_to_height": upscaling_resize_h,
"upscale_crop": upscaling_crop,
+289 -68
@@ -16,7 +16,7 @@ from skimage import exposure
from typing import Any
import modules.sd_hijack
from modules import devices, prompt_parser, masking, sd_samplers, lowvram, generation_parameters_copypaste, extra_networks, sd_vae_approx, scripts, sd_samplers_common, sd_unet, errors, rng
from modules import devices, prompt_parser, masking, sd_samplers, lowvram, infotext_utils, extra_networks, sd_vae_approx, scripts, sd_samplers_common, sd_unet, errors, rng
from modules.rng import slerp # noqa: F401
from modules.sd_hijack import model_hijack
from modules.sd_samplers_common import images_tensor_to_samples, decode_first_stage, approximation_indexes
@@ -62,28 +62,37 @@ def apply_color_correction(correction, original_image):
return image.convert('RGB')
def apply_overlay(image, paste_loc, index, overlays):
if overlays is None or index >= len(overlays):
return image
def uncrop(image, dest_size, paste_loc):
x, y, w, h = paste_loc
base_image = Image.new('RGBA', dest_size)
image = images.resize_image(1, image, w, h)
base_image.paste(image, (x, y))
image = base_image
overlay = overlays[index]
return image
def apply_overlay(image, paste_loc, overlay):
if overlay is None:
return image, image.copy()
if paste_loc is not None:
x, y, w, h = paste_loc
base_image = Image.new('RGBA', (overlay.width, overlay.height))
image = images.resize_image(1, image, w, h)
base_image.paste(image, (x, y))
image = base_image
image = uncrop(image, (overlay.width, overlay.height), paste_loc)
original_denoised_image = image.copy()
image = image.convert('RGBA')
image.alpha_composite(overlay)
image = image.convert('RGB')
return image
return image, original_denoised_image
def create_binary_mask(image):
def create_binary_mask(image, round=True):
if image.mode == 'RGBA' and image.getextrema()[-1] != (255, 255):
image = image.split()[-1].convert("L").point(lambda x: 255 if x > 128 else 0)
if round:
image = image.split()[-1].convert("L").point(lambda x: 255 if x > 128 else 0)
else:
image = image.split()[-1].convert("L")
else:
image = image.convert('L')
return image
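
A behavioral sketch of create_binary_mask with a synthetic RGBA image (the size and alpha value are illustrative); importing modules.processing assumes a full webui environment:

from PIL import Image
from modules.processing import create_binary_mask

rgba = Image.new("RGBA", (64, 64), (255, 0, 0, 100))  # uniform semi-transparent alpha
hard = create_binary_mask(rgba)               # alpha thresholded at 128 to 0/255
soft = create_binary_mask(rgba, round=False)  # alpha kept as a soft greyscale mask
print(hard.getextrema(), soft.getextrema())   # (0, 0) and (100, 100)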
@@ -106,6 +115,21 @@ def txt2img_image_conditioning(sd_model, x, width, height):
return x.new_zeros(x.shape[0], 2*sd_model.noise_augmentor.time_embed.dim, dtype=x.dtype, device=x.device)
else:
sd = sd_model.model.state_dict()
diffusion_model_input = sd.get('diffusion_model.input_blocks.0.0.weight', None)
if diffusion_model_input is not None:
if diffusion_model_input.shape[1] == 9:
# The "masked-image" in this case will just be all 0.5 since the entire image is masked.
image_conditioning = torch.ones(x.shape[0], 3, height, width, device=x.device) * 0.5
image_conditioning = images_tensor_to_samples(image_conditioning,
approximation_indexes.get(opts.sd_vae_encode_method))
# Add the fake full 1s mask to the first dimension.
image_conditioning = torch.nn.functional.pad(image_conditioning, (0, 0, 0, 0, 1, 0), value=1.0)
image_conditioning = image_conditioning.to(x.dtype)
return image_conditioning
# Dummy zero conditioning if we're not using inpainting or unclip models.
# Still takes up a bit of memory, but no encoder call.
# Pretty sure we can just make this a 1x1 image since its not going to be used besides its batch size.
@@ -128,6 +152,7 @@ class StableDiffusionProcessing:
seed_resize_from_w: int = -1
seed_enable_extras: bool = True
sampler_name: str = None
scheduler: str = None
batch_size: int = 1
n_iter: int = 1
steps: int = 50
@@ -157,6 +182,7 @@ class StableDiffusionProcessing:
token_merging_ratio = 0
token_merging_ratio_hr = 0
disable_extra_networks: bool = False
firstpass_image: Image = None
scripts_value: scripts.ScriptRunner = field(default=None, init=False)
script_args_value: list = field(default=None, init=False)
@@ -308,7 +334,7 @@ class StableDiffusionProcessing:
c_adm = torch.cat((c_adm, noise_level_emb), 1)
return c_adm
def inpainting_image_conditioning(self, source_image, latent_image, image_mask=None):
def inpainting_image_conditioning(self, source_image, latent_image, image_mask=None, round_image_mask=True):
self.is_using_inpainting_conditioning = True
# Handle the different mask inputs
@@ -320,8 +346,10 @@ class StableDiffusionProcessing:
conditioning_mask = conditioning_mask.astype(np.float32) / 255.0
conditioning_mask = torch.from_numpy(conditioning_mask[None, None])
# Inpainting model uses a discretized mask as input, so we round to either 1.0 or 0.0
conditioning_mask = torch.round(conditioning_mask)
if round_image_mask:
# Caller is requesting a discretized mask as input, so we round to either 1.0 or 0.0
conditioning_mask = torch.round(conditioning_mask)
else:
conditioning_mask = source_image.new_ones(1, 1, *source_image.shape[-2:])
@@ -345,7 +373,7 @@ class StableDiffusionProcessing:
return image_conditioning
def img2img_image_conditioning(self, source_image, latent_image, image_mask=None):
def img2img_image_conditioning(self, source_image, latent_image, image_mask=None, round_image_mask=True):
source_image = devices.cond_cast_float(source_image)
# HACK: Using introspection as the Depth2Image model doesn't appear to uniquely
@@ -357,11 +385,17 @@ class StableDiffusionProcessing:
return self.edit_image_conditioning(source_image)
if self.sampler.conditioning_key in {'hybrid', 'concat'}:
return self.inpainting_image_conditioning(source_image, latent_image, image_mask=image_mask)
return self.inpainting_image_conditioning(source_image, latent_image, image_mask=image_mask, round_image_mask=round_image_mask)
if self.sampler.conditioning_key == "crossattn-adm":
return self.unclip_image_conditioning(source_image)
sd = self.sampler.model_wrap.inner_model.model.state_dict()
diffusion_model_input = sd.get('diffusion_model.input_blocks.0.0.weight', None)
if diffusion_model_input is not None:
if diffusion_model_input.shape[1] == 9:
return self.inpainting_image_conditioning(source_image, latent_image, image_mask=image_mask)
# Dummy zero conditioning if we're not using inpainting or depth model.
return latent_image.new_zeros(latent_image.shape[0], 5, 1, 1)
@@ -422,6 +456,9 @@ class StableDiffusionProcessing:
opts.sdxl_crop_top,
self.width,
self.height,
opts.fp8_storage,
opts.cache_fp16_weight,
opts.emphasis,
)
def get_conds_with_caching(self, function, required_prompts, steps, caches, extra_network_data, hires_steps=None):
@@ -571,7 +608,7 @@ class Processed:
"version": self.version,
}
return json.dumps(obj)
return json.dumps(obj, default=lambda o: None)
def infotext(self, p: StableDiffusionProcessing, index):
return create_infotext(p, self.all_prompts, self.all_seeds, self.all_subseeds, comments=[], position_in_batch=index % self.batch_size, iteration=index // self.batch_size)
@@ -596,20 +633,33 @@ def decode_latent_batch(model, batch, target_device=None, check_for_nans=False):
sample = decode_first_stage(model, batch[i:i + 1])[0]
if check_for_nans:
try:
devices.test_for_nans(sample, "vae")
except devices.NansException as e:
if devices.dtype_vae == torch.float32 or not shared.opts.auto_vae_precision:
if shared.opts.auto_vae_precision_bfloat16:
autofix_dtype = torch.bfloat16
autofix_dtype_text = "bfloat16"
autofix_dtype_setting = "Automatically convert VAE to bfloat16"
autofix_dtype_comment = ""
elif shared.opts.auto_vae_precision:
autofix_dtype = torch.float32
autofix_dtype_text = "32-bit float"
autofix_dtype_setting = "Automatically revert VAE to 32-bit floats"
autofix_dtype_comment = "\nTo always start with 32-bit VAE, use --no-half-vae commandline flag."
else:
raise e
if devices.dtype_vae == autofix_dtype:
raise e
errors.print_error_explanation(
"A tensor with all NaNs was produced in VAE.\n"
"Web UI will now convert VAE into 32-bit float and retry.\n"
"To disable this behavior, disable the 'Automatically revert VAE to 32-bit floats' setting.\n"
"To always start with 32-bit VAE, use --no-half-vae commandline flag."
f"Web UI will now convert VAE into {autofix_dtype_text} and retry.\n"
f"To disable this behavior, disable the '{autofix_dtype_setting}' setting.{autofix_dtype_comment}"
)
devices.dtype_vae = torch.float32
devices.dtype_vae = autofix_dtype
model.first_stage_model.to(devices.dtype_vae)
batch = batch.to(devices.dtype_vae)
@@ -654,7 +704,53 @@ def program_version():
def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments=None, iteration=0, position_in_batch=0, use_main_prompt=False, index=None, all_negative_prompts=None):
if index is None:
"""
This function generates the infotext that is stored in generated images; it contains the parameters required to generate the image.
Args:
p: StableDiffusionProcessing
all_prompts: list[str]
all_seeds: list[int]
all_subseeds: list[int]
comments: list[str]
iteration: int
position_in_batch: int
use_main_prompt: bool
index: int
all_negative_prompts: list[str]
Returns: str
Extra generation params
The p.extra_generation_params dictionary allows additional parameters to be added to the infotext;
this can be used by the base webui or by extensions.
To add a new entry, add a key-value pair; the dictionary key will be used as the key of the parameter in the infotext.
The value can be defined as:
- str | None
- List[str|None]
- callable func(**kwargs) -> str | None
When defined as a string, it is used as-is without extra processing; this is the most common use case.
Defining it as a list allows for a parameter that changes across images in the job, for example the 'Seed' parameter.
The list should have the same length as the total number of images in the entire job.
Defining it as a callable allows for parameters that cannot be generated earlier, or that require extra logic.
For example 'Hires prompt': the hr_prompt might be changed by processing in the pipeline or by extensions
and may vary across different images, so defining it as a static string or list would not work; a callable is
resolved per image (see the sketch below).
The function receives locals() as **kwargs, and so has access to variables like 'p' and 'index'.
The base signature of the function should be:
func(**kwargs) -> str | None
Optionally it can declare additional arguments that will be filled in when it is called:
func(p, index, **kwargs) -> str | None
Note: for better future compatibility, even though the function has access to all variables in locals(),
it is recommended to only use the arguments present in the function signature of create_infotext.
For actual implementation examples, see StableDiffusionProcessingTxt2Img.init > get_hr_prompt.
"""
if use_main_prompt:
index = 0
elif index is None:
index = position_in_batch + iteration * p.batch_size
if all_negative_prompts is None:
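
As a hedged illustration of the three value forms described in the docstring above (the keys and values are made up; in real use the dictionary lives on the StableDiffusionProcessing object p):

def hires_note(p=None, index=0, **kwargs):
    # callable form: invoked with create_infotext's locals(), may return str or None
    return f"image {index}"

extra_generation_params = {
    "My extension": "enabled",         # static string, used as-is
    "Seed note": ["first", "second"],  # list: one entry per image in the whole job
    "Hires note": hires_note,          # callable, resolved at infotext-creation time
}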
@@ -665,6 +761,9 @@ def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments=None, iter
token_merging_ratio = p.get_token_merging_ratio()
token_merging_ratio_hr = p.get_token_merging_ratio(for_hr=True)
prompt_text = p.main_prompt if use_main_prompt else all_prompts[index]
negative_prompt = p.main_negative_prompt if use_main_prompt else all_negative_prompts[index]
uses_ensd = opts.eta_noise_seed_delta != 0
if uses_ensd:
uses_ensd = sd_samplers_common.is_sampler_using_eta_noise_seed_delta(p)
@@ -672,6 +771,7 @@ def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments=None, iter
generation_params = {
"Steps": p.steps,
"Sampler": p.sampler_name,
"Schedule type": p.scheduler,
"CFG scale": p.cfg_scale,
"Image CFG scale": getattr(p, 'image_cfg_scale', None),
"Seed": p.all_seeds[0] if use_main_prompt else all_seeds[index],
@@ -679,12 +779,14 @@ def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments=None, iter
"Size": f"{p.width}x{p.height}",
"Model hash": p.sd_model_hash if opts.add_model_hash_to_info else None,
"Model": p.sd_model_name if opts.add_model_name_to_info else None,
"FP8 weight": opts.fp8_storage if devices.fp8 else None,
"Cache FP16 weight for LoRA": opts.cache_fp16_weight if devices.fp8 else None,
"VAE hash": p.sd_vae_hash if opts.add_vae_hash_to_info else None,
"VAE": p.sd_vae_name if opts.add_vae_name_to_info else None,
"Variation seed": (None if p.subseed_strength == 0 else (p.all_subseeds[0] if use_main_prompt else all_subseeds[index])),
"Variation seed strength": (None if p.subseed_strength == 0 else p.subseed_strength),
"Seed resize from": (None if p.seed_resize_from_w <= 0 or p.seed_resize_from_h <= 0 else f"{p.seed_resize_from_w}x{p.seed_resize_from_h}"),
"Denoising strength": getattr(p, 'denoising_strength', None),
"Denoising strength": p.extra_generation_params.get("Denoising strength"),
"Conditional mask weight": getattr(p, "inpainting_mask_weight", shared.opts.inpainting_mask_weight) if p.is_using_inpainting_conditioning else None,
"Clip skip": None if clip_skip <= 1 else clip_skip,
"ENSD": opts.eta_noise_seed_delta if uses_ensd else None,
@@ -699,10 +801,19 @@ def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments=None, iter
"User": p.user if opts.add_user_name_to_info else None,
}
generation_params_text = ", ".join([k if k == v else f'{k}: {generation_parameters_copypaste.quote(v)}' for k, v in generation_params.items() if v is not None])
for key, value in generation_params.items():
try:
if isinstance(value, list):
generation_params[key] = value[index]
elif callable(value):
generation_params[key] = value(**locals())
except Exception:
errors.report(f'Error creating infotext for key "{key}"', exc_info=True)
generation_params[key] = None
prompt_text = p.main_prompt if use_main_prompt else all_prompts[index]
negative_prompt_text = f"\nNegative prompt: {p.main_negative_prompt if use_main_prompt else all_negative_prompts[index]}" if all_negative_prompts[index] else ""
generation_params_text = ", ".join([k if k == v else f'{k}: {infotext_utils.quote(v)}' for k, v in generation_params.items() if v is not None])
negative_prompt_text = f"\nNegative prompt: {negative_prompt}" if negative_prompt else ""
return f"{prompt_text}{negative_prompt_text}\n{generation_params_text}".strip()
@@ -818,7 +929,7 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
if state.skipped:
state.skipped = False
if state.interrupted:
if state.interrupted or state.stopping_generation:
break
sd_models.reload_model_weights() # model can be changed for example by refiner
@@ -845,28 +956,35 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
if p.scripts is not None:
p.scripts.process_batch(p, batch_number=n, prompts=p.prompts, seeds=p.seeds, subseeds=p.subseeds)
p.setup_conds()
p.extra_generation_params.update(model_hijack.extra_generation_params)
# params.txt should be saved after scripts.process_batch, since the
# infotext could be modified by that callback
# Example: a wildcard processed by process_batch sets an extra model
# strength, which is saved as "Model Strength: 1.0" in the infotext
if n == 0:
if n == 0 and not cmd_opts.no_prompt_history:
with open(os.path.join(paths.data_path, "params.txt"), "w", encoding="utf8") as file:
processed = Processed(p, [])
file.write(processed.infotext(p, 0))
p.setup_conds()
for comment in model_hijack.comments:
p.comment(comment)
p.extra_generation_params.update(model_hijack.extra_generation_params)
if p.n_iter > 1:
shared.state.job = f"Batch {n+1} out of {p.n_iter}"
sd_models.apply_alpha_schedule_override(p.sd_model, p)
with devices.without_autocast() if devices.unet_needs_upcast else devices.autocast():
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
if p.scripts is not None:
ps = scripts.PostSampleArgs(samples_ddim)
p.scripts.post_sample(p, ps)
samples_ddim = ps.samples
if getattr(samples_ddim, 'already_decoded', False):
x_samples_ddim = samples_ddim
else:
@@ -922,13 +1040,37 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
pp = scripts.PostprocessImageArgs(image)
p.scripts.postprocess_image(p, pp)
image = pp.image
mask_for_overlay = getattr(p, "mask_for_overlay", None)
if not shared.opts.overlay_inpaint:
overlay_image = None
elif getattr(p, "overlay_images", None) is not None and i < len(p.overlay_images):
overlay_image = p.overlay_images[i]
else:
overlay_image = None
if p.scripts is not None:
ppmo = scripts.PostProcessMaskOverlayArgs(i, mask_for_overlay, overlay_image)
p.scripts.postprocess_maskoverlay(p, ppmo)
mask_for_overlay, overlay_image = ppmo.mask_for_overlay, ppmo.overlay_image
if p.color_corrections is not None and i < len(p.color_corrections):
if save_samples and opts.save_images_before_color_correction:
- image_without_cc = apply_overlay(image, p.paste_to, i, p.overlay_images)
+ image_without_cc, _ = apply_overlay(image, p.paste_to, overlay_image)
images.save_image(image_without_cc, p.outpath_samples, "", p.seeds[i], p.prompts[i], opts.samples_format, info=infotext(i), p=p, suffix="-before-color-correction")
image = apply_color_correction(p.color_corrections[i], image)
- image = apply_overlay(image, p.paste_to, i, p.overlay_images)
# If the intention is to show the output from the model
# that is being composited over the original image,
# we need to keep the original image around
# and use it in the composite step.
+ image, original_denoised_image = apply_overlay(image, p.paste_to, overlay_image)
if p.scripts is not None:
pp = scripts.PostprocessImageArgs(image)
p.scripts.postprocess_image_after_composite(p, pp)
image = pp.image
if save_samples:
images.save_image(image, p.outpath_samples, "", p.seeds[i], p.prompts[i], opts.samples_format, info=infotext(i), p=p)
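apply_overlay now returns a pair, so the raw model output survives the overlay composite and can be reused by the mask-composite code below and by postprocess_image_after_composite. A simplified sketch of that contract, assuming PIL images and an (x, y, w, h) paste_to box; the real helper in modules/processing.py handles more edge cases:

    from PIL import Image

    def apply_overlay_sketch(image, paste_to, overlay):
        original_denoised_image = image.copy()  # keep the raw model output for later steps
        if overlay is None:
            return image, original_denoised_image
        if paste_to is not None:
            x, y, w, h = paste_to
            canvas = Image.new('RGBA', overlay.size)
            canvas.paste(image.resize((w, h)), (x, y))  # put the result back where it was cropped from
            image = canvas
        else:
            image = image.convert('RGBA')
        image.alpha_composite(overlay.convert('RGBA'))  # unmasked original pixels win
        return image.convert('RGB'), original_denoised_image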
@@ -938,16 +1080,17 @@ def process_images_inner(p: StableDiffusionProcessing) -> Processed:
if opts.enable_pnginfo:
image.info["parameters"] = text
output_images.append(image)
- if hasattr(p, 'mask_for_overlay') and p.mask_for_overlay:
+ if mask_for_overlay is not None:
if opts.return_mask or opts.save_mask:
- image_mask = p.mask_for_overlay.convert('RGB')
+ image_mask = mask_for_overlay.convert('RGB')
if save_samples and opts.save_mask:
images.save_image(image_mask, p.outpath_samples, "", p.seeds[i], p.prompts[i], opts.samples_format, info=infotext(i), p=p, suffix="-mask")
if opts.return_mask:
output_images.append(image_mask)
if opts.return_mask_composite or opts.save_mask_composite:
- image_mask_composite = Image.composite(image.convert('RGBA').convert('RGBa'), Image.new('RGBa', image.size), images.resize_image(2, p.mask_for_overlay, image.width, image.height).convert('L')).convert('RGBA')
+ image_mask_composite = Image.composite(original_denoised_image.convert('RGBA').convert('RGBa'), Image.new('RGBa', image.size), images.resize_image(2, mask_for_overlay, image.width, image.height).convert('L')).convert('RGBA')
if save_samples and opts.save_mask_composite:
images.save_image(image_mask_composite, p.outpath_samples, "", p.seeds[i], p.prompts[i], opts.samples_format, info=infotext(i), p=p, suffix="-mask-composite")
if opts.return_mask_composite:
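For reference, Image.composite(a, b, mask) takes pixels from a where the mask is white, which is why feeding it original_denoised_image above yields a composite of the raw model output rather than the already-overlaid result. A toy, self-contained version of that construction (solid-color stand-ins for the real images):

    from PIL import Image

    denoised = Image.new('RGB', (64, 64), 'red')   # stands in for original_denoised_image
    mask = Image.new('L', (64, 64), 0)
    mask.paste(255, (16, 16, 48, 48))              # white = the inpainted region

    composite = Image.composite(
        denoised.convert('RGBA').convert('RGBa'),  # foreground, premultiplied alpha
        Image.new('RGBa', (64, 64)),               # transparent backdrop
        mask,                                      # the real code first upscales mask_for_overlay 2x
    ).convert('RGBA')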
@@ -1023,8 +1166,10 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
hr_resize_y: int = 0
hr_checkpoint_name: str = None
hr_sampler_name: str = None
hr_scheduler: str = None
hr_prompt: str = ''
hr_negative_prompt: str = ''
force_task_id: str = None
cached_hr_uc = [None, None]
cached_hr_c = [None, None]
@@ -1097,7 +1242,9 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
def init(self, all_prompts, all_seeds, all_subseeds):
if self.enable_hr:
- if self.hr_checkpoint_name:
+ self.extra_generation_params["Denoising strength"] = self.denoising_strength
+ if self.hr_checkpoint_name and self.hr_checkpoint_name != 'Use same checkpoint':
self.hr_checkpoint_info = sd_models.get_closet_checkpoint_match(self.hr_checkpoint_name)
if self.hr_checkpoint_info is None:
@@ -1108,11 +1255,21 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
if self.hr_sampler_name is not None and self.hr_sampler_name != self.sampler_name:
self.extra_generation_params["Hires sampler"] = self.hr_sampler_name
- if tuple(self.hr_prompt) != tuple(self.prompt):
-     self.extra_generation_params["Hires prompt"] = self.hr_prompt
+ def get_hr_prompt(p, index, prompt_text, **kwargs):
+     hr_prompt = p.all_hr_prompts[index]
+     return hr_prompt if hr_prompt != prompt_text else None
- if tuple(self.hr_negative_prompt) != tuple(self.negative_prompt):
-     self.extra_generation_params["Hires negative prompt"] = self.hr_negative_prompt
+ def get_hr_negative_prompt(p, index, negative_prompt, **kwargs):
+     hr_negative_prompt = p.all_hr_negative_prompts[index]
+     return hr_negative_prompt if hr_negative_prompt != negative_prompt else None
+ self.extra_generation_params["Hires prompt"] = get_hr_prompt
+ self.extra_generation_params["Hires negative prompt"] = get_hr_negative_prompt
self.extra_generation_params["Hires schedule type"] = None # to be set in sd_samplers_kdiffusion.py
if self.hr_scheduler is None:
self.hr_scheduler = self.scheduler
self.latent_scale_mode = shared.latent_upscale_modes.get(self.hr_upscaler, None) if self.hr_upscaler is not None else shared.latent_upscale_modes.get(shared.latent_upscale_default_mode, "nearest")
if self.enable_hr and self.latent_scale_mode is None:
@@ -1124,8 +1281,11 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
if not state.processing_has_refined_job_count:
if state.job_count == -1:
state.job_count = self.n_iter
- shared.total_tqdm.updateTotal((self.steps + (self.hr_second_pass_steps or self.steps)) * state.job_count)
+ if getattr(self, 'txt2img_upscale', False):
+     total_steps = (self.hr_second_pass_steps or self.steps) * state.job_count
+ else:
+     total_steps = (self.steps + (self.hr_second_pass_steps or self.steps)) * state.job_count
+ shared.total_tqdm.updateTotal(total_steps)
state.job_count = state.job_count * 2
state.processing_has_refined_job_count = True
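A quick worked example of the progress-bar accounting above, with illustrative numbers:

    steps, hr_second_pass_steps, job_count = 20, 0, 2
    both_passes = (steps + (hr_second_pass_steps or steps)) * job_count  # 80: normal txt2img pass plus hires pass
    hires_only = (hr_second_pass_steps or steps) * job_count             # 40: txt2img_upscale skips the first pass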
@@ -1138,18 +1298,45 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
def sample(self, conditioning, unconditional_conditioning, seeds, subseeds, subseed_strength, prompts):
self.sampler = sd_samplers.create_sampler(self.sampler_name, self.sd_model)
- x = self.rng.next()
- samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
- del x
+ if self.firstpass_image is not None and self.enable_hr:
+     # here we don't need to generate an image; we just take self.firstpass_image and prepare it for the hires fix
- if not self.enable_hr:
-     return samples
- devices.torch_gc()
+     if self.latent_scale_mode is None:
+         image = np.array(self.firstpass_image).astype(np.float32) / 255.0 * 2.0 - 1.0
+         image = np.moveaxis(image, 2, 0)
+         samples = None
+         decoded_samples = torch.asarray(np.expand_dims(image, 0))
+     else:
+         image = np.array(self.firstpass_image).astype(np.float32) / 255.0
+         image = np.moveaxis(image, 2, 0)
+         image = torch.from_numpy(np.expand_dims(image, axis=0))
+         image = image.to(shared.device, dtype=devices.dtype_vae)
+         if opts.sd_vae_encode_method != 'Full':
+             self.extra_generation_params['VAE Encoder'] = opts.sd_vae_encode_method
+         samples = images_tensor_to_samples(image, approximation_indexes.get(opts.sd_vae_encode_method), self.sd_model)
+         decoded_samples = None
+         devices.torch_gc()
- if self.latent_scale_mode is None:
-     decoded_samples = torch.stack(decode_latent_batch(self.sd_model, samples, target_device=devices.cpu, check_for_nans=True)).to(dtype=torch.float32)
- else:
-     decoded_samples = None
+ else:
+     # here we generate an image normally
+     x = self.rng.next()
+     samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
+     del x
+     if not self.enable_hr:
+         return samples
+     devices.torch_gc()
+     if self.latent_scale_mode is None:
+         decoded_samples = torch.stack(decode_latent_batch(self.sd_model, samples, target_device=devices.cpu, check_for_nans=True)).to(dtype=torch.float32)
+     else:
+         decoded_samples = None
with sd_models.SkipWritingToConfig():
sd_models.reload_model_weights(info=self.hr_checkpoint_info)
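The firstpass_image branch above normalizes the input differently per path: model-output range for pixel-space upscalers, VAE-encoder range for latent upscaling. A minimal standalone sketch of the two normalizations, assuming a PIL RGB image:

    import numpy as np
    import torch
    from PIL import Image

    img = Image.new('RGB', (64, 64), 'gray')
    arr = np.array(img).astype(np.float32)  # HWC, 0..255

    decoded = torch.from_numpy(np.moveaxis(arr / 255.0 * 2.0 - 1.0, 2, 0))[None]  # NCHW in [-1, 1], used as decoded pixels
    to_encode = torch.from_numpy(np.moveaxis(arr / 255.0, 2, 0))[None]            # NCHW in [0, 1], handed to the VAE encoder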
@@ -1351,12 +1538,14 @@ class StableDiffusionProcessingImg2Img(StableDiffusionProcessing):
mask_blur_x: int = 4
mask_blur_y: int = 4
mask_blur: int = None
mask_round: bool = True
inpainting_fill: int = 0
inpaint_full_res: bool = True
inpaint_full_res_padding: int = 0
inpainting_mask_invert: int = 0
initial_noise_multiplier: float = None
latent_mask: Image = None
force_task_id: str = None
image_mask: Any = field(default=None, init=False)
@@ -1386,6 +1575,8 @@ class StableDiffusionProcessingImg2Img(StableDiffusionProcessing):
self.mask_blur_y = value
def init(self, all_prompts, all_seeds, all_subseeds):
self.extra_generation_params["Denoising strength"] = self.denoising_strength
self.image_cfg_scale: float = self.image_cfg_scale if shared.sd_model.cond_stage_key == "edit" else None
self.sampler = sd_samplers.create_sampler(self.sampler_name, self.sd_model)
@@ -1396,10 +1587,11 @@ class StableDiffusionProcessingImg2Img(StableDiffusionProcessing):
if image_mask is not None:
# image_mask is passed in as RGBA by Gradio to support alpha masks,
# but we still want to support binary masks.
- image_mask = create_binary_mask(image_mask)
+ image_mask = create_binary_mask(image_mask, round=self.mask_round)
if self.inpainting_mask_invert:
image_mask = ImageOps.invert(image_mask)
self.extra_generation_params["Mask mode"] = "Inpaint not masked"
if self.mask_blur_x > 0:
np_mask = np.array(image_mask)
@@ -1413,16 +1605,29 @@ class StableDiffusionProcessingImg2Img(StableDiffusionProcessing):
np_mask = cv2.GaussianBlur(np_mask, (1, kernel_size), self.mask_blur_y)
image_mask = Image.fromarray(np_mask)
if self.mask_blur_x > 0 or self.mask_blur_y > 0:
self.extra_generation_params["Mask blur"] = self.mask_blur
if self.inpaint_full_res:
self.mask_for_overlay = image_mask
mask = image_mask.convert('L')
- crop_region = masking.get_crop_region(np.array(mask), self.inpaint_full_res_padding)
- crop_region = masking.expand_crop_region(crop_region, self.width, self.height, mask.width, mask.height)
- x1, y1, x2, y2 = crop_region
- mask = mask.crop(crop_region)
- image_mask = images.resize_image(2, mask, self.width, self.height)
- self.paste_to = (x1, y1, x2-x1, y2-y1)
+ crop_region = masking.get_crop_region_v2(mask, self.inpaint_full_res_padding)
+ if crop_region:
+     crop_region = masking.expand_crop_region(crop_region, self.width, self.height, mask.width, mask.height)
+     x1, y1, x2, y2 = crop_region
+     mask = mask.crop(crop_region)
+     image_mask = images.resize_image(2, mask, self.width, self.height)
+     self.paste_to = (x1, y1, x2-x1, y2-y1)
+     self.extra_generation_params["Inpaint area"] = "Only masked"
+     self.extra_generation_params["Masked area padding"] = self.inpaint_full_res_padding
+ else:
+     crop_region = None
+     image_mask = None
+     self.mask_for_overlay = None
+     self.inpaint_full_res = False
+     message = 'Unable to perform "Inpaint Only masked" because the mask is blank; switching to img2img mode.'
+     model_hijack.comments.append(message)
+     logging.info(message)
else:
image_mask = images.resize_image(self.resize_mode, image_mask, self.width, self.height)
np_mask = np.array(image_mask)
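get_crop_region_v2 differs from the v1 helper by returning a falsy value for an all-black mask instead of a full-image box, which is what makes the graceful fallback above possible. A hedged sketch of that contract built on PIL's bounding box; the real function lives in modules/masking.py and may differ in detail:

    from PIL import Image

    def get_crop_region_v2_sketch(mask, pad=0):
        box = mask.getbbox()  # None when the 'L'-mode mask is entirely black
        if box is None:
            return None
        x1, y1, x2, y2 = box
        return (max(x1 - pad, 0), max(y1 - pad, 0),
                min(x2 + pad, mask.width), min(y2 + pad, mask.height))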
@@ -1442,7 +1647,7 @@ class StableDiffusionProcessingImg2Img(StableDiffusionProcessing):
# Save init image
if opts.save_init_img:
self.init_img_hash = hashlib.md5(img.tobytes()).hexdigest()
- images.save_image(img, path=opts.outdir_init_images, basename=None, forced_filename=self.init_img_hash, save_to_dirs=False)
+ images.save_image(img, path=opts.outdir_init_images, basename=None, forced_filename=self.init_img_hash, save_to_dirs=False, existing_info=img.info)
image = images.flatten(img, opts.img2img_background_color)
@@ -1450,6 +1655,8 @@ class StableDiffusionProcessingImg2Img(StableDiffusionProcessing):
image = images.resize_image(self.resize_mode, image, self.width, self.height)
if image_mask is not None:
if self.mask_for_overlay.size != (image.width, image.height):
self.mask_for_overlay = images.resize_image(self.resize_mode, self.mask_for_overlay, image.width, image.height)
image_masked = Image.new('RGBa', (image.width, image.height))
image_masked.paste(image.convert("RGBA").convert("RGBa"), mask=ImageOps.invert(self.mask_for_overlay.convert('L')))
@@ -1464,6 +1671,9 @@ class StableDiffusionProcessingImg2Img(StableDiffusionProcessing):
if self.inpainting_fill != 1:
image = masking.fill(image, latent_mask)
if self.inpainting_fill == 0:
self.extra_generation_params["Masked content"] = 'fill'
if add_color_corrections:
self.color_corrections.append(setup_color_correction(image))
@@ -1503,7 +1713,8 @@ class StableDiffusionProcessingImg2Img(StableDiffusionProcessing):
latmask = init_mask.convert('RGB').resize((self.init_latent.shape[3], self.init_latent.shape[2]))
latmask = np.moveaxis(np.array(latmask, dtype=np.float32), 2, 0) / 255
latmask = latmask[0]
- latmask = np.around(latmask)
+ if self.mask_round:
+     latmask = np.around(latmask)
latmask = np.tile(latmask[None], (4, 1, 1))
self.mask = torch.asarray(1.0 - latmask).to(shared.device).type(self.sd_model.dtype)
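Skipping np.around is what enables soft (fractional) inpaint masks: a rounded mask gives each latent cell a hard keep/replace decision, while an unrounded one blends old and new latents per cell. A toy comparison:

    import numpy as np

    latmask = np.array([0.0, 0.25, 0.6, 1.0], dtype=np.float32)
    hard = np.around(latmask)  # [0. 0. 1. 1.] -> each cell is either kept or replaced outright
    soft = latmask             # fractional weights blend init_latent with the denoised latent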
@@ -1512,10 +1723,13 @@ class StableDiffusionProcessingImg2Img(StableDiffusionProcessing):
# this needs to be fixed to be done in sample() using actual seeds for batches
if self.inpainting_fill == 2:
self.init_latent = self.init_latent * self.mask + create_random_tensors(self.init_latent.shape[1:], all_seeds[0:self.init_latent.shape[0]]) * self.nmask
self.extra_generation_params["Masked content"] = 'latent noise'
elif self.inpainting_fill == 3:
self.init_latent = self.init_latent * self.mask
self.extra_generation_params["Masked content"] = 'latent nothing'
- self.image_conditioning = self.img2img_image_conditioning(image * 2 - 1, self.init_latent, image_mask)
+ self.image_conditioning = self.img2img_image_conditioning(image * 2 - 1, self.init_latent, image_mask, self.mask_round)
def sample(self, conditioning, unconditional_conditioning, seeds, subseeds, subseed_strength, prompts):
x = self.rng.next()
@@ -1527,7 +1741,14 @@ class StableDiffusionProcessingImg2Img(StableDiffusionProcessing):
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
if self.mask is not None:
- samples = samples * self.nmask + self.init_latent * self.mask
+ blended_samples = samples * self.nmask + self.init_latent * self.mask
+ if self.scripts is not None:
+     mba = scripts.MaskBlendArgs(samples, self.nmask, self.init_latent, self.mask, blended_samples)
+     self.scripts.on_mask_blend(self, mba)
+     blended_samples = mba.blended_latent
+ samples = blended_samples
del x
devices.torch_gc()
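The new MaskBlendArgs hook gives extension scripts the last word on how the denoised latent is blended back into the init latent. A minimal hypothetical script using it, assuming the per-script hook mirrors the runner call above; attribute names other than blended_latent are inferred from the constructor arguments, so treat them as assumptions:

    from modules import scripts

    class CustomMaskBlend(scripts.Script):
        def title(self):
            return "Custom mask blend (sketch)"

        def show(self, is_img2img):
            return scripts.AlwaysVisible

        def on_mask_blend(self, p, mba, *args):
            # reproduce the default blend explicitly; a real script could,
            # for example, feather the mask here before blending
            mba.blended_latent = mba.samples * mba.nmask + mba.init_latent * mba.mask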
@@ -0,0 +1,49 @@
from modules import scripts, shared, script_callbacks
import re
def strip_comments(text):
    text = re.sub('(^|\n)#[^\n]*(\n|$)', '\n', text)  # whole-line comment
    text = re.sub('#[^\n]*(\n|$)', '\n', text)        # comment at the end of a line
    return text
class ScriptStripComments(scripts.Script):
def title(self):
return "Comments"
def show(self, is_img2img):
return scripts.AlwaysVisible
def process(self, p, *args):
if not shared.opts.enable_prompt_comments:
return
p.all_prompts = [strip_comments(x) for x in p.all_prompts]
p.all_negative_prompts = [strip_comments(x) for x in p.all_negative_prompts]
p.main_prompt = strip_comments(p.main_prompt)
p.main_negative_prompt = strip_comments(p.main_negative_prompt)
if getattr(p, 'enable_hr', False):
p.all_hr_prompts = [strip_comments(x) for x in p.all_hr_prompts]
p.all_hr_negative_prompts = [strip_comments(x) for x in p.all_hr_negative_prompts]
p.hr_prompt = strip_comments(p.hr_prompt)
p.hr_negative_prompt = strip_comments(p.hr_negative_prompt)
def before_token_counter(params: script_callbacks.BeforeTokenCounterParams):
if not shared.opts.enable_prompt_comments:
return
params.prompt = strip_comments(params.prompt)
script_callbacks.on_before_token_counter(before_token_counter)
shared.options_templates.update(shared.options_section(('sd', "Stable Diffusion", "sd"), {
"enable_prompt_comments": shared.OptionInfo(True, "Enable comments").info("Use # anywhere in the prompt to hide the text between # and the end of the line from the generation."),
}))
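A quick, self-contained check of the comment-stripping behavior defined above (expected output in the trailing comment):

    import re

    def strip_comments(text):
        text = re.sub('(^|\n)#[^\n]*(\n|$)', '\n', text)  # whole-line comment
        text = re.sub('#[^\n]*(\n|$)', '\n', text)        # comment at the end of a line
        return text

    print(repr(strip_comments("a cat # not a dog\n# whole-line note\nhigh detail")))
    # -> 'a cat \nhigh detail'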
@@ -1,6 +1,7 @@
import gradio as gr
from modules import scripts, sd_models
from modules.infotext_utils import PasteField
from modules.ui_common import create_refresh_button
from modules.ui_components import InputAccordion
@@ -31,9 +32,9 @@ class ScriptRefiner(scripts.ScriptBuiltinUI):
return None if info is None else info.title
self.infotext_fields = [
- (enable_refiner, lambda d: 'Refiner' in d),
- (refiner_checkpoint, lambda d: lookup_checkpoint(d.get('Refiner'))),
- (refiner_switch_at, 'Refiner switch at'),
+ PasteField(enable_refiner, lambda d: 'Refiner' in d),
+ PasteField(refiner_checkpoint, lambda d: lookup_checkpoint(d.get('Refiner')), api="refiner_checkpoint"),
+ PasteField(refiner_switch_at, 'Refiner switch at', api="refiner_switch_at"),
]
return enable_refiner, refiner_checkpoint, refiner_switch_at
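The change above swaps bare (component, value) tuples for PasteField records, whose optional api= argument exposes the field under a stable name to API consumers. Schematically, with the gradio component stubbed out as None for illustration:

    from modules.infotext_utils import PasteField

    # before: (refiner_switch_at, 'Refiner switch at')
    # after:  the same pairing, plus an explicit API field name
    field = PasteField(None, 'Refiner switch at', api='refiner_switch_at')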

Some files were not shown because too many files have changed in this diff.