Compare commits


894 Commits

Author SHA1 Message Date
Kohaku-Blueleaf 9adeed18f1 Fix unload of bundled emb 2023-10-22 17:57:59 +08:00
Kohaku-Blueleaf 891ccb767c Fix lint 2023-10-10 15:07:25 +08:00
Kohaku-Blueleaf 81e94de318 Add warning when embedding names conflict
Choose standalone embedding (in /embeddings folder) first
2023-10-10 14:44:20 +08:00
Kohaku-Blueleaf 2282eb8dd5 Remove dev debug print 2023-10-10 12:11:00 +08:00
Kohaku-Blueleaf 3d8b1af6be Support string_to_param nested dict
format:
bundle_emb.EMBNAME.string_to_param.KEYNAME
2023-10-10 12:09:33 +08:00
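A minimal sketch of how checkpoint keys in the `bundle_emb.EMBNAME.string_to_param.KEYNAME` layout described above could be regrouped into nested dicts (the function and handling are hypothetical, not the actual webui code):

```python
def collect_bundled_embeddings(state_dict):
    # regroup flat "bundle_emb.EMBNAME.string_to_param.KEYNAME" keys into
    # {EMBNAME: {"string_to_param": {KEYNAME: tensor}}}
    bundles = {}
    for key, tensor in state_dict.items():
        parts = key.split(".")
        if len(parts) == 4 and parts[0] == "bundle_emb" and parts[2] == "string_to_param":
            _, emb_name, _, param_key = parts
            bundles.setdefault(emb_name, {}).setdefault("string_to_param", {})[param_key] = tensor
    return bundles
```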
Kohaku-Blueleaf 2aa485b5af add lora bundle system 2023-10-09 22:52:09 +08:00
AUTOMATIC1111 7d60076b8b case-insensitive search for settings 2023-10-03 16:22:32 +03:00
AUTOMATIC1111 77171923f8 Merge pull request #13475 from wkpark/regress-fix
fix regression
2023-10-03 12:38:11 +03:00
AUTOMATIC1111 c4ffeb857e Merge pull request #13480 from AUTOMATIC1111/popup-fix
Fix accidentally closing popup dialogs
2023-10-03 12:37:46 +03:00
missionfloyd e5381320b9 Lint 2023-10-02 22:33:03 -06:00
missionfloyd 86a46e8189 Fix accidentally closing popup dialogs 2023-10-02 22:22:15 -06:00
Won-Kyu Park c2279da522 fix re_param_code (regression bug PR #13458) 2023-10-03 01:16:41 +09:00
AUTOMATIC1111 dc2074c46d Merge pull request #13466 from AUTOMATIC1111/denoising-none
Change denoising_strength default to None.
2023-10-02 13:05:27 +03:00
AUTOMATIC1111 362675e75b Merge pull request #13469 from PermissionDenied7335/master
I found a code snippet in webui.sh that disables python venv and moved it to the appropriate location
2023-10-02 12:47:02 +03:00
PermissionDenied7335 6ab0b65ed1 Added an option not to enable venv 2023-10-02 15:43:59 +08:00
missionfloyd 3f763d41e8 Change denoising_strength default to None. 2023-10-01 22:38:27 -06:00
AUTOMATIC1111 e3c849da06 Merge pull request #13458 from wkpark/fieldname-regex
fix fieldname regex
2023-10-01 11:49:42 +03:00
AUTOMATIC1111 c0113872c5 add search field to settings 2023-10-01 11:48:41 +03:00
Won-Kyu Park deeec0b343 fix fieldname regex to accept additional [-/] chars 2023-10-01 16:19:59 +09:00
AUTOMATIC1111 c7e810a985 add onEdit function for js and rework token-counter.js to use it 2023-10-01 10:15:23 +03:00
AUTOMATIC1111 7026b96476 Merge pull request #13444 from AUTOMATIC1111/edit-attn-delimiters
edit-attention: Allow editing whitespace delimiters
2023-10-01 07:04:08 +03:00
missionfloyd 56ef5e9d48 Remove end parenthesis from weight 2023-09-30 21:44:05 -06:00
missionfloyd 0eb5fde2fd Remove unneeded code 2023-09-30 21:20:58 -06:00
missionfloyd 0935d2c304 Use checkboxes for whitespace delimiters 2023-09-30 18:37:44 -06:00
AUTOMATIC1111 b2f9709538 get #13121 to work without restart 2023-09-30 10:29:10 +03:00
AUTOMATIC1111 5cc7bf3876 reword sd_checkpoint_dropdown_use_short setting and add explanation 2023-09-30 10:10:57 +03:00
AUTOMATIC1111 416fbde726 Merge pull request #13121 from AUTOMATIC1111/consolidated-allowed-preview-formats
Consolidated allowed preview formats, Fix extra network `.gif` not working as preview
2023-09-30 10:09:45 +03:00
missionfloyd 1cc7c4bfb3 Allow editing whitespace delimiters 2023-09-30 01:09:09 -06:00
AUTOMATIC1111 951842d785 Merge pull request #13139 from AUTOMATIC1111/ckpt-dir-path-separator
fix `--ckpt-dir` path separator and add an option to use `short name` for the checkpoint dropdown
2023-09-30 10:02:28 +03:00
AUTOMATIC1111 591ad1dbc3 Merge pull request #13170 from AUTOMATIC1111/re-fix-batch-img2img-output-dir-with-script
Re fix batch img2img output dir with script
2023-09-30 09:59:21 +03:00
AUTOMATIC1111 fcfe5c179b Merge pull request #12877 from zixaphir/removeExtraNetworksFromPrompt_fix
account for customizable extra network separators in remove code
2023-09-30 09:49:37 +03:00
AUTOMATIC1111 a0e979badb Merge pull request #13178 from wpdong0727/fix-lora-bias-backup-reset
fix: lora-bias-backup don't reset cache
2023-09-30 09:48:38 +03:00
AUTOMATIC1111 3aa9f01bdc Merge pull request #13077 from sdwebui-extensions/master
fix localization when there are multiple identical localization files in the extensions
2023-09-30 09:47:52 +03:00
AUTOMATIC1111 4e5d2526cb Merge pull request #13189 from AUTOMATIC1111/make-InputAccordion-work-with-ui-config
make InputAccordion work with ui-config
2023-09-30 09:46:55 +03:00
AUTOMATIC1111 ab63054f95 write infotext to gif image as comment 2023-09-30 09:34:50 +03:00
AUTOMATIC1111 0c71967a53 Merge pull request #13068 from JaredTherriault/master
Load comments from gif images to gather geninfo from gif outputs
2023-09-30 09:33:14 +03:00
AUTOMATIC1111 b20cd352d9 Merge pull request #13210 from AUTOMATIC1111/fetch-version-info-when-webui_dir-is-not-work_dir-
fix issues when webui_dir is not work_dir
2023-09-30 09:23:32 +03:00
AUTOMATIC1111 3a4290f833 Merge pull request #13229 from AUTOMATIC1111/initialize-state.time_start-befroe-state.job_count
initialize state.time_start before state.job_count
2023-09-30 09:21:47 +03:00
AUTOMATIC1111 df48222f3e Merge pull request #13231 from der3318/better-support-for-portable-git
Better Support for Portable Git
2023-09-30 09:21:08 +03:00
AUTOMATIC1111 ee8e98711b Merge pull request #13266 from wkpark/xyz-prepare
xyz_grid: add prepare
2023-09-30 09:17:24 +03:00
AUTOMATIC1111 87b50397a6 add missing import, simplify code, use patches module for #13276 2023-09-30 09:11:31 +03:00
AUTOMATIC1111 e309583f29 Merge pull request #13276 from woweenie/patch-1
patch DDPM.register_betas so that users can put given_betas in model yaml
2023-09-30 09:01:12 +03:00
AUTOMATIC1111 7ce1f3a142 Merge pull request #13281 from AUTOMATIC1111/Config-states-time-ISO-in-system-time-zone
Config states time ISO in system time zone
2023-09-30 08:59:28 +03:00
AUTOMATIC1111 db63cf7d24 Merge pull request #13282 from AUTOMATIC1111/XYZ-if-not-Include-Sub-Grids-do-not-save-Sub-Grid
XYZ: if not Include Sub Grids, do not save Sub Grid
2023-09-30 08:58:07 +03:00
AUTOMATIC1111 cdafbcaad2 Merge pull request #13313 from chu8129/dev
use OrderedDict as LRU cache: opt/bug
2023-09-30 08:55:54 +03:00
AUTOMATIC1111 34055f9d0c Merge pull request #13302 from Zolxys/patch-1
Fix: --sd_model in "Prompts from file or textbox" script is not working
2023-09-30 08:49:26 +03:00
AUTOMATIC1111 9b17416580 Merge pull request #13372 from ezt19/patch-1
Update dragdrop.js
2023-09-30 08:46:48 +03:00
AUTOMATIC1111 833b9b62b5 Merge pull request #13395 from AUTOMATIC1111/escape-names
Fix viewing/editing metadata when filename contains an apostrophe
2023-09-30 08:32:38 +03:00
AUTOMATIC1111 3b0be0f12f Merge pull request #13411 from AUTOMATIC1111/update-card-metadata
Update card on correct tab when editing metadata
2023-09-30 08:32:07 +03:00
AUTOMATIC1111 4083639c3c Merge pull request #13418 from akx/torchsde-bump
Bump to torchsde==0.2.6
2023-09-30 08:31:30 +03:00
AUTOMATIC1111 8a758383d2 Merge pull request #13412 from AUTOMATIC1111/data-sort-name-fix
Fix data-sort-name containing spaces
2023-09-30 08:24:37 +03:00
AUTOMATIC1111 ad3b8a1c41 alternative solution to #13434 2023-09-30 08:23:12 +03:00
AUTOMATIC1111 1b9ca01e4f Merge pull request #13253 from LeonZhao28/feature_skip_load_model_at_start
add --skip-load-model-at-start
2023-09-30 08:15:00 +03:00
Aarni Koskela 30f4f25b2e Bump to torchsde==0.2.6 2023-09-27 10:21:14 +03:00
missionfloyd a69daae012 Fix data-sort-name containing spaces 2023-09-26 22:02:52 -06:00
missionfloyd 99aa702015 Update card on correct tab 2023-09-26 21:08:55 -06:00
missionfloyd d00f6dca28 Escape item names 2023-09-25 22:08:24 -06:00
ezt19 fdecf813b6 Update dragdrop.js
Fixes a problem where you cannot put in two images because they go into two different image fields.
2023-09-23 20:41:28 +00:00
王秋文/qwwang 8e355fbd75 fix 2023-09-18 16:45:42 +08:00
Zolxys 701feabf49 Fix: --sd_model in "Prompts from file or textbox" script is not working
Fix for bug report #8079
2023-09-17 11:37:15 -05:00
w-e-w d2878a8b0b XYZ: if not Include Sub Grids, do not save Sub Grid 2023-09-16 09:54:14 +09:00
w-e-w 663fb87976 Config states time ISO in system time zone 2023-09-16 09:11:54 +09:00
woweenie d9d94141dc patch DDPM.register_betas so that users can put given_betas in model yaml 2023-09-15 18:59:44 +02:00
qiuwen.wang 813535d38b use dict[key] = model; this did not update the OrderedDict order, should use move_to_end 2023-09-15 18:23:23 +08:00
Won-Kyu Park afd0624587 xyz_grid: add prepare option to AxisOption 2023-09-15 17:30:36 +09:00
Leon ab3d3528a1 add --skip-load-model-at-start 2023-09-14 18:42:56 +08:00
Der Chien 0ad38a9b87 20230913 setup GIT_PYTHON_GIT_EXECUTABLE for GitPython 2023-09-13 20:20:01 +08:00
w-e-w cf1edc2b54 initialize state.time_start before state.job_count 2023-09-13 16:27:02 +09:00
w-e-w 5b761b49ad correct webpath when webui_dir is not work_dir 2023-09-13 16:05:55 +09:00
AUTOMATIC1111 102b6617da Merge pull request #13213 from AUTOMATIC1111/fix-add_option-overriding-config-with-default
Fix major issue add_option overriding config with default
2023-09-12 17:50:44 +03:00
w-e-w 93015964c7 fix add_option overriding config with default 2023-09-12 22:53:09 +09:00
w-e-w 6fb2194d9c fetch version info when webui_dir is not work_dir 2023-09-12 16:50:56 +09:00
AUTOMATIC1111 59544321aa initial work on sd_unet for SDXL 2023-09-11 21:17:40 +03:00
w-e-w c485a7d12e make InputAccordion work with ui-config 2023-09-11 13:47:44 +09:00
liubo0902 413123f08a Update localization.py 2023-09-11 09:22:27 +08:00
dongwenpu 7d4d871d46 fix: lora-bias-backup don't reset cache 2023-09-10 17:53:42 +08:00
zixaphir 26d0d87f5b Remove extra spaces 2023-09-09 17:26:46 -07:00
zixaphir d6478a60aa Remove extra network separator without regex 2023-09-09 17:22:10 -07:00
w-e-w ab57417175 prevent accessing non-existing keys 2023-09-09 22:35:50 +09:00
w-e-w f8042cb323 Ensure not override images with script enabled 2023-09-09 22:35:07 +09:00
AUTOMATIC1111 924642331b Merge pull request #12846 from a666/deprecated-types
Fix some deprecated types
2023-09-09 10:31:56 +03:00
AUTOMATIC1111 c9c457eda8 stylistic changes for #13118 2023-09-09 10:27:16 +03:00
AUTOMATIC1111 73c2a03d49 Merge pull request #13118 from ljleb/fix-counter
Don't use multicond parser for negative prompt counter
2023-09-09 10:24:07 +03:00
AUTOMATIC1111 06af73bd1d linter 2023-09-09 10:23:53 +03:00
AUTOMATIC1111 9cebe308e9 return apply styles to main UI 2023-09-09 10:20:06 +03:00
AUTOMATIC1111 558808c748 Merge pull request #13119 from AUTOMATIC1111/enable_console_prompts-in-settings
enable console prompts in settings
2023-09-09 10:02:02 +03:00
w-e-w c68aabc852 lint 2023-09-09 15:59:22 +09:00
w-e-w 46ef185709 deprecate --enable-console-prompts
use --enable-console-prompts as the default value for shared.opts.enable_console_prompts
2023-09-09 15:53:10 +09:00
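A minimal sketch of the deprecation pattern this commit describes, where the old CLI flag no longer acts directly and only seeds the default of a user-editable setting (the option names follow the commit message; the surrounding structure is an assumption):

```python
import argparse

# flag is kept for compatibility but is deprecated
parser = argparse.ArgumentParser()
parser.add_argument("--enable-console-prompts", action="store_true",
                    help="deprecated: sets the default of the corresponding UI setting")
cmd_opts = parser.parse_args([])  # empty argv for a self-contained demo

# the setting defaults to the flag's value but can be changed in the UI afterwards
opts = {"enable_console_prompts": cmd_opts.enable_console_prompts}
```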
AUTOMATIC1111 46375f0592 fix for crash when running #12924 without --device-id 2023-09-09 09:39:37 +03:00
AUTOMATIC1111 558baffa2c Merge pull request #12924 from catboxanon/fix/cudnn
More accurate check for enabling cuDNN benchmark on 16XX cards
2023-09-09 09:33:37 +03:00
AUTOMATIC1111 4ebed495ed Merge pull request #12880 from AUTOMATIC1111/dropdown-padding-mobile
Use default dropdown padding on mobile
2023-09-09 09:29:42 +03:00
AUTOMATIC1111 e6d41b54cd Merge pull request #12976 from AUTOMATIC1111/toolbutton-tooltips
Restore missing tooltips
2023-09-09 09:29:11 +03:00
AUTOMATIC1111 e06c16e884 Merge pull request #12957 from AnyISalIn/dev
fix: update shared.opts.data when add_option
2023-09-09 09:28:33 +03:00
AUTOMATIC1111 72bc69e741 Merge pull request #12986 from AUTOMATIC1111/update-cmd-arg-description
update cmd arg description
2023-09-09 09:26:29 +03:00
AUTOMATIC1111 b33ffc11aa Merge pull request #12975 from AUTOMATIC1111/styles-copy-prompt
Add button to copy prompt to style editor
2023-09-09 09:26:03 +03:00
AUTOMATIC1111 0a2c24003c Merge pull request #12995 from uservar/patch-2
Fix bug with sigma min/max overrides.
2023-09-09 09:25:21 +03:00
AUTOMATIC1111 9e58e11ad4 Merge pull request #13028 from AUTOMATIC1111/fallback-invalid-exif
Add Fallback at images.read_info_from_image if exif data was invalid
2023-09-09 09:21:18 +03:00
AUTOMATIC1111 4c4d7dd01f fix whitespace for #13084 2023-09-09 09:15:09 +03:00
AUTOMATIC1111 adb3f2bcdd Merge pull request #13084 from AUTOMATIC1111/fix-preview-while-generation
Fix #13080 - Hypernetwork/TI preview generation
2023-09-09 09:14:01 +03:00
AUTOMATIC1111 8afabae67d Merge pull request #12929 from Beinsezii/dev
WEBUI.SH - Use torch 2.1.0 release candidate for Navi 3
2023-09-09 09:10:07 +03:00
AUTOMATIC1111 fccde0c1f7 Merge pull request #12909 from AUTOMATIC1111/Action-to-calculate-all-SD-checkpoint-hashes
Action to calculate all SD checkpoint hashes
2023-09-09 09:09:29 +03:00
AUTOMATIC1111 3ca4655a18 update for #12926 2023-09-09 09:08:31 +03:00
ljleb 349f893024 Merge branch 'dev' of https://github.com/AUTOMATIC1111/stable-diffusion-webui into fix-counter 2023-09-09 02:06:04 -04:00
ljleb 7b44b85730 refactor 2023-09-09 02:01:12 -04:00
AUTOMATIC1111 329c8ab932 Merge pull request #12926 from AUTOMATIC1111/fix-batch-img2img-output-dir-with-script
fix batch img2img output dir with script
2023-09-09 08:56:32 +03:00
AUTOMATIC1111 259768f27f fix the bug in script-info API 2023-09-09 08:38:49 +03:00
AUTOMATIC1111 741e8ecb7d Merge pull request #13135 from ibrainventures/patch-2
(feat) Include Program Version in info response. Update processing.py
2023-09-09 08:18:51 +03:00
w-e-w 63485b2c55 add option to use short name for checkpoint dropdown 2023-09-08 10:00:27 +09:00
w-e-w e4726cccf9 parsing string to path 2023-09-08 09:46:34 +09:00
ibrainventures f11eec81e3 (feat) Include Program Version in info response. Update processing.py
This helps to organize / record the program version used for the creation process (it is also included, unformatted, inside the infotext).
2023-09-07 23:19:52 +02:00
w-e-w 45881703c5 consolidated allowed preview formats 2023-09-07 12:11:36 +09:00
w-e-w 340fce2113 enable console prompts in settings 2023-09-07 10:01:16 +09:00
w-e-w 657404b75b use original filename batch img2img with scripts 2023-09-06 20:33:43 +09:00
w-e-w 35d1c94549 save_images_add_number_suffix 2023-09-06 20:24:26 +09:00
AngelBottomless 47033afa5c Fix preview for textual inversion training 2023-09-05 22:38:02 +09:00
AngelBottomless de5bb4ca88 Fix #13080 - Hypernetwork/TI preview generation
Fixes sampler name reference

The same patch will be done for TI.
2023-09-05 22:35:17 +09:00
liubo0902 ff7027ffc0 Update localization.py 2023-09-05 15:08:59 +08:00
liubo0902 0c1c9e74cd Update localization.py 2023-09-05 15:06:47 +08:00
JaredTherriault 022639a145 Load comments from gif images to gather geninfo from gif outputs 2023-09-04 17:37:48 -07:00
JaredTherriault 5e16914a4e Merge branch 'AUTOMATIC1111:master' into master 2023-09-04 17:29:33 -07:00
JaredTherriault 8f3b02f095 Revert "Offloading custom work"
This reverts commit f3d1631aab.

This work has been offloaded now into an extension called Prompt Control.
2023-09-03 13:32:56 -07:00
AngelBottomless f593cbfec4 fallback if exif data was invalid 2023-09-03 21:07:36 +09:00
uservar a51721cb09 Fix bug with sigma min/max overrides. 2023-09-02 11:35:30 +00:00
w-e-w ba05e32789 update cmd arg description 2023-09-02 14:12:59 +09:00
missionfloyd 3e67017dfb Restore missing tooltips 2023-09-01 17:01:08 -06:00
missionfloyd d7e3ea68b3 Remove whitespace 2023-09-01 16:24:35 -06:00
missionfloyd bf0b083216 Add button to copy prompt to style editor 2023-09-01 16:14:33 -06:00
AnyISalIn 317d00b2a6 fix: update shared.opts.data when add_option
Signed-off-by: AnyISalIn <anyisalin@gmail.com>
2023-09-01 21:56:17 +08:00
Beinsezii 737a013377 WEBUI.SH Navi 3 torch 2.1.0 rc instead of nightly
With the release candidates being out for both torch and vision,
webui should default to these over nightly for a more stable experience.

Stable release isn't expected until October 4th:
https://dev-discuss.pytorch.org/c/release-announcements/27
2023-08-31 15:03:08 -07:00
zixaphir 78c1a74660 Account for edge case where user deleted leading separator. 2023-08-31 14:18:35 -07:00
w-e-w bd9b3d15e8 fix batch img2img output dir with script 2023-09-01 04:05:58 +09:00
catboxanon 5681bf8016 More accurate check for enabling cuDNN benchmark on 16XX cards 2023-08-31 14:57:16 -04:00
w-e-w 348c6022f3 Action to calculate all SD checkpoint hashes 2023-09-01 00:56:55 +09:00
missionfloyd 76b1ad7daf Use default dropdown padding on mobile 2023-08-30 23:07:18 -06:00
AUTOMATIC1111 d39440bfb9 Merge branch 'master' into dev 2023-08-31 07:39:14 +03:00
AUTOMATIC1111 5ef669de08 Merge branch 'release_candidate' 2023-08-31 07:38:34 +03:00
AUTOMATIC1111 20158d77d9 Merge branch 'release_candidate' into dev 2023-08-31 07:37:36 +03:00
AUTOMATIC1111 e7965a5eb8 Merge pull request #12876 from ljleb/fix-re
Fix generation params regex
2023-08-31 07:34:01 +03:00
AUTOMATIC1111 3bff988f1e Merge pull request #12876 from ljleb/fix-re
Fix generation params regex
2023-08-31 07:30:03 +03:00
zixaphir 41196ccbf7 account for customizable extra network separators in remove code
previous behavior only searched for leading spaces
2023-08-30 20:20:19 -07:00
ljleb 541a3db05b fix generation params regex 2023-08-30 21:38:21 -04:00
AUTOMATIC1111 ae7291fb49 fix an issue where using hires fix with refiner on first pass with medvram would cause an exception when generating 2023-08-30 21:34:17 +03:00
AUTOMATIC1111 d43333ff71 fix an issue where VAE would remain in fp16 after an auto-switch to fp32 2023-08-30 21:13:24 +03:00
AUTOMATIC1111 0cdbd90d6b update bug report template to include sysinfo and not include all other fields that are already covered by sysinfo 2023-08-30 19:50:47 +03:00
AUTOMATIC1111 d0026da483 add --dump-sysinfo, a cmd arg to dump limited sysinfo file at startup 2023-08-30 19:48:47 +03:00
AUTOMATIC1111 8d54739de5 add information about Restore faces and Tiling into the changelog 2023-08-30 19:17:27 +03:00
AUTOMATIC1111 135b61bc0b fix inpainting models in txt2img creating black pictures 2023-08-30 19:08:17 +03:00
AUTOMATIC1111 6adf2b71c2 fix inpainting models in txt2img creating black pictures 2023-08-30 19:08:04 +03:00
AUTOMATIC1111 87cca029d7 add an option to choose how to combine hires fix and refiner 2023-08-30 18:24:21 +03:00
AUTOMATIC1111 ae0b2cc196 add an option to choose how to combine hires fix and refiner 2023-08-30 18:22:50 +03:00
AUTOMATIC1111 1ac11b3dae Merge pull request #12865 from AUTOMATIC1111/another-convert-to-system-time-zone
extension update time, convert to system time zone
2023-08-30 11:00:38 +03:00
AUTOMATIC1111 0ff8b8fb54 Merge pull request #12865 from AUTOMATIC1111/another-convert-to-system-time-zone
extension update time, convert to system time zone
2023-08-30 11:00:29 +03:00
w-e-w c985d23c52 extension update time, convert to system time zone 2023-08-30 16:18:31 +09:00
AUTOMATIC1111 87a083d1b2 Merge pull request #12864 from AUTOMATIC1111/extension-time-format-time-zone
patch Extension time format in system time zone
2023-08-30 09:45:23 +03:00
AUTOMATIC1111 644b537014 Merge pull request #12864 from AUTOMATIC1111/extension-time-format-time-zone
patch Extension time format in system time zone
2023-08-30 09:45:12 +03:00
w-e-w 67cd4ec0aa lint 2023-08-30 15:37:13 +09:00
w-e-w 28b084ca25 extension time format in system time zone 2023-08-30 15:28:46 +09:00
AUTOMATIC1111 503bd3fc0f keep order in list of checkpoints when loading model that doesn't have a checksum 2023-08-30 08:54:41 +03:00
AUTOMATIC1111 f874b1bcad keep order in list of checkpoints when loading model that doesn't have a checksum 2023-08-30 08:54:31 +03:00
AUTOMATIC1111 9e7de49fc5 update changelog 2023-08-30 08:28:46 +03:00
AUTOMATIC1111 06bc1f4f67 Merge pull request #12851 from bluelovers/pr/extension-time-001
chore: change extension time format
2023-08-30 08:24:08 +03:00
AUTOMATIC1111 338d0b6103 go back to single path for filenames in extra networks metadata dialog 2023-08-30 08:23:59 +03:00
AUTOMATIC1111 3989d7e88b Merge pull request #12838 from bluelovers/pr/file-metadata-path-001
display file metadata `path`, `ss_output_name`
2023-08-30 08:23:50 +03:00
AUTOMATIC1111 afea99a72b get progressbar to display correctly in extensions tab 2023-08-30 08:23:47 +03:00
AUTOMATIC1111 965c728914 Merge pull request #12839 from ibrainventures/patch-1
[RC 1.6.0 - zoom is partly hidden] Update style.css
2023-08-30 08:23:44 +03:00
AUTOMATIC1111 46f3ee9594 Merge pull request #12854 from catboxanon/fix/quicksettings-dropdown-unfocus
Do not change quicksettings dropdown option when value returned is `None`
2023-08-30 08:23:42 +03:00
AUTOMATIC1111 323dcadea2 Merge pull request #12855 from dhwz/dev
don't print empty lines
2023-08-30 08:23:40 +03:00
AUTOMATIC1111 642faa1f65 Merge pull request #12856 from catboxanon/extra-noise-noisy-latent
Add noisy latent to `ExtraNoiseParams` for callback
2023-08-30 08:23:37 +03:00
AUTOMATIC1111 d156d5bffd Merge pull request #12851 from bluelovers/pr/extension-time-001
chore: change extension time format
2023-08-30 08:23:11 +03:00
AUTOMATIC1111 edf3ad5aed go back to single path for filenames in extra networks metadata dialog 2023-08-30 08:22:06 +03:00
AUTOMATIC1111 4aaae3dc65 Merge pull request #12838 from bluelovers/pr/file-metadata-path-001
display file metadata `path`, `ss_output_name`
2023-08-30 08:07:15 +03:00
AUTOMATIC1111 9a4a1aac81 get progressbar to display correctly in extensions tab 2023-08-30 08:05:18 +03:00
AUTOMATIC1111 ee373a737c Merge pull request #12839 from ibrainventures/patch-1
[RC 1.6.0 - zoom is partly hidden] Update style.css
2023-08-30 07:43:38 +03:00
AUTOMATIC1111 9e248fb24e Merge pull request #12854 from catboxanon/fix/quicksettings-dropdown-unfocus
Do not change quicksettings dropdown option when value returned is `None`
2023-08-30 07:41:46 +03:00
AUTOMATIC1111 08603378e8 Merge pull request #12855 from dhwz/dev
don't print empty lines
2023-08-30 07:27:45 +03:00
AUTOMATIC1111 834f4c7cd3 Merge pull request #12856 from catboxanon/extra-noise-noisy-latent
Add noisy latent to `ExtraNoiseParams` for callback
2023-08-30 07:27:13 +03:00
catboxanon 549b475be9 Add noisy latent to ExtraNoiseParams for callback 2023-08-29 14:22:04 -04:00
dhwz 7e5fcdaf69 don't print empty lines 2023-08-29 18:49:42 +02:00
catboxanon e3939f3339 Do not change quicksettings value when value returned is None 2023-08-29 12:19:10 -04:00
bluelovers cb2a4f2424 chore: change extension time format 2023-08-29 22:47:10 +08:00
bluelovers f564d8ed2c refactor: refactor function 2023-08-29 22:11:18 +08:00
ibrainventures ba7d0d225a Update style.css 2023-08-29 15:31:01 +02:00
AUTOMATIC1111 04b90328c0 revert SGM noise multiplier change for img2img because it breaks hires fix 2023-08-29 15:38:33 +03:00
AUTOMATIC1111 a0af2852b6 revert SGM noise multiplier change for img2img because it breaks hires fix 2023-08-29 15:38:05 +03:00
a666 b6c1a1bbbf Fix some deprecated types 2023-08-29 00:54:57 -06:00
AUTOMATIC1111 00e393ce10 Merge pull request #12833 from catboxanon/fix/dont-print-blank-stdout
Don't print blank stdout in extension installers
2023-08-29 09:02:11 +03:00
AUTOMATIC1111 84d41e49b3 Merge pull request #12833 from catboxanon/fix/dont-print-blank-stdout
Don't print blank stdout in extension installers
2023-08-29 09:00:34 +03:00
AUTOMATIC1111 0c9282b84d Merge pull request #12832 from catboxanon/fix/skip-install-extensions
Honor `--skip-install` for extension installers
2023-08-29 08:58:10 +03:00
AUTOMATIC1111 18ba89863d Merge pull request #12832 from catboxanon/fix/skip-install-extensions
Honor `--skip-install` for extension installers
2023-08-29 08:58:01 +03:00
AUTOMATIC1111 444f102964 Merge pull request #12834 from catboxanon/fix/notification-tab-switch
Fix notification not playing when built-in webui tab is inactive
2023-08-29 08:55:58 +03:00
AUTOMATIC1111 9e8464db1e Merge pull request #12834 from catboxanon/fix/notification-tab-switch
Fix notification not playing when built-in webui tab is inactive
2023-08-29 08:55:45 +03:00
AUTOMATIC1111 738e133b24 Merge pull request #12818 from catboxanon/sgm
Add option to align with sgm repo's sampling implementation
2023-08-29 08:54:32 +03:00
AUTOMATIC1111 01a257eb07 Merge pull request #12818 from catboxanon/sgm
Add option to align with sgm repo's sampling implementation
2023-08-29 08:54:09 +03:00
AUTOMATIC1111 6558716018 Merge pull request #12837 from bluelovers/pr/file-metadata-break-001
style: file-metadata word-break
2023-08-29 08:53:37 +03:00
AUTOMATIC1111 9c87ae0d9d Merge pull request #12837 from bluelovers/pr/file-metadata-break-001
style: file-metadata word-break
2023-08-29 08:52:58 +03:00
catboxanon 7ab16e99ee Add option to align with sgm repo sampling implementation 2023-08-29 01:51:13 -04:00
AUTOMATIC1111 8a7a4275a8 Merge pull request #12842 from dhwz/dev
remove xformers Python version check
2023-08-29 08:44:11 +03:00
AUTOMATIC1111 3269572753 Merge pull request #12842 from dhwz/dev
remove xformers Python version check
2023-08-29 08:32:48 +03:00
dhwz 5070ab8004 remove xformers Python version check 2023-08-29 07:16:32 +02:00
ibrainventures 02e7824e6a [RC 1.6.1 - zoom is partly hidden] Update style.css
If an image / batch result image is taller or wider than the current viewport and is zoomed (left-corner zoom icon), it is cut off at the top and on the left. This new rule seems to be the culprit.
2023-08-29 02:04:07 +02:00
bluelovers d83a1ba65b feat: display file metadata ss_output_name
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/12289
2023-08-29 06:33:00 +08:00
bluelovers 1bb21f3510 feat: display file metadata path
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/12289
2023-08-29 06:25:16 +08:00
bluelovers 739686b1c5 style: file-metadata word-break 2023-08-29 06:19:22 +08:00
AUTOMATIC1111 c0f9821c35 always show NV as RNG source in infotext 2023-08-28 22:23:29 +03:00
AUTOMATIC1111 cd48308a2a always show NV as RNG source in infotext 2023-08-28 22:22:35 +03:00
catboxanon 592b0dcfa7 Fix notification not playing when built-in webui tab is inactive 2023-08-28 12:09:37 -04:00
catboxanon 20df81b0cc Honor --skip-install for extension installers 2023-08-28 11:26:50 -04:00
catboxanon 99acbd5ebe Don't print blank stdout in extension installers 2023-08-28 11:17:47 -04:00
AUTOMATIC1111 d1c93c3822 Merge pull request #12827 from omahs/patch-1
Fix minor typos
2023-08-28 15:04:07 +03:00
AUTOMATIC1111 9e14cac318 Merge branch 'dev' into patch-1 2023-08-28 15:03:46 +03:00
omahs f898833ea3 fix typos 2023-08-28 10:43:13 +02:00
JaredTherriault f3d1631aab Offloading custom work
-custom_statics works to mass-replace strings; intended for copy-pasting gen info from internet generations and replacing unsavory prompts with safer prompts, for my own sanity
-tried to implement this in generation_parameters_copypaste but it didn't work out this iteration, presumably because we return a string and the calling method is looking for an object type
-updated webui-user.bat to set a custom temp directory (for disk space concerns) and to apply xformers (for generation speed)

I probably won't be merging any of this work into the main repo since I don't want to mess with anyone else's prompts; this is just intended to keep my workspace safe from anything I don't want to see. Eventually this should be done in an extension which I could then publish, but I need to learn a lot more about the extension and callback systems in the main repo first. Just uploading this to my fork for now so I don't lose the current progress.
2023-08-27 21:54:05 -07:00
AUTOMATIC1111 8632452627 Merge pull request #12815 from AUTOMATIC1111/consolidate-local-check
consolidate local check
2023-08-28 07:53:37 +03:00
AUTOMATIC1111 86708463f1 Merge pull request #12819 from catboxanon/fix/rng-infotext
Add missing infotext for RNG in options
2023-08-28 07:20:48 +03:00
AUTOMATIC1111 66146ed72b Merge pull request #12819 from catboxanon/fix/rng-infotext
Add missing infotext for RNG in options
2023-08-28 07:20:33 +03:00
catboxanon 2b8484a29d Add missing infotext for RNG 2023-08-27 16:25:26 -04:00
w-e-w 18e3e6d6ab consolidate local check 2023-08-28 03:43:27 +09:00
AUTOMATIC1111 bfc5c08109 Merge pull request #12814 from AUTOMATIC1111/non-local-condition
non-local condition
2023-08-27 21:29:59 +03:00
AUTOMATIC1111 ad266d795e Merge pull request #12814 from AUTOMATIC1111/non-local-condition
non-local condition
2023-08-27 21:29:48 +03:00
w-e-w e422f19ee9 non-local condition 2023-08-28 03:27:07 +09:00
AUTOMATIC1111 d0d5075914 update changelog 2023-08-27 20:24:25 +03:00
AUTOMATIC1111 896fde789e hide --gradio-auth and --api-auth values from /internal/sysinfo report 2023-08-27 20:17:01 +03:00
AUTOMATIC1111 d63117ace5 hide --gradio-auth and --api-auth values from /internal/sysinfo report 2023-08-27 20:16:50 +03:00
AUTOMATIC1111 66d7630705 lint 2023-08-27 10:11:22 +03:00
AUTOMATIC1111 63d3150dc4 lint 2023-08-27 10:11:14 +03:00
AUTOMATIC1111 cb81087b59 update changelog 2023-08-27 09:45:12 +03:00
AUTOMATIC1111 6139b145f0 fix style editing dialog breaking if it's opened in both img2img and txt2img tabs 2023-08-27 09:45:08 +03:00
AUTOMATIC1111 f331821b27 Merge pull request #12780 from catboxanon/xyz-hide-samplers
Don't show hidden samplers in dropdown for XYZ script
2023-08-27 09:45:06 +03:00
AUTOMATIC1111 5359dc0a10 Merge pull request #12792 from catboxanon/image-cropper-hide
Hide broken image crop tool
2023-08-27 09:45:03 +03:00
AUTOMATIC1111 7989765faa Merge pull request #12797 from Madrawn/vae_resolve_bug
Small typo: vae resolve bug
2023-08-27 09:45:00 +03:00
AUTOMATIC1111 783a5754d5 Merge pull request #12795 from catboxanon/prevent-duplicate-resize-handler-mk2
Prevent duplicate resize handler
2023-08-27 09:44:56 +03:00
AUTOMATIC1111 897312de46 update changelog 2023-08-27 09:44:13 +03:00
AUTOMATIC1111 23c6b5f124 fix style editing dialog breaking if it's opened in both img2img and txt2img tabs 2023-08-27 09:39:49 +03:00
AUTOMATIC1111 c2463b5323 Merge pull request #12780 from catboxanon/xyz-hide-samplers
Don't show hidden samplers in dropdown for XYZ script
2023-08-27 09:28:12 +03:00
AUTOMATIC1111 ed2a05fc3f Merge pull request #12792 from catboxanon/image-cropper-hide
Hide broken image crop tool
2023-08-27 09:26:50 +03:00
AUTOMATIC1111 e3174a1a42 Merge pull request #12797 from Madrawn/vae_resolve_bug
Small typo: vae resolve bug
2023-08-27 09:26:18 +03:00
AUTOMATIC1111 07878c6ca8 Merge pull request #12795 from catboxanon/prevent-duplicate-resize-handler-mk2
Prevent duplicate resize handler
2023-08-27 09:24:42 +03:00
AUTOMATIC1111 5e30f737b0 fix for Reload UI function: if you reload UI on one tab, other opened tabs will no longer stop working 2023-08-27 09:19:13 +03:00
AUTOMATIC1111 bd5c16e8da fix for Reload UI function: if you reload UI on one tab, other opened tabs will no longer stop working 2023-08-27 09:19:02 +03:00
AUTOMATIC1111 f2c55523c0 update changelog 2023-08-27 09:17:51 +03:00
AUTOMATIC1111 cb5f0823c6 update gradio to 3.41.2 2023-08-27 08:45:40 +03:00
AUTOMATIC1111 9dd0c4add5 update changelog 2023-08-27 08:45:25 +03:00
AUTOMATIC1111 1b46863f24 update gradio to 3.41.2 2023-08-27 08:45:16 +03:00
AUTOMATIC1111 3d83683a28 fix error that causes some extra networks to be disabled if both <lora:> and <lyco:> are present in the prompt 2023-08-27 08:41:48 +03:00
AUTOMATIC1111 b7f0e81562 fix error that causes some extra networks to be disabled if both <lora:> and <lyco:> are present in the prompt 2023-08-27 08:41:26 +03:00
catboxanon 9d8d279d0d Prevent duplicate resize handler 2023-08-26 17:30:09 -04:00
Daniel Dengler d888490f85 Merge remote-tracking branch 'origin/dev' into vae_resolve_bug 2023-08-26 23:23:11 +02:00
Daniel Dengler 168eac319d is_automatic is missing () for call 2023-08-26 23:22:57 +02:00
catboxanon 73f69a7453 Fix CSS whitespace 2023-08-26 07:04:11 -04:00
catboxanon ec54257cb2 Hide broken image crop tool for now 2023-08-26 07:00:09 -04:00
AUTOMATIC1111 72ee347eab update pnginfo checkpoint to return dict with parsed values 2023-08-26 06:52:18 +03:00
AUTOMATIC1111 ac1abf3de6 fix defaults settings page breaking when any of main UI tabs are hidden 2023-08-26 06:34:23 +03:00
AUTOMATIC1111 bb90b0ff42 fix defaults settings page breaking when any of main UI tabs are hidden 2023-08-26 06:34:00 +03:00
catboxanon db56bdce33 Don't show hidden samplers in dropdown for XYZ script 2023-08-25 16:04:06 -04:00
AUTOMATIC1111 f3a1027869 Merge pull request #12774 from SpenserCai/extensions_api
support installed extensions list api
2023-08-25 19:03:12 +03:00
SpenserCai dd07b5193e fix format error 2023-08-25 22:23:17 +08:00
SpenserCai 3369fb27df support installed extensions list api 2023-08-25 22:15:35 +08:00
AUTOMATIC1111 4c6788644a Merge branch 'release_candidate' into dev 2023-08-25 16:24:45 +03:00
AUTOMATIC1111 a6cedafb27 Merge pull request #12767 from AUTOMATIC1111/img2img-batch-PNG_info-model_hash
img2img batch PNG info model hash
2023-08-25 11:41:31 +03:00
AUTOMATIC1111 e004384e46 Merge branch 'dev' into release_candidate 2023-08-25 11:40:49 +03:00
AUTOMATIC1111 e835e61f3a Merge pull request #12754 from daswer123/improve_integration
Zoom and Pan: Resize handler
2023-08-25 11:40:13 +03:00
w-e-w 4130e5db3d img2img batch PNG info model hash 2023-08-25 10:12:19 +09:00
AUTOMATIC1111 c8c73eae59 fix incorrect save/display of new values in Defaults page in settings 2023-08-24 22:03:24 +03:00
Danil Boldyrev c39efa6ba6 Zoom and Pan: Resize handler 2023-08-24 17:30:35 +03:00
AUTOMATIC1111 935d9d899c update info about gradio in changelog file 2023-08-24 11:16:29 +03:00
AUTOMATIC1111 189229bbf9 Merge branch 'dev' into release_candidate 2023-08-24 11:09:04 +03:00
AUTOMATIC1111 b6c0217405 update changelog 2023-08-24 11:06:23 +03:00
AUTOMATIC1111 995ff5902f add infotext for use_old_scheduling option 2023-08-24 10:07:54 +03:00
AUTOMATIC1111 b0211ff7f8 bump gradio version 2023-08-24 09:41:30 +03:00
AUTOMATIC1111 0027ce1f6e Merge pull request #12457 from rubberbaron/shared-hires-prompt-test
prompt editing timeline has separate range for first pass and hires-fix pass
2023-08-24 09:41:16 +03:00
AUTOMATIC1111 06f18186dc Merge pull request #12745 from AUTOMATIC1111/draw-extra-network-buttons-above-description
draw extra network buttons above description
2023-08-24 09:37:17 +03:00
AUTOMATIC1111 2c570f641c Merge pull request #12749 from daswer123/improve_integration
Zoom and pan: Improve integration
2023-08-24 09:36:53 +03:00
Danil Boldyrev fa68d66c98 remove console.log 2023-08-24 01:42:37 +03:00
Danil Boldyrev 32e790a47e Fixing and improving integration 2023-08-24 01:40:06 +03:00
w-e-w ddf3d1a7ac draw extra network buttons above description 2023-08-24 00:34:28 +09:00
AUTOMATIC1111 c9c8485bc1 Merge branch 'release_candidate' 2023-08-23 15:48:09 +03:00
AUTOMATIC1111 31f2be3dce update changelog 2023-08-23 15:47:11 +03:00
AUTOMATIC1111 250c416474 update doggettx cross attention optimization to not use an unreasonable amount of memory in some edge cases -- suggestion by MorkTheOrk 2023-08-23 15:44:38 +03:00
AUTOMATIC1111 12171ca961 fix memory leak when generation fails 2023-08-23 15:40:31 +03:00
AUTOMATIC1111 bae91855f5 Merge pull request #12737 from yajunzhng/master
tell RealESRGANer which device to run on, could be cuda, M1, or other…
2023-08-23 12:30:17 +03:00
yajun f29b4cd7cb tell RealESRGANer which device to run on, could be cuda, M1, or other GPU 2023-08-23 14:31:38 +08:00
AUTOMATIC1111 0232a987bb set devices.dtype_unet correctly 2023-08-23 07:10:43 +03:00
Danil Boldyrev 6a87e35bef lint 2023-08-23 03:35:09 +03:00
Danil Boldyrev 8fd1558179 Removed the old code 2023-08-23 03:21:28 +03:00
AUTOMATIC1111 04cfcf91d9 fix endless progress requests 2023-08-22 21:05:25 +03:00
AUTOMATIC1111 3ec5ce9416 add type annotations for extra fields of shared.sd_model 2023-08-22 19:05:03 +03:00
AUTOMATIC1111 016554e437 add --medvram-sdxl 2023-08-22 18:49:08 +03:00
AUTOMATIC1111 bb7dd7b646 use an atomic operation to replace the cache with the new version 2023-08-22 17:45:47 +03:00
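The "atomic operation" here is presumably the usual write-to-a-temp-file-then-rename pattern; a minimal sketch under that assumption (helper name and JSON payload are assumptions, not the actual webui code):

```python
import json
import os
import tempfile

def write_cache_atomically(path, data):
    # write to a temp file in the same directory, then rename over the target;
    # os.replace is atomic on both POSIX and Windows
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w", encoding="utf-8") as f:
        json.dump(data, f)
    os.replace(tmp_path, path)
```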
AUTOMATIC1111 9c82b34be7 Merge pull request #12727 from daswer123/improve_integration
Zoom and pan: Improved integration
2023-08-22 17:19:15 +03:00
Danil Boldyrev 54fbdcf467 Improve integration, fix for new gradio 2023-08-22 16:43:23 +03:00
AUTOMATIC1111 2e9289bcbf Merge pull request #12722 from ravi9/intel-readme
Update README.md with install instructions on Intel CPUs, GPUs
2023-08-22 15:26:23 +03:00
AUTOMATIC1111 7fd0ccdffc Merge pull request #12723 from MMP0/dev-resize-handle-fix
Resize handle improvements and bug fixes
2023-08-22 15:25:28 +03:00
MMP0 ed49c7c246 Fix double click event not firing 2023-08-22 21:21:06 +09:00
AUTOMATIC1111 0d90064e9e eslint 2023-08-22 13:57:05 +03:00
AUTOMATIC1111 9158d0fd12 fix broken generate button if not using live previews 2023-08-22 13:54:45 +03:00
MMP0 c4b11ec54e Replace tabs with spaces 2023-08-22 18:48:17 +09:00
AUTOMATIC1111 9e4019c5ff make it possible to localize tooltips and placeholders 2023-08-22 12:00:29 +03:00
MMP0 96edfb560b Limit mouse detection to primary button only 2023-08-22 17:19:26 +09:00
AUTOMATIC1111 f6c52f4f41 for live previews, only hide gallery after at least one live preview pic has been received
fix blinking for live previews
fix a client-side live previews exception that happens when you kill the server side during sampling
match the size of the live preview image to the gallery image
2023-08-22 11:02:14 +03:00
Ravi Panchumarthy 7d94e5f33b Update README.md with Intel install instructions 2023-08-22 00:54:01 -07:00
AUTOMATIC1111 e8a9d213e4 dump current stack traces when exiting with SIGINT 2023-08-22 10:49:52 +03:00
MMP0 0998256fc5 Prevent text selection and cursor changes 2023-08-22 16:45:34 +09:00
AUTOMATIC1111 a459075d26 actual solution to the uncommon hanging problem that is seemingly caused by multiple progress requests working on the same tensor 2023-08-22 10:41:10 +03:00
MMP0 70283a9f4a Expand the hit area of resize handle 2023-08-22 16:40:50 +09:00
MMP0 e1b37a066d Fix resize handle overflowing in Safari 2023-08-22 16:35:49 +09:00
AUTOMATIC1111 d7c9c61420 attempted solution to the uncommon hanging problem that is seemingly caused by live previews working on the tensor while it is being denoised 2023-08-22 09:55:20 +03:00
AUTOMATIC1111 79fd17ee63 remove unneeded example_inputs from gradio config 2023-08-22 08:18:01 +03:00
AUTOMATIC1111 7a3a6e3855 Merge pull request #12713 from AUTOMATIC1111/XYZ-RNG
add RNG source to XYZ
2023-08-22 07:31:26 +03:00
AUTOMATIC1111 f83996cd9f Merge pull request #12714 from catboxanon/resize-handle-reset
Reset columns on resize handle double click
2023-08-22 07:30:52 +03:00
AUTOMATIC1111 7da73cbcca Merge pull request #12717 from brkirch/make-temp-directory
Create Gradio temp directory if necessary
2023-08-22 07:30:25 +03:00
brkirch 299b8096bc Make Gradio temp directory if it doesn't exist
Gradio normally creates the temp directory in `pil_to_temp_file()` (https://github.com/gradio-app/gradio/blob/861d752a83da0f95e9f79173069b69eababeed39/gradio/components/base.py#L313) but since the Gradio implementation of `pil_to_temp_file()` is replaced with `save_pil_to_file()`, the Gradio temp directory should also be created by `save_pil_to_file()` when necessary.
2023-08-21 17:36:17 -04:00
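A hedged sketch of the fix described above, with `save_pil_to_file()` ensuring the directory exists before writing (simplified; the actual implementation may differ, e.g. in metadata handling):

```python
import os
import tempfile

def save_pil_to_file(pil_image, dir=None):
    if dir is not None:
        os.makedirs(dir, exist_ok=True)  # create the Gradio temp directory if missing
    file_obj = tempfile.NamedTemporaryFile(delete=False, suffix=".png", dir=dir)
    pil_image.save(file_obj, format="PNG")
    file_obj.close()
    return file_obj.name
```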
catboxanon aed52d1632 Reset columns on resize handle dblclick 2023-08-21 12:40:27 -04:00
w-e-w 9dce2aa735 add RNG source to XYZ 2023-08-21 23:08:47 +09:00
AUTOMATIC1111 953c3eab7b forbid Full live preview method for medvram and add a setting to undo the forbidding 2023-08-21 15:54:30 +03:00
AUTOMATIC1111 18fb522660 citation mk2 2023-08-21 15:27:04 +03:00
AUTOMATIC1111 bd6f070882 add citation 2023-08-21 15:22:47 +03:00
AUTOMATIC1111 a3fdef4ed4 Merge pull request #12707 from AnyISalIn/dev
feat: replace threading.Lock() with FIFOLock
2023-08-21 15:09:26 +03:00
AUTOMATIC1111 dfd6ea3fca ditch --always-batch-cond-uncond in favor of an UI setting 2023-08-21 15:07:10 +03:00
AnyISalIn 71a0f6ef85 feat: replace threading.Lock() with FIFOLock
Signed-off-by: AnyISalIn <anyisalin@gmail.com>
2023-08-21 17:49:58 +08:00
AUTOMATIC1111 d02c4da483 also prevent changing API options via override_settings 2023-08-21 08:58:15 +03:00
AUTOMATIC1111 df595ae313 make resize handle available to extensions 2023-08-21 08:48:46 +03:00
AUTOMATIC1111 b4d21e7113 prevent API options from being changed via API 2023-08-21 08:48:45 +03:00
AUTOMATIC1111 d722d6de36 Merge pull request #12667 from AUTOMATIC1111/switch-to-PNG-when-images-too-large
switch to PNG when images too large
2023-08-21 07:50:50 +03:00
AUTOMATIC1111 76ae1019b9 add settings for http/https URLs in source images in api 2023-08-21 07:38:07 +03:00
AUTOMATIC1111 a7f18b2297 Merge pull request #12698 from Akegarasu/fix-ssrf-in-api
fix potential ssrf attack in #12663
2023-08-21 07:19:48 +03:00
AUTOMATIC1111 d3632368e6 Merge pull request #12704 from fraz0815/master
Update torch for Navi 31 (7900 XT/XTX)
2023-08-21 07:11:17 +03:00
AUTOMATIC1111 5a3fe7a8d1 Merge pull request #12685 from Uminosachi/fix-vae-mismatch
Fix SD VAE switch error after model reuse
2023-08-21 07:10:19 +03:00
Uminosachi be301f224d Fix for consistency with shared.opts.sd_vae of UI 2023-08-21 11:28:53 +09:00
fraz0815 db6c7ff084 Update torch for Navi 31 (7900 XT/XTX)
Navi 3 needs at least ROCm 5.5, which is only on the nightly chain; previous versions are no longer online (torch==2.1.0.dev-20230614+rocm5.5 torchvision==0.16.0.dev-20230614+rocm5.5 torchaudio==2.1.0.dev-20230614+rocm5.5).
So switch to nightly rocm5.6 without explicit versions this time.
2023-08-20 22:59:30 +02:00
akiba 268dc9b308 fix potential ssrf attack in #12663 2023-08-20 23:17:50 +08:00
Uminosachi 549b0fc526 Change where VAE state are stored in model 2023-08-20 23:06:51 +09:00
AUTOMATIC1111 42b72fe246 fix for small images in live previews not being scaled up 2023-08-20 14:57:48 +03:00
AUTOMATIC1111 f65d0dc081 Merge pull request #12689 from AUTOMATIC1111/patch-config-status
Patch config status handle corrupted files
2023-08-20 14:20:27 +03:00
Uminosachi af5d2e8e5f Change to access sd_model attribute with dot 2023-08-20 20:08:22 +09:00
Uminosachi 5159edbf0e Store base_vae and loaded_vae_file in sd_model 2023-08-20 19:44:37 +09:00
AUTOMATIC1111 4a2bf65fea make mobile built-in extension actually do something 2023-08-20 13:40:11 +03:00
AUTOMATIC1111 db5c304e29 make live previews play nice with window/slider resizes 2023-08-20 13:38:35 +03:00
AUTOMATIC1111 a0d721e109 make live preview display work independently from progress bar 2023-08-20 13:00:59 +03:00
w-e-w 2c10fda399 make it obvious that a config_status is corrupted
also format HTML, removing unnecessary text blocks
2023-08-20 18:48:23 +09:00
w-e-w 7ca20adc6d no need to use OrderedDict 2023-08-20 18:48:23 +09:00
w-e-w e0e64bcdf6 assert key created_at exists in config_states 2023-08-20 18:48:23 +09:00
AUTOMATIC1111 499cef3c2b Merge pull request #12684 from AUTOMATIC1111/fix-xyz-swap-axes
fix xyz swap axes
2023-08-20 12:46:34 +03:00
AUTOMATIC1111 2571767204 Merge pull request #12687 from catboxanon/resize-handle
Add resize-handle (built-in extension)
2023-08-20 12:42:12 +03:00
w-e-w 36ecff71ae catch error when loading config_states
and save config_states with indent
2023-08-20 15:36:39 +09:00
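A sketch of the defensive config-state handling this entry and the `assert key created_at` entry above describe (file layout and exact key handling are assumptions):

```python
import json

def load_config_state(path):
    try:
        with open(path, "r", encoding="utf-8") as file:
            data = json.load(file)
    except (OSError, json.JSONDecodeError) as e:
        print(f"skipping corrupted config state {path}: {e}")
        return None
    if "created_at" not in data:
        return None  # entries missing required keys are treated as corrupted
    return data

def save_config_state(path, data):
    with open(path, "w", encoding="utf-8") as file:
        json.dump(data, file, indent=4)  # indent so the file stays human-readable
```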
catboxanon a3c8510c05 Add resize-handler extension 2023-08-20 02:31:32 -04:00
Uminosachi 042e1d5d0b Fix SD VAE switch error after model reuse 2023-08-20 15:00:14 +09:00
w-e-w ae17c775dc fix xyz swap axes
make csv_string_to_list_strip function
2023-08-20 14:29:26 +09:00
w-e-w 8ce613bb3a switch to PNG when images too large 2023-08-19 16:50:43 +09:00
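JPEG cannot store images wider or taller than 65535 pixels, which is presumably the limit this commit works around; a sketch under that assumption (hypothetical helper, not the actual webui code):

```python
def pick_save_format(image, preferred="jpeg"):
    # JPEG dimensions are 16-bit, so anything larger must fall back to PNG
    if preferred == "jpeg" and max(image.width, image.height) > 65535:
        return "png"
    return preferred
```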
AUTOMATIC1111 9d2299ed0b implement undo hijack for SDXL 2023-08-19 10:16:27 +03:00
AUTOMATIC1111 35db3665b3 possible fix for dictionary changed size during iteration 2023-08-19 08:39:48 +03:00
AUTOMATIC1111 5a5913828c Merge pull request #12616 from catboxanon/extra-noise-callback
Add extra noise callback
2023-08-19 08:36:44 +03:00
AUTOMATIC1111 448d6bef37 Merge pull request #12599 from AUTOMATIC1111/ram_optim
RAM optimization round 2
2023-08-19 08:36:20 +03:00
AUTOMATIC1111 7056fdf2be Merge pull request #12630 from catboxanon/fix/nans-mk2
Attempt to resolve NaN issue with unstable VAEs in fp32 mk2
2023-08-19 08:34:46 +03:00
AUTOMATIC1111 3d81fd714b Merge pull request #12633 from catboxanon/fix/img2img-bg-color
Fix img2img background color for transparent images option not being used
2023-08-19 08:33:22 +03:00
AUTOMATIC1111 58a9082411 Merge pull request #12635 from catboxanon/fix/full-page-img
Make image viewer actually fit the whole page
2023-08-19 08:32:45 +03:00
AUTOMATIC1111 99a64edea8 do not assign to vae_dict 2023-08-19 08:31:06 +03:00
AUTOMATIC1111 d75b521af8 Merge pull request #12638 from Cschlaefli/fix-api-vae-model-refresh
fix issues with api model-refresh and vae-refresh
2023-08-19 08:28:47 +03:00
AUTOMATIC1111 296c8f6a4a Merge pull request #12639 from AUTOMATIC1111/more-hash
More hash filename patterns
2023-08-19 08:28:00 +03:00
AUTOMATIC1111 99cd8de234 Merge pull request #12645 from catboxanon/css/sticky-column
Make results column sticky
2023-08-19 08:27:28 +03:00
AUTOMATIC1111 5590be7a8c Merge pull request #12644 from AUTOMATIC1111/fix-model-override-logic
fix model override logic
2023-08-19 08:26:39 +03:00
AUTOMATIC1111 f084e6bbd0 revert xformers back to 0.0.20 2023-08-19 08:22:12 +03:00
AUTOMATIC1111 cd719b08bd Merge pull request #12663 from SpenserCai/get_image_from_url
api support get image from url
2023-08-19 08:08:19 +03:00
AUTOMATIC1111 90e560bb75 Merge pull request #12648 from catboxanon/feat/gallery-tweaks
Gallery: Set preview to `True`, allow custom height
2023-08-19 08:06:13 +03:00
AUTOMATIC1111 9182dd7e5d Merge pull request #12634 from catboxanon/feat/live-preview-fast-interrupt
Improve interrupt speed
2023-08-19 08:05:36 +03:00
AUTOMATIC1111 f739e3e05d second appearance 2023-08-19 08:04:48 +03:00
AUTOMATIC1111 e7a044a2d1 Merge pull request #12653 from S-Del/fix/typo
fix typo `txt2txt` -> `txt2img`
2023-08-19 08:03:40 +03:00
AUTOMATIC1111 ca72db23d2 Merge pull request #12660 from dansgithubuser/fork
Get python print statements to show up in docker logs
2023-08-19 08:03:19 +03:00
AUTOMATIC1111 e4a2a705ad Merge pull request #12661 from XDOneDude/master
update xformers to 0.0.21 and some fixes
2023-08-19 08:02:18 +03:00
AUTOMATIC1111 bb91bb5e83 Merge pull request #12662 from bluelovers/bluelovers-patch-1-1
refactor: Update ui.js
2023-08-19 08:01:05 +03:00
SpenserCai 4760c3c0b5 api support get image from url 2023-08-19 12:19:21 +08:00
bluelovers 1631e96a98 refactor: Update ui.js 2023-08-19 10:38:43 +08:00
XDOneDude 61c1261e4e more grammar fixes 2023-08-18 21:56:15 -04:00
XDOneDude 956e1d8d90 xformers update 2023-08-18 21:25:59 -04:00
Dan 453a5ac1d0 run python unbuffered so output shows up in docker logs 2023-08-18 21:09:27 -04:00
S-Del 64d5fa1efd fix typo txt2txt -> txt2img 2023-08-18 22:32:20 +09:00
catboxanon 9d1d63afca Exit out of hires fix if interrupted earlier 2023-08-18 05:55:10 -04:00
catboxanon 44d4e7c500 Gallery: Set preview to True, allow custom height 2023-08-18 05:15:30 -04:00
catboxanon f89f01f9d8 Make results column sticky 2023-08-18 04:18:22 -04:00
w-e-w 640cb1bb8d fix model override logic
no extra logic needed to unload the refiner model
2023-08-18 17:14:02 +09:00
w-e-w a81dc43fcd negative_prompt full_prompt hash 2023-08-18 15:13:12 +09:00
w-e-w 8a1f32b6a5 image hash 2023-08-18 14:04:46 +09:00
Cade Schlaefli f9c2216ffa remove unused import 2023-08-17 21:14:14 -05:00
Cade Schlaefli 959f8b32d5 fix issues with model refresh 2023-08-17 20:48:17 -05:00
catboxanon 13f1357b7f Make image viewer actually fit the whole page 2023-08-17 20:21:46 -04:00
catboxanon 3ce5fb8e5c Add option for faster live interrupt 2023-08-17 20:03:26 -04:00
catboxanon 46e8898f65 Fix img2img background color not being used 2023-08-17 19:35:34 -04:00
catboxanon 3003b10e0a Attempt to resolve NaN issue with unstable VAEs in fp32 mk2 2023-08-17 18:10:55 -04:00
AUTOMATIC1111 0dc74545c0 resolve the issue with loading fp16 checkpoints while using --no-half 2023-08-17 07:54:07 +03:00
catboxanon 254be4eeb2 Add extra noise callback 2023-08-16 21:45:19 -04:00
AUTOMATIC1111 541ef9247c Merge pull request #12607 from AUTOMATIC1111/return-empty-list-if-extensions_dir-not-exist-
fix: return empty list if extensions dir does not exist
2023-08-16 18:41:02 +03:00
w-e-w e1a29266b2 return empty list if extensions_dir does not exist 2023-08-17 00:24:24 +09:00
AUTOMATIC1111 fc3a57ff96 Merge pull request #12603 from AUTOMATIC1111/auto-add-data-dir-to-gradio-allowed-path
auto add data-dir to gradio-allowed-path
2023-08-16 14:48:37 +03:00
w-e-w 0cf85b24df auto add data-dir to gradio-allowed-path 2023-08-16 20:18:46 +09:00
AUTOMATIC1111 eaba3d7349 send weights to target device instead of CPU memory 2023-08-16 12:11:01 +03:00
AUTOMATIC1111 57e59c14c8 Revert "send weights to target device instead of CPU memory"
This reverts commit 0815c45bcd.
2023-08-16 11:28:00 +03:00
AUTOMATIC1111 0815c45bcd send weights to target device instead of CPU memory 2023-08-16 10:44:17 +03:00
AUTOMATIC1111 023a3a98a1 Merge pull request #12596 from AUTOMATIC1111/fix-taesd-scale
Remove wrong TAESD Latent scale
2023-08-16 09:56:12 +03:00
AUTOMATIC1111 86221269f9 RAM optimization round 2 2023-08-16 09:55:35 +03:00
Kohaku-Blueleaf d9ddc5d4cd Remove wrong scale 2023-08-16 11:21:12 +08:00
AUTOMATIC1111 a7f7701b64 Merge pull request #12589 from catboxanon/fix/css-overflow
CSS: Remove forced visible overflow for Gradio group child divs
2023-08-15 21:47:49 +03:00
AUTOMATIC1111 fd563e3274 Merge pull request #12586 from catboxanon/fix/rng-shape
RNG: Make all elements of shape `int`s
2023-08-15 21:47:02 +03:00
AUTOMATIC1111 d09d33bc2d Merge pull request #12588 from catboxanon/fix/inpaint-upload
Fix inpaint upload for alpha masks
2023-08-15 21:46:19 +03:00
catboxanon 7083391931 CSS: Remove forced visible overflow for Gradio group child divs 2023-08-15 14:44:13 -04:00
catboxanon 0f77139253 Fix inpaint upload for alpha masks, create reusable function 2023-08-15 14:24:55 -04:00
catboxanon 5b28b7dbc7 RNG: Make all elements of shape ints 2023-08-15 13:38:37 -04:00
AUTOMATIC1111 85fcb7b8df lint 2023-08-15 19:25:03 +03:00
AUTOMATIC1111 8b181c812f Merge pull request #12584 from AUTOMATIC1111/full-module-with-bias
Add ex_bias into full module
2023-08-15 19:24:15 +03:00
AUTOMATIC1111 f01682ee01 store patches for Lora in a specialized module 2023-08-15 19:23:40 +03:00
Kohaku-Blueleaf aa57a89a21 full module with ex_bias 2023-08-15 23:41:46 +08:00
AUTOMATIC1111 7327be97aa Merge pull request #12570 from NoCrypt/add-miku-theme
Add NoCrypt/miku gradio theme
2023-08-15 16:31:12 +03:00
AUTOMATIC1111 63f881a5f0 Merge pull request #12577 from brkirch/fix-vae-near-checkpoint-exception
Fix `sd_vae_as_default` being accessed instead of `sd_vae_overrides_per_model_preferences`
2023-08-15 15:29:48 +03:00
AUTOMATIC1111 dc0e63a48a Merge pull request #12578 from AUTOMATIC1111/changelog-fix
Changelog minor correction
2023-08-15 15:29:15 +03:00
w-e-w f117bb64fc Update CHANGELOG.md 2023-08-15 20:19:13 +09:00
brkirch 54209c1639 Use the new SD VAE override setting 2023-08-15 06:29:39 -04:00
AUTOMATIC1111 ec505bac41 Merge pull request #12573 from catboxanon/changelog
Add PR refs to changelog
2023-08-15 11:47:20 +03:00
catboxanon 2154662826 Add PR refs to changelog 2023-08-15 03:23:44 -04:00
AUTOMATIC1111 9ab52caf02 update changelog file 2023-08-15 09:50:57 +03:00
AUTOMATIC1111 bc61ad9ec8 Merge pull request #12564 from catboxanon/feat/img2img-noise
Add extra noise param for img2img operations
2023-08-15 09:50:20 +03:00
NoCrypt b0a6d61d73 Add NoCrypt/miku gradio theme 2023-08-15 13:22:44 +07:00
catboxanon 371b24b17c Add extra img2img noise 2023-08-15 02:19:19 -04:00
AUTOMATIC1111 79d4e81984 fix processing error that happens if batch_size is not a multiple of how many prompts/negative prompts there are #12509 2023-08-15 08:46:17 +03:00
AUTOMATIC1111 7e77a38cbc get XYZ plot to work with recent changes to refiner specified in fields of p rather than in settings 2023-08-15 08:27:50 +03:00
AUTOMATIC1111 d6b79b9963 Merge pull request #12476 from AnyISalIn/dev
xyz_grid: support refiner_checkpoint and refiner_switch_at
2023-08-15 08:26:38 +03:00
AUTOMATIC1111 6f86573247 Merge pull request #12552 from brkirch/update-sdxl-commit-hash
Update SD XL commit hash
2023-08-15 08:12:21 +03:00
AUTOMATIC1111 45be87afc6 correctly add Eta DDIM to infotext when it's 1.0 and do not add it when it's 0.0. 2023-08-14 21:48:05 +03:00
AUTOMATIC1111 5daf7983d1 when refreshing cards in extra networks UI, do not discard user's custom resolution 2023-08-14 19:27:04 +03:00
AUTOMATIC1111 f23e5ce2da revert changed inpainting mask conditioning calculation after #12311 2023-08-14 17:59:03 +03:00
AUTOMATIC1111 e56b7c8419 Merge pull request #12547 from whitebell/fix-typo
Fix typo in shared_options.py
2023-08-14 13:36:10 +03:00
AUTOMATIC1111 2359c07ddf Merge pull request #12551 from AUTOMATIC1111/separate-Extra-options
separate Extra options
2023-08-14 13:35:41 +03:00
brkirch bc63339df3 Update hash for SD XL Repo 2023-08-14 06:26:36 -04:00
w-e-w a2e213bc7b separate Extra options 2023-08-14 18:50:22 +09:00
AUTOMATIC1111 6bfd4dfecf add second_order to samplers that mistakenly didn't have it 2023-08-14 12:07:38 +03:00
Robert Barron 99ab3d43a7 hires prompt timeline: merge to latests, slightly simplify diff 2023-08-14 00:43:27 -07:00
AUTOMATIC1111 353c876172 fix API always using -1 as seed 2023-08-14 10:43:18 +03:00
Robert Barron d61e31bae6 Merge remote-tracking branch 'auto1111/dev' into shared-hires-prompt-test 2023-08-14 00:35:17 -07:00
AUTOMATIC1111 f3b96d4998 return seed controls UI to how it was before 2023-08-14 10:22:52 +03:00
AUTOMATIC1111 abbecb3e73 further repair the /docs page to not break styles with the attempted fix 2023-08-14 10:15:10 +03:00
whitebell b39d9364d8 Fix typo in shared_options.py
unperdictable -> unpredictable
2023-08-14 15:58:38 +09:00
AUTOMATIC1111 c7c16f805c repair /docs page 2023-08-14 09:49:51 +03:00
AUTOMATIC1111 f37cc5f5e1 Merge pull request #12542 from AUTOMATIC1111/res-sampler
Add RES sampler and reorder the sampler list
2023-08-14 09:02:10 +03:00
AUTOMATIC1111 3a4bee1096 Merge pull request #12543 from AUTOMATIC1111/extra-norm-module
Fix MHA error with ex_bias and support ex_bias for layers which don't have bias
2023-08-14 09:01:34 +03:00
AUTOMATIC1111 c1a31ec9f7 revert to applying mask before denoising for k-diffusion, like it was before 2023-08-14 08:59:15 +03:00
Kohaku-Blueleaf f70ded8936 remove "if bias exist" check 2023-08-14 13:53:40 +08:00
Kohaku-Blueleaf aa26f8eb40 Put frequently used sampler back 2023-08-14 13:50:53 +08:00
AUTOMATIC1111 cda2f0a162 make on_before_component/on_after_component possible earlier 2023-08-14 08:49:39 +03:00
AUTOMATIC1111 aeb76ef174 repair DDIM/PLMS/UniPC batches 2023-08-14 08:49:02 +03:00
Kohaku-Blueleaf e7c03ccdce Merge branch 'dev' into extra-norm-module 2023-08-14 13:34:51 +08:00
Kohaku-Blueleaf d9cc27cb29 Fix MHA updown err and support ex-bias for no-bias layer 2023-08-14 13:32:51 +08:00
Kohaku-Blueleaf 0ea61a74be add RES (DPM++ 2M SDE Heun) and reorder the sampler list 2023-08-14 11:46:36 +08:00
AUTOMATIC1111 007ecfbb29 also use setup callback for the refiner instead of before_process 2023-08-13 21:01:13 +03:00
AUTOMATIC1111 9cd0475c08 Merge pull request #12526 from brkirch/mps-adjust-sub-quad
Fixes for `git checkout`, MPS/macOS fixes and optimizations
2023-08-13 20:28:49 +03:00
AUTOMATIC1111 8452708560 Merge pull request #12530 from eltociear/eltociear-patch-1
Fix typo in launch_utils.py
2023-08-13 20:27:17 +03:00
AUTOMATIC1111 16781ba09a fix 2 for git code botched by previous PRs 2023-08-13 20:15:20 +03:00
Ikko Eltociear Ashimine 09ff5b5416 Fix typo in launch_utils.py
existance -> existence
2023-08-14 01:03:49 +09:00
AUTOMATIC1111 f093c9d39d fix broken XYZ plot seeds
add new callback for scripts to be used before processing
2023-08-13 17:31:10 +03:00
brkirch 2035cbbd5d Fix DDIM and PLMS samplers on MPS 2023-08-13 10:07:52 -04:00
brkirch 5df535b7c2 Remove duplicate code for torchsde randn 2023-08-13 10:07:52 -04:00
brkirch 232c931f40 Mac k-diffusion workarounds are no longer needed 2023-08-13 10:07:52 -04:00
brkirch f4dbb0c820 Change the repositories origin URLs when necessary 2023-08-13 10:07:52 -04:00
brkirch 9058620cec git checkout with commit hash 2023-08-13 10:07:14 -04:00
brkirch 2489252099 torch.empty can create issues; use torch.zeros
For MPS, using a tensor created with `torch.empty()` can cause `torch.baddbmm()` to include NaNs in the tensor it returns, even though `beta=0`. However, with a tensor of shape [1,1,1], there should be a negligible performance difference between `torch.empty()` and `torch.zeros()` anyway, so it's better to just use `torch.zeros()` for this and avoid unnecessarily creating issues.
2023-08-13 10:06:25 -04:00
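The commit message above describes the failure mode precisely; a small illustration of the substitution (shapes chosen arbitrarily for the demo):

```python
import torch

batch1 = torch.randn(1, 4, 8)
batch2 = torch.randn(1, 8, 4)

# was: bias = torch.empty(1, 1, 1) -- on MPS, uninitialized memory could
# surface as NaNs from torch.baddbmm even with beta=0
bias = torch.zeros(1, 1, 1)

out = torch.baddbmm(bias, batch1, batch2, beta=0, alpha=1.0)
```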
brkirch 87dd685224 Make sub-quadratic the default for MPS 2023-08-13 10:06:25 -04:00
brkirch abfa4ad8bc Use fixed size for sub-quadratic chunking on MPS
Even if this causes chunks to be much smaller, performance isn't significantly impacted. This will usually reduce memory usage but should also help with poor performance when free memory is low.
2023-08-13 10:06:25 -04:00
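A sketch of the idea, assuming a chunk-size helper along these lines (the fixed value and function are hypothetical):

```python
def kv_chunk_size(num_tokens, device_type):
    if device_type == "mps":
        return 512  # fixed, small chunks: negligible slowdown, steadier memory use
    # elsewhere, derive the chunk size from available memory as before
    return max(1, num_tokens // 8)
```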
AUTOMATIC1111 3163d1269a fix for the broken run_git calls 2023-08-13 16:51:21 +03:00
AUTOMATIC1111 1c6ca09992 Merge pull request #12510 from catboxanon/feat/extnet/hashes
Support search and display of hashes for all extra network items
2023-08-13 16:46:32 +03:00
AUTOMATIC1111 d73db17ee3 Merge pull request #12515 from catboxanon/fix/gc1
Clear sampler and garbage collect before decoding images to reduce VRAM
2023-08-13 16:45:38 +03:00
AUTOMATIC1111 127ab9114f Merge pull request #12514 from catboxanon/feat/batch-encode
Encode batch items individually to significantly reduce VRAM
2023-08-13 16:41:07 +03:00
AUTOMATIC1111 d53f3b5596 Merge pull request #12520 from catboxanon/eta
Update description of eta setting
2023-08-13 16:40:17 +03:00
AUTOMATIC1111 d41a5bb97d Merge pull request #12521 from catboxanon/feat/more-s-noise
Add `s_noise` param to more samplers
2023-08-13 16:39:25 +03:00
AUTOMATIC1111 551d2fabcc Merge pull request #12522 from catboxanon/fix/extra_params
Restore `extra_params` that was lost in merge
2023-08-13 16:38:27 +03:00
AUTOMATIC1111 db40d26d08 linter 2023-08-13 16:38:10 +03:00
catboxanon 525b55b1e9 Restore extra_params that was lost in merge 2023-08-13 09:08:34 -04:00
catboxanon ce0829d711 Merge branch 'feat/dpmpp3msde' into feat/more-s-noise 2023-08-13 08:46:58 -04:00
catboxanon ac790fc49b Discard penultimate sigma for DPM-Solver++(3M) SDE 2023-08-13 08:46:07 -04:00
catboxanon f4757032e7 Fix s_noise description 2023-08-13 08:24:28 -04:00
catboxanon d1a70c3f05 Add s_noise param to more samplers 2023-08-13 08:22:24 -04:00
AUTOMATIC1111 d8419762c1 Lora: output warnings in UI rather than fail for unfitting loras; switch to logging for error output in console 2023-08-13 15:07:37 +03:00
catboxanon 60a7405165 Update description of eta setting 2023-08-13 08:06:40 -04:00
catboxanon 1ae9dacb4b Add DPM-Solver++(3M) SDE 2023-08-13 07:57:29 -04:00
catboxanon 69f49c8d39 Clear sampler before decoding images
More significant VRAM reduction.
2023-08-13 04:40:34 -04:00
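A hedged sketch of the cleanup order the commit above describes; `p`, `decode_latents`, and `latents` are hypothetical stand-ins for the webui's processing objects:

```python
import gc

import torch

def decode_after_cleanup(p, decode_latents, latents):
    # release the sampler and reclaim its buffers before the VRAM-heavy decode
    p.sampler = None
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    return decode_latents(latents)
```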
catboxanon 822597db49 Encode batches separately
Significantly reduces VRAM.
This makes encoding more in line with how decoding currently functions.
2023-08-13 04:16:48 -04:00
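The idea, sketched with a hypothetical `vae_encode` callable rather than the webui's real API:

```python
import torch

def encode_individually(vae_encode, images: torch.Tensor) -> torch.Tensor:
    # encode one image at a time so peak VRAM scales with a single image,
    # not the whole batch, then reassemble the latents
    latents = [vae_encode(image.unsqueeze(0)) for image in images]
    return torch.cat(latents)
```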
catboxanon 7fa5ee54b1 Support search and display of hashes for all extra network items 2023-08-13 02:32:54 -04:00
AUTOMATIC1111 da80d649fd Merge pull request #12503 from AUTOMATIC1111/extra-norm-module
Add Norm Module to lora ext and add "bias" support
2023-08-13 08:28:48 +03:00
AUTOMATIC1111 61673451ff Merge pull request #12491 from AUTOMATIC1111/xyz-csv-and-dropdown-mode
Bring back CSV mode for XYZ grid
2023-08-13 08:25:15 +03:00
AUTOMATIC1111 599f61a1e0 use dataclass for StableDiffusionProcessing 2023-08-13 08:24:16 +03:00
w-e-w 0e3bac8132 rephrase and move 2023-08-13 14:09:38 +09:00
AUTOMATIC1111 fa9370b741 add refiner to StableDiffusionProcessing class
write out correct model name in infotext, rather than the refiner model
2023-08-13 06:07:30 +03:00
Kohaku-Blueleaf 5881dcb887 remove debug print 2023-08-13 02:36:02 +08:00
Kohaku-Blueleaf a2b8305096 return None if no ex_bias 2023-08-13 02:35:04 +08:00
Kohaku-Blueleaf bd4da4474b Add extra norm module into built-in lora ext
refer to LyCORIS 1.9.0.dev6
add new option and module for training norm layer
(Which is reported to be good for style)
2023-08-13 02:27:39 +08:00
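A hedged sketch of what such a norm module could look like; the class and method names are illustrative, not the extension's real API (which follows LyCORIS conventions):

```python
from typing import Optional

import torch
import torch.nn as nn

class ExtraNormModule:
    """Trained deltas for a norm layer's weight and (optionally) bias."""

    def __init__(self, w_norm: torch.Tensor, b_norm: Optional[torch.Tensor]):
        self.w_norm = w_norm
        self.b_norm = b_norm

    def apply_to(self, layer: nn.LayerNorm, multiplier: float = 1.0) -> None:
        # merge the trained deltas into the live layer at the given strength
        layer.weight.data += multiplier * self.w_norm.to(layer.weight.device)
        if self.b_norm is not None and layer.bias is not None:
            layer.bias.data += multiplier * self.b_norm.to(layer.bias.device)
```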
w-e-w dc5b5ee9c6 properly convert this into CSV string 2023-08-13 02:21:04 +09:00
w-e-w 299eb54308 pass csv_mode 2023-08-13 02:17:13 +09:00
w-e-w 8d9ca46e0a convert value when switching mode 2023-08-13 02:05:20 +09:00
AUTOMATIC1111 b2080756fc make "send to" buttons into small tool buttons 2023-08-12 19:03:33 +03:00
AUTOMATIC1111 9d0ec13596 fix quicksettings on Chrome 2023-08-12 18:42:59 +03:00
AUTOMATIC1111 6816ad5ed8 fix broken reuse seed 2023-08-12 18:36:30 +03:00
AUTOMATIC1111 4e8690906c update seed/subseed HTML widths 2023-08-12 18:00:30 +03:00
AUTOMATIC1111 f0b72b8121 move seed, variation seed and variation seed strength to a single row, dump resize seed from UI
add a way for scripts to register a callback for before/after just a single component's creation
2023-08-12 17:46:13 +03:00
w-e-w 7a68ac6615 rename to csv mode 2023-08-12 23:40:05 +09:00
w-e-w f131f84e13 dropdown mode checkbox 2023-08-12 23:26:25 +09:00
AUTOMATIC1111 6aa26a26d5 change quicksettings items to have variable width 2023-08-12 16:47:39 +03:00
w-e-w fd617fad00 Redundant character escape '\]' in RegExp 2023-08-12 22:24:59 +09:00
w-e-w d20eb11c9e format 2023-08-12 22:24:00 +09:00
w-e-w c8d453e915 bring back csv mode 2023-08-12 22:20:34 +09:00
AUTOMATIC1111 b293ed3061 make it possible to use hires fix together with refiner 2023-08-12 12:54:32 +03:00
AUTOMATIC1111 64311faa68 put refiner into main UI, into the new accordions section
add VAE from main model into infotext, not from refiner model
option to make scripts UI without gr.Group
fix inconsistencies with refiner when using samplers that do more denoising than steps
2023-08-12 12:39:59 +03:00
AUTOMATIC1111 26c92f056a Merge pull request #12480 from catboxanon/fix/cc
Fix color correction by converting image to RGB
2023-08-12 09:12:30 +03:00
AUTOMATIC1111 ebc1bafb03 Merge pull request #12479 from catboxanon/fix/extras-generator
Refactor postprocessing/extras tab to use generator to resolve OOM issues
2023-08-12 08:58:14 +03:00
AUTOMATIC1111 9dae70da79 Merge pull request #12487 from AUTOMATIC1111/disable-extensions-installer-with-arg
patch: also disable extensions installer with arg
2023-08-12 08:57:35 +03:00
w-e-w f57bc1a21b disable extensions installer with arg 2023-08-12 12:06:31 +09:00
catboxanon af27b716e5 Fix color correction by converting image to RGB 2023-08-11 12:22:11 -04:00
catboxanon 7c9c19b2a2 Refactor postprocessing to use generator to resolve OOM issues 2023-08-11 11:32:12 -04:00
AnyISalIn 3b2f51602d xyz_grid: support refiner_checkpoint and refiner_switch_at
Signed-off-by: AnyISalIn <anyisalin@gmail.com>
2023-08-11 21:40:33 +08:00
AUTOMATIC1111 ae6b30907d Merge pull request #12470 from Splendide-Imaginarius/mask-blur-property+kernel
Make `StableDiffusionProcessingImg2Img.mask_blur` a property, make it more in line with PIL `GaussianBlur`
2023-08-11 15:03:18 +03:00
AUTOMATIC1111 77c52ea701 fix accordion style on img2img 2023-08-11 11:59:11 +03:00
AUTOMATIC1111 3c00e41ec0 Merge pull request #12458 from daswer123/auto-expand
Zoom and pan: Some fixes for the auto-expand
2023-08-11 07:56:31 +03:00
AUTOMATIC1111 340c1cc68d Merge pull request #12463 from catboxanon/fix/vae-hash
Properly return `None` for VAE hash when using `--no-hashing`
2023-08-11 07:55:42 +03:00
AUTOMATIC1111 2c79f2af6e Merge pull request #12466 from catboxanon/fix/lora-old-mk2
Fix broken `Lora/Networks: use old method` option
2023-08-11 07:53:12 +03:00
catboxanon 4fafc34e49 Fix to make LoRA old method setting work 2023-08-10 23:42:58 -04:00
catboxanon d456fb797a fix: Properly return None when VAE hash is None 2023-08-10 16:04:49 -04:00
AUTOMATIC1111 458eda1321 Merge pull request #12456 from AUTOMATIC1111/patch-#12453
Patch #12453
2023-08-10 17:55:31 +03:00
Robert Barron 54f926b11d fix bad merge 2023-08-10 07:48:04 -07:00
w-e-w a75d756a6f use default value if value error 2023-08-10 23:47:28 +09:00
Robert Barron 863613293e Merge branch 'shared-hires-prompt-raw' into shared-hires-prompt-test 2023-08-10 07:45:35 -07:00
AUTOMATIC1111 9af5cce4c7 Merge pull request #12454 from wfjsw/no-autofix-on-fetch
rm dir on failed clone, disable autofix for fetch
2023-08-10 17:28:29 +03:00
AUTOMATIC1111 e0906096c5 remove unnecessary GFPGAN_PACKAGE (we install GFPGAN from the requirements file) 2023-08-10 17:22:08 +03:00
AUTOMATIC1111 4549f2a9cc lint 2023-08-10 17:21:01 +03:00
AUTOMATIC1111 f4979422dd return the line lost during the merge 2023-08-10 17:18:33 +03:00
Jabasukuriputo Wang 5a705c2468 rm dir on failed clone, disable autofix for fetch 2023-08-10 09:18:10 -05:00
AUTOMATIC1111 36762f0eaf Merge pull request #12371 from AUTOMATIC1111/refiner
initial refiner support
2023-08-10 17:05:32 +03:00
AUTOMATIC1111 ac8a5d18d3 resolve merge issues 2023-08-10 17:04:59 +03:00
AUTOMATIC1111 70a01cd444 Merge branch 'dev' into refiner 2023-08-10 17:04:38 +03:00
AUTOMATIC1111 959404e0e2 Merge pull request #12453 from AUTOMATIC1111/catch-float-ValueError-default-to--1
Catch float value error default to -1
2023-08-10 16:46:40 +03:00
AUTOMATIC1111 887bcfdf65 Merge pull request #12447 from AUTOMATIC1111/extra-networks-metadata-indent-
save extra networks metadata with indent
2023-08-10 16:46:08 +03:00
AUTOMATIC1111 40ccd26b19 Merge pull request #12450 from catboxanon/cache-file
Add env var for cache file
2023-08-10 16:45:44 +03:00
w-e-w 4412398c4b catch float ValueError default -1 2023-08-10 22:44:33 +09:00
AUTOMATIC1111 942d7a118a Merge pull request #12452 from AUTOMATIC1111/use-new-style-constructor
use new style constructor
2023-08-10 16:43:27 +03:00
AUTOMATIC1111 070b034cd5 put infotext label for setting into OptionInfo definition rather than in a separate list 2023-08-10 16:42:26 +03:00
AUTOMATIC1111 9d78d317ae add VAE to infotext 2023-08-10 16:22:10 +03:00
Danil Boldyrev 045f740892 Height fix 2023-08-10 16:17:52 +03:00
AUTOMATIC1111 b13806c150 fix a bug preventing normal operation if a string is added to a gr.Number component via ui-config.json 2023-08-10 16:15:34 +03:00
AUTOMATIC1111 4f6582cb66 add precision=0 to gr.Number seed 2023-08-10 16:10:42 +03:00
AUTOMATIC1111 1b3093fe3a fix --use-textbox-seed 2023-08-10 15:58:53 +03:00
w-e-w 237b704172 use new style constructor 2023-08-10 21:42:26 +09:00
AUTOMATIC1111 4d93f48f09 fix for multiple input accordions 2023-08-10 15:32:54 +03:00
Danil Boldyrev ed01d2ee3b another fix, a different approach 2023-08-10 13:45:25 +03:00
catboxanon 386202895f Add env var for cache file 2023-08-10 06:17:45 -04:00
AUTOMATIC1111 0883810592 comment for InputAccordion 2023-08-10 13:02:50 +03:00
AUTOMATIC1111 faca86620d linter fixes 2023-08-10 12:58:00 +03:00
AUTOMATIC1111 6c23061a7d avoid importing gradio in tests because it spams warnings 2023-08-10 12:50:03 +03:00
AUTOMATIC1111 33446acf47 face restoration and tiling moved to settings - use "Options in main UI" setting if you want them back 2023-08-10 12:41:41 +03:00
w-e-w 0a0a9d4fe9 extra networks metadata indent 2023-08-10 18:05:17 +09:00
AUTOMATIC1111 9199b6b7eb add a custom UI element that combines accordion and checkbox
rework hires fix UI to use accordion
prevent bogus progress output in console when calculating hires fix dimensions
2023-08-10 11:20:46 +03:00
AUTOMATIC1111 2c5106ed06 additional work on gradio styles;
make the accordion change affect all accordions, not just inside scripts div
2023-08-10 07:57:52 +03:00
AUTOMATIC1111 6ed1541ef5 Merge pull request #12312 from catboxanon/script-accordion-style
Add styling for script components
2023-08-10 07:05:44 +03:00
AUTOMATIC1111 736aaf348b Merge pull request #12440 from catboxanon/dev
Use better symbol for extra networks sort
2023-08-10 06:39:38 +03:00
AUTOMATIC1111 f0edd26998 Merge pull request #12439 from catboxanon/fix/slerp-import
Add slerp import for extension backwards compat
2023-08-10 06:37:44 +03:00
catboxanon ff1bfd01ba Remove up down symbol 2023-08-09 14:41:25 -04:00
catboxanon 2ceb4f81e2 Use better symbol for extra networks sort 2023-08-09 14:40:18 -04:00
catboxanon 259805947e Add slerp import for extension backwards compat 2023-08-09 14:24:16 -04:00
AUTOMATIC1111 66c32e40e8 fix gradio themes not applying 2023-08-09 21:19:33 +03:00
AUTOMATIC1111 edfae9e78a add --loglevel commandline argument for logging
remove the progressbar for extension installation in favor of logging output
2023-08-09 20:49:33 +03:00
Robert Barron d1ba46b6e1 allow first pass and hires pass to use a single prompt to do different prompt editing, hires is 1.0..2.0:
relative time range is [1..2]
  absolute time range is [steps+1..steps+hire_steps], e.g. with 30 steps and 20 hires steps, '20' is 2/3rds through first pass, and 40 is halfway through hires pass
2023-08-09 10:38:47 -07:00
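The arithmetic in the commit message, worked through as plain numbers (not the webui's prompt-editing parser):

```python
steps, hires_steps = 30, 20

# absolute step 20 falls in the first pass: 20/30 = 2/3 of the way through
assert 20 / steps == 2 / 3
# absolute step 40 falls in the hires pass: (40 - 30)/20 = 1/2 of the way through
assert (40 - steps) / hires_steps == 0.5
```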
AUTOMATIC1111 c7b9394daf Merge pull request #12435 from daswer123/auto-expand
Zoom and pan: fix auto-expand
2023-08-09 20:04:44 +03:00
AUTOMATIC1111 ab42f81c75 Merge pull request #12436 from catboxanon/fix/tqdm
Only import `tqdm` when needed in `launch_utils`
2023-08-09 20:03:55 +03:00
catboxanon 8b7b99f8d5 fix: Only import tqdm when needed 2023-08-09 12:18:03 -04:00
Danil Boldyrev 4a64d34001 fix auto-expand 2023-08-09 18:40:45 +03:00
AUTOMATIC1111 95821f0132 split webui.py's initialization and utility functions into separate files 2023-08-09 18:11:13 +03:00
AUTOMATIC1111 a2a97e57f0 simplify 2023-08-09 17:08:36 +03:00
AUTOMATIC1111 f2ebcee7c4 Merge pull request #11925 from wfjsw/ext-inst-pbar
Progressbar for extension installers
2023-08-09 17:03:24 +03:00
AUTOMATIC1111 eed963e972 Lora cache in memory 2023-08-09 16:54:49 +03:00
AUTOMATIC1111 7ba8f11688 fix missing restricted_opts from shared 2023-08-09 15:06:03 +03:00
AUTOMATIC1111 aa10faa591 fix checkpoint name jumping around in the list of checkpoints for no good reason 2023-08-09 14:47:44 +03:00
AUTOMATIC1111 358f55db6a Merge pull request #12424 from AUTOMATIC1111/extra-network-metadata-inherit-old-description
extra network metadata inherit old description
2023-08-09 14:41:30 +03:00
AUTOMATIC1111 c8c48640e6 Merge pull request #12426 from AUTOMATIC1111/split_shared
Split shared.py into multiple files
2023-08-09 14:40:06 +03:00
w-e-w 0cac6ab615 extra network metadata inherit old description 2023-08-09 20:35:06 +09:00
AUTOMATIC1111 2617598b7a Merge pull request #12392 from olivierlacan/fix/fastapi
Pin fastapi to > 0.90.1 to fix crash
2023-08-09 14:25:50 +03:00
AUTOMATIC1111 8eea891718 Merge pull request #12396 from Uminosachi/fix-mismatch-shared
Fix mismatch between shared.sd_model & shared.opts
2023-08-09 14:20:12 +03:00
AUTOMATIC1111 386245a264 split shared.py into multiple files; should resolve all circular reference import errors related to shared.py 2023-08-09 10:25:35 +03:00
AUTOMATIC1111 7d81ecbea6 Split history: mv temp modules/shared.py 2023-08-09 08:47:53 +03:00
AUTOMATIC1111 8cf8fc6794 Split history: merge 2023-08-09 08:47:53 +03:00
AUTOMATIC1111 da0712ee7d Split history: mv modules/shared.py temp 2023-08-09 08:47:53 +03:00
AUTOMATIC1111 a6f840b4dc Split history: mv modules/shared.py modules/shared_options.py 2023-08-09 08:47:52 +03:00
AUTOMATIC1111 0d5dc9a6e7 rework RNG to use generators instead of generating noises beforehand 2023-08-09 08:43:31 +03:00
AUTOMATIC1111 d81d3fa8cd fix styles missing from the prompt in infotext when making a grid of a batch of multiple images 2023-08-09 07:45:06 +03:00
w-e-w c102780693 extra network metadata inherit old description 2023-08-09 13:38:53 +09:00
AUTOMATIC1111 7f9dbc45b1 Merge pull request #12413 from daswer123/auto-expand
Zoom and pan: option to auto-expand a wide image
2023-08-09 07:03:30 +03:00
AUTOMATIC1111 08e538e2e6 Merge pull request #12422 from catboxanon/fix/hr-same-sampler
Fix HR `Use same sampler` option
2023-08-09 07:00:48 +03:00
catboxanon bd4b4292ef Fix hr use same sampler 2023-08-08 20:55:08 -04:00
Danil Boldyrev e12a1be1ca auto-expand enable by default for js 2023-08-09 00:14:19 +03:00
Danil Boldyrev a74c014425 auto-expand enable by default 2023-08-09 00:06:51 +03:00
AUTOMATIC1111 a2360de3f3 Merge pull request #12412 from dhwz/dev
fix typo
2023-08-08 23:30:57 +03:00
AUTOMATIC1111 0e83c67525 by request: fix tiled vae extension 2023-08-08 22:27:32 +03:00
AUTOMATIC1111 1aefb50259 add None refiner option 2023-08-08 22:17:25 +03:00
AUTOMATIC1111 ec194b6374 fix webui not switching back to original model from refiner when batch count is greater than 1 2023-08-08 22:14:02 +03:00
AUTOMATIC1111 f8ff8c0638 merge errors 2023-08-08 22:09:51 +03:00
AUTOMATIC1111 54c3e5c913 Merge branch 'dev' into refiner 2023-08-08 21:49:47 +03:00
AUTOMATIC1111 70c63c1208 pass samplers from UI by name, make it possible to use a sampler from infotext even if it's hidden in the dropdown 2023-08-08 21:28:34 +03:00
Danil Boldyrev bc7906e6d6 Ability to automatically expand a picture that does not fit in the screen 2023-08-08 21:28:16 +03:00
AUTOMATIC1111 ae1bde1aa1 put commonly used samplers on top, make DPM++ 2M Karras the default choice 2023-08-08 21:10:12 +03:00
AUTOMATIC1111 a8a256f9b5 REMOVE 2023-08-08 21:08:50 +03:00
AUTOMATIC1111 8285a149d8 add CFG denoiser implementation for DDIM, PLMS and UniPC (this is the commit when you can run both old and new implementations to compare them) 2023-08-08 21:04:44 +03:00
dhwz 2a72d76d6f fix typo 2023-08-08 19:08:37 +02:00
AUTOMATIC1111 2d8e4a6544 split sd_samplers_kdiffusion into two 2023-08-08 18:35:31 +03:00
AUTOMATIC1111 c721884cf5 Split history: mv temp modules/sd_samplers_kdiffusion.py 2023-08-08 18:32:18 +03:00
AUTOMATIC1111 ee2b8f2e1b Split history: merge 2023-08-08 18:32:18 +03:00
AUTOMATIC1111 a3e27019e4 Split history: mv modules/sd_samplers_kdiffusion.py temp 2023-08-08 18:32:17 +03:00
AUTOMATIC1111 7e88f57aaa Split history: mv modules/sd_samplers_kdiffusion.py modules/sd_samplers_cfg_denoiser.py 2023-08-08 18:32:17 +03:00
AUTOMATIC1111 902f8cf292 Merge pull request #12254 from AUTOMATIC1111/auro-autolaunch
Automatically open webui in browser when running "locally"
2023-08-08 06:44:49 +03:00
w-e-w f17c8c2eff Merge branch 'dev' into auro-autolaunch 2023-08-08 11:39:34 +09:00
w-e-w c75bda867b setting: Automatically open webui in browser on startup 2023-08-08 11:29:33 +09:00
Uminosachi 8c200c2156 Fix mismatch between shared.sd_model & shared.opts 2023-08-08 10:48:03 +09:00
Olivier Lacan b0f7f4a991 Pin fastapi to > 0.90.1 to fix crash
See https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/11642#issuecomment-1643298659

This resolves a crashing bug for me on Python 3.10 and it appears to do so
as well for others.
2023-08-07 12:46:02 -07:00
AUTOMATIC1111 01997f45ba fix extra_options_section misbehaving when there's just one extra_options element 2023-08-07 18:49:23 +03:00
AUTOMATIC1111 251140fc88 Merge pull request #12379 from diegocr/dev
Allow to open images in new browser tab by MMB.
2023-08-07 17:48:13 +03:00
Diego Casorran aea0fa9fd5 Allow to open images in new browser tab by MMB.
Signed-off-by: Diego Casorran <dcasorran@gmail.com>
2023-08-07 14:53:42 +02:00
AUTOMATIC1111 912356133a Merge pull request #12387 from huaizong/feature/whz/fix-api-only-mode-lora-nowork
Feature/whz/fix api only mode lora nowork
2023-08-07 13:36:26 +03:00
王怀宗 250a95b6fe fix: enable before_ui_callback in api-only mode (fixes #7984) 2023-08-07 18:08:07 +08:00
AUTOMATIC1111 fd67eafc65 Merge pull request #12385 from catboxanon/dev
Remove deprecated style method
2023-08-07 09:43:59 +03:00
AUTOMATIC1111 4c72377bbf Options in main UI update
- correctly read values from pasted infotext
- setting for column count
- infotext paste: do not add a field to override settings if some other component is already handling it
2023-08-07 09:42:13 +03:00
catboxanon 7d8f55ec7c Remove style method 2023-08-07 01:45:10 -04:00
AUTOMATIC1111 0ea20a0d52 rework #12230 to not have duplicate code 2023-08-07 08:38:18 +03:00
AUTOMATIC1111 5cf37ca89f Merge pull request #12230 from wfjsw/git-clone-autofix
Git autofix
2023-08-07 08:27:27 +03:00
AUTOMATIC1111 3453710d10 Merge pull request #12375 from catboxanon/k-diffusion-sigma
Clean up k-diffusion sigma params
2023-08-07 08:20:05 +03:00
AUTOMATIC1111 6e7828e1d2 apply unet overrides after switching model 2023-08-07 08:16:20 +03:00
AUTOMATIC1111 c96e4750d8 SD VAE rework 2
- the setting for preferring opts.sd_vae has been inverted and reworded
- resolve_vae function made easier to read and now returns an object rather than a tuple
- if the checkbox for overriding per-model preferences is checked, opts.sd_vae overrides checkpoint user metadata
- changing VAE in user metadata for the currently loaded model immediately applies the selection
2023-08-07 08:07:20 +03:00
catboxanon 7bcfb4654f Add info to k-diffusion sigma params 2023-08-06 12:41:21 -04:00
catboxanon 976963ab6d Clean up k-diffusion sigma params 2023-08-06 12:30:23 -04:00
AUTOMATIC1111 5a0db84b6c add infotext
add proper support for recalculating conds in k-diffusion samplers
remove support for compvis samplers
2023-08-06 17:53:33 +03:00
AUTOMATIC1111 5a38a9c0ee Merge pull request #12369 from diegocr/dev
add explicit content-type header for image/webp
2023-08-06 17:52:28 +03:00
AUTOMATIC1111 956e69bf3a lint! 2023-08-06 17:07:08 +03:00
AUTOMATIC1111 f1975b0213 initial refiner support 2023-08-06 17:01:07 +03:00
Diego Casorran e866c35462 add explicit content-type header for image/webp 2023-08-06 12:25:04 +00:00
AUTOMATIC1111 57e8a11d17 enable cond cache by default 2023-08-06 13:25:51 +03:00
AUTOMATIC1111 f9950da3e3 create dir for gradio themes cache if it's missing 2023-08-06 12:39:28 +03:00
AUTOMATIC1111 aa42c0ff8e repair broken live previews if using VAE with half 2023-08-06 07:41:24 +03:00
AUTOMATIC1111 06da34d47a Merge pull request #12358 from catboxanon/sigma-infotext
Add missing k-diffusion sigma params to infotext
2023-08-06 06:56:07 +03:00
AUTOMATIC1111 5cae08f2c3 fix rework saving incomplete images 2023-08-06 06:55:19 +03:00
catboxanon 8f31b139b8 Assume 0 = inf for s_tmax 2023-08-05 23:50:33 -04:00
catboxanon ce4be668fe Read kdiffusion sigma params from opts 2023-08-05 23:42:20 -04:00
AUTOMATIC1111 2e8b40004e Merge pull request #12355 from AUTOMATIC1111/gradio-theme-cache
Gradio theme cache
2023-08-06 06:37:48 +03:00
catboxanon 1e8482356c Merge branch 'dev' into sigma-infotext 2023-08-05 23:37:38 -04:00
w-e-w e9c591b101 Gradio theme cache 2023-08-06 12:33:20 +09:00
AUTOMATIC1111 ee96a6a588 do the same for s_tmax #12345 2023-08-06 06:32:41 +03:00
AUTOMATIC1111 92b99f3273 Merge pull request #12354 from catboxanon/fix/s-noise
Allow `s_noise` override to actually be used
2023-08-06 06:26:57 +03:00
AUTOMATIC1111 ee75416e3e Merge branch 'dev' into fix/s-noise 2023-08-06 06:25:35 +03:00
AUTOMATIC1111 d86d12e911 rework saving incomplete images 2023-08-06 06:21:36 +03:00
AUTOMATIC1111 2844d9597b Merge pull request #12338 from AUTOMATIC1111/dont-save-incomplete-images
don't save incomplete images
2023-08-06 06:05:47 +03:00
AUTOMATIC1111 dd1e2726f3 Merge pull request #12352 from bannsec/bannsec-patch-1
Update README.md
2023-08-06 06:05:28 +03:00
catboxanon f18a032190 Correct s_noise fix 2023-08-05 23:05:25 -04:00
AUTOMATIC1111 9cbde6c9fd Merge pull request #12356 from catboxanon/fix/s-churn-max
Increase `s_churn` max value
2023-08-06 05:56:05 +03:00
AUTOMATIC1111 f4e4992a4a Merge pull request #12357 from catboxanon/s-tmax
Add option for `s_tmax`
2023-08-06 05:55:20 +03:00
catboxanon 31506f0771 Add sigma params to infotext 2023-08-05 22:37:25 -04:00
catboxanon 85c2c138d2 Attempt to read s_tmax from arg first if option not found 2023-08-05 21:51:46 -04:00
catboxanon c11104fed5 Add s_tmax 2023-08-05 21:42:03 -04:00
catboxanon dfc01c68cd Increase s_churn max value 2023-08-05 21:23:58 -04:00
catboxanon 496cef956b Allow s_noise override to actually be used 2023-08-05 21:14:13 -04:00
bannsec b315c20756 Update README.md
Correct install instructions on linux and provide additional required apt packages
Fixes #12351
2023-08-05 14:07:35 -04:00
AUTOMATIC1111 c6278c15a8 add explanation for gradio themes 2023-08-05 17:11:37 +03:00
AUTOMATIC1111 0a0a6b2a4d Merge pull request #12346 from dhwz/dev
add new gradio themes
2023-08-05 17:08:38 +03:00
dhwz 1f7fc4d7a3 fix whitespace 2023-08-05 16:07:57 +02:00
dhwz 8ece321df3 add new gradio themes 2023-08-05 16:03:06 +02:00
w-e-w 1d7dcdb6c3 Option to not save incomplete images 2023-08-05 19:07:53 +09:00
AUTOMATIC1111 60183eebc3 add description to VAE setting page 2023-08-05 11:18:13 +03:00
AUTOMATIC1111 36ca80d004 put VAE into a separate settings page 2023-08-05 10:43:06 +03:00
AUTOMATIC1111 3f451f3042 do not add VAE Encoder/Decoder to infotext if it's the default 2023-08-05 10:36:26 +03:00
AUTOMATIC1111 c980dca234 Merge pull request #12331 from AUTOMATIC1111/need_Reload-UI_not_Restart
only need Reload UI not Restart
2023-08-05 09:37:18 +03:00
AUTOMATIC1111 f879cac1e7 Merge pull request #12311 from AUTOMATIC1111/efficient-vae-methods
Add TAESD(or more) options for all the VAE encode/decode operation
2023-08-05 09:24:26 +03:00
AUTOMATIC1111 ad510b2cd3 fix refresh button for styles 2023-08-05 09:17:36 +03:00
AUTOMATIC1111 c74c708ed8 add checkbox to show/hide dirs for extra networks 2023-08-05 09:15:18 +03:00
AUTOMATIC1111 e053e21af6 put localStorage stuff into its own file 2023-08-05 08:48:03 +03:00
w-e-w 7a64601428 need Reload UI not Restart 2023-08-05 14:21:28 +09:00
Kohaku-Blueleaf b85ec2b9b6 Fix some merge mistakes 2023-08-05 13:14:00 +08:00
Kohaku-Blueleaf d56a9cfe6a Merge branch 'dev' into efficient-vae-methods 2023-08-05 13:12:37 +08:00
AUTOMATIC1111 a32f270a47 Merge pull request #11808 from AUTOMATIC1111/extra-networks-always-visible
Always show extra networks tabs in the UI
2023-08-05 08:07:26 +03:00
AUTOMATIC1111 8197f24dbc remove the extra networks button 2023-08-05 08:07:13 +03:00
AUTOMATIC1111 ef1698fd6d Merge branch 'dev' into extra-networks-always-visible 2023-08-05 08:01:38 +03:00
Splendide Imaginarius 56888644a6 Reduce mask blur kernel size to 2.5 sigmas
This more closely matches the old behavior of PIL's Gaussian blur, and
fixes breakage when tiling.

See https://github.com/Coyote-A/ultimate-upscale-for-automatic1111/issues/111#issuecomment-1663504109

Thanks to Алексей Трофимов and eunnone for reporting the issue.
2023-08-05 04:54:23 +00:00
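Roughly how a 2.5-sigma kernel width can be derived; the exact rounding in the webui may differ from this sketch:

```python
import math

def kernel_size_for(sigma: float) -> int:
    # an odd kernel width covering +/- 2.5 standard deviations
    radius = math.ceil(2.5 * sigma)
    return 2 * radius + 1

print(kernel_size_for(4))  # 21
```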
AUTOMATIC1111 c613416af3 Merge pull request #12227 from AUTOMATIC1111/multiple_loaded_models
option to keep multiple models in memory
2023-08-05 07:52:50 +03:00
AUTOMATIC1111 22ecb78b51 Merge branch 'dev' into multiple_loaded_models 2023-08-05 07:52:29 +03:00
Kohaku-Blueleaf a6b245e46f fix 2023-08-05 12:49:35 +08:00
AUTOMATIC1111 0ae2767ae6 Merge pull request #12181 from AUTOMATIC1111/hires_checkpoint
Hires fix change checkpoint
2023-08-05 07:47:34 +03:00
AUTOMATIC1111 e64263653a Merge pull request #12327 from catboxanon/fix/filename-invalid-chars
Add tab and carriage return to invalid filename chars
2023-08-05 07:47:07 +03:00
AUTOMATIC1111 d2b842ce07 move img2img settings to their own section 2023-08-05 07:46:22 +03:00
Kohaku-Blueleaf d8371d0b3c update info 2023-08-05 12:37:46 +08:00
AUTOMATIC1111 e7140a36c0 change default color to white 2023-08-05 07:36:25 +03:00
Kohaku-Blueleaf aa744cadc8 add infotext 2023-08-05 12:35:40 +08:00
AUTOMATIC1111 63cac3c3cc Merge pull request #12326 from AUTOMATIC1111/configurable-masks-color-and-default-brush-color-
configurable masks color and default brush color
2023-08-05 07:34:22 +03:00
catboxanon bcff763b6e Add tab and carriage return to invalid filename chars 2023-08-04 22:59:47 -04:00
Kohaku-Blueleaf 9ac2989edd Merge branch 'dev' into efficient-vae-methods 2023-08-05 10:43:17 +08:00
w-e-w 1d60a609a9 configurable masks color and default brush color 2023-08-05 09:34:26 +09:00
AUTOMATIC1111 4560176640 added VAE selection to checkpoint user metadata 2023-08-04 22:05:50 +03:00
AUTOMATIC1111 31a9966b9d Merge pull request #12319 from catboxanon/fix/alternating-words-empty
Prompt parser: Account for empty field in alternating words syntax
2023-08-04 20:35:25 +03:00
AUTOMATIC1111 c57cb6e89c Merge pull request #12318 from catboxanon/sysinfo-new-page
Open raw sysinfo link in new page
2023-08-04 20:31:10 +03:00
catboxanon b6596cdb19 Prompt parser: account for empty field in alternating words syntax 2023-08-04 13:26:37 -04:00
catboxanon 9213d5cb3b Open raw sysinfo link in new page 2023-08-04 12:26:37 -04:00
AUTOMATIC1111 682ff8936d glorious, glorious wonderful clear milky white butter smooth color for inpainting
you are the best, gradio
how I yearned for this day
i always believed in you
i knew you had it in you
this day marks a new beginning
thank you, everyone
thank you
2023-08-04 18:51:25 +03:00
AUTOMATIC1111 f08a69e629 Merge pull request #12310 from catboxanon/fix/gradio-3-39-0-textbox-overflow
Fix Gradio 3.39.0 textbox overflow
2023-08-04 15:55:25 +03:00
AUTOMATIC1111 fadbab3781 Curse you, gradio!!! fixes broken refresh button #12309 2023-08-04 14:56:39 +03:00
catboxanon 3ca3c7f1c6 Add styling for script components 2023-08-04 07:20:32 -04:00
catboxanon daee41e0d6 Fix Gradio 3.39.0 textbox overflow 2023-08-04 06:45:12 -04:00
Kohaku-Blueleaf 21000f13a1 replace get_first_stage_encoding 2023-08-04 18:23:14 +08:00
AUTOMATIC1111 a0e74c4db4 Merge pull request #12308 from catboxanon/fix/gradio-3-39-0-inpaint-mask
Fix inpaint mask for Gradio 3.39.0
2023-08-04 13:16:50 +03:00
Kohaku-Blueleaf 073342c887 remove unneeded scale 2023-08-04 17:55:52 +08:00
Kohaku-Blueleaf 6346d8eeaa Revert "change all encode"
This reverts commit 094c416a80.
2023-08-04 17:53:30 +08:00
Kohaku-Blueleaf 094c416a80 change all encode 2023-08-04 17:53:16 +08:00
catboxanon 99f5f8e76b Fix string quotes 2023-08-04 05:47:25 -04:00
catboxanon cd4e053e5e Simplify img2img mask conversion, fix threshold 2023-08-04 05:43:53 -04:00
catboxanon 2dc2bc4ab5 Fix string quotes 2023-08-04 05:40:13 -04:00
catboxanon e219211ff6 Remove unused import in img2img 2023-08-04 05:35:47 -04:00
catboxanon df9fd1d3ae Fix inpaint mask for Gradio 3.39.0 2023-08-04 05:31:38 -04:00
AUTOMATIC1111 2e613a6ffc Merge pull request #12304 from catboxanon/fix/extras-infotext-paste
Correctly toggle extras checkbox for infotext paste
2023-08-04 12:04:11 +03:00
catboxanon f5994e84a2 Cleanup extras checkbox infotext paste check 2023-08-04 04:57:01 -04:00
AUTOMATIC1111 c93857922a Merge pull request #12201 from AnyISalIn/dev
fix: sdxl model invalid configuration after the hijack
2023-08-04 11:53:19 +03:00
AUTOMATIC1111 6391128b41 Merge pull request #12306 from catboxanon/fix/hires-infotext-paste
Only enable hires fix if hires scale or upscaler found in params for infotext paste
2023-08-04 11:52:17 +03:00
catboxanon 7c5480eb96 Cleanup hr infotext paste check mk2 2023-08-04 04:42:35 -04:00
catboxanon 67312653d7 Cleanup hr infotext paste check 2023-08-04 04:40:56 -04:00
AUTOMATIC1111 e81b431701 Merge pull request #12307 from daxijiu/dev
fix some content being ignored by localization
2023-08-04 11:33:34 +03:00
daxijiu 695300929a Merge pull request #1 from daxijiu/fix-some-content-are-ignore-by-localization
fix some content being ignored by localization
2023-08-04 16:12:41 +08:00
daxijiu 82b415c9c1 fix some content being ignored by localization
the settings "Face restoration model" and "Select which Real-ESRGAN models", and "upscaler 1 & 2" in extras, are ignored by localization
2023-08-04 16:03:49 +08:00
catboxanon d89a915b74 Only enable hr fix if hr scale or upscale in infotext on paste 2023-08-04 04:03:37 -04:00
catboxanon ac8dfd9386 Toggle extras checkbox for infotext paste 2023-08-04 03:52:22 -04:00
Kohaku-Blueleaf 1f6bfdea80 move the modified decode into sampler_common 2023-08-04 14:38:52 +08:00
Kohaku-Blueleaf 70e66e81e5 Merge branch 'dev' into efficient-vae-methods 2023-08-04 14:38:16 +08:00
AUTOMATIC1111 f0c1063a70 resolve some of circular import issues for kohaku 2023-08-04 09:13:46 +03:00
AUTOMATIC1111 09165916fa Merge pull request #12297 from AUTOMATIC1111/sort-VAE
sort VAE
2023-08-04 08:53:47 +03:00
Kohaku-Blueleaf c134a48016 Fix code style 2023-08-04 13:40:20 +08:00
Kohaku-Blueleaf 75336dfc84 add TAESD for i2i and t2i 2023-08-04 13:38:52 +08:00
AUTOMATIC1111 3f9e09a615 Merge pull request #11831 from wzgrx/dev
Dev: the requirements.txt versions need to be updated; I have tested the latest versions and SD can be used normally
2023-08-04 08:12:33 +03:00
AUTOMATIC1111 01486f6896 Merge pull request #12300 from catboxanon/dev
Add exponential scheduler variant to sampler selection for DPM-Solver++(2M) SDE sampler
2023-08-04 08:11:13 +03:00
AUTOMATIC1111 56c3f94ba3 Merge branch 'dev' into dev 2023-08-04 08:05:21 +03:00
AUTOMATIC1111 073c0ebba3 add gradio version warning 2023-08-04 08:04:23 +03:00
AUTOMATIC1111 362789a379 gradio 3.39 2023-08-04 08:04:23 +03:00
w-e-w 7f1d087cba sort VAE 2023-08-04 14:01:22 +09:00
catboxanon 3bd2c68eb4 Add exponential scheduler for DPM-Solver++(2M) SDE
Better quality results than Karras.
Related discussion: https://gist.github.com/crowsonkb/3ed16fba35c73ece7cf4b9a2095f2b78
2023-08-04 00:51:49 -04:00
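For illustration, an exponential schedule spaces noise levels evenly in log-space; this sketch mirrors the shape of k-diffusion's exponential schedule but is not a verbatim copy:

```python
import math

import torch

def sigmas_exponential(n: int, sigma_min: float, sigma_max: float) -> torch.Tensor:
    # sigmas descend from sigma_max to sigma_min on a log grid,
    # with a final zero appended for the last denoising step
    sigmas = torch.linspace(math.log(sigma_max), math.log(sigma_min), n).exp()
    return torch.cat([sigmas, sigmas.new_zeros([1])])
```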
AUTOMATIC1111 71efc5bda8 Merge pull request #12298 from catboxanon/xyz-sampler
XYZ: Support hires sampler
2023-08-04 07:47:35 +03:00
w-e-w f4d9297127 use samplers_for_img2img for Hires sampler 2023-08-04 13:27:25 +09:00
AUTOMATIC1111 220e298417 Merge pull request #12294 from AUTOMATIC1111/cmd_arg-disable-extensions
add cmd_arg --disable-extensions all extra
2023-08-04 07:26:34 +03:00
catboxanon f7813fad1c XYZ: Use default label format for hires sampler
If both sampler and hires sampler are used this makes the distinction more clear.
2023-08-04 00:19:30 -04:00
catboxanon 8b37734244 XYZ: Support hires sampler, cleanup 2023-08-04 00:10:14 -04:00
w-e-w bbfff771d7 --disable-all-extensions --disable-extra-extensions 2023-08-04 12:44:52 +09:00
AnyISalIn 24f21583cd fix: prevent cache model.state_dict() after model hijack
Signed-off-by: AnyISalIn <anyisalin@gmail.com>
2023-08-04 11:43:27 +08:00
AUTOMATIC1111 09c1be9674 put some of the shared functionality into toprow
write a comment for the toprow
2023-08-03 23:31:14 +03:00
AUTOMATIC1111 af528552d6 fix linter issues 2023-08-03 23:31:14 +03:00
AUTOMATIC1111 20549a50cb add style editor dialog
rework toprow for img2img and txt2img to use a class with fields
fix the console error when editing checkpoint user metadata
2023-08-03 23:31:13 +03:00
AUTOMATIC1111 8e840e1519 Merge pull request #12269 from AUTOMATIC1111/TI-Hash-fix
fix missing TI hash
2023-08-03 12:56:19 +03:00
w-e-w f56a309432 fix missing TI hash 2023-08-03 18:46:49 +09:00
AUTOMATIC1111 0904df84e2 minor performance improvements for philox 2023-08-03 07:53:03 +03:00
AUTOMATIC1111 fca42949a3 rework torchsde._brownian.brownian_interval replacement to use device.randn_local and respect the NV setting. 2023-08-03 07:18:55 +03:00
Splendide Imaginarius a1825ee741 Make StableDiffusionProcessingImg2Img.mask_blur a property
Fixes breakage when mask_blur is set after construction.

See https://github.com/Coyote-A/ultimate-upscale-for-automatic1111/issues/111#issuecomment-1652091424

Thanks to Алексей Трофимов and eunnone for reporting the issue.
2023-08-03 02:07:00 +00:00
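A hedged sketch of the property pattern this commit describes; the class below is a stand-in, not the real StableDiffusionProcessingImg2Img:

```python
class Img2ImgParamsSketch:
    def __init__(self, mask_blur: int = 4):
        self.mask_blur_x = mask_blur
        self.mask_blur_y = mask_blur

    @property
    def mask_blur(self):
        if self.mask_blur_x == self.mask_blur_y:
            return self.mask_blur_x
        return None

    @mask_blur.setter
    def mask_blur(self, value: int):
        # assigning after construction now updates both axes instead of
        # leaving the real blur parameters stale
        self.mask_blur_x = value
        self.mask_blur_y = value
```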
AUTOMATIC1111 84b6fcd02c add NV option for Random number generator source setting, which allows to generate same pictures on CPU/AMD/Mac as on NVidia videocards. 2023-08-03 00:00:23 +03:00
AUTOMATIC1111 ccb9233934 add yet another torch_gc to reclaim some of VRAM after the initial stage of img2img 2023-08-02 18:53:09 +03:00
AUTOMATIC1111 10ff071e33 update doggettx cross attention optimization to not use an unreasonable amount of memory in some edge cases -- suggestion by MorkTheOrk 2023-08-02 18:37:16 +03:00
AUTOMATIC1111 390bffa81b repair merge error 2023-08-01 17:13:15 +03:00
AUTOMATIC1111 0c9b1e7969 Merge branch 'dev' into multiple_loaded_models 2023-08-01 16:55:55 +03:00
AUTOMATIC1111 6a0d498c8e support tooltip kwarg for gradio elements 2023-08-01 12:50:23 +03:00
AUTOMATIC1111 401ba1b879 XYZ plot do not fail if an exception occurs 2023-08-01 09:22:53 +03:00
AUTOMATIC1111 07be13caa3 add metadata to checkpoint merger 2023-08-01 08:27:54 +03:00
AUTOMATIC1111 6d3a0c9506 move checkpoint merger UI to its own file 2023-08-01 07:43:43 +03:00
AUTOMATIC1111 0042954490 Split history: mv temp modules/ui.py 2023-08-01 07:15:16 +03:00
AUTOMATIC1111 8a4149accc Split history: merge 2023-08-01 07:15:16 +03:00
AUTOMATIC1111 b98fa1c397 Split history: mv modules/ui.py temp 2023-08-01 07:15:15 +03:00
AUTOMATIC1111 c6b826d796 Split history: mv modules/ui.py modules/ui_checkpoint_merger.py 2023-08-01 07:15:15 +03:00
AUTOMATIC1111 2860c3be3e add filename to to the table in user metadata editor 2023-08-01 07:10:42 +03:00
AUTOMATIC1111 4b43480fe8 show metadata for SD checkpoints in the extra networks UI 2023-08-01 07:08:11 +03:00
Jabasukuriputo Wang 8b036d8a82 fix 2023-08-01 11:26:59 +08:00
Jabasukuriputo Wang c46525b70b fix exception 2023-08-01 11:26:17 +08:00
Jabasukuriputo Wang 955542a654 also check on rev-parse 2023-08-01 11:24:54 +08:00
Jabasukuriputo Wang 2f1d5b6b04 attempt to fix workspace status when doing git clone 2023-08-01 11:20:59 +08:00
AUTOMATIC1111 151b8ed3a6 repair PLMS 2023-08-01 00:38:34 +03:00
AUTOMATIC1111 b235022c61 option to keep multiple models in memory 2023-08-01 00:24:48 +03:00
AUTOMATIC1111 c10633f93a fix memory leak when generation fails 2023-07-31 22:03:05 +03:00
AUTOMATIC1111 0d577aba26 Merge pull request #12207 from akx/local-storage-guard
Don't crash if out of local storage quota
2023-07-31 14:00:49 +03:00
AUTOMATIC1111 c09bc2c608 fix "clamp_scalar_cpu" not implemented for 'Half' 2023-07-31 13:20:26 +03:00
Aarni Koskela fb87a05fe8 Don't crash if out of local storage quota
Fixes #12206 (works around it)
2023-07-31 11:23:26 +02:00
AUTOMATIC1111 4d9b096663 additional memory improvements when switching between models of different types 2023-07-31 10:43:31 +03:00
AUTOMATIC1111 29d7e31d89 repair AttributeError: 'NoneType' object has no attribute 'conditioning_key' 2023-07-31 10:43:26 +03:00
AUTOMATIC1111 dca121e903 set the field to None instead 2023-07-31 09:13:07 +03:00
AUTOMATIC1111 0af4127fd1 delete the field that is preventing the model from being unloaded and is causing increased RAM usage 2023-07-30 19:36:24 +03:00
AUTOMATIC1111 a1eb49627a Merge pull request #12177 from rubberbaron/prompt-parse-whitespace-around-numbers
add support for whitespace after the number in constructions like [fo…
2023-07-30 17:23:19 +03:00
AUTOMATIC1111 02038036ff make it so that VAE NaNs autodetection also works during first pass of hires fix 2023-07-30 16:16:31 +03:00
AUTOMATIC1111 f60d9fbe29 Merge pull request #12178 from rubberbaron/xyz-grid-remove-dir
xyz_grid: in the axis labels, remove pathnames from model filenames
2023-07-30 15:32:34 +03:00
AUTOMATIC1111 cc53db6652 this time for sure 2023-07-30 15:30:33 +03:00
AUTOMATIC1111 a64fbe8928 make it possible to use checkpoints of different types (SD1, SDXL) in first and second pass of hires fix 2023-07-30 15:12:09 +03:00
AUTOMATIC1111 eec540b227 repair non-latent upscaling broken for SDXL 2023-07-30 15:04:12 +03:00
AUTOMATIC1111 77761e7bad linter 2023-07-30 14:10:33 +03:00
AUTOMATIC1111 40cd59207b make it work with SDXL 2023-07-30 14:10:26 +03:00
AUTOMATIC1111 3bca90b249 hires fix checkpoint selection 2023-07-30 13:48:27 +03:00
Robert Barron 085c903229 xyz_grid: in the legend, remove pathnames from model filenames 2023-07-30 03:35:32 -07:00
Robert Barron 8a40e30d08 add support for whitespace after the number in constructions like [foo:bar: 0.5 ] and (foo : 0.5 ) 2023-07-30 01:46:25 -07:00
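A toy pattern showing the whitespace tolerance the commit above adds; this regex is illustrative, not the webui's real prompt grammar:

```python
import re

re_weight = re.compile(r"\(\s*(.+?)\s*:\s*([+-]?[0-9.]+)\s*\)")

match = re_weight.match("(foo : 0.5 )")
assert match and match.group(1) == "foo" and match.group(2) == "0.5"
```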
AUTOMATIC1111 63a8861c19 Merge pull request #12164 from AUTOMATIC1111/rework-img2img-batch-image-save
Rework img2img batch image save
2023-07-30 11:45:33 +03:00
w-e-w fb44838176 strip output_dir 2023-07-30 14:47:24 +09:00
w-e-w 53ccdefc01 don't override default if output_dir is blank 2023-07-30 00:34:04 +09:00
w-e-w 9857537053 lint 2023-07-30 00:06:25 +09:00
w-e-w b95a41ad72 rework img2img batch image save 2023-07-30 00:02:31 +09:00
AUTOMATIC1111 6f0abbb71a textual inversion support for SDXL 2023-07-29 15:15:06 +03:00
AUTOMATIC1111 4ca9f70b59 Merge pull request #11950 from AnyISalIn/dev
feat: add refresh vae api
2023-07-29 09:37:02 +03:00
AUTOMATIC1111 e18fc29bbf put the entry for the sampler in the readme section in order of addition 2023-07-29 08:40:43 +03:00
AUTOMATIC1111 79d6e9cd32 some stylistic changes for the sampler code 2023-07-29 08:38:00 +03:00
AUTOMATIC1111 aefe1325df split the new sampler into a different file 2023-07-29 08:11:59 +03:00
AUTOMATIC1111 11dc92dc0a Split history: mv temp modules/sd_samplers_kdiffusion.py 2023-07-29 08:06:04 +03:00
AUTOMATIC1111 bdeb44aeb2 Split history: merge 2023-07-29 08:06:03 +03:00
AUTOMATIC1111 e1323fc1b7 Split history: mv modules/sd_samplers_kdiffusion.py temp 2023-07-29 08:06:03 +03:00
AUTOMATIC1111 3ac950248d Split history: mv modules/sd_samplers_kdiffusion.py modules/sd_samplers_extra.py 2023-07-29 08:06:03 +03:00
AUTOMATIC1111 bef40851af Merge pull request #11850 from lambertae/restart_sampling
Restart sampling
2023-07-29 08:03:32 +03:00
AUTOMATIC1111 9a52a30d2f Merge pull request #12107 from JetVarimax/patch-2
Fix typo
2023-07-29 07:49:22 +03:00
AUTOMATIC1111 fc163218c4 Merge pull request #12120 from DiabolicDiabetic/patch-2
IMG2IMG TIF batch fix img2img.py
2023-07-29 07:48:44 +03:00
AUTOMATIC1111 19ac0adf03 Merge pull request #12124 from Xstephen/master
Add total_tqdm clear in the end of txt2img & img2img api.
2023-07-29 07:44:00 +03:00
AUTOMATIC1111 ac81c1dd1f Merge pull request #11958 from AUTOMATIC1111/conserve-ram
Use less RAM when creating models
2023-07-29 07:43:04 +03:00
caoxipeng 6cc5a886ae Add total_tqdm clear in the end of txt2img & img2img api. 2023-07-28 11:40:10 +08:00
DiabolicDiabetic 9cbf3461f7 IMG2IMG TIF batch fix img2img.py
IMG2IMG batch tab wouldn't process tif images
2023-07-27 20:15:50 -05:00
AUTOMATIC1111 25004d4eee Merge branch 'master' into dev 2023-07-27 09:03:44 +03:00
AUTOMATIC1111 56236dfd3f Merge branch 'master' into release_candidate 2023-07-27 09:03:26 +03:00
AUTOMATIC1111 91a131aa6c update lora extension to work with python 3.8 2023-07-27 09:00:47 +03:00
AUTOMATIC1111 0cb9711a15 Merge pull request #12020 from Littleor/dev
Fix the error in rendering the name and description in the extra network UI.
2023-07-26 15:17:37 +03:00
AUTOMATIC1111 89e6dfff71 repair SDXL 2023-07-26 15:07:56 +03:00
AUTOMATIC1111 8284ebd94c fix autograd which i broke for no good reason when implementing SDXL 2023-07-26 13:03:52 +03:00
Littleor 187323a606 fix: extra network ui description allow HTML tags 2023-07-26 17:23:57 +08:00
AUTOMATIC1111 deed8439d5 Merge pull request #12032 from AUTOMATIC1111/fix-api-get-options-sd_model_checkpoint
api /sdapi/v1/options use "Any" type when default type is None
2023-07-26 11:52:42 +03:00
w-e-w 6305632493 use "Any" type when type is None 2023-07-26 17:20:04 +09:00
AUTOMATIC1111 246d1f1f70 delete scale checker script due to user demand 2023-07-26 09:19:46 +03:00
AUTOMATIC1111 ca6f90dc6d Merge pull request #12023 from AUTOMATIC1111/create_infotext_fix
Create infotext fix
2023-07-26 08:07:07 +03:00
AUTOMATIC1111 835a7dbf0e simplify PostprocessBatchListArgs 2023-07-26 07:49:57 +03:00
AUTOMATIC1111 225eb1b1a0 Merge pull request #12024 from AUTOMATIC1111/fix-check-for-updates-status-always-unknown-
fix check for updates status always "unknown"
2023-07-26 07:45:48 +03:00
w-e-w b8a903efbe fix check for updates status always "unknown" 2023-07-26 13:43:38 +09:00
AUTOMATIC1111 7c22bbd3ad attempt 2 2023-07-26 07:04:07 +03:00
AUTOMATIC1111 13e371af73 doc update 2023-07-26 06:37:13 +03:00
AUTOMATIC1111 ae36e0899f alternative solution for infotext issue 2023-07-26 06:36:06 +03:00
Littleor b73c405013 fix: error rendering name and description in extra network ui 2023-07-26 11:02:34 +08:00
lambertae 8de6d3ff77 fix progress bar & torchHijack 2023-07-25 22:35:43 -04:00
JetVarimax fd43558586 Fix typo 2023-07-25 20:31:15 +01:00
AUTOMATIC1111 d0bf509fa1 fix for #11963 2023-07-25 16:18:10 +03:00
AUTOMATIC1111 d6ec08ba89 Merge pull request #11963 from catboxanon/fix/lora-te
Fix parsing text encoder blocks in some LoRAs
2023-07-25 16:17:41 +03:00
AUTOMATIC1111 65bf3ba260 Merge pull request #11979 from AUTOMATIC1111/catch-exception-for-non-git-extensions
catch exception for non git extensions
2023-07-25 15:23:35 +03:00
AUTOMATIC1111 bed598ce7f Merge pull request #11984 from AUTOMATIC1111/api-only-subpath-(root_path)
api only subpath (rootpath)
2023-07-25 15:19:10 +03:00
w-e-w b1a16a298c api only subpath (rootpath)
Co-Authored-By: 陈杰 <pythias@gmail.com>
2023-07-25 20:51:27 +09:00
w-e-w fee593a07f catch exception for non git extensions 2023-07-25 20:01:10 +09:00
AUTOMATIC1111 fc8e23dec5 Merge branch 'master' into dev 2023-07-25 08:20:42 +03:00
catboxanon a68f469030 Fix to parse TE in some LoRAs 2023-07-24 17:54:59 -04:00
AUTOMATIC1111 f7c0a963f1 Merge pull request #11957 from ljleb/pp-batch-list
Add postprocess_batch_list script callback
2023-07-24 23:18:16 +03:00
ljleb 5b06607476 simplify 2023-07-24 15:43:06 -04:00
ljleb 6b68b59032 use local vars 2023-07-24 15:38:52 -04:00
AUTOMATIC1111 0a89cd1a58 Use less RAM when creating models 2023-07-24 22:08:08 +03:00
ljleb ca45ff1ae6 add postprocess_batch_list callback 2023-07-24 13:52:24 -04:00
AnyISalIn 1cbfafafd2 feat: add refresh vae api
Signed-off-by: AnyISalIn <anyisalin@gmail.com>
2023-07-24 19:45:08 +08:00
AUTOMATIC1111 f451994053 Merge branch 'release_candidate' into dev 2023-07-24 11:58:15 +03:00
Jabasukuriputo Wang f2a4073aea Merge branch 'dev' into ext-inst-pbar 2023-07-23 23:32:13 +08:00
AUTOMATIC1111 ec83db8978 restyle Startup profile for black users 2023-07-22 17:15:38 +03:00
AUTOMATIC1111 a8d4213317 add --log-startup option to print detailed startup progress 2023-07-22 17:15:38 +03:00
Jabasukuriputo Wang 9421c11346 Merge branch 'dev' into ext-inst-pbar 2023-07-22 21:58:59 +08:00
AUTOMATIC1111 0615b3c532 Merge pull request #11926 from wfjsw/fix-env-get-1
fix 11291#issuecomment-1646547908
2023-07-22 16:37:03 +03:00
AUTOMATIC1111 2d635c0192 Merge pull request #11927 from ljleb/fix-AND
Fix composable diffusion weight parsing
2023-07-22 16:36:40 +03:00
ljleb 88a3e1d306 fix AND linebreaks 2023-07-22 07:40:30 -04:00
ljleb 0674fabd0d fix AND linebreaks 2023-07-22 07:10:20 -04:00
AUTOMATIC1111 c76a30af41 more info for startup timings 2023-07-22 13:49:29 +03:00
Jabasukuriputo Wang 3c26734d60 nop 2023-07-22 18:33:59 +08:00
Jabasukuriputo Wang 2a7e34fe79 fix https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/11921#issuecomment-1646547908 2023-07-22 18:09:00 +08:00
Jabasukuriputo Wang b2f0040da7 fix tqdm not found on new instance 2023-07-22 17:51:15 +08:00
Jabasukuriputo Wang 7afe7375e1 display a progressbar for extension installer 2023-07-22 17:46:50 +08:00
AUTOMATIC1111 90eb731ff1 start timer early anyway 2023-07-22 12:21:05 +03:00
AUTOMATIC1111 491d42bb1c Merge pull request #11856 from wfjsw/move-start-timer
Only start timer when actually starting
2023-07-22 12:19:36 +03:00
AUTOMATIC1111 45c0f58dc6 Merge pull request #11923 from AnyISalIn/dev
[bug] If txt2img/img2img raises an exception, finally call state.end()
2023-07-22 07:03:21 +03:00
AnyISalIn 1fe2dcaa2a [bug] If txt2img/img2img raises an exception, finally call state.end()
Signed-off-by: AnyISalIn <anyisalin@gmail.com>
2023-07-22 10:00:27 +08:00
AUTOMATIC1111 075934a944 Merge pull request #11920 from wfjsw/typo-fix-1
typo fix
2023-07-21 18:01:20 +03:00
AUTOMATIC1111 ed4d7912c7 Merge pull request #11921 from wfjsw/prepend-pythonpath
prepend the pythonpath instead of overriding it
2023-07-21 18:00:03 +03:00
Jabasukuriputo Wang 16eddc622e prepend the pythonpath instead of overriding it 2023-07-21 22:00:03 +08:00
w-e-w bc91f15ed3 typo fix 2023-07-21 22:56:41 +09:00
Jabasukuriputo Wang 118529a6dc typo fix 2023-07-21 21:49:33 +08:00
Jabasukuriputo Wang 33694baea1 avoid importing timer when it is not strictly needed 2023-07-21 17:15:44 +08:00
lambertae f873890298 new restart scheme 2023-07-20 21:27:43 -04:00
lambertae 128d59c9cc fix ruff 2023-07-20 20:36:40 -04:00
lambertae 2f57a559ac allow choice of restart_list & use karras from kdiffusion 2023-07-20 20:34:41 -04:00
AUTOMATIC1111 2f98f7c924 Merge branch 'release_candidate' into dev 2023-07-20 19:16:55 +03:00
lambertae 6233268964 add credit 2023-07-20 02:27:28 -04:00
lambertae ddbf4a73f5 restart-sampler with correct steps 2023-07-20 02:24:18 -04:00
AUTOMATIC1111 4bf64976c1 Merge branch 'release_candidate' into dev 2023-07-19 20:23:48 +03:00
AUTOMATIC1111 5677296d1b Merge pull request #11878 from Bourne-M/patch-1
【bug】reload altclip model error
2023-07-19 16:26:12 +03:00
yfzhou cb75734896 【bug】reload altclip model error
When using BertSeriesModelWithTransformation as the cond_stage_model, the undo_hijack should be performed using the FrozenXLMREmbedderWithCustomWords type; otherwise, it will result in a failed model reload.
2023-07-19 17:53:28 +08:00
Jabasukuriputo Wang fc3bdf8c11 Merge branch 'dev' into move-start-timer 2023-07-19 10:33:31 +08:00
AUTOMATIC1111 0fae47e974 Merge pull request #11867 from AUTOMATIC1111/add-dropdown-extra_sort_order-lable
add dropdown extra_sort_order label
2023-07-18 23:23:26 +03:00
w-e-w c278e60131 add dropdown extra_sort_order label 2023-07-19 04:58:30 +09:00
wfjsw 3c570421d3 move start timer 2023-07-18 19:00:16 +08:00
lambertae 7bb0fbed13 code styling 2023-07-18 01:02:04 -04:00
lambertae 37e048a7e2 fix floating error 2023-07-18 00:55:02 -04:00
lambertae 15a94d6cf7 remove useless header 2023-07-18 00:39:26 -04:00
lambertae 40a18d38a8 add restart sampler 2023-07-18 00:32:01 -04:00
wzgrx 952effa8b1 Update requirements_versions.txt 2023-07-17 18:50:29 +08:00
wzgrx 0dcf6436a8 Update requirements.txt 2023-07-17 18:49:53 +08:00
AUTOMATIC1111 95c5c4d64e fix tabs height on small screens 2023-07-17 11:18:08 +03:00
w-e-w 543ea5730b fix extra search button 2023-07-17 16:35:41 +09:00
AUTOMATIC1111 643836007f more tweaking for cards section height 2023-07-16 14:46:05 +03:00
AUTOMATIC1111 24bad5dc7b change extra networks list to have constant height and scrolling 2023-07-16 13:59:15 +03:00
AUTOMATIC1111 57d61de25c fix unneeded reload from disk 2023-07-16 11:52:29 +03:00
AUTOMATIC1111 5ef7590324 always show extra networks tabs in the UI 2023-07-16 11:38:59 +03:00
142 changed files with 7615 additions and 3916 deletions
+7
@@ -74,6 +74,7 @@ module.exports = {
         create_submit_args: "readonly",
         restart_reload: "readonly",
         updateInput: "readonly",
+        onEdit: "readonly",
         //extraNetworks.js
         requestGet: "readonly",
         popup: "readonly",
@@ -87,5 +88,11 @@ module.exports = {
         modalNextImage: "readonly",
         // token-counters.js
         setupTokenCounters: "readonly",
+        // localStorage.js
+        localSet: "readonly",
+        localGet: "readonly",
+        localRemove: "readonly",
+        // resizeHandle.js
+        setupResizeHandle: "writable"
     }
 };
+7 -71
@@ -26,7 +26,7 @@ body:
     id: steps
     attributes:
       label: Steps to reproduce the problem
-      description: Please provide us with precise step by step information on how to reproduce the bug
+      description: Please provide us with precise step by step instructions on how to reproduce the bug
       value: |
         1. Go to ....
         2. Press ....
@@ -37,64 +37,14 @@ body:
     id: what-should
     attributes:
       label: What should have happened?
-      description: Tell what you think the normal behavior should be
+      description: Tell us what you think the normal behavior should be
     validations:
       required: true
-  - type: input
-    id: commit
+  - type: textarea
+    id: sysinfo
     attributes:
-      label: Version or Commit where the problem happens
-      description: "Which webui version or commit are you running ? (Do not write *Latest Version/repo/commit*, as this means nothing and will have changed by the time we read your issue. Rather, copy the **Version: v1.2.3** link at the bottom of the UI, or from the cmd/terminal if you can't launch it.)"
-    validations:
-      required: true
-  - type: dropdown
-    id: py-version
-    attributes:
-      label: What Python version are you running on ?
-      multiple: false
-      options:
-        - Python 3.10.x
-        - Python 3.11.x (above, no supported yet)
-        - Python 3.9.x (below, no recommended)
-  - type: dropdown
-    id: platforms
-    attributes:
-      label: What platforms do you use to access the UI ?
-      multiple: true
-      options:
-        - Windows
-        - Linux
-        - MacOS
-        - iOS
-        - Android
-        - Other/Cloud
-  - type: dropdown
-    id: device
-    attributes:
-      label: What device are you running WebUI on?
-      multiple: true
-      options:
-        - Nvidia GPUs (RTX 20 above)
-        - Nvidia GPUs (GTX 16 below)
-        - AMD GPUs (RX 6000 above)
-        - AMD GPUs (RX 5000 below)
-        - CPU
-        - Other GPUs
-  - type: dropdown
-    id: cross_attention_opt
-    attributes:
-      label: Cross attention optimization
-      description: What cross attention optimization are you using, Settings -> Optimizations -> Cross attention optimization
-      multiple: false
-      options:
-        - Automatic
-        - xformers
-        - sdp-no-mem
-        - sdp
-        - Doggettx
-        - V1
-        - InvokeAI
-        - "None "
+      label: Sysinfo
+      description: System info file, generated by WebUI. You can generate it in settings, on the Sysinfo page. Drag the file into the field to upload it. If you submit your report without including the sysinfo file, the report will be closed. If needed, review the report to make sure it includes no personal information you don't want to share. If you can't start WebUI, you can use --dump-sysinfo commandline argument to generate the file.
     validations:
       required: true
   - type: dropdown
@@ -108,21 +58,7 @@ body:
         - Brave
         - Apple Safari
         - Microsoft Edge
-  - type: textarea
-    id: cmdargs
-    attributes:
-      label: Command Line Arguments
-      description: Are you using any launching parameters/command line arguments (modified webui-user .bat/.sh) ? If yes, please write them below. Write "No" otherwise.
-      render: Shell
-    validations:
-      required: true
-  - type: textarea
-    id: extensions
-    attributes:
-      label: List of extensions
-      description: Are you using any extensions other than built-ins? If yes, provide a list, you can copy it at "Extensions" tab. Write "No" otherwise.
-    validations:
-      required: true
+        - Other
   - type: textarea
     id: logs
     attributes:
+155
@@ -1,3 +1,158 @@
## 1.6.0
### Features:
* refiner support [#12371](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12371)
* add NV option for Random number generator source setting, which allows to generate same pictures on CPU/AMD/Mac as on NVidia videocards
* add style editor dialog
* hires fix: add an option to use a different checkpoint for second pass ([#12181](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12181))
* option to keep multiple loaded models in memory ([#12227](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12227))
* new samplers: Restart, DPM++ 2M SDE Exponential, DPM++ 2M SDE Heun, DPM++ 2M SDE Heun Karras, DPM++ 2M SDE Heun Exponential, DPM++ 3M SDE, DPM++ 3M SDE Karras, DPM++ 3M SDE Exponential ([#12300](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12300), [#12519](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12519), [#12542](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12542))
* rework DDIM, PLMS, UniPC to use CFG denoiser same as in k-diffusion samplers:
* makes all of them work with img2img
* makes prompt composition possible (AND)
* makes them available for SDXL
* always show extra networks tabs in the UI ([#11808](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/11808))
* use less RAM when creating models ([#11958](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/11958), [#12599](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12599))
* textual inversion inference support for SDXL
* extra networks UI: show metadata for SD checkpoints
* checkpoint merger: add metadata support
* prompt editing and attention: add support for whitespace after the number ([ red : green : 0.5 ]) (seed breaking change) ([#12177](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12177))
* VAE: allow selecting own VAE for each checkpoint (in user metadata editor)
* VAE: add selected VAE to infotext
* options in main UI: add own separate setting for txt2img and img2img, correctly read values from pasted infotext, add setting for column count ([#12551](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12551))
* add resize handle to txt2img and img2img tabs, allowing you to change the amount of horizontal space given to generation parameters and resulting image gallery ([#12687](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12687), [#12723](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12723))
* change default behavior for batching cond/uncond -- now it's on by default, and is disabled by a UI setting (Optimizations -> Batch cond/uncond) - if you are on lowvram/medvram and are getting OOM exceptions, you will need to enable it
* show current position in queue and make it so that requests are processed in the order of arrival ([#12707](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12707))
* add `--medvram-sdxl` flag that only enables `--medvram` for SDXL models
* prompt editing timeline has separate range for first pass and hires-fix pass (seed breaking change) ([#12457](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12457))
### Minor:
* img2img batch: RAM savings, VRAM savings, .tif, .tiff in img2img batch ([#12120](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12120), [#12514](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12514), [#12515](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12515))
* postprocessing/extras: RAM savings ([#12479](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12479))
* XYZ: in the axis labels, remove pathnames from model filenames
* XYZ: support hires sampler ([#12298](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12298))
* XYZ: new option: use text inputs instead of dropdowns ([#12491](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12491))
* add gradio version warning
* sort list of VAE checkpoints ([#12297](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12297))
* use transparent white for mask in inpainting, along with an option to select the color ([#12326](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12326))
* move some settings to their own section: img2img, VAE
* add checkbox to show/hide dirs for extra networks
* add TAESD (or other) options for all VAE encode/decode operations ([#12311](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12311))
* gradio theme cache, new gradio themes, along with explanation that the user can input their own values ([#12346](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12346), [#12355](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12355))
* sampler fixes/tweaks: s_tmax, s_churn, s_noise, s_tmin ([#12354](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12354), [#12356](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12356), [#12357](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12357), [#12358](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12358), [#12375](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12375), [#12521](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12521))
* update README.md with correct instructions for Linux installation ([#12352](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12352))
* option to not save incomplete images, on by default ([#12338](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12338))
* enable cond cache by default
* git autofix for repos that are corrupted ([#12230](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12230))
* allow opening images in a new browser tab with the middle mouse button ([#12379](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12379))
* automatically open webui in browser when running "locally" ([#12254](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12254))
* put commonly used samplers on top, make DPM++ 2M Karras the default choice
* zoom and pan: option to auto-expand a wide image, improved integration ([#12413](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12413), [#12727](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12727))
* option to cache Lora networks in memory
* rework hires fix UI to use accordion
* face restoration and tiling moved to settings - use "Options in main UI" setting if you want them back
* change quicksettings items to have variable width
* Lora: add Norm module, add support for bias ([#12503](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12503))
* Lora: output warnings in the UI rather than failing for incompatible loras; switch to logging for error output in console
* support search and display of hashes for all extra network items ([#12510](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12510))
* add extra noise param for img2img operations ([#12564](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12564))
* support for Lora with bias ([#12584](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12584))
* make interrupt quicker ([#12634](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12634))
* configurable gallery height ([#12648](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12648))
* make results column sticky ([#12645](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12645))
* more hash filename patterns ([#12639](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12639))
* make image viewer actually fit the whole page ([#12635](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12635))
* make progress bar work independently from live preview display, which results in it being updated much more often
* forbid the Full live preview method for medvram and add a setting to undo the restriction
* make it possible to localize tooltips and placeholders
* add option to align with sgm repo's sampling implementation ([#12818](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12818))
* Restore faces and Tiling generation parameters have been moved to settings out of main UI
* if you want to put them back into main UI, use `Options in main UI` setting on the UI page.
### Extensions and API:
* gradio 3.41.2
* also bump versions for packages: transformers, GitPython, accelerate, scikit-image, timm, tomesd
* support `tooltip` kwarg for gradio elements: `gr.Textbox(label='hello', tooltip='world')`
* properly clear the total console progressbar when using txt2img and img2img from API
* add cmd_arg --disable-extra-extensions and --disable-all-extensions ([#12294](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12294))
* shared.py and webui.py split into many files
* add --loglevel commandline argument for logging
* add a custom UI element that combines accordion and checkbox
* avoid importing gradio in tests because it spams warnings
* put infotext label for setting into OptionInfo definition rather than in a separate list
* make `StableDiffusionProcessingImg2Img.mask_blur` a property, more in line with PIL `GaussianBlur` ([#12470](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12470))
* option to make scripts UI without gr.Group
* add a way for scripts to register a callback for before/after just a single component's creation
* use dataclass for StableDiffusionProcessing
* store patches for Lora in a specialized module instead of inside torch
* support http/https URLs in API ([#12663](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12663), [#12698](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12698)) (see the request sketch after this list)
* add extra noise callback ([#12616](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12616))
* dump current stack traces when exiting with SIGINT (see the sketch after this list)
* add type annotations for extra fields of shared.sd_model
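For the SIGINT item above, a minimal generic sketch of the stack-dump technique, using the standard `faulthandler` module -- an illustration of the idea, not the webui's actual handler:

```python
import faulthandler
import signal
import sys

def dump_and_exit(signum, frame):
    # Print the stack trace of every running thread to stderr, then exit.
    faulthandler.dump_traceback(file=sys.stderr, all_threads=True)
    sys.exit(1)

signal.signal(signal.SIGINT, dump_and_exit)
```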
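And for the http/https-URLs-in-API item, a hedged sketch of a request (endpoint and field names follow the existing webui img2img API; treat the exact payload shape as an assumption):

```python
import requests

payload = {
    "prompt": "a watercolor landscape",
    "init_images": ["https://example.com/input.png"],  # a URL where previously only base64 data was accepted
    "denoising_strength": 0.6,
}
# Assumes a local webui instance started with --api on the default port.
r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()
```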
### Bug Fixes:
* don't crash if out of local storage quota for javascript localStorage
* XYZ plot: do not fail if an exception occurs
* fix missing TI hash in infotext if generation uses both negative and positive TI ([#12269](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12269))
* localization fixes ([#12307](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12307))
* fix sdxl model invalid configuration after the hijack
* correctly toggle extras checkbox for infotext paste ([#12304](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12304))
* open raw sysinfo link in new page ([#12318](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12318))
* prompt parser: Account for empty field in alternating words syntax ([#12319](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12319))
* add tab and carriage return to invalid filename chars ([#12327](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12327))
* fix api only Lora not working ([#12387](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12387))
* fix options in main UI misbehaving when there's just one element
* make it possible to use a sampler from infotext even if it's hidden in the dropdown
* fix styles missing from the prompt in infotext when making a grid of a batch of multiple images
* prevent bogus progress output in console when calculating hires fix dimensions
* fix --use-textbox-seed
* fix broken `Lora/Networks: use old method` option ([#12466](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12466))
* properly return `None` for VAE hash when using `--no-hashing` ([#12463](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12463))
* MPS/macOS fixes and optimizations ([#12526](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12526))
* add second_order to samplers that mistakenly didn't have it
* when refreshing cards in extra networks UI, do not discard user's custom resolution
* fix processing error that happens if batch_size is not a multiple of how many prompts/negative prompts there are ([#12509](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12509))
* fix inpaint upload for alpha masks ([#12588](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12588))
* fix exception when image sizes are not integers ([#12586](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12586))
* fix incorrect TAESD Latent scale ([#12596](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12596))
* auto add data-dir to gradio-allowed-path ([#12603](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12603))
* fix exception if extensions dir is missing ([#12607](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12607))
* fix issues with api model-refresh and vae-refresh ([#12638](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12638))
* fix img2img background color for transparent images option not being used ([#12633](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12633))
* attempt to resolve NaN issue with unstable VAEs in fp32 mk2 ([#12630](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12630))
* implement missing undo hijack for SDXL
* fix xyz swap axes ([#12684](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12684))
* fix errors in backup/restore tab if any of config files are broken ([#12689](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12689))
* fix SD VAE switch error after model reuse ([#12685](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12685))
* fix trying to create images too large for the chosen format ([#12667](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12667))
* create Gradio temp directory if necessary ([#12717](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12717))
* prevent possible cache loss when exiting while the cache is being written, by using an atomic operation to replace the cache with the new version (see the sketch after this list)
* set devices.dtype_unet correctly
* run RealESRGAN on GPU for non-CUDA devices ([#12737](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12737))
* prevent extra network buttons being obscured by description for very small card sizes ([#12745](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12745))
* fix error that causes some extra networks to be disabled if both <lora:> and <lyco:> are present in the prompt
* fix defaults settings page breaking when any of main UI tabs are hidden
* fix incorrect save/display of new values in Defaults page in settings
* fix for Reload UI function: if you reload UI on one tab, other opened tabs will no longer stop working
* fix an error that prevents VAE being reloaded after an option change if a VAE near the checkpoint exists ([#12797](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12797))
* hide broken image crop tool ([#12792](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12792))
* don't show hidden samplers in dropdown for XYZ script ([#12780](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12780))
* fix style editing dialog breaking if it's opened in both img2img and txt2img tabs
* fix a bug allowing users to bypass gradio and API authentication (reported by vysecurity)
* fix notification not playing when built-in webui tab is inactive ([#12834](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12834))
* honor `--skip-install` for extension installers ([#12832](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12832))
* don't print blank stdout in extension installers ([#12833](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12832), [#12855](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12855))
* do not change quicksettings dropdown option when value returned is `None` ([#12854](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12854))
* get progressbar to display correctly in extensions tab
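A minimal sketch of the atomic cache-replace technique mentioned in the list above (a generic pattern over a hypothetical JSON cache file, not the webui's exact code):

```python
import json
import os
import tempfile

def write_cache_atomically(path, data):
    # Write to a temporary file in the same directory, then atomically swap it
    # in with os.replace, so an interrupted write never leaves a torn cache.
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as file:
            json.dump(data, file)
            file.flush()
            os.fsync(file.fileno())
        os.replace(tmp_path, path)  # atomic on both POSIX and Windows
    except BaseException:
        if os.path.exists(tmp_path):
            os.unlink(tmp_path)
        raise
```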
## 1.5.2
### Bug Fixes:
* fix memory leak when generation fails
* update doggettx cross attention optimization to not use an unreasonable amount of memory in some edge cases -- suggestion by MorkTheOrk
## 1.5.1
### Minor:
+7
@@ -0,0 +1,7 @@
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- given-names: AUTOMATIC1111
title: "Stable Diffusion Web UI"
date-released: 2022-08-22
url: "https://github.com/AUTOMATIC1111/stable-diffusion-webui"
+10 -6
@@ -78,7 +78,7 @@ A browser interface based on Gradio library for Stable Diffusion.
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A sparate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
- Can select to load a different VAE from settings screen
- Estimated completion time in progress bar
- API
@@ -88,19 +88,22 @@ A browser interface based on Gradio library for Stable Diffusion.
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: generated image's domension must be a multiple of 8 rather than 64
- Eased resolution restriction: generated image's dimensions must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from settings screen
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for:
- [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended)
- [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
- [Intel CPUs, Intel GPUs (both integrated and discrete)](https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon) (external wiki page)
Alternatively, use online services (like Google Colab):
- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)
### Installation on Windows 10/11 with NVidia-GPUs using release package
1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre) and extract it's contents.
1. Download `sd.webui.zip` from [v1.0.0-pre](https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre) and extract its contents.
2. Run `update.bat`.
3. Run `run.bat`.
> For more details see [Install-and-Run-on-NVidia-GPUs](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs)
@@ -115,7 +118,7 @@ Alternatively, use online services (like Google Colab):
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv
sudo apt install wget git python3 python3-venv libgl1 libglib2.0-0
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
@@ -123,7 +126,7 @@ sudo pacman -S wget git python3
```
2. Navigate to the directory you would like the webui to be installed and execute the following command:
```bash
bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
```
3. Run `webui.sh`.
4. Check `webui-user.sh` for options.
@@ -169,5 +172,6 @@ Licenses for borrowed code can be found in `Settings -> Licenses` screen, and al
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
- LyCORIS - KohakuBlueleaf
- Restart sampling - lambertae - https://github.com/Newbeeer/diffusion_restart_sampling
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)
@@ -6,9 +6,14 @@ class ExtraNetworkLora(extra_networks.ExtraNetwork):
def __init__(self):
super().__init__('lora')
self.errors = {}
"""mapping of network names to the number of errors the network had during operation"""
def activate(self, p, params_list):
additional = shared.opts.sd_lora
self.errors.clear()
if additional != "None" and additional in networks.available_networks and not any(x for x in params_list if x.items[0] == additional):
p.all_prompts = [x + f"<lora:{additional}:{shared.opts.extra_networks_default_multiplier}>" for x in p.all_prompts]
params_list.append(extra_networks.ExtraNetworkParams(items=[additional, shared.opts.extra_networks_default_multiplier]))
@@ -56,4 +61,7 @@ class ExtraNetworkLora(extra_networks.ExtraNetwork):
p.extra_generation_params["Lora hashes"] = ", ".join(network_hashes)
def deactivate(self, p):
pass
if self.errors:
p.comment("Networks with errors: " + ", ".join(f"{k} ({v})" for k, v in self.errors.items()))
self.errors.clear()
+33
@@ -0,0 +1,33 @@
import sys
import copy
import logging
class ColoredFormatter(logging.Formatter):
COLORS = {
"DEBUG": "\033[0;36m", # CYAN
"INFO": "\033[0;32m", # GREEN
"WARNING": "\033[0;33m", # YELLOW
"ERROR": "\033[0;31m", # RED
"CRITICAL": "\033[0;37;41m", # WHITE ON RED
"RESET": "\033[0m", # RESET COLOR
}
def format(self, record):
colored_record = copy.copy(record)
levelname = colored_record.levelname
seq = self.COLORS.get(levelname, self.COLORS["RESET"])
colored_record.levelname = f"{seq}{levelname}{self.COLORS['RESET']}"
return super().format(colored_record)
logger = logging.getLogger("lora")
logger.propagate = False
if not logger.handlers:
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(
ColoredFormatter("[%(name)s]-%(levelname)s: %(message)s")
)
logger.addHandler(handler)
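For reference, this is the logger that `networks.py` imports later in this changeset (`from lora_logger import logger`); minimal usage looks like:

```python
from lora_logger import logger

logger.info("network loaded")        # rendered green by ColoredFormatter
logger.warning("embedding skipped")  # rendered yellow
```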
+31
@@ -0,0 +1,31 @@
import torch
import networks
from modules import patches
class LoraPatches:
def __init__(self):
self.Linear_forward = patches.patch(__name__, torch.nn.Linear, 'forward', networks.network_Linear_forward)
self.Linear_load_state_dict = patches.patch(__name__, torch.nn.Linear, '_load_from_state_dict', networks.network_Linear_load_state_dict)
self.Conv2d_forward = patches.patch(__name__, torch.nn.Conv2d, 'forward', networks.network_Conv2d_forward)
self.Conv2d_load_state_dict = patches.patch(__name__, torch.nn.Conv2d, '_load_from_state_dict', networks.network_Conv2d_load_state_dict)
self.GroupNorm_forward = patches.patch(__name__, torch.nn.GroupNorm, 'forward', networks.network_GroupNorm_forward)
self.GroupNorm_load_state_dict = patches.patch(__name__, torch.nn.GroupNorm, '_load_from_state_dict', networks.network_GroupNorm_load_state_dict)
self.LayerNorm_forward = patches.patch(__name__, torch.nn.LayerNorm, 'forward', networks.network_LayerNorm_forward)
self.LayerNorm_load_state_dict = patches.patch(__name__, torch.nn.LayerNorm, '_load_from_state_dict', networks.network_LayerNorm_load_state_dict)
self.MultiheadAttention_forward = patches.patch(__name__, torch.nn.MultiheadAttention, 'forward', networks.network_MultiheadAttention_forward)
self.MultiheadAttention_load_state_dict = patches.patch(__name__, torch.nn.MultiheadAttention, '_load_from_state_dict', networks.network_MultiheadAttention_load_state_dict)
def undo(self):
self.Linear_forward = patches.undo(__name__, torch.nn.Linear, 'forward')
self.Linear_load_state_dict = patches.undo(__name__, torch.nn.Linear, '_load_from_state_dict')
self.Conv2d_forward = patches.undo(__name__, torch.nn.Conv2d, 'forward')
self.Conv2d_load_state_dict = patches.undo(__name__, torch.nn.Conv2d, '_load_from_state_dict')
self.GroupNorm_forward = patches.undo(__name__, torch.nn.GroupNorm, 'forward')
self.GroupNorm_load_state_dict = patches.undo(__name__, torch.nn.GroupNorm, '_load_from_state_dict')
self.LayerNorm_forward = patches.undo(__name__, torch.nn.LayerNorm, 'forward')
self.LayerNorm_load_state_dict = patches.undo(__name__, torch.nn.LayerNorm, '_load_from_state_dict')
self.MultiheadAttention_forward = patches.undo(__name__, torch.nn.MultiheadAttention, 'forward')
self.MultiheadAttention_load_state_dict = patches.undo(__name__, torch.nn.MultiheadAttention, '_load_from_state_dict')
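The `modules.patches` helper itself is not part of this diff. Based on the call sites above, a hypothetical sketch of the contract it appears to implement (`patch` swaps in a replacement and returns the original; `undo` restores the original and returns `None`, which is why `LoraPatches.undo` assigns its result back to the stored fields):

```python
# Hypothetical sketch, not the actual modules.patches implementation.
_originals = {}

def patch(key, obj, field, replacement):
    """Replace obj.field with replacement; remember and return the original."""
    original = getattr(obj, field)
    _originals[(key, obj, field)] = original
    setattr(obj, field, replacement)
    return original

def undo(key, obj, field):
    """Restore the original obj.field saved by patch(); returns None."""
    setattr(obj, field, _originals.pop((key, obj, field)))
    return None
```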
+6 -2
@@ -93,6 +93,7 @@ class Network: # LoraModule
self.unet_multiplier = 1.0
self.dyn_dim = None
self.modules = {}
self.bundle_embeddings = {}
self.mtime = None
self.mentioned_name = None
@@ -133,7 +134,7 @@ class NetworkModule:
return 1.0
def finalize_updown(self, updown, orig_weight, output_shape):
def finalize_updown(self, updown, orig_weight, output_shape, ex_bias=None):
if self.bias is not None:
updown = updown.reshape(self.bias.shape)
updown += self.bias.to(orig_weight.device, dtype=orig_weight.dtype)
@@ -145,7 +146,10 @@ class NetworkModule:
if orig_weight.size().numel() == updown.size().numel():
updown = updown.reshape(orig_weight.shape)
return updown * self.calc_scale() * self.multiplier()
if ex_bias is not None:
ex_bias = ex_bias * self.multiplier()
return updown * self.calc_scale() * self.multiplier(), ex_bias
def calc_updown(self, target):
raise NotImplementedError()
+6 -1
@@ -14,9 +14,14 @@ class NetworkModuleFull(network.NetworkModule):
super().__init__(net, weights)
self.weight = weights.w.get("diff")
self.ex_bias = weights.w.get("diff_b")
def calc_updown(self, orig_weight):
output_shape = self.weight.shape
updown = self.weight.to(orig_weight.device, dtype=orig_weight.dtype)
if self.ex_bias is not None:
ex_bias = self.ex_bias.to(orig_weight.device, dtype=orig_weight.dtype)
else:
ex_bias = None
return self.finalize_updown(updown, orig_weight, output_shape)
return self.finalize_updown(updown, orig_weight, output_shape, ex_bias)
+28
@@ -0,0 +1,28 @@
import network
class ModuleTypeNorm(network.ModuleType):
def create_module(self, net: network.Network, weights: network.NetworkWeights):
if all(x in weights.w for x in ["w_norm", "b_norm"]):
return NetworkModuleNorm(net, weights)
return None
class NetworkModuleNorm(network.NetworkModule):
def __init__(self, net: network.Network, weights: network.NetworkWeights):
super().__init__(net, weights)
self.w_norm = weights.w.get("w_norm")
self.b_norm = weights.w.get("b_norm")
def calc_updown(self, orig_weight):
output_shape = self.w_norm.shape
updown = self.w_norm.to(orig_weight.device, dtype=orig_weight.dtype)
if self.b_norm is not None:
ex_bias = self.b_norm.to(orig_weight.device, dtype=orig_weight.dtype)
else:
ex_bias = None
return self.finalize_updown(updown, orig_weight, output_shape, ex_bias)
+213 -40
@@ -1,17 +1,23 @@
import logging
import os
import re
import lora_patches
import network
import network_lora
import network_hada
import network_ia3
import network_lokr
import network_full
import network_norm
import torch
from typing import Union
from modules import shared, devices, sd_models, errors, scripts, sd_hijack
from modules.textual_inversion.textual_inversion import Embedding
from lora_logger import logger
module_types = [
network_lora.ModuleTypeLora(),
@@ -19,6 +25,7 @@ module_types = [
network_ia3.ModuleTypeIa3(),
network_lokr.ModuleTypeLokr(),
network_full.ModuleTypeFull(),
network_norm.ModuleTypeNorm(),
]
@@ -31,6 +38,8 @@ suffix_conversion = {
"resnets": {
"conv1": "in_layers_2",
"conv2": "out_layers_3",
"norm1": "in_layers_0",
"norm2": "out_layers_0",
"time_emb_proj": "emb_layers_1",
"conv_shortcut": "skip_connection",
}
@@ -143,9 +152,19 @@ def load_network(name, network_on_disk):
is_sd2 = 'model_transformer_resblocks' in shared.sd_model.network_layer_mapping
matched_networks = {}
bundle_embeddings = {}
for key_network, weight in sd.items():
key_network_without_network_parts, network_part = key_network.split(".", 1)
if key_network_without_network_parts == "bundle_emb":
emb_name, vec_name = network_part.split(".", 1)
emb_dict = bundle_embeddings.get(emb_name, {})
if vec_name.split('.')[0] == 'string_to_param':
_, k2 = vec_name.split('.', 1)
emb_dict['string_to_param'] = {k2: weight}
else:
emb_dict[vec_name] = weight
bundle_embeddings[emb_name] = emb_dict
key = convert_diffusers_name_to_compvis(key_network_without_network_parts, is_sd2)
sd_module = shared.sd_model.network_layer_mapping.get(key, None)
@@ -189,18 +208,66 @@ def load_network(name, network_on_disk):
net.modules[key] = net_module
embeddings = {}
for emb_name, data in bundle_embeddings.items():
# textual inversion embeddings
if 'string_to_param' in data:
param_dict = data['string_to_param']
param_dict = getattr(param_dict, '_parameters', param_dict) # fix for torch 1.12.1 loading saved file from torch 1.11
assert len(param_dict) == 1, 'embedding file has multiple terms in it'
emb = next(iter(param_dict.items()))[1]
vec = emb.detach().to(devices.device, dtype=torch.float32)
shape = vec.shape[-1]
vectors = vec.shape[0]
elif type(data) == dict and 'clip_g' in data and 'clip_l' in data: # SDXL embedding
vec = {k: v.detach().to(devices.device, dtype=torch.float32) for k, v in data.items()}
shape = data['clip_g'].shape[-1] + data['clip_l'].shape[-1]
vectors = data['clip_g'].shape[0]
elif type(data) == dict and type(next(iter(data.values()))) == torch.Tensor: # diffuser concepts
assert len(data.keys()) == 1, 'embedding file has multiple terms in it'
emb = next(iter(data.values()))
if len(emb.shape) == 1:
emb = emb.unsqueeze(0)
vec = emb.detach().to(devices.device, dtype=torch.float32)
shape = vec.shape[-1]
vectors = vec.shape[0]
else:
raise Exception(f"Couldn't identify {emb_name} in lora: {name} as neither textual inversion embedding nor diffuser concept.")
embedding = Embedding(vec, emb_name)
embedding.vectors = vectors
embedding.shape = shape
embedding.loaded = None
embeddings[emb_name] = embedding
net.bundle_embeddings = embeddings
if keys_failed_to_match:
print(f"Failed to match keys when loading network {network_on_disk.filename}: {keys_failed_to_match}")
logging.debug(f"Network {network_on_disk.filename} didn't match keys: {keys_failed_to_match}")
return net
def purge_networks_from_memory():
while len(networks_in_memory) > shared.opts.lora_in_memory_limit and len(networks_in_memory) > 0:
name = next(iter(networks_in_memory))
networks_in_memory.pop(name, None)
devices.torch_gc()
def load_networks(names, te_multipliers=None, unet_multipliers=None, dyn_dims=None):
emb_db = sd_hijack.model_hijack.embedding_db
already_loaded = {}
for net in loaded_networks:
if net.name in names:
already_loaded[net.name] = net
for emb_name, embedding in net.bundle_embeddings.items():
if embedding.loaded:
embedding.loaded = None
emb_db.register_embedding_by_name(None, shared.sd_model, emb_name)
loaded_networks.clear()
@@ -212,15 +279,19 @@ def load_networks(names, te_multipliers=None, unet_multipliers=None, dyn_dims=No
failed_to_load_networks = []
for i, name in enumerate(names):
for i, (network_on_disk, name) in enumerate(zip(networks_on_disk, names)):
net = already_loaded.get(name, None)
network_on_disk = networks_on_disk[i]
if network_on_disk is not None:
if net is None:
net = networks_in_memory.get(name)
if net is None or os.path.getmtime(network_on_disk.filename) > net.mtime:
try:
net = load_network(name, network_on_disk)
networks_in_memory.pop(name, None)
networks_in_memory[name] = net
except Exception as e:
errors.display(e, f"loading network {network_on_disk.filename}")
continue
@@ -231,7 +302,7 @@ def load_networks(names, te_multipliers=None, unet_multipliers=None, dyn_dims=No
if net is None:
failed_to_load_networks.append(name)
print(f"Couldn't find network with name {name}")
logging.info(f"Couldn't find network with name {name}")
continue
net.te_multiplier = te_multipliers[i] if te_multipliers else 1.0
@@ -239,24 +310,54 @@ def load_networks(names, te_multipliers=None, unet_multipliers=None, dyn_dims=No
net.dyn_dim = dyn_dims[i] if dyn_dims else 1.0
loaded_networks.append(net)
for emb_name, embedding in net.bundle_embeddings.items():
if embedding.loaded is None and emb_name in emb_db.word_embeddings:
logger.warning(
f'Skip bundle embedding: "{emb_name}"'
' as it was already loaded from embeddings folder'
)
continue
embedding.loaded = False
if emb_db.expected_shape == -1 or emb_db.expected_shape == embedding.shape:
embedding.loaded = True
emb_db.register_embedding(embedding, shared.sd_model)
else:
emb_db.skipped_embeddings[name] = embedding
if failed_to_load_networks:
sd_hijack.model_hijack.comments.append("Failed to find networks: " + ", ".join(failed_to_load_networks))
sd_hijack.model_hijack.comments.append("Networks not found: " + ", ".join(failed_to_load_networks))
purge_networks_from_memory()
def network_restore_weights_from_backup(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn.MultiheadAttention]):
def network_restore_weights_from_backup(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn.GroupNorm, torch.nn.LayerNorm, torch.nn.MultiheadAttention]):
weights_backup = getattr(self, "network_weights_backup", None)
bias_backup = getattr(self, "network_bias_backup", None)
if weights_backup is None:
if weights_backup is None and bias_backup is None:
return
if isinstance(self, torch.nn.MultiheadAttention):
self.in_proj_weight.copy_(weights_backup[0])
self.out_proj.weight.copy_(weights_backup[1])
if weights_backup is not None:
if isinstance(self, torch.nn.MultiheadAttention):
self.in_proj_weight.copy_(weights_backup[0])
self.out_proj.weight.copy_(weights_backup[1])
else:
self.weight.copy_(weights_backup)
if bias_backup is not None:
if isinstance(self, torch.nn.MultiheadAttention):
self.out_proj.bias.copy_(bias_backup)
else:
self.bias.copy_(bias_backup)
else:
self.weight.copy_(weights_backup)
if isinstance(self, torch.nn.MultiheadAttention):
self.out_proj.bias = None
else:
self.bias = None
def network_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn.MultiheadAttention]):
def network_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn.GroupNorm, torch.nn.LayerNorm, torch.nn.MultiheadAttention]):
"""
Applies the currently selected set of networks to the weights of torch layer self.
If weights already have this particular set of networks applied, does nothing.
@@ -271,7 +372,10 @@ def network_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn
wanted_names = tuple((x.name, x.te_multiplier, x.unet_multiplier, x.dyn_dim) for x in loaded_networks)
weights_backup = getattr(self, "network_weights_backup", None)
if weights_backup is None:
if weights_backup is None and wanted_names != ():
if current_names != ():
raise RuntimeError("no backup weights found and current weights are not unchanged")
if isinstance(self, torch.nn.MultiheadAttention):
weights_backup = (self.in_proj_weight.to(devices.cpu, copy=True), self.out_proj.weight.to(devices.cpu, copy=True))
else:
@@ -279,21 +383,41 @@ def network_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn
self.network_weights_backup = weights_backup
bias_backup = getattr(self, "network_bias_backup", None)
if bias_backup is None:
if isinstance(self, torch.nn.MultiheadAttention) and self.out_proj.bias is not None:
bias_backup = self.out_proj.bias.to(devices.cpu, copy=True)
elif getattr(self, 'bias', None) is not None:
bias_backup = self.bias.to(devices.cpu, copy=True)
else:
bias_backup = None
self.network_bias_backup = bias_backup
if current_names != wanted_names:
network_restore_weights_from_backup(self)
for net in loaded_networks:
module = net.modules.get(network_layer_name, None)
if module is not None and hasattr(self, 'weight'):
with torch.no_grad():
updown = module.calc_updown(self.weight)
try:
with torch.no_grad():
updown, ex_bias = module.calc_updown(self.weight)
if len(self.weight.shape) == 4 and self.weight.shape[1] == 9:
# inpainting model. zero pad updown to make channel[1] 4 to 9
updown = torch.nn.functional.pad(updown, (0, 0, 0, 0, 0, 5))
if len(self.weight.shape) == 4 and self.weight.shape[1] == 9:
# inpainting model. zero pad updown to make channel[1] 4 to 9
updown = torch.nn.functional.pad(updown, (0, 0, 0, 0, 0, 5))
self.weight += updown
continue
self.weight += updown
if ex_bias is not None and hasattr(self, 'bias'):
if self.bias is None:
self.bias = torch.nn.Parameter(ex_bias)
else:
self.bias += ex_bias
except RuntimeError as e:
logging.debug(f"Network {net.name} layer {network_layer_name}: {e}")
extra_network_lora.errors[net.name] = extra_network_lora.errors.get(net.name, 0) + 1
continue
module_q = net.modules.get(network_layer_name + "_q_proj", None)
module_k = net.modules.get(network_layer_name + "_k_proj", None)
@@ -301,21 +425,33 @@ def network_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn
module_out = net.modules.get(network_layer_name + "_out_proj", None)
if isinstance(self, torch.nn.MultiheadAttention) and module_q and module_k and module_v and module_out:
with torch.no_grad():
updown_q = module_q.calc_updown(self.in_proj_weight)
updown_k = module_k.calc_updown(self.in_proj_weight)
updown_v = module_v.calc_updown(self.in_proj_weight)
updown_qkv = torch.vstack([updown_q, updown_k, updown_v])
updown_out = module_out.calc_updown(self.out_proj.weight)
try:
with torch.no_grad():
updown_q, _ = module_q.calc_updown(self.in_proj_weight)
updown_k, _ = module_k.calc_updown(self.in_proj_weight)
updown_v, _ = module_v.calc_updown(self.in_proj_weight)
updown_qkv = torch.vstack([updown_q, updown_k, updown_v])
updown_out, ex_bias = module_out.calc_updown(self.out_proj.weight)
self.in_proj_weight += updown_qkv
self.out_proj.weight += updown_out
continue
self.in_proj_weight += updown_qkv
self.out_proj.weight += updown_out
if ex_bias is not None:
if self.out_proj.bias is None:
self.out_proj.bias = torch.nn.Parameter(ex_bias)
else:
self.out_proj.bias += ex_bias
except RuntimeError as e:
logging.debug(f"Network {net.name} layer {network_layer_name}: {e}")
extra_network_lora.errors[net.name] = extra_network_lora.errors.get(net.name, 0) + 1
continue
if module is None:
continue
print(f'failed to calculate network weights for layer {network_layer_name}')
logging.debug(f"Network {net.name} layer {network_layer_name}: couldn't find supported operation")
extra_network_lora.errors[net.name] = extra_network_lora.errors.get(net.name, 0) + 1
self.network_current_names = wanted_names
@@ -342,7 +478,7 @@ def network_forward(module, input, original_forward):
if module is None:
continue
y = module.forward(y, input)
y = module.forward(input, y)
return y
@@ -350,48 +486,79 @@ def network_forward(module, input, original_forward):
def network_reset_cached_weight(self: Union[torch.nn.Conv2d, torch.nn.Linear]):
self.network_current_names = ()
self.network_weights_backup = None
self.network_bias_backup = None
def network_Linear_forward(self, input):
if shared.opts.lora_functional:
return network_forward(self, input, torch.nn.Linear_forward_before_network)
return network_forward(self, input, originals.Linear_forward)
network_apply_weights(self)
return torch.nn.Linear_forward_before_network(self, input)
return originals.Linear_forward(self, input)
def network_Linear_load_state_dict(self, *args, **kwargs):
network_reset_cached_weight(self)
return torch.nn.Linear_load_state_dict_before_network(self, *args, **kwargs)
return originals.Linear_load_state_dict(self, *args, **kwargs)
def network_Conv2d_forward(self, input):
if shared.opts.lora_functional:
return network_forward(self, input, torch.nn.Conv2d_forward_before_network)
return network_forward(self, input, originals.Conv2d_forward)
network_apply_weights(self)
return torch.nn.Conv2d_forward_before_network(self, input)
return originals.Conv2d_forward(self, input)
def network_Conv2d_load_state_dict(self, *args, **kwargs):
network_reset_cached_weight(self)
return torch.nn.Conv2d_load_state_dict_before_network(self, *args, **kwargs)
return originals.Conv2d_load_state_dict(self, *args, **kwargs)
def network_GroupNorm_forward(self, input):
if shared.opts.lora_functional:
return network_forward(self, input, originals.GroupNorm_forward)
network_apply_weights(self)
return originals.GroupNorm_forward(self, input)
def network_GroupNorm_load_state_dict(self, *args, **kwargs):
network_reset_cached_weight(self)
return originals.GroupNorm_load_state_dict(self, *args, **kwargs)
def network_LayerNorm_forward(self, input):
if shared.opts.lora_functional:
return network_forward(self, input, originals.LayerNorm_forward)
network_apply_weights(self)
return originals.LayerNorm_forward(self, input)
def network_LayerNorm_load_state_dict(self, *args, **kwargs):
network_reset_cached_weight(self)
return originals.LayerNorm_load_state_dict(self, *args, **kwargs)
def network_MultiheadAttention_forward(self, *args, **kwargs):
network_apply_weights(self)
return torch.nn.MultiheadAttention_forward_before_network(self, *args, **kwargs)
return originals.MultiheadAttention_forward(self, *args, **kwargs)
def network_MultiheadAttention_load_state_dict(self, *args, **kwargs):
network_reset_cached_weight(self)
return torch.nn.MultiheadAttention_load_state_dict_before_network(self, *args, **kwargs)
return originals.MultiheadAttention_load_state_dict(self, *args, **kwargs)
def list_available_networks():
@@ -459,9 +626,15 @@ def infotext_pasted(infotext, params):
params["Prompt"] += "\n" + "".join(added)
originals: lora_patches.LoraPatches = None
extra_network_lora = None
available_networks = {}
available_network_aliases = {}
loaded_networks = []
loaded_bundle_embeddings = {}
networks_in_memory = {}
available_network_hash_lookup = {}
forbidden_network_aliases = {}
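To make the `bundle_emb` parsing in `load_network` above concrete, here is how one bundled key decomposes (a worked illustration with a hypothetical embedding name; `"*"` is the key textual inversion files conventionally use inside `string_to_param`):

```python
key_network = "bundle_emb.myEmb.string_to_param.*"  # hypothetical bundled key
prefix, network_part = key_network.split(".", 1)    # "bundle_emb", "myEmb.string_to_param.*"
emb_name, vec_name = network_part.split(".", 1)     # "myEmb", "string_to_param.*"
head, k2 = vec_name.split(".", 1)                   # "string_to_param", "*"
# The tensor then ends up in bundle_embeddings["myEmb"]["string_to_param"]["*"]
```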
+10 -34
@@ -1,57 +1,30 @@
import re
import torch
import gradio as gr
from fastapi import FastAPI
import network
import networks
import lora # noqa:F401
import lora_patches
import extra_networks_lora
import ui_extra_networks_lora
from modules import script_callbacks, ui_extra_networks, extra_networks, shared
def unload():
torch.nn.Linear.forward = torch.nn.Linear_forward_before_network
torch.nn.Linear._load_from_state_dict = torch.nn.Linear_load_state_dict_before_network
torch.nn.Conv2d.forward = torch.nn.Conv2d_forward_before_network
torch.nn.Conv2d._load_from_state_dict = torch.nn.Conv2d_load_state_dict_before_network
torch.nn.MultiheadAttention.forward = torch.nn.MultiheadAttention_forward_before_network
torch.nn.MultiheadAttention._load_from_state_dict = torch.nn.MultiheadAttention_load_state_dict_before_network
networks.originals.undo()
def before_ui():
ui_extra_networks.register_page(ui_extra_networks_lora.ExtraNetworksPageLora())
extra_network = extra_networks_lora.ExtraNetworkLora()
extra_networks.register_extra_network(extra_network)
extra_networks.register_extra_network_alias(extra_network, "lyco")
networks.extra_network_lora = extra_networks_lora.ExtraNetworkLora()
extra_networks.register_extra_network(networks.extra_network_lora)
extra_networks.register_extra_network_alias(networks.extra_network_lora, "lyco")
if not hasattr(torch.nn, 'Linear_forward_before_network'):
torch.nn.Linear_forward_before_network = torch.nn.Linear.forward
if not hasattr(torch.nn, 'Linear_load_state_dict_before_network'):
torch.nn.Linear_load_state_dict_before_network = torch.nn.Linear._load_from_state_dict
if not hasattr(torch.nn, 'Conv2d_forward_before_network'):
torch.nn.Conv2d_forward_before_network = torch.nn.Conv2d.forward
if not hasattr(torch.nn, 'Conv2d_load_state_dict_before_network'):
torch.nn.Conv2d_load_state_dict_before_network = torch.nn.Conv2d._load_from_state_dict
if not hasattr(torch.nn, 'MultiheadAttention_forward_before_network'):
torch.nn.MultiheadAttention_forward_before_network = torch.nn.MultiheadAttention.forward
if not hasattr(torch.nn, 'MultiheadAttention_load_state_dict_before_network'):
torch.nn.MultiheadAttention_load_state_dict_before_network = torch.nn.MultiheadAttention._load_from_state_dict
torch.nn.Linear.forward = networks.network_Linear_forward
torch.nn.Linear._load_from_state_dict = networks.network_Linear_load_state_dict
torch.nn.Conv2d.forward = networks.network_Conv2d_forward
torch.nn.Conv2d._load_from_state_dict = networks.network_Conv2d_load_state_dict
torch.nn.MultiheadAttention.forward = networks.network_MultiheadAttention_forward
torch.nn.MultiheadAttention._load_from_state_dict = networks.network_MultiheadAttention_load_state_dict
networks.originals = lora_patches.LoraPatches()
script_callbacks.on_model_loaded(networks.assign_network_names_to_compvis_modules)
script_callbacks.on_script_unloaded(unload)
@@ -65,6 +38,7 @@ shared.options_templates.update(shared.options_section(('extra_networks', "Extra
"lora_add_hashes_to_infotext": shared.OptionInfo(True, "Add Lora hashes to infotext"),
"lora_show_all": shared.OptionInfo(False, "Always show all networks on the Lora page").info("otherwise, those detected as for incompatible version of Stable Diffusion will be hidden"),
"lora_hide_unknown_for_versions": shared.OptionInfo([], "Hide networks of unknown versions for model versions", gr.CheckboxGroup, {"choices": ["SD1", "SD2", "SDXL"]}),
"lora_in_memory_limit": shared.OptionInfo(0, "Number of Lora networks to keep cached in memory", gr.Number, {"precision": 0}),
}))
@@ -121,3 +95,5 @@ def infotext_pasted(infotext, d):
script_callbacks.on_infotext_pasted(infotext_pasted)
shared.opts.onchange("lora_in_memory_limit", networks.purge_networks_from_memory)
@@ -70,6 +70,7 @@ class LoraUserMetadataEditor(ui_extra_networks_user_metadata.UserMetadataEditor)
metadata = item.get("metadata") or {}
keys = {
'ss_output_name': "Output name:",
'ss_sd_model_name': "Model:",
'ss_clip_skip': "Clip skip:",
'ss_network_module': "Kohya module:",
@@ -167,7 +168,7 @@ class LoraUserMetadataEditor(ui_extra_networks_user_metadata.UserMetadataEditor)
random_prompt = gr.Textbox(label='Random prompt', lines=4, max_lines=4, interactive=False)
with gr.Column(scale=1, min_width=120):
generate_random_prompt = gr.Button('Generate').style(full_width=True, size="lg")
generate_random_prompt = gr.Button('Generate', size="lg", scale=1)
self.edit_notes = gr.TextArea(label='Notes', lines=4)
@@ -25,9 +25,10 @@ class ExtraNetworksPageLora(ui_extra_networks.ExtraNetworksPage):
item = {
"name": name,
"filename": lora_on_disk.filename,
"shorthash": lora_on_disk.shorthash,
"preview": self.find_preview(path),
"description": self.find_description(path),
"search_term": self.search_terms_from_path(lora_on_disk.filename),
"search_term": self.search_terms_from_path(lora_on_disk.filename) + " " + (lora_on_disk.hash or ""),
"local_preview": f"{path}.{shared.opts.samples_format}",
"metadata": lora_on_disk.metadata,
"sort_keys": {'default': index, **self.get_sort_keys(lora_on_disk.filename)},
@@ -12,8 +12,22 @@ onUiLoaded(async() => {
"Sketch": elementIDs.sketch
};
// Helper functions
// Get active tab
/**
* Waits for an element to be present in the DOM.
*/
const waitForElement = (id) => new Promise(resolve => {
const checkForElement = () => {
const element = document.querySelector(id);
if (element) return resolve(element);
setTimeout(checkForElement, 100);
};
checkForElement();
});
function getActiveTab(elements, all = false) {
const tabs = elements.img2imgTabs.querySelectorAll("button");
@@ -34,7 +48,7 @@ onUiLoaded(async() => {
// Wait until opts loaded
async function waitForOpts() {
for (;;) {
for (; ;) {
if (window.opts && Object.keys(window.opts).length) {
return window.opts;
}
@@ -42,6 +56,11 @@ onUiLoaded(async() => {
}
}
// Detect whether the element has a horizontal scroll bar
function hasHorizontalScrollbar(element) {
return element.scrollWidth > element.clientWidth;
}
// Function for defining the "Ctrl", "Shift" and "Alt" keys
function isModifierKey(event, key) {
switch (key) {
@@ -201,7 +220,8 @@ onUiLoaded(async() => {
canvas_hotkey_overlap: "KeyO",
canvas_disabled_functions: [],
canvas_show_tooltip: true,
canvas_blur_prompt: false
canvas_auto_expand: true,
canvas_blur_prompt: false,
};
const functionMap = {
@@ -249,7 +269,7 @@ onUiLoaded(async() => {
input?.addEventListener("input", () => restoreImgRedMask(elements));
}
function applyZoomAndPan(elemId) {
function applyZoomAndPan(elemId, isExtension = true) {
const targetElement = gradioApp().querySelector(elemId);
if (!targetElement) {
@@ -361,6 +381,12 @@ onUiLoaded(async() => {
panY: 0
};
if (isExtension) {
targetElement.style.overflow = "hidden";
}
targetElement.isZoomed = false;
fixCanvas();
targetElement.style.transform = `scale(${elemData[elemId].zoomLevel}) translate(${elemData[elemId].panX}px, ${elemData[elemId].panY}px)`;
@@ -371,8 +397,27 @@ onUiLoaded(async() => {
toggleOverlap("off");
fullScreenMode = false;
const closeBtn = targetElement.querySelector("button[aria-label='Remove Image']");
if (closeBtn) {
closeBtn.addEventListener("click", resetZoom);
}
if (canvas && isExtension) {
const parentElement = targetElement.closest('[id^="component-"]');
if (
canvas &&
parseFloat(canvas.style.width) > parentElement.offsetWidth &&
parseFloat(targetElement.style.width) > parentElement.offsetWidth
) {
fitToElement();
return;
}
}
if (
canvas &&
!isExtension &&
parseFloat(canvas.style.width) > 865 &&
parseFloat(targetElement.style.width) > 865
) {
@@ -381,9 +426,6 @@ onUiLoaded(async() => {
}
targetElement.style.width = "";
if (canvas) {
targetElement.style.height = canvas.style.height;
}
}
// Toggle the zIndex of the target element between two values, allowing it to overlap or be overlapped by other elements
@@ -439,7 +481,7 @@ onUiLoaded(async() => {
// Update the zoom level and pan position of the target element based on the values of the zoomLevel, panX and panY variables
function updateZoom(newZoomLevel, mouseX, mouseY) {
newZoomLevel = Math.max(0.5, Math.min(newZoomLevel, 15));
newZoomLevel = Math.max(0.1, Math.min(newZoomLevel, 15));
elemData[elemId].panX +=
mouseX - (mouseX * newZoomLevel) / elemData[elemId].zoomLevel;
@@ -450,6 +492,10 @@ onUiLoaded(async() => {
targetElement.style.transform = `translate(${elemData[elemId].panX}px, ${elemData[elemId].panY}px) scale(${newZoomLevel})`;
toggleOverlap("on");
if (isExtension) {
targetElement.style.overflow = "visible";
}
return newZoomLevel;
}
@@ -472,10 +518,12 @@ onUiLoaded(async() => {
fullScreenMode = false;
elemData[elemId].zoomLevel = updateZoom(
elemData[elemId].zoomLevel +
(operation === "+" ? delta : -delta),
(operation === "+" ? delta : -delta),
zoomPosX - targetElement.getBoundingClientRect().left,
zoomPosY - targetElement.getBoundingClientRect().top
);
targetElement.isZoomed = true;
}
}
@@ -489,10 +537,19 @@ onUiLoaded(async() => {
//Reset Zoom
targetElement.style.transform = `translate(${0}px, ${0}px) scale(${1})`;
let parentElement;
if (isExtension) {
parentElement = targetElement.closest('[id^="component-"]');
} else {
parentElement = targetElement.parentElement;
}
// Get element and screen dimensions
const elementWidth = targetElement.offsetWidth;
const elementHeight = targetElement.offsetHeight;
const parentElement = targetElement.parentElement;
const screenWidth = parentElement.clientWidth;
const screenHeight = parentElement.clientHeight;
@@ -545,8 +602,12 @@ onUiLoaded(async() => {
if (!canvas) return;
if (canvas.offsetWidth > 862) {
targetElement.style.width = canvas.offsetWidth + "px";
if (canvas.offsetWidth > 862 || isExtension) {
targetElement.style.width = (canvas.offsetWidth + 2) + "px";
}
if (isExtension) {
targetElement.style.overflow = "visible";
}
if (fullScreenMode) {
@@ -648,8 +709,48 @@ onUiLoaded(async() => {
mouseY = e.offsetY;
}
// Simulation of the function to put a long image into the screen.
// We detect if an image has a scroll bar or not, make a fullscreen to reveal the image, then reduce it to fit into the element.
// We hide the image and show it to the user when it is ready.
targetElement.isExpanded = false;
function autoExpand() {
const canvas = document.querySelector(`${elemId} canvas[key="interface"]`);
if (canvas) {
if (hasHorizontalScrollbar(targetElement) && targetElement.isExpanded === false) {
targetElement.style.visibility = "hidden";
setTimeout(() => {
fitToScreen();
resetZoom();
targetElement.style.visibility = "visible";
targetElement.isExpanded = true;
}, 10);
}
}
}
targetElement.addEventListener("mousemove", getMousePosition);
//observers
// Creating an observer with a callback function to handle DOM changes
const observer = new MutationObserver((mutationsList, observer) => {
for (let mutation of mutationsList) {
// If the style attribute of the canvas has changed, by observation it happens only when the picture changes
if (mutation.type === 'attributes' && mutation.attributeName === 'style' &&
mutation.target.tagName.toLowerCase() === 'canvas') {
targetElement.isExpanded = false;
setTimeout(resetZoom, 10);
}
}
});
// Apply auto expand if enabled
if (hotkeysConfig.canvas_auto_expand) {
targetElement.addEventListener("mousemove", autoExpand);
// Set up an observer to track attribute changes
observer.observe(targetElement, {attributes: true, childList: true, subtree: true});
}
// Handle events only inside the targetElement
let isKeyDownHandlerAttached = false;
@@ -754,6 +855,11 @@ onUiLoaded(async() => {
if (isMoving && elemId === activeElement) {
updatePanPosition(e.movementX, e.movementY);
targetElement.style.pointerEvents = "none";
if (isExtension) {
targetElement.style.overflow = "visible";
}
} else {
targetElement.style.pointerEvents = "auto";
}
@@ -764,13 +870,93 @@ onUiLoaded(async() => {
isMoving = false;
};
// Checks for extension
function checkForOutBox() {
const parentElement = targetElement.closest('[id^="component-"]');
if (parentElement.offsetWidth < targetElement.offsetWidth && !targetElement.isExpanded) {
resetZoom();
targetElement.isExpanded = true;
}
if (parentElement.offsetWidth < targetElement.offsetWidth && elemData[elemId].zoomLevel == 1) {
resetZoom();
}
if (parentElement.offsetWidth < targetElement.offsetWidth && targetElement.offsetWidth * elemData[elemId].zoomLevel > parentElement.offsetWidth && elemData[elemId].zoomLevel < 1 && !targetElement.isZoomed) {
resetZoom();
}
}
if (isExtension) {
targetElement.addEventListener("mousemove", checkForOutBox);
}
window.addEventListener('resize', (e) => {
resetZoom();
if (isExtension) {
targetElement.isExpanded = false;
targetElement.isZoomed = false;
}
});
gradioApp().addEventListener("mousemove", handleMoveByKey);
}
applyZoomAndPan(elementIDs.sketch);
applyZoomAndPan(elementIDs.inpaint);
applyZoomAndPan(elementIDs.inpaintSketch);
applyZoomAndPan(elementIDs.sketch, false);
applyZoomAndPan(elementIDs.inpaint, false);
applyZoomAndPan(elementIDs.inpaintSketch, false);
// Make the function global so that other extensions can take advantage of this solution
window.applyZoomAndPan = applyZoomAndPan;
const applyZoomAndPanIntegration = async(id, elementIDs) => {
const mainEl = document.querySelector(id);
if (id.toLocaleLowerCase() === "none") {
for (const elementID of elementIDs) {
const el = await waitForElement(elementID);
if (!el) break;
applyZoomAndPan(elementID);
}
return;
}
if (!mainEl) return;
mainEl.addEventListener("click", async() => {
for (const elementID of elementIDs) {
const el = await waitForElement(elementID);
if (!el) break;
applyZoomAndPan(elementID);
}
}, {once: true});
};
window.applyZoomAndPan = applyZoomAndPan; // Only 1 elements, argument elementID, for example applyZoomAndPan("#txt2img_controlnet_ControlNet_input_image")
window.applyZoomAndPanIntegration = applyZoomAndPanIntegration; // for any extension
/*
The function `applyZoomAndPanIntegration` takes two arguments:
1. `id`: A string identifier for the element to which zoom and pan functionality will be applied on click.
If the `id` value is "none", the functionality will be applied to all elements specified in the second argument without a click event.
2. `elementIDs`: An array of string identifiers for elements. Zoom and pan functionality will be applied to each of these elements on click of the element specified by the first argument.
If "none" is specified in the first argument, the functionality will be applied to each of these elements without a click event.
Example usage:
applyZoomAndPanIntegration("#txt2img_controlnet", ["#txt2img_controlnet_ControlNet_input_image"]);
In this example, zoom and pan functionality will be applied to the element with the identifier "txt2img_controlnet_ControlNet_input_image" upon clicking the element with the identifier "txt2img_controlnet".
*/
// More examples
// Add integration with ControlNet txt2img One TAB
// applyZoomAndPanIntegration("#txt2img_controlnet", ["#txt2img_controlnet_ControlNet_input_image"]);
// Add integration with ControlNet txt2img Tabs
// applyZoomAndPanIntegration("#txt2img_controlnet",Array.from({ length: 10 }, (_, i) => `#txt2img_controlnet_ControlNet-${i}_input_image`));
// Add integration with Inpaint Anything
// applyZoomAndPanIntegration("None", ["#ia_sam_image", "#ia_sel_mask"]);
});
@@ -9,6 +9,7 @@ shared.options_templates.update(shared.options_section(('canvas_hotkey', "Canvas
"canvas_hotkey_reset": shared.OptionInfo("R", "Reset zoom and canvas positon"),
"canvas_hotkey_overlap": shared.OptionInfo("O", "Toggle overlap").info("Technical button, neededs for testing"),
"canvas_show_tooltip": shared.OptionInfo(True, "Enable tooltip on the canvas"),
"canvas_auto_expand": shared.OptionInfo(True, "Automatically expands an image that does not fit completely in the canvas area, similar to manually pressing the S and R buttons"),
"canvas_blur_prompt": shared.OptionInfo(False, "Take the focus off the prompt when working with a canvas"),
"canvas_disabled_functions": shared.OptionInfo(["Overlap"], "Disable function that you don't use", gr.CheckboxGroup, {"choices": ["Zoom","Adjust brush size", "Moving canvas","Fullscreen","Reset Zoom","Overlap"]}),
}))
@@ -61,3 +61,6 @@
to {opacity: 1;}
}
.styler {
overflow:inherit !important;
}
@@ -1,5 +1,7 @@
import math
import gradio as gr
from modules import scripts, shared, ui_components, ui_settings
from modules import scripts, shared, ui_components, ui_settings, generation_parameters_copypaste
from modules.ui_components import FormColumn
@@ -19,18 +21,38 @@ class ExtraOptionsSection(scripts.Script):
def ui(self, is_img2img):
self.comps = []
self.setting_names = []
self.infotext_fields = []
extra_options = shared.opts.extra_options_img2img if is_img2img else shared.opts.extra_options_txt2img
mapping = {k: v for v, k in generation_parameters_copypaste.infotext_to_setting_name_mapping}
with gr.Blocks() as interface:
with gr.Accordion("Options", open=False) if shared.opts.extra_options_accordion and shared.opts.extra_options else gr.Group(), gr.Row():
for setting_name in shared.opts.extra_options:
with FormColumn():
comp = ui_settings.create_setting_component(setting_name)
with gr.Accordion("Options", open=False) if shared.opts.extra_options_accordion and extra_options else gr.Group():
self.comps.append(comp)
self.setting_names.append(setting_name)
row_count = math.ceil(len(extra_options) / shared.opts.extra_options_cols)
for row in range(row_count):
with gr.Row():
for col in range(shared.opts.extra_options_cols):
index = row * shared.opts.extra_options_cols + col
if index >= len(extra_options):
break
setting_name = extra_options[index]
with FormColumn():
comp = ui_settings.create_setting_component(setting_name)
self.comps.append(comp)
self.setting_names.append(setting_name)
setting_infotext_name = mapping.get(setting_name)
if setting_infotext_name is not None:
self.infotext_fields.append((comp, setting_infotext_name))
def get_settings_values():
return [ui_settings.get_value_for_setting(key) for key in self.setting_names]
res = [ui_settings.get_value_for_setting(key) for key in self.setting_names]
return res[0] if len(res) == 1 else res
interface.load(fn=get_settings_values, inputs=[], outputs=self.comps, queue=False, show_progress=False)
@@ -43,6 +65,10 @@ class ExtraOptionsSection(scripts.Script):
shared.options_templates.update(shared.options_section(('ui', "User interface"), {
"extra_options": shared.OptionInfo([], "Options in main UI", ui_components.DropdownMulti, lambda: {"choices": list(shared.opts.data_labels.keys())}).js("info", "settingsHintsShowQuicksettings").info("setting entries that also appear in txt2img/img2img interfaces").needs_restart(),
"extra_options_accordion": shared.OptionInfo(False, "Place options in main UI into an accordion")
"extra_options_txt2img": shared.OptionInfo([], "Options in main UI - txt2img", ui_components.DropdownMulti, lambda: {"choices": list(shared.opts.data_labels.keys())}).js("info", "settingsHintsShowQuicksettings").info("setting entries that also appear in txt2img interfaces").needs_reload_ui(),
"extra_options_img2img": shared.OptionInfo([], "Options in main UI - img2img", ui_components.DropdownMulti, lambda: {"choices": list(shared.opts.data_labels.keys())}).js("info", "settingsHintsShowQuicksettings").info("setting entries that also appear in img2img interfaces").needs_reload_ui(),
"extra_options_cols": shared.OptionInfo(1, "Options in main UI - number of columns", gr.Number, {"precision": 0}).needs_reload_ui(),
"extra_options_accordion": shared.OptionInfo(False, "Options in main UI - place into an accordion").needs_reload_ui()
}))
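The mapping comprehension above inverts generation_parameters_copypaste.infotext_to_setting_name_mapping, whose entries pair an infotext field name with a setting name, so the script can look up the infotext name for each displayed setting. A minimal sketch of the same inversion pattern, with illustrative pair data rather than the real table:

# Illustrative (infotext_name, setting_name) pairs, mirroring the shape of
# infotext_to_setting_name_mapping; not the actual contents.
pairs = [("Clip skip", "CLIP_stop_at_last_layers"), ("ENSD", "eta_noise_seed_delta")]

# Invert: map setting_name -> infotext_name, as the script above does.
mapping = {setting_name: infotext_name for infotext_name, setting_name in pairs}
assert mapping.get("eta_noise_seed_delta") == "ENSD"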
@@ -20,7 +20,13 @@ function reportWindowSize() {
var button = gradioApp().getElementById(tab + '_generate_box');
var target = gradioApp().getElementById(currentlyMobile ? tab + '_results' : tab + '_actions_column');
target.insertBefore(button, target.firstElementChild);
gradioApp().getElementById(tab + '_results').classList.toggle('mobile', currentlyMobile);
}
}
window.addEventListener("resize", reportWindowSize);
onUiLoaded(function() {
reportWindowSize();
});
+1 -1
@@ -119,7 +119,7 @@ window.addEventListener('paste', e => {
}
const firstFreeImageField = visibleImageFields
.filter(el => el.querySelector('input[type=file]'))?.[0];
.filter(el => !el.querySelector('img'))?.[0];
dropReplaceImage(
firstFreeImageField ?
+8 -14
@@ -18,22 +18,11 @@ function keyupEditAttention(event) {
const before = text.substring(0, selectionStart);
let beforeParen = before.lastIndexOf(OPEN);
if (beforeParen == -1) return false;
let beforeParenClose = before.lastIndexOf(CLOSE);
while (beforeParenClose !== -1 && beforeParenClose > beforeParen) {
beforeParen = before.lastIndexOf(OPEN, beforeParen - 1);
beforeParenClose = before.lastIndexOf(CLOSE, beforeParenClose - 1);
}
// Find closing parenthesis around current cursor
const after = text.substring(selectionStart);
let afterParen = after.indexOf(CLOSE);
if (afterParen == -1) return false;
let afterParenOpen = after.indexOf(OPEN);
while (afterParenOpen !== -1 && afterParen > afterParenOpen) {
afterParen = after.indexOf(CLOSE, afterParen + 1);
afterParenOpen = after.indexOf(OPEN, afterParenOpen + 1);
}
if (beforeParen === -1 || afterParen === -1) return false;
// Set the selection to the text between the parenthesis
const parenContent = text.substring(beforeParen + 1, selectionStart + afterParen);
@@ -46,9 +35,14 @@ function keyupEditAttention(event) {
function selectCurrentWord() {
if (selectionStart !== selectionEnd) return false;
const delimiters = opts.keyedit_delimiters + " \r\n\t";
const whitespace_delimiters = {"Tab": "\t", "Carriage Return": "\r", "Line Feed": "\n"};
let delimiters = opts.keyedit_delimiters;
// seek backward until to find beggining
for (let i of opts.keyedit_delimiters_whitespace) {
delimiters += whitespace_delimiters[i];
}
// seek backward to find beginning
while (!delimiters.includes(text[selectionStart - 1]) && selectionStart > 0) {
selectionStart--;
}
@@ -92,7 +86,7 @@ function keyupEditAttention(event) {
}
var end = text.slice(selectionEnd + 1).indexOf(closeCharacter) + 1;
var weight = parseFloat(text.slice(selectionEnd + 1, selectionEnd + 1 + end));
var weight = parseFloat(text.slice(selectionEnd + 1, selectionEnd + end));
if (isNaN(weight)) return;
weight += isPlus ? delta : -delta;
+1 -1
@@ -33,7 +33,7 @@ function extensions_check() {
var id = randomId();
requestProgress(id, gradioApp().getElementById('extensions_installed_top'), null, function() {
requestProgress(id, gradioApp().getElementById('extensions_installed_html'), null, function() {
});
+57 -18
@@ -1,20 +1,38 @@
function toggleCss(key, css, enable) {
var style = document.getElementById(key);
if (enable && !style) {
style = document.createElement('style');
style.id = key;
style.type = 'text/css';
document.head.appendChild(style);
}
if (style && !enable) {
document.head.removeChild(style);
}
if (style) {
style.innerHTML = '';
style.appendChild(document.createTextNode(css));
}
}
function setupExtraNetworksForTab(tabname) {
gradioApp().querySelector('#' + tabname + '_extra_tabs').classList.add('extra-networks');
var tabs = gradioApp().querySelector('#' + tabname + '_extra_tabs > div');
var search = gradioApp().querySelector('#' + tabname + '_extra_search textarea');
var searchDiv = gradioApp().getElementById(tabname + '_extra_search');
var search = searchDiv.querySelector('textarea');
var sort = gradioApp().getElementById(tabname + '_extra_sort');
var sortOrder = gradioApp().getElementById(tabname + '_extra_sortorder');
var refresh = gradioApp().getElementById(tabname + '_extra_refresh');
var showDirsDiv = gradioApp().getElementById(tabname + '_extra_show_dirs');
var showDirs = gradioApp().querySelector('#' + tabname + '_extra_show_dirs input');
search.classList.add('search');
sort.classList.add('sort');
sortOrder.classList.add('sortorder');
sort.dataset.sortkey = 'sortDefault';
tabs.appendChild(search);
tabs.appendChild(searchDiv);
tabs.appendChild(sort);
tabs.appendChild(sortOrder);
tabs.appendChild(refresh);
tabs.appendChild(showDirsDiv);
var applyFilter = function() {
var searchTerm = search.value.toLowerCase();
@@ -80,6 +98,15 @@ function setupExtraNetworksForTab(tabname) {
});
extraNetworksApplyFilter[tabname] = applyFilter;
var showDirsUpdate = function() {
var css = '#' + tabname + '_extra_tabs .extra-network-subdirs { display: none; }';
toggleCss(tabname + '_extra_show_dirs_style', css, !showDirs.checked);
localSet('extra-networks-show-dirs', showDirs.checked ? 1 : 0);
};
showDirs.checked = localGet('extra-networks-show-dirs', 1) == 1;
showDirs.addEventListener("change", showDirsUpdate);
showDirsUpdate();
}
function applyExtraNetworkFilter(tabname) {
@@ -113,14 +140,15 @@ function setupExtraNetworks() {
onUiLoaded(setupExtraNetworks);
var re_extranet = /<([^:]+:[^:]+):[\d.]+>(.*)/;
var re_extranet_g = /\s+<([^:]+:[^:]+):[\d.]+>/g;
var re_extranet = /<([^:^>]+:[^:]+):[\d.]+>(.*)/;
var re_extranet_g = /<([^:^>]+:[^:]+):[\d.]+>/g;
function tryToRemoveExtraNetworkFromPrompt(textarea, text) {
var m = text.match(re_extranet);
var replaced = false;
var newTextareaText;
if (m) {
var extraTextBeforeNet = opts.extra_networks_add_text_separator;
var extraTextAfterNet = m[2];
var partToSearch = m[1];
var foundAtPosition = -1;
@@ -134,8 +162,13 @@ function tryToRemoveExtraNetworkFromPrompt(textarea, text) {
return found;
});
if (foundAtPosition >= 0 && newTextareaText.substr(foundAtPosition, extraTextAfterNet.length) == extraTextAfterNet) {
newTextareaText = newTextareaText.substr(0, foundAtPosition) + newTextareaText.substr(foundAtPosition + extraTextAfterNet.length);
if (foundAtPosition >= 0) {
if (newTextareaText.substr(foundAtPosition, extraTextAfterNet.length) == extraTextAfterNet) {
newTextareaText = newTextareaText.substr(0, foundAtPosition) + newTextareaText.substr(foundAtPosition + extraTextAfterNet.length);
}
if (newTextareaText.substr(foundAtPosition - extraTextBeforeNet.length, extraTextBeforeNet.length) == extraTextBeforeNet) {
newTextareaText = newTextareaText.substr(0, foundAtPosition - extraTextBeforeNet.length) + newTextareaText.substr(foundAtPosition);
}
}
} else {
newTextareaText = textarea.value.replaceAll(new RegExp(text, "g"), function(found) {
@@ -179,7 +212,7 @@ function saveCardPreview(event, tabname, filename) {
}
function extraNetworksSearchButton(tabs_id, event) {
var searchTextarea = gradioApp().querySelector("#" + tabs_id + ' > div > textarea');
var searchTextarea = gradioApp().querySelector("#" + tabs_id + ' > label > textarea');
var button = event.target;
var text = button.classList.contains("search-all") ? "" : button.textContent.trim();
@@ -189,27 +222,24 @@ function extraNetworksSearchButton(tabs_id, event) {
var globalPopup = null;
var globalPopupInner = null;
function closePopup() {
if (!globalPopup) return;
globalPopup.style.display = "none";
}
function popup(contents) {
if (!globalPopup) {
globalPopup = document.createElement('div');
globalPopup.onclick = closePopup;
globalPopup.classList.add('global-popup');
var close = document.createElement('div');
close.classList.add('global-popup-close');
close.onclick = closePopup;
close.addEventListener("click", closePopup);
close.title = "Close";
globalPopup.appendChild(close);
globalPopupInner = document.createElement('div');
globalPopupInner.onclick = function(event) {
event.stopPropagation(); return false;
};
globalPopupInner.classList.add('global-popup-inner');
globalPopup.appendChild(globalPopupInner);
@@ -222,6 +252,15 @@ function popup(contents) {
globalPopup.style.display = "flex";
}
var storedPopupIds = {};
function popupId(id) {
if (!storedPopupIds[id]) {
storedPopupIds[id] = gradioApp().getElementById(id);
}
popup(storedPopupIds[id]);
}
function extraNetworksShowMetadata(text) {
var elem = document.createElement('pre');
elem.classList.add('popup-metadata');
@@ -299,13 +338,13 @@ function extraNetworksEditUserMetadata(event, tabname, extraPage, cardName) {
function extraNetworksRefreshSingleCard(page, tabname, name) {
requestGet("./sd_extra_networks/get-single-card", {page: page, tabname: tabname, name: name}, function(data) {
if (data && data.html) {
var card = gradioApp().querySelector('.card[data-name=' + JSON.stringify(name) + ']'); // likely using the wrong stringify function
var card = gradioApp().querySelector(`#${tabname}_${page.replace(" ", "_")}_cards > .card[data-name="${name}"]`);
var newDiv = document.createElement('DIV');
newDiv.innerHTML = data.html;
var newCard = newDiv.firstElementChild;
newCard.style = '';
newCard.style.display = '';
card.parentElement.insertBefore(newCard, card);
card.parentElement.removeChild(card);
}
+11
@@ -190,3 +190,14 @@ onUiUpdate(function(mutationRecords) {
tooltipCheckTimer = setTimeout(processTooltipCheckNodes, 1000);
}
});
onUiLoaded(function() {
for (var comp of window.gradio_config.components) {
if (comp.props.webui_tooltip && comp.props.elem_id) {
var elem = gradioApp().getElementById(comp.props.elem_id);
if (elem) {
elem.title = comp.props.webui_tooltip;
}
}
}
});
+5
@@ -136,6 +136,11 @@ function setupImageForLightbox(e) {
var event = isFirefox ? 'mousedown' : 'click';
e.addEventListener(event, function(evt) {
if (evt.button == 1) {
open(evt.target.src);
evt.preventDefault();
return;
}
if (!opts.js_modal_lightbox || evt.button != 0) return;
modalZoomSet(gradioApp().getElementById('modalImage'), opts.js_modal_lightbox_initially_zoomed);
+37
@@ -0,0 +1,37 @@
var observerAccordionOpen = new MutationObserver(function(mutations) {
mutations.forEach(function(mutationRecord) {
var elem = mutationRecord.target;
var open = elem.classList.contains('open');
var accordion = elem.parentNode;
accordion.classList.toggle('input-accordion-open', open);
var checkbox = gradioApp().querySelector('#' + accordion.id + "-checkbox input");
checkbox.checked = open;
updateInput(checkbox);
var extra = gradioApp().querySelector('#' + accordion.id + "-extra");
if (extra) {
extra.style.display = open ? "" : "none";
}
});
});
function inputAccordionChecked(id, checked) {
var label = gradioApp().querySelector('#' + id + " .label-wrap");
if (label.classList.contains('open') != checked) {
label.click();
}
}
onUiLoaded(function() {
for (var accordion of gradioApp().querySelectorAll('.input-accordion')) {
var labelWrap = accordion.querySelector('.label-wrap');
observerAccordionOpen.observe(labelWrap, {attributes: true, attributeFilter: ['class']});
var extra = gradioApp().querySelector('#' + accordion.id + "-extra");
if (extra) {
labelWrap.insertBefore(extra, labelWrap.lastElementChild);
}
}
});
+26
@@ -0,0 +1,26 @@
function localSet(k, v) {
try {
localStorage.setItem(k, v);
} catch (e) {
console.warn(`Failed to save ${k} to localStorage: ${e}`);
}
}
function localGet(k, def) {
try {
return localStorage.getItem(k);
} catch (e) {
console.warn(`Failed to load ${k} from localStorage: ${e}`);
}
return def;
}
function localRemove(k) {
try {
return localStorage.removeItem(k);
} catch (e) {
console.warn(`Failed to remove ${k} from localStorage: ${e}`);
}
}
+36 -7
@@ -11,11 +11,11 @@ var ignore_ids_for_localization = {
train_hypernetwork: 'OPTION',
txt2img_styles: 'OPTION',
img2img_styles: 'OPTION',
setting_random_artist_categories: 'SPAN',
setting_face_restoration_model: 'SPAN',
setting_realesrgan_enabled_models: 'SPAN',
extras_upscaler_1: 'SPAN',
extras_upscaler_2: 'SPAN',
setting_random_artist_categories: 'OPTION',
setting_face_restoration_model: 'OPTION',
setting_realesrgan_enabled_models: 'OPTION',
extras_upscaler_1: 'OPTION',
extras_upscaler_2: 'OPTION',
};
var re_num = /^[.\d]+$/;
@@ -107,12 +107,41 @@ function processNode(node) {
});
}
function localizeWholePage() {
processNode(gradioApp());
function elem(comp) {
var elem_id = comp.props.elem_id ? comp.props.elem_id : "component-" + comp.id;
return gradioApp().getElementById(elem_id);
}
for (var comp of window.gradio_config.components) {
if (comp.props.webui_tooltip) {
let e = elem(comp);
let tl = e ? getTranslation(e.title) : undefined;
if (tl !== undefined) {
e.title = tl;
}
}
if (comp.props.placeholder) {
let e = elem(comp);
let textbox = e ? e.querySelector('[placeholder]') : null;
let tl = textbox ? getTranslation(textbox.placeholder) : undefined;
if (tl !== undefined) {
textbox.placeholder = tl;
}
}
}
}
function dumpTranslations() {
if (!hasLocalization()) {
// If we don't have any localization,
// we will not have traversed the app to find
// original_lines, so do that now.
processNode(gradioApp());
localizeWholePage();
}
var dumped = {};
if (localization.rtl) {
@@ -154,7 +183,7 @@ document.addEventListener("DOMContentLoaded", function() {
});
});
processNode(gradioApp());
localizeWholePage();
if (localization.rtl) { // if the language is from right to left,
(new MutationObserver((mutations, observer) => { // wait for the style to load
+1 -1
@@ -15,7 +15,7 @@ onAfterUiUpdate(function() {
}
}
const galleryPreviews = gradioApp().querySelectorAll('div[id^="tab_"][style*="display: block"] div[id$="_results"] .thumbnail-item > img');
const galleryPreviews = gradioApp().querySelectorAll('div[id^="tab_"] div[id$="_results"] .thumbnail-item > img');
if (galleryPreviews == null) return;
+38 -29
@@ -69,7 +69,6 @@ function requestProgress(id_task, progressbarContainer, gallery, atEnd, onProgre
var dateStart = new Date();
var wasEverActive = false;
var parentProgressbar = progressbarContainer.parentNode;
var parentGallery = gallery ? gallery.parentNode : null;
var divProgress = document.createElement('div');
divProgress.className = 'progressDiv';
@@ -80,32 +79,26 @@ function requestProgress(id_task, progressbarContainer, gallery, atEnd, onProgre
divProgress.appendChild(divInner);
parentProgressbar.insertBefore(divProgress, progressbarContainer);
if (parentGallery) {
var livePreview = document.createElement('div');
livePreview.className = 'livePreview';
parentGallery.insertBefore(livePreview, gallery);
}
var livePreview = null;
var removeProgressBar = function() {
if (!divProgress) return;
setTitle("");
parentProgressbar.removeChild(divProgress);
if (parentGallery) parentGallery.removeChild(livePreview);
if (gallery && livePreview) gallery.removeChild(livePreview);
atEnd();
divProgress = null;
};
var fun = function(id_task, id_live_preview) {
request("./internal/progress", {id_task: id_task, id_live_preview: id_live_preview}, function(res) {
var funProgress = function(id_task) {
request("./internal/progress", {id_task: id_task, live_preview: false}, function(res) {
if (res.completed) {
removeProgressBar();
return;
}
var rect = progressbarContainer.getBoundingClientRect();
if (rect.width) {
divProgress.style.width = rect.width + "px";
}
let progressText = "";
divInner.style.width = ((res.progress || 0) * 100.0) + '%';
@@ -119,7 +112,6 @@ function requestProgress(id_task, progressbarContainer, gallery, atEnd, onProgre
progressText += " ETA: " + formatTime(res.eta);
}
setTitle(progressText);
if (res.textinfo && res.textinfo.indexOf("\n") == -1) {
@@ -142,16 +134,33 @@ function requestProgress(id_task, progressbarContainer, gallery, atEnd, onProgre
return;
}
if (onProgress) {
onProgress(res);
}
setTimeout(() => {
funProgress(id_task, res.id_live_preview);
}, opts.live_preview_refresh_period || 500);
}, function() {
removeProgressBar();
});
};
var funLivePreview = function(id_task, id_live_preview) {
request("./internal/progress", {id_task: id_task, id_live_preview: id_live_preview}, function(res) {
if (!divProgress) {
return;
}
if (res.live_preview && gallery) {
rect = gallery.getBoundingClientRect();
if (rect.width) {
livePreview.style.width = rect.width + "px";
livePreview.style.height = rect.height + "px";
}
var img = new Image();
img.onload = function() {
if (!livePreview) {
livePreview = document.createElement('div');
livePreview.className = 'livePreview';
gallery.insertBefore(livePreview, gallery.firstElementChild);
}
livePreview.appendChild(img);
if (livePreview.childElementCount > 2) {
livePreview.removeChild(livePreview.firstElementChild);
@@ -160,18 +169,18 @@ function requestProgress(id_task, progressbarContainer, gallery, atEnd, onProgre
img.src = res.live_preview;
}
if (onProgress) {
onProgress(res);
}
setTimeout(() => {
fun(id_task, res.id_live_preview);
funLivePreview(id_task, res.id_live_preview);
}, opts.live_preview_refresh_period || 500);
}, function() {
removeProgressBar();
});
};
fun(id_task, 0);
funProgress(id_task, 0);
if (gallery) {
funLivePreview(id_task, 0);
}
}
+141
@@ -0,0 +1,141 @@
(function() {
const GRADIO_MIN_WIDTH = 320;
const GRID_TEMPLATE_COLUMNS = '1fr 16px 1fr';
const PAD = 16;
const DEBOUNCE_TIME = 100;
const R = {
tracking: false,
parent: null,
parentWidth: null,
leftCol: null,
leftColStartWidth: null,
screenX: null,
};
let resizeTimer;
let parents = [];
function setLeftColGridTemplate(el, width) {
el.style.gridTemplateColumns = `${width}px 16px 1fr`;
}
function displayResizeHandle(parent) {
if (window.innerWidth < GRADIO_MIN_WIDTH * 2 + PAD * 4) {
parent.style.display = 'flex';
if (R.handle != null) {
R.handle.style.opacity = '0';
}
return false;
} else {
parent.style.display = 'grid';
if (R.handle != null) {
R.handle.style.opacity = '100';
}
return true;
}
}
function afterResize(parent) {
if (displayResizeHandle(parent) && parent.style.gridTemplateColumns != GRID_TEMPLATE_COLUMNS) {
const oldParentWidth = R.parentWidth;
const newParentWidth = parent.offsetWidth;
const widthL = parseInt(parent.style.gridTemplateColumns.split(' ')[0]);
const ratio = newParentWidth / oldParentWidth;
const newWidthL = Math.max(Math.floor(ratio * widthL), GRADIO_MIN_WIDTH);
setLeftColGridTemplate(parent, newWidthL);
R.parentWidth = newParentWidth;
}
}
function setup(parent) {
const leftCol = parent.firstElementChild;
const rightCol = parent.lastElementChild;
parents.push(parent);
parent.style.display = 'grid';
parent.style.gap = '0';
parent.style.gridTemplateColumns = GRID_TEMPLATE_COLUMNS;
const resizeHandle = document.createElement('div');
resizeHandle.classList.add('resize-handle');
parent.insertBefore(resizeHandle, rightCol);
resizeHandle.addEventListener('mousedown', (evt) => {
if (evt.button !== 0) return;
evt.preventDefault();
evt.stopPropagation();
document.body.classList.add('resizing');
R.tracking = true;
R.parent = parent;
R.parentWidth = parent.offsetWidth;
R.handle = resizeHandle;
R.leftCol = leftCol;
R.leftColStartWidth = leftCol.offsetWidth;
R.screenX = evt.screenX;
});
resizeHandle.addEventListener('dblclick', (evt) => {
evt.preventDefault();
evt.stopPropagation();
parent.style.gridTemplateColumns = GRID_TEMPLATE_COLUMNS;
});
afterResize(parent);
}
window.addEventListener('mousemove', (evt) => {
if (evt.button !== 0) return;
if (R.tracking) {
evt.preventDefault();
evt.stopPropagation();
const delta = R.screenX - evt.screenX;
const leftColWidth = Math.max(Math.min(R.leftColStartWidth - delta, R.parent.offsetWidth - GRADIO_MIN_WIDTH - PAD), GRADIO_MIN_WIDTH);
setLeftColGridTemplate(R.parent, leftColWidth);
}
});
window.addEventListener('mouseup', (evt) => {
if (evt.button !== 0) return;
if (R.tracking) {
evt.preventDefault();
evt.stopPropagation();
R.tracking = false;
document.body.classList.remove('resizing');
}
});
window.addEventListener('resize', () => {
clearTimeout(resizeTimer);
resizeTimer = setTimeout(function() {
for (const parent of parents) {
afterResize(parent);
}
}, DEBOUNCE_TIME);
});
setupResizeHandle = setup;
})();
onUiLoaded(function() {
for (var elem of gradioApp().querySelectorAll('.resize-handle-row')) {
if (!elem.querySelector('.resize-handle')) {
setupResizeHandle(elem);
}
}
});
+46
@@ -0,0 +1,46 @@
let settingsExcludeTabsFromShowAll = {
settings_tab_defaults: 1,
settings_tab_sysinfo: 1,
settings_tab_actions: 1,
settings_tab_licenses: 1,
};
function settingsShowAllTabs() {
gradioApp().querySelectorAll('#settings > div').forEach(function(elem) {
if (settingsExcludeTabsFromShowAll[elem.id]) return;
elem.style.display = "block";
});
}
function settingsShowOneTab() {
gradioApp().querySelector('#settings_show_one_page').click();
}
onUiLoaded(function() {
var edit = gradioApp().querySelector('#settings_search');
var editTextarea = gradioApp().querySelector('#settings_search > label > input');
var buttonShowAllPages = gradioApp().getElementById('settings_show_all_pages');
var settings_tabs = gradioApp().querySelector('#settings div');
onEdit('settingsSearch', editTextarea, 250, function() {
var searchText = (editTextarea.value || "").trim().toLowerCase();
gradioApp().querySelectorAll('#settings > div[id^=settings_] div[id^=column_settings_] > *').forEach(function(elem) {
var visible = elem.textContent.trim().toLowerCase().indexOf(searchText) != -1;
elem.style.display = visible ? "" : "none";
});
if (searchText != "") {
settingsShowAllTabs();
} else {
settingsShowOneTab();
}
});
settings_tabs.insertBefore(edit, settings_tabs.firstChild);
settings_tabs.appendChild(buttonShowAllPages);
buttonShowAllPages.addEventListener("click", settingsShowAllTabs);
});
+9 -17
@@ -1,10 +1,9 @@
let promptTokenCountDebounceTime = 800;
let promptTokenCountTimeouts = {};
var promptTokenCountUpdateFunctions = {};
let promptTokenCountUpdateFunctions = {};
function update_txt2img_tokens(...args) {
// Called from Gradio
update_token_counter("txt2img_token_button");
update_token_counter("txt2img_negative_token_button");
if (args.length == 2) {
return args[0];
}
@@ -14,6 +13,7 @@ function update_txt2img_tokens(...args) {
function update_img2img_tokens(...args) {
// Called from Gradio
update_token_counter("img2img_token_button");
update_token_counter("img2img_negative_token_button");
if (args.length == 2) {
return args[0];
}
@@ -21,16 +21,7 @@ function update_img2img_tokens(...args) {
}
function update_token_counter(button_id) {
if (opts.disable_token_counters) {
return;
}
if (promptTokenCountTimeouts[button_id]) {
clearTimeout(promptTokenCountTimeouts[button_id]);
}
promptTokenCountTimeouts[button_id] = setTimeout(
() => gradioApp().getElementById(button_id)?.click(),
promptTokenCountDebounceTime,
);
promptTokenCountUpdateFunctions[button_id]?.();
}
@@ -69,10 +60,11 @@ function setupTokenCounting(id, id_counter, id_button) {
prompt.parentElement.insertBefore(counter, prompt);
prompt.parentElement.style.position = "relative";
promptTokenCountUpdateFunctions[id] = function() {
update_token_counter(id_button);
};
textarea.addEventListener("input", promptTokenCountUpdateFunctions[id]);
var func = onEdit(id, textarea, 800, function() {
gradioApp().getElementById(id_button)?.click();
});
promptTokenCountUpdateFunctions[id] = func;
promptTokenCountUpdateFunctions[id_button] = func;
}
function setupTokenCounters() {
+27 -44
@@ -19,28 +19,11 @@ function all_gallery_buttons() {
}
function selected_gallery_button() {
var allCurrentButtons = gradioApp().querySelectorAll('[style="display: block;"].tabitem div[id$=_gallery].gradio-gallery .thumbnail-item.thumbnail-small.selected');
var visibleCurrentButton = null;
allCurrentButtons.forEach(function(elem) {
if (elem.parentElement.offsetParent) {
visibleCurrentButton = elem;
}
});
return visibleCurrentButton;
return all_gallery_buttons().find(elem => elem.classList.contains('selected')) ?? null;
}
function selected_gallery_index() {
var buttons = all_gallery_buttons();
var button = selected_gallery_button();
var result = -1;
buttons.forEach(function(v, i) {
if (v == button) {
result = i;
}
});
return result;
return all_gallery_buttons().findIndex(elem => elem.classList.contains('selected'));
}
function extract_image_from_gallery(gallery) {
@@ -152,11 +135,11 @@ function submit() {
showSubmitButtons('txt2img', false);
var id = randomId();
localStorage.setItem("txt2img_task_id", id);
localSet("txt2img_task_id", id);
requestProgress(id, gradioApp().getElementById('txt2img_gallery_container'), gradioApp().getElementById('txt2img_gallery'), function() {
showSubmitButtons('txt2img', true);
localStorage.removeItem("txt2img_task_id");
localRemove("txt2img_task_id");
showRestoreProgressButton('txt2img', false);
});
@@ -171,11 +154,11 @@ function submit_img2img() {
showSubmitButtons('img2img', false);
var id = randomId();
localStorage.setItem("img2img_task_id", id);
localSet("img2img_task_id", id);
requestProgress(id, gradioApp().getElementById('img2img_gallery_container'), gradioApp().getElementById('img2img_gallery'), function() {
showSubmitButtons('img2img', true);
localStorage.removeItem("img2img_task_id");
localRemove("img2img_task_id");
showRestoreProgressButton('img2img', false);
});
@@ -189,9 +172,7 @@ function submit_img2img() {
function restoreProgressTxt2img() {
showRestoreProgressButton("txt2img", false);
var id = localStorage.getItem("txt2img_task_id");
id = localStorage.getItem("txt2img_task_id");
var id = localGet("txt2img_task_id");
if (id) {
requestProgress(id, gradioApp().getElementById('txt2img_gallery_container'), gradioApp().getElementById('txt2img_gallery'), function() {
@@ -205,7 +186,7 @@ function restoreProgressTxt2img() {
function restoreProgressImg2img() {
showRestoreProgressButton("img2img", false);
var id = localStorage.getItem("img2img_task_id");
var id = localGet("img2img_task_id");
if (id) {
requestProgress(id, gradioApp().getElementById('img2img_gallery_container'), gradioApp().getElementById('img2img_gallery'), function() {
@@ -218,8 +199,8 @@ function restoreProgressImg2img() {
onUiLoaded(function() {
showRestoreProgressButton('txt2img', localStorage.getItem("txt2img_task_id"));
showRestoreProgressButton('img2img', localStorage.getItem("img2img_task_id"));
showRestoreProgressButton('txt2img', localGet("txt2img_task_id"));
showRestoreProgressButton('img2img', localGet("img2img_task_id"));
});
@@ -282,21 +263,6 @@ onAfterUiUpdate(function() {
json_elem.parentElement.style.display = "none";
setupTokenCounters();
var show_all_pages = gradioApp().getElementById('settings_show_all_pages');
var settings_tabs = gradioApp().querySelector('#settings div');
if (show_all_pages && settings_tabs) {
settings_tabs.appendChild(show_all_pages);
show_all_pages.onclick = function() {
gradioApp().querySelectorAll('#settings > div').forEach(function(elem) {
if (elem.id == "settings_tab_licenses") {
return;
}
elem.style.display = "block";
});
};
}
});
onOptionsChanged(function() {
@@ -385,3 +351,20 @@ function switchWidthHeight(tabname) {
updateInput(height);
return [];
}
var onEditTimers = {};
// calls func after afterMs milliseconds have passed since the input elem was last edited by the user
function onEdit(editId, elem, afterMs, func) {
var edited = function() {
var existingTimer = onEditTimers[editId];
if (existingTimer) clearTimeout(existingTimer);
onEditTimers[editId] = setTimeout(func, afterMs);
};
elem.addEventListener("input", edited);
return edited;
}
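onEdit above is a keyed debounce: every input event restarts a per-id timer, so func only fires once the user has stopped typing for afterMs. A rough Python analogue of the same pattern (a hypothetical helper, not part of the webui) could look like:

import threading

_edit_timers = {}

def on_edit(edit_id, after_ms, func):
    # Restart the timer keyed by edit_id; func runs once input goes quiet.
    existing = _edit_timers.get(edit_id)
    if existing is not None:
        existing.cancel()
    timer = threading.Timer(after_ms / 1000.0, func)
    _edit_timers[edit_id] = timer
    timer.start()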
+12 -3
@@ -1,6 +1,5 @@
from modules import launch_utils
args = launch_utils.args
python = launch_utils.python
git = launch_utils.git
@@ -26,8 +25,18 @@ start = launch_utils.start
def main():
if not args.skip_prepare_environment:
prepare_environment()
if args.dump_sysinfo:
filename = launch_utils.dump_sysinfo()
print(f"Sysinfo saved as {filename}. Exiting...")
exit(0)
launch_utils.startup_timer.record("initial startup")
with launch_utils.startup_timer.subcategory("prepare environment"):
if not args.skip_prepare_environment:
prepare_environment()
if args.test_server:
configure_for_tests()
+87 -23
@@ -4,6 +4,8 @@ import os
import time
import datetime
import uvicorn
import ipaddress
import requests
import gradio as gr
from threading import Lock
from io import BytesIO
@@ -15,7 +17,7 @@ from fastapi.encoders import jsonable_encoder
from secrets import compare_digest
import modules.shared as shared
from modules import sd_samplers, deepbooru, sd_hijack, images, scripts, ui, postprocessing, errors, restart
from modules import sd_samplers, deepbooru, sd_hijack, images, scripts, ui, postprocessing, errors, restart, shared_items, script_callbacks, generation_parameters_copypaste
from modules.api import models
from modules.shared import opts
from modules.processing import StableDiffusionProcessingTxt2Img, StableDiffusionProcessingImg2Img, process_images
@@ -23,12 +25,11 @@ from modules.textual_inversion.textual_inversion import create_embedding, train_
from modules.textual_inversion.preprocess import preprocess
from modules.hypernetworks.hypernetwork import create_hypernetwork, train_hypernetwork
from PIL import PngImagePlugin,Image
from modules.sd_models import checkpoints_list, unload_model_weights, reload_model_weights, checkpoint_aliases
from modules.sd_vae import vae_dict
from modules.sd_models import unload_model_weights, reload_model_weights, checkpoint_aliases
from modules.sd_models_config import find_checkpoint_config_near_filename
from modules.realesrgan_model import get_realesrgan_models
from modules import devices
from typing import Dict, List, Any
from typing import Any
import piexif
import piexif.helper
from contextlib import closing
@@ -56,7 +57,41 @@ def setUpscalers(req: dict):
return reqDict
def verify_url(url):
"""Returns True if the url refers to a global resource."""
import socket
from urllib.parse import urlparse
try:
parsed_url = urlparse(url)
domain_name = parsed_url.netloc
host = socket.gethostbyname_ex(domain_name)
for ip in host[2]:
ip_addr = ipaddress.ip_address(ip)
if not ip_addr.is_global:
return False
except Exception:
return False
return True
def decode_base64_to_image(encoding):
if encoding.startswith("http://") or encoding.startswith("https://"):
if not opts.api_enable_requests:
raise HTTPException(status_code=500, detail="Requests not allowed")
if opts.api_forbid_local_requests and not verify_url(encoding):
raise HTTPException(status_code=500, detail="Request to local resource not allowed")
headers = {'user-agent': opts.api_useragent} if opts.api_useragent else {}
response = requests.get(encoding, timeout=30, headers=headers)
try:
image = Image.open(BytesIO(response.content))
return image
except Exception as e:
raise HTTPException(status_code=500, detail="Invalid image url") from e
if encoding.startswith("data:image/"):
encoding = encoding.split(";")[1].split(",")[1]
try:
@@ -186,17 +221,18 @@ class Api:
self.add_api_route("/sdapi/v1/options", self.get_config, methods=["GET"], response_model=models.OptionsModel)
self.add_api_route("/sdapi/v1/options", self.set_config, methods=["POST"])
self.add_api_route("/sdapi/v1/cmd-flags", self.get_cmd_flags, methods=["GET"], response_model=models.FlagsModel)
self.add_api_route("/sdapi/v1/samplers", self.get_samplers, methods=["GET"], response_model=List[models.SamplerItem])
self.add_api_route("/sdapi/v1/upscalers", self.get_upscalers, methods=["GET"], response_model=List[models.UpscalerItem])
self.add_api_route("/sdapi/v1/latent-upscale-modes", self.get_latent_upscale_modes, methods=["GET"], response_model=List[models.LatentUpscalerModeItem])
self.add_api_route("/sdapi/v1/sd-models", self.get_sd_models, methods=["GET"], response_model=List[models.SDModelItem])
self.add_api_route("/sdapi/v1/sd-vae", self.get_sd_vaes, methods=["GET"], response_model=List[models.SDVaeItem])
self.add_api_route("/sdapi/v1/hypernetworks", self.get_hypernetworks, methods=["GET"], response_model=List[models.HypernetworkItem])
self.add_api_route("/sdapi/v1/face-restorers", self.get_face_restorers, methods=["GET"], response_model=List[models.FaceRestorerItem])
self.add_api_route("/sdapi/v1/realesrgan-models", self.get_realesrgan_models, methods=["GET"], response_model=List[models.RealesrganItem])
self.add_api_route("/sdapi/v1/prompt-styles", self.get_prompt_styles, methods=["GET"], response_model=List[models.PromptStyleItem])
self.add_api_route("/sdapi/v1/samplers", self.get_samplers, methods=["GET"], response_model=list[models.SamplerItem])
self.add_api_route("/sdapi/v1/upscalers", self.get_upscalers, methods=["GET"], response_model=list[models.UpscalerItem])
self.add_api_route("/sdapi/v1/latent-upscale-modes", self.get_latent_upscale_modes, methods=["GET"], response_model=list[models.LatentUpscalerModeItem])
self.add_api_route("/sdapi/v1/sd-models", self.get_sd_models, methods=["GET"], response_model=list[models.SDModelItem])
self.add_api_route("/sdapi/v1/sd-vae", self.get_sd_vaes, methods=["GET"], response_model=list[models.SDVaeItem])
self.add_api_route("/sdapi/v1/hypernetworks", self.get_hypernetworks, methods=["GET"], response_model=list[models.HypernetworkItem])
self.add_api_route("/sdapi/v1/face-restorers", self.get_face_restorers, methods=["GET"], response_model=list[models.FaceRestorerItem])
self.add_api_route("/sdapi/v1/realesrgan-models", self.get_realesrgan_models, methods=["GET"], response_model=list[models.RealesrganItem])
self.add_api_route("/sdapi/v1/prompt-styles", self.get_prompt_styles, methods=["GET"], response_model=list[models.PromptStyleItem])
self.add_api_route("/sdapi/v1/embeddings", self.get_embeddings, methods=["GET"], response_model=models.EmbeddingsResponse)
self.add_api_route("/sdapi/v1/refresh-checkpoints", self.refresh_checkpoints, methods=["POST"])
self.add_api_route("/sdapi/v1/refresh-vae", self.refresh_vae, methods=["POST"])
self.add_api_route("/sdapi/v1/create/embedding", self.create_embedding, methods=["POST"], response_model=models.CreateResponse)
self.add_api_route("/sdapi/v1/create/hypernetwork", self.create_hypernetwork, methods=["POST"], response_model=models.CreateResponse)
self.add_api_route("/sdapi/v1/preprocess", self.preprocess, methods=["POST"], response_model=models.PreprocessResponse)
@@ -206,7 +242,8 @@ class Api:
self.add_api_route("/sdapi/v1/unload-checkpoint", self.unloadapi, methods=["POST"])
self.add_api_route("/sdapi/v1/reload-checkpoint", self.reloadapi, methods=["POST"])
self.add_api_route("/sdapi/v1/scripts", self.get_scripts_list, methods=["GET"], response_model=models.ScriptsList)
self.add_api_route("/sdapi/v1/script-info", self.get_script_info, methods=["GET"], response_model=List[models.ScriptInfo])
self.add_api_route("/sdapi/v1/script-info", self.get_script_info, methods=["GET"], response_model=list[models.ScriptInfo])
self.add_api_route("/sdapi/v1/extensions", self.get_extensions_list, methods=["GET"], response_model=list[models.ExtensionItem])
if shared.cmd_opts.api_server_stop:
self.add_api_route("/sdapi/v1/server-kill", self.kill_webui, methods=["POST"])
@@ -329,6 +366,7 @@ class Api:
with self.queue_lock:
with closing(StableDiffusionProcessingTxt2Img(sd_model=shared.sd_model, **args)) as p:
p.is_api = True
p.scripts = script_runner
p.outpath_grids = opts.outdir_txt2img_grids
p.outpath_samples = opts.outdir_txt2img_samples
@@ -343,6 +381,7 @@ class Api:
processed = process_images(p)
finally:
shared.state.end()
shared.total_tqdm.clear()
b64images = list(map(encode_pil_to_base64, processed.images)) if send_images else []
@@ -388,6 +427,7 @@ class Api:
with self.queue_lock:
with closing(StableDiffusionProcessingImg2Img(sd_model=shared.sd_model, **args)) as p:
p.init_images = [decode_base64_to_image(x) for x in init_images]
p.is_api = True
p.scripts = script_runner
p.outpath_grids = opts.outdir_img2img_grids
p.outpath_samples = opts.outdir_img2img_samples
@@ -402,6 +442,7 @@ class Api:
processed = process_images(p)
finally:
shared.state.end()
shared.total_tqdm.clear()
b64images = list(map(encode_pil_to_base64, processed.images)) if send_images else []
@@ -433,9 +474,6 @@ class Api:
return models.ExtrasBatchImagesResponse(images=list(map(encode_pil_to_base64, result[0])), html_info=result[1])
def pnginfoapi(self, req: models.PNGInfoRequest):
if(not req.image.strip()):
return models.PNGInfoResponse(info="")
image = decode_base64_to_image(req.image.strip())
if image is None:
return models.PNGInfoResponse(info="")
@@ -444,9 +482,10 @@ class Api:
if geninfo is None:
geninfo = ""
items = {**{'parameters': geninfo}, **items}
params = generation_parameters_copypaste.parse_generation_parameters(geninfo)
script_callbacks.infotext_pasted_callback(geninfo, params)
return models.PNGInfoResponse(info=geninfo, items=items)
return models.PNGInfoResponse(info=geninfo, items=items, parameters=params)
def progressapi(self, req: models.ProgressRequest = Depends()):
# copy from check_progress_call of ui.py
@@ -524,13 +563,13 @@ class Api:
return options
def set_config(self, req: Dict[str, Any]):
def set_config(self, req: dict[str, Any]):
checkpoint_name = req.get("sd_model_checkpoint", None)
if checkpoint_name is not None and checkpoint_name not in checkpoint_aliases:
raise RuntimeError(f"model {checkpoint_name!r} not found")
for k, v in req.items():
shared.opts.set(k, v)
shared.opts.set(k, v, is_api=True)
shared.opts.save(shared.config_filename)
return
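The new is_api=True argument lets shared.opts.set distinguish writes coming from the API path. As a usage sketch, options can be changed over HTTP against a server started with --api (host and checkpoint name below are hypothetical):

import requests

requests.post(
    "http://127.0.0.1:7860/sdapi/v1/options",
    json={"sd_model_checkpoint": "some-checkpoint.safetensors"},  # hypothetical name
    timeout=60,
).raise_for_status()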
@@ -562,10 +601,12 @@ class Api:
]
def get_sd_models(self):
return [{"title": x.title, "model_name": x.model_name, "hash": x.shorthash, "sha256": x.sha256, "filename": x.filename, "config": find_checkpoint_config_near_filename(x)} for x in checkpoints_list.values()]
import modules.sd_models as sd_models
return [{"title": x.title, "model_name": x.model_name, "hash": x.shorthash, "sha256": x.sha256, "filename": x.filename, "config": find_checkpoint_config_near_filename(x)} for x in sd_models.checkpoints_list.values()]
def get_sd_vaes(self):
return [{"model_name": x, "filename": vae_dict[x]} for x in vae_dict.keys()]
import modules.sd_vae as sd_vae
return [{"model_name": x, "filename": sd_vae.vae_dict[x]} for x in sd_vae.vae_dict.keys()]
def get_hypernetworks(self):
return [{"name": name, "path": shared.hypernetworks[name]} for name in shared.hypernetworks]
@@ -608,6 +649,10 @@ class Api:
with self.queue_lock:
shared.refresh_checkpoints()
def refresh_vae(self):
with self.queue_lock:
shared_items.refresh_vae_list()
def create_embedding(self, args: dict):
try:
shared.state.begin(job="create_embedding")
@@ -724,6 +769,25 @@ class Api:
cuda = {'error': f'{err}'}
return models.MemoryResponse(ram=ram, cuda=cuda)
def get_extensions_list(self):
from modules import extensions
extensions.list_extensions()
ext_list = []
for ext in extensions.extensions:
ext: extensions.Extension
ext.read_info_from_repo()
if ext.remote is not None:
ext_list.append({
"name": ext.name,
"remote": ext.remote,
"branch": ext.branch,
"commit_hash":ext.commit_hash,
"commit_date":ext.commit_date,
"version":ext.version,
"enabled":ext.enabled
})
return ext_list
def launch(self, server_name, port, root_path):
self.app.include_router(self.router)
uvicorn.run(self.app, host=server_name, port=port, timeout_keep_alive=shared.cmd_opts.timeout_keep_alive, root_path=root_path)
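The new /sdapi/v1/extensions route returns one ExtensionItem per installed extension that has a remote. A quick client-side sketch, assuming a local server started with --api:

import requests

resp = requests.get("http://127.0.0.1:7860/sdapi/v1/extensions", timeout=30)
resp.raise_for_status()
for ext in resp.json():
    state = "enabled" if ext["enabled"] else "disabled"
    print(f'{ext["name"]} @ {ext["branch"]} ({state})')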
+27 -18
@@ -1,12 +1,10 @@
import inspect
from pydantic import BaseModel, Field, create_model
from typing import Any, Optional
from typing_extensions import Literal
from typing import Any, Optional, Literal
from inflection import underscore
from modules.processing import StableDiffusionProcessingTxt2Img, StableDiffusionProcessingImg2Img
from modules.shared import sd_upscalers, opts, parser
from typing import Dict, List
API_NOT_ALLOWED = [
"self",
@@ -50,10 +48,12 @@ class PydanticModelGenerator:
additional_fields = None,
):
def field_type_generator(k, v):
# field_type = str if not overrides.get(k) else overrides[k]["type"]
# print(k, v.annotation, v.default)
field_type = v.annotation
if field_type == 'Image':
# images are sent as base64 strings via API
field_type = 'str'
return Optional[field_type]
def merge_class_params(class_):
@@ -63,7 +63,6 @@ class PydanticModelGenerator:
parameters = {**parameters, **inspect.signature(classes.__init__).parameters}
return parameters
self._model_name = model_name
self._class_data = merge_class_params(class_instance)
@@ -72,7 +71,7 @@ class PydanticModelGenerator:
field=underscore(k),
field_alias=k,
field_type=field_type_generator(k, v),
field_value=v.default
field_value=None if isinstance(v.default, property) else v.default
)
for (k,v) in self._class_data.items() if k not in API_NOT_ALLOWED
]
@@ -129,12 +128,12 @@ StableDiffusionImg2ImgProcessingAPI = PydanticModelGenerator(
).generate_model()
class TextToImageResponse(BaseModel):
images: List[str] = Field(default=None, title="Image", description="The generated image in base64 format.")
images: list[str] = Field(default=None, title="Image", description="The generated image in base64 format.")
parameters: dict
info: str
class ImageToImageResponse(BaseModel):
images: List[str] = Field(default=None, title="Image", description="The generated image in base64 format.")
images: list[str] = Field(default=None, title="Image", description="The generated image in base64 format.")
parameters: dict
info: str
@@ -167,17 +166,18 @@ class FileData(BaseModel):
name: str = Field(title="File name")
class ExtrasBatchImagesRequest(ExtrasBaseRequest):
imageList: List[FileData] = Field(title="Images", description="List of images to work on. Must be Base64 strings")
imageList: list[FileData] = Field(title="Images", description="List of images to work on. Must be Base64 strings")
class ExtrasBatchImagesResponse(ExtraBaseResponse):
images: List[str] = Field(title="Images", description="The generated images in base64 format.")
images: list[str] = Field(title="Images", description="The generated images in base64 format.")
class PNGInfoRequest(BaseModel):
image: str = Field(title="Image", description="The base64 encoded PNG image")
class PNGInfoResponse(BaseModel):
info: str = Field(title="Image info", description="A string with the parameters used to generate the image")
items: dict = Field(title="Items", description="An object containing all the info the image had")
items: dict = Field(title="Items", description="A dictionary containing all the other fields the image had")
parameters: dict = Field(title="Parameters", description="A dictionary with parsed generation info fields")
class ProgressRequest(BaseModel):
skip_current_image: bool = Field(default=False, title="Skip current image", description="Skip current image serialization")
@@ -231,8 +231,8 @@ FlagsModel = create_model("Flags", **flags)
class SamplerItem(BaseModel):
name: str = Field(title="Name")
aliases: List[str] = Field(title="Aliases")
options: Dict[str, str] = Field(title="Options")
aliases: list[str] = Field(title="Aliases")
options: dict[str, str] = Field(title="Options")
class UpscalerItem(BaseModel):
name: str = Field(title="Name")
@@ -283,8 +283,8 @@ class EmbeddingItem(BaseModel):
vectors: int = Field(title="Vectors", description="The number of vectors in the embedding")
class EmbeddingsResponse(BaseModel):
loaded: Dict[str, EmbeddingItem] = Field(title="Loaded", description="Embeddings loaded for the current model")
skipped: Dict[str, EmbeddingItem] = Field(title="Skipped", description="Embeddings skipped for the current model (likely due to architecture incompatibility)")
loaded: dict[str, EmbeddingItem] = Field(title="Loaded", description="Embeddings loaded for the current model")
skipped: dict[str, EmbeddingItem] = Field(title="Skipped", description="Embeddings skipped for the current model (likely due to architecture incompatibility)")
class MemoryResponse(BaseModel):
ram: dict = Field(title="RAM", description="System memory stats")
@@ -302,11 +302,20 @@ class ScriptArg(BaseModel):
minimum: Optional[Any] = Field(default=None, title="Minimum", description="Minimum allowed value for the argument in UI")
maximum: Optional[Any] = Field(default=None, title="Maximum", description="Maximum allowed value for the argument in UI")
step: Optional[Any] = Field(default=None, title="Step", description="Step for changing value of the argument in UI")
choices: Optional[List[str]] = Field(default=None, title="Choices", description="Possible values for the argument")
choices: Optional[list[str]] = Field(default=None, title="Choices", description="Possible values for the argument")
class ScriptInfo(BaseModel):
name: str = Field(default=None, title="Name", description="Script name")
is_alwayson: bool = Field(default=None, title="IsAlwayson", description="Flag specifying whether this script is an alwayson script")
is_img2img: bool = Field(default=None, title="IsImg2img", description="Flag specifying whether this script is an img2img script")
args: List[ScriptArg] = Field(title="Arguments", description="List of script's arguments")
args: list[ScriptArg] = Field(title="Arguments", description="List of script's arguments")
class ExtensionItem(BaseModel):
name: str = Field(title="Name", description="Extension name")
remote: str = Field(title="Remote", description="Extension Repository URL")
branch: str = Field(title="Branch", description="Extension Repository Branch")
commit_hash: str = Field(title="Commit Hash", description="Extension Repository Commit Hash")
version: str = Field(title="Version", description="Extension Version")
commit_date: str = Field(title="Commit Date", description="Extension Repository Commit Date")
enabled: bool = Field(title="Enabled", description="Flag specifying whether this extension is enabled")
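The changes in this file swap typing.List and typing.Dict for the builtin generics available since Python 3.9 (PEP 585); Pydantic accepts both spellings interchangeably. A self-contained illustration:

from pydantic import BaseModel, Field

class DemoResponse(BaseModel):
    # builtin generics, no `from typing import List, Dict` needed on 3.9+
    images: list[str] = Field(default_factory=list)
    options: dict[str, str] = Field(default_factory=dict)

print(DemoResponse(images=["a"], options={"k": "v"}))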
+6 -2
@@ -1,11 +1,12 @@
import json
import os
import os.path
import threading
import time
from modules.paths import data_path, script_path
cache_filename = os.path.join(data_path, "cache.json")
cache_filename = os.environ.get('SD_WEBUI_CACHE_FILE', os.path.join(data_path, "cache.json"))
cache_data = None
cache_lock = threading.Lock()
@@ -29,9 +30,12 @@ def dump_cache():
time.sleep(1)
with cache_lock:
with open(cache_filename, "w", encoding="utf8") as file:
cache_filename_tmp = cache_filename + "-"
with open(cache_filename_tmp, "w", encoding="utf8") as file:
json.dump(cache_data, file, indent=4)
os.replace(cache_filename_tmp, cache_filename)
dump_cache_after = None
dump_cache_thread = None
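The change above makes the cache dump crash-safe: data is written to a sibling temp file and then swapped in with os.replace, which is atomic on the same filesystem, so a reader never observes a truncated cache.json. The same pattern in isolation:

import json
import os

def atomic_json_dump(data, filename):
    tmp = filename + "-"  # sibling temp file on the same volume
    with open(tmp, "w", encoding="utf8") as file:
        json.dump(data, file, indent=4)
    os.replace(tmp, filename)  # atomic swap into place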
+4 -3
@@ -1,11 +1,10 @@
from functools import wraps
import html
import threading
import time
from modules import shared, progress, errors
from modules import shared, progress, errors, devices, fifo_lock
queue_lock = threading.Lock()
queue_lock = fifo_lock.FIFOLock()
def wrap_queued_call(func):
@@ -75,6 +74,8 @@ def wrap_gradio_call(func, extra_outputs=None, add_stats=False):
error_message = f'{type(e).__name__}: {e}'
res = extra_outputs_array + [f"<div class='error'>{html.escape(error_message)}</div>"]
devices.torch_gc()
shared.state.skipped = False
shared.state.interrupted = False
shared.state.job_count = 0
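Replacing threading.Lock with fifo_lock.FIFOLock makes queued requests acquire the lock in arrival order rather than whatever order the OS happens to wake waiters. The webui's actual implementation is not shown in this diff; the following is only a minimal sketch of the FIFO-fairness idea:

import collections
import threading

class FifoLockSketch:
    """Illustrative only, not modules.fifo_lock: waiters acquire in arrival order."""
    def __init__(self):
        self._cond = threading.Condition()
        self._waiters = collections.deque()
        self._held = False

    def acquire(self):
        with self._cond:
            ticket = object()
            self._waiters.append(ticket)
            # Wait until the lock is free and this ticket is first in line.
            while self._held or self._waiters[0] is not ticket:
                self._cond.wait()
            self._waiters.popleft()
            self._held = True

    def release(self):
        with self._cond:
            self._held = False
            self._cond.notify_all()

    def __enter__(self):
        self.acquire()
        return self

    def __exit__(self, *exc):
        self.release()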
+12 -4
@@ -13,8 +13,11 @@ parser.add_argument("--reinstall-xformers", action='store_true', help="launch.py
parser.add_argument("--reinstall-torch", action='store_true', help="launch.py argument: install the appropriate version of torch even if you have some version already installed")
parser.add_argument("--update-check", action='store_true', help="launch.py argument: check for updates at startup")
parser.add_argument("--test-server", action='store_true', help="launch.py argument: configure server for testing")
parser.add_argument("--log-startup", action='store_true', help="launch.py argument: print a detailed log of what's happening at startup")
parser.add_argument("--skip-prepare-environment", action='store_true', help="launch.py argument: skip all environment preparation")
parser.add_argument("--skip-install", action='store_true', help="launch.py argument: skip installation of packages")
parser.add_argument("--dump-sysinfo", action='store_true', help="launch.py argument: dump limited sysinfo file (without information about extensions, options) to disk and quit")
parser.add_argument("--loglevel", type=str, help="log level; one of: CRITICAL, ERROR, WARNING, INFO, DEBUG", default=None)
parser.add_argument("--do-not-download-clip", action='store_true', help="do not download CLIP model even if it's not included in the checkpoint")
parser.add_argument("--data-dir", type=str, default=os.path.dirname(os.path.dirname(os.path.realpath(__file__))), help="base path where all user data is stored")
parser.add_argument("--config", type=str, default=sd_default_config, help="path to config which constructs model",)
@@ -33,9 +36,10 @@ parser.add_argument("--hypernetwork-dir", type=str, default=os.path.join(models_
parser.add_argument("--localizations-dir", type=str, default=os.path.join(script_path, 'localizations'), help="localizations directory")
parser.add_argument("--allow-code", action='store_true', help="allow custom script execution from webui")
parser.add_argument("--medvram", action='store_true', help="enable stable diffusion model optimizations for sacrificing a little speed for low VRM usage")
parser.add_argument("--medvram-sdxl", action='store_true', help="enable --medvram optimization just for SDXL models")
parser.add_argument("--lowvram", action='store_true', help="enable stable diffusion model optimizations for sacrificing a lot of speed for very low VRM usage")
parser.add_argument("--lowram", action='store_true', help="load stable diffusion checkpoint weights to VRAM instead of RAM")
parser.add_argument("--always-batch-cond-uncond", action='store_true', help="disables cond/uncond batching that is enabled to save memory with --medvram or --lowvram")
parser.add_argument("--always-batch-cond-uncond", action='store_true', help="does not do anything")
parser.add_argument("--unload-gfpgan", action='store_true', help="does not do anything.")
parser.add_argument("--precision", type=str, help="evaluate at this precision", choices=["full", "autocast"], default="autocast")
parser.add_argument("--upcast-sampling", action='store_true', help="upcast sampling. No effect with --no-half. Usually produces similar results to --no-half with better performance while using less memory.")
@@ -66,6 +70,7 @@ parser.add_argument("--opt-sdp-no-mem-attention", action='store_true', help="pre
parser.add_argument("--disable-opt-split-attention", action='store_true', help="prefer no cross-attention layer optimization for automatic choice of optimization")
parser.add_argument("--disable-nan-check", action='store_true', help="do not check if produced images/latent spaces have nans; useful for running without a checkpoint in CI")
parser.add_argument("--use-cpu", nargs='+', help="use CPU as torch device for specified modules", default=[], type=str.lower)
parser.add_argument("--disable-model-loading-ram-optimization", action='store_true', help="disable an optimization that reduces RAM use when loading a model")
parser.add_argument("--listen", action='store_true', help="launch gradio with 0.0.0.0 as server name, allowing to respond to network requests")
parser.add_argument("--port", type=int, help="launch gradio with given server port, you need root/admin rights for ports < 1024, defaults to 7860 if available", default=None)
parser.add_argument("--show-negative-prompt", action='store_true', help="does not do anything", default=False)
@@ -78,14 +83,14 @@ parser.add_argument("--gradio-auth", type=str, help='set gradio authentication l
parser.add_argument("--gradio-auth-path", type=str, help='set gradio authentication file path ex. "/path/to/auth/file" same auth format as --gradio-auth', default=None)
parser.add_argument("--gradio-img2img-tool", type=str, help='does not do anything')
parser.add_argument("--gradio-inpaint-tool", type=str, help="does not do anything")
parser.add_argument("--gradio-allowed-path", action='append', help="add path to gradio's allowed_paths, make it possible to serve files from it")
parser.add_argument("--gradio-allowed-path", action='append', help="add path to gradio's allowed_paths, make it possible to serve files from it", default=[data_path])
parser.add_argument("--opt-channelslast", action='store_true', help="change memory type for stable diffusion to channels last")
parser.add_argument("--styles-file", type=str, help="filename to use for styles", default=os.path.join(data_path, 'styles.csv'))
parser.add_argument("--autolaunch", action='store_true', help="open the webui URL in the system's default browser upon launch", default=False)
parser.add_argument("--theme", type=str, help="launches the UI with light or dark theme", default=None)
parser.add_argument("--use-textbox-seed", action='store_true', help="use textbox for seeds in UI (no up/down, but possible to input long seeds)", default=False)
parser.add_argument("--disable-console-progressbars", action='store_true', help="do not output progressbars to console", default=False)
parser.add_argument("--enable-console-prompts", action='store_true', help="print prompts to console when generating with txt2img and img2img", default=False)
parser.add_argument("--enable-console-prompts", action='store_true', help="does not do anything", default=False) # Legacy compatibility, use as default value shared.opts.enable_console_prompts
parser.add_argument('--vae-path', type=str, help='Checkpoint to use as VAE; setting this argument disables all settings related to VAE', default=None)
parser.add_argument("--disable-safe-unpickle", action='store_true', help="disable checking pytorch models for malicious code", default=False)
parser.add_argument("--api", action='store_true', help="use api=True to launch the API together with the webui (use --nowebui instead for only the API)")
@@ -107,6 +112,9 @@ parser.add_argument("--skip-version-check", action='store_true', help="Do not ch
parser.add_argument("--no-hashing", action='store_true', help="disable sha256 hashing of checkpoints to help loading performance", default=False)
parser.add_argument("--no-download-sd-model", action='store_true', help="don't download SD1.5 model even if no model is found in --ckpt-dir", default=False)
parser.add_argument('--subpath', type=str, help='customize the subpath for gradio, use with reverse proxy')
parser.add_argument('--add-stop-route', action='store_true', help='add /_stop route to stop server')
parser.add_argument('--add-stop-route', action='store_true', help='does not do anything')
parser.add_argument('--api-server-stop', action='store_true', help='enable server stop/restart/kill via api')
parser.add_argument('--timeout-keep-alive', type=int, default=30, help='set timeout_keep_alive for uvicorn')
parser.add_argument("--disable-all-extensions", action='store_true', help="prevent all extensions from running regardless of any other settings", default=False)
parser.add_argument("--disable-extra-extensions", action='store_true', help="prevent all extensions except built-in from running regardless of any other settings", default=False)
parser.add_argument("--skip-load-model-at-start", action='store_true', help="if load a model at web start, only take effect when --nowebui", )
+10 -9
@@ -4,18 +4,15 @@ Supports saving and restoring webui and extensions from a known working set of c
import os
import json
import time
import tqdm
from datetime import datetime
from collections import OrderedDict
import git
from modules import shared, extensions, errors
from modules.paths_internal import script_path, config_states_dir
all_config_states = OrderedDict()
all_config_states = {}
def list_config_states():
@@ -28,15 +25,19 @@ def list_config_states():
for filename in os.listdir(config_states_dir):
if filename.endswith(".json"):
path = os.path.join(config_states_dir, filename)
with open(path, "r", encoding="utf-8") as f:
j = json.load(f)
j["filepath"] = path
config_states.append(j)
try:
with open(path, "r", encoding="utf-8") as f:
j = json.load(f)
assert "created_at" in j, '"created_at" does not exist'
j["filepath"] = path
config_states.append(j)
except Exception as e:
print(f'[ERROR]: Config states {path}, {e}')
config_states = sorted(config_states, key=lambda cs: cs["created_at"], reverse=True)
for cs in config_states:
timestamp = time.asctime(time.gmtime(cs["created_at"]))
timestamp = datetime.fromtimestamp(cs["created_at"]).strftime('%Y-%m-%d %H:%M:%S')
name = cs.get("name", "Config")
full_name = f"{name}: {timestamp}"
all_config_states[full_name] = cs
+14 -31
@@ -3,7 +3,7 @@ import contextlib
from functools import lru_cache
import torch
from modules import errors
from modules import errors, shared
if sys.platform == "darwin":
from modules import mac_specific
@@ -17,8 +17,6 @@ def has_mps() -> bool:
def get_cuda_device_string():
from modules import shared
if shared.cmd_opts.device_id is not None:
return f"cuda:{shared.cmd_opts.device_id}"
@@ -40,8 +38,6 @@ def get_optimal_device():
def get_device_for(task):
from modules import shared
if task in shared.cmd_opts.use_cpu:
return cpu
@@ -64,21 +60,25 @@ def enable_tf32():
# enabling benchmark option seems to enable a range of cards to do fp16 when they otherwise can't
# see https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4407
if any(torch.cuda.get_device_capability(devid) == (7, 5) for devid in range(0, torch.cuda.device_count())):
device_id = (int(shared.cmd_opts.device_id) if shared.cmd_opts.device_id is not None and shared.cmd_opts.device_id.isdigit() else 0) or torch.cuda.current_device()
if torch.cuda.get_device_capability(device_id) == (7, 5) and torch.cuda.get_device_name(device_id).startswith("NVIDIA GeForce GTX 16"):
torch.backends.cudnn.benchmark = True
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
errors.run(enable_tf32, "Enabling TF32")
cpu = torch.device("cpu")
device = device_interrogate = device_gfpgan = device_esrgan = device_codeformer = None
dtype = torch.float16
dtype_vae = torch.float16
dtype_unet = torch.float16
cpu: torch.device = torch.device("cpu")
device: torch.device = None
device_interrogate: torch.device = None
device_gfpgan: torch.device = None
device_esrgan: torch.device = None
device_codeformer: torch.device = None
dtype: torch.dtype = torch.float16
dtype_vae: torch.dtype = torch.float16
dtype_unet: torch.dtype = torch.float16
unet_needs_upcast = False
@@ -90,26 +90,10 @@ def cond_cast_float(input):
return input.float() if unet_needs_upcast else input
def randn(seed, shape):
from modules.shared import opts
torch.manual_seed(seed)
if opts.randn_source == "CPU" or device.type == 'mps':
return torch.randn(shape, device=cpu).to(device)
return torch.randn(shape, device=device)
def randn_without_seed(shape):
from modules.shared import opts
if opts.randn_source == "CPU" or device.type == 'mps':
return torch.randn(shape, device=cpu).to(device)
return torch.randn(shape, device=device)
nv_rng = None
def autocast(disable=False):
from modules import shared
if disable:
return contextlib.nullcontext()
@@ -128,8 +112,6 @@ class NansException(Exception):
def test_for_nans(x, where):
from modules import shared
if shared.cmd_opts.disable_nan_check:
return
@@ -169,3 +151,4 @@ def first_time_calculation():
x = torch.zeros((1, 1, 3, 3)).to(device, dtype)
conv2d = torch.nn.Conv2d(1, 1, (3, 3)).to(device, dtype)
conv2d(x)
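The randn helpers above no longer re-import shared inside each call, and they keep the CPU-generation path: seeding once and drawing noise on the CPU before moving it to the device makes results reproducible across backends (MPS in particular). A self-contained sketch, with randn_source standing in for opts.randn_source:

import torch

cpu = torch.device("cpu")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def randn(seed, shape, randn_source="GPU"):
    torch.manual_seed(seed)
    if randn_source == "CPU" or device.type == "mps":
        # generate on CPU, then move: identical noise regardless of backend
        return torch.randn(shape, device=cpu).to(device)
    return torch.randn(shape, device=device)

noise = randn(42, (1, 4, 64, 64))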
+52 -1
@@ -14,7 +14,8 @@ def record_exception():
if exception_records and exception_records[-1] == e:
return
exception_records.append((e, tb))
from modules import sysinfo
exception_records.append(sysinfo.format_exception(e, tb))
if len(exception_records) > 5:
exception_records.pop(0)
@@ -83,3 +84,53 @@ def run(code, task):
code()
except Exception as e:
display(task, e)
def check_versions():
from packaging import version
from modules import shared
import torch
import gradio
expected_torch_version = "2.0.0"
expected_xformers_version = "0.0.20"
expected_gradio_version = "3.41.2"
if version.parse(torch.__version__) < version.parse(expected_torch_version):
print_error_explanation(f"""
You are running torch {torch.__version__}.
The program is tested to work with torch {expected_torch_version}.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded. There are also
reports of issues with the training tab on the latest version.
Use --skip-version-check commandline argument to disable this check.
""".strip())
if shared.xformers_available:
import xformers
if version.parse(xformers.__version__) < version.parse(expected_xformers_version):
print_error_explanation(f"""
You are running xformers {xformers.__version__}.
The program is tested to work with xformers {expected_xformers_version}.
To reinstall the desired version, run with commandline flag --reinstall-xformers.
Use --skip-version-check commandline argument to disable this check.
""".strip())
if gradio.__version__ != expected_gradio_version:
print_error_explanation(f"""
You are running gradio {gradio.__version__}.
The program is designed to work with gradio {expected_gradio_version}.
Using a different version of gradio is extremely likely to break the program.
Possible reasons for a mismatched gradio version:
- you use the --skip-install flag.
- you use webui.py to start the program instead of launch.py.
- an extension installs an incompatible gradio version.
Use --skip-version-check commandline argument to disable this check.
""".strip())
+8 -6
@@ -1,7 +1,7 @@
import os
import threading
from modules import shared, errors, cache
from modules import shared, errors, cache, scripts
from modules.gitpython_hack import Repo
from modules.paths_internal import extensions_dir, extensions_builtin_dir, script_path # noqa: F401
@@ -11,9 +11,9 @@ os.makedirs(extensions_dir, exist_ok=True)
def active():
if shared.opts.disable_all_extensions == "all":
if shared.cmd_opts.disable_all_extensions or shared.opts.disable_all_extensions == "all":
return []
elif shared.opts.disable_all_extensions == "extra":
elif shared.cmd_opts.disable_extra_extensions or shared.opts.disable_all_extensions == "extra":
return [x for x in extensions if x.enabled and x.is_builtin]
else:
return [x for x in extensions if x.enabled]
@@ -90,8 +90,6 @@ class Extension:
self.have_info_from_repo = True
def list_files(self, subdir, extension):
from modules import scripts
dirpath = os.path.join(self.path, subdir)
if not os.path.isdir(dirpath):
return []
@@ -141,8 +139,12 @@ def list_extensions():
if not os.path.isdir(extensions_dir):
return
if shared.opts.disable_all_extensions == "all":
if shared.cmd_opts.disable_all_extensions:
print("*** \"--disable-all-extensions\" arg was used, will not load any extensions ***")
elif shared.opts.disable_all_extensions == "all":
print("*** \"Disable all extensions\" option was set, will not load any extensions ***")
elif shared.cmd_opts.disable_extra_extensions:
print("*** \"--disable-extra-extensions\" arg was used, will only load built-in extensions ***")
elif shared.opts.disable_all_extensions == "extra":
print("*** \"Disable all extensions\" option was set, will only load built-in extensions ***")
+62 -17
@@ -1,4 +1,7 @@
import json
import os
import re
import logging
from collections import defaultdict
from modules import errors
@@ -84,27 +87,55 @@ class ExtraNetwork:
raise NotImplementedError
def lookup_extra_networks(extra_network_data):
"""returns a dict mapping ExtraNetwork objects to lists of arguments for those extra networks.
Example input:
{
'lora': [<modules.extra_networks.ExtraNetworkParams object at 0x0000020690D58310>],
'lyco': [<modules.extra_networks.ExtraNetworkParams object at 0x0000020690D58F70>],
'hypernet': [<modules.extra_networks.ExtraNetworkParams object at 0x0000020690D5A800>]
}
Example output:
{
<extra_networks_lora.ExtraNetworkLora object at 0x0000020581BEECE0>: [<modules.extra_networks.ExtraNetworkParams object at 0x0000020690D58310>, <modules.extra_networks.ExtraNetworkParams object at 0x0000020690D58F70>],
<modules.extra_networks_hypernet.ExtraNetworkHypernet object at 0x0000020581BEEE60>: [<modules.extra_networks.ExtraNetworkParams object at 0x0000020690D5A800>]
}
"""
res = {}
for extra_network_name, extra_network_args in list(extra_network_data.items()):
extra_network = extra_network_registry.get(extra_network_name, None)
alias = extra_network_aliases.get(extra_network_name, None)
if alias is not None and extra_network is None:
extra_network = alias
if extra_network is None:
logging.info(f"Skipping unknown extra network: {extra_network_name}")
continue
res.setdefault(extra_network, []).extend(extra_network_args)
return res
def activate(p, extra_network_data):
"""call activate for extra networks in extra_network_data in specified order, then call
activate for all remaining registered networks with an empty argument list"""
activated = []
for extra_network_name, extra_network_args in extra_network_data.items():
extra_network = extra_network_registry.get(extra_network_name, None)
if extra_network is None:
extra_network = extra_network_aliases.get(extra_network_name, None)
if extra_network is None:
print(f"Skipping unknown extra network: {extra_network_name}")
continue
for extra_network, extra_network_args in lookup_extra_networks(extra_network_data).items():
try:
extra_network.activate(p, extra_network_args)
activated.append(extra_network)
except Exception as e:
errors.display(e, f"activating extra network {extra_network_name} with arguments {extra_network_args}")
errors.display(e, f"activating extra network {extra_network.name} with arguments {extra_network_args}")
for extra_network_name, extra_network in extra_network_registry.items():
if extra_network in activated:
@@ -123,19 +154,16 @@ def deactivate(p, extra_network_data):
"""call deactivate for extra networks in extra_network_data in specified order, then call
deactivate for all remaining registered networks"""
for extra_network_name in extra_network_data:
extra_network = extra_network_registry.get(extra_network_name, None)
if extra_network is None:
continue
data = lookup_extra_networks(extra_network_data)
for extra_network in data:
try:
extra_network.deactivate(p)
except Exception as e:
errors.display(e, f"deactivating extra network {extra_network_name}")
errors.display(e, f"deactivating extra network {extra_network.name}")
for extra_network_name, extra_network in extra_network_registry.items():
args = extra_network_data.get(extra_network_name, None)
if args is not None:
if extra_network in data:
continue
try:
@@ -177,3 +205,20 @@ def parse_prompts(prompts):
return res, extra_data
def get_user_metadata(filename):
if filename is None:
return {}
basename, ext = os.path.splitext(filename)
metadata_filename = basename + '.json'
metadata = {}
try:
if os.path.isfile(metadata_filename):
with open(metadata_filename, "r", encoding="utf8") as file:
metadata = json.load(file)
except Exception as e:
errors.display(e, f"reading extra network user metadata from {metadata_filename}")
return metadata
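lookup_extra_networks() above folds prompt-level names and their aliases onto handler objects, merging argument lists per handler with dict.setdefault. A toy version with string placeholders standing in for the handler objects:

registry = {"lora": "lora_handler", "hypernet": "hypernet_handler"}
aliases = {"lyco": "lora_handler"}  # alias resolves to the same handler

def lookup(extra_network_data):
    res = {}
    for name, args in extra_network_data.items():
        handler = registry.get(name) or aliases.get(name)
        if handler is None:
            print(f"Skipping unknown extra network: {name}")
            continue
        res.setdefault(handler, []).extend(args)
    return res

print(lookup({"lora": ["a"], "lyco": ["b"], "mystery": ["c"]}))
# -> {'lora_handler': ['a', 'b']}  (plus a "Skipping unknown" message)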
+33 -6
@@ -7,7 +7,7 @@ import json
import torch
import tqdm
from modules import shared, images, sd_models, sd_vae, sd_models_config
from modules import shared, images, sd_models, sd_vae, sd_models_config, errors
from modules.ui_common import plaintext_to_html
import gradio as gr
import safetensors.torch
@@ -72,7 +72,20 @@ def to_half(tensor, enable):
return tensor
def run_modelmerger(id_task, primary_model_name, secondary_model_name, tertiary_model_name, interp_method, multiplier, save_as_half, custom_name, checkpoint_format, config_source, bake_in_vae, discard_weights, save_metadata):
def read_metadata(primary_model_name, secondary_model_name, tertiary_model_name):
metadata = {}
for checkpoint_name in [primary_model_name, secondary_model_name, tertiary_model_name]:
checkpoint_info = sd_models.checkpoints_list.get(checkpoint_name, None)
if checkpoint_info is None:
continue
metadata.update(checkpoint_info.metadata)
return json.dumps(metadata, indent=4, ensure_ascii=False)
def run_modelmerger(id_task, primary_model_name, secondary_model_name, tertiary_model_name, interp_method, multiplier, save_as_half, custom_name, checkpoint_format, config_source, bake_in_vae, discard_weights, save_metadata, add_merge_recipe, copy_metadata_fields, metadata_json):
shared.state.begin(job="model-merge")
def fail(message):
@@ -241,11 +254,25 @@ def run_modelmerger(id_task, primary_model_name, secondary_model_name, tertiary_
shared.state.textinfo = "Saving"
print(f"Saving to {output_modelname}...")
metadata = None
metadata = {}
if save_metadata and copy_metadata_fields:
if primary_model_info:
metadata.update(primary_model_info.metadata)
if secondary_model_info:
metadata.update(secondary_model_info.metadata)
if tertiary_model_info:
metadata.update(tertiary_model_info.metadata)
if save_metadata:
metadata = {"format": "pt"}
try:
metadata.update(json.loads(metadata_json))
except Exception as e:
errors.display(e, "readin metadata from json")
metadata["format"] = "pt"
if save_metadata and add_merge_recipe:
merge_recipe = {
"type": "webui", # indicate this model was merged with webui's built-in merger
"primary_model_hash": primary_model_info.sha256,
@@ -261,7 +288,6 @@ def run_modelmerger(id_task, primary_model_name, secondary_model_name, tertiary_
"is_inpainting": result_is_inpainting_model,
"is_instruct_pix2pix": result_is_instruct_pix2pix_model
}
metadata["sd_merge_recipe"] = json.dumps(merge_recipe)
sd_merge_models = {}
@@ -281,11 +307,12 @@ def run_modelmerger(id_task, primary_model_name, secondary_model_name, tertiary_
if tertiary_model_info:
add_model_metadata(tertiary_model_info)
metadata["sd_merge_recipe"] = json.dumps(merge_recipe)
metadata["sd_merge_models"] = json.dumps(sd_merge_models)
_, extension = os.path.splitext(output_modelname)
if extension.lower() == ".safetensors":
safetensors.torch.save_file(theta_0, output_modelname, metadata=metadata)
safetensors.torch.save_file(theta_0, output_modelname, metadata=metadata if len(metadata)>0 else None)
else:
torch.save(theta_0, output_modelname)
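The merger now builds metadata unconditionally and only attaches it when non-empty. Note that safetensors metadata must be a flat str-to-str dict, which is why sd_merge_recipe and sd_merge_models are json.dumps()'d above. A minimal sketch of the save call (tensor contents invented):

import json
import torch
import safetensors.torch

theta_0 = {"model.weight": torch.zeros(2, 2)}
metadata = {"format": "pt", "sd_merge_recipe": json.dumps({"type": "webui"})}

safetensors.torch.save_file(theta_0, "merged.safetensors", metadata=metadata if len(metadata) > 0 else None)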
+37
@@ -0,0 +1,37 @@
import threading
import collections
# reference: https://gist.github.com/vitaliyp/6d54dd76ca2c3cdfc1149d33007dc34a
class FIFOLock(object):
def __init__(self):
self._lock = threading.Lock()
self._inner_lock = threading.Lock()
self._pending_threads = collections.deque()
def acquire(self, blocking=True):
with self._inner_lock:
lock_acquired = self._lock.acquire(False)
if lock_acquired:
return True
elif not blocking:
return False
release_event = threading.Event()
self._pending_threads.append(release_event)
release_event.wait()
return self._lock.acquire()
def release(self):
with self._inner_lock:
if self._pending_threads:
release_event = self._pending_threads.popleft()
release_event.set()
self._lock.release()
__enter__ = acquire
def __exit__(self, t, v, tb):
self.release()
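Usage sketch for the FIFOLock above: because __enter__ is acquire and __exit__ releases, it drops into a with statement like a normal lock, but pending threads are served in arrival order rather than at the scheduler's whim:

import threading

lock = FIFOLock()  # the class defined above

def worker(i):
    with lock:
        print(f"worker {i} holds the lock")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()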
+28 -22
@@ -6,10 +6,10 @@ import re
import gradio as gr
from modules.paths import data_path
from modules import shared, ui_tempdir, script_callbacks
from modules import shared, ui_tempdir, script_callbacks, processing
from PIL import Image
re_param_code = r'\s*([\w ]+):\s*("(?:\\"[^,]|\\"|\\|[^\"])+"|[^,]*)(?:,|$)'
re_param_code = r'\s*(\w[\w \-/]+):\s*("(?:\\.|[^\\"])+"|[^,]*)(?:,|$)'
re_param = re.compile(re_param_code)
re_imagesize = re.compile(r"^(\d+)x(\d+)$")
re_hypernet_hash = re.compile("\(([0-9a-f]+)\)$")
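The widened re_param_code (the regression fix referenced in the commit list) accepts field names containing spaces, hyphens and slashes, and handles escaped quotes in values. A quick check:

import re

re_param = re.compile(r'\s*(\w[\w \-/]+):\s*("(?:\\.|[^\\"])+"|[^,]*)(?:,|$)')
line = 'Steps: 20, Sampler: Euler a, Size-1: 512, Schedule max sigma: 14.6'
print(re_param.findall(line))
# [('Steps', '20'), ('Sampler', 'Euler a'), ('Size-1', '512'), ('Schedule max sigma', '14.6')]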
@@ -32,6 +32,7 @@ class ParamBinding:
def reset():
paste_fields.clear()
registered_param_bindings.clear()
def quote(text):
@@ -198,7 +199,6 @@ def restore_old_hires_fix_params(res):
height = int(res.get("Size-2", 512))
if firstpass_width == 0 or firstpass_height == 0:
from modules import processing
firstpass_width, firstpass_height = processing.old_hires_fix_first_pass_dimensions(width, height)
res['Size-1'] = firstpass_width
@@ -280,6 +280,9 @@ Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 965400086, Size: 512x512, Model
if "Hires sampler" not in res:
res["Hires sampler"] = "Use same sampler"
if "Hires checkpoint" not in res:
res["Hires checkpoint"] = "Use same checkpoint"
if "Hires prompt" not in res:
res["Hires prompt"] = ""
@@ -304,32 +307,28 @@ Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 965400086, Size: 512x512, Model
if "Schedule rho" not in res:
res["Schedule rho"] = 0
if "VAE Encoder" not in res:
res["VAE Encoder"] = "Full"
if "VAE Decoder" not in res:
res["VAE Decoder"] = "Full"
return res
infotext_to_setting_name_mapping = [
('Clip skip', 'CLIP_stop_at_last_layers', ),
]
"""Mapping of infotext labels to setting names. Only left for backwards compatibility - use OptionInfo(..., infotext='...') instead.
Example content:
infotext_to_setting_name_mapping = [
('Conditional mask weight', 'inpainting_mask_weight'),
('Model hash', 'sd_model_checkpoint'),
('ENSD', 'eta_noise_seed_delta'),
('Schedule type', 'k_sched_type'),
('Schedule max sigma', 'sigma_max'),
('Schedule min sigma', 'sigma_min'),
('Schedule rho', 'rho'),
('Noise multiplier', 'initial_noise_multiplier'),
('Eta', 'eta_ancestral'),
('Eta DDIM', 'eta_ddim'),
('Discard penultimate sigma', 'always_discard_next_to_last_sigma'),
('UniPC variant', 'uni_pc_variant'),
('UniPC skip type', 'uni_pc_skip_type'),
('UniPC order', 'uni_pc_order'),
('UniPC lower order final', 'uni_pc_lower_order_final'),
('Token merging ratio', 'token_merging_ratio'),
('Token merging ratio hr', 'token_merging_ratio_hr'),
('RNG', 'randn_source'),
('NGMS', 's_min_uncond'),
('Pad conds', 'pad_cond_uncond'),
]
"""
def create_override_settings_dict(text_pairs):
@@ -350,7 +349,8 @@ def create_override_settings_dict(text_pairs):
params[k] = v.strip()
for param_name, setting_name in infotext_to_setting_name_mapping:
mapping = [(info.infotext, k) for k, info in shared.opts.data_labels.items() if info.infotext]
for param_name, setting_name in mapping + infotext_to_setting_name_mapping:
value = params.get(param_name, None)
if value is None:
@@ -399,10 +399,16 @@ def connect_paste(button, paste_fields, input_comp, override_settings_component,
return res
if override_settings_component is not None:
already_handled_fields = {key: 1 for _, key in paste_fields}
def paste_settings(params):
vals = {}
for param_name, setting_name in infotext_to_setting_name_mapping:
mapping = [(info.infotext, k) for k, info in shared.opts.data_labels.items() if info.infotext]
for param_name, setting_name in mapping + infotext_to_setting_name_mapping:
if param_name in already_handled_fields:
continue
v = params.get(param_name, None)
if v is None:
continue
+1 -1
@@ -23,7 +23,7 @@ class Git(git.Git):
)
return self._parse_object_header(ret)
def stream_object_data(self, ref: str) -> tuple[str, str, int, "Git.CatFileContentStream"]:
def stream_object_data(self, ref: str) -> tuple[str, str, int, Git.CatFileContentStream]:
# Not really streaming, per se; this buffers the entire object in memory.
# Shouldn't be a problem for our use case, since we're only using this for
# object headers (commit objects).
+73
@@ -0,0 +1,73 @@
import gradio as gr
from modules import scripts, ui_tempdir, patches
def add_classes_to_gradio_component(comp):
"""
this adds gradio-* classes to the component for CSS styling (e.g. gradio-button for gr.Button), as well as some others
"""
comp.elem_classes = [f"gradio-{comp.get_block_name()}", *(comp.elem_classes or [])]
if getattr(comp, 'multiselect', False):
comp.elem_classes.append('multiselect')
def IOComponent_init(self, *args, **kwargs):
self.webui_tooltip = kwargs.pop('tooltip', None)
if scripts.scripts_current is not None:
scripts.scripts_current.before_component(self, **kwargs)
scripts.script_callbacks.before_component_callback(self, **kwargs)
res = original_IOComponent_init(self, *args, **kwargs)
add_classes_to_gradio_component(self)
scripts.script_callbacks.after_component_callback(self, **kwargs)
if scripts.scripts_current is not None:
scripts.scripts_current.after_component(self, **kwargs)
return res
def Block_get_config(self):
config = original_Block_get_config(self)
webui_tooltip = getattr(self, 'webui_tooltip', None)
if webui_tooltip:
config["webui_tooltip"] = webui_tooltip
config.pop('example_inputs', None)
return config
def BlockContext_init(self, *args, **kwargs):
res = original_BlockContext_init(self, *args, **kwargs)
add_classes_to_gradio_component(self)
return res
def Blocks_get_config_file(self, *args, **kwargs):
config = original_Blocks_get_config_file(self, *args, **kwargs)
for comp_config in config["components"]:
if "example_inputs" in comp_config:
comp_config["example_inputs"] = {"serialized": []}
return config
original_IOComponent_init = patches.patch(__name__, obj=gr.components.IOComponent, field="__init__", replacement=IOComponent_init)
original_Block_get_config = patches.patch(__name__, obj=gr.blocks.Block, field="get_config", replacement=Block_get_config)
original_BlockContext_init = patches.patch(__name__, obj=gr.blocks.BlockContext, field="__init__", replacement=BlockContext_init)
original_Blocks_get_config_file = patches.patch(__name__, obj=gr.blocks.Blocks, field="get_config_file", replacement=Blocks_get_config_file)
ui_tempdir.install_ui_tempdir_override()
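The module above centralizes webui's gradio monkeypatches through modules.patches, which returns the original callable so each wrapper can delegate to it. Stripped of the webui specifics, the pattern is just this (Greeter is an illustrative stand-in, not a gradio class):

class Greeter:
    def greet(self):
        return "hello"

original_greet = Greeter.greet  # keep the original before replacing it

def patched_greet(self):
    return original_greet(self) + ", patched"

Greeter.greet = patched_greet
print(Greeter().greet())  # hello, patched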
+4 -5
@@ -10,7 +10,7 @@ import torch
import tqdm
from einops import rearrange, repeat
from ldm.util import default
from modules import devices, processing, sd_models, shared, sd_samplers, hashes, sd_hijack_checkpoint, errors
from modules import devices, sd_models, shared, sd_samplers, hashes, sd_hijack_checkpoint, errors
from modules.textual_inversion import textual_inversion, logging
from modules.textual_inversion.learn_schedule import LearnRateScheduler
from torch import einsum
@@ -468,9 +468,8 @@ def create_hypernetwork(name, enable_sizes, overwrite_old, layer_structure=None,
shared.reload_hypernetworks()
def train_hypernetwork(id_task, hypernetwork_name, learn_rate, batch_size, gradient_step, data_root, log_directory, training_width, training_height, varsize, steps, clip_grad_mode, clip_grad_value, shuffle_tags, tag_drop_out, latent_sampling_method, use_weight, create_image_every, save_hypernetwork_every, template_filename, preview_from_txt2img, preview_prompt, preview_negative_prompt, preview_steps, preview_sampler_index, preview_cfg_scale, preview_seed, preview_width, preview_height):
# images allows training previews to have infotext. Importing it at the top causes a circular import problem.
from modules import images
def train_hypernetwork(id_task, hypernetwork_name: str, learn_rate: float, batch_size: int, gradient_step: int, data_root: str, log_directory: str, training_width: int, training_height: int, varsize: bool, steps: int, clip_grad_mode: str, clip_grad_value: float, shuffle_tags: bool, tag_drop_out: bool, latent_sampling_method: str, use_weight: bool, create_image_every: int, save_hypernetwork_every: int, template_filename: str, preview_from_txt2img: bool, preview_prompt: str, preview_negative_prompt: str, preview_steps: int, preview_sampler_name: str, preview_cfg_scale: float, preview_seed: int, preview_width: int, preview_height: int):
from modules import images, processing
save_hypernetwork_every = save_hypernetwork_every or 0
create_image_every = create_image_every or 0
@@ -699,7 +698,7 @@ def train_hypernetwork(id_task, hypernetwork_name, learn_rate, batch_size, gradi
p.prompt = preview_prompt
p.negative_prompt = preview_negative_prompt
p.steps = preview_steps
p.sampler_name = sd_samplers.samplers[preview_sampler_index].name
p.sampler_name = sd_samplers.samplers_map[preview_sampler_name.lower()]
p.cfg_scale = preview_cfg_scale
p.seed = preview_seed
p.width = preview_width
+52 -17
@@ -21,8 +21,6 @@ from modules import sd_samplers, shared, script_callbacks, errors
from modules.paths_internal import roboto_ttf_file
from modules.shared import opts
import modules.sd_vae as sd_vae
LANCZOS = (Image.Resampling.LANCZOS if hasattr(Image, 'Resampling') else Image.LANCZOS)
@@ -318,7 +316,7 @@ def resize_image(resize_mode, im, width, height, upscaler_name=None):
return res
invalid_filename_chars = '<>:"/\\|?*\n'
invalid_filename_chars = '<>:"/\\|?*\n\r\t'
invalid_filename_prefix = ' '
invalid_filename_postfix = ' .'
re_nonletters = re.compile(r'[\s' + string.punctuation + ']+')
@@ -342,16 +340,6 @@ def sanitize_filename_part(text, replace_spaces=True):
class FilenameGenerator:
def get_vae_filename(self): #get the name of the VAE file.
if sd_vae.loaded_vae_file is None:
return "NoneType"
file_name = os.path.basename(sd_vae.loaded_vae_file)
split_file_name = file_name.split('.')
if len(split_file_name) > 1 and split_file_name[0] == '':
return split_file_name[1] # if the first character of the filename is "." then [1] is obtained.
else:
return split_file_name[0]
replacements = {
'seed': lambda self: self.seed if self.seed is not None else '',
'seed_first': lambda self: self.seed if self.p.batch_size == 1 else self.p.all_seeds[0],
@@ -367,7 +355,9 @@ class FilenameGenerator:
'date': lambda self: datetime.datetime.now().strftime('%Y-%m-%d'),
'datetime': lambda self, *args: self.datetime(*args), # accepts formats: [datetime], [datetime<Format>], [datetime<Format><Time Zone>]
'job_timestamp': lambda self: getattr(self.p, "job_timestamp", shared.state.job_timestamp),
'prompt_hash': lambda self: hashlib.sha256(self.prompt.encode()).hexdigest()[0:8],
'prompt_hash': lambda self, *args: self.string_hash(self.prompt, *args),
'negative_prompt_hash': lambda self, *args: self.string_hash(self.p.negative_prompt, *args),
'full_prompt_hash': lambda self, *args: self.string_hash(f"{self.p.prompt} {self.p.negative_prompt}", *args), # a space in between to create a unique string
'prompt': lambda self: sanitize_filename_part(self.prompt),
'prompt_no_styles': lambda self: self.prompt_no_style(),
'prompt_spaces': lambda self: sanitize_filename_part(self.prompt, replace_spaces=False),
@@ -380,7 +370,8 @@ class FilenameGenerator:
'denoising': lambda self: self.p.denoising_strength if self.p and self.p.denoising_strength else NOTHING_AND_SKIP_PREVIOUS_TEXT,
'user': lambda self: self.p.user,
'vae_filename': lambda self: self.get_vae_filename(),
'none': lambda self: '', # Overrides the default so you can get just the sequence number
'none': lambda self: '', # Overrides the default, so you can get just the sequence number
'image_hash': lambda self, *args: self.image_hash(*args) # accepts formats: [image_hash<length>] default full hash
}
default_time_format = '%Y%m%d%H%M%S'
@@ -391,6 +382,22 @@ class FilenameGenerator:
self.image = image
self.zip = zip
def get_vae_filename(self):
"""Get the name of the VAE file."""
import modules.sd_vae as sd_vae
if sd_vae.loaded_vae_file is None:
return "NoneType"
file_name = os.path.basename(sd_vae.loaded_vae_file)
split_file_name = file_name.split('.')
if len(split_file_name) > 1 and split_file_name[0] == '':
return split_file_name[1] # filenames starting with "." split to an empty first element; use the next one
else:
return split_file_name[0]
def hasprompt(self, *args):
lower = self.prompt.lower()
if self.p is None or self.prompt is None:
@@ -444,6 +451,14 @@ class FilenameGenerator:
return sanitize_filename_part(formatted_time, replace_spaces=False)
def image_hash(self, *args):
length = int(args[0]) if (args and args[0] != "") else None
return hashlib.sha256(self.image.tobytes()).hexdigest()[0:length]
def string_hash(self, text, *args):
length = int(args[0]) if (args and args[0] != "") else 8
return hashlib.sha256(text.encode()).hexdigest()[0:length]
def apply(self, x):
res = ''
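The new [prompt_hash<length>], [negative_prompt_hash<length>] and [full_prompt_hash<length>] patterns above all reduce to the string_hash() helper: a truncated SHA-256, 8 hex characters by default. Standalone:

import hashlib

def string_hash(text, length=8):
    return hashlib.sha256(text.encode()).hexdigest()[:length]

print(string_hash("a cat sitting on a mat"))      # default 8-char prefix
print(string_hash("a cat sitting on a mat", 16))  # [prompt_hash<16>]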
@@ -546,6 +561,8 @@ def save_image_with_geninfo(image, geninfo, filename, extension=None, existing_p
})
piexif.insert(exif_bytes, filename)
elif extension.lower() == ".gif":
image.save(filename, format=image_format, comment=geninfo)
else:
image.save(filename, format=image_format, quality=opts.jpeg_quality)
@@ -585,6 +602,11 @@ def save_image(image, path, basename, seed=None, prompt=None, extension='png', i
"""
namegen = FilenameGenerator(p, seed, prompt, image)
# WebP and JPG formats have maximum dimension limits of 16383 and 65535 respectively; switch to PNG, which has a much higher limit
if (image.height > 65535 or image.width > 65535) and extension.lower() in ("jpg", "jpeg") or (image.height > 16383 or image.width > 16383) and extension.lower() == "webp":
print('Image dimensions too large; saving as PNG')
extension = ".png"
if save_to_dirs is None:
save_to_dirs = (grid and opts.grid_save_to_dirs) or (not grid and opts.save_to_dirs and not no_prompt)
@@ -641,7 +663,13 @@ def save_image(image, path, basename, seed=None, prompt=None, extension='png', i
save_image_with_geninfo(image_to_save, info, temp_file_path, extension, existing_pnginfo=params.pnginfo, pnginfo_section_name=pnginfo_section_name)
os.replace(temp_file_path, filename_without_extension + extension)
filename = filename_without_extension + extension
if shared.opts.save_images_replace_action != "Replace":
n = 0
while os.path.exists(filename):
n += 1
filename = f"{filename_without_extension}-{n}{extension}"
os.replace(temp_file_path, filename)
fullfn_without_extension, extension = os.path.splitext(params.filename)
if hasattr(os, 'statvfs'):
@@ -698,7 +726,12 @@ def read_info_from_image(image: Image.Image) -> tuple[str | None, dict]:
geninfo = items.pop('parameters', None)
if "exif" in items:
exif = piexif.load(items["exif"])
exif_data = items["exif"]
try:
exif = piexif.load(exif_data)
except OSError:
# in-memory exif data was not valid, so piexif tried to read it from a file
exif = None
exif_comment = (exif or {}).get("Exif", {}).get(piexif.ExifIFD.UserComment, b'')
try:
exif_comment = piexif.helper.UserComment.load(exif_comment)
@@ -708,6 +741,8 @@ def read_info_from_image(image: Image.Image) -> tuple[str | None, dict]:
if exif_comment:
items['exif comment'] = exif_comment
geninfo = exif_comment
elif "comment" in items: # for gif
geninfo = items["comment"].decode('utf8', errors="ignore")
for field in IGNORED_INFO_KEYS:
items.pop(field, None)
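The .gif branches above rely on Pillow's GIF comment support: save_image_with_geninfo writes the infotext as the comment, and read_info_from_image gets it back as bytes from image.info, hence the decode with errors="ignore". A round-trip sketch, assuming a recent Pillow (file name invented):

from PIL import Image

im = Image.new("RGB", (8, 8))
im.save("probe.gif", format="GIF", comment="Steps: 20, Sampler: Euler a")

reread = Image.open("probe.gif")
geninfo = reread.info.get("comment", b"").decode("utf8", errors="ignore")
print(geninfo)  # Steps: 20, Sampler: Euler a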
+33 -43
@@ -3,14 +3,14 @@ from contextlib import closing
from pathlib import Path
import numpy as np
from PIL import Image, ImageOps, ImageFilter, ImageEnhance, ImageChops, UnidentifiedImageError
from PIL import Image, ImageOps, ImageFilter, ImageEnhance, UnidentifiedImageError
import gradio as gr
from modules import sd_samplers, images as imgutil
from modules import images as imgutil
from modules.generation_parameters_copypaste import create_override_settings_dict, parse_generation_parameters
from modules.processing import Processed, StableDiffusionProcessingImg2Img, process_images
from modules.shared import opts, state
from modules.images import save_image
from modules.sd_models import get_closet_checkpoint_match
import modules.shared as shared
import modules.processing as processing
from modules.ui import plaintext_to_html
@@ -18,9 +18,10 @@ import modules.scripts
def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=False, scale_by=1.0, use_png_info=False, png_info_props=None, png_info_dir=None):
output_dir = output_dir.strip()
processing.fix_seed(p)
images = list(shared.walk_files(input_dir, allowed_extensions=(".png", ".jpg", ".jpeg", ".webp")))
images = list(shared.walk_files(input_dir, allowed_extensions=(".png", ".jpg", ".jpeg", ".webp", ".tif", ".tiff")))
is_inpaint_batch = False
if inpaint_mask_dir:
@@ -32,11 +33,6 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=Fal
print(f"Will process {len(images)} images, creating {p.n_iter * p.batch_size} new images for each.")
save_normally = output_dir == ''
p.do_not_save_grid = True
p.do_not_save_samples = not save_normally
state.job_count = len(images) * p.n_iter
# extract "default" params to use in case getting png info fails
@@ -46,7 +42,8 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=Fal
cfg_scale = p.cfg_scale
sampler_name = p.sampler_name
steps = p.steps
override_settings = p.override_settings
sd_model_checkpoint_override = get_closet_checkpoint_match(override_settings.get("sd_model_checkpoint", None))
for i, image in enumerate(images):
state.job = f"{i+1} out of {len(images)}"
if state.skipped:
@@ -109,42 +106,44 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=Fal
p.sampler_name = parsed_parameters.get("Sampler", sampler_name)
p.steps = int(parsed_parameters.get("Steps", steps))
model_info = get_closet_checkpoint_match(parsed_parameters.get("Model hash", None))
if model_info is not None:
p.override_settings['sd_model_checkpoint'] = model_info.name
elif sd_model_checkpoint_override:
p.override_settings['sd_model_checkpoint'] = sd_model_checkpoint_override
else:
p.override_settings.pop("sd_model_checkpoint", None)
if output_dir:
p.outpath_samples = output_dir
p.override_settings['save_to_dirs'] = False
p.override_settings['save_images_replace_action'] = "Add number suffix"
if p.n_iter > 1 or p.batch_size > 1:
p.override_settings['samples_filename_pattern'] = f'{image_path.stem}-[generation_number]'
else:
p.override_settings['samples_filename_pattern'] = f'{image_path.stem}'
proc = modules.scripts.scripts_img2img.run(p, *args)
if proc is None:
proc = process_images(p)
for n, processed_image in enumerate(proc.images):
filename = image_path.stem
infotext = proc.infotext(p, n)
relpath = os.path.dirname(os.path.relpath(image, input_dir))
if n > 0:
filename += f"-{n}"
if not save_normally:
os.makedirs(os.path.join(output_dir, relpath), exist_ok=True)
if processed_image.mode == 'RGBA':
processed_image = processed_image.convert("RGB")
save_image(processed_image, os.path.join(output_dir, relpath), None, extension=opts.samples_format, info=infotext, forced_filename=filename, save_to_dirs=False)
p.override_settings.pop('save_images_replace_action', None)
process_images(p)
def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_styles, init_img, sketch, init_img_with_mask, inpaint_color_sketch, inpaint_color_sketch_orig, init_img_inpaint, init_mask_inpaint, steps: int, sampler_index: int, mask_blur: int, mask_alpha: float, inpainting_fill: int, restore_faces: bool, tiling: bool, n_iter: int, batch_size: int, cfg_scale: float, image_cfg_scale: float, denoising_strength: float, seed: int, subseed: int, subseed_strength: float, seed_resize_from_h: int, seed_resize_from_w: int, seed_enable_extras: bool, selected_scale_tab: int, height: int, width: int, scale_by: float, resize_mode: int, inpaint_full_res: bool, inpaint_full_res_padding: int, inpainting_mask_invert: int, img2img_batch_input_dir: str, img2img_batch_output_dir: str, img2img_batch_inpaint_mask_dir: str, override_settings_texts, img2img_batch_use_png_info: bool, img2img_batch_png_info_props: list, img2img_batch_png_info_dir: str, request: gr.Request, *args):
def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_styles, init_img, sketch, init_img_with_mask, inpaint_color_sketch, inpaint_color_sketch_orig, init_img_inpaint, init_mask_inpaint, steps: int, sampler_name: str, mask_blur: int, mask_alpha: float, inpainting_fill: int, n_iter: int, batch_size: int, cfg_scale: float, image_cfg_scale: float, denoising_strength: float, selected_scale_tab: int, height: int, width: int, scale_by: float, resize_mode: int, inpaint_full_res: bool, inpaint_full_res_padding: int, inpainting_mask_invert: int, img2img_batch_input_dir: str, img2img_batch_output_dir: str, img2img_batch_inpaint_mask_dir: str, override_settings_texts, img2img_batch_use_png_info: bool, img2img_batch_png_info_props: list, img2img_batch_png_info_dir: str, request: gr.Request, *args):
override_settings = create_override_settings_dict(override_settings_texts)
is_batch = mode == 5
if mode == 0: # img2img
image = init_img.convert("RGB")
image = init_img
mask = None
elif mode == 1: # img2img sketch
image = sketch.convert("RGB")
image = sketch
mask = None
elif mode == 2: # inpaint
image, mask = init_img_with_mask["image"], init_img_with_mask["mask"]
alpha_mask = ImageOps.invert(image.split()[-1]).convert('L').point(lambda x: 255 if x > 0 else 0, mode='1')
mask = mask.convert('L').point(lambda x: 255 if x > 128 else 0, mode='1')
mask = ImageChops.lighter(alpha_mask, mask).convert('L')
image = image.convert("RGB")
mask = processing.create_binary_mask(mask)
elif mode == 3: # inpaint sketch
image = inpaint_color_sketch
orig = inpaint_color_sketch_orig or inpaint_color_sketch
@@ -153,7 +152,6 @@ def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_s
mask = ImageEnhance.Brightness(mask).enhance(1 - mask_alpha / 100)
blur = ImageFilter.GaussianBlur(mask_blur)
image = Image.composite(image.filter(blur), orig, mask.filter(blur))
image = image.convert("RGB")
elif mode == 4: # inpaint upload mask
image = init_img_inpaint
mask = init_mask_inpaint
@@ -180,21 +178,13 @@ def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_s
prompt=prompt,
negative_prompt=negative_prompt,
styles=prompt_styles,
seed=seed,
subseed=subseed,
subseed_strength=subseed_strength,
seed_resize_from_h=seed_resize_from_h,
seed_resize_from_w=seed_resize_from_w,
seed_enable_extras=seed_enable_extras,
sampler_name=sd_samplers.samplers_for_img2img[sampler_index].name,
sampler_name=sampler_name,
batch_size=batch_size,
n_iter=n_iter,
steps=steps,
cfg_scale=cfg_scale,
width=width,
height=height,
restore_faces=restore_faces,
tiling=tiling,
init_images=[image],
mask=mask,
mask_blur=mask_blur,
@@ -213,7 +203,7 @@ def img2img(id_task: str, mode: int, prompt: str, negative_prompt: str, prompt_s
p.user = request.username
if shared.cmd_opts.enable_console_prompts:
if shared.opts.enable_console_prompts:
print(f"\nimg2img: {prompt}", file=shared.progress_print_out)
if mask:
+168
@@ -0,0 +1,168 @@
import importlib
import logging
import sys
import warnings
from threading import Thread
from modules.timer import startup_timer
def imports():
logging.getLogger("torch.distributed.nn").setLevel(logging.ERROR) # sshh...
logging.getLogger("xformers").addFilter(lambda record: 'A matching Triton is not available' not in record.getMessage())
import torch # noqa: F401
startup_timer.record("import torch")
import pytorch_lightning # noqa: F401
startup_timer.record("import torch")
warnings.filterwarnings(action="ignore", category=DeprecationWarning, module="pytorch_lightning")
warnings.filterwarnings(action="ignore", category=UserWarning, module="torchvision")
import gradio # noqa: F401
startup_timer.record("import gradio")
from modules import paths, timer, import_hook, errors # noqa: F401
startup_timer.record("setup paths")
import ldm.modules.encoders.modules # noqa: F401
startup_timer.record("import ldm")
import sgm.modules.encoders.modules # noqa: F401
startup_timer.record("import sgm")
from modules import shared_init
shared_init.initialize()
startup_timer.record("initialize shared")
from modules import processing, gradio_extensons, ui # noqa: F401
startup_timer.record("other imports")
def check_versions():
from modules.shared_cmd_options import cmd_opts
if not cmd_opts.skip_version_check:
from modules import errors
errors.check_versions()
def initialize():
from modules import initialize_util
initialize_util.fix_torch_version()
initialize_util.fix_asyncio_event_loop_policy()
initialize_util.validate_tls_options()
initialize_util.configure_sigint_handler()
initialize_util.configure_opts_onchange()
from modules import modelloader
modelloader.cleanup_models()
from modules import sd_models
sd_models.setup_model()
startup_timer.record("setup SD model")
from modules.shared_cmd_options import cmd_opts
from modules import codeformer_model
warnings.filterwarnings(action="ignore", category=UserWarning, module="torchvision.transforms.functional_tensor")
codeformer_model.setup_model(cmd_opts.codeformer_models_path)
startup_timer.record("setup codeformer")
from modules import gfpgan_model
gfpgan_model.setup_model(cmd_opts.gfpgan_models_path)
startup_timer.record("setup gfpgan")
initialize_rest(reload_script_modules=False)
def initialize_rest(*, reload_script_modules=False):
"""
Called both from initialize() and when reloading the webui.
"""
from modules.shared_cmd_options import cmd_opts
from modules import sd_samplers
sd_samplers.set_samplers()
startup_timer.record("set samplers")
from modules import extensions
extensions.list_extensions()
startup_timer.record("list extensions")
from modules import initialize_util
initialize_util.restore_config_state_file()
startup_timer.record("restore config state file")
from modules import shared, upscaler, scripts
if cmd_opts.ui_debug_mode:
shared.sd_upscalers = upscaler.UpscalerLanczos().scalers
scripts.load_scripts()
return
from modules import sd_models
sd_models.list_models()
startup_timer.record("list SD models")
from modules import localization
localization.list_localizations(cmd_opts.localizations_dir)
startup_timer.record("list localizations")
with startup_timer.subcategory("load scripts"):
scripts.load_scripts()
if reload_script_modules:
for module in [module for name, module in sys.modules.items() if name.startswith("modules.ui")]:
importlib.reload(module)
startup_timer.record("reload script modules")
from modules import modelloader
modelloader.load_upscalers()
startup_timer.record("load upscalers")
from modules import sd_vae
sd_vae.refresh_vae_list()
startup_timer.record("refresh VAE")
from modules import textual_inversion
textual_inversion.textual_inversion.list_textual_inversion_templates()
startup_timer.record("refresh textual inversion templates")
from modules import script_callbacks, sd_hijack_optimizations, sd_hijack
script_callbacks.on_list_optimizers(sd_hijack_optimizations.list_optimizers)
sd_hijack.list_optimizers()
startup_timer.record("scripts list_optimizers")
from modules import sd_unet
sd_unet.list_unets()
startup_timer.record("scripts list_unets")
def load_model():
"""
Accesses shared.sd_model property to load model.
After it's available, if it has been loaded before this access by some extension,
its optimization may be None because the list of optimizers has not been filled
by that time, so we apply optimization again.
"""
shared.sd_model # noqa: B018
if sd_hijack.current_optimizer is None:
sd_hijack.apply_optimizations()
from modules import devices
devices.first_time_calculation()
if not shared.cmd_opts.skip_load_model_at_start:
Thread(target=load_model).start()
from modules import shared_items
shared_items.reload_hypernetworks()
startup_timer.record("reload hypernetworks")
from modules import ui_extra_networks
ui_extra_networks.initialize()
ui_extra_networks.register_default_pages()
from modules import extra_networks
extra_networks.initialize()
extra_networks.register_default_extra_networks()
startup_timer.record("initialize extra networks")
+202
@@ -0,0 +1,202 @@
import json
import os
import signal
import sys
import re
from modules.timer import startup_timer
def gradio_server_name():
from modules.shared_cmd_options import cmd_opts
if cmd_opts.server_name:
return cmd_opts.server_name
else:
return "0.0.0.0" if cmd_opts.listen else None
def fix_torch_version():
import torch
# Truncate version number of nightly/local build of PyTorch to not cause exceptions with CodeFormer or Safetensors
if ".dev" in torch.__version__ or "+git" in torch.__version__:
torch.__long_version__ = torch.__version__
torch.__version__ = re.search(r'[\d.]+[\d]', torch.__version__).group(0)
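fix_torch_version() keeps the full string in torch.__long_version__ and truncates torch.__version__ to the leading numeric part so strict parsers (CodeFormer, safetensors) don't choke on nightly suffixes. What the regex extracts (sample version invented):

import re

v = "2.1.0.dev20230812+cu121"
print(re.search(r'[\d.]+[\d]', v).group(0))  # 2.1.0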
def fix_asyncio_event_loop_policy():
"""
The default `asyncio` event loop policy only automatically creates
event loops in the main threads. Other threads must create event
loops explicitly or `asyncio.get_event_loop` (and therefore
`.IOLoop.current`) will fail. Installing this policy allows event
loops to be created automatically on any thread, matching the
behavior of Tornado versions prior to 5.0 (or 5.0 on Python 2).
"""
import asyncio
if sys.platform == "win32" and hasattr(asyncio, "WindowsSelectorEventLoopPolicy"):
# "Any thread" and "selector" should be orthogonal, but there's not a clean
# interface for composing policies so pick the right base.
_BasePolicy = asyncio.WindowsSelectorEventLoopPolicy # type: ignore
else:
_BasePolicy = asyncio.DefaultEventLoopPolicy
class AnyThreadEventLoopPolicy(_BasePolicy): # type: ignore
"""Event loop policy that allows loop creation on any thread.
Usage::
asyncio.set_event_loop_policy(AnyThreadEventLoopPolicy())
"""
def get_event_loop(self) -> asyncio.AbstractEventLoop:
try:
return super().get_event_loop()
except (RuntimeError, AssertionError):
# This was an AssertionError in python 3.4.2 (which ships with debian jessie)
# and changed to a RuntimeError in 3.4.3.
# "There is no current event loop in thread %r"
loop = self.new_event_loop()
self.set_event_loop(loop)
return loop
asyncio.set_event_loop_policy(AnyThreadEventLoopPolicy())
def restore_config_state_file():
from modules import shared, config_states
config_state_file = shared.opts.restore_config_state_file
if config_state_file == "":
return
shared.opts.restore_config_state_file = ""
shared.opts.save(shared.config_filename)
if os.path.isfile(config_state_file):
print(f"*** About to restore extension state from file: {config_state_file}")
with open(config_state_file, "r", encoding="utf-8") as f:
config_state = json.load(f)
config_states.restore_extension_config(config_state)
startup_timer.record("restore extension config")
elif config_state_file:
print(f"!!! Config state backup not found: {config_state_file}")
def validate_tls_options():
from modules.shared_cmd_options import cmd_opts
if not (cmd_opts.tls_keyfile and cmd_opts.tls_certfile):
return
try:
if not os.path.exists(cmd_opts.tls_keyfile):
print("Invalid path to TLS keyfile given")
if not os.path.exists(cmd_opts.tls_certfile):
print(f"Invalid path to TLS certfile: '{cmd_opts.tls_certfile}'")
except TypeError:
cmd_opts.tls_keyfile = cmd_opts.tls_certfile = None
print("TLS setup invalid, running webui without TLS")
else:
print("Running with TLS")
startup_timer.record("TLS")
def get_gradio_auth_creds():
"""
Convert the gradio_auth and gradio_auth_path commandline arguments into
an iterable of (username, password) tuples.
"""
from modules.shared_cmd_options import cmd_opts
def process_credential_line(s):
s = s.strip()
if not s:
return None
return tuple(s.split(':', 1))
if cmd_opts.gradio_auth:
for cred in cmd_opts.gradio_auth.split(','):
cred = process_credential_line(cred)
if cred:
yield cred
if cmd_opts.gradio_auth_path:
with open(cmd_opts.gradio_auth_path, 'r', encoding="utf8") as file:
for line in file.readlines():
for cred in line.strip().split(','):
cred = process_credential_line(cred)
if cred:
yield cred
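get_gradio_auth_creds() above accepts both a comma-separated --gradio-auth value and a credentials file with one or more user:password pairs per line. The parsing core, runnable on its own:

def process_credential_line(s):
    s = s.strip()
    if not s:
        return None
    return tuple(s.split(':', 1))  # split only on the first colon

creds = [c for c in (process_credential_line(p) for p in "alice:secret,bob:hu:nter2".split(',')) if c]
print(creds)  # [('alice', 'secret'), ('bob', 'hu:nter2')]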
def dumpstacks():
import threading
import traceback
id2name = {th.ident: th.name for th in threading.enumerate()}
code = []
for threadId, stack in sys._current_frames().items():
code.append(f"\n# Thread: {id2name.get(threadId, '')}({threadId})")
for filename, lineno, name, line in traceback.extract_stack(stack):
code.append(f"""File: "{filename}", line {lineno}, in {name}""")
if line:
code.append(" " + line.strip())
print("\n".join(code))
def configure_sigint_handler():
# make the program just exit at ctrl+c without waiting for anything
def sigint_handler(sig, frame):
print(f'Interrupted with signal {sig} in {frame}')
dumpstacks()
os._exit(0)
if not os.environ.get("COVERAGE_RUN"):
# Don't install the immediate-quit handler when running under coverage,
# as then the coverage report won't be generated.
signal.signal(signal.SIGINT, sigint_handler)
def configure_opts_onchange():
from modules import shared, sd_models, sd_vae, ui_tempdir, sd_hijack
from modules.call_queue import wrap_queued_call
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
shared.opts.onchange("sd_vae", wrap_queued_call(lambda: sd_vae.reload_vae_weights()), call=False)
shared.opts.onchange("sd_vae_overrides_per_model_preferences", wrap_queued_call(lambda: sd_vae.reload_vae_weights()), call=False)
shared.opts.onchange("temp_dir", ui_tempdir.on_tmpdir_changed)
shared.opts.onchange("gradio_theme", shared.reload_gradio_theme)
shared.opts.onchange("cross_attention_optimization", wrap_queued_call(lambda: sd_hijack.model_hijack.redo_hijack(shared.sd_model)), call=False)
startup_timer.record("opts onchange")
def setup_middleware(app):
from starlette.middleware.gzip import GZipMiddleware
app.middleware_stack = None # reset current middleware to allow modifying user provided list
app.add_middleware(GZipMiddleware, minimum_size=1000)
configure_cors_middleware(app)
app.build_middleware_stack() # rebuild middleware stack on-the-fly
def configure_cors_middleware(app):
from starlette.middleware.cors import CORSMiddleware
from modules.shared_cmd_options import cmd_opts
cors_options = {
"allow_methods": ["*"],
"allow_headers": ["*"],
"allow_credentials": True,
}
if cmd_opts.cors_allow_origins:
cors_options["allow_origins"] = cmd_opts.cors_allow_origins.split(',')
if cmd_opts.cors_allow_origins_regex:
cors_options["allow_origin_regex"] = cmd_opts.cors_allow_origins_regex
app.add_middleware(CORSMiddleware, **cors_options)
+2 -3
@@ -186,9 +186,8 @@ class InterrogateModels:
res = ""
shared.state.begin(job="interrogate")
try:
if shared.cmd_opts.lowvram or shared.cmd_opts.medvram:
lowvram.send_everything_to_cpu()
devices.torch_gc()
lowvram.send_everything_to_cpu()
devices.torch_gc()
self.load()
+86 -31
@@ -1,7 +1,9 @@
# this script installs necessary requirements and launches the main program in webui.py
import logging
import re
import subprocess
import os
import shutil
import sys
import importlib.util
import platform
@@ -10,11 +12,11 @@ from functools import lru_cache
from modules import cmd_args, errors
from modules.paths_internal import script_path, extensions_dir
from modules import timer
timer.startup_timer.record("start")
from modules.timer import startup_timer
from modules import logging_config
args, _ = cmd_args.parser.parse_known_args()
logging_config.setup_logging(args.loglevel)
python = sys.executable
git = os.environ.get('GIT', "git")
@@ -62,7 +64,7 @@ Use --skip-python-version-check to suppress this warning.
@lru_cache()
def commit_hash():
try:
return subprocess.check_output([git, "rev-parse", "HEAD"], shell=False, encoding='utf8').strip()
return subprocess.check_output([git, "-C", script_path, "rev-parse", "HEAD"], shell=False, encoding='utf8').strip()
except Exception:
return "<none>"
@@ -70,7 +72,7 @@ def commit_hash():
@lru_cache()
def git_tag():
try:
return subprocess.check_output([git, "describe", "--tags"], shell=False, encoding='utf8').strip()
return subprocess.check_output([git, "-C", script_path, "describe", "--tags"], shell=False, encoding='utf8').strip()
except Exception:
try:
@@ -141,6 +143,25 @@ def check_run_python(code: str) -> bool:
return result.returncode == 0
def git_fix_workspace(dir, name):
run(f'"{git}" -C "{dir}" fetch --refetch --no-auto-gc', f"Fetching all contents for {name}", f"Couldn't fetch {name}", live=True)
run(f'"{git}" -C "{dir}" gc --aggressive --prune=now', f"Pruning {name}", f"Couldn't prune {name}", live=True)
return
def run_git(dir, name, command, desc=None, errdesc=None, custom_env=None, live: bool = default_command_live, autofix=True):
try:
return run(f'"{git}" -C "{dir}" {command}', desc=desc, errdesc=errdesc, custom_env=custom_env, live=live)
except RuntimeError:
if not autofix:
raise
print(f"{errdesc}, attempting autofix...")
git_fix_workspace(dir, name)
return run(f'"{git}" -C "{dir}" {command}', desc=desc, errdesc=errdesc, custom_env=custom_env, live=live)
def git_clone(url, dir, name, commithash=None):
# TODO clone into temporary dir and move if successful
@@ -148,15 +169,24 @@ def git_clone(url, dir, name, commithash=None):
if commithash is None:
return
current_hash = run(f'"{git}" -C "{dir}" rev-parse HEAD', None, f"Couldn't determine {name}'s hash: {commithash}", live=False).strip()
current_hash = run_git(dir, name, 'rev-parse HEAD', None, f"Couldn't determine {name}'s hash: {commithash}", live=False).strip()
if current_hash == commithash:
return
run(f'"{git}" -C "{dir}" fetch', f"Fetching updates for {name}...", f"Couldn't fetch {name}")
run(f'"{git}" -C "{dir}" checkout {commithash}', f"Checking out commit for {name} with hash: {commithash}...", f"Couldn't checkout commit {commithash} for {name}", live=True)
if run_git(dir, name, 'config --get remote.origin.url', None, f"Couldn't determine {name}'s origin URL", live=False).strip() != url:
run_git(dir, name, f'remote set-url origin "{url}"', None, f"Failed to set {name}'s origin URL", live=False)
run_git(dir, name, 'fetch', f"Fetching updates for {name}...", f"Couldn't fetch {name}", autofix=False)
run_git(dir, name, f'checkout {commithash}', f"Checking out commit for {name} with hash: {commithash}...", f"Couldn't checkout commit {commithash} for {name}", live=True)
return
run(f'"{git}" clone "{url}" "{dir}"', f"Cloning {name} into {dir}...", f"Couldn't clone {name}", live=True)
try:
run(f'"{git}" clone "{url}" "{dir}"', f"Cloning {name} into {dir}...", f"Couldn't clone {name}", live=True)
except RuntimeError:
shutil.rmtree(dir, ignore_errors=True)
raise
if commithash is not None:
run(f'"{git}" -C "{dir}" checkout {commithash}', None, "Couldn't checkout {name}'s hash: {commithash}")
@@ -198,7 +228,9 @@ def run_extension_installer(extension_dir):
env = os.environ.copy()
env['PYTHONPATH'] = f"{os.path.abspath('.')}{os.pathsep}{env.get('PYTHONPATH', '')}"
print(run(f'"{python}" "{path_installer}"', errdesc=f"Error running install.py for extension {extension_dir}", custom_env=env))
stdout = run(f'"{python}" "{path_installer}"', errdesc=f"Error running install.py for extension {extension_dir}", custom_env=env).strip()
if stdout:
print(stdout)
except Exception as e:
errors.report(str(e))
@@ -216,7 +248,7 @@ def list_extensions(settings_file):
disabled_extensions = set(settings.get('disabled_extensions', []))
disable_all_extensions = settings.get('disable_all_extensions', 'none')
if disable_all_extensions != 'none':
if disable_all_extensions != 'none' or args.disable_extra_extensions or args.disable_all_extensions or not os.path.isdir(extensions_dir):
return []
return [x for x in os.listdir(extensions_dir) if x not in disabled_extensions]
@@ -226,8 +258,15 @@ def run_extensions_installers(settings_file):
if not os.path.isdir(extensions_dir):
return
for dirname_extension in list_extensions(settings_file):
run_extension_installer(os.path.join(extensions_dir, dirname_extension))
with startup_timer.subcategory("run extensions installers"):
for dirname_extension in list_extensions(settings_file):
logging.debug(f"Installing {dirname_extension}")
path = os.path.join(extensions_dir, dirname_extension)
if os.path.isdir(path):
run_extension_installer(path)
startup_timer.record(dirname_extension)
re_requirement = re.compile(r"\s*([-_a-zA-Z0-9]+)\s*(?:==\s*([-+_.a-zA-Z0-9]+))?\s*")
@@ -274,7 +313,6 @@ def prepare_environment():
requirements_file = os.environ.get('REQS_FILE', "requirements_versions.txt")
xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.20')
gfpgan_package = os.environ.get('GFPGAN_PACKAGE', "https://github.com/TencentARC/GFPGAN/archive/8d2447a2d918f8eba5a4a01463fd48e45126a379.zip")
clip_package = os.environ.get('CLIP_PACKAGE', "https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip")
openclip_package = os.environ.get('OPENCLIP_PACKAGE', "https://github.com/mlfoundations/open_clip/archive/bb6e834e9c70d9c27d0dc3ecedeebeaeb1ffad6b.zip")
@@ -285,13 +323,13 @@ def prepare_environment():
blip_repo = os.environ.get('BLIP_REPO', 'https://github.com/salesforce/BLIP.git')
stable_diffusion_commit_hash = os.environ.get('STABLE_DIFFUSION_COMMIT_HASH', "cf1d67a6fd5ea1aa600c4df58e5b47da45f6bdbf")
stable_diffusion_xl_commit_hash = os.environ.get('STABLE_DIFFUSION_XL_COMMIT_HASH', "5c10deee76adad0032b412294130090932317a87")
k_diffusion_commit_hash = os.environ.get('K_DIFFUSION_COMMIT_HASH', "c9fe758757e022f05ca5a53fa8fac28889e4f1cf")
stable_diffusion_xl_commit_hash = os.environ.get('STABLE_DIFFUSION_XL_COMMIT_HASH', "45c443b316737a4ab6e40413d7794a7f5657c19f")
k_diffusion_commit_hash = os.environ.get('K_DIFFUSION_COMMIT_HASH', "ab527a9a6d347f364e3d185ba6d714e22d80cb3c")
codeformer_commit_hash = os.environ.get('CODEFORMER_COMMIT_HASH', "c5b4593074ba6214284d6acd5f1719b6c5d739af")
blip_commit_hash = os.environ.get('BLIP_COMMIT_HASH', "48211a1594f1321b00f14c9f7a5b4813144b2fb9")
try:
# the existance of this file is a signal to webui.sh/bat that webui needs to be restarted when it stops execution
# the existence of this file is a signal to webui.sh/bat that webui needs to be restarted when it stops execution
os.remove(os.path.join(script_path, "tmp", "restart"))
os.environ.setdefault('SD_WEBUI_RESTARTING', '1')
except OSError:
@@ -300,8 +338,11 @@ def prepare_environment():
if not args.skip_python_version_check:
check_python_version()
startup_timer.record("checks")
commit = commit_hash()
tag = git_tag()
startup_timer.record("git version info")
print(f"Python {sys.version}")
print(f"Version: {tag}")
@@ -309,36 +350,30 @@ def prepare_environment():
if args.reinstall_torch or not is_installed("torch") or not is_installed("torchvision"):
run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
startup_timer.record("install torch")
if not args.skip_torch_cuda_test and not check_run_python("import torch; assert torch.cuda.is_available()"):
raise RuntimeError(
'Torch is not able to use GPU; '
'add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'
)
if not is_installed("gfpgan"):
run_pip(f"install {gfpgan_package}", "gfpgan")
startup_timer.record("torch GPU test")
if not is_installed("clip"):
run_pip(f"install {clip_package}", "clip")
startup_timer.record("install clip")
if not is_installed("open_clip"):
run_pip(f"install {openclip_package}", "open_clip")
startup_timer.record("install open_clip")
if (not is_installed("xformers") or args.reinstall_xformers) and args.xformers:
if platform.system() == "Windows":
if platform.python_version().startswith("3.10"):
run_pip(f"install -U -I --no-deps {xformers_package}", "xformers", live=True)
else:
print("Installation of xformers is not supported in this version of Python.")
print("You can also check this and build manually: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers#building-xformers-on-windows-by-duckness")
if not is_installed("xformers"):
exit(0)
elif platform.system() == "Linux":
run_pip(f"install -U -I --no-deps {xformers_package}", "xformers")
run_pip(f"install -U -I --no-deps {xformers_package}", "xformers")
startup_timer.record("install xformers")
if not is_installed("ngrok") and args.ngrok:
run_pip("install ngrok", "ngrok")
startup_timer.record("install ngrok")
os.makedirs(os.path.join(script_path, dir_repos), exist_ok=True)
@@ -348,22 +383,29 @@ def prepare_environment():
git_clone(codeformer_repo, repo_dir('CodeFormer'), "CodeFormer", codeformer_commit_hash)
git_clone(blip_repo, repo_dir('BLIP'), "BLIP", blip_commit_hash)
startup_timer.record("clone repositores")
if not is_installed("lpips"):
run_pip(f"install -r \"{os.path.join(repo_dir('CodeFormer'), 'requirements.txt')}\"", "requirements for CodeFormer")
startup_timer.record("install CodeFormer requirements")
if not os.path.isfile(requirements_file):
requirements_file = os.path.join(script_path, requirements_file)
if not requirements_met(requirements_file):
run_pip(f"install -r \"{requirements_file}\"", "requirements")
startup_timer.record("install requirements")
run_extensions_installers(settings_file=args.ui_settings_file)
if not args.skip_install:
run_extensions_installers(settings_file=args.ui_settings_file)
if args.update_check:
version_check(commit)
startup_timer.record("check version")
if args.update_all_extensions:
git_pull_recursive(extensions_dir)
startup_timer.record("update extensions")
if "--exit" in sys.argv:
print("Exiting because of --exit argument")
@@ -392,3 +434,16 @@ def start():
webui.api_only()
else:
webui.webui()
def dump_sysinfo():
from modules import sysinfo
import datetime
text = sysinfo.get()
filename = f"sysinfo-{datetime.datetime.utcnow().strftime('%Y-%m-%d-%H-%M')}.txt"
with open(filename, "w", encoding="utf8") as file:
file.write(text)
return filename
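The install steps above all follow one guard pattern: probe for the package, install it only if missing, then record the timing. A minimal self-contained sketch of that pattern; is_installed and run_pip here are simplified stand-ins for the launch_utils helpers, not the real implementations.
```python
import importlib.util
import subprocess
import sys

def is_installed(package: str) -> bool:
    # probe importability without actually importing the package
    return importlib.util.find_spec(package) is not None

def run_pip(args: str, desc: str) -> None:
    print(f"Installing {desc}")
    subprocess.check_call([sys.executable, "-m", "pip", "install", *args.split()])

if not is_installed("ngrok"):
    run_pip("ngrok", "ngrok")
```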
+13 -11
@@ -1,7 +1,7 @@
import json
import os
from modules import errors
from modules import errors, scripts
localizations = {}
@@ -14,22 +14,24 @@ def list_localizations(dirname):
if ext.lower() != ".json":
continue
localizations[fn] = os.path.join(dirname, file)
localizations[fn] = [os.path.join(dirname, file)]
from modules import scripts
for file in scripts.list_scripts("localizations", ".json"):
fn, ext = os.path.splitext(file.filename)
localizations[fn] = file.path
if fn not in localizations:
localizations[fn] = []
localizations[fn].append(file.path)
def localization_js(current_localization_name: str) -> str:
fn = localizations.get(current_localization_name, None)
fns = localizations.get(current_localization_name, None)
data = {}
if fn is not None:
try:
with open(fn, "r", encoding="utf8") as file:
data = json.load(file)
except Exception:
errors.report(f"Error loading localization from {fn}", exc_info=True)
if fns is not None:
for fn in fns:
try:
with open(fn, "r", encoding="utf8") as file:
data.update(json.load(file))
except Exception:
errors.report(f"Error loading localization from {fn}", exc_info=True)
return f"window.localization = {json.dumps(data)}"
+16
@@ -0,0 +1,16 @@
import os
import logging
def setup_logging(loglevel):
if loglevel is None:
loglevel = os.environ.get("SD_WEBUI_LOG_LEVEL")
if loglevel:
log_level = getattr(logging, loglevel.upper(), None) or logging.INFO
logging.basicConfig(
level=log_level,
format='%(asctime)s %(levelname)s [%(name)s] %(message)s',
datefmt='%Y-%m-%d %H:%M:%S',
)
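Usage sketch for the helper above, assuming the setup_logging defined there is in scope: the SD_WEBUI_LOG_LEVEL environment variable is consulted only when no explicit level is passed, and an unknown name falls back to INFO.
```python
import logging
import os

os.environ["SD_WEBUI_LOG_LEVEL"] = "debug"  # matched case-insensitively via .upper()
setup_logging(None)                         # None -> read the environment variable
logging.getLogger("demo").debug("visible, because the effective level is DEBUG")
```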
+19 -2
@@ -1,5 +1,5 @@
import torch
from modules import devices
from modules import devices, shared
module_in_gpu = None
cpu = torch.device("cpu")
@@ -14,7 +14,24 @@ def send_everything_to_cpu():
module_in_gpu = None
def is_needed(sd_model):
return shared.cmd_opts.lowvram or shared.cmd_opts.medvram or (shared.cmd_opts.medvram_sdxl and hasattr(sd_model, 'conditioner'))
def apply(sd_model):
enable = is_needed(sd_model)
shared.parallel_processing_allowed = not enable
if enable:
setup_for_low_vram(sd_model, not shared.cmd_opts.lowvram)
else:
sd_model.lowvram = False
def setup_for_low_vram(sd_model, use_medvram):
if getattr(sd_model, 'lowvram', False):
return
sd_model.lowvram = True
parents = {}
@@ -127,4 +144,4 @@ def setup_for_low_vram(sd_model, use_medvram):
def is_enabled(sd_model):
return getattr(sd_model, 'lowvram', False)
return sd_model.lowvram
+2 -5
@@ -4,6 +4,7 @@ import torch
import platform
from modules.sd_hijack_utils import CondFunc
from packaging import version
from modules import shared
log = logging.getLogger(__name__)
@@ -30,8 +31,7 @@ has_mps = check_for_mps()
def torch_mps_gc() -> None:
try:
from modules.shared import state
if state.current_latent is not None:
if shared.state.current_latent is not None:
log.debug("`current_latent` is set, skipping MPS garbage collection")
return
from torch.mps import empty_cache
@@ -52,9 +52,6 @@ def cumsum_fix(input, cumsum_func, *args, **kwargs):
if has_mps:
# MPS fix for randn in torchsde
CondFunc('torchsde._brownian.brownian_interval._randn', lambda _, size, dtype, device, seed: torch.randn(size, dtype=dtype, device=torch.device("cpu"), generator=torch.Generator(torch.device("cpu")).manual_seed(int(seed))).to(device), lambda _, size, dtype, device, seed: device.type == 'mps')
if platform.mac_ver()[0].startswith("13.2."):
# MPS workaround for https://github.com/pytorch/pytorch/issues/95188, thanks to danieldk (https://github.com/explosion/curated-transformers/pull/124)
CondFunc('torch.nn.functional.linear', lambda _, input, weight, bias: (torch.matmul(input, weight.t()) + bias) if bias is not None else torch.matmul(input, weight.t()), lambda _, input, weight, bias: input.numel() > 10485760)
+247
@@ -0,0 +1,247 @@
import json
import sys
import gradio as gr
from modules import errors
from modules.shared_cmd_options import cmd_opts
class OptionInfo:
def __init__(self, default=None, label="", component=None, component_args=None, onchange=None, section=None, refresh=None, comment_before='', comment_after='', infotext=None, restrict_api=False):
self.default = default
self.label = label
self.component = component
self.component_args = component_args
self.onchange = onchange
self.section = section
self.refresh = refresh
self.do_not_save = False
self.comment_before = comment_before
"""HTML text that will be added after label in UI"""
self.comment_after = comment_after
"""HTML text that will be added before label in UI"""
self.infotext = infotext
self.restrict_api = restrict_api
"""If True, the setting will not be accessible via API"""
def link(self, label, url):
self.comment_before += f"[<a href='{url}' target='_blank'>{label}</a>]"
return self
def js(self, label, js_func):
self.comment_before += f"[<a onclick='{js_func}(); return false'>{label}</a>]"
return self
def info(self, info):
self.comment_after += f"<span class='info'>({info})</span>"
return self
def html(self, html):
self.comment_after += html
return self
def needs_restart(self):
self.comment_after += " <span class='info'>(requires restart)</span>"
return self
def needs_reload_ui(self):
self.comment_after += " <span class='info'>(requires Reload UI)</span>"
return self
class OptionHTML(OptionInfo):
def __init__(self, text):
super().__init__(str(text).strip(), label='', component=lambda **kwargs: gr.HTML(elem_classes="settings-info", **kwargs))
self.do_not_save = True
def options_section(section_identifier, options_dict):
for v in options_dict.values():
v.section = section_identifier
return options_dict
options_builtin_fields = {"data_labels", "data", "restricted_opts", "typemap"}
class Options:
typemap = {int: float}
def __init__(self, data_labels: dict[str, OptionInfo], restricted_opts):
self.data_labels = data_labels
self.data = {k: v.default for k, v in self.data_labels.items()}
self.restricted_opts = restricted_opts
def __setattr__(self, key, value):
if key in options_builtin_fields:
return super(Options, self).__setattr__(key, value)
if self.data is not None:
if key in self.data or key in self.data_labels:
assert not cmd_opts.freeze_settings, "changing settings is disabled"
info = self.data_labels.get(key, None)
if info and info.do_not_save:
return
comp_args = info.component_args if info else None
if isinstance(comp_args, dict) and comp_args.get('visible', True) is False:
raise RuntimeError(f"not possible to set {key} because it is restricted")
if cmd_opts.hide_ui_dir_config and key in self.restricted_opts:
raise RuntimeError(f"not possible to set {key} because it is restricted")
self.data[key] = value
return
return super(Options, self).__setattr__(key, value)
def __getattr__(self, item):
if item in options_builtin_fields:
return super(Options, self).__getattribute__(item)
if self.data is not None:
if item in self.data:
return self.data[item]
if item in self.data_labels:
return self.data_labels[item].default
return super(Options, self).__getattribute__(item)
def set(self, key, value, is_api=False, run_callbacks=True):
"""sets an option and calls its onchange callback, returning True if the option changed and False otherwise"""
oldval = self.data.get(key, None)
if oldval == value:
return False
option = self.data_labels[key]
if option.do_not_save:
return False
if is_api and option.restrict_api:
return False
try:
setattr(self, key, value)
except RuntimeError:
return False
if run_callbacks and option.onchange is not None:
try:
option.onchange()
except Exception as e:
errors.display(e, f"changing setting {key} to {value}")
setattr(self, key, oldval)
return False
return True
def get_default(self, key):
"""returns the default value for the key"""
data_label = self.data_labels.get(key)
if data_label is None:
return None
return data_label.default
def save(self, filename):
assert not cmd_opts.freeze_settings, "saving settings is disabled"
with open(filename, "w", encoding="utf8") as file:
json.dump(self.data, file, indent=4)
def same_type(self, x, y):
if x is None or y is None:
return True
type_x = self.typemap.get(type(x), type(x))
type_y = self.typemap.get(type(y), type(y))
return type_x == type_y
def load(self, filename):
with open(filename, "r", encoding="utf8") as file:
self.data = json.load(file)
# 1.6.0 VAE defaults
if self.data.get('sd_vae_as_default') is not None and self.data.get('sd_vae_overrides_per_model_preferences') is None:
self.data['sd_vae_overrides_per_model_preferences'] = not self.data.get('sd_vae_as_default')
# 1.1.1 quicksettings list migration
if self.data.get('quicksettings') is not None and self.data.get('quicksettings_list') is None:
self.data['quicksettings_list'] = [i.strip() for i in self.data.get('quicksettings').split(',')]
# 1.4.0 ui_reorder
if isinstance(self.data.get('ui_reorder'), str) and self.data.get('ui_reorder') and "ui_reorder_list" not in self.data:
self.data['ui_reorder_list'] = [i.strip() for i in self.data.get('ui_reorder').split(',')]
bad_settings = 0
for k, v in self.data.items():
info = self.data_labels.get(k, None)
if info is not None and not self.same_type(info.default, v):
print(f"Warning: bad setting value: {k}: {v} ({type(v).__name__}; expected {type(info.default).__name__})", file=sys.stderr)
bad_settings += 1
if bad_settings > 0:
print(f"The program is likely to not work with bad settings.\nSettings file: {filename}\nEither fix the file, or delete it and restart.", file=sys.stderr)
def onchange(self, key, func, call=True):
item = self.data_labels.get(key)
item.onchange = func
if call:
func()
def dumpjson(self):
d = {k: self.data.get(k, v.default) for k, v in self.data_labels.items()}
d["_comments_before"] = {k: v.comment_before for k, v in self.data_labels.items() if v.comment_before is not None}
d["_comments_after"] = {k: v.comment_after for k, v in self.data_labels.items() if v.comment_after is not None}
return json.dumps(d)
def add_option(self, key, info):
self.data_labels[key] = info
if key not in self.data:
self.data[key] = info.default
def reorder(self):
"""reorder settings so that all items related to section always go together"""
section_ids = {}
settings_items = self.data_labels.items()
for _, item in settings_items:
if item.section not in section_ids:
section_ids[item.section] = len(section_ids)
self.data_labels = dict(sorted(settings_items, key=lambda x: section_ids[x[1].section]))
def cast_value(self, key, value):
"""casts an arbitrary to the same type as this setting's value with key
Example: cast_value("eta_noise_seed_delta", "12") -> returns 12 (an int rather than str)
"""
if value is None:
return None
default_value = self.data_labels[key].default
if default_value is None:
default_value = getattr(self, key, None)
if default_value is None:
return None
expected_type = type(default_value)
if expected_type == bool and value == "False":
value = False
else:
value = expected_type(value)
return value
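A standalone sketch of the coercion rules in cast_value() above; cast_like is a hypothetical helper that takes the default value directly instead of looking it up by key.
```python
def cast_like(default, value):
    # mirror Options.cast_value: coerce to the type of the default,
    # special-casing the string "False" (bool("False") would be True)
    if value is None:
        return None
    expected_type = type(default)
    if expected_type is bool and value == "False":
        return False
    return expected_type(value)

print(cast_like(31337, "12"))    # 12, an int - the eta_noise_seed_delta example
print(cast_like(True, "False"))  # False, not True
print(cast_like(7.0, "3"))       # 3.0
```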
+64
@@ -0,0 +1,64 @@
from collections import defaultdict
def patch(key, obj, field, replacement):
"""Replaces a function in a module or a class.
Also stores the original function in this module, so it can later be retrieved via original(key, obj, field).
If the function is already replaced by this caller (key), an exception is raised -- use undo() before that.
Arguments:
key: identifying information for who is doing the replacement. You can use __name__.
obj: the module or the class
field: name of the function as a string
replacement: the new function
Returns:
the original function
"""
patch_key = (obj, field)
if patch_key in originals[key]:
raise RuntimeError(f"patch for {field} is already applied")
original_func = getattr(obj, field)
originals[key][patch_key] = original_func
setattr(obj, field, replacement)
return original_func
def undo(key, obj, field):
"""Undoes the peplacement by the patch().
If the function is not replaced, raises an exception.
Arguments:
key: identifying information for who is doing the replacement. You can use __name__.
obj: the module or the class
field: name of the function as a string
Returns:
Always None
"""
patch_key = (obj, field)
if patch_key not in originals[key]:
raise RuntimeError(f"there is no patch for {field} to undo")
original_func = originals[key].pop(patch_key)
setattr(obj, field, original_func)
return None
def original(key, obj, field):
"""Returns the original function for the patch created by the patch() function"""
patch_key = (obj, field)
return originals[key].get(patch_key, None)
originals = defaultdict(dict)
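A usage sketch for the patches module above, assuming it is importable as modules.patches: the wrapper calls through to the stored original, and undo() restores it.
```python
import math
from modules import patches

def traced_sqrt(x):
    print("sqrt called with", x)
    return patches.original(__name__, math, "sqrt")(x)  # call the saved original

patches.patch(__name__, math, "sqrt", traced_sqrt)
assert math.sqrt(9) == 3.0  # prints the trace line, result unchanged
patches.undo(__name__, math, "sqrt")
assert math.sqrt(9) == 3.0  # original restored, no trace printed
```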
+1 -1
@@ -1,6 +1,6 @@
import os
import sys
from modules.paths_internal import models_path, script_path, data_path, extensions_dir, extensions_builtin_dir # noqa: F401
from modules.paths_internal import models_path, script_path, data_path, extensions_dir, extensions_builtin_dir, cwd # noqa: F401
import modules.safe # noqa: F401
+1
@@ -8,6 +8,7 @@ import shlex
commandline_args = os.environ.get('COMMANDLINE_ARGS', "")
sys.argv += shlex.split(commandline_args)
cwd = os.getcwd()
modules_path = os.path.dirname(os.path.realpath(__file__))
script_path = os.path.dirname(modules_path)
+30 -31
@@ -11,37 +11,32 @@ def run_postprocessing(extras_mode, image, image_folder, input_dir, output_dir,
shared.state.begin(job="extras")
image_data = []
image_names = []
outputs = []
if extras_mode == 1:
for img in image_folder:
if isinstance(img, Image.Image):
image = img
fn = ''
else:
image = Image.open(os.path.abspath(img.name))
fn = os.path.splitext(img.orig_name)[0]
image_data.append(image)
image_names.append(fn)
elif extras_mode == 2:
assert not shared.cmd_opts.hide_ui_dir_config, '--hide-ui-dir-config option must be disabled'
assert input_dir, 'input directory not selected'
def get_images(extras_mode, image, image_folder, input_dir):
if extras_mode == 1:
for img in image_folder:
if isinstance(img, Image.Image):
image = img
fn = ''
else:
image = Image.open(os.path.abspath(img.name))
fn = os.path.splitext(img.orig_name)[0]
yield image, fn
elif extras_mode == 2:
assert not shared.cmd_opts.hide_ui_dir_config, '--hide-ui-dir-config option must be disabled'
assert input_dir, 'input directory not selected'
image_list = shared.listfiles(input_dir)
for filename in image_list:
try:
image = Image.open(filename)
except Exception:
continue
image_data.append(image)
image_names.append(filename)
else:
assert image, 'image not selected'
image_data.append(image)
image_names.append(None)
image_list = shared.listfiles(input_dir)
for filename in image_list:
try:
image = Image.open(filename)
except Exception:
continue
yield image, filename
else:
assert image, 'image not selected'
yield image, None
if extras_mode == 2 and output_dir != '':
outpath = output_dir
@@ -50,14 +45,16 @@ def run_postprocessing(extras_mode, image, image_folder, input_dir, output_dir,
infotext = ''
for image, name in zip(image_data, image_names):
for image_data, name in get_images(extras_mode, image, image_folder, input_dir):
image_data: Image.Image
shared.state.textinfo = name
parameters, existing_pnginfo = images.read_info_from_image(image)
parameters, existing_pnginfo = images.read_info_from_image(image_data)
if parameters:
existing_pnginfo["parameters"] = parameters
pp = scripts_postprocessing.PostprocessedImage(image.convert("RGB"))
pp = scripts_postprocessing.PostprocessedImage(image_data.convert("RGB"))
scripts.scripts_postproc.run(pp, args)
@@ -78,6 +75,8 @@ def run_postprocessing(extras_mode, image, image_folder, input_dir, output_dir,
if extras_mode != 2 or show_extras_results:
outputs.append(pp.image)
image_data.close()
devices.torch_gc()
return outputs, ui_common.plaintext_to_html(infotext), ''
+491 -342
File diff suppressed because it is too large.
+49
@@ -0,0 +1,49 @@
import gradio as gr
from modules import scripts, sd_models
from modules.ui_common import create_refresh_button
from modules.ui_components import InputAccordion
class ScriptRefiner(scripts.ScriptBuiltinUI):
section = "accordions"
create_group = False
def __init__(self):
pass
def title(self):
return "Refiner"
def show(self, is_img2img):
return scripts.AlwaysVisible
def ui(self, is_img2img):
with InputAccordion(False, label="Refiner", elem_id=self.elem_id("enable")) as enable_refiner:
with gr.Row():
refiner_checkpoint = gr.Dropdown(label='Checkpoint', elem_id=self.elem_id("checkpoint"), choices=sd_models.checkpoint_tiles(), value='', tooltip="switch to another model in the middle of generation")
create_refresh_button(refiner_checkpoint, sd_models.list_models, lambda: {"choices": sd_models.checkpoint_tiles()}, self.elem_id("checkpoint_refresh"))
refiner_switch_at = gr.Slider(value=0.8, label="Switch at", minimum=0.01, maximum=1.0, step=0.01, elem_id=self.elem_id("switch_at"), tooltip="fraction of sampling steps when the switch to refiner model should happen; 1=never, 0.5=switch in the middle of generation")
def lookup_checkpoint(title):
info = sd_models.get_closet_checkpoint_match(title)
return None if info is None else info.title
self.infotext_fields = [
(enable_refiner, lambda d: 'Refiner' in d),
(refiner_checkpoint, lambda d: lookup_checkpoint(d.get('Refiner'))),
(refiner_switch_at, 'Refiner switch at'),
]
return enable_refiner, refiner_checkpoint, refiner_switch_at
def setup(self, p, enable_refiner, refiner_checkpoint, refiner_switch_at):
# the actual implementation is in sd_samplers_common.py, apply_refiner
if not enable_refiner or refiner_checkpoint in (None, "", "None"):
p.refiner_checkpoint = None
p.refiner_switch_at = None
else:
p.refiner_checkpoint = refiner_checkpoint
p.refiner_switch_at = refiner_switch_at
+111
@@ -0,0 +1,111 @@
import json
import gradio as gr
from modules import scripts, ui, errors
from modules.shared import cmd_opts
from modules.ui_components import ToolButton
class ScriptSeed(scripts.ScriptBuiltinUI):
section = "seed"
create_group = False
def __init__(self):
self.seed = None
self.reuse_seed = None
self.reuse_subseed = None
def title(self):
return "Seed"
def show(self, is_img2img):
return scripts.AlwaysVisible
def ui(self, is_img2img):
with gr.Row(elem_id=self.elem_id("seed_row")):
if cmd_opts.use_textbox_seed:
self.seed = gr.Textbox(label='Seed', value="", elem_id=self.elem_id("seed"), min_width=100)
else:
self.seed = gr.Number(label='Seed', value=-1, elem_id=self.elem_id("seed"), min_width=100, precision=0)
random_seed = ToolButton(ui.random_symbol, elem_id=self.elem_id("random_seed"), tooltip="Set seed to -1, which will cause a new random number to be used every time")
reuse_seed = ToolButton(ui.reuse_symbol, elem_id=self.elem_id("reuse_seed"), tooltip="Reuse seed from last generation, mostly useful if it was randomized")
seed_checkbox = gr.Checkbox(label='Extra', elem_id=self.elem_id("subseed_show"), value=False)
with gr.Group(visible=False, elem_id=self.elem_id("seed_extras")) as seed_extras:
with gr.Row(elem_id=self.elem_id("subseed_row")):
subseed = gr.Number(label='Variation seed', value=-1, elem_id=self.elem_id("subseed"), precision=0)
random_subseed = ToolButton(ui.random_symbol, elem_id=self.elem_id("random_subseed"))
reuse_subseed = ToolButton(ui.reuse_symbol, elem_id=self.elem_id("reuse_subseed"))
subseed_strength = gr.Slider(label='Variation strength', value=0.0, minimum=0, maximum=1, step=0.01, elem_id=self.elem_id("subseed_strength"))
with gr.Row(elem_id=self.elem_id("seed_resize_from_row")):
seed_resize_from_w = gr.Slider(minimum=0, maximum=2048, step=8, label="Resize seed from width", value=0, elem_id=self.elem_id("seed_resize_from_w"))
seed_resize_from_h = gr.Slider(minimum=0, maximum=2048, step=8, label="Resize seed from height", value=0, elem_id=self.elem_id("seed_resize_from_h"))
random_seed.click(fn=None, _js="function(){setRandomSeed('" + self.elem_id("seed") + "')}", show_progress=False, inputs=[], outputs=[])
random_subseed.click(fn=None, _js="function(){setRandomSeed('" + self.elem_id("subseed") + "')}", show_progress=False, inputs=[], outputs=[])
seed_checkbox.change(lambda x: gr.update(visible=x), show_progress=False, inputs=[seed_checkbox], outputs=[seed_extras])
self.infotext_fields = [
(self.seed, "Seed"),
(seed_checkbox, lambda d: "Variation seed" in d or "Seed resize from-1" in d),
(subseed, "Variation seed"),
(subseed_strength, "Variation seed strength"),
(seed_resize_from_w, "Seed resize from-1"),
(seed_resize_from_h, "Seed resize from-2"),
]
self.on_after_component(lambda x: connect_reuse_seed(self.seed, reuse_seed, x.component, False), elem_id=f'generation_info_{self.tabname}')
self.on_after_component(lambda x: connect_reuse_seed(subseed, reuse_subseed, x.component, True), elem_id=f'generation_info_{self.tabname}')
return self.seed, seed_checkbox, subseed, subseed_strength, seed_resize_from_w, seed_resize_from_h
def setup(self, p, seed, seed_checkbox, subseed, subseed_strength, seed_resize_from_w, seed_resize_from_h):
p.seed = seed
if seed_checkbox and subseed_strength > 0:
p.subseed = subseed
p.subseed_strength = subseed_strength
if seed_checkbox and seed_resize_from_w > 0 and seed_resize_from_h > 0:
p.seed_resize_from_w = seed_resize_from_w
p.seed_resize_from_h = seed_resize_from_h
def connect_reuse_seed(seed: gr.Number, reuse_seed: gr.Button, generation_info: gr.Textbox, is_subseed):
""" Connects a 'reuse (sub)seed' button's click event so that it copies last used
(sub)seed value from generation info the to the seed field. If copying subseed and subseed strength
was 0, i.e. no variation seed was used, it copies the normal seed value instead."""
def copy_seed(gen_info_string: str, index):
res = -1
try:
gen_info = json.loads(gen_info_string)
index -= gen_info.get('index_of_first_image', 0)
if is_subseed and gen_info.get('subseed_strength', 0) > 0:
all_subseeds = gen_info.get('all_subseeds', [-1])
res = all_subseeds[index if 0 <= index < len(all_subseeds) else 0]
else:
all_seeds = gen_info.get('all_seeds', [-1])
res = all_seeds[index if 0 <= index < len(all_seeds) else 0]
except json.decoder.JSONDecodeError:
if gen_info_string:
errors.report(f"Error parsing JSON generation info: {gen_info_string}")
return [res, gr.update()]
reuse_seed.click(
fn=copy_seed,
_js="(x, y) => [x, selected_gallery_index()]",
show_progress=False,
inputs=[generation_info, seed],
outputs=[seed, seed]
)
+27 -22
@@ -48,6 +48,7 @@ def add_task_to_queue(id_job):
class ProgressRequest(BaseModel):
id_task: str = Field(default=None, title="Task ID", description="id of the task to get progress for")
id_live_preview: int = Field(default=-1, title="Live preview image ID", description="id of last received last preview image")
live_preview: bool = Field(default=True, title="Include live preview", description="boolean flag indicating whether to include the live preview image")
class ProgressResponse(BaseModel):
@@ -71,7 +72,12 @@ def progressapi(req: ProgressRequest):
completed = req.id_task in finished_tasks
if not active:
return ProgressResponse(active=active, queued=queued, completed=completed, id_live_preview=-1, textinfo="In queue..." if queued else "Waiting...")
textinfo = "Waiting..."
if queued:
sorted_queued = sorted(pending_tasks.keys(), key=lambda x: pending_tasks[x])
queue_index = sorted_queued.index(req.id_task)
textinfo = "In queue: {}/{}".format(queue_index + 1, len(sorted_queued))
return ProgressResponse(active=active, queued=queued, completed=completed, id_live_preview=-1, textinfo=textinfo)
progress = 0
@@ -89,31 +95,30 @@ def progressapi(req: ProgressRequest):
predicted_duration = elapsed_since_start / progress if progress > 0 else None
eta = predicted_duration - elapsed_since_start if predicted_duration is not None else None
live_preview = None
id_live_preview = req.id_live_preview
shared.state.set_current_image()
if opts.live_previews_enable and shared.state.id_live_preview != req.id_live_preview:
image = shared.state.current_image
if image is not None:
buffered = io.BytesIO()
if opts.live_previews_image_format == "png":
# using optimize for large images takes an enormous amount of time
if max(*image.size) <= 256:
save_kwargs = {"optimize": True}
if opts.live_previews_enable and req.live_preview:
shared.state.set_current_image()
if shared.state.id_live_preview != req.id_live_preview:
image = shared.state.current_image
if image is not None:
buffered = io.BytesIO()
if opts.live_previews_image_format == "png":
# using optimize for large images takes an enormous amount of time
if max(*image.size) <= 256:
save_kwargs = {"optimize": True}
else:
save_kwargs = {"optimize": False, "compress_level": 1}
else:
save_kwargs = {"optimize": False, "compress_level": 1}
save_kwargs = {}
else:
save_kwargs = {}
image.save(buffered, format=opts.live_previews_image_format, **save_kwargs)
base64_image = base64.b64encode(buffered.getvalue()).decode('ascii')
live_preview = f"data:image/{opts.live_previews_image_format};base64,{base64_image}"
id_live_preview = shared.state.id_live_preview
else:
live_preview = None
else:
live_preview = None
image.save(buffered, format=opts.live_previews_image_format, **save_kwargs)
base64_image = base64.b64encode(buffered.getvalue()).decode('ascii')
live_preview = f"data:image/{opts.live_previews_image_format};base64,{base64_image}"
id_live_preview = shared.state.id_live_preview
return ProgressResponse(active=active, queued=queued, completed=completed, progress=progress, eta=eta, live_preview=live_preview, id_live_preview=id_live_preview, textinfo=shared.state.textinfo)
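The new queue-position text boils down to a few lines; here is a self-contained sketch with toy data. pending_tasks maps a task id to the time it was queued, so sorting by value gives FIFO order.
```python
pending_tasks = {"task_b": 1002.0, "task_a": 1001.0, "task_c": 1003.0}
req_id_task = "task_b"  # the task asking for its position

sorted_queued = sorted(pending_tasks.keys(), key=lambda x: pending_tasks[x])
queue_index = sorted_queued.index(req_id_task)
print("In queue: {}/{}".format(queue_index + 1, len(sorted_queued)))  # In queue: 2/3
```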
+45 -20
@@ -2,7 +2,6 @@ from __future__ import annotations
import re
from collections import namedtuple
from typing import List
import lark
# a prompt like this: "fantasy landscape with a [mountain:lake:0.25] and [an oak:a christmas tree:0.75][ in foreground::0.6][ in background:0.25] [shoddy:masterful:0.5]"
@@ -19,14 +18,14 @@ prompt: (emphasized | scheduled | alternate | plain | WHITESPACE)*
!emphasized: "(" prompt ")"
| "(" prompt ":" prompt ")"
| "[" prompt "]"
scheduled: "[" [prompt ":"] prompt ":" [WHITESPACE] NUMBER "]"
alternate: "[" prompt ("|" prompt)+ "]"
scheduled: "[" [prompt ":"] prompt ":" [WHITESPACE] NUMBER [WHITESPACE] "]"
alternate: "[" prompt ("|" [prompt])+ "]"
WHITESPACE: /\s+/
plain: /([^\\\[\]():|]|\\.)+/
%import common.SIGNED_NUMBER -> NUMBER
""")
def get_learned_conditioning_prompt_schedules(prompts, steps):
def get_learned_conditioning_prompt_schedules(prompts, base_steps, hires_steps=None, use_old_scheduling=False):
"""
>>> g = lambda p: get_learned_conditioning_prompt_schedules([p], 10)[0]
>>> g("test")
@@ -53,18 +52,43 @@ def get_learned_conditioning_prompt_schedules(prompts, steps):
[[3, '((a][:b:c '], [10, '((a][:b:c d']]
>>> g("[a|(b:1.1)]")
[[1, 'a'], [2, '(b:1.1)'], [3, 'a'], [4, '(b:1.1)'], [5, 'a'], [6, '(b:1.1)'], [7, 'a'], [8, '(b:1.1)'], [9, 'a'], [10, '(b:1.1)']]
>>> g("[fe|]male")
[[1, 'female'], [2, 'male'], [3, 'female'], [4, 'male'], [5, 'female'], [6, 'male'], [7, 'female'], [8, 'male'], [9, 'female'], [10, 'male']]
>>> g("[fe|||]male")
[[1, 'female'], [2, 'male'], [3, 'male'], [4, 'male'], [5, 'female'], [6, 'male'], [7, 'male'], [8, 'male'], [9, 'female'], [10, 'male']]
>>> g = lambda p: get_learned_conditioning_prompt_schedules([p], 10, 10)[0]
>>> g("a [b:.5] c")
[[10, 'a b c']]
>>> g("a [b:1.5] c")
[[5, 'a c'], [10, 'a b c']]
"""
if hires_steps is None or use_old_scheduling:
int_offset = 0
flt_offset = 0
steps = base_steps
else:
int_offset = base_steps
flt_offset = 1.0
steps = hires_steps
def collect_steps(steps, tree):
res = [steps]
class CollectSteps(lark.Visitor):
def scheduled(self, tree):
tree.children[-1] = float(tree.children[-1])
if tree.children[-1] < 1:
tree.children[-1] *= steps
tree.children[-1] = min(steps, int(tree.children[-1]))
res.append(tree.children[-1])
s = tree.children[-2]
v = float(s)
if use_old_scheduling:
v = v*steps if v<1 else v
else:
if "." in s:
v = (v - flt_offset) * steps
else:
v = (v - int_offset)
tree.children[-2] = min(steps, int(v))
if tree.children[-2] >= 1:
res.append(tree.children[-2])
def alternate(self, tree):
res.extend(range(1, steps+1))
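A worked example of the new mapping, using the same numbers as the doctests above (base_steps=10, hires_steps=10, new-style scheduling): an integer in the prompt counts steps from the start of the whole job, while a float treats 1.0 as the end of the base pass. hires_when is a hypothetical helper that just repeats the arithmetic from CollectSteps.scheduled.
```python
base_steps, hires_steps = 10, 10
int_offset, flt_offset, steps = base_steps, 1.0, hires_steps  # the non-old branch

def hires_when(s: str) -> int:
    v = float(s)
    v = (v - flt_offset) * steps if "." in s else v - int_offset
    return min(steps, int(v))

print(hires_when("1.5"))  # 5  -> matches the [[5, 'a c'], [10, 'a b c']] doctest
print(hires_when("15"))   # 5  -> absolute step 15 of the 20-step job, mid-hires
print(hires_when(".5"))   # -5 -> already in the past; dropped by the >= 1 check
```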
@@ -75,13 +99,14 @@ def get_learned_conditioning_prompt_schedules(prompts, steps):
def at_step(step, tree):
class AtStep(lark.Transformer):
def scheduled(self, args):
before, after, _, when = args
before, after, _, when, _ = args
yield before or () if step <= when else after
def alternate(self, args):
yield next(args[(step - 1)%len(args)])
args = ["" if not arg else arg for arg in args]
yield args[(step - 1) % len(args)]
def start(self, args):
def flatten(x):
if type(x) == str:
if isinstance(x, str):
yield x
else:
for gen in x:
@@ -129,7 +154,7 @@ class SdConditioning(list):
def get_learned_conditioning(model, prompts: SdConditioning | list[str], steps):
def get_learned_conditioning(model, prompts: SdConditioning | list[str], steps, hires_steps=None, use_old_scheduling=False):
"""converts a list of prompts into a list of prompt schedules - each schedule is a list of ScheduledPromptConditioning, specifying the comdition (cond),
and the sampling step at which this condition is to be replaced by the next one.
@@ -149,7 +174,7 @@ def get_learned_conditioning(model, prompts: SdConditioning | list[str], steps):
"""
res = []
prompt_schedules = get_learned_conditioning_prompt_schedules(prompts, steps)
prompt_schedules = get_learned_conditioning_prompt_schedules(prompts, steps, hires_steps, use_old_scheduling)
cache = {}
for prompt, prompt_schedule in zip(prompts, prompt_schedules):
@@ -214,17 +239,17 @@ def get_multicond_prompt_list(prompts: SdConditioning | list[str]):
class ComposableScheduledPromptConditioning:
def __init__(self, schedules, weight=1.0):
self.schedules: List[ScheduledPromptConditioning] = schedules
self.schedules: list[ScheduledPromptConditioning] = schedules
self.weight: float = weight
class MulticondLearnedConditioning:
def __init__(self, shape, batch):
self.shape: tuple = shape # the shape field is needed to send this object to DDIM/PLMS
self.batch: List[List[ComposableScheduledPromptConditioning]] = batch
self.batch: list[list[ComposableScheduledPromptConditioning]] = batch
def get_multicond_learned_conditioning(model, prompts, steps) -> MulticondLearnedConditioning:
def get_multicond_learned_conditioning(model, prompts, steps, hires_steps=None, use_old_scheduling=False) -> MulticondLearnedConditioning:
"""same as get_learned_conditioning, but returns a list of ScheduledPromptConditioning along with the weight objects for each prompt.
For each prompt, the list is obtained by splitting the prompt using the AND separator.
@@ -233,7 +258,7 @@ def get_multicond_learned_conditioning(model, prompts, steps) -> MulticondLearne
res_indexes, prompt_flat_list, prompt_indexes = get_multicond_prompt_list(prompts)
learned_conditioning = get_learned_conditioning(model, prompt_flat_list, steps)
learned_conditioning = get_learned_conditioning(model, prompt_flat_list, steps, hires_steps, use_old_scheduling)
res = []
for indexes in res_indexes:
@@ -252,7 +277,7 @@ class DictWithShape(dict):
return self["crossattn"].shape
def reconstruct_cond_batch(c: List[List[ScheduledPromptConditioning]], current_step):
def reconstruct_cond_batch(c: list[list[ScheduledPromptConditioning]], current_step):
param = c[0][0].cond
is_dict = isinstance(param, dict)
@@ -333,7 +358,7 @@ re_attention = re.compile(r"""
\\|
\(|
\[|
:([+-]?[.\d]+)\)|
:\s*([+-]?[.\d]+)\s*\)|
\)|
]|
[^\\()\[\]:]+|
+1
@@ -55,6 +55,7 @@ class UpscalerRealESRGAN(Upscaler):
half=not cmd_opts.no_half and not cmd_opts.upcast_sampling,
tile=opts.ESRGAN_tile,
tile_pad=opts.ESRGAN_tile_overlap,
device=self.device,
)
upsampled = upsampler.enhance(np.array(img), outscale=info.scale)[0]
+3 -1
@@ -14,7 +14,9 @@ def is_restartable() -> bool:
def restart_program() -> None:
"""creates file tmp/restart and immediately stops the process, which webui.bat/webui.sh interpret as a command to start webui again"""
(Path(script_path) / "tmp" / "restart").touch()
tmpdir = Path(script_path) / "tmp"
tmpdir.mkdir(parents=True, exist_ok=True)
(tmpdir / "restart").touch()
stop_program()
+170
@@ -0,0 +1,170 @@
import torch
from modules import devices, rng_philox, shared
def randn(seed, shape, generator=None):
"""Generate a tensor with random numbers from a normal distribution using seed.
Uses the seed parameter to set the global torch seed; to generate more with that seed, use randn_like/randn_without_seed."""
manual_seed(seed)
if shared.opts.randn_source == "NV":
return torch.asarray((generator or nv_rng).randn(shape), device=devices.device)
if shared.opts.randn_source == "CPU" or devices.device.type == 'mps':
return torch.randn(shape, device=devices.cpu, generator=generator).to(devices.device)
return torch.randn(shape, device=devices.device, generator=generator)
def randn_local(seed, shape):
"""Generate a tensor with random numbers from a normal distribution using seed.
Does not change the global random number generator. You can only generate the seed's first tensor using this function."""
if shared.opts.randn_source == "NV":
rng = rng_philox.Generator(seed)
return torch.asarray(rng.randn(shape), device=devices.device)
local_device = devices.cpu if shared.opts.randn_source == "CPU" or devices.device.type == 'mps' else devices.device
local_generator = torch.Generator(local_device).manual_seed(int(seed))
return torch.randn(shape, device=local_device, generator=local_generator).to(devices.device)
def randn_like(x):
"""Generate a tensor with random numbers from a normal distribution using the previously initialized genrator.
Use either randn() or manual_seed() to initialize the generator."""
if shared.opts.randn_source == "NV":
return torch.asarray(nv_rng.randn(x.shape), device=x.device, dtype=x.dtype)
if shared.opts.randn_source == "CPU" or x.device.type == 'mps':
return torch.randn_like(x, device=devices.cpu).to(x.device)
return torch.randn_like(x)
def randn_without_seed(shape, generator=None):
"""Generate a tensor with random numbers from a normal distribution using the previously initialized genrator.
Use either randn() or manual_seed() to initialize the generator."""
if shared.opts.randn_source == "NV":
return torch.asarray((generator or nv_rng).randn(shape), device=devices.device)
if shared.opts.randn_source == "CPU" or devices.device.type == 'mps':
return torch.randn(shape, device=devices.cpu, generator=generator).to(devices.device)
return torch.randn(shape, device=devices.device, generator=generator)
def manual_seed(seed):
"""Set up a global random number generator using the specified seed."""
if shared.opts.randn_source == "NV":
global nv_rng
nv_rng = rng_philox.Generator(seed)
return
torch.manual_seed(seed)
def create_generator(seed):
if shared.opts.randn_source == "NV":
return rng_philox.Generator(seed)
device = devices.cpu if shared.opts.randn_source == "CPU" or devices.device.type == 'mps' else devices.device
generator = torch.Generator(device).manual_seed(int(seed))
return generator
# from https://discuss.pytorch.org/t/help-regarding-slerp-function-for-generative-model-sampling/32475/3
def slerp(val, low, high):
low_norm = low/torch.norm(low, dim=1, keepdim=True)
high_norm = high/torch.norm(high, dim=1, keepdim=True)
dot = (low_norm*high_norm).sum(1)
if dot.mean() > 0.9995:
return (1 - val) * low + val * high
omega = torch.acos(dot)
so = torch.sin(omega)
res = (torch.sin((1.0-val)*omega)/so).unsqueeze(1)*low + (torch.sin(val*omega)/so).unsqueeze(1) * high
return res
class ImageRNG:
def __init__(self, shape, seeds, subseeds=None, subseed_strength=0.0, seed_resize_from_h=0, seed_resize_from_w=0):
self.shape = tuple(map(int, shape))
self.seeds = seeds
self.subseeds = subseeds
self.subseed_strength = subseed_strength
self.seed_resize_from_h = seed_resize_from_h
self.seed_resize_from_w = seed_resize_from_w
self.generators = [create_generator(seed) for seed in seeds]
self.is_first = True
def first(self):
noise_shape = self.shape if self.seed_resize_from_h <= 0 or self.seed_resize_from_w <= 0 else (self.shape[0], self.seed_resize_from_h // 8, self.seed_resize_from_w // 8)
xs = []
for i, (seed, generator) in enumerate(zip(self.seeds, self.generators)):
subnoise = None
if self.subseeds is not None and self.subseed_strength != 0:
subseed = 0 if i >= len(self.subseeds) else self.subseeds[i]
subnoise = randn(subseed, noise_shape)
if noise_shape != self.shape:
noise = randn(seed, noise_shape)
else:
noise = randn(seed, self.shape, generator=generator)
if subnoise is not None:
noise = slerp(self.subseed_strength, noise, subnoise)
if noise_shape != self.shape:
x = randn(seed, self.shape, generator=generator)
dx = (self.shape[2] - noise_shape[2]) // 2
dy = (self.shape[1] - noise_shape[1]) // 2
w = noise_shape[2] if dx >= 0 else noise_shape[2] + 2 * dx
h = noise_shape[1] if dy >= 0 else noise_shape[1] + 2 * dy
tx = 0 if dx < 0 else dx
ty = 0 if dy < 0 else dy
dx = max(-dx, 0)
dy = max(-dy, 0)
x[:, ty:ty + h, tx:tx + w] = noise[:, dy:dy + h, dx:dx + w]
noise = x
xs.append(noise)
eta_noise_seed_delta = shared.opts.eta_noise_seed_delta or 0
if eta_noise_seed_delta:
self.generators = [create_generator(seed + eta_noise_seed_delta) for seed in self.seeds]
return torch.stack(xs).to(shared.device)
def next(self):
if self.is_first:
self.is_first = False
return self.first()
xs = []
for generator in self.generators:
x = randn_without_seed(self.shape, generator=generator)
xs.append(x)
return torch.stack(xs).to(shared.device)
devices.randn = randn
devices.randn_local = randn_local
devices.randn_like = randn_like
devices.randn_without_seed = randn_without_seed
devices.manual_seed = manual_seed
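A quick sanity check of the slerp() above (requires torch, and assumes the slerp defined in this file is in scope): the endpoints of the interpolation return the inputs, consistent with ImageRNG passing subseed_strength as val.
```python
import torch

torch.manual_seed(0)
low, high = torch.randn(2, 8), torch.randn(2, 8)

assert torch.allclose(slerp(0.0, low, high), low, atol=1e-5)   # val=0 -> first input
assert torch.allclose(slerp(1.0, low, high), high, atol=1e-5)  # val=1 -> second input
mid = slerp(0.5, low, high)  # a point on the arc between the two noise tensors
```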
+102
@@ -0,0 +1,102 @@
"""RNG imitiating torch cuda randn on CPU. You are welcome.
Usage:
```
g = Generator(seed=0)
print(g.randn(shape=(3, 4)))
```
Expected output:
```
[[-0.92466259 -0.42534415 -2.6438457 0.14518388]
[-0.12086647 -0.57972564 -0.62285122 -0.32838709]
[-1.07454231 -0.36314407 -1.67105067 2.26550497]]
```
"""
import numpy as np
philox_m = [0xD2511F53, 0xCD9E8D57]
philox_w = [0x9E3779B9, 0xBB67AE85]
two_pow32_inv = np.array([2.3283064e-10], dtype=np.float32)
two_pow32_inv_2pi = np.array([2.3283064e-10 * 6.2831855], dtype=np.float32)
def uint32(x):
"""Converts (N,) np.uint64 array into (2, N) np.unit32 array."""
return x.view(np.uint32).reshape(-1, 2).transpose(1, 0)
def philox4_round(counter, key):
"""A single round of the Philox 4x32 random number generator."""
v1 = uint32(counter[0].astype(np.uint64) * philox_m[0])
v2 = uint32(counter[2].astype(np.uint64) * philox_m[1])
counter[0] = v2[1] ^ counter[1] ^ key[0]
counter[1] = v2[0]
counter[2] = v1[1] ^ counter[3] ^ key[1]
counter[3] = v1[0]
def philox4_32(counter, key, rounds=10):
"""Generates 32-bit random numbers using the Philox 4x32 random number generator.
Parameters:
counter (numpy.ndarray): A 4xN array of 32-bit integers representing the counter values (offset into generation).
key (numpy.ndarray): A 2xN array of 32-bit integers representing the key values (seed).
rounds (int): The number of rounds to perform.
Returns:
numpy.ndarray: A 4xN array of 32-bit integers containing the generated random numbers.
"""
for _ in range(rounds - 1):
philox4_round(counter, key)
key[0] = key[0] + philox_w[0]
key[1] = key[1] + philox_w[1]
philox4_round(counter, key)
return counter
def box_muller(x, y):
"""Returns just the first out of two numbers generated by BoxMuller transform algorithm."""
u = x * two_pow32_inv + two_pow32_inv / 2
v = y * two_pow32_inv_2pi + two_pow32_inv_2pi / 2
s = np.sqrt(-2.0 * np.log(u))
r1 = s * np.sin(v)
return r1.astype(np.float32)
class Generator:
"""RNG that produces same outputs as torch.randn(..., device='cuda') on CPU"""
def __init__(self, seed):
self.seed = seed
self.offset = 0
def randn(self, shape):
"""Generate a sequence of n standard normal random variables using the Philox 4x32 random number generator and the Box-Muller transform."""
n = 1
for x in shape:
n *= x
counter = np.zeros((4, n), dtype=np.uint32)
counter[0] = self.offset
counter[2] = np.arange(n, dtype=np.uint32) # up to 2^32 numbers can be generated - if you want more you'd need to spill into counter[3]
self.offset += 1
key = np.empty(n, dtype=np.uint64)
key.fill(self.seed)
key = uint32(key)
g = philox4_32(counter, key)
return box_muller(g[0], g[1]).reshape(shape) # discard g[2] and g[3]
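The generator above is stateful: each randn() call advances offset, so consecutive draws differ, while a fresh Generator with the same seed reproduces the sequence exactly. A small check, assuming the Generator class above is in scope.
```python
import numpy as np

g1, g2 = Generator(seed=0), Generator(seed=0)
first, second = g1.randn((2, 2)), g1.randn((2, 2))

assert not np.array_equal(first, second)        # offset advanced between calls
assert np.array_equal(first, g2.randn((2, 2)))  # same seed -> same first draw
```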
+32 -3
@@ -1,7 +1,7 @@
import inspect
import os
from collections import namedtuple
from typing import Optional, Dict, Any
from typing import Optional, Any
from fastapi import FastAPI
from gradio import Blocks
@@ -28,6 +28,18 @@ class ImageSaveParams:
"""dictionary with parameters for image's PNG info data; infotext will have the key 'parameters'"""
class ExtraNoiseParams:
def __init__(self, noise, x, xi):
self.noise = noise
"""Random noise generated by the seed"""
self.x = x
"""Latent representation of the image"""
self.xi = xi
"""Noisy latent representation of the image"""
class CFGDenoiserParams:
def __init__(self, x, image_cond, sigma, sampling_step, total_sampling_steps, text_cond, text_uncond):
self.x = x
@@ -100,6 +112,7 @@ callback_map = dict(
callbacks_ui_settings=[],
callbacks_before_image_saved=[],
callbacks_image_saved=[],
callbacks_extra_noise=[],
callbacks_cfg_denoiser=[],
callbacks_cfg_denoised=[],
callbacks_cfg_after_cfg=[],
@@ -189,6 +202,14 @@ def image_saved_callback(params: ImageSaveParams):
report_exception(c, 'image_saved_callback')
def extra_noise_callback(params: ExtraNoiseParams):
for c in callback_map['callbacks_extra_noise']:
try:
c.callback(params)
except Exception:
report_exception(c, 'callbacks_extra_noise')
def cfg_denoiser_callback(params: CFGDenoiserParams):
for c in callback_map['callbacks_cfg_denoiser']:
try:
@@ -237,7 +258,7 @@ def image_grid_callback(params: ImageGridLoopParams):
report_exception(c, 'image_grid')
def infotext_pasted_callback(infotext: str, params: Dict[str, Any]):
def infotext_pasted_callback(infotext: str, params: dict[str, Any]):
for c in callback_map['callbacks_infotext_pasted']:
try:
c.callback(infotext, params)
@@ -367,6 +388,14 @@ def on_image_saved(callback):
add_callback(callback_map['callbacks_image_saved'], callback)
def on_extra_noise(callback):
"""register a function to be called before adding extra noise in img2img or hires fix;
The callback is called with one argument:
- params: ExtraNoiseParams - contains noise determined by seed and latent representation of image
"""
add_callback(callback_map['callbacks_extra_noise'], callback)
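A hypothetical extension snippet using the new hook; the 0.5 factor is illustrative only. The noise tensor is modified in place, so the change is visible to the caller that adds it to the latent.
```python
from modules import script_callbacks

def tame_extra_noise(params: script_callbacks.ExtraNoiseParams):
    # halve the seed-determined noise before it is added to the image latent
    params.noise *= 0.5

script_callbacks.on_extra_noise(tame_extra_noise)
```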
def on_cfg_denoiser(callback):
"""register a function to be called in the kdiffussion cfg_denoiser method after building the inner model inputs.
The callback is called with one argument:
@@ -420,7 +449,7 @@ def on_infotext_pasted(callback):
"""register a function to be called before applying an infotext.
The callback is called with two arguments:
- infotext: str - raw infotext.
- result: Dict[str, any] - parsed infotext parameters.
- result: dict[str, any] - parsed infotext parameters.
"""
add_callback(callback_map['callbacks_infotext_pasted'], callback)
+141 -58
@@ -3,6 +3,7 @@ import re
import sys
import inspect
from collections import namedtuple
from dataclasses import dataclass
import gradio as gr
@@ -21,6 +22,11 @@ class PostprocessBatchListArgs:
self.images = images
@dataclass
class OnComponent:
component: gr.blocks.Block
class Script:
name = None
"""script's internal name derived from title"""
@@ -35,9 +41,13 @@ class Script:
is_txt2img = False
is_img2img = False
tabname = None
group = None
"""A gr.Group component that has all script's UI inside it"""
"""A gr.Group component that has all script's UI inside it."""
create_group = True
"""If False, for alwayson scripts, a group component will not be created."""
infotext_fields = None
"""if set in ui(), this is a list of pairs of gradio component + text; the text will be used when
@@ -52,6 +62,15 @@ class Script:
api_info = None
"""Generated value of type modules.api.models.ScriptInfo with information about the script for API"""
on_before_component_elem_id = None
"""list of callbacks to be called before a component with an elem_id is created"""
on_after_component_elem_id = None
"""list of callbacks to be called after a component with an elem_id is created"""
setup_for_ui_only = False
"""If true, the script setup will only be run in Gradio UI, not in API"""
def title(self):
"""this function should return the title of the script. This is what will be displayed in the dropdown menu."""
@@ -90,9 +109,16 @@ class Script:
pass
def setup(self, p, *args):
"""For AlwaysVisible scripts, this function is called when the processing object is set up, before any processing starts.
args contains all values returned by components from ui().
"""
pass
def before_process(self, p, *args):
"""
This function is called very early before processing begins for AlwaysVisible scripts.
This function is called very early, as processing begins, for AlwaysVisible scripts.
You can modify the processing object (p) here, inject hooks, etc.
args contains all values returned by components from ui()
"""
@@ -212,6 +238,29 @@ class Script:
pass
def on_before_component(self, callback, *, elem_id):
"""
Calls callback before a component is created. The callback function is called with a single argument of type OnComponent.
May be called in show() or ui() - but it may be too late in the latter, as some components may already be created.
This function is an alternative to before_component in that it also allows code to run before a component is created, but
it doesn't have to be called for every created component - just for the one you need.
"""
if self.on_before_component_elem_id is None:
self.on_before_component_elem_id = []
self.on_before_component_elem_id.append((elem_id, callback))
def on_after_component(self, callback, *, elem_id):
"""
Calls callback after a component is created. The callback function is called with a single argument of type OnComponent.
"""
if self.on_after_component_elem_id is None:
self.on_after_component_elem_id = []
self.on_after_component_elem_id.append((elem_id, callback))
def describe(self):
"""unused"""
return ""
@@ -220,7 +269,7 @@ class Script:
"""helper function to generate id for a HTML element, constructs final id out of script name, tab and user-supplied item_id"""
need_tabname = self.show(True) == self.show(False)
tabkind = 'img2img' if self.is_img2img else 'txt2txt'
tabkind = 'img2img' if self.is_img2img else 'txt2img'
tabname = f"{tabkind}_" if need_tabname else ""
title = re.sub(r'[^a-z_0-9]', '', re.sub(r'\s', '_', self.title().lower()))
@@ -232,6 +281,19 @@ class Script:
"""
pass
class ScriptBuiltinUI(Script):
setup_for_ui_only = True
def elem_id(self, item_id):
"""helper function to generate id for a HTML element, constructs final id out of tab and user-supplied item_id"""
need_tabname = self.show(True) == self.show(False)
tabname = ('img2img' if self.is_img2img else 'txt2img') + "_" if need_tabname else ""
return f'{tabname}{item_id}'
current_basedir = paths.script_path
@@ -250,7 +312,7 @@ postprocessing_scripts_data = []
ScriptClassData = namedtuple("ScriptClassData", ["script_class", "path", "basedir", "module"])
def list_scripts(scriptdirname, extension):
def list_scripts(scriptdirname, extension, *, include_extensions=True):
scripts_list = []
basedir = os.path.join(paths.script_path, scriptdirname)
@@ -258,8 +320,9 @@ def list_scripts(scriptdirname, extension):
for filename in sorted(os.listdir(basedir)):
scripts_list.append(ScriptFile(paths.script_path, filename, os.path.join(basedir, filename)))
for ext in extensions.active():
scripts_list += ext.list_files(scriptdirname, extension)
if include_extensions:
for ext in extensions.active():
scripts_list += ext.list_files(scriptdirname, extension)
scripts_list = [x for x in scripts_list if os.path.splitext(x.path)[1].lower() == extension and os.path.isfile(x.path)]
@@ -288,7 +351,7 @@ def load_scripts():
postprocessing_scripts_data.clear()
script_callbacks.clear_callbacks()
scripts_list = list_scripts("scripts", ".py")
scripts_list = list_scripts("scripts", ".py") + list_scripts("modules/processing_scripts", ".py", include_extensions=False)
syspath = sys.path
@@ -349,10 +412,17 @@ class ScriptRunner:
self.selectable_scripts = []
self.alwayson_scripts = []
self.titles = []
self.title_map = {}
self.infotext_fields = []
self.paste_field_names = []
self.inputs = [None]
self.on_before_component_elem_id = {}
"""dict of callbacks to be called before an element is created; key=elem_id, value=list of callbacks"""
self.on_after_component_elem_id = {}
"""dict of callbacks to be called after an element is created; key=elem_id, value=list of callbacks"""
def initialize_scripts(self, is_img2img):
from modules import scripts_auto_postprocessing
@@ -367,6 +437,7 @@ class ScriptRunner:
script.filename = script_data.path
script.is_txt2img = not is_img2img
script.is_img2img = is_img2img
script.tabname = "img2img" if is_img2img else "txt2img"
visibility = script.show(script.is_img2img)
@@ -379,6 +450,28 @@ class ScriptRunner:
self.scripts.append(script)
self.selectable_scripts.append(script)
self.apply_on_before_component_callbacks()
def apply_on_before_component_callbacks(self):
for script in self.scripts:
on_before = script.on_before_component_elem_id or []
on_after = script.on_after_component_elem_id or []
for elem_id, callback in on_before:
if elem_id not in self.on_before_component_elem_id:
self.on_before_component_elem_id[elem_id] = []
self.on_before_component_elem_id[elem_id].append((callback, script))
for elem_id, callback in on_after:
if elem_id not in self.on_after_component_elem_id:
self.on_after_component_elem_id[elem_id] = []
self.on_after_component_elem_id[elem_id].append((callback, script))
on_before.clear()
on_after.clear()
def create_script_ui(self, script):
import modules.api.models as api_models
@@ -398,11 +491,15 @@ class ScriptRunner:
arg_info = api_models.ScriptArg(label=control.label or "")
for field in ("value", "minimum", "maximum", "step", "choices"):
for field in ("value", "minimum", "maximum", "step"):
v = getattr(control, field, None)
if v is not None:
setattr(arg_info, field, v)
choices = getattr(control, 'choices', None) # as of gradio 3.41, some items in choices are strings, and some are tuples where the first elem is the string
if choices is not None:
arg_info.choices = [x[0] if isinstance(x, tuple) else x for x in choices]
api_args.append(arg_info)
script.api_info = api_models.ScriptInfo(
@@ -429,15 +526,20 @@ class ScriptRunner:
if script.alwayson and script.section != section:
continue
with gr.Group(visible=script.alwayson) as group:
self.create_script_ui(script)
if script.create_group:
with gr.Group(visible=script.alwayson) as group:
self.create_script_ui(script)
script.group = group
script.group = group
else:
self.create_script_ui(script)
def prepare_ui(self):
self.inputs = [None]
def setup_ui(self):
all_titles = [wrap_call(script.title, script.filename, "title") or script.filename for script in self.scripts]
self.title_map = {title.lower(): script for title, script in zip(all_titles, self.scripts)}
self.titles = [wrap_call(script.title, script.filename, "title") or f"{script.filename} [error]" for script in self.selectable_scripts]
self.setup_ui_for_section(None)
@@ -484,6 +586,8 @@ class ScriptRunner:
self.infotext_fields.append((dropdown, lambda x: gr.update(value=x.get('Script', 'None'))))
self.infotext_fields.extend([(script.group, onload_script_visibility) for script in self.selectable_scripts])
self.apply_on_before_component_callbacks()
return self.inputs
def run(self, p, *args):
@@ -577,6 +681,12 @@ class ScriptRunner:
errors.report(f"Error running postprocess_image: {script.filename}", exc_info=True)
def before_component(self, component, **kwargs):
for callback, script in self.on_before_component_elem_id.get(kwargs.get("elem_id"), []):
try:
callback(OnComponent(component=component))
except Exception:
errors.report(f"Error running on_before_component: {script.filename}", exc_info=True)
for script in self.scripts:
try:
script.before_component(component, **kwargs)
@@ -584,12 +694,21 @@ class ScriptRunner:
errors.report(f"Error running before_component: {script.filename}", exc_info=True)
def after_component(self, component, **kwargs):
for callback, script in self.on_after_component_elem_id.get(component.elem_id, []):
try:
callback(OnComponent(component=component))
except Exception:
errors.report(f"Error running on_after_component: {script.filename}", exc_info=True)
for script in self.scripts:
try:
script.after_component(component, **kwargs)
except Exception:
errors.report(f"Error running after_component: {script.filename}", exc_info=True)
def script(self, title):
return self.title_map.get(title.lower())
def reload_sources(self, cache):
for si, script in list(enumerate(self.scripts)):
args_from = script.args_from
@@ -608,7 +727,6 @@ class ScriptRunner:
self.scripts[si].args_from = args_from
self.scripts[si].args_to = args_to
def before_hr(self, p):
for script in self.alwayson_scripts:
try:
@@ -617,6 +735,17 @@ class ScriptRunner:
except Exception:
errors.report(f"Error running before_hr: {script.filename}", exc_info=True)
def setup_scrips(self, p, *, is_ui=True):
for script in self.alwayson_scripts:
if not is_ui and script.setup_for_ui_only:
continue
try:
script_args = p.script_args[script.args_from:script.args_to]
script.setup(p, *script_args)
except Exception:
errors.report(f"Error running setup: {script.filename}", exc_info=True)
scripts_txt2img: ScriptRunner = None
scripts_img2img: ScriptRunner = None
@@ -631,49 +760,3 @@ def reload_script_body_only():
reload_scripts = load_scripts # compatibility alias
def add_classes_to_gradio_component(comp):
"""
this adds gradio-* to the component for css styling (ie gradio-button to gr.Button), as well as some others
"""
comp.elem_classes = [f"gradio-{comp.get_block_name()}", *(comp.elem_classes or [])]
if getattr(comp, 'multiselect', False):
comp.elem_classes.append('multiselect')
def IOComponent_init(self, *args, **kwargs):
if scripts_current is not None:
scripts_current.before_component(self, **kwargs)
script_callbacks.before_component_callback(self, **kwargs)
res = original_IOComponent_init(self, *args, **kwargs)
add_classes_to_gradio_component(self)
script_callbacks.after_component_callback(self, **kwargs)
if scripts_current is not None:
scripts_current.after_component(self, **kwargs)
return res
original_IOComponent_init = gr.components.IOComponent.__init__
gr.components.IOComponent.__init__ = IOComponent_init
def BlockContext_init(self, *args, **kwargs):
res = original_BlockContext_init(self, *args, **kwargs)
add_classes_to_gradio_component(self)
return res
original_BlockContext_init = gr.blocks.BlockContext.__init__
gr.blocks.BlockContext.__init__ = BlockContext_init
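A hypothetical minimal script using the new per-elem_id hooks, in the same way the built-in Seed script above does, instead of filtering every component inside after_component().
```python
from modules import scripts

class ComponentWatcher(scripts.Script):
    def title(self):
        return "Component watcher"

    def show(self, is_img2img):
        return scripts.AlwaysVisible

    def ui(self, is_img2img):
        # runs once, only for the matching component, with an OnComponent argument
        self.on_after_component(
            lambda oc: print("created:", oc.component.elem_id),
            elem_id=f"generation_info_{self.tabname}",
        )
        return []
```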
+144 -5
@@ -3,8 +3,31 @@ import open_clip
import torch
import transformers.utils.hub
from modules import shared
class DisableInitialization:
class ReplaceHelper:
def __init__(self):
self.replaced = []
def replace(self, obj, field, func):
original = getattr(obj, field, None)
if original is None:
return None
self.replaced.append((obj, field, original))
setattr(obj, field, func)
return original
def restore(self):
for obj, field, original in self.replaced:
setattr(obj, field, original)
self.replaced.clear()
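For orientation: `replace` records the original attribute before overwriting it, and `restore` undoes every recorded patch in order. A minimal self-contained sketch of the same pattern, patching `math.sqrt` purely for illustration:
```
import math


class ReplaceHelper:
    """Standalone copy of the helper above: records originals before patching."""
    def __init__(self):
        self.replaced = []

    def replace(self, obj, field, func):
        original = getattr(obj, field, None)
        if original is None:
            return None
        self.replaced.append((obj, field, original))
        setattr(obj, field, func)
        return original

    def restore(self):
        for obj, field, original in self.replaced:
            setattr(obj, field, original)
        self.replaced.clear()


helper = ReplaceHelper()
helper.replace(math, "sqrt", lambda x: -1.0)  # patch a module attribute
assert math.sqrt(4) == -1.0                   # patched behavior is in effect
helper.restore()                              # undo all recorded patches
assert math.sqrt(4) == 2.0                    # original behavior restored
```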
class DisableInitialization(ReplaceHelper):
"""
When an object of this class enters a `with` block, it starts:
- preventing torch's layer initialization functions from working
@@ -21,7 +44,7 @@ class DisableInitialization:
"""
def __init__(self, disable_clip=True):
self.replaced = []
super().__init__()
self.disable_clip = disable_clip
def replace(self, obj, field, func):
@@ -86,8 +109,124 @@ class DisableInitialization:
self.transformers_utils_hub_get_from_cache = self.replace(transformers.utils.hub, 'get_from_cache', transformers_utils_hub_get_from_cache)
def __exit__(self, exc_type, exc_val, exc_tb):
for obj, field, original in self.replaced:
setattr(obj, field, original)
self.restore()
self.replaced.clear()
class InitializeOnMeta(ReplaceHelper):
"""
Context manager that causes all parameters for linear/conv2d/mha layers to be allocated on meta device,
which results in those parameters having no values and taking no memory. model.to() will be broken and
will need to be repaired by using LoadStateDictOnMeta below when loading params from state dict.
Usage:
```
with sd_disable_initialization.InitializeOnMeta():
sd_model = instantiate_from_config(sd_config.model)
```
"""
def __enter__(self):
if shared.cmd_opts.disable_model_loading_ram_optimization:
return
def set_device(x):
x["device"] = "meta"
return x
linear_init = self.replace(torch.nn.Linear, '__init__', lambda *args, **kwargs: linear_init(*args, **set_device(kwargs)))
conv2d_init = self.replace(torch.nn.Conv2d, '__init__', lambda *args, **kwargs: conv2d_init(*args, **set_device(kwargs)))
mha_init = self.replace(torch.nn.MultiheadAttention, '__init__', lambda *args, **kwargs: mha_init(*args, **set_device(kwargs)))
self.replace(torch.nn.Module, 'to', lambda *args, **kwargs: None)
def __exit__(self, exc_type, exc_val, exc_tb):
self.restore()
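The reason this saves RAM is that meta-device tensors carry shape and dtype but no storage. A quick PyTorch-only illustration of that property, independent of the webui classes:
```
import torch

# Parameters created on the meta device have shape and dtype but no storage,
# so building even a large model this way costs almost no memory.
layer = torch.nn.Linear(1024, 1024, device="meta")
print(layer.weight.is_meta)   # True
print(layer.weight.shape)     # torch.Size([1024, 1024])

# Meta tensors hold no values, so the layer is unusable until its parameters
# are replaced with real ones (here, by LoadStateDictOnMeta during loading).
try:
    layer(torch.zeros(1, 1024))
except (RuntimeError, NotImplementedError) as e:
    print("cannot compute on meta tensors:", type(e).__name__)
```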
class LoadStateDictOnMeta(ReplaceHelper):
"""
Context manager that allows reading parameters from state_dict into a model that has some of its parameters on the meta device.
As those parameters are read from state_dict, they will be deleted from it, so by the end state_dict will be mostly empty, to save memory.
Meant to be used together with InitializeOnMeta above.
Usage:
```
with sd_disable_initialization.LoadStateDictOnMeta(state_dict):
model.load_state_dict(state_dict, strict=False)
```
"""
def __init__(self, state_dict, device, weight_dtype_conversion=None):
super().__init__()
self.state_dict = state_dict
self.device = device
self.weight_dtype_conversion = weight_dtype_conversion or {}
self.default_dtype = self.weight_dtype_conversion.get('')
def get_weight_dtype(self, key):
key_first_term, _ = key.split('.', 1)
return self.weight_dtype_conversion.get(key_first_term, self.default_dtype)
def __enter__(self):
if shared.cmd_opts.disable_model_loading_ram_optimization:
return
sd = self.state_dict
device = self.device
def load_from_state_dict(original, module, state_dict, prefix, *args, **kwargs):
used_param_keys = []
for name, param in module._parameters.items():
if param is None:
continue
key = prefix + name
sd_param = sd.pop(key, None)
if sd_param is not None:
state_dict[key] = sd_param.to(dtype=self.get_weight_dtype(key))
used_param_keys.append(key)
if param.is_meta:
dtype = sd_param.dtype if sd_param is not None else param.dtype
module._parameters[name] = torch.nn.parameter.Parameter(torch.zeros_like(param, device=device, dtype=dtype), requires_grad=param.requires_grad)
for name in module._buffers:
key = prefix + name
sd_param = sd.pop(key, None)
if sd_param is not None:
state_dict[key] = sd_param
used_param_keys.append(key)
original(module, state_dict, prefix, *args, **kwargs)
for key in used_param_keys:
state_dict.pop(key, None)
def load_state_dict(original, module, state_dict, strict=True):
"""torch makes a lot of copies of the dictionary with weights, so just deleting entries from state_dict does not help
because the same values are stored in multiple copies of the dict. The trick used here is to give torch a dict with
all weights on meta device, i.e. deleted, and then it doesn't matter how many copies torch makes.
In _load_from_state_dict, the correct weight will be obtained from a single dict with the right weights (sd).
The dangerous thing about this is that if _load_from_state_dict is not called (for example, if some exotic module
overrides the function and does not call the original), the state dict will simply fail to load because the weights
would be on the meta device.
"""
if state_dict == sd:
state_dict = {k: v.to(device="meta", dtype=v.dtype) for k, v in state_dict.items()}
original(module, state_dict, strict=strict)
module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
module_load_from_state_dict = self.replace(torch.nn.Module, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(module_load_from_state_dict, *args, **kwargs))
linear_load_from_state_dict = self.replace(torch.nn.Linear, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(linear_load_from_state_dict, *args, **kwargs))
conv2d_load_from_state_dict = self.replace(torch.nn.Conv2d, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(conv2d_load_from_state_dict, *args, **kwargs))
mha_load_from_state_dict = self.replace(torch.nn.MultiheadAttention, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(mha_load_from_state_dict, *args, **kwargs))
layer_norm_load_from_state_dict = self.replace(torch.nn.LayerNorm, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(layer_norm_load_from_state_dict, *args, **kwargs))
group_norm_load_from_state_dict = self.replace(torch.nn.GroupNorm, '_load_from_state_dict', lambda *args, **kwargs: load_from_state_dict(group_norm_load_from_state_dict, *args, **kwargs))
def __exit__(self, exc_type, exc_val, exc_tb):
self.restore()
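The key step is `v.to(device="meta")`: it keeps the dict's keys, shapes, and dtypes but drops the storage, so however many internal copies of the dict torch makes, none of them hold real weights; the single real copy stays in `sd`. A minimal sketch of the conversion:
```
import torch

sd = {"weight": torch.randn(4, 4), "bias": torch.randn(4)}

# Same keys, shapes, and dtypes, but every value lives on meta -- no storage.
meta_sd = {k: v.to(device="meta", dtype=v.dtype) for k, v in sd.items()}

print(all(v.is_meta for v in meta_sd.values()))  # True
print(sd["weight"].is_meta)                      # False -- real weights stay in sd
```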
+39 -14
View File
@@ -2,8 +2,7 @@ import torch
from torch.nn.functional import silu
from types import MethodType
import modules.textual_inversion.textual_inversion
from modules import devices, sd_hijack_optimizations, shared, script_callbacks, errors, sd_unet
from modules import devices, sd_hijack_optimizations, shared, script_callbacks, errors, sd_unet, patches
from modules.hypernetworks import hypernetwork
from modules.shared import cmd_opts
from modules import sd_hijack_clip, sd_hijack_open_clip, sd_hijack_unet, sd_hijack_xlmr, xlmr
@@ -11,6 +10,7 @@ from modules import sd_hijack_clip, sd_hijack_open_clip, sd_hijack_unet, sd_hija
import ldm.modules.attention
import ldm.modules.diffusionmodules.model
import ldm.modules.diffusionmodules.openaimodel
import ldm.models.diffusion.ddpm
import ldm.models.diffusion.ddim
import ldm.models.diffusion.plms
import ldm.modules.encoders.modules
@@ -30,12 +30,16 @@ ldm.modules.attention.MemoryEfficientCrossAttention = ldm.modules.attention.Cros
ldm.modules.attention.BasicTransformerBlock.ATTENTION_MODES["softmax-xformers"] = ldm.modules.attention.CrossAttention
# silence new console spam from SD2
ldm.modules.attention.print = lambda *args: None
ldm.modules.diffusionmodules.model.print = lambda *args: None
ldm.modules.attention.print = shared.ldm_print
ldm.modules.diffusionmodules.model.print = shared.ldm_print
ldm.util.print = shared.ldm_print
ldm.models.diffusion.ddpm.print = shared.ldm_print
optimizers = []
current_optimizer: sd_hijack_optimizations.SdOptimization = None
ldm_original_forward = patches.patch(__file__, ldm.modules.diffusionmodules.openaimodel.UNetModel, "forward", sd_unet.UNetModel_forward)
sgm_original_forward = patches.patch(__file__, sgm.modules.diffusionmodules.openaimodel.UNetModel, "forward", sd_unet.UNetModel_forward)
def list_optimizers():
new_optimizers = script_callbacks.list_optimizers_callback()
@@ -164,12 +168,13 @@ class StableDiffusionModelHijack:
clip = None
optimization_method = None
embedding_db = modules.textual_inversion.textual_inversion.EmbeddingDatabase()
def __init__(self):
import modules.textual_inversion.textual_inversion
self.extra_generation_params = {}
self.comments = []
self.embedding_db = modules.textual_inversion.textual_inversion.EmbeddingDatabase()
self.embedding_db.add_embedding_dir(cmd_opts.embeddings_dir)
def apply_optimizations(self, option=None):
@@ -197,7 +202,7 @@ class StableDiffusionModelHijack:
conditioner.embedders[i] = sd_hijack_clip.FrozenCLIPEmbedderForSDXLWithCustomWords(embedder, self)
text_cond_models.append(conditioner.embedders[i])
if typename == 'FrozenOpenCLIPEmbedder2':
embedder.model.token_embedding = EmbeddingsWithFixes(embedder.model.token_embedding, self)
embedder.model.token_embedding = EmbeddingsWithFixes(embedder.model.token_embedding, self, textual_inversion_key='clip_g')
conditioner.embedders[i] = sd_hijack_open_clip.FrozenOpenCLIPEmbedder2WithCustomWords(embedder, self)
text_cond_models.append(conditioner.embedders[i])
@@ -237,13 +242,30 @@ class StableDiffusionModelHijack:
self.layers = flatten(m)
if not hasattr(ldm.modules.diffusionmodules.openaimodel, 'copy_of_UNetModel_forward_for_webui'):
ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui = ldm.modules.diffusionmodules.openaimodel.UNetModel.forward
if isinstance(m, ldm.models.diffusion.ddpm.LatentDiffusion):
sd_unet.original_forward = ldm_original_forward
elif isinstance(m, sgm.models.diffusion.DiffusionEngine):
sd_unet.original_forward = sgm_original_forward
else:
sd_unet.original_forward = None
ldm.modules.diffusionmodules.openaimodel.UNetModel.forward = sd_unet.UNetModel_forward
def undo_hijack(self, m):
if type(m.cond_stage_model) == sd_hijack_xlmr.FrozenXLMREmbedderWithCustomWords:
conditioner = getattr(m, 'conditioner', None)
if conditioner:
for i in range(len(conditioner.embedders)):
embedder = conditioner.embedders[i]
if isinstance(embedder, (sd_hijack_open_clip.FrozenOpenCLIPEmbedderWithCustomWords, sd_hijack_open_clip.FrozenOpenCLIPEmbedder2WithCustomWords)):
embedder.wrapped.model.token_embedding = embedder.wrapped.model.token_embedding.wrapped
conditioner.embedders[i] = embedder.wrapped
if isinstance(embedder, sd_hijack_clip.FrozenCLIPEmbedderForSDXLWithCustomWords):
embedder.wrapped.transformer.text_model.embeddings.token_embedding = embedder.wrapped.transformer.text_model.embeddings.token_embedding.wrapped
conditioner.embedders[i] = embedder.wrapped
if hasattr(m, 'cond_stage_model'):
delattr(m, 'cond_stage_model')
elif type(m.cond_stage_model) == sd_hijack_xlmr.FrozenXLMREmbedderWithCustomWords:
m.cond_stage_model = m.cond_stage_model.wrapped
elif type(m.cond_stage_model) == sd_hijack_clip.FrozenCLIPEmbedderWithCustomWords:
@@ -263,7 +285,8 @@ class StableDiffusionModelHijack:
self.layers = None
self.clip = None
ldm.modules.diffusionmodules.openaimodel.UNetModel.forward = ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui
sd_unet.original_forward = None
def apply_circular(self, enable):
if self.circular_enabled == enable:
@@ -292,10 +315,11 @@ class StableDiffusionModelHijack:
class EmbeddingsWithFixes(torch.nn.Module):
def __init__(self, wrapped, embeddings):
def __init__(self, wrapped, embeddings, textual_inversion_key='clip_l'):
super().__init__()
self.wrapped = wrapped
self.embeddings = embeddings
self.textual_inversion_key = textual_inversion_key
def forward(self, input_ids):
batch_fixes = self.embeddings.fixes
@@ -309,7 +333,8 @@ class EmbeddingsWithFixes(torch.nn.Module):
vecs = []
for fixes, tensor in zip(batch_fixes, inputs_embeds):
for offset, embedding in fixes:
emb = devices.cond_cast_unet(embedding.vec)
vec = embedding.vec[self.textual_inversion_key] if isinstance(embedding.vec, dict) else embedding.vec
emb = devices.cond_cast_unet(vec)
emb_len = min(tensor.shape[0] - offset - 1, emb.shape[0])
tensor = torch.cat([tensor[0:offset + 1], emb[0:emb_len], tensor[offset + 1 + emb_len:]])
+3 -1
View File
@@ -161,7 +161,7 @@ class FrozenCLIPEmbedderWithCustomWordsBase(torch.nn.Module):
position += 1
continue
emb_len = int(embedding.vec.shape[0])
emb_len = int(embedding.vectors)
if len(chunk.tokens) + emb_len > self.chunk_length:
next_chunk()
@@ -245,6 +245,8 @@ class FrozenCLIPEmbedderWithCustomWordsBase(torch.nn.Module):
hashes.append(f"{name}: {shorthash}")
if hashes:
if self.hijack.extra_generation_params.get("TI hashes"):
hashes.append(self.hijack.extra_generation_params.get("TI hashes"))
self.hijack.extra_generation_params["TI hashes"] = ", ".join(hashes)
if getattr(self.wrapped, 'return_pooled', False):
-97
View File
@@ -1,97 +0,0 @@
import torch
import ldm.models.diffusion.ddpm
import ldm.models.diffusion.ddim
import ldm.models.diffusion.plms
from ldm.models.diffusion.ddim import noise_like
from ldm.models.diffusion.sampling_util import norm_thresholding
@torch.no_grad()
def p_sample_plms(self, x, c, t, index, repeat_noise=False, use_original_steps=False, quantize_denoised=False,
temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None,
unconditional_guidance_scale=1., unconditional_conditioning=None, old_eps=None, t_next=None, dynamic_threshold=None):
b, *_, device = *x.shape, x.device
def get_model_output(x, t):
if unconditional_conditioning is None or unconditional_guidance_scale == 1.:
e_t = self.model.apply_model(x, t, c)
else:
x_in = torch.cat([x] * 2)
t_in = torch.cat([t] * 2)
if isinstance(c, dict):
assert isinstance(unconditional_conditioning, dict)
c_in = {}
for k in c:
if isinstance(c[k], list):
c_in[k] = [
torch.cat([unconditional_conditioning[k][i], c[k][i]])
for i in range(len(c[k]))
]
else:
c_in[k] = torch.cat([unconditional_conditioning[k], c[k]])
else:
c_in = torch.cat([unconditional_conditioning, c])
e_t_uncond, e_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
e_t = e_t_uncond + unconditional_guidance_scale * (e_t - e_t_uncond)
if score_corrector is not None:
assert self.model.parameterization == "eps"
e_t = score_corrector.modify_score(self.model, e_t, x, t, c, **corrector_kwargs)
return e_t
alphas = self.model.alphas_cumprod if use_original_steps else self.ddim_alphas
alphas_prev = self.model.alphas_cumprod_prev if use_original_steps else self.ddim_alphas_prev
sqrt_one_minus_alphas = self.model.sqrt_one_minus_alphas_cumprod if use_original_steps else self.ddim_sqrt_one_minus_alphas
sigmas = self.model.ddim_sigmas_for_original_num_steps if use_original_steps else self.ddim_sigmas
def get_x_prev_and_pred_x0(e_t, index):
# select parameters corresponding to the currently considered timestep
a_t = torch.full((b, 1, 1, 1), alphas[index], device=device)
a_prev = torch.full((b, 1, 1, 1), alphas_prev[index], device=device)
sigma_t = torch.full((b, 1, 1, 1), sigmas[index], device=device)
sqrt_one_minus_at = torch.full((b, 1, 1, 1), sqrt_one_minus_alphas[index],device=device)
# current prediction for x_0
pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()
if quantize_denoised:
pred_x0, _, *_ = self.model.first_stage_model.quantize(pred_x0)
if dynamic_threshold is not None:
pred_x0 = norm_thresholding(pred_x0, dynamic_threshold)
# direction pointing to x_t
dir_xt = (1. - a_prev - sigma_t**2).sqrt() * e_t
noise = sigma_t * noise_like(x.shape, device, repeat_noise) * temperature
if noise_dropout > 0.:
noise = torch.nn.functional.dropout(noise, p=noise_dropout)
x_prev = a_prev.sqrt() * pred_x0 + dir_xt + noise
return x_prev, pred_x0
e_t = get_model_output(x, t)
if len(old_eps) == 0:
# Pseudo Improved Euler (2nd order)
x_prev, pred_x0 = get_x_prev_and_pred_x0(e_t, index)
e_t_next = get_model_output(x_prev, t_next)
e_t_prime = (e_t + e_t_next) / 2
elif len(old_eps) == 1:
# 2nd order Pseudo Linear Multistep (Adams-Bashforth)
e_t_prime = (3 * e_t - old_eps[-1]) / 2
elif len(old_eps) == 2:
# 3rd order Pseudo Linear Multistep (Adams-Bashforth)
e_t_prime = (23 * e_t - 16 * old_eps[-1] + 5 * old_eps[-2]) / 12
elif len(old_eps) >= 3:
# 4th order Pseudo Linear Multistep (Adams-Bashforth)
e_t_prime = (55 * e_t - 59 * old_eps[-1] + 37 * old_eps[-2] - 9 * old_eps[-3]) / 24
x_prev, pred_x0 = get_x_prev_and_pred_x0(e_t_prime, index)
return x_prev, pred_x0, e_t
def do_inpainting_hijack():
# p_sample_plms is needed because PLMS can't work with dicts as conditionings
ldm.models.diffusion.plms.PLMSSampler.p_sample_plms = p_sample_plms
+12 -5
View File
@@ -1,6 +1,7 @@
from __future__ import annotations
import math
import psutil
import platform
import torch
from torch import einsum
@@ -94,7 +95,10 @@ class SdOptimizationSdp(SdOptimizationSdpNoMem):
class SdOptimizationSubQuad(SdOptimization):
name = "sub-quadratic"
cmd_opt = "opt_sub_quad_attention"
priority = 10
@property
def priority(self):
return 1000 if shared.device.type == 'mps' else 10
def apply(self):
ldm.modules.attention.CrossAttention.forward = sub_quad_attention_forward
@@ -120,7 +124,7 @@ class SdOptimizationInvokeAI(SdOptimization):
@property
def priority(self):
return 1000 if not torch.cuda.is_available() else 10
return 1000 if shared.device.type != 'mps' and not torch.cuda.is_available() else 10
def apply(self):
ldm.modules.attention.CrossAttention.forward = split_cross_attention_forward_invokeAI
@@ -256,9 +260,9 @@ def split_cross_attention_forward(self, x, context=None, mask=None, **kwargs):
raise RuntimeError(f'Not enough memory, use lower resolution (max approx. {max_res}x{max_res}). '
f'Need: {mem_required / 64 / gb:0.1f}GB free, Have:{mem_free_total / gb:0.1f}GB free')
slice_size = q.shape[1] // steps if (q.shape[1] % steps) == 0 else q.shape[1]
slice_size = q.shape[1] // steps
for i in range(0, q.shape[1], slice_size):
end = i + slice_size
end = min(i + slice_size, q.shape[1])
s1 = einsum('b i d, b j d -> b i j', q[:, i:end], k)
s2 = s1.softmax(dim=-1, dtype=q.dtype)
@@ -427,7 +431,10 @@ def sub_quad_attention(q, k, v, q_chunk_size=1024, kv_chunk_size=None, kv_chunk_
qk_matmul_size_bytes = batch_x_heads * bytes_per_token * q_tokens * k_tokens
if chunk_threshold is None:
chunk_threshold_bytes = int(get_available_vram() * 0.9) if q.device.type == 'mps' else int(get_available_vram() * 0.7)
if q.device.type == 'mps':
chunk_threshold_bytes = 268435456 * (2 if platform.processor() == 'i386' else bytes_per_token)
else:
chunk_threshold_bytes = int(get_available_vram() * 0.7)
elif chunk_threshold == 0:
chunk_threshold_bytes = None
else:
+259 -63
View File
@@ -7,17 +7,17 @@ import threading
import torch
import re
import safetensors.torch
from omegaconf import OmegaConf
from omegaconf import OmegaConf, ListConfig
from os import mkdir
from urllib import request
import ldm.modules.midas as midas
from ldm.util import instantiate_from_config
from modules import paths, shared, modelloader, devices, script_callbacks, sd_vae, sd_disable_initialization, errors, hashes, sd_models_config, sd_unet, sd_models_xl
from modules.sd_hijack_inpainting import do_inpainting_hijack
from modules import paths, shared, modelloader, devices, script_callbacks, sd_vae, sd_disable_initialization, errors, hashes, sd_models_config, sd_unet, sd_models_xl, cache, extra_networks, processing, lowvram, sd_hijack, patches
from modules.timer import Timer
import tomesd
import numpy as np
model_dir = "Stable-diffusion"
model_path = os.path.abspath(os.path.join(paths.models_path, model_dir))
@@ -28,13 +28,34 @@ checkpoint_alisases = checkpoint_aliases # for compatibility with old name
checkpoints_loaded = collections.OrderedDict()
def replace_key(d, key, new_key, value):
keys = list(d.keys())
d[new_key] = value
if key not in keys:
return d
index = keys.index(key)
keys[index] = new_key
new_d = {k: d[k] for k in keys}
d.clear()
d.update(new_d)
return d
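`replace_key` renames a dict key in place while keeping its position in insertion order, which matters because the order of `checkpoints_list` drives the checkpoint dropdown. A usage example, with the function copied verbatim from above so the snippet runs on its own:
```
def replace_key(d, key, new_key, value):
    """Copied from above: rename `key` to `new_key` in place, keeping its position."""
    keys = list(d.keys())
    d[new_key] = value
    if key not in keys:
        return d
    index = keys.index(key)
    keys[index] = new_key
    new_d = {k: d[k] for k in keys}
    d.clear()
    d.update(new_d)
    return d


d = {"a.ckpt": 1, "b.ckpt": 2, "c.ckpt": 3}
replace_key(d, "b.ckpt", "b.ckpt [1234abcdef]", 2)
print(list(d))  # ['a.ckpt', 'b.ckpt [1234abcdef]', 'c.ckpt'] -- position preserved
```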
class CheckpointInfo:
def __init__(self, filename):
self.filename = filename
abspath = os.path.abspath(filename)
abs_ckpt_dir = os.path.abspath(shared.cmd_opts.ckpt_dir) if shared.cmd_opts.ckpt_dir is not None else None
if shared.cmd_opts.ckpt_dir is not None and abspath.startswith(shared.cmd_opts.ckpt_dir):
name = abspath.replace(shared.cmd_opts.ckpt_dir, '')
self.is_safetensors = os.path.splitext(filename)[1].lower() == ".safetensors"
if abs_ckpt_dir and abspath.startswith(abs_ckpt_dir):
name = abspath.replace(abs_ckpt_dir, '')
elif abspath.startswith(model_path):
name = abspath.replace(model_path, '')
else:
@@ -43,6 +64,19 @@ class CheckpointInfo:
if name.startswith("\\") or name.startswith("/"):
name = name[1:]
def read_metadata():
metadata = read_metadata_from_safetensors(filename)
self.modelspec_thumbnail = metadata.pop('modelspec.thumbnail', None)
return metadata
self.metadata = {}
if self.is_safetensors:
try:
self.metadata = cache.cached_data_for_file('safetensors-metadata', "checkpoint/" + name, filename, read_metadata)
except Exception as e:
errors.display(e, f"reading metadata for {filename}")
self.name = name
self.name_for_extra = os.path.splitext(os.path.basename(filename))[0]
self.model_name = os.path.splitext(name.replace("/", "_").replace("\\", "_"))[0]
@@ -52,17 +86,11 @@ class CheckpointInfo:
self.shorthash = self.sha256[0:10] if self.sha256 else None
self.title = name if self.shorthash is None else f'{name} [{self.shorthash}]'
self.short_title = self.name_for_extra if self.shorthash is None else f'{self.name_for_extra} [{self.shorthash}]'
self.ids = [self.hash, self.model_name, self.title, name, f'{name} [{self.hash}]'] + ([self.shorthash, self.sha256, f'{self.name} [{self.shorthash}]'] if self.shorthash else [])
self.metadata = {}
_, ext = os.path.splitext(self.filename)
if ext.lower() == ".safetensors":
try:
self.metadata = read_metadata_from_safetensors(filename)
except Exception as e:
errors.display(e, f"reading checkpoint metadata: {filename}")
self.ids = [self.hash, self.model_name, self.title, name, self.name_for_extra, f'{name} [{self.hash}]']
if self.shorthash:
self.ids += [self.shorthash, self.sha256, f'{self.name} [{self.shorthash}]', f'{self.name_for_extra} [{self.shorthash}]']
def register(self):
checkpoints_list[self.title] = self
@@ -74,13 +102,20 @@ class CheckpointInfo:
if self.sha256 is None:
return
self.shorthash = self.sha256[0:10]
shorthash = self.sha256[0:10]
if self.shorthash == self.sha256[0:10]:
return self.shorthash
self.shorthash = shorthash
if self.shorthash not in self.ids:
self.ids += [self.shorthash, self.sha256, f'{self.name} [{self.shorthash}]']
self.ids += [self.shorthash, self.sha256, f'{self.name} [{self.shorthash}]', f'{self.name_for_extra} [{self.shorthash}]']
checkpoints_list.pop(self.title)
old_title = self.title
self.title = f'{self.name} [{self.shorthash}]'
self.short_title = f'{self.name_for_extra} [{self.shorthash}]'
replace_key(checkpoints_list, old_title, self.title, self)
self.register()
return self.shorthash
@@ -96,19 +131,16 @@ except Exception:
def setup_model():
"""called once at startup to do various one-time tasks related to SD models"""
os.makedirs(model_path, exist_ok=True)
enable_midas_autodownload()
patch_given_betas()
def checkpoint_tiles():
def convert(name):
return int(name) if name.isdigit() else name.lower()
def alphanumeric_key(key):
return [convert(c) for c in re.split('([0-9]+)', key)]
return sorted([x.title for x in checkpoints_list.values()], key=alphanumeric_key)
def checkpoint_tiles(use_short=False):
return [x.short_title if use_short else x.title for x in checkpoints_list.values()]
def list_models():
@@ -131,12 +163,18 @@ def list_models():
elif cmd_ckpt is not None and cmd_ckpt != shared.default_sd_model_file:
print(f"Checkpoint in --ckpt argument not found (Possible it was moved to {model_path}: {cmd_ckpt}", file=sys.stderr)
for filename in sorted(model_list, key=str.lower):
for filename in model_list:
checkpoint_info = CheckpointInfo(filename)
checkpoint_info.register()
re_strip_checksum = re.compile(r"\s*\[[^]]+]\s*$")
def get_closet_checkpoint_match(search_string):
if not search_string:
return None
checkpoint_info = checkpoint_aliases.get(search_string, None)
if checkpoint_info is not None:
return checkpoint_info
@@ -145,6 +183,11 @@ def get_closet_checkpoint_match(search_string):
if found:
return found[0]
search_string_without_checksum = re.sub(re_strip_checksum, '', search_string)
found = sorted([info for info in checkpoints_list.values() if search_string_without_checksum in info.title], key=lambda x: len(x.title))
if found:
return found[0]
return None
@@ -271,6 +314,8 @@ def get_checkpoint_state_dict(checkpoint_info: CheckpointInfo, timer):
if checkpoint_info in checkpoints_loaded:
# use checkpoint cache
print(f"Loading weights [{sd_model_hash}] from cache")
# move to end as latest
checkpoints_loaded.move_to_end(checkpoint_info)
return checkpoints_loaded[checkpoint_info]
print(f"Loading weights [{sd_model_hash}] from {checkpoint_info.filename}")
@@ -280,11 +325,27 @@ def get_checkpoint_state_dict(checkpoint_info: CheckpointInfo, timer):
return res
class SkipWritingToConfig:
"""This context manager prevents load_model_weights from writing checkpoint name to the config when it loads weight."""
skip = False
previous = None
def __enter__(self):
self.previous = SkipWritingToConfig.skip
SkipWritingToConfig.skip = True
return self
def __exit__(self, exc_type, exc_value, exc_traceback):
SkipWritingToConfig.skip = self.previous
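Because `__enter__` saves the previous value of the class-level flag and `__exit__` restores it, the context manager nests correctly. A quick demonstration using a standalone copy of the class:
```
class SkipWritingToConfig:
    """Copied from above: prevents checkpoint-name writes to config while active."""
    skip = False
    previous = None

    def __enter__(self):
        self.previous = SkipWritingToConfig.skip
        SkipWritingToConfig.skip = True
        return self

    def __exit__(self, exc_type, exc_value, exc_traceback):
        SkipWritingToConfig.skip = self.previous


print(SkipWritingToConfig.skip)          # False
with SkipWritingToConfig():
    print(SkipWritingToConfig.skip)      # True
    with SkipWritingToConfig():
        print(SkipWritingToConfig.skip)  # True
    print(SkipWritingToConfig.skip)      # True -- inner exit restores outer True
print(SkipWritingToConfig.skip)          # False
```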
def load_model_weights(model, checkpoint_info: CheckpointInfo, state_dict, timer):
sd_model_hash = checkpoint_info.calculate_shorthash()
timer.record("calculate hash")
shared.opts.data["sd_model_checkpoint"] = checkpoint_info.title
if not SkipWritingToConfig.skip:
shared.opts.data["sd_model_checkpoint"] = checkpoint_info.title
if state_dict is None:
state_dict = get_checkpoint_state_dict(checkpoint_info, timer)
@@ -297,18 +358,23 @@ def load_model_weights(model, checkpoint_info: CheckpointInfo, state_dict, timer
sd_models_xl.extend_sdxl(model)
model.load_state_dict(state_dict, strict=False)
del state_dict
timer.record("apply weights to model")
if shared.opts.sd_checkpoint_cache > 0:
# cache newly loaded model
checkpoints_loaded[checkpoint_info] = model.state_dict().copy()
checkpoints_loaded[checkpoint_info] = state_dict
del state_dict
if shared.cmd_opts.opt_channelslast:
model.to(memory_format=torch.channels_last)
timer.record("apply channels_last")
if not shared.cmd_opts.no_half:
if shared.cmd_opts.no_half:
model.float()
devices.dtype_unet = torch.float32
timer.record("apply float()")
else:
vae = model.first_stage_model
depth_model = getattr(model, 'depth_model', None)
@@ -324,9 +390,9 @@ def load_model_weights(model, checkpoint_info: CheckpointInfo, state_dict, timer
if depth_model:
model.depth_model = depth_model
devices.dtype_unet = torch.float16
timer.record("apply half()")
devices.dtype_unet = torch.float16 if model.is_sdxl and not shared.cmd_opts.no_half else model.model.diffusion_model.dtype
devices.unet_needs_upcast = shared.cmd_opts.upcast_sampling and devices.dtype == torch.float16 and devices.dtype_unet == torch.float16
model.first_stage_model.to(devices.dtype_vae)
@@ -346,7 +412,7 @@ def load_model_weights(model, checkpoint_info: CheckpointInfo, state_dict, timer
sd_vae.delete_base_vae()
sd_vae.clear_loaded_vae()
vae_file, vae_source = sd_vae.resolve_vae(checkpoint_info.filename)
vae_file, vae_source = sd_vae.resolve_vae(checkpoint_info.filename).tuple()
sd_vae.load_vae(model, vae_file, vae_source)
timer.record("load VAE")
@@ -394,6 +460,20 @@ def enable_midas_autodownload():
midas.api.load_model = load_model_wrapper
def patch_given_betas():
import ldm.models.diffusion.ddpm
def patched_register_schedule(*args, **kwargs):
"""a modified version of register_schedule function that converts plain list from Omegaconf into numpy"""
if isinstance(args[1], ListConfig):
args = (args[0], np.array(args[1]), *args[2:])
original_register_schedule(*args, **kwargs)
original_register_schedule = patches.patch(__name__, ldm.models.diffusion.ddpm.DDPM, 'register_schedule', patched_register_schedule)
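The patch exists because OmegaConf hands `register_schedule` a `ListConfig` where DDPM's arithmetic expects a numpy array. A small illustration of the conversion, independent of ldm:
```
import numpy as np
from omegaconf import OmegaConf, ListConfig

cfg = OmegaConf.create({"given_betas": [0.1, 0.2, 0.3]})
betas = cfg.given_betas
print(isinstance(betas, ListConfig))  # True -- a ListConfig, not a numpy array

# The same conversion patched_register_schedule applies before calling the original:
if isinstance(betas, ListConfig):
    betas = np.array(betas)
print(betas * 2)  # numpy arithmetic now works: [0.2 0.4 0.6]
```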
def repair_config(sd_config):
if not hasattr(sd_config.model.params, "use_ema"):
@@ -423,6 +503,7 @@ sdxl_refiner_clip_weight = 'conditioner.embedders.0.model.ln_final.weight'
class SdModelData:
def __init__(self):
self.sd_model = None
self.loaded_sd_models = []
self.was_loaded_at_least_once = False
self.lock = threading.Lock()
@@ -437,6 +518,7 @@ class SdModelData:
try:
load_model()
except Exception as e:
errors.display(e, "loading stable diffusion model", full_traceback=True)
print("", file=sys.stderr)
@@ -445,14 +527,30 @@ class SdModelData:
return self.sd_model
def set_sd_model(self, v):
def set_sd_model(self, v, already_loaded=False):
self.sd_model = v
if already_loaded:
sd_vae.base_vae = getattr(v, "base_vae", None)
sd_vae.loaded_vae_file = getattr(v, "loaded_vae_file", None)
sd_vae.checkpoint_info = v.sd_checkpoint_info
try:
self.loaded_sd_models.remove(v)
except ValueError:
pass
if v is not None:
self.loaded_sd_models.insert(0, v)
model_data = SdModelData()
def get_empty_cond(sd_model):
p = processing.StableDiffusionProcessingTxt2Img()
extra_networks.activate(p, {})
if hasattr(sd_model, 'conditioner'):
d = sd_model.get_learned_conditioning([""])
return d['crossattn']
@@ -460,20 +558,46 @@ def get_empty_cond(sd_model):
return sd_model.cond_stage_model([""])
def send_model_to_cpu(m):
if m.lowvram:
lowvram.send_everything_to_cpu()
else:
m.to(devices.cpu)
devices.torch_gc()
def model_target_device(m):
if lowvram.is_needed(m):
return devices.cpu
else:
return devices.device
def send_model_to_device(m):
lowvram.apply(m)
if not m.lowvram:
m.to(shared.device)
def send_model_to_trash(m):
m.to(device="meta")
devices.torch_gc()
def load_model(checkpoint_info=None, already_loaded_state_dict=None):
from modules import lowvram, sd_hijack
from modules import sd_hijack
checkpoint_info = checkpoint_info or select_checkpoint()
timer = Timer()
if model_data.sd_model:
sd_hijack.model_hijack.undo_hijack(model_data.sd_model)
send_model_to_trash(model_data.sd_model)
model_data.sd_model = None
gc.collect()
devices.torch_gc()
do_inpainting_hijack()
timer = Timer()
timer.record("unload existing model")
if already_loaded_state_dict is not None:
state_dict = already_loaded_state_dict
@@ -495,25 +619,35 @@ def load_model(checkpoint_info=None, already_loaded_state_dict=None):
sd_model = None
try:
with sd_disable_initialization.DisableInitialization(disable_clip=clip_is_included_into_sd or shared.cmd_opts.do_not_download_clip):
sd_model = instantiate_from_config(sd_config.model)
except Exception:
pass
with sd_disable_initialization.InitializeOnMeta():
sd_model = instantiate_from_config(sd_config.model)
except Exception as e:
errors.display(e, "creating model quickly", full_traceback=True)
if sd_model is None:
print('Failed to create model quickly; will retry using slow method.', file=sys.stderr)
sd_model = instantiate_from_config(sd_config.model)
with sd_disable_initialization.InitializeOnMeta():
sd_model = instantiate_from_config(sd_config.model)
sd_model.used_config = checkpoint_config
timer.record("create model")
load_model_weights(sd_model, checkpoint_info, state_dict, timer)
if shared.cmd_opts.lowvram or shared.cmd_opts.medvram:
lowvram.setup_for_low_vram(sd_model, shared.cmd_opts.medvram)
if shared.cmd_opts.no_half:
weight_dtype_conversion = None
else:
sd_model.to(shared.device)
weight_dtype_conversion = {
'first_stage_model': None,
'': torch.float16,
}
with sd_disable_initialization.LoadStateDictOnMeta(state_dict, device=model_target_device(sd_model), weight_dtype_conversion=weight_dtype_conversion):
load_model_weights(sd_model, checkpoint_info, state_dict, timer)
timer.record("load weights from state dict")
send_model_to_device(sd_model)
timer.record("move model to device")
sd_hijack.model_hijack.hijack(sd_model)
@@ -521,7 +655,7 @@ def load_model(checkpoint_info=None, already_loaded_state_dict=None):
timer.record("hijack")
sd_model.eval()
model_data.sd_model = sd_model
model_data.set_sd_model(sd_model)
model_data.was_loaded_at_least_once = True
sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True) # Reload embeddings after model load as they may or may not fit the model
@@ -542,10 +676,70 @@ def load_model(checkpoint_info=None, already_loaded_state_dict=None):
return sd_model
def reuse_model_from_already_loaded(sd_model, checkpoint_info, timer):
"""
Checks whether the desired checkpoint from checkpoint_info is already loaded in model_data.loaded_sd_models.
If it is loaded, returns it (moving it to GPU if necessary, and moving the currently loaded model to CPU if necessary).
If not, returns the model that can be used to load weights from checkpoint_info's file.
If no such model exists, returns None.
Additionally deletes loaded models that are over the limit set in settings (sd_checkpoints_limit).
"""
already_loaded = None
for i in reversed(range(len(model_data.loaded_sd_models))):
loaded_model = model_data.loaded_sd_models[i]
if loaded_model.sd_checkpoint_info.filename == checkpoint_info.filename:
already_loaded = loaded_model
continue
if len(model_data.loaded_sd_models) > shared.opts.sd_checkpoints_limit > 0:
print(f"Unloading model {len(model_data.loaded_sd_models)} over the limit of {shared.opts.sd_checkpoints_limit}: {loaded_model.sd_checkpoint_info.title}")
model_data.loaded_sd_models.pop()
send_model_to_trash(loaded_model)
timer.record("send model to trash")
if shared.opts.sd_checkpoints_keep_in_cpu:
send_model_to_cpu(sd_model)
timer.record("send model to cpu")
if already_loaded is not None:
send_model_to_device(already_loaded)
timer.record("send model to device")
model_data.set_sd_model(already_loaded, already_loaded=True)
if not SkipWritingToConfig.skip:
shared.opts.data["sd_model_checkpoint"] = already_loaded.sd_checkpoint_info.title
shared.opts.data["sd_checkpoint_hash"] = already_loaded.sd_checkpoint_info.sha256
print(f"Using already loaded model {already_loaded.sd_checkpoint_info.title}: done in {timer.summary()}")
sd_vae.reload_vae_weights(already_loaded)
return model_data.sd_model
elif shared.opts.sd_checkpoints_limit > 1 and len(model_data.loaded_sd_models) < shared.opts.sd_checkpoints_limit:
print(f"Loading model {checkpoint_info.title} ({len(model_data.loaded_sd_models) + 1} out of {shared.opts.sd_checkpoints_limit})")
model_data.sd_model = None
load_model(checkpoint_info)
return model_data.sd_model
elif len(model_data.loaded_sd_models) > 0:
sd_model = model_data.loaded_sd_models.pop()
model_data.sd_model = sd_model
sd_vae.base_vae = getattr(sd_model, "base_vae", None)
sd_vae.loaded_vae_file = getattr(sd_model, "loaded_vae_file", None)
sd_vae.checkpoint_info = sd_model.sd_checkpoint_info
print(f"Reusing loaded model {sd_model.sd_checkpoint_info.title} to load {checkpoint_info.title}")
return sd_model
else:
return None
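Taken together, `model_data.loaded_sd_models` behaves like a most-recently-used cache capped at `sd_checkpoints_limit`: the active model sits at index 0, a requested model already in the list moves to the front, and models past the limit are evicted from the tail. A simplified sketch of that policy, with plain strings standing in for models:
```
def touch(loaded, name, limit):
    """Move `name` to the front of `loaded`, evicting past `limit` from the tail."""
    if name in loaded:
        loaded.remove(name)
    loaded.insert(0, name)
    while len(loaded) > limit:
        evicted = loaded.pop()
        print(f"unloading {evicted}")
    return loaded


loaded = []
for name in ["sd15", "sdxl", "sd15", "anime"]:
    loaded = touch(loaded, name, limit=2)
print(loaded)  # ['anime', 'sd15'] -- 'sdxl' was evicted
```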
def reload_model_weights(sd_model=None, info=None):
from modules import lowvram, devices, sd_hijack
checkpoint_info = info or select_checkpoint()
timer = Timer()
if not sd_model:
sd_model = model_data.sd_model
@@ -554,19 +748,17 @@ def reload_model_weights(sd_model=None, info=None):
else:
current_checkpoint_info = sd_model.sd_checkpoint_info
if sd_model.sd_model_checkpoint == checkpoint_info.filename:
return
return sd_model
sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
if sd_model is not None and sd_model.sd_checkpoint_info.filename == checkpoint_info.filename:
return sd_model
if sd_model is not None:
sd_unet.apply_unet("None")
if shared.cmd_opts.lowvram or shared.cmd_opts.medvram:
lowvram.send_everything_to_cpu()
else:
sd_model.to(devices.cpu)
send_model_to_cpu(sd_model)
sd_hijack.model_hijack.undo_hijack(sd_model)
timer = Timer()
state_dict = get_checkpoint_state_dict(checkpoint_info, timer)
checkpoint_config = sd_models_config.find_checkpoint_config(state_dict, checkpoint_info)
@@ -574,7 +766,9 @@ def reload_model_weights(sd_model=None, info=None):
timer.record("find config")
if sd_model is None or checkpoint_config != sd_model.used_config:
del sd_model
if sd_model is not None:
send_model_to_trash(sd_model)
load_model(checkpoint_info, already_loaded_state_dict=state_dict)
return model_data.sd_model
@@ -591,17 +785,19 @@ def reload_model_weights(sd_model=None, info=None):
script_callbacks.model_loaded_callback(sd_model)
timer.record("script callbacks")
if not shared.cmd_opts.lowvram and not shared.cmd_opts.medvram:
if not sd_model.lowvram:
sd_model.to(devices.device)
timer.record("move model to device")
print(f"Weights loaded in {timer.summary()}.")
model_data.set_sd_model(sd_model)
sd_unet.apply_unet()
return sd_model
def unload_model_weights(sd_model=None, info=None):
from modules import devices, sd_hijack
timer = Timer()
if model_data.sd_model:
+1 -2
View File
@@ -2,7 +2,7 @@ import os
import torch
from modules import shared, paths, sd_disable_initialization
from modules import shared, paths, sd_disable_initialization, devices
sd_configs_path = shared.sd_configs_path
sd_repo_configs_path = os.path.join(paths.paths['Stable Diffusion'], "configs", "stable-diffusion")
@@ -29,7 +29,6 @@ def is_using_v_parameterization_for_sd2(state_dict):
"""
import ldm.modules.diffusionmodules.openaimodel
from modules import devices
device = devices.cpu
+31
View File
@@ -0,0 +1,31 @@
from ldm.models.diffusion.ddpm import LatentDiffusion
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from modules.sd_models import CheckpointInfo
class WebuiSdModel(LatentDiffusion):
"""This class is not actually instantinated, but its fields are created and fieeld by webui"""
lowvram: bool
"""True if lowvram/medvram optimizations are enabled -- see modules.lowvram for more info"""
sd_model_hash: str
"""short hash, 10 first characters of SHA1 hash of the model file; may be None if --no-hashing flag is used"""
sd_model_checkpoint: str
"""path to the file on disk that model weights were obtained from"""
sd_checkpoint_info: 'CheckpointInfo'
"""structure with additional information about the file with model's weights"""
is_sdxl: bool
"""True if the model's architecture is SDXL"""
is_sd2: bool
"""True if the model's architecture is SD 2.x"""
is_sd1: bool
"""True if the model's architecture is SD 1.x"""
+13 -4
View File
@@ -56,6 +56,14 @@ def encode_embedding_init_text(self: sgm.modules.GeneralConditioner, init_text,
return torch.cat(res, dim=1)
def tokenize(self: sgm.modules.GeneralConditioner, texts):
for embedder in [embedder for embedder in self.embedders if hasattr(embedder, 'tokenize')]:
return embedder.tokenize(texts)
raise AssertionError('no tokenizer available')
def process_texts(self, texts):
for embedder in [embedder for embedder in self.embedders if hasattr(embedder, 'process_texts')]:
return embedder.process_texts(texts)
@@ -68,6 +76,7 @@ def get_target_prompt_token_count(self, token_count):
# those additions to GeneralConditioner make it possible to use it as model.cond_stage_model from SD1.5 in exist
sgm.modules.GeneralConditioner.encode_embedding_init_text = encode_embedding_init_text
sgm.modules.GeneralConditioner.tokenize = tokenize
sgm.modules.GeneralConditioner.process_texts = process_texts
sgm.modules.GeneralConditioner.get_target_prompt_token_count = get_target_prompt_token_count
@@ -89,10 +98,10 @@ def extend_sdxl(model):
model.conditioner.wrapped = torch.nn.Module()
sgm.modules.attention.print = lambda *args: None
sgm.modules.diffusionmodules.model.print = lambda *args: None
sgm.modules.diffusionmodules.openaimodel.print = lambda *args: None
sgm.modules.encoders.modules.print = lambda *args: None
sgm.modules.attention.print = shared.ldm_print
sgm.modules.diffusionmodules.model.print = shared.ldm_print
sgm.modules.diffusionmodules.openaimodel.print = shared.ldm_print
sgm.modules.encoders.modules.print = shared.ldm_print
# this gets the code to load the vanilla attention that we override
sgm.modules.attention.SDP_IS_AVAILABLE = True
+11 -8
View File
@@ -1,17 +1,18 @@
from modules import sd_samplers_compvis, sd_samplers_kdiffusion, shared
from modules import sd_samplers_kdiffusion, sd_samplers_timesteps, shared
# imports for functions that previously were here and are used by other modules
from modules.sd_samplers_common import samples_to_image_grid, sample_to_image # noqa: F401
all_samplers = [
*sd_samplers_kdiffusion.samplers_data_k_diffusion,
*sd_samplers_compvis.samplers_data_compvis,
*sd_samplers_timesteps.samplers_data_timesteps,
]
all_samplers_map = {x.name: x for x in all_samplers}
samplers = []
samplers_for_img2img = []
samplers_map = {}
samplers_hidden = {}
def find_sampler_config(name):
@@ -38,13 +39,11 @@ def create_sampler(name, model):
def set_samplers():
global samplers, samplers_for_img2img
global samplers, samplers_for_img2img, samplers_hidden
hidden = set(shared.opts.hide_samplers)
hidden_img2img = set(shared.opts.hide_samplers + ['PLMS', 'UniPC'])
samplers = [x for x in all_samplers if x.name not in hidden]
samplers_for_img2img = [x for x in all_samplers if x.name not in hidden_img2img]
samplers_hidden = set(shared.opts.hide_samplers)
samplers = all_samplers
samplers_for_img2img = all_samplers
samplers_map.clear()
for sampler in all_samplers:
@@ -53,4 +52,8 @@ def set_samplers():
samplers_map[alias.lower()] = sampler.name
def visible_sampler_names():
return [x.name for x in samplers if x.name not in samplers_hidden]
set_samplers()
+230
View File
@@ -0,0 +1,230 @@
import torch
from modules import prompt_parser, devices, sd_samplers_common
from modules.shared import opts, state
import modules.shared as shared
from modules.script_callbacks import CFGDenoiserParams, cfg_denoiser_callback
from modules.script_callbacks import CFGDenoisedParams, cfg_denoised_callback
from modules.script_callbacks import AfterCFGCallbackParams, cfg_after_cfg_callback
def catenate_conds(conds):
if not isinstance(conds[0], dict):
return torch.cat(conds)
return {key: torch.cat([x[key] for x in conds]) for key in conds[0].keys()}
def subscript_cond(cond, a, b):
if not isinstance(cond, dict):
return cond[a:b]
return {key: vec[a:b] for key, vec in cond.items()}
def pad_cond(tensor, repeats, empty):
if not isinstance(tensor, dict):
return torch.cat([tensor, empty.repeat((tensor.shape[0], repeats, 1))], axis=1)
tensor['crossattn'] = pad_cond(tensor['crossattn'], repeats, empty)
return tensor
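These helpers let the denoiser treat SD1/SD2 conditionings (plain tensors) and SDXL conditionings (dicts with `crossattn`/`vector` entries) uniformly; `subscript_cond` and `pad_cond` follow the same tensor-or-dict split. A short demonstration (shapes are illustrative; `catenate_conds` is copied from above):
```
import torch


def catenate_conds(conds):
    """Copied from above: concatenate plain tensors or dict-valued conditionings."""
    if not isinstance(conds[0], dict):
        return torch.cat(conds)
    return {key: torch.cat([x[key] for x in conds]) for key in conds[0].keys()}


# SD1/SD2 path: conditionings are plain tensors, concatenated along batch.
cond_a, cond_b = torch.randn(1, 77, 768), torch.randn(1, 77, 768)
print(catenate_conds([cond_a, cond_b]).shape)  # torch.Size([2, 77, 768])

# SDXL path: conditionings are dicts; every entry is concatenated key by key.
xl_a = {"crossattn": torch.randn(1, 77, 2048), "vector": torch.randn(1, 2816)}
xl_b = {"crossattn": torch.randn(1, 77, 2048), "vector": torch.randn(1, 2816)}
merged = catenate_conds([xl_a, xl_b])
print(merged["crossattn"].shape)  # torch.Size([2, 77, 2048])
print(merged["vector"].shape)     # torch.Size([2, 2816])
```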
class CFGDenoiser(torch.nn.Module):
"""
Classifier-free guidance denoiser. A wrapper for the stable diffusion model (specifically for the unet)
that can take a noisy picture and produce a noise-free picture using two guidances (prompts)
instead of one. Originally, the second prompt was just an empty string, but we use a non-empty
negative prompt.
"""
def __init__(self, sampler):
super().__init__()
self.model_wrap = None
self.mask = None
self.nmask = None
self.init_latent = None
self.steps = None
"""number of steps as specified by user in UI"""
self.total_steps = None
"""expected number of calls to denoiser calculated from self.steps and specifics of the selected sampler"""
self.step = 0
self.image_cfg_scale = None
self.padded_cond_uncond = False
self.sampler = sampler
self.model_wrap = None
self.p = None
self.mask_before_denoising = False
@property
def inner_model(self):
raise NotImplementedError()
def combine_denoised(self, x_out, conds_list, uncond, cond_scale):
denoised_uncond = x_out[-uncond.shape[0]:]
denoised = torch.clone(denoised_uncond)
for i, conds in enumerate(conds_list):
for cond_index, weight in conds:
denoised[i] += (x_out[cond_index] - denoised_uncond[i]) * (weight * cond_scale)
return denoised
def combine_denoised_for_edit_model(self, x_out, cond_scale):
out_cond, out_img_cond, out_uncond = x_out.chunk(3)
denoised = out_uncond + cond_scale * (out_cond - out_img_cond) + self.image_cfg_scale * (out_img_cond - out_uncond)
return denoised
def get_pred_x0(self, x_in, x_out, sigma):
return x_out
def update_inner_model(self):
self.model_wrap = None
c, uc = self.p.get_conds()
self.sampler.sampler_extra_args['cond'] = c
self.sampler.sampler_extra_args['uncond'] = uc
def forward(self, x, sigma, uncond, cond, cond_scale, s_min_uncond, image_cond):
if state.interrupted or state.skipped:
raise sd_samplers_common.InterruptedException
if sd_samplers_common.apply_refiner(self):
cond = self.sampler.sampler_extra_args['cond']
uncond = self.sampler.sampler_extra_args['uncond']
# at self.image_cfg_scale == 1.0 produced results for edit model are the same as with normal sampling,
# so is_edit_model is set to False to support AND composition.
is_edit_model = shared.sd_model.cond_stage_key == "edit" and self.image_cfg_scale is not None and self.image_cfg_scale != 1.0
conds_list, tensor = prompt_parser.reconstruct_multicond_batch(cond, self.step)
uncond = prompt_parser.reconstruct_cond_batch(uncond, self.step)
assert not is_edit_model or all(len(conds) == 1 for conds in conds_list), "AND is not supported for InstructPix2Pix checkpoint (unless using Image CFG scale = 1.0)"
if self.mask_before_denoising and self.mask is not None:
x = self.init_latent * self.mask + self.nmask * x
batch_size = len(conds_list)
repeats = [len(conds_list[i]) for i in range(batch_size)]
if shared.sd_model.model.conditioning_key == "crossattn-adm":
image_uncond = torch.zeros_like(image_cond)
make_condition_dict = lambda c_crossattn, c_adm: {"c_crossattn": [c_crossattn], "c_adm": c_adm}
else:
image_uncond = image_cond
if isinstance(uncond, dict):
make_condition_dict = lambda c_crossattn, c_concat: {**c_crossattn, "c_concat": [c_concat]}
else:
make_condition_dict = lambda c_crossattn, c_concat: {"c_crossattn": [c_crossattn], "c_concat": [c_concat]}
if not is_edit_model:
x_in = torch.cat([torch.stack([x[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [x])
sigma_in = torch.cat([torch.stack([sigma[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [sigma])
image_cond_in = torch.cat([torch.stack([image_cond[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [image_uncond])
else:
x_in = torch.cat([torch.stack([x[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [x] + [x])
sigma_in = torch.cat([torch.stack([sigma[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [sigma] + [sigma])
image_cond_in = torch.cat([torch.stack([image_cond[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [image_uncond] + [torch.zeros_like(self.init_latent)])
denoiser_params = CFGDenoiserParams(x_in, image_cond_in, sigma_in, state.sampling_step, state.sampling_steps, tensor, uncond)
cfg_denoiser_callback(denoiser_params)
x_in = denoiser_params.x
image_cond_in = denoiser_params.image_cond
sigma_in = denoiser_params.sigma
tensor = denoiser_params.text_cond
uncond = denoiser_params.text_uncond
skip_uncond = False
# alternating uncond allows for higher thresholds without the quality loss normally expected from raising it
if self.step % 2 and s_min_uncond > 0 and sigma[0] < s_min_uncond and not is_edit_model:
skip_uncond = True
x_in = x_in[:-batch_size]
sigma_in = sigma_in[:-batch_size]
self.padded_cond_uncond = False
if shared.opts.pad_cond_uncond and tensor.shape[1] != uncond.shape[1]:
empty = shared.sd_model.cond_stage_model_empty_prompt
num_repeats = (tensor.shape[1] - uncond.shape[1]) // empty.shape[1]
if num_repeats < 0:
tensor = pad_cond(tensor, -num_repeats, empty)
self.padded_cond_uncond = True
elif num_repeats > 0:
uncond = pad_cond(uncond, num_repeats, empty)
self.padded_cond_uncond = True
if tensor.shape[1] == uncond.shape[1] or skip_uncond:
if is_edit_model:
cond_in = catenate_conds([tensor, uncond, uncond])
elif skip_uncond:
cond_in = tensor
else:
cond_in = catenate_conds([tensor, uncond])
if shared.opts.batch_cond_uncond:
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
else:
x_out = torch.zeros_like(x_in)
for batch_offset in range(0, x_out.shape[0], batch_size):
a = batch_offset
b = a + batch_size
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(subscript_cond(cond_in, a, b), image_cond_in[a:b]))
else:
x_out = torch.zeros_like(x_in)
batch_size = batch_size*2 if shared.opts.batch_cond_uncond else batch_size
for batch_offset in range(0, tensor.shape[0], batch_size):
a = batch_offset
b = min(a + batch_size, tensor.shape[0])
if not is_edit_model:
c_crossattn = subscript_cond(tensor, a, b)
else:
c_crossattn = torch.cat([tensor[a:b]], uncond)
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
if not skip_uncond:
x_out[-uncond.shape[0]:] = self.inner_model(x_in[-uncond.shape[0]:], sigma_in[-uncond.shape[0]:], cond=make_condition_dict(uncond, image_cond_in[-uncond.shape[0]:]))
denoised_image_indexes = [x[0][0] for x in conds_list]
if skip_uncond:
fake_uncond = torch.cat([x_out[i:i+1] for i in denoised_image_indexes])
x_out = torch.cat([x_out, fake_uncond]) # we skipped uncond denoising, so we put cond-denoised image to where the uncond-denoised image should be
denoised_params = CFGDenoisedParams(x_out, state.sampling_step, state.sampling_steps, self.inner_model)
cfg_denoised_callback(denoised_params)
devices.test_for_nans(x_out, "unet")
if is_edit_model:
denoised = self.combine_denoised_for_edit_model(x_out, cond_scale)
elif skip_uncond:
denoised = self.combine_denoised(x_out, conds_list, uncond, 1.0)
else:
denoised = self.combine_denoised(x_out, conds_list, uncond, cond_scale)
if not self.mask_before_denoising and self.mask is not None:
denoised = self.init_latent * self.mask + self.nmask * denoised
self.sampler.last_latent = self.get_pred_x0(torch.cat([x_in[i:i + 1] for i in denoised_image_indexes]), torch.cat([x_out[i:i + 1] for i in denoised_image_indexes]), sigma)
if opts.live_preview_content == "Prompt":
preview = self.sampler.last_latent
elif opts.live_preview_content == "Negative prompt":
preview = self.get_pred_x0(x_in[-uncond.shape[0]:], x_out[-uncond.shape[0]:], sigma)
else:
preview = self.get_pred_x0(torch.cat([x_in[i:i+1] for i in denoised_image_indexes]), torch.cat([denoised[i:i+1] for i in denoised_image_indexes]), sigma)
sd_samplers_common.store_latent(preview)
after_cfg_callback_params = AfterCFGCallbackParams(denoised, state.sampling_step, state.sampling_steps)
cfg_after_cfg_callback(after_cfg_callback_params)
denoised = after_cfg_callback_params.x
self.step += 1
return denoised
+256 -14
View File
@@ -1,13 +1,22 @@
import inspect
from collections import namedtuple
import numpy as np
import torch
from PIL import Image
from modules import devices, processing, images, sd_vae_approx, sd_samplers, sd_vae_taesd
from modules import devices, images, sd_vae_approx, sd_samplers, sd_vae_taesd, shared, sd_models
from modules.shared import opts, state
import modules.shared as shared
import k_diffusion.sampling
SamplerData = namedtuple('SamplerData', ['name', 'constructor', 'aliases', 'options'])
SamplerDataTuple = namedtuple('SamplerData', ['name', 'constructor', 'aliases', 'options'])
class SamplerData(SamplerDataTuple):
def total_steps(self, steps):
if self.options.get("second_order", False):
steps = steps * 2
return steps
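Second-order samplers such as Heun call the denoiser twice per UI step, which is why `total_steps` doubles the count; the denoiser later uses it to compute the completed ratio (e.g. for refiner switching). For example (the two instances below are hypothetical, for illustration only):
```
from collections import namedtuple

SamplerDataTuple = namedtuple('SamplerData', ['name', 'constructor', 'aliases', 'options'])


class SamplerData(SamplerDataTuple):
    """Copied from above: second-order samplers double the denoiser call count."""
    def total_steps(self, steps):
        if self.options.get("second_order", False):
            steps = steps * 2
        return steps


heun = SamplerData("Heun", None, [], {"second_order": True})
euler = SamplerData("Euler", None, [], {})
print(heun.total_steps(20))   # 40 -- two denoiser calls per UI step
print(euler.total_steps(20))  # 20
```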
def setup_img2img_steps(p, steps=None):
@@ -25,19 +34,34 @@ def setup_img2img_steps(p, steps=None):
approximation_indexes = {"Full": 0, "Approx NN": 1, "Approx cheap": 2, "TAESD": 3}
def single_sample_to_image(sample, approximation=None):
if approximation is None:
def samples_to_images_tensor(sample, approximation=None, model=None):
"""Transforms 4-channel latent space images into 3-channel RGB image tensors, with values in range [-1, 1]."""
if approximation is None or (shared.state.interrupted and opts.live_preview_fast_interrupt):
approximation = approximation_indexes.get(opts.show_progress_type, 0)
from modules import lowvram
if approximation == 0 and lowvram.is_enabled(shared.sd_model) and not shared.opts.live_preview_allow_lowvram_full:
approximation = 1
if approximation == 2:
x_sample = sd_vae_approx.cheap_approximation(sample) * 0.5 + 0.5
x_sample = sd_vae_approx.cheap_approximation(sample)
elif approximation == 1:
x_sample = sd_vae_approx.model()(sample.to(devices.device, devices.dtype).unsqueeze(0))[0].detach() * 0.5 + 0.5
x_sample = sd_vae_approx.model()(sample.to(devices.device, devices.dtype)).detach()
elif approximation == 3:
x_sample = sample * 1.5
x_sample = sd_vae_taesd.model()(x_sample.to(devices.device, devices.dtype).unsqueeze(0))[0].detach()
x_sample = sd_vae_taesd.decoder_model()(sample.to(devices.device, devices.dtype)).detach()
x_sample = x_sample * 2 - 1
else:
x_sample = processing.decode_first_stage(shared.sd_model, sample.unsqueeze(0))[0] * 0.5 + 0.5
if model is None:
model = shared.sd_model
with devices.without_autocast(): # fixes an issue with unstable VAEs that are flaky even in fp32
x_sample = model.decode_first_stage(sample.to(model.first_stage_model.dtype))
return x_sample
def single_sample_to_image(sample, approximation=None):
x_sample = samples_to_images_tensor(sample.unsqueeze(0), approximation)[0] * 0.5 + 0.5
x_sample = torch.clamp(x_sample, min=0.0, max=1.0)
x_sample = 255. * np.moveaxis(x_sample.cpu().numpy(), 0, 2)
@@ -46,6 +70,12 @@ def single_sample_to_image(sample, approximation=None):
return Image.fromarray(x_sample)
def decode_first_stage(model, x):
x = x.to(devices.dtype_vae)
approx_index = approximation_indexes.get(opts.sd_vae_decode_method, 0)
return samples_to_images_tensor(x, approx_index, model)
def sample_to_image(samples, index=0, approximation=None):
return single_sample_to_image(samples[index], approximation)
@@ -54,6 +84,34 @@ def samples_to_image_grid(samples, approximation=None):
return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
def images_tensor_to_samples(image, approximation=None, model=None):
'''image[0, 1] -> latent'''
if approximation is None:
approximation = approximation_indexes.get(opts.sd_vae_encode_method, 0)
if approximation == 3:
image = image.to(devices.device, devices.dtype)
x_latent = sd_vae_taesd.encoder_model()(image)
else:
if model is None:
model = shared.sd_model
model.first_stage_model.to(devices.dtype_vae)
image = image.to(shared.device, dtype=devices.dtype_vae)
image = image * 2 - 1
if len(image) > 1:
x_latent = torch.stack([
model.get_first_stage_encoding(
model.encode_first_stage(torch.unsqueeze(img, 0))
)[0]
for img in image
])
else:
x_latent = model.get_first_stage_encoding(model.encode_first_stage(image))
return x_latent
def store_latent(decoded):
state.current_latent = decoded
@@ -85,11 +143,195 @@ class InterruptedException(BaseException):
pass
if opts.randn_source == "CPU":
def replace_torchsde_brownian():
import torchsde._brownian.brownian_interval
def torchsde_randn(size, dtype, device, seed):
generator = torch.Generator(devices.cpu).manual_seed(int(seed))
return torch.randn(size, dtype=dtype, device=devices.cpu, generator=generator).to(device)
return devices.randn_local(seed, size).to(device=device, dtype=dtype)
torchsde._brownian.brownian_interval._randn = torchsde_randn
replace_torchsde_brownian()
def apply_refiner(cfg_denoiser):
completed_ratio = cfg_denoiser.step / cfg_denoiser.total_steps
refiner_switch_at = cfg_denoiser.p.refiner_switch_at
refiner_checkpoint_info = cfg_denoiser.p.refiner_checkpoint_info
if refiner_switch_at is not None and completed_ratio < refiner_switch_at:
return False
if refiner_checkpoint_info is None or shared.sd_model.sd_checkpoint_info == refiner_checkpoint_info:
return False
if getattr(cfg_denoiser.p, "enable_hr", False):
is_second_pass = cfg_denoiser.p.is_hr_pass
if opts.hires_fix_refiner_pass == "first pass" and is_second_pass:
return False
if opts.hires_fix_refiner_pass == "second pass" and not is_second_pass:
return False
if opts.hires_fix_refiner_pass != "second pass":
cfg_denoiser.p.extra_generation_params['Hires refiner'] = opts.hires_fix_refiner_pass
cfg_denoiser.p.extra_generation_params['Refiner'] = refiner_checkpoint_info.short_title
cfg_denoiser.p.extra_generation_params['Refiner switch at'] = refiner_switch_at
with sd_models.SkipWritingToConfig():
sd_models.reload_model_weights(info=refiner_checkpoint_info)
devices.torch_gc()
cfg_denoiser.p.setup_conds()
cfg_denoiser.update_inner_model()
return True
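`completed_ratio` counts denoiser calls rather than UI steps, and the refiner takes over at the first call where the ratio reaches `refiner_switch_at`. A worked example with 30 steps on a first-order sampler and `refiner_switch_at = 0.8`:
```
total_steps = 30          # first-order sampler: one denoiser call per UI step
refiner_switch_at = 0.8

# apply_refiner returns False while step / total_steps < refiner_switch_at,
# so the base model handles steps 0..23 and the refiner takes over at step 24.
switch_step = next(s for s in range(total_steps) if s / total_steps >= refiner_switch_at)
print(switch_step)  # 24
```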
class TorchHijack:
"""This is here to replace torch.randn_like of k-diffusion.
k-diffusion has a random_sampler argument for most samplers, but not for all, so
this is needed to properly replace every use of torch.randn_like.
We need the replacement so that images generated in batches come out the same as images generated individually."""
def __init__(self, p):
self.rng = p.rng
def __getattr__(self, item):
if item == 'randn_like':
return self.randn_like
if hasattr(torch, item):
return getattr(torch, item)
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{item}'")
def randn_like(self, x):
return self.rng.next()
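`__getattr__` is only consulted for attributes not found through normal lookup, so the hijack object transparently forwards everything except `randn_like` to the real `torch` module. A stripped-down sketch of the same delegation pattern (the `FixedRng` class is invented for this demo):
```
import torch


class FixedRng:
    """Invented for this demo: always returns zeros instead of random noise."""
    def next(self):
        return torch.zeros(2, 2)


class TorchHijackSketch:
    def __init__(self, rng):
        self.rng = rng

    def __getattr__(self, item):
        if item == 'randn_like':
            return self.randn_like
        if hasattr(torch, item):
            return getattr(torch, item)  # everything else falls through to torch
        raise AttributeError(item)

    def randn_like(self, x):
        return self.rng.next()


hijack = TorchHijackSketch(FixedRng())
print(hijack.cat([torch.ones(1), torch.ones(1)]))  # delegates to torch.cat
print(hijack.randn_like(torch.ones(2, 2)))         # uses the custom rng: zeros
```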
class Sampler:
def __init__(self, funcname):
self.funcname = funcname
self.func = funcname
self.extra_params = []
self.sampler_noises = None
self.stop_at = None
self.eta = None
self.config: SamplerData = None # set by the function calling the constructor
self.last_latent = None
self.s_min_uncond = None
self.s_churn = 0.0
self.s_tmin = 0.0
self.s_tmax = float('inf')
self.s_noise = 1.0
self.eta_option_field = 'eta_ancestral'
self.eta_infotext_field = 'Eta'
self.eta_default = 1.0
self.conditioning_key = shared.sd_model.model.conditioning_key
self.p = None
self.model_wrap_cfg = None
self.sampler_extra_args = None
self.options = {}
def callback_state(self, d):
step = d['i']
if self.stop_at is not None and step > self.stop_at:
raise InterruptedException
state.sampling_step = step
shared.total_tqdm.update()
def launch_sampling(self, steps, func):
self.model_wrap_cfg.steps = steps
self.model_wrap_cfg.total_steps = self.config.total_steps(steps)
state.sampling_steps = steps
state.sampling_step = 0
try:
return func()
except RecursionError:
print(
'Encountered RecursionError during sampling, returning last latent. '
'rho > 5 with a polyexponential scheduler may cause this error. '
'Try a smaller rho value instead.'
)
return self.last_latent
except InterruptedException:
return self.last_latent
def number_of_needed_noises(self, p):
return p.steps
def initialize(self, p) -> dict:
self.p = p
self.model_wrap_cfg.p = p
self.model_wrap_cfg.mask = p.mask if hasattr(p, 'mask') else None
self.model_wrap_cfg.nmask = p.nmask if hasattr(p, 'nmask') else None
self.model_wrap_cfg.step = 0
self.model_wrap_cfg.image_cfg_scale = getattr(p, 'image_cfg_scale', None)
self.eta = p.eta if p.eta is not None else getattr(opts, self.eta_option_field, 0.0)
self.s_min_uncond = getattr(p, 's_min_uncond', 0.0)
k_diffusion.sampling.torch = TorchHijack(p)
extra_params_kwargs = {}
for param_name in self.extra_params:
if hasattr(p, param_name) and param_name in inspect.signature(self.func).parameters:
extra_params_kwargs[param_name] = getattr(p, param_name)
if 'eta' in inspect.signature(self.func).parameters:
if self.eta != self.eta_default:
p.extra_generation_params[self.eta_infotext_field] = self.eta
extra_params_kwargs['eta'] = self.eta
if len(self.extra_params) > 0:
s_churn = getattr(opts, 's_churn', p.s_churn)
s_tmin = getattr(opts, 's_tmin', p.s_tmin)
s_tmax = getattr(opts, 's_tmax', p.s_tmax) or self.s_tmax # 0 = inf
s_noise = getattr(opts, 's_noise', p.s_noise)
if 's_churn' in extra_params_kwargs and s_churn != self.s_churn:
extra_params_kwargs['s_churn'] = s_churn
p.s_churn = s_churn
p.extra_generation_params['Sigma churn'] = s_churn
if 's_tmin' in extra_params_kwargs and s_tmin != self.s_tmin:
extra_params_kwargs['s_tmin'] = s_tmin
p.s_tmin = s_tmin
p.extra_generation_params['Sigma tmin'] = s_tmin
if 's_tmax' in extra_params_kwargs and s_tmax != self.s_tmax:
extra_params_kwargs['s_tmax'] = s_tmax
p.s_tmax = s_tmax
p.extra_generation_params['Sigma tmax'] = s_tmax
if 's_noise' in extra_params_kwargs and s_noise != self.s_noise:
extra_params_kwargs['s_noise'] = s_noise
p.s_noise = s_noise
p.extra_generation_params['Sigma noise'] = s_noise
return extra_params_kwargs
def create_noise_sampler(self, x, sigmas, p):
"""For DPM++ SDE: manually create noise sampler to enable deterministic results across different batch sizes"""
if shared.opts.no_dpmpp_sde_batch_determinism:
return None
from k_diffusion.sampling import BrownianTreeNoiseSampler
sigma_min, sigma_max = sigmas[sigmas > 0].min(), sigmas.max()
current_iter_seeds = p.all_seeds[p.iteration * p.batch_size:(p.iteration + 1) * p.batch_size]
return BrownianTreeNoiseSampler(x, sigma_min, sigma_max, seed=current_iter_seeds)
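A sketch of the seed slicing above: because iteration i of batch size b always receives all_seeds[i*b:(i+1)*b], each image keeps its own Brownian tree seed however the work is batched, which is what makes DPM++ SDE reproducible across batch sizes.
all_seeds = [101, 102, 103, 104]
for batch_size in (1, 2, 4):
    for iteration in range(len(all_seeds) // batch_size):
        seeds = all_seeds[iteration * batch_size:(iteration + 1) * batch_size]
        print(batch_size, iteration, seeds)  # image 3 is always seeded 103, whatever the batch size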
def sample(self, p, x, conditioning, unconditional_conditioning, steps=None, image_conditioning=None):
raise NotImplementedError()
def sample_img2img(self, p, x, noise, conditioning, unconditional_conditioning, steps=None, image_conditioning=None):
raise NotImplementedError()
-224
@@ -1,224 +0,0 @@
import math
import ldm.models.diffusion.ddim
import ldm.models.diffusion.plms
import numpy as np
import torch
from modules.shared import state
from modules import sd_samplers_common, prompt_parser, shared
import modules.models.diffusion.uni_pc
samplers_data_compvis = [
sd_samplers_common.SamplerData('DDIM', lambda model: VanillaStableDiffusionSampler(ldm.models.diffusion.ddim.DDIMSampler, model), [], {"default_eta_is_0": True, "uses_ensd": True, "no_sdxl": True}),
sd_samplers_common.SamplerData('PLMS', lambda model: VanillaStableDiffusionSampler(ldm.models.diffusion.plms.PLMSSampler, model), [], {"no_sdxl": True}),
sd_samplers_common.SamplerData('UniPC', lambda model: VanillaStableDiffusionSampler(modules.models.diffusion.uni_pc.UniPCSampler, model), [], {"no_sdxl": True}),
]
class VanillaStableDiffusionSampler:
def __init__(self, constructor, sd_model):
self.sampler = constructor(sd_model)
self.is_ddim = hasattr(self.sampler, 'p_sample_ddim')
self.is_plms = hasattr(self.sampler, 'p_sample_plms')
self.is_unipc = isinstance(self.sampler, modules.models.diffusion.uni_pc.UniPCSampler)
self.orig_p_sample_ddim = None
if self.is_plms:
self.orig_p_sample_ddim = self.sampler.p_sample_plms
elif self.is_ddim:
self.orig_p_sample_ddim = self.sampler.p_sample_ddim
self.mask = None
self.nmask = None
self.init_latent = None
self.sampler_noises = None
self.step = 0
self.stop_at = None
self.eta = None
self.config = None
self.last_latent = None
self.conditioning_key = sd_model.model.conditioning_key
def number_of_needed_noises(self, p):
return 0
def launch_sampling(self, steps, func):
state.sampling_steps = steps
state.sampling_step = 0
try:
return func()
except sd_samplers_common.InterruptedException:
return self.last_latent
def p_sample_ddim_hook(self, x_dec, cond, ts, unconditional_conditioning, *args, **kwargs):
x_dec, ts, cond, unconditional_conditioning = self.before_sample(x_dec, ts, cond, unconditional_conditioning)
res = self.orig_p_sample_ddim(x_dec, cond, ts, *args, unconditional_conditioning=unconditional_conditioning, **kwargs)
x_dec, ts, cond, unconditional_conditioning, res = self.after_sample(x_dec, ts, cond, unconditional_conditioning, res)
return res
def before_sample(self, x, ts, cond, unconditional_conditioning):
if state.interrupted or state.skipped:
raise sd_samplers_common.InterruptedException
if self.stop_at is not None and self.step > self.stop_at:
raise sd_samplers_common.InterruptedException
# Have to unwrap the inpainting conditioning here to perform pre-processing
image_conditioning = None
uc_image_conditioning = None
if isinstance(cond, dict):
if self.conditioning_key == "crossattn-adm":
image_conditioning = cond["c_adm"]
uc_image_conditioning = unconditional_conditioning["c_adm"]
else:
image_conditioning = cond["c_concat"][0]
cond = cond["c_crossattn"][0]
unconditional_conditioning = unconditional_conditioning["c_crossattn"][0]
conds_list, tensor = prompt_parser.reconstruct_multicond_batch(cond, self.step)
unconditional_conditioning = prompt_parser.reconstruct_cond_batch(unconditional_conditioning, self.step)
assert all(len(conds) == 1 for conds in conds_list), 'composition via AND is not supported for DDIM/PLMS samplers'
cond = tensor
# for DDIM, shapes must match, we can't just process cond and uncond independently;
# filling unconditional_conditioning with repeats of the last vector to match length is
# not 100% correct but should work well enough
if unconditional_conditioning.shape[1] < cond.shape[1]:
last_vector = unconditional_conditioning[:, -1:]
last_vector_repeated = last_vector.repeat([1, cond.shape[1] - unconditional_conditioning.shape[1], 1])
unconditional_conditioning = torch.hstack([unconditional_conditioning, last_vector_repeated])
elif unconditional_conditioning.shape[1] > cond.shape[1]:
unconditional_conditioning = unconditional_conditioning[:, :cond.shape[1]]
if self.mask is not None:
img_orig = self.sampler.model.q_sample(self.init_latent, ts)
x = img_orig * self.mask + self.nmask * x
# Wrap the image conditioning back up since the DDIM code can accept the dict directly.
# Note that they need to be lists because it just concatenates them later.
if image_conditioning is not None:
if self.conditioning_key == "crossattn-adm":
cond = {"c_adm": image_conditioning, "c_crossattn": [cond]}
unconditional_conditioning = {"c_adm": uc_image_conditioning, "c_crossattn": [unconditional_conditioning]}
else:
cond = {"c_concat": [image_conditioning], "c_crossattn": [cond]}
unconditional_conditioning = {"c_concat": [image_conditioning], "c_crossattn": [unconditional_conditioning]}
return x, ts, cond, unconditional_conditioning
def update_step(self, last_latent):
if self.mask is not None:
self.last_latent = self.init_latent * self.mask + self.nmask * last_latent
else:
self.last_latent = last_latent
sd_samplers_common.store_latent(self.last_latent)
self.step += 1
state.sampling_step = self.step
shared.total_tqdm.update()
def after_sample(self, x, ts, cond, uncond, res):
if not self.is_unipc:
self.update_step(res[1])
return x, ts, cond, uncond, res
def unipc_after_update(self, x, model_x):
self.update_step(x)
def initialize(self, p):
if self.is_ddim:
self.eta = p.eta if p.eta is not None else shared.opts.eta_ddim
else:
self.eta = 0.0
if self.eta != 0.0:
p.extra_generation_params["Eta DDIM"] = self.eta
if self.is_unipc:
keys = [
('UniPC variant', 'uni_pc_variant'),
('UniPC skip type', 'uni_pc_skip_type'),
('UniPC order', 'uni_pc_order'),
('UniPC lower order final', 'uni_pc_lower_order_final'),
]
for name, key in keys:
v = getattr(shared.opts, key)
if v != shared.opts.get_default(key):
p.extra_generation_params[name] = v
for fieldname in ['p_sample_ddim', 'p_sample_plms']:
if hasattr(self.sampler, fieldname):
setattr(self.sampler, fieldname, self.p_sample_ddim_hook)
if self.is_unipc:
self.sampler.set_hooks(lambda x, t, c, u: self.before_sample(x, t, c, u), lambda x, t, c, u, r: self.after_sample(x, t, c, u, r), lambda x, mx: self.unipc_after_update(x, mx))
self.mask = p.mask if hasattr(p, 'mask') else None
self.nmask = p.nmask if hasattr(p, 'nmask') else None
def adjust_steps_if_invalid(self, p, num_steps):
if ((self.config.name == 'DDIM') and p.ddim_discretize == 'uniform') or (self.config.name == 'PLMS') or (self.config.name == 'UniPC'):
if self.config.name == 'UniPC' and num_steps < shared.opts.uni_pc_order:
num_steps = shared.opts.uni_pc_order
valid_step = 999 / (1000 // num_steps)
if valid_step == math.floor(valid_step):
return int(valid_step) + 1
return num_steps
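A worked example of the uniform-schedule check above: the timesteps advance in strides of 1000 // num_steps, and when 999 divides evenly by that stride the schedule needs one extra step to avoid colliding with the boundary.
assert 999 / (1000 // 25) == 24.975   # not an integer: 25 steps stay 25
assert 999 / (1000 // 333) == 333.0   # an integer: 333 steps become 334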
def sample_img2img(self, p, x, noise, conditioning, unconditional_conditioning, steps=None, image_conditioning=None):
steps, t_enc = sd_samplers_common.setup_img2img_steps(p, steps)
steps = self.adjust_steps_if_invalid(p, steps)
self.initialize(p)
self.sampler.make_schedule(ddim_num_steps=steps, ddim_eta=self.eta, ddim_discretize=p.ddim_discretize, verbose=False)
x1 = self.sampler.stochastic_encode(x, torch.tensor([t_enc] * int(x.shape[0])).to(shared.device), noise=noise)
self.init_latent = x
self.last_latent = x
self.step = 0
# Wrap the conditioning models with additional image conditioning for inpainting model
if image_conditioning is not None:
if self.conditioning_key == "crossattn-adm":
conditioning = {"c_adm": image_conditioning, "c_crossattn": [conditioning]}
unconditional_conditioning = {"c_adm": torch.zeros_like(image_conditioning), "c_crossattn": [unconditional_conditioning]}
else:
conditioning = {"c_concat": [image_conditioning], "c_crossattn": [conditioning]}
unconditional_conditioning = {"c_concat": [image_conditioning], "c_crossattn": [unconditional_conditioning]}
samples = self.launch_sampling(t_enc + 1, lambda: self.sampler.decode(x1, conditioning, t_enc, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning))
return samples
def sample(self, p, x, conditioning, unconditional_conditioning, steps=None, image_conditioning=None):
self.initialize(p)
self.init_latent = None
self.last_latent = x
self.step = 0
steps = self.adjust_steps_if_invalid(p, steps or p.steps)
# Wrap the conditioning models with additional image conditioning for inpainting model
# dummy_for_plms is needed because the PLMS code checks that the first item in the dict has the right shape
if image_conditioning is not None:
if self.conditioning_key == "crossattn-adm":
conditioning = {"dummy_for_plms": np.zeros((conditioning.shape[0],)), "c_crossattn": [conditioning], "c_adm": image_conditioning}
unconditional_conditioning = {"c_crossattn": [unconditional_conditioning], "c_adm": torch.zeros_like(image_conditioning)}
else:
conditioning = {"dummy_for_plms": np.zeros((conditioning.shape[0],)), "c_crossattn": [conditioning], "c_concat": [image_conditioning]}
unconditional_conditioning = {"c_crossattn": [unconditional_conditioning], "c_concat": [image_conditioning]}
samples_ddim = self.launch_sampling(steps, lambda: self.sampler.sample(S=steps, conditioning=conditioning, batch_size=int(x.shape[0]), shape=x[0].shape, verbose=False, unconditional_guidance_scale=p.cfg_scale, unconditional_conditioning=unconditional_conditioning, x_T=x, eta=self.eta)[0])
return samples_ddim
+74
@@ -0,0 +1,74 @@
import torch
import tqdm
import k_diffusion.sampling
@torch.no_grad()
def restart_sampler(model, x, sigmas, extra_args=None, callback=None, disable=None, s_noise=1., restart_list=None):
"""Implements restart sampling in Restart Sampling for Improving Generative Processes (2023)
Restart_list format: {min_sigma: [ restart_steps, restart_times, max_sigma]}
If restart_list is None: will choose restart_list automatically, otherwise will use the given restart_list
"""
extra_args = {} if extra_args is None else extra_args
s_in = x.new_ones([x.shape[0]])
step_id = 0
from k_diffusion.sampling import to_d, get_sigmas_karras
def heun_step(x, old_sigma, new_sigma, second_order=True):
nonlocal step_id
denoised = model(x, old_sigma * s_in, **extra_args)
d = to_d(x, old_sigma, denoised)
if callback is not None:
callback({'x': x, 'i': step_id, 'sigma': new_sigma, 'sigma_hat': old_sigma, 'denoised': denoised})
dt = new_sigma - old_sigma
if new_sigma == 0 or not second_order:
# Euler method
x = x + d * dt
else:
# Heun's method
x_2 = x + d * dt
denoised_2 = model(x_2, new_sigma * s_in, **extra_args)
d_2 = to_d(x_2, new_sigma, denoised_2)
d_prime = (d + d_2) / 2
x = x + d_prime * dt
step_id += 1
return x
steps = sigmas.shape[0] - 1
if restart_list is None:
if steps >= 20:
restart_steps = 9
restart_times = 1
if steps >= 36:
restart_steps = steps // 4
restart_times = 2
sigmas = get_sigmas_karras(steps - restart_steps * restart_times, sigmas[-2].item(), sigmas[0].item(), device=sigmas.device)
restart_list = {0.1: [restart_steps + 1, restart_times, 2]}
else:
restart_list = {}
restart_list = {int(torch.argmin(abs(sigmas - key), dim=0)): value for key, value in restart_list.items()}
step_list = []
for i in range(len(sigmas) - 1):
step_list.append((sigmas[i], sigmas[i + 1]))
if i + 1 in restart_list:
restart_steps, restart_times, restart_max = restart_list[i + 1]
min_idx = i + 1
max_idx = int(torch.argmin(abs(sigmas - restart_max), dim=0))
if max_idx < min_idx:
sigma_restart = get_sigmas_karras(restart_steps, sigmas[min_idx].item(), sigmas[max_idx].item(), device=sigmas.device)[:-1]
while restart_times > 0:
restart_times -= 1
step_list.extend([(old_sigma, new_sigma) for (old_sigma, new_sigma) in zip(sigma_restart[:-1], sigma_restart[1:])])
last_sigma = None
for old_sigma, new_sigma in tqdm.tqdm(step_list, disable=disable):
if last_sigma is None:
last_sigma = old_sigma
elif last_sigma < old_sigma:
x = x + k_diffusion.sampling.torch.randn_like(x) * s_noise * (old_sigma ** 2 - last_sigma ** 2) ** 0.5
x = heun_step(x, old_sigma, new_sigma)
last_sigma = new_sigma
return x
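A hedged usage sketch for the sampler above: denoiser stands in for a wrapped model and is an assumption; inside the webui this function is reached through the 'Restart' entry in samplers_k_diffusion instead. The restart_list here requests one segment of 10 steps, repeated once, climbing back up to sigma = 2.
import torch
import k_diffusion.sampling

sigmas = k_diffusion.sampling.get_sigmas_karras(n=30, sigma_min=0.03, sigma_max=14.6)
x = torch.randn(1, 4, 64, 64) * sigmas[0]
# result = restart_sampler(denoiser, x, sigmas, restart_list={0.1: [10, 1, 2]})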
+73 -307
@@ -1,47 +1,60 @@
from collections import deque
import torch
import inspect
import k_diffusion.sampling
from modules import prompt_parser, devices, sd_samplers_common
from modules import sd_samplers_common, sd_samplers_extra, sd_samplers_cfg_denoiser
from modules.sd_samplers_cfg_denoiser import CFGDenoiser # noqa: F401
from modules.script_callbacks import ExtraNoiseParams, extra_noise_callback
from modules.shared import opts, state
from modules.shared import opts
import modules.shared as shared
from modules.script_callbacks import CFGDenoiserParams, cfg_denoiser_callback
from modules.script_callbacks import CFGDenoisedParams, cfg_denoised_callback
from modules.script_callbacks import AfterCFGCallbackParams, cfg_after_cfg_callback
samplers_k_diffusion = [
('DPM++ 2M Karras', 'sample_dpmpp_2m', ['k_dpmpp_2m_ka'], {'scheduler': 'karras'}),
('DPM++ SDE Karras', 'sample_dpmpp_sde', ['k_dpmpp_sde_ka'], {'scheduler': 'karras', "second_order": True, "brownian_noise": True}),
('DPM++ 2M SDE Exponential', 'sample_dpmpp_2m_sde', ['k_dpmpp_2m_sde_exp'], {'scheduler': 'exponential', "brownian_noise": True}),
('DPM++ 2M SDE Karras', 'sample_dpmpp_2m_sde', ['k_dpmpp_2m_sde_ka'], {'scheduler': 'karras', "brownian_noise": True}),
('Euler a', 'sample_euler_ancestral', ['k_euler_a', 'k_euler_ancestral'], {"uses_ensd": True}),
('Euler', 'sample_euler', ['k_euler'], {}),
('LMS', 'sample_lms', ['k_lms'], {}),
('Heun', 'sample_heun', ['k_heun'], {"second_order": True}),
('DPM2', 'sample_dpm_2', ['k_dpm_2'], {'discard_next_to_last_sigma': True}),
('DPM2 a', 'sample_dpm_2_ancestral', ['k_dpm_2_a'], {'discard_next_to_last_sigma': True, "uses_ensd": True}),
('DPM2', 'sample_dpm_2', ['k_dpm_2'], {'discard_next_to_last_sigma': True, "second_order": True}),
('DPM2 a', 'sample_dpm_2_ancestral', ['k_dpm_2_a'], {'discard_next_to_last_sigma': True, "uses_ensd": True, "second_order": True}),
('DPM++ 2S a', 'sample_dpmpp_2s_ancestral', ['k_dpmpp_2s_a'], {"uses_ensd": True, "second_order": True}),
('DPM++ 2M', 'sample_dpmpp_2m', ['k_dpmpp_2m'], {}),
('DPM++ SDE', 'sample_dpmpp_sde', ['k_dpmpp_sde'], {"second_order": True, "brownian_noise": True}),
('DPM++ 2M SDE', 'sample_dpmpp_2m_sde', ['k_dpmpp_2m_sde_ka'], {"brownian_noise": True}),
('DPM++ 2M SDE Heun', 'sample_dpmpp_2m_sde', ['k_dpmpp_2m_sde_heun'], {"brownian_noise": True, "solver_type": "heun"}),
('DPM++ 2M SDE Heun Karras', 'sample_dpmpp_2m_sde', ['k_dpmpp_2m_sde_heun_ka'], {'scheduler': 'karras', "brownian_noise": True, "solver_type": "heun"}),
('DPM++ 2M SDE Heun Exponential', 'sample_dpmpp_2m_sde', ['k_dpmpp_2m_sde_heun_exp'], {'scheduler': 'exponential', "brownian_noise": True, "solver_type": "heun"}),
('DPM++ 3M SDE', 'sample_dpmpp_3m_sde', ['k_dpmpp_3m_sde'], {'discard_next_to_last_sigma': True, "brownian_noise": True}),
('DPM++ 3M SDE Karras', 'sample_dpmpp_3m_sde', ['k_dpmpp_3m_sde_ka'], {'scheduler': 'karras', 'discard_next_to_last_sigma': True, "brownian_noise": True}),
('DPM++ 3M SDE Exponential', 'sample_dpmpp_3m_sde', ['k_dpmpp_3m_sde_exp'], {'scheduler': 'exponential', 'discard_next_to_last_sigma': True, "brownian_noise": True}),
('DPM fast', 'sample_dpm_fast', ['k_dpm_fast'], {"uses_ensd": True}),
('DPM adaptive', 'sample_dpm_adaptive', ['k_dpm_ad'], {"uses_ensd": True}),
('LMS Karras', 'sample_lms', ['k_lms_ka'], {'scheduler': 'karras'}),
('DPM2 Karras', 'sample_dpm_2', ['k_dpm_2_ka'], {'scheduler': 'karras', 'discard_next_to_last_sigma': True, "uses_ensd": True, "second_order": True}),
('DPM2 a Karras', 'sample_dpm_2_ancestral', ['k_dpm_2_a_ka'], {'scheduler': 'karras', 'discard_next_to_last_sigma': True, "uses_ensd": True, "second_order": True}),
('DPM++ 2S a Karras', 'sample_dpmpp_2s_ancestral', ['k_dpmpp_2s_a_ka'], {'scheduler': 'karras', "uses_ensd": True, "second_order": True}),
('DPM++ 2M Karras', 'sample_dpmpp_2m', ['k_dpmpp_2m_ka'], {'scheduler': 'karras'}),
('DPM++ SDE Karras', 'sample_dpmpp_sde', ['k_dpmpp_sde_ka'], {'scheduler': 'karras', "second_order": True, "brownian_noise": True}),
('DPM++ 2M SDE Karras', 'sample_dpmpp_2m_sde', ['k_dpmpp_2m_sde_ka'], {'scheduler': 'karras', "brownian_noise": True}),
('Restart', sd_samplers_extra.restart_sampler, ['restart'], {'scheduler': 'karras', "second_order": True}),
]
samplers_data_k_diffusion = [
sd_samplers_common.SamplerData(label, lambda model, funcname=funcname: KDiffusionSampler(funcname, model), aliases, options)
for label, funcname, aliases, options in samplers_k_diffusion
if hasattr(k_diffusion.sampling, funcname)
if callable(funcname) or hasattr(k_diffusion.sampling, funcname)
]
sampler_extra_params = {
'sample_euler': ['s_churn', 's_tmin', 's_tmax', 's_noise'],
'sample_heun': ['s_churn', 's_tmin', 's_tmax', 's_noise'],
'sample_dpm_2': ['s_churn', 's_tmin', 's_tmax', 's_noise'],
'sample_dpm_fast': ['s_noise'],
'sample_dpm_2_ancestral': ['s_noise'],
'sample_dpmpp_2s_ancestral': ['s_noise'],
'sample_dpmpp_sde': ['s_noise'],
'sample_dpmpp_2m_sde': ['s_noise'],
'sample_dpmpp_3m_sde': ['s_noise'],
}
k_diffusion_samplers_map = {x.name: x for x in samplers_data_k_diffusion}
@@ -53,289 +66,27 @@ k_diffusion_scheduler = {
}
def catenate_conds(conds):
if not isinstance(conds[0], dict):
return torch.cat(conds)
return {key: torch.cat([x[key] for x in conds]) for key in conds[0].keys()}
class CFGDenoiserKDiffusion(sd_samplers_cfg_denoiser.CFGDenoiser):
@property
def inner_model(self):
if self.model_wrap is None:
denoiser = k_diffusion.external.CompVisVDenoiser if shared.sd_model.parameterization == "v" else k_diffusion.external.CompVisDenoiser
self.model_wrap = denoiser(shared.sd_model, quantize=shared.opts.enable_quantization)
return self.model_wrap
def subscript_cond(cond, a, b):
if not isinstance(cond, dict):
return cond[a:b]
return {key: vec[a:b] for key, vec in cond.items()}
class KDiffusionSampler(sd_samplers_common.Sampler):
def __init__(self, funcname, sd_model, options=None):
super().__init__(funcname)
def pad_cond(tensor, repeats, empty):
if not isinstance(tensor, dict):
return torch.cat([tensor, empty.repeat((tensor.shape[0], repeats, 1))], axis=1)
tensor['crossattn'] = pad_cond(tensor['crossattn'], repeats, empty)
return tensor
class CFGDenoiser(torch.nn.Module):
"""
Classifier free guidance denoiser. A wrapper for stable diffusion model (specifically for unet)
that can take a noisy picture and produce a noise-free picture using two guidances (prompts)
instead of one. Originally, the second prompt is just an empty string, but we use non-empty
negative prompt.
"""
def __init__(self, model):
super().__init__()
self.inner_model = model
self.mask = None
self.nmask = None
self.init_latent = None
self.step = 0
self.image_cfg_scale = None
self.padded_cond_uncond = False
def combine_denoised(self, x_out, conds_list, uncond, cond_scale):
denoised_uncond = x_out[-uncond.shape[0]:]
denoised = torch.clone(denoised_uncond)
for i, conds in enumerate(conds_list):
for cond_index, weight in conds:
denoised[i] += (x_out[cond_index] - denoised_uncond[i]) * (weight * cond_scale)
return denoised
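For the common single-prompt case the loop above reduces to plain classifier-free guidance; a minimal sketch (tensor values are placeholders):
import torch

uncond_out = torch.zeros(1, 4, 8, 8)
cond_out = torch.ones(1, 4, 8, 8)
# weight = 1.0, cond_scale = 7.0: denoised = uncond + 7.0 * (cond - uncond)
cfg = uncond_out + 7.0 * (cond_out - uncond_out)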
def combine_denoised_for_edit_model(self, x_out, cond_scale):
out_cond, out_img_cond, out_uncond = x_out.chunk(3)
denoised = out_uncond + cond_scale * (out_cond - out_img_cond) + self.image_cfg_scale * (out_img_cond - out_uncond)
return denoised
def forward(self, x, sigma, uncond, cond, cond_scale, s_min_uncond, image_cond):
if state.interrupted or state.skipped:
raise sd_samplers_common.InterruptedException
# at self.image_cfg_scale == 1.0, results produced for the edit model are the same as with normal sampling,
# so is_edit_model is set to False to support AND composition.
is_edit_model = shared.sd_model.cond_stage_key == "edit" and self.image_cfg_scale is not None and self.image_cfg_scale != 1.0
conds_list, tensor = prompt_parser.reconstruct_multicond_batch(cond, self.step)
uncond = prompt_parser.reconstruct_cond_batch(uncond, self.step)
assert not is_edit_model or all(len(conds) == 1 for conds in conds_list), "AND is not supported for InstructPix2Pix checkpoint (unless using Image CFG scale = 1.0)"
batch_size = len(conds_list)
repeats = [len(conds_list[i]) for i in range(batch_size)]
if shared.sd_model.model.conditioning_key == "crossattn-adm":
image_uncond = torch.zeros_like(image_cond)
make_condition_dict = lambda c_crossattn, c_adm: {"c_crossattn": [c_crossattn], "c_adm": c_adm}
else:
image_uncond = image_cond
if isinstance(uncond, dict):
make_condition_dict = lambda c_crossattn, c_concat: {**c_crossattn, "c_concat": [c_concat]}
else:
make_condition_dict = lambda c_crossattn, c_concat: {"c_crossattn": [c_crossattn], "c_concat": [c_concat]}
if not is_edit_model:
x_in = torch.cat([torch.stack([x[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [x])
sigma_in = torch.cat([torch.stack([sigma[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [sigma])
image_cond_in = torch.cat([torch.stack([image_cond[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [image_uncond])
else:
x_in = torch.cat([torch.stack([x[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [x] + [x])
sigma_in = torch.cat([torch.stack([sigma[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [sigma] + [sigma])
image_cond_in = torch.cat([torch.stack([image_cond[i] for _ in range(n)]) for i, n in enumerate(repeats)] + [image_uncond] + [torch.zeros_like(self.init_latent)])
denoiser_params = CFGDenoiserParams(x_in, image_cond_in, sigma_in, state.sampling_step, state.sampling_steps, tensor, uncond)
cfg_denoiser_callback(denoiser_params)
x_in = denoiser_params.x
image_cond_in = denoiser_params.image_cond
sigma_in = denoiser_params.sigma
tensor = denoiser_params.text_cond
uncond = denoiser_params.text_uncond
skip_uncond = False
# alternating uncond allows for higher thresholds without the quality loss normally expected from raising it
if self.step % 2 and s_min_uncond > 0 and sigma[0] < s_min_uncond and not is_edit_model:
skip_uncond = True
x_in = x_in[:-batch_size]
sigma_in = sigma_in[:-batch_size]
self.padded_cond_uncond = False
if shared.opts.pad_cond_uncond and tensor.shape[1] != uncond.shape[1]:
empty = shared.sd_model.cond_stage_model_empty_prompt
num_repeats = (tensor.shape[1] - uncond.shape[1]) // empty.shape[1]
if num_repeats < 0:
tensor = pad_cond(tensor, -num_repeats, empty)
self.padded_cond_uncond = True
elif num_repeats > 0:
uncond = pad_cond(uncond, num_repeats, empty)
self.padded_cond_uncond = True
if tensor.shape[1] == uncond.shape[1] or skip_uncond:
if is_edit_model:
cond_in = catenate_conds([tensor, uncond, uncond])
elif skip_uncond:
cond_in = tensor
else:
cond_in = catenate_conds([tensor, uncond])
if shared.batch_cond_uncond:
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
else:
x_out = torch.zeros_like(x_in)
for batch_offset in range(0, x_out.shape[0], batch_size):
a = batch_offset
b = a + batch_size
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(subscript_cond(cond_in, a, b), image_cond_in[a:b]))
else:
x_out = torch.zeros_like(x_in)
batch_size = batch_size*2 if shared.batch_cond_uncond else batch_size
for batch_offset in range(0, tensor.shape[0], batch_size):
a = batch_offset
b = min(a + batch_size, tensor.shape[0])
if not is_edit_model:
c_crossattn = subscript_cond(tensor, a, b)
else:
c_crossattn = torch.cat([tensor[a:b], uncond])
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
if not skip_uncond:
x_out[-uncond.shape[0]:] = self.inner_model(x_in[-uncond.shape[0]:], sigma_in[-uncond.shape[0]:], cond=make_condition_dict(uncond, image_cond_in[-uncond.shape[0]:]))
denoised_image_indexes = [x[0][0] for x in conds_list]
if skip_uncond:
fake_uncond = torch.cat([x_out[i:i+1] for i in denoised_image_indexes])
x_out = torch.cat([x_out, fake_uncond])  # we skipped uncond denoising, so we put the cond-denoised image where the uncond-denoised image should be
denoised_params = CFGDenoisedParams(x_out, state.sampling_step, state.sampling_steps, self.inner_model)
cfg_denoised_callback(denoised_params)
devices.test_for_nans(x_out, "unet")
if opts.live_preview_content == "Prompt":
sd_samplers_common.store_latent(torch.cat([x_out[i:i+1] for i in denoised_image_indexes]))
elif opts.live_preview_content == "Negative prompt":
sd_samplers_common.store_latent(x_out[-uncond.shape[0]:])
if is_edit_model:
denoised = self.combine_denoised_for_edit_model(x_out, cond_scale)
elif skip_uncond:
denoised = self.combine_denoised(x_out, conds_list, uncond, 1.0)
else:
denoised = self.combine_denoised(x_out, conds_list, uncond, cond_scale)
if self.mask is not None:
denoised = self.init_latent * self.mask + self.nmask * denoised
after_cfg_callback_params = AfterCFGCallbackParams(denoised, state.sampling_step, state.sampling_steps)
cfg_after_cfg_callback(after_cfg_callback_params)
denoised = after_cfg_callback_params.x
self.step += 1
return denoised
class TorchHijack:
def __init__(self, sampler_noises):
# Using a deque to efficiently receive the sampler_noises in the same order as the previous index-based
# implementation.
self.sampler_noises = deque(sampler_noises)
def __getattr__(self, item):
if item == 'randn_like':
return self.randn_like
if hasattr(torch, item):
return getattr(torch, item)
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{item}'")
def randn_like(self, x):
if self.sampler_noises:
noise = self.sampler_noises.popleft()
if noise.shape == x.shape:
return noise
if opts.randn_source == "CPU" or x.device.type == 'mps':
return torch.randn_like(x, device=devices.cpu).to(x.device)
else:
return torch.randn_like(x)
class KDiffusionSampler:
def __init__(self, funcname, sd_model):
denoiser = k_diffusion.external.CompVisVDenoiser if sd_model.parameterization == "v" else k_diffusion.external.CompVisDenoiser
self.model_wrap = denoiser(sd_model, quantize=shared.opts.enable_quantization)
self.funcname = funcname
self.func = getattr(k_diffusion.sampling, self.funcname)
self.extra_params = sampler_extra_params.get(funcname, [])
self.model_wrap_cfg = CFGDenoiser(self.model_wrap)
self.sampler_noises = None
self.stop_at = None
self.eta = None
self.config = None # set by the function calling the constructor
self.last_latent = None
self.s_min_uncond = None
self.conditioning_key = sd_model.model.conditioning_key
self.options = options or {}
self.func = funcname if callable(funcname) else getattr(k_diffusion.sampling, self.funcname)
def callback_state(self, d):
step = d['i']
latent = d["denoised"]
if opts.live_preview_content == "Combined":
sd_samplers_common.store_latent(latent)
self.last_latent = latent
if self.stop_at is not None and step > self.stop_at:
raise sd_samplers_common.InterruptedException
state.sampling_step = step
shared.total_tqdm.update()
def launch_sampling(self, steps, func):
state.sampling_steps = steps
state.sampling_step = 0
try:
return func()
except RecursionError:
print(
'Encountered RecursionError during sampling, returning last latent. '
'rho > 5 with a polyexponential scheduler may cause this error. '
'Try a smaller rho value instead.'
)
return self.last_latent
except sd_samplers_common.InterruptedException:
return self.last_latent
def number_of_needed_noises(self, p):
return p.steps
def initialize(self, p):
self.model_wrap_cfg.mask = p.mask if hasattr(p, 'mask') else None
self.model_wrap_cfg.nmask = p.nmask if hasattr(p, 'nmask') else None
self.model_wrap_cfg.step = 0
self.model_wrap_cfg.image_cfg_scale = getattr(p, 'image_cfg_scale', None)
self.eta = p.eta if p.eta is not None else opts.eta_ancestral
self.s_min_uncond = getattr(p, 's_min_uncond', 0.0)
k_diffusion.sampling.torch = TorchHijack(self.sampler_noises if self.sampler_noises is not None else [])
extra_params_kwargs = {}
for param_name in self.extra_params:
if hasattr(p, param_name) and param_name in inspect.signature(self.func).parameters:
extra_params_kwargs[param_name] = getattr(p, param_name)
if 'eta' in inspect.signature(self.func).parameters:
if self.eta != 1.0:
p.extra_generation_params["Eta"] = self.eta
extra_params_kwargs['eta'] = self.eta
return extra_params_kwargs
self.model_wrap_cfg = CFGDenoiserKDiffusion(self)
self.model_wrap = self.model_wrap_cfg.inner_model
def get_sigmas(self, p, steps):
discard_next_to_last_sigma = self.config is not None and self.config.options.get('discard_next_to_last_sigma', False)
@@ -376,6 +127,9 @@ class KDiffusionSampler:
sigma_min, sigma_max = (0.1, 10) if opts.use_old_karras_scheduler_sigmas else (self.model_wrap.sigmas[0].item(), self.model_wrap.sigmas[-1].item())
sigmas = k_diffusion.sampling.get_sigmas_karras(n=steps, sigma_min=sigma_min, sigma_max=sigma_max, device=shared.device)
elif self.config is not None and self.config.options.get('scheduler', None) == 'exponential':
m_sigma_min, m_sigma_max = (self.model_wrap.sigmas[0].item(), self.model_wrap.sigmas[-1].item())
sigmas = k_diffusion.sampling.get_sigmas_exponential(n=steps, sigma_min=m_sigma_min, sigma_max=m_sigma_max, device=shared.device)
else:
sigmas = self.model_wrap.get_sigmas(steps)
@@ -384,24 +138,21 @@ class KDiffusionSampler:
return sigmas
def create_noise_sampler(self, x, sigmas, p):
"""For DPM++ SDE: manually create noise sampler to enable deterministic results across different batch sizes"""
if shared.opts.no_dpmpp_sde_batch_determinism:
return None
from k_diffusion.sampling import BrownianTreeNoiseSampler
sigma_min, sigma_max = sigmas[sigmas > 0].min(), sigmas.max()
current_iter_seeds = p.all_seeds[p.iteration * p.batch_size:(p.iteration + 1) * p.batch_size]
return BrownianTreeNoiseSampler(x, sigma_min, sigma_max, seed=current_iter_seeds)
def sample_img2img(self, p, x, noise, conditioning, unconditional_conditioning, steps=None, image_conditioning=None):
steps, t_enc = sd_samplers_common.setup_img2img_steps(p, steps)
sigmas = self.get_sigmas(p, steps)
sigma_sched = sigmas[steps - t_enc - 1:]
xi = x + noise * sigma_sched[0]
if opts.img2img_extra_noise > 0:
p.extra_generation_params["Extra noise"] = opts.img2img_extra_noise
extra_noise_params = ExtraNoiseParams(noise, x, xi)
extra_noise_callback(extra_noise_params)
noise = extra_noise_params.noise
xi += noise * opts.img2img_extra_noise
extra_params_kwargs = self.initialize(p)
parameters = inspect.signature(self.func).parameters
@@ -421,9 +172,12 @@ class KDiffusionSampler:
noise_sampler = self.create_noise_sampler(x, sigmas, p)
extra_params_kwargs['noise_sampler'] = noise_sampler
if self.config.options.get('solver_type', None) == 'heun':
extra_params_kwargs['solver_type'] = 'heun'
self.model_wrap_cfg.init_latent = x
self.last_latent = x
self.sampler_extra_args = {
'cond': conditioning,
'image_cond': image_conditioning,
'uncond': unconditional_conditioning,
@@ -431,7 +185,7 @@ class KDiffusionSampler:
's_min_uncond': self.s_min_uncond
}
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
if self.model_wrap_cfg.padded_cond_uncond:
p.extra_generation_params["Pad conds"] = True
@@ -443,34 +197,46 @@ class KDiffusionSampler:
sigmas = self.get_sigmas(p, steps)
if opts.sgm_noise_multiplier:
p.extra_generation_params["SGM noise multiplier"] = True
x = x * torch.sqrt(1.0 + sigmas[0] ** 2.0)
else:
x = x * sigmas[0]
extra_params_kwargs = self.initialize(p)
parameters = inspect.signature(self.func).parameters
if 'n' in parameters:
extra_params_kwargs['n'] = steps
if 'sigma_min' in parameters:
extra_params_kwargs['sigma_min'] = self.model_wrap.sigmas[0].item()
extra_params_kwargs['sigma_max'] = self.model_wrap.sigmas[-1].item()
if 'n' in parameters:
extra_params_kwargs['n'] = steps
else:
if 'sigmas' in parameters:
extra_params_kwargs['sigmas'] = sigmas
if self.config.options.get('brownian_noise', False):
noise_sampler = self.create_noise_sampler(x, sigmas, p)
extra_params_kwargs['noise_sampler'] = noise_sampler
if self.config.options.get('solver_type', None) == 'heun':
extra_params_kwargs['solver_type'] = 'heun'
self.last_latent = x
self.sampler_extra_args = {
'cond': conditioning,
'image_cond': image_conditioning,
'uncond': unconditional_conditioning,
'cond_scale': p.cfg_scale,
's_min_uncond': self.s_min_uncond
}
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
if self.model_wrap_cfg.padded_cond_uncond:
p.extra_generation_params["Pad conds"] = True
return samples
+167
@@ -0,0 +1,167 @@
import torch
import inspect
import sys
from modules import devices, sd_samplers_common, sd_samplers_timesteps_impl
from modules.sd_samplers_cfg_denoiser import CFGDenoiser
from modules.script_callbacks import ExtraNoiseParams, extra_noise_callback
from modules.shared import opts
import modules.shared as shared
samplers_timesteps = [
('DDIM', sd_samplers_timesteps_impl.ddim, ['ddim'], {}),
('PLMS', sd_samplers_timesteps_impl.plms, ['plms'], {}),
('UniPC', sd_samplers_timesteps_impl.unipc, ['unipc'], {}),
]
samplers_data_timesteps = [
sd_samplers_common.SamplerData(label, lambda model, funcname=funcname: CompVisSampler(funcname, model), aliases, options)
for label, funcname, aliases, options in samplers_timesteps
]
class CompVisTimestepsDenoiser(torch.nn.Module):
def __init__(self, model, *args, **kwargs):
super().__init__(*args, **kwargs)
self.inner_model = model
def forward(self, input, timesteps, **kwargs):
return self.inner_model.apply_model(input, timesteps, **kwargs)
class CompVisTimestepsVDenoiser(torch.nn.Module):
def __init__(self, model, *args, **kwargs):
super().__init__(*args, **kwargs)
self.inner_model = model
def predict_eps_from_z_and_v(self, x_t, t, v):
return self.inner_model.sqrt_alphas_cumprod[t.to(torch.int), None, None, None] * v + self.inner_model.sqrt_one_minus_alphas_cumprod[t.to(torch.int), None, None, None] * x_t
def forward(self, input, timesteps, **kwargs):
model_output = self.inner_model.apply_model(input, timesteps, **kwargs)
e_t = self.predict_eps_from_z_and_v(input, timesteps, model_output)
return e_t
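The v-prediction identity used above, checked on scalars (values are placeholders): with a = alphas_cumprod[t], eps = sqrt(a)*v + sqrt(1-a)*x_t and x0 = sqrt(a)*x_t - sqrt(1-a)*v, which together reconstruct x_t.
a = 0.25
x_t, v = 2.0, 1.0
eps = a ** 0.5 * v + (1 - a) ** 0.5 * x_t
x0 = a ** 0.5 * x_t - (1 - a) ** 0.5 * v
assert abs(a ** 0.5 * x0 + (1 - a) ** 0.5 * eps - x_t) < 1e-9  # x_t = sqrt(a)*x0 + sqrt(1-a)*eps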
class CFGDenoiserTimesteps(CFGDenoiser):
def __init__(self, sampler):
super().__init__(sampler)
self.alphas = shared.sd_model.alphas_cumprod
self.mask_before_denoising = True
def get_pred_x0(self, x_in, x_out, sigma):
ts = sigma.to(dtype=int)
a_t = self.alphas[ts][:, None, None, None]
sqrt_one_minus_at = (1 - a_t).sqrt()
pred_x0 = (x_in - sqrt_one_minus_at * x_out) / a_t.sqrt()
return pred_x0
@property
def inner_model(self):
if self.model_wrap is None:
denoiser = CompVisTimestepsVDenoiser if shared.sd_model.parameterization == "v" else CompVisTimestepsDenoiser
self.model_wrap = denoiser(shared.sd_model)
return self.model_wrap
class CompVisSampler(sd_samplers_common.Sampler):
def __init__(self, funcname, sd_model):
super().__init__(funcname)
self.eta_option_field = 'eta_ddim'
self.eta_infotext_field = 'Eta DDIM'
self.eta_default = 0.0
self.model_wrap_cfg = CFGDenoiserTimesteps(self)
def get_timesteps(self, p, steps):
discard_next_to_last_sigma = self.config is not None and self.config.options.get('discard_next_to_last_sigma', False)
if opts.always_discard_next_to_last_sigma and not discard_next_to_last_sigma:
discard_next_to_last_sigma = True
p.extra_generation_params["Discard penultimate sigma"] = True
steps += 1 if discard_next_to_last_sigma else 0
timesteps = torch.clip(torch.asarray(list(range(0, 1000, 1000 // steps)), device=devices.device) + 1, 0, 999)
return timesteps
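A worked example of the schedule above: for steps=20 the stride is 1000 // 20 = 50, so the clipped timesteps are 1, 51, 101, ..., 951.
import torch

ts = torch.clip(torch.asarray(list(range(0, 1000, 1000 // 20))) + 1, 0, 999)
assert ts[0].item() == 1 and ts[-1].item() == 951 and len(ts) == 20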
def sample_img2img(self, p, x, noise, conditioning, unconditional_conditioning, steps=None, image_conditioning=None):
steps, t_enc = sd_samplers_common.setup_img2img_steps(p, steps)
timesteps = self.get_timesteps(p, steps)
timesteps_sched = timesteps[:t_enc]
alphas_cumprod = shared.sd_model.alphas_cumprod
sqrt_alpha_cumprod = torch.sqrt(alphas_cumprod[timesteps[t_enc]])
sqrt_one_minus_alpha_cumprod = torch.sqrt(1 - alphas_cumprod[timesteps[t_enc]])
xi = x * sqrt_alpha_cumprod + noise * sqrt_one_minus_alpha_cumprod
if opts.img2img_extra_noise > 0:
p.extra_generation_params["Extra noise"] = opts.img2img_extra_noise
extra_noise_params = ExtraNoiseParams(noise, x, xi)
extra_noise_callback(extra_noise_params)
noise = extra_noise_params.noise
xi += noise * opts.img2img_extra_noise * sqrt_alpha_cumprod
extra_params_kwargs = self.initialize(p)
parameters = inspect.signature(self.func).parameters
if 'timesteps' in parameters:
extra_params_kwargs['timesteps'] = timesteps_sched
if 'is_img2img' in parameters:
extra_params_kwargs['is_img2img'] = True
self.model_wrap_cfg.init_latent = x
self.last_latent = x
self.sampler_extra_args = {
'cond': conditioning,
'image_cond': image_conditioning,
'uncond': unconditional_conditioning,
'cond_scale': p.cfg_scale,
's_min_uncond': self.s_min_uncond
}
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
if self.model_wrap_cfg.padded_cond_uncond:
p.extra_generation_params["Pad conds"] = True
return samples
def sample(self, p, x, conditioning, unconditional_conditioning, steps=None, image_conditioning=None):
steps = steps or p.steps
timesteps = self.get_timesteps(p, steps)
extra_params_kwargs = self.initialize(p)
parameters = inspect.signature(self.func).parameters
if 'timesteps' in parameters:
extra_params_kwargs['timesteps'] = timesteps
self.last_latent = x
self.sampler_extra_args = {
'cond': conditioning,
'image_cond': image_conditioning,
'uncond': unconditional_conditioning,
'cond_scale': p.cfg_scale,
's_min_uncond': self.s_min_uncond
}
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
if self.model_wrap_cfg.padded_cond_uncond:
p.extra_generation_params["Pad conds"] = True
return samples
sys.modules['modules.sd_samplers_compvis'] = sys.modules[__name__]
VanillaStableDiffusionSampler = CompVisSampler # temp. compatibility with older extensions
+137
@@ -0,0 +1,137 @@
import torch
import tqdm
import k_diffusion.sampling
import numpy as np
from modules import shared
from modules.models.diffusion.uni_pc import uni_pc
@torch.no_grad()
def ddim(model, x, timesteps, extra_args=None, callback=None, disable=None, eta=0.0):
alphas_cumprod = model.inner_model.inner_model.alphas_cumprod
alphas = alphas_cumprod[timesteps]
alphas_prev = alphas_cumprod[torch.nn.functional.pad(timesteps[:-1], pad=(1, 0))].to(torch.float64 if x.device.type != 'mps' else torch.float32)
sqrt_one_minus_alphas = torch.sqrt(1 - alphas)
sigmas = eta * np.sqrt((1 - alphas_prev.cpu().numpy()) / (1 - alphas.cpu()) * (1 - alphas.cpu() / alphas_prev.cpu().numpy()))
extra_args = {} if extra_args is None else extra_args
s_in = x.new_ones((x.shape[0]))
s_x = x.new_ones((x.shape[0], 1, 1, 1))
for i in tqdm.trange(len(timesteps) - 1, disable=disable):
index = len(timesteps) - 1 - i
e_t = model(x, timesteps[index].item() * s_in, **extra_args)
a_t = alphas[index].item() * s_x
a_prev = alphas_prev[index].item() * s_x
sigma_t = sigmas[index].item() * s_x
sqrt_one_minus_at = sqrt_one_minus_alphas[index].item() * s_x
pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()
dir_xt = (1. - a_prev - sigma_t ** 2).sqrt() * e_t
noise = sigma_t * k_diffusion.sampling.torch.randn_like(x)
x = a_prev.sqrt() * pred_x0 + dir_xt + noise
if callback is not None:
callback({'x': x, 'i': i, 'sigma': 0, 'sigma_hat': 0, 'denoised': pred_x0})
return x
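The update inside the loop above, restated on scalars (placeholder values): pred_x0 inverts the noise prediction, and at eta = 0 the sigma_t term vanishes, making the DDIM step fully deterministic.
a_t, a_prev, eta = 0.8, 0.9, 0.0
x, e_t = 1.0, 0.2
sigma_t = eta                       # eta = 0 -> no stochastic term
pred_x0 = (x - (1 - a_t) ** 0.5 * e_t) / a_t ** 0.5
x_prev = a_prev ** 0.5 * pred_x0 + (1 - a_prev - sigma_t ** 2) ** 0.5 * e_t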
@torch.no_grad()
def plms(model, x, timesteps, extra_args=None, callback=None, disable=None):
alphas_cumprod = model.inner_model.inner_model.alphas_cumprod
alphas = alphas_cumprod[timesteps]
alphas_prev = alphas_cumprod[torch.nn.functional.pad(timesteps[:-1], pad=(1, 0))].to(torch.float64 if x.device.type != 'mps' else torch.float32)
sqrt_one_minus_alphas = torch.sqrt(1 - alphas)
extra_args = {} if extra_args is None else extra_args
s_in = x.new_ones([x.shape[0]])
s_x = x.new_ones((x.shape[0], 1, 1, 1))
old_eps = []
def get_x_prev_and_pred_x0(e_t, index):
# select parameters corresponding to the currently considered timestep
a_t = alphas[index].item() * s_x
a_prev = alphas_prev[index].item() * s_x
sqrt_one_minus_at = sqrt_one_minus_alphas[index].item() * s_x
# current prediction for x_0
pred_x0 = (x - sqrt_one_minus_at * e_t) / a_t.sqrt()
# direction pointing to x_t
dir_xt = (1. - a_prev).sqrt() * e_t
x_prev = a_prev.sqrt() * pred_x0 + dir_xt
return x_prev, pred_x0
for i in tqdm.trange(len(timesteps) - 1, disable=disable):
index = len(timesteps) - 1 - i
ts = timesteps[index].item() * s_in
t_next = timesteps[max(index - 1, 0)].item() * s_in
e_t = model(x, ts, **extra_args)
if len(old_eps) == 0:
# Pseudo Improved Euler (2nd order)
x_prev, pred_x0 = get_x_prev_and_pred_x0(e_t, index)
e_t_next = model(x_prev, t_next, **extra_args)
e_t_prime = (e_t + e_t_next) / 2
elif len(old_eps) == 1:
# 2nd order Pseudo Linear Multistep (Adams-Bashforth)
e_t_prime = (3 * e_t - old_eps[-1]) / 2
elif len(old_eps) == 2:
# 3rd order Pseudo Linear Multistep (Adams-Bashforth)
e_t_prime = (23 * e_t - 16 * old_eps[-1] + 5 * old_eps[-2]) / 12
else:
# 4th order Pseudo Linear Multistep (Adams-Bashforth)
e_t_prime = (55 * e_t - 59 * old_eps[-1] + 37 * old_eps[-2] - 9 * old_eps[-3]) / 24
x_prev, pred_x0 = get_x_prev_and_pred_x0(e_t_prime, index)
old_eps.append(e_t)
if len(old_eps) >= 4:
old_eps.pop(0)
x = x_prev
if callback is not None:
callback({'x': x, 'i': i, 'sigma': 0, 'sigma_hat': 0, 'denoised': pred_x0})
return x
class UniPCCFG(uni_pc.UniPC):
def __init__(self, cfg_model, extra_args, callback, *args, **kwargs):
super().__init__(None, *args, **kwargs)
def after_update(x, model_x):
callback({'x': x, 'i': self.index, 'sigma': 0, 'sigma_hat': 0, 'denoised': model_x})
self.index += 1
self.cfg_model = cfg_model
self.extra_args = extra_args
self.callback = callback
self.index = 0
self.after_update = after_update
def get_model_input_time(self, t_continuous):
return (t_continuous - 1. / self.noise_schedule.total_N) * 1000.
def model(self, x, t):
t_input = self.get_model_input_time(t)
res = self.cfg_model(x, t_input, **self.extra_args)
return res
def unipc(model, x, timesteps, extra_args=None, callback=None, disable=None, is_img2img=False):
alphas_cumprod = model.inner_model.inner_model.alphas_cumprod
ns = uni_pc.NoiseScheduleVP('discrete', alphas_cumprod=alphas_cumprod)
t_start = timesteps[-1] / 1000 + 1 / 1000 if is_img2img else None # this is likely off by a bit - if someone wants to fix it please by all means
unipc_sampler = UniPCCFG(model, extra_args, callback, ns, predict_x0=True, thresholding=False, variant=shared.opts.uni_pc_variant)
x = unipc_sampler.sample(x, steps=len(timesteps), t_start=t_start, skip_type=shared.opts.uni_pc_skip_type, method="multistep", order=shared.opts.uni_pc_order, lower_order_final=shared.opts.uni_pc_lower_order_final)
return x
+3 -3
@@ -1,11 +1,11 @@
import torch.nn
import ldm.modules.diffusionmodules.openaimodel
from modules import script_callbacks, shared, devices
unet_options = []
current_unet_option = None
current_unet = None
original_forward = None
def list_unets():
@@ -47,7 +47,7 @@ def apply_unet(option=None):
if current_unet_option is None:
current_unet = None
if not (shared.cmd_opts.lowvram or shared.cmd_opts.medvram):
if not shared.sd_model.lowvram:
shared.sd_model.model.diffusion_model.to(devices.device)
return
@@ -88,5 +88,5 @@ def UNetModel_forward(self, x, timesteps=None, context=None, *args, **kwargs):
if current_unet is not None:
return current_unet.forward(x, timesteps, context, *args, **kwargs)
return original_forward(self, x, timesteps, context, *args, **kwargs)
+87 -18
@@ -1,6 +1,9 @@
import os
import collections
from modules import paths, shared, devices, script_callbacks, sd_models
from dataclasses import dataclass
from modules import paths, shared, devices, script_callbacks, sd_models, extra_networks, lowvram, sd_hijack, hashes
import glob
from copy import deepcopy
@@ -16,6 +19,23 @@ checkpoint_info = None
checkpoints_loaded = collections.OrderedDict()
def get_loaded_vae_name():
if loaded_vae_file is None:
return None
return os.path.basename(loaded_vae_file)
def get_loaded_vae_hash():
if loaded_vae_file is None:
return None
sha256 = hashes.sha256(loaded_vae_file, 'vae')
return sha256[0:10] if sha256 else None
def get_base_vae(model):
if base_vae is not None and checkpoint_info == model.sd_checkpoint_info and model:
return base_vae
@@ -83,6 +103,8 @@ def refresh_vae_list():
name = get_filename(filepath)
vae_dict[name] = filepath
vae_dict.update(dict(sorted(vae_dict.items(), key=lambda item: shared.natural_sort_key(item[0]))))
def find_vae_near_checkpoint(checkpoint_file):
checkpoint_path = os.path.basename(checkpoint_file).rsplit('.', 1)[0]
@@ -93,27 +115,74 @@ def find_vae_near_checkpoint(checkpoint_file):
return None
def resolve_vae(checkpoint_file):
if shared.cmd_opts.vae_path is not None:
return shared.cmd_opts.vae_path, 'from commandline argument'
@dataclass
class VaeResolution:
vae: str = None
source: str = None
resolved: bool = True
def tuple(self):
return self.vae, self.source
def is_automatic():
return shared.opts.sd_vae in {"Automatic", "auto"} # "auto" for people with old config
def resolve_vae_from_setting() -> VaeResolution:
if shared.opts.sd_vae == "None":
return VaeResolution()
vae_from_options = vae_dict.get(shared.opts.sd_vae, None)
if vae_from_options is not None:
return VaeResolution(vae_from_options, 'specified in settings')
if not is_automatic():
print(f"Couldn't find VAE named {shared.opts.sd_vae}; using None instead")
return VaeResolution(resolved=False)
def resolve_vae_from_user_metadata(checkpoint_file) -> VaeResolution:
metadata = extra_networks.get_user_metadata(checkpoint_file)
vae_metadata = metadata.get("vae", None)
if vae_metadata is not None and vae_metadata != "Automatic":
if vae_metadata == "None":
return VaeResolution()
vae_from_metadata = vae_dict.get(vae_metadata, None)
if vae_from_metadata is not None:
return VaeResolution(vae_from_metadata, "from user metadata")
return VaeResolution(resolved=False)
def resolve_vae_near_checkpoint(checkpoint_file) -> VaeResolution:
vae_near_checkpoint = find_vae_near_checkpoint(checkpoint_file)
if vae_near_checkpoint is not None and (not shared.opts.sd_vae_overrides_per_model_preferences or is_automatic()):
return VaeResolution(vae_near_checkpoint, 'found near the checkpoint')
return VaeResolution(resolved=False)
def resolve_vae(checkpoint_file) -> VaeResolution:
if shared.cmd_opts.vae_path is not None:
return VaeResolution(shared.cmd_opts.vae_path, 'from commandline argument')
if shared.opts.sd_vae_overrides_per_model_preferences and not is_automatic():
return resolve_vae_from_setting()
res = resolve_vae_from_user_metadata(checkpoint_file)
if res.resolved:
return res
res = resolve_vae_near_checkpoint(checkpoint_file)
if res.resolved:
return res
res = resolve_vae_from_setting()
return res
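A summary sketch of the precedence the chain above implements (the checkpoint path is a placeholder):
# 1. --vae-path command-line argument always wins;
# 2. the sd_vae setting, when it overrides per-model preferences and is not Automatic;
# 3. a "vae" entry in the checkpoint's user metadata;
# 4. a VAE file found next to the checkpoint;
# 5. the sd_vae setting as the final fallback.
vae_file, vae_source = resolve_vae("/path/to/model.safetensors").tuple()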
def load_vae_dict(filename, map_location):
@@ -123,7 +192,7 @@ def load_vae_dict(filename, map_location):
def load_vae(model, vae_file=None, vae_source="from unknown source"):
global vae_dict, loaded_vae_file
global vae_dict, base_vae, loaded_vae_file
# save_settings = False
cache_enabled = shared.opts.sd_vae_checkpoint_cache > 0
@@ -161,6 +230,8 @@ def load_vae(model, vae_file=None, vae_source="from unknown source"):
restore_base_vae(model)
loaded_vae_file = vae_file
model.base_vae = base_vae
model.loaded_vae_file = loaded_vae_file
# don't call this from outside
@@ -178,8 +249,6 @@ unspecified = object()
def reload_vae_weights(sd_model=None, vae_file=unspecified):
from modules import lowvram, devices, sd_hijack
if not sd_model:
sd_model = shared.sd_model
@@ -187,14 +256,14 @@ def reload_vae_weights(sd_model=None, vae_file=unspecified):
checkpoint_file = checkpoint_info.filename
if vae_file == unspecified:
vae_file, vae_source = resolve_vae(checkpoint_file).tuple()
else:
vae_source = "from function argument"
if loaded_vae_file == vae_file:
return
if shared.cmd_opts.lowvram or shared.cmd_opts.medvram:
if sd_model.lowvram:
lowvram.send_everything_to_cpu()
else:
sd_model.to(devices.cpu)
@@ -206,7 +275,7 @@ def reload_vae_weights(sd_model=None, vae_file=unspecified):
sd_hijack.model_hijack.hijack(sd_model)
script_callbacks.model_loaded_callback(sd_model)
if not shared.cmd_opts.lowvram and not shared.cmd_opts.medvram:
if not sd_model.lowvram:
sd_model.to(devices.device)
print("VAE weights loaded.")
+1 -1
@@ -81,6 +81,6 @@ def cheap_approximation(sample):
coefs = torch.tensor(coeffs).to(sample.device)
x_sample = torch.einsum("lxy,lr -> rxy", sample, coefs)
x_sample = torch.einsum("...lxy,lr -> ...rxy", sample, coefs)
return x_sample
+44 -8
@@ -44,7 +44,17 @@ def decoder():
)
def encoder():
return nn.Sequential(
conv(3, 64), Block(64, 64),
conv(64, 64, stride=2, bias=False), Block(64, 64), Block(64, 64), Block(64, 64),
conv(64, 64, stride=2, bias=False), Block(64, 64), Block(64, 64), Block(64, 64),
conv(64, 64, stride=2, bias=False), Block(64, 64), Block(64, 64), Block(64, 64),
conv(64, 4),
)
class TAESDDecoder(nn.Module):
latent_magnitude = 3
latent_shift = 0.5
@@ -55,21 +65,28 @@ class TAESD(nn.Module):
self.decoder.load_state_dict(
torch.load(decoder_path, map_location='cpu' if devices.device.type != 'cuda' else None))
@staticmethod
def unscale_latents(x):
"""[0, 1] -> raw latents"""
return x.sub(TAESD.latent_shift).mul(2 * TAESD.latent_magnitude)
class TAESDEncoder(nn.Module):
latent_magnitude = 3
latent_shift = 0.5
def __init__(self, encoder_path="taesd_encoder.pth"):
"""Initialize pretrained TAESD on the given device from the given checkpoints."""
super().__init__()
self.encoder = encoder()
self.encoder.load_state_dict(
torch.load(encoder_path, map_location='cpu' if devices.device.type != 'cuda' else None))
def download_model(model_path, model_url):
if not os.path.exists(model_path):
os.makedirs(os.path.dirname(model_path), exist_ok=True)
print(f'Downloading TAESD model to: {model_path}')
torch.hub.download_url_to_file(model_url, model_path)
def decoder_model():
model_name = "taesdxl_decoder.pth" if getattr(shared.sd_model, 'is_sdxl', False) else "taesd_decoder.pth"
loaded_model = sd_vae_taesd_models.get(model_name)
@@ -78,7 +95,7 @@ def model():
download_model(model_path, 'https://github.com/madebyollin/taesd/raw/main/' + model_name)
if os.path.exists(model_path):
loaded_model = TAESDDecoder(model_path)
loaded_model.eval()
loaded_model.to(devices.device, devices.dtype)
sd_vae_taesd_models[model_name] = loaded_model
@@ -86,3 +103,22 @@ def model():
raise FileNotFoundError('TAESD model not found')
return loaded_model.decoder
def encoder_model():
model_name = "taesdxl_encoder.pth" if getattr(shared.sd_model, 'is_sdxl', False) else "taesd_encoder.pth"
loaded_model = sd_vae_taesd_models.get(model_name)
if loaded_model is None:
model_path = os.path.join(paths_internal.models_path, "VAE-taesd", model_name)
download_model(model_path, 'https://github.com/madebyollin/taesd/raw/main/' + model_name)
if os.path.exists(model_path):
loaded_model = TAESDEncoder(model_path)
loaded_model.eval()
loaded_model.to(devices.device, devices.dtype)
sd_vae_taesd_models[model_name] = loaded_model
else:
raise FileNotFoundError('TAESD model not found')
return loaded_model.encoder
+38 -842
@@ -1,771 +1,51 @@
import datetime
import json
import os
import re
import sys
import threading
import time
import logging
import gradio as gr
import torch
import tqdm
import launch
import modules.interrogate
import modules.memmon
import modules.styles
import modules.devices as devices
from modules import localization, script_loading, errors, ui_components, shared_items, cmd_args
from modules import shared_cmd_options, shared_gradio_themes, options, shared_items, sd_models_types
from modules.paths_internal import models_path, script_path, data_path, sd_configs_path, sd_default_config, sd_model_file, default_sd_model_file, extensions_dir, extensions_builtin_dir # noqa: F401
from ldm.models.diffusion.ddpm import LatentDiffusion
from typing import Optional
from modules import util
log = logging.getLogger(__name__)
cmd_opts = shared_cmd_options.cmd_opts
parser = shared_cmd_options.parser
batch_cond_uncond = True # old field, unused now in favor of shared.opts.batch_cond_uncond
parallel_processing_allowed = True
styles_filename = cmd_opts.styles_file
config_filename = cmd_opts.ui_settings_file
hide_dirs = {"visible": not cmd_opts.hide_ui_dir_config}
demo = None
parser = cmd_args.parser
device = None
script_loading.preload_extensions(extensions_dir, parser, extension_list=launch.list_extensions(launch.args.ui_settings_file))
script_loading.preload_extensions(extensions_builtin_dir, parser)
weight_load_location = None
if os.environ.get('IGNORE_CMD_ARGS_ERRORS', None) is None:
cmd_opts = parser.parse_args()
else:
cmd_opts, _ = parser.parse_known_args()
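The IGNORE_CMD_ARGS_ERRORS branch swaps parse_args for parse_known_args, so importing this module under a wrapper that passes extra flags does not abort. A minimal, self-contained illustration (the flags are made up):
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--listen", action="store_true")
args, unknown = parser.parse_known_args(["--listen", "--flag-this-parser-never-heard-of"])
print(args.listen, unknown)  # True ['--flag-this-parser-never-heard-of']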
restricted_opts = {
"samples_filename_pattern",
"directories_filename_pattern",
"outdir_samples",
"outdir_txt2img_samples",
"outdir_img2img_samples",
"outdir_extras_samples",
"outdir_grids",
"outdir_txt2img_grids",
"outdir_save",
"outdir_init_images"
}
# https://huggingface.co/datasets/freddyaboulton/gradio-theme-subdomains/resolve/main/subdomains.json
gradio_hf_hub_themes = [
"gradio/glass",
"gradio/monochrome",
"gradio/seafoam",
"gradio/soft",
"freddyaboulton/dracula_revamped",
"gradio/dracula_test",
"abidlabs/dracula_test",
"abidlabs/pakistan",
"dawood/microsoft_windows",
"ysharma/steampunk"
]
cmd_opts.disable_extension_access = (cmd_opts.share or cmd_opts.listen or cmd_opts.server_name) and not cmd_opts.enable_insecure_extension_access
devices.device, devices.device_interrogate, devices.device_gfpgan, devices.device_esrgan, devices.device_codeformer = \
(devices.cpu if any(y in cmd_opts.use_cpu for y in [x, 'all']) else devices.get_optimal_device() for x in ['sd', 'interrogate', 'gfpgan', 'esrgan', 'codeformer'])
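A hedged restatement of that per-component device selection, with plain strings standing in for torch devices ('interrogate' is just an example of a --use-cpu value):
use_cpu = ["interrogate"]  # as if launched with: --use-cpu interrogate
def pick_device(component):
    return "cpu" if any(y in use_cpu for y in [component, "all"]) else "cuda"
print({c: pick_device(c) for c in ["sd", "interrogate", "gfpgan", "esrgan", "codeformer"]})
# {'sd': 'cuda', 'interrogate': 'cpu', 'gfpgan': 'cuda', 'esrgan': 'cuda', 'codeformer': 'cuda'}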
devices.dtype = torch.float32 if cmd_opts.no_half else torch.float16
devices.dtype_vae = torch.float32 if cmd_opts.no_half or cmd_opts.no_half_vae else torch.float16
device = devices.device
weight_load_location = None if cmd_opts.lowram else "cpu"
batch_cond_uncond = cmd_opts.always_batch_cond_uncond or not (cmd_opts.lowvram or cmd_opts.medvram)
parallel_processing_allowed = not cmd_opts.lowvram and not cmd_opts.medvram
xformers_available = False
config_filename = cmd_opts.ui_settings_file
os.makedirs(cmd_opts.hypernetwork_dir, exist_ok=True)
hypernetworks = {}
loaded_hypernetworks = []
state = None
def reload_hypernetworks():
from modules.hypernetworks import hypernetwork
global hypernetworks
prompt_styles = None
hypernetworks = hypernetwork.list_hypernetworks(cmd_opts.hypernetwork_dir)
class State:
skipped = False
interrupted = False
job = ""
job_no = 0
job_count = 0
processing_has_refined_job_count = False
job_timestamp = '0'
sampling_step = 0
sampling_steps = 0
current_latent = None
current_image = None
current_image_sampling_step = 0
id_live_preview = 0
textinfo = None
time_start = None
server_start = None
_server_command_signal = threading.Event()
_server_command: Optional[str] = None
@property
def need_restart(self) -> bool:
# Compatibility getter for need_restart.
return self.server_command == "restart"
@need_restart.setter
def need_restart(self, value: bool) -> None:
# Compatibility setter for need_restart.
if value:
self.server_command = "restart"
@property
def server_command(self):
return self._server_command
@server_command.setter
def server_command(self, value: Optional[str]) -> None:
"""
Set the server command to `value` and signal that it's been set.
"""
self._server_command = value
self._server_command_signal.set()
def wait_for_server_command(self, timeout: Optional[float] = None) -> Optional[str]:
"""
Wait for server command to get set; return and clear the value and signal.
"""
if self._server_command_signal.wait(timeout):
self._server_command_signal.clear()
req = self._server_command
self._server_command = None
return req
return None
def request_restart(self) -> None:
self.interrupt()
self.server_command = "restart"
log.info("Received restart request")
def skip(self):
self.skipped = True
log.info("Received skip request")
def interrupt(self):
self.interrupted = True
log.info("Received interrupt request")
def nextjob(self):
if opts.live_previews_enable and opts.show_progress_every_n_steps == -1:
self.do_set_current_image()
self.job_no += 1
self.sampling_step = 0
self.current_image_sampling_step = 0
def dict(self):
obj = {
"skipped": self.skipped,
"interrupted": self.interrupted,
"job": self.job,
"job_count": self.job_count,
"job_timestamp": self.job_timestamp,
"job_no": self.job_no,
"sampling_step": self.sampling_step,
"sampling_steps": self.sampling_steps,
}
return obj
def begin(self, job: str = "(unknown)"):
self.sampling_step = 0
self.job_count = -1
self.processing_has_refined_job_count = False
self.job_no = 0
self.job_timestamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
self.current_latent = None
self.current_image = None
self.current_image_sampling_step = 0
self.id_live_preview = 0
self.skipped = False
self.interrupted = False
self.textinfo = None
self.time_start = time.time()
self.job = job
devices.torch_gc()
log.info("Starting job %s", job)
def end(self):
duration = time.time() - self.time_start
log.info("Ending job %s (%.2f seconds)", self.job, duration)
self.job = ""
self.job_count = 0
devices.torch_gc()
def set_current_image(self):
"""sets self.current_image from self.current_latent if enough sampling steps have been made after the last call to this"""
if not parallel_processing_allowed:
return
if self.sampling_step - self.current_image_sampling_step >= opts.show_progress_every_n_steps and opts.live_previews_enable and opts.show_progress_every_n_steps != -1:
self.do_set_current_image()
def do_set_current_image(self):
if self.current_latent is None:
return
import modules.sd_samplers
if opts.show_progress_grid:
self.assign_current_image(modules.sd_samplers.samples_to_image_grid(self.current_latent))
else:
self.assign_current_image(modules.sd_samplers.sample_to_image(self.current_latent))
self.current_image_sampling_step = self.sampling_step
def assign_current_image(self, image):
self.current_image = image
self.id_live_preview += 1
state = State()
state.server_start = time.time()
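A hedged sketch of how a processing loop would drive this object over one job (the step count is arbitrary; it assumes opts is loaded so set_current_image can consult the live-preview settings):
state.begin("txt2img")
try:
    for step in range(20):
        if state.interrupted or state.skipped:
            break
        state.sampling_step = step
        state.set_current_image()  # refreshes the preview if enough steps have passed
    state.nextjob()
finally:
    state.end()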
styles_filename = cmd_opts.styles_file
prompt_styles = modules.styles.StyleDatabase(styles_filename)
interrogator = modules.interrogate.InterrogateModels("interrogate")
interrogator = None
face_restorers = []
options_templates = None
opts = None
restricted_opts = None
class OptionInfo:
def __init__(self, default=None, label="", component=None, component_args=None, onchange=None, section=None, refresh=None, comment_before='', comment_after=''):
self.default = default
self.label = label
self.component = component
self.component_args = component_args
self.onchange = onchange
self.section = section
self.refresh = refresh
self.comment_before = comment_before
"""HTML text that will be added after label in UI"""
self.comment_after = comment_after
"""HTML text that will be added before label in UI"""
def link(self, label, url):
self.comment_before += f"[<a href='{url}' target='_blank'>{label}</a>]"
return self
def js(self, label, js_func):
self.comment_before += f"[<a onclick='{js_func}(); return false'>{label}</a>]"
return self
def info(self, info):
self.comment_after += f"<span class='info'>({info})</span>"
return self
def html(self, html):
self.comment_after += html
return self
def needs_restart(self):
self.comment_after += " <span class='info'>(requires restart)</span>"
return self
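Each helper returns self, so the option templates below can chain them fluently. A hedged example (the label, URL and values are placeholders):
example = OptionInfo(0.5, "Example weight", gr.Slider, {"minimum": 0.0, "maximum": 1.0, "step": 0.01}) \
    .link("wiki", "https://example.invalid/docs") \
    .info("0 = off; 1 = full effect") \
    .needs_restart()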
def options_section(section_identifier, options_dict):
for v in options_dict.values():
v.section = section_identifier
return options_dict
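options_section only stamps every OptionInfo with the section tuple and returns the dict unchanged, which is what lets the template blocks below stay declarative. A tiny sketch ('demo' is a made-up section):
demo_opts = options_section(('demo', "Demo"), {"demo_flag": OptionInfo(False, "A demo flag")})
assert demo_opts["demo_flag"].section == ('demo', "Demo")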
def list_checkpoint_tiles():
import modules.sd_models
return modules.sd_models.checkpoint_tiles()
def refresh_checkpoints():
import modules.sd_models
return modules.sd_models.list_models()
def list_samplers():
import modules.sd_samplers
return modules.sd_samplers.all_samplers
hide_dirs = {"visible": not cmd_opts.hide_ui_dir_config}
tab_names = []
options_templates = {}
options_templates.update(options_section(('saving-images', "Saving images/grids"), {
"samples_save": OptionInfo(True, "Always save all generated images"),
"samples_format": OptionInfo('png', 'File format for images'),
"samples_filename_pattern": OptionInfo("", "Images filename pattern", component_args=hide_dirs).link("wiki", "https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Images-Filename-Name-and-Subdirectory"),
"save_images_add_number": OptionInfo(True, "Add number to filename when saving", component_args=hide_dirs),
"grid_save": OptionInfo(True, "Always save all generated image grids"),
"grid_format": OptionInfo('png', 'File format for grids'),
"grid_extended_filename": OptionInfo(False, "Add extended info (seed, prompt) to filename when saving grid"),
"grid_only_if_multiple": OptionInfo(True, "Do not save grids consisting of one picture"),
"grid_prevent_empty_spots": OptionInfo(False, "Prevent empty spots in grid (when set to autodetect)"),
"grid_zip_filename_pattern": OptionInfo("", "Archive filename pattern", component_args=hide_dirs).link("wiki", "https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Images-Filename-Name-and-Subdirectory"),
"n_rows": OptionInfo(-1, "Grid row count; use -1 for autodetect and 0 for it to be same as batch size", gr.Slider, {"minimum": -1, "maximum": 16, "step": 1}),
"font": OptionInfo("", "Font for image grids that have text"),
"grid_text_active_color": OptionInfo("#000000", "Text color for image grids", ui_components.FormColorPicker, {}),
"grid_text_inactive_color": OptionInfo("#999999", "Inactive text color for image grids", ui_components.FormColorPicker, {}),
"grid_background_color": OptionInfo("#ffffff", "Background color for image grids", ui_components.FormColorPicker, {}),
"enable_pnginfo": OptionInfo(True, "Save text information about generation parameters as chunks to png files"),
"save_txt": OptionInfo(False, "Create a text file next to every image with generation parameters."),
"save_images_before_face_restoration": OptionInfo(False, "Save a copy of image before doing face restoration."),
"save_images_before_highres_fix": OptionInfo(False, "Save a copy of image before applying highres fix."),
"save_images_before_color_correction": OptionInfo(False, "Save a copy of image before applying color correction to img2img results"),
"save_mask": OptionInfo(False, "For inpainting, save a copy of the greyscale mask"),
"save_mask_composite": OptionInfo(False, "For inpainting, save a masked composite"),
"jpeg_quality": OptionInfo(80, "Quality for saved jpeg images", gr.Slider, {"minimum": 1, "maximum": 100, "step": 1}),
"webp_lossless": OptionInfo(False, "Use lossless compression for webp images"),
"export_for_4chan": OptionInfo(True, "Save copy of large images as JPG").info("if the file size is above the limit, or either width or height are above the limit"),
"img_downscale_threshold": OptionInfo(4.0, "File size limit for the above option, MB", gr.Number),
"target_side_length": OptionInfo(4000, "Width/height limit for the above option, in pixels", gr.Number),
"img_max_size_mp": OptionInfo(200, "Maximum image size", gr.Number).info("in megapixels"),
"use_original_name_batch": OptionInfo(True, "Use original name for output filename during batch process in extras tab"),
"use_upscaler_name_as_suffix": OptionInfo(False, "Use upscaler name as filename suffix in the extras tab"),
"save_selected_only": OptionInfo(True, "When using 'Save' button, only save a single selected image"),
"save_init_img": OptionInfo(False, "Save init images when using img2img"),
"temp_dir": OptionInfo("", "Directory for temporary images; leave empty for default"),
"clean_temp_dir_at_start": OptionInfo(False, "Cleanup non-default temporary directory when starting webui"),
}))
options_templates.update(options_section(('saving-paths', "Paths for saving"), {
"outdir_samples": OptionInfo("", "Output directory for images; if empty, defaults to three directories below", component_args=hide_dirs),
"outdir_txt2img_samples": OptionInfo("outputs/txt2img-images", 'Output directory for txt2img images', component_args=hide_dirs),
"outdir_img2img_samples": OptionInfo("outputs/img2img-images", 'Output directory for img2img images', component_args=hide_dirs),
"outdir_extras_samples": OptionInfo("outputs/extras-images", 'Output directory for images from extras tab', component_args=hide_dirs),
"outdir_grids": OptionInfo("", "Output directory for grids; if empty, defaults to two directories below", component_args=hide_dirs),
"outdir_txt2img_grids": OptionInfo("outputs/txt2img-grids", 'Output directory for txt2img grids', component_args=hide_dirs),
"outdir_img2img_grids": OptionInfo("outputs/img2img-grids", 'Output directory for img2img grids', component_args=hide_dirs),
"outdir_save": OptionInfo("log/images", "Directory for saving images using the Save button", component_args=hide_dirs),
"outdir_init_images": OptionInfo("outputs/init-images", "Directory for saving init images when using img2img", component_args=hide_dirs),
}))
options_templates.update(options_section(('saving-to-dirs', "Saving to a directory"), {
"save_to_dirs": OptionInfo(True, "Save images to a subdirectory"),
"grid_save_to_dirs": OptionInfo(True, "Save grids to a subdirectory"),
"use_save_to_dirs_for_ui": OptionInfo(False, "When using \"Save\" button, save images to a subdirectory"),
"directories_filename_pattern": OptionInfo("[date]", "Directory name pattern", component_args=hide_dirs).link("wiki", "https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Images-Filename-Name-and-Subdirectory"),
"directories_max_prompt_words": OptionInfo(8, "Max prompt words for [prompt_words] pattern", gr.Slider, {"minimum": 1, "maximum": 20, "step": 1, **hide_dirs}),
}))
options_templates.update(options_section(('upscaling', "Upscaling"), {
"ESRGAN_tile": OptionInfo(192, "Tile size for ESRGAN upscalers.", gr.Slider, {"minimum": 0, "maximum": 512, "step": 16}).info("0 = no tiling"),
"ESRGAN_tile_overlap": OptionInfo(8, "Tile overlap for ESRGAN upscalers.", gr.Slider, {"minimum": 0, "maximum": 48, "step": 1}).info("Low values = visible seam"),
"realesrgan_enabled_models": OptionInfo(["R-ESRGAN 4x+", "R-ESRGAN 4x+ Anime6B"], "Select which Real-ESRGAN models to show in the web UI.", gr.CheckboxGroup, lambda: {"choices": shared_items.realesrgan_models_names()}),
"upscaler_for_img2img": OptionInfo(None, "Upscaler for img2img", gr.Dropdown, lambda: {"choices": [x.name for x in sd_upscalers]}),
}))
options_templates.update(options_section(('face-restoration', "Face restoration"), {
"face_restoration_model": OptionInfo("CodeFormer", "Face restoration model", gr.Radio, lambda: {"choices": [x.name() for x in face_restorers]}),
"code_former_weight": OptionInfo(0.5, "CodeFormer weight", gr.Slider, {"minimum": 0, "maximum": 1, "step": 0.01}).info("0 = maximum effect; 1 = minimum effect"),
"face_restoration_unload": OptionInfo(False, "Move face restoration model from VRAM into RAM after processing"),
}))
options_templates.update(options_section(('system', "System"), {
"show_warnings": OptionInfo(False, "Show warnings in console."),
"memmon_poll_rate": OptionInfo(8, "VRAM usage polls per second during generation.", gr.Slider, {"minimum": 0, "maximum": 40, "step": 1}).info("0 = disable"),
"samples_log_stdout": OptionInfo(False, "Always print all generation info to standard output"),
"multiple_tqdm": OptionInfo(True, "Add a second progress bar to the console that shows progress for an entire job."),
"print_hypernet_extra": OptionInfo(False, "Print extra hypernetwork information to console."),
"list_hidden_files": OptionInfo(True, "Load models/files in hidden directories").info("directory is hidden if its name starts with \".\""),
"disable_mmap_load_safetensors": OptionInfo(False, "Disable memmapping for loading .safetensors files.").info("fixes very slow loading speed in some cases"),
}))
options_templates.update(options_section(('training', "Training"), {
"unload_models_when_training": OptionInfo(False, "Move VAE and CLIP to RAM when training if possible. Saves VRAM."),
"pin_memory": OptionInfo(False, "Turn on pin_memory for DataLoader. Makes training slightly faster but can increase memory usage."),
"save_optimizer_state": OptionInfo(False, "Saves Optimizer state as separate *.optim file. Training of embedding or HN can be resumed with the matching optim file."),
"save_training_settings_to_txt": OptionInfo(True, "Save textual inversion and hypernet settings to a text file whenever training starts."),
"dataset_filename_word_regex": OptionInfo("", "Filename word regex"),
"dataset_filename_join_string": OptionInfo(" ", "Filename join string"),
"training_image_repeats_per_epoch": OptionInfo(1, "Number of repeats for a single input image per epoch; used only for displaying epoch number", gr.Number, {"precision": 0}),
"training_write_csv_every": OptionInfo(500, "Save an csv containing the loss to log directory every N steps, 0 to disable"),
"training_xattention_optimizations": OptionInfo(False, "Use cross attention optimizations while training"),
"training_enable_tensorboard": OptionInfo(False, "Enable tensorboard logging."),
"training_tensorboard_save_images": OptionInfo(False, "Save generated images within tensorboard."),
"training_tensorboard_flush_every": OptionInfo(120, "How often, in seconds, to flush the pending tensorboard events and summaries to disk."),
}))
options_templates.update(options_section(('sd', "Stable Diffusion"), {
"sd_model_checkpoint": OptionInfo(None, "Stable Diffusion checkpoint", gr.Dropdown, lambda: {"choices": list_checkpoint_tiles()}, refresh=refresh_checkpoints),
"sd_checkpoint_cache": OptionInfo(0, "Checkpoints to cache in RAM", gr.Slider, {"minimum": 0, "maximum": 10, "step": 1}),
"sd_vae_checkpoint_cache": OptionInfo(0, "VAE Checkpoints to cache in RAM", gr.Slider, {"minimum": 0, "maximum": 10, "step": 1}),
"sd_vae": OptionInfo("Automatic", "SD VAE", gr.Dropdown, lambda: {"choices": shared_items.sd_vae_items()}, refresh=shared_items.refresh_vae_list).info("choose VAE model: Automatic = use one with same filename as checkpoint; None = use VAE from checkpoint"),
"sd_vae_as_default": OptionInfo(True, "Ignore selected VAE for stable diffusion checkpoints that have their own .vae.pt next to them"),
"sd_unet": OptionInfo("Automatic", "SD Unet", gr.Dropdown, lambda: {"choices": shared_items.sd_unet_items()}, refresh=shared_items.refresh_unet_list).info("choose Unet model: Automatic = use one with same filename as checkpoint; None = use Unet from checkpoint"),
"inpainting_mask_weight": OptionInfo(1.0, "Inpainting conditioning mask strength", gr.Slider, {"minimum": 0.0, "maximum": 1.0, "step": 0.01}),
"initial_noise_multiplier": OptionInfo(1.0, "Noise multiplier for img2img", gr.Slider, {"minimum": 0.5, "maximum": 1.5, "step": 0.01}),
"img2img_color_correction": OptionInfo(False, "Apply color correction to img2img results to match original colors."),
"img2img_fix_steps": OptionInfo(False, "With img2img, do exactly the amount of steps the slider specifies.").info("normally you'd do less with less denoising"),
"img2img_background_color": OptionInfo("#ffffff", "With img2img, fill image's transparent parts with this color.", ui_components.FormColorPicker, {}),
"enable_quantization": OptionInfo(False, "Enable quantization in K samplers for sharper and cleaner results. This may change existing seeds. Requires restart to apply."),
"enable_emphasis": OptionInfo(True, "Enable emphasis").info("use (text) to make model pay more attention to text and [text] to make it pay less attention"),
"enable_batch_seeds": OptionInfo(True, "Make K-diffusion samplers produce same images in a batch as when making a single image"),
"comma_padding_backtrack": OptionInfo(20, "Prompt word wrap length limit", gr.Slider, {"minimum": 0, "maximum": 74, "step": 1}).info("in tokens - for texts shorter than specified, if they don't fit into 75 token limit, move them to the next 75 token chunk"),
"CLIP_stop_at_last_layers": OptionInfo(1, "Clip skip", gr.Slider, {"minimum": 1, "maximum": 12, "step": 1}).link("wiki", "https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#clip-skip").info("ignore last layers of CLIP network; 1 ignores none, 2 ignores one layer"),
"upcast_attn": OptionInfo(False, "Upcast cross attention layer to float32"),
"auto_vae_precision": OptionInfo(True, "Automaticlly revert VAE to 32-bit floats").info("triggers when a tensor with NaNs is produced in VAE; disabling the option in this case will result in a black square image"),
"randn_source": OptionInfo("GPU", "Random number generator source.", gr.Radio, {"choices": ["GPU", "CPU"]}).info("changes seeds drastically; use CPU to produce the same picture across different videocard vendors"),
}))
options_templates.update(options_section(('sdxl', "Stable Diffusion XL"), {
"sdxl_crop_top": OptionInfo(0, "crop top coordinate"),
"sdxl_crop_left": OptionInfo(0, "crop left coordinate"),
"sdxl_refiner_low_aesthetic_score": OptionInfo(2.5, "SDXL low aesthetic score", gr.Number).info("used for refiner model negative prompt"),
"sdxl_refiner_high_aesthetic_score": OptionInfo(6.0, "SDXL high aesthetic score", gr.Number).info("used for refiner model prompt"),
}))
options_templates.update(options_section(('optimizations', "Optimizations"), {
"cross_attention_optimization": OptionInfo("Automatic", "Cross attention optimization", gr.Dropdown, lambda: {"choices": shared_items.cross_attention_optimizations()}),
"s_min_uncond": OptionInfo(0.0, "Negative Guidance minimum sigma", gr.Slider, {"minimum": 0.0, "maximum": 15.0, "step": 0.01}).link("PR", "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/9177").info("skip negative prompt for some steps when the image is almost ready; 0=disable, higher=faster"),
"token_merging_ratio": OptionInfo(0.0, "Token merging ratio", gr.Slider, {"minimum": 0.0, "maximum": 0.9, "step": 0.1}).link("PR", "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/9256").info("0=disable, higher=faster"),
"token_merging_ratio_img2img": OptionInfo(0.0, "Token merging ratio for img2img", gr.Slider, {"minimum": 0.0, "maximum": 0.9, "step": 0.1}).info("only applies if non-zero and overrides above"),
"token_merging_ratio_hr": OptionInfo(0.0, "Token merging ratio for high-res pass", gr.Slider, {"minimum": 0.0, "maximum": 0.9, "step": 0.1}).info("only applies if non-zero and overrides above"),
"pad_cond_uncond": OptionInfo(False, "Pad prompt/negative prompt to be same length").info("improves performance when prompt and negative prompt have different lengths; changes seeds"),
"experimental_persistent_cond_cache": OptionInfo(False, "persistent cond cache").info("Experimental, keep cond caches across jobs, reduce overhead."),
}))
options_templates.update(options_section(('compatibility', "Compatibility"), {
"use_old_emphasis_implementation": OptionInfo(False, "Use old emphasis implementation. Can be useful to reproduce old seeds."),
"use_old_karras_scheduler_sigmas": OptionInfo(False, "Use old karras scheduler sigmas (0.1 to 10)."),
"no_dpmpp_sde_batch_determinism": OptionInfo(False, "Do not make DPM++ SDE deterministic across different batch sizes."),
"use_old_hires_fix_width_height": OptionInfo(False, "For hires fix, use width/height sliders to set final resolution rather than first pass (disables Upscale by, Resize width/height to)."),
"dont_fix_second_order_samplers_schedule": OptionInfo(False, "Do not fix prompt schedule for second order samplers."),
"hires_fix_use_firstpass_conds": OptionInfo(False, "For hires fix, calculate conds of second pass using extra networks of first pass."),
}))
options_templates.update(options_section(('interrogate', "Interrogate Options"), {
"interrogate_keep_models_in_memory": OptionInfo(False, "Keep models in VRAM"),
"interrogate_return_ranks": OptionInfo(False, "Include ranks of model tags matches in results.").info("booru only"),
"interrogate_clip_num_beams": OptionInfo(1, "BLIP: num_beams", gr.Slider, {"minimum": 1, "maximum": 16, "step": 1}),
"interrogate_clip_min_length": OptionInfo(24, "BLIP: minimum description length", gr.Slider, {"minimum": 1, "maximum": 128, "step": 1}),
"interrogate_clip_max_length": OptionInfo(48, "BLIP: maximum description length", gr.Slider, {"minimum": 1, "maximum": 256, "step": 1}),
"interrogate_clip_dict_limit": OptionInfo(1500, "CLIP: maximum number of lines in text file").info("0 = No limit"),
"interrogate_clip_skip_categories": OptionInfo([], "CLIP: skip inquire categories", gr.CheckboxGroup, lambda: {"choices": modules.interrogate.category_types()}, refresh=modules.interrogate.category_types),
"interrogate_deepbooru_score_threshold": OptionInfo(0.5, "deepbooru: score threshold", gr.Slider, {"minimum": 0, "maximum": 1, "step": 0.01}),
"deepbooru_sort_alpha": OptionInfo(True, "deepbooru: sort tags alphabetically").info("if not: sort by score"),
"deepbooru_use_spaces": OptionInfo(True, "deepbooru: use spaces in tags").info("if not: use underscores"),
"deepbooru_escape": OptionInfo(True, "deepbooru: escape (\\) brackets").info("so they are used as literal brackets and not for emphasis"),
"deepbooru_filter_tags": OptionInfo("", "deepbooru: filter out those tags").info("separate by comma"),
}))
options_templates.update(options_section(('extra_networks', "Extra Networks"), {
"extra_networks_show_hidden_directories": OptionInfo(True, "Show hidden directories").info("directory is hidden if its name starts with \".\"."),
"extra_networks_hidden_models": OptionInfo("When searched", "Show cards for models in hidden directories", gr.Radio, {"choices": ["Always", "When searched", "Never"]}).info('"When searched" option will only show the item when the search string has 4 characters or more'),
"extra_networks_default_multiplier": OptionInfo(1.0, "Default multiplier for extra networks", gr.Slider, {"minimum": 0.0, "maximum": 2.0, "step": 0.01}),
"extra_networks_card_width": OptionInfo(0, "Card width for Extra Networks").info("in pixels"),
"extra_networks_card_height": OptionInfo(0, "Card height for Extra Networks").info("in pixels"),
"extra_networks_card_text_scale": OptionInfo(1.0, "Card text scale", gr.Slider, {"minimum": 0.0, "maximum": 2.0, "step": 0.01}).info("1 = original size"),
"extra_networks_card_show_desc": OptionInfo(True, "Show description on card"),
"extra_networks_add_text_separator": OptionInfo(" ", "Extra networks separator").info("extra text to add before <...> when adding extra network to prompt"),
"ui_extra_networks_tab_reorder": OptionInfo("", "Extra networks tab order").needs_restart(),
"textual_inversion_print_at_load": OptionInfo(False, "Print a list of Textual Inversion embeddings when loading model"),
"textual_inversion_add_hashes_to_infotext": OptionInfo(True, "Add Textual Inversion hashes to infotext"),
"sd_hypernetwork": OptionInfo("None", "Add hypernetwork to prompt", gr.Dropdown, lambda: {"choices": ["None", *hypernetworks]}, refresh=reload_hypernetworks),
}))
options_templates.update(options_section(('ui', "User interface"), {
"localization": OptionInfo("None", "Localization", gr.Dropdown, lambda: {"choices": ["None"] + list(localization.localizations.keys())}, refresh=lambda: localization.list_localizations(cmd_opts.localizations_dir)).needs_restart(),
"gradio_theme": OptionInfo("Default", "Gradio theme", ui_components.DropdownEditable, lambda: {"choices": ["Default"] + gradio_hf_hub_themes}).needs_restart(),
"img2img_editor_height": OptionInfo(720, "img2img: height of image editor", gr.Slider, {"minimum": 80, "maximum": 1600, "step": 1}).info("in pixels").needs_restart(),
"return_grid": OptionInfo(True, "Show grid in results for web"),
"return_mask": OptionInfo(False, "For inpainting, include the greyscale mask in results for web"),
"return_mask_composite": OptionInfo(False, "For inpainting, include masked composite in results for web"),
"do_not_show_images": OptionInfo(False, "Do not show any images in results for web"),
"send_seed": OptionInfo(True, "Send seed when sending prompt or image to other interface"),
"send_size": OptionInfo(True, "Send size when sending prompt or image to another interface"),
"js_modal_lightbox": OptionInfo(True, "Enable full page image viewer"),
"js_modal_lightbox_initially_zoomed": OptionInfo(True, "Show images zoomed in by default in full page image viewer"),
"js_modal_lightbox_gamepad": OptionInfo(False, "Navigate image viewer with gamepad"),
"js_modal_lightbox_gamepad_repeat": OptionInfo(250, "Gamepad repeat period, in milliseconds"),
"show_progress_in_title": OptionInfo(True, "Show generation progress in window title."),
"samplers_in_dropdown": OptionInfo(True, "Use dropdown for sampler selection instead of radio group").needs_restart(),
"dimensions_and_batch_together": OptionInfo(True, "Show Width/Height and Batch sliders in same row").needs_restart(),
"keyedit_precision_attention": OptionInfo(0.1, "Ctrl+up/down precision when editing (attention:1.1)", gr.Slider, {"minimum": 0.01, "maximum": 0.2, "step": 0.001}),
"keyedit_precision_extra": OptionInfo(0.05, "Ctrl+up/down precision when editing <extra networks:0.9>", gr.Slider, {"minimum": 0.01, "maximum": 0.2, "step": 0.001}),
"keyedit_delimiters": OptionInfo(".,\\/!?%^*;:{}=`~()", "Ctrl+up/down word delimiters"),
"keyedit_move": OptionInfo(True, "Alt+left/right moves prompt elements"),
"quicksettings_list": OptionInfo(["sd_model_checkpoint"], "Quicksettings list", ui_components.DropdownMulti, lambda: {"choices": list(opts.data_labels.keys())}).js("info", "settingsHintsShowQuicksettings").info("setting entries that appear at the top of page rather than in settings tab").needs_restart(),
"ui_tab_order": OptionInfo([], "UI tab order", ui_components.DropdownMulti, lambda: {"choices": list(tab_names)}).needs_restart(),
"hidden_tabs": OptionInfo([], "Hidden UI tabs", ui_components.DropdownMulti, lambda: {"choices": list(tab_names)}).needs_restart(),
"ui_reorder_list": OptionInfo([], "txt2img/img2img UI item order", ui_components.DropdownMulti, lambda: {"choices": list(shared_items.ui_reorder_categories())}).info("selected items appear first").needs_restart(),
"hires_fix_show_sampler": OptionInfo(False, "Hires fix: show hires sampler selection").needs_restart(),
"hires_fix_show_prompts": OptionInfo(False, "Hires fix: show hires prompt and negative prompt").needs_restart(),
"disable_token_counters": OptionInfo(False, "Disable prompt token counters").needs_restart(),
}))
options_templates.update(options_section(('infotext', "Infotext"), {
"add_model_hash_to_info": OptionInfo(True, "Add model hash to generation information"),
"add_model_name_to_info": OptionInfo(True, "Add model name to generation information"),
"add_user_name_to_info": OptionInfo(False, "Add user name to generation information when authenticated"),
"add_version_to_infotext": OptionInfo(True, "Add program version to generation information"),
"disable_weights_auto_swap": OptionInfo(True, "Disregard checkpoint information from pasted infotext").info("when reading generation parameters from text into UI"),
"infotext_styles": OptionInfo("Apply if any", "Infer styles from prompts of pasted infotext", gr.Radio, {"choices": ["Ignore", "Apply", "Discard", "Apply if any"]}).info("when reading generation parameters from text into UI)").html("""<ul style='margin-left: 1.5em'>
<li>Ignore: keep prompt and styles dropdown as it is.</li>
<li>Apply: remove style text from prompt, always replace styles dropdown value with found styles (even if none are found).</li>
<li>Discard: remove style text from prompt, keep styles dropdown as it is.</li>
<li>Apply if any: remove style text from prompt; if any styles are found in prompt, put them into styles dropdown, otherwise keep it as it is.</li>
</ul>"""),
}))
options_templates.update(options_section(('ui', "Live previews"), {
"show_progressbar": OptionInfo(True, "Show progressbar"),
"live_previews_enable": OptionInfo(True, "Show live previews of the created image"),
"live_previews_image_format": OptionInfo("png", "Live preview file format", gr.Radio, {"choices": ["jpeg", "png", "webp"]}),
"show_progress_grid": OptionInfo(True, "Show previews of all images generated in a batch as a grid"),
"show_progress_every_n_steps": OptionInfo(10, "Live preview display period", gr.Slider, {"minimum": -1, "maximum": 32, "step": 1}).info("in sampling steps - show new live preview image every N sampling steps; -1 = only show after completion of batch"),
"show_progress_type": OptionInfo("Approx NN", "Live preview method", gr.Radio, {"choices": ["Full", "Approx NN", "Approx cheap", "TAESD"]}).info("Full = slow but pretty; Approx NN and TAESD = fast but low quality; Approx cheap = super fast but terrible otherwise"),
"live_preview_content": OptionInfo("Prompt", "Live preview subject", gr.Radio, {"choices": ["Combined", "Prompt", "Negative prompt"]}),
"live_preview_refresh_period": OptionInfo(1000, "Progressbar and preview update period").info("in milliseconds"),
}))
options_templates.update(options_section(('sampler-params', "Sampler parameters"), {
"hide_samplers": OptionInfo([], "Hide samplers in user interface", gr.CheckboxGroup, lambda: {"choices": [x.name for x in list_samplers()]}).needs_restart(),
"eta_ddim": OptionInfo(0.0, "Eta for DDIM", gr.Slider, {"minimum": 0.0, "maximum": 1.0, "step": 0.01}).info("noise multiplier; higher = more unperdictable results"),
"eta_ancestral": OptionInfo(1.0, "Eta for ancestral samplers", gr.Slider, {"minimum": 0.0, "maximum": 1.0, "step": 0.01}).info("noise multiplier; applies to Euler a and other samplers that have a in them"),
"ddim_discretize": OptionInfo('uniform', "img2img DDIM discretize", gr.Radio, {"choices": ['uniform', 'quad']}),
's_churn': OptionInfo(0.0, "sigma churn", gr.Slider, {"minimum": 0.0, "maximum": 1.0, "step": 0.01}),
's_tmin': OptionInfo(0.0, "sigma tmin", gr.Slider, {"minimum": 0.0, "maximum": 1.0, "step": 0.01}),
's_noise': OptionInfo(1.0, "sigma noise", gr.Slider, {"minimum": 0.0, "maximum": 1.0, "step": 0.01}),
'k_sched_type': OptionInfo("Automatic", "scheduler type", gr.Dropdown, {"choices": ["Automatic", "karras", "exponential", "polyexponential"]}).info("lets you override the noise schedule for k-diffusion samplers; choosing Automatic disables the three parameters below"),
'sigma_min': OptionInfo(0.0, "sigma min", gr.Number).info("0 = default (~0.03); minimum noise strength for k-diffusion noise scheduler"),
'sigma_max': OptionInfo(0.0, "sigma max", gr.Number).info("0 = default (~14.6); maximum noise strength for k-diffusion noise schedule"),
'rho': OptionInfo(0.0, "rho", gr.Number).info("0 = default (7 for karras, 1 for polyexponential); higher values result in a more steep noise schedule (decreases faster)"),
'eta_noise_seed_delta': OptionInfo(0, "Eta noise seed delta", gr.Number, {"precision": 0}).info("ENSD; does not improve anything, just produces different results for ancestral samplers - only useful for reproducing images"),
'always_discard_next_to_last_sigma': OptionInfo(False, "Always discard next-to-last sigma").link("PR", "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/6044"),
'uni_pc_variant': OptionInfo("bh1", "UniPC variant", gr.Radio, {"choices": ["bh1", "bh2", "vary_coeff"]}),
'uni_pc_skip_type': OptionInfo("time_uniform", "UniPC skip type", gr.Radio, {"choices": ["time_uniform", "time_quadratic", "logSNR"]}),
'uni_pc_order': OptionInfo(3, "UniPC order", gr.Slider, {"minimum": 1, "maximum": 50, "step": 1}).info("must be < sampling steps"),
'uni_pc_lower_order_final': OptionInfo(True, "UniPC lower order final"),
}))
options_templates.update(options_section(('postprocessing', "Postprocessing"), {
'postprocessing_enable_in_main_ui': OptionInfo([], "Enable postprocessing operations in txt2img and img2img tabs", ui_components.DropdownMulti, lambda: {"choices": [x.name for x in shared_items.postprocessing_scripts()]}),
'postprocessing_operation_order': OptionInfo([], "Postprocessing operation order", ui_components.DropdownMulti, lambda: {"choices": [x.name for x in shared_items.postprocessing_scripts()]}),
'upscaling_max_images_in_cache': OptionInfo(5, "Maximum number of images in upscaling cache", gr.Slider, {"minimum": 0, "maximum": 10, "step": 1}),
}))
options_templates.update(options_section((None, "Hidden options"), {
"disabled_extensions": OptionInfo([], "Disable these extensions"),
"disable_all_extensions": OptionInfo("none", "Disable all extensions (preserves the list of disabled extensions)", gr.Radio, {"choices": ["none", "extra", "all"]}),
"restore_config_state_file": OptionInfo("", "Config state file to restore from, under 'config-states/' folder"),
"sd_checkpoint_hash": OptionInfo("", "SHA256 hash of the current checkpoint"),
}))
options_templates.update()
class Options:
data = None
data_labels = options_templates
typemap = {int: float}
def __init__(self):
self.data = {k: v.default for k, v in self.data_labels.items()}
def __setattr__(self, key, value):
if self.data is not None:
if key in self.data or key in self.data_labels:
assert not cmd_opts.freeze_settings, "changing settings is disabled"
info = opts.data_labels.get(key, None)
comp_args = info.component_args if info else None
if isinstance(comp_args, dict) and comp_args.get('visible', True) is False:
raise RuntimeError(f"not possible to set {key} because it is restricted")
if cmd_opts.hide_ui_dir_config and key in restricted_opts:
raise RuntimeError(f"not possible to set {key} because it is restricted")
self.data[key] = value
return
return super(Options, self).__setattr__(key, value)
def __getattr__(self, item):
if self.data is not None:
if item in self.data:
return self.data[item]
if item in self.data_labels:
return self.data_labels[item].default
return super(Options, self).__getattribute__(item)
def set(self, key, value):
"""sets an option and calls its onchange callback, returning True if the option changed and False otherwise"""
oldval = self.data.get(key, None)
if oldval == value:
return False
try:
setattr(self, key, value)
except RuntimeError:
return False
if self.data_labels[key].onchange is not None:
try:
self.data_labels[key].onchange()
except Exception as e:
errors.display(e, f"changing setting {key} to {value}")
setattr(self, key, oldval)
return False
return True
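set() reports whether anything actually changed and rolls the value back if the onchange callback raises, so a typical caller persists only on success. A hedged usage sketch:
if opts.set("jpeg_quality", 90):
    opts.save(config_filename)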
def get_default(self, key):
"""returns the default value for the key"""
data_label = self.data_labels.get(key)
if data_label is None:
return None
return data_label.default
def save(self, filename):
assert not cmd_opts.freeze_settings, "saving settings is disabled"
with open(filename, "w", encoding="utf8") as file:
json.dump(self.data, file, indent=4)
def same_type(self, x, y):
if x is None or y is None:
return True
type_x = self.typemap.get(type(x), type(x))
type_y = self.typemap.get(type(y), type(y))
return type_x == type_y
def load(self, filename):
with open(filename, "r", encoding="utf8") as file:
self.data = json.load(file)
# 1.1.1 quicksettings list migration
if self.data.get('quicksettings') is not None and self.data.get('quicksettings_list') is None:
self.data['quicksettings_list'] = [i.strip() for i in self.data.get('quicksettings').split(',')]
# 1.4.0 ui_reorder
if isinstance(self.data.get('ui_reorder'), str) and self.data.get('ui_reorder') and "ui_reorder_list" not in self.data:
self.data['ui_reorder_list'] = [i.strip() for i in self.data.get('ui_reorder').split(',')]
bad_settings = 0
for k, v in self.data.items():
info = self.data_labels.get(k, None)
if info is not None and not self.same_type(info.default, v):
print(f"Warning: bad setting value: {k}: {v} ({type(v).__name__}; expected {type(info.default).__name__})", file=sys.stderr)
bad_settings += 1
if bad_settings > 0:
print(f"The program is likely to not work with bad settings.\nSettings file: {filename}\nEither fix the file, or delete it and restart.", file=sys.stderr)
def onchange(self, key, func, call=True):
item = self.data_labels.get(key)
item.onchange = func
if call:
func()
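A hedged usage sketch of onchange (the callback is a placeholder; with call=True it also fires immediately on registration):
opts.onchange("sd_model_checkpoint", lambda: print("checkpoint selection changed"))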
def dumpjson(self):
d = {k: self.data.get(k, v.default) for k, v in self.data_labels.items()}
d["_comments_before"] = {k: v.comment_before for k, v in self.data_labels.items() if v.comment_before is not None}
d["_comments_after"] = {k: v.comment_after for k, v in self.data_labels.items() if v.comment_after is not None}
return json.dumps(d)
def add_option(self, key, info):
self.data_labels[key] = info
def reorder(self):
"""reorder settings so that all items related to section always go together"""
section_ids = {}
settings_items = self.data_labels.items()
for _, item in settings_items:
if item.section not in section_ids:
section_ids[item.section] = len(section_ids)
self.data_labels = dict(sorted(settings_items, key=lambda x: section_ids[x[1].section]))
def cast_value(self, key, value):
"""casts an arbitrary to the same type as this setting's value with key
Example: cast_value("eta_noise_seed_delta", "12") -> returns 12 (an int rather than str)
"""
if value is None:
return None
default_value = self.data_labels[key].default
if default_value is None:
default_value = getattr(self, key, None)
if default_value is None:
return None
expected_type = type(default_value)
if expected_type == bool and value == "False":
value = False
else:
value = expected_type(value)
return value
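Hedged examples of cast_value's behavior, given the defaults declared above:
opts.cast_value("eta_noise_seed_delta", "12")  # -> 12, cast to int, the default's type
opts.cast_value("samples_save", "False")       # -> False, the special-cased bool string
opts.cast_value("jpeg_quality", None)          # -> None passes through unchanged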
opts = Options()
if os.path.exists(config_filename):
opts.load(config_filename)
class Shared(sys.modules[__name__].__class__):
"""
this class is here to provide the sd_model field as a property, so that it can be created and loaded on demand rather than
at program startup.
"""
sd_model_val = None
@property
def sd_model(self):
import modules.sd_models
return modules.sd_models.model_data.get_sd_model()
@sd_model.setter
def sd_model(self, value):
import modules.sd_models
modules.sd_models.model_data.set_sd_model(value)
sd_model: LatentDiffusion = None # this var is here just for IDE's type checking; it cannot be accessed because the class field above will be accessed instead
sys.modules[__name__].__class__ = Shared
sd_model: sd_models_types.WebuiSdModel = None
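Reassigning a module's __class__ to a ModuleType subclass (supported since Python 3.5) is what lets plain attribute access like shared.sd_model go through a property. A self-contained sketch of the trick:
import sys
import types
class _PropertyModule(types.ModuleType):
    @property
    def answer(self):
        return 42  # computed on every attribute access
sys.modules[__name__].__class__ = _PropertyModule
# from here on, other modules importing this one see .answer evaluated lazily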
settings_components = None
"""assinged from ui.py, a mapping on setting names to gradio components repsponsible for those settings"""
tab_names = []
latent_upscale_default_mode = "Latent"
latent_upscale_modes = {
"Latent": {"mode": "bilinear", "antialias": False},
@@ -784,108 +64,24 @@ progress_print_out = sys.stdout
gradio_theme = gr.themes.Base()
total_tqdm = None
def reload_gradio_theme(theme_name=None):
global gradio_theme
if not theme_name:
theme_name = opts.gradio_theme
mem_mon = None
default_theme_args = dict(
font=["Source Sans Pro", 'ui-sans-serif', 'system-ui', 'sans-serif'],
font_mono=['IBM Plex Mono', 'ui-monospace', 'Consolas', 'monospace'],
)
options_section = options.options_section
OptionInfo = options.OptionInfo
OptionHTML = options.OptionHTML
if theme_name == "Default":
gradio_theme = gr.themes.Default(**default_theme_args)
else:
try:
gradio_theme = gr.themes.ThemeClass.from_hub(theme_name)
except Exception as e:
errors.display(e, "changing gradio theme")
gradio_theme = gr.themes.Default(**default_theme_args)
natural_sort_key = util.natural_sort_key
listfiles = util.listfiles
html_path = util.html_path
html = util.html
walk_files = util.walk_files
ldm_print = util.ldm_print
reload_gradio_theme = shared_gradio_themes.reload_gradio_theme
class TotalTQDM:
def __init__(self):
self._tqdm = None
def reset(self):
self._tqdm = tqdm.tqdm(
desc="Total progress",
total=state.job_count * state.sampling_steps,
position=1,
file=progress_print_out
)
def update(self):
if not opts.multiple_tqdm or cmd_opts.disable_console_progressbars:
return
if self._tqdm is None:
self.reset()
self._tqdm.update()
def updateTotal(self, new_total):
if not opts.multiple_tqdm or cmd_opts.disable_console_progressbars:
return
if self._tqdm is None:
self.reset()
self._tqdm.total = new_total
def clear(self):
if self._tqdm is not None:
self._tqdm.refresh()
self._tqdm.close()
self._tqdm = None
total_tqdm = TotalTQDM()
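A hedged sketch of how processing code might drive the shared bar, sizing it once per job and ticking it once per sampling step (illustrative only):
total_tqdm.updateTotal(state.job_count * state.sampling_steps)
for _ in range(state.sampling_steps):
    total_tqdm.update()
total_tqdm.clear()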
mem_mon = modules.memmon.MemUsageMonitor("MemMon", device, opts)
mem_mon.start()
def natural_sort_key(s, regex=re.compile('([0-9]+)')):
return [int(text) if text.isdigit() else text.lower() for text in regex.split(s)]
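A quick example of the ordering this key produces, with digit runs compared numerically:
sorted(["img10.png", "img2.png", "img1.png"], key=natural_sort_key)
# -> ['img1.png', 'img2.png', 'img10.png']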
def listfiles(dirname):
filenames = [os.path.join(dirname, x) for x in sorted(os.listdir(dirname), key=natural_sort_key) if not x.startswith(".")]
return [file for file in filenames if os.path.isfile(file)]
def html_path(filename):
return os.path.join(script_path, "html", filename)
def html(filename):
path = html_path(filename)
if os.path.exists(path):
with open(path, encoding="utf8") as file:
return file.read()
return ""
def walk_files(path, allowed_extensions=None):
if not os.path.exists(path):
return
if allowed_extensions is not None:
allowed_extensions = set(allowed_extensions)
items = list(os.walk(path, followlinks=True))
items = sorted(items, key=lambda x: natural_sort_key(x[0]))
for root, _, files in items:
for filename in sorted(files, key=natural_sort_key):
if allowed_extensions is not None:
_, ext = os.path.splitext(filename)
if ext not in allowed_extensions:
continue
if not opts.list_hidden_files and ("/." in root or "\\." in root):
continue
yield os.path.join(root, filename)
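A hedged usage sketch of walk_files with an extension filter (the directory is illustrative):
for filepath in walk_files("models/VAE-taesd", allowed_extensions={".pth"}):
    print(filepath)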
list_checkpoint_tiles = shared_items.list_checkpoint_tiles
refresh_checkpoints = shared_items.refresh_checkpoints
list_samplers = shared_items.list_samplers
reload_hypernetworks = shared_items.reload_hypernetworks

Some files were not shown because too many files have changed in this diff.