Compare commits
159 Commits
v1.10.0-RC...dev
| SHA1 | Author | Date | |
|---|---|---|---|
| 1937682a20 | |||
| fd0f475aaf | |||
| 76759a1853 | |||
| fd68e0c384 | |||
| 6685e532df | |||
| d4f02b8916 | |||
| e0be7a8427 | |||
| 2174ce5afe | |||
| 82bf9a3730 | |||
| ebbc56b4ec | |||
| 6d664438ba | |||
| 3064b21e23 | |||
| 374bb6cc38 | |||
| e7edad6fe9 | |||
| d8688def65 | |||
| 435b1df2db | |||
| 7fd7fc67f7 | |||
| 9d1accfea0 | |||
| c59a2badd2 | |||
| 9f670bc7e8 | |||
| afbd3da2fa | |||
| 32595360f2 | |||
| 8c7bc08f60 | |||
| 57e15ec9b5 | |||
| 021154d8b1 | |||
| b82ba9b0be | |||
| 45493949cd | |||
| 8c6c973614 | |||
| a037918748 | |||
| 4fa673a68a | |||
| 813c3912fc | |||
| 078d04ef23 | |||
| 1a773bf2c8 | |||
| f113474a6e | |||
| 6577e063d1 | |||
| 7953c570d9 | |||
| f25c3fc9cb | |||
| fc0952abb9 | |||
| b414c62ce4 | |||
| 04903af798 | |||
| e8c3b1f2a0 | |||
| 8bf30e3c42 | |||
| fbc51fa210 | |||
| 7025a2c4a5 | |||
| 0120768f63 | |||
| b425b97ad6 | |||
| 539ea3982d | |||
| 65bd61e87c | |||
| 023454b49e | |||
| cd869bb7a3 | |||
| 957888a100 | |||
| d2c9efb1f3 | |||
| 7799859f9f | |||
| ca3bedbd02 | |||
| 1b16c62608 | |||
| 91de919451 | |||
| aa52408aab | |||
| e6f36d9cdc | |||
| 28323cf8ee | |||
| 533c7b7b7f | |||
| ac28cad998 | |||
| 5206b938ef | |||
| 59481435f4 | |||
| f31faf6f14 | |||
| 14c6d6cb3a | |||
| 4ec10bcdc1 | |||
| 0bf36cf2ad | |||
| 820fe8d2b5 | |||
| deb3803a3a | |||
| 95686227bd | |||
| df74c3c638 | |||
| d88a3c15f7 | |||
| 38c8043229 | |||
| ee0ad5cceb | |||
| 984b952eb3 | |||
| 5865da28d1 | |||
| bb1f39196e | |||
| 6a59766313 | |||
| 65423d2b33 | |||
| c2bc187ce7 | |||
| d0b27dc906 | |||
| c2ce1d3b9c | |||
| bb4cbaf0cf | |||
| c462e5a575 | |||
| 8b19b75270 | |||
| 907bfb5ef0 | |||
| 1ae073c052 | |||
| c9a06d1093 | |||
| d8ad364617 | |||
| f57ec2b53b | |||
| 9677b09b7c | |||
| cbaaf0af0e | |||
| 48239090f1 | |||
| 82a973c043 | |||
| 1d7e9eca09 | |||
| 850e14923e | |||
| 8e0881d9ab | |||
| 834297b13d | |||
| c19d044364 | |||
| 8b3d98c5a5 | |||
| 5bbbda473f | |||
| 9f5a98d576 | |||
| 986c31dcfe | |||
| 5096c163c1 | |||
| 7b99e14ab1 | |||
| 7c8a4ccecb | |||
| 5d26c6ae89 | |||
| 5a10bb9aa6 | |||
| fa0ba939a7 | |||
| fc7b25ac67 | |||
| e09104a126 | |||
| 141d4b71b1 | |||
| ea903819cb | |||
| 24a23e1225 | |||
| 8749540602 | |||
| 9de7084884 | |||
| 94275b115c | |||
| e285af6e48 | |||
| f6f055a93d | |||
| 3a5a66775c | |||
| 7e1bd3e3c3 | |||
| 964fc13a99 | |||
| a5f66b5003 | |||
| 2abc628899 | |||
| 2b50233f3f | |||
| f5866199c4 | |||
| 7e5cdaab4b | |||
| b2453d280a | |||
| b4d62a05af | |||
| 589dda3cf2 | |||
| 3d2dbefcde | |||
| b1695c1b68 | |||
| d57ff884ed | |||
| 26cccd8faa | |||
| 9cc7142dd7 | |||
| 5a5fe7494a | |||
| 6a7042fe2f | |||
| 72cfa2829d | |||
| 4debd4d3ef | |||
| 3f6dcda3e5 | |||
| 27d96fa608 | |||
| dd4f798b97 | |||
| 27947a79d6 | |||
| 11f827c58b | |||
| 48dd4d9eae | |||
| 93c00b2af7 | |||
| 7d7f7f4b49 | |||
| 1b0823db94 | |||
| 6ca7a453d4 | |||
| bad47dcfeb | |||
| c3d8b78b47 | |||
| 21e72d1a5e | |||
| 7b2917255a | |||
| 11cfe0dd05 | |||
| 780c70f6ea | |||
| b5481c6195 | |||
| 1da4907927 | |||
| ec580374e5 | |||
| b82caf1322 | |||
```diff
@@ -88,6 +88,7 @@ module.exports = {
         // imageviewer.js
         modalPrevImage: "readonly",
         modalNextImage: "readonly",
+        updateModalImageIfVisible: "readonly",
         // localStorage.js
         localSet: "readonly",
         localGet: "readonly",
```
```diff
@@ -22,7 +22,7 @@ jobs:
       - name: Install Ruff
         run: pip install ruff==0.3.3
       - name: Run Ruff
-        run: ruff .
+        run: ruff check .
   lint-js:
     name: eslint
     runs-on: ubuntu-latest
```
```diff
@@ -2,6 +2,7 @@ __pycache__
 *.ckpt
 *.safetensors
 *.pth
+.DS_Store
 /ESRGAN/*
 /SwinIR/*
 /repositories
@@ -40,3 +41,4 @@ notification.mp3
 /test/test_outputs
 /cache
 trace.json
+/sysinfo-????-??-??-??-??.json
```
+22 -2
```diff
@@ -1,8 +1,14 @@
+## 1.10.1
+
+### Bug Fixes:
+* fix image upscale on cpu ([#16275](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16275))
+
+
 ## 1.10.0

 ### Features:
 * A lot of performance improvements (see below in Performance section)
-* Stable Diffusion 3 support ([#16030](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16030))
+* Stable Diffusion 3 support ([#16030](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16030), [#16164](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16164), [#16212](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16212))
     * Recommended Euler sampler; DDIM and other timestamp samplers currently not supported
     * T5 text model is disabled by default, enable it in settings
 * New schedulers:
@@ -11,6 +17,7 @@
     * Normal ([#16149](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16149))
     * DDIM ([#16149](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16149))
     * Simple ([#16142](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16142))
+    * Beta ([#16235](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16235))
 * New sampler: DDIM CFG++ ([#16035](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16035))

 ### Minor:
@@ -25,6 +32,8 @@
 * Add option to enable clip skip for clip L on SDXL ([#15992](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15992))
 * Option to prevent screen sleep during generation ([#16001](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16001))
 * ToggleLivePriview button in image viewer ([#16065](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16065))
+* Remove ui flashing on reloading and fast scrollong ([#16153](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16153))
+* option to disable save button log.csv ([#16242](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16242))

 ### Extensions and API:
 * Add process_before_every_sampling hook ([#15984](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15984))
@@ -73,6 +82,10 @@
 * Fix SD2 loading ([#16078](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16078), [#16079](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16079))
 * fix infotext Lora hashes for hires fix different lora ([#16062](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16062))
 * Fix sampler scheduler autocorrection warning ([#16054](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16054))
+* fix ui flashing on reloading and fast scrollong ([#16153](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16153))
+* fix upscale logic ([#16239](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16239))
+* [bug] do not break progressbar on non-job actions (add wrap_gradio_call_no_job) ([#16202](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16202))
+* fix OSError: cannot write mode P as JPEG ([#16194](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16194))

 ### Other:
 * fix changelog #15883 -> #15882 ([#15907](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15907))
@@ -89,10 +102,17 @@
 * Bump spandrel to 0.3.4 ([#16144](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16144))
 * Defunct --max-batch-count ([#16119](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16119))
 * docs: update bug_report.yml ([#16102](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16102))
-* Maintaining Project Compatibility for Python 3.9 Users Without Upgrade Requirements. ([#16088](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16088))
+* Maintaining Project Compatibility for Python 3.9 Users Without Upgrade Requirements. ([#16088](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16088), [#16169](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16169), [#16192](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16192))
 * Update torch for ARM Macs to 2.3.1 ([#16059](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16059))
 * remove deprecated setting dont_fix_second_order_samplers_schedule ([#16061](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16061))
 * chore: fix typos ([#16060](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16060))
+* shlex.join launch args in console log ([#16170](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16170))
+* activate venv .bat ([#16231](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16231))
+* add ids to the resize tabs in img2img ([#16218](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16218))
+* update installation guide linux ([#16178](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16178))
+* Robust sysinfo ([#16173](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16173))
+* do not send image size on paste inpaint ([#16180](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16180))
+* Fix noisy DS_Store files for MacOS ([#16166](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16166))


 ## 1.9.4
```
+1 -12
```diff
@@ -1,12 +1 @@
-* @AUTOMATIC1111
-
-# if you were managing a localization and were removed from this file, this is because
-# the intended way to do localizations now is via extensions. See:
-# https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Developing-extensions
-# Make a repo with your localization and since you are still listed as a collaborator
-# you can add it to the wiki page yourself. This change is because some people complained
-# the git commit log is cluttered with things unrelated to almost everyone and
-# because I believe this is the best overall for the project to handle localizations almost
-# entirely without my oversight.
-
-
+* @AUTOMATIC1111 @w-e-w @catboxanon
```
````diff
@@ -128,10 +128,33 @@ sudo zypper install wget git python3 libtcmalloc4 libglvnd
 # Arch-based:
 sudo pacman -S wget git python3
 ```
+If your system is very new, you need to install python3.11 or python3.10:
+```bash
+# Ubuntu 24.04
+sudo add-apt-repository ppa:deadsnakes/ppa
+sudo apt update
+sudo apt install python3.11 python3.11-venv
+
+# Manjaro/Arch
+sudo pacman -S yay
+yay -S python311 # do not confuse with python3.11 package
+
+# Only for 3.11
+# Then set up env variable in launch script
+export python_cmd="python3.11"
+# or in webui-user.sh
+python_cmd="python3.11"
+```
 2. Navigate to the directory you would like the webui to be installed and execute the following command:
 ```bash
 wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
 chmod +x webui.sh
 ```
 Or just clone the repo wherever you want:
 ```bash
 git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
 ```
+
 3. Run `webui.sh`.
 4. Check `webui-user.sh` for options.
 ### Installation on Apple Silicon
````
```diff
@@ -0,0 +1,98 @@
+model:
+  target: sgm.models.diffusion.DiffusionEngine
+  params:
+    scale_factor: 0.13025
+    disable_first_stage_autocast: True
+
+    denoiser_config:
+      target: sgm.modules.diffusionmodules.denoiser.DiscreteDenoiser
+      params:
+        num_idx: 1000
+
+        weighting_config:
+          target: sgm.modules.diffusionmodules.denoiser_weighting.VWeighting
+        scaling_config:
+          target: sgm.modules.diffusionmodules.denoiser_scaling.VScaling
+        discretization_config:
+          target: sgm.modules.diffusionmodules.discretizer.LegacyDDPMDiscretization
+
+    network_config:
+      target: sgm.modules.diffusionmodules.openaimodel.UNetModel
+      params:
+        adm_in_channels: 2816
+        num_classes: sequential
+        use_checkpoint: False
+        in_channels: 4
+        out_channels: 4
+        model_channels: 320
+        attention_resolutions: [4, 2]
+        num_res_blocks: 2
+        channel_mult: [1, 2, 4]
+        num_head_channels: 64
+        use_spatial_transformer: True
+        use_linear_in_transformer: True
+        transformer_depth: [1, 2, 10] # note: the first is unused (due to attn_res starting at 2) 32, 16, 8 --> 64, 32, 16
+        context_dim: 2048
+        spatial_transformer_attn_type: softmax-xformers
+        legacy: False
+
+    conditioner_config:
+      target: sgm.modules.GeneralConditioner
+      params:
+        emb_models:
+          # crossattn cond
+          - is_trainable: False
+            input_key: txt
+            target: sgm.modules.encoders.modules.FrozenCLIPEmbedder
+            params:
+              layer: hidden
+              layer_idx: 11
+          # crossattn and vector cond
+          - is_trainable: False
+            input_key: txt
+            target: sgm.modules.encoders.modules.FrozenOpenCLIPEmbedder2
+            params:
+              arch: ViT-bigG-14
+              version: laion2b_s39b_b160k
+              freeze: True
+              layer: penultimate
+              always_return_pooled: True
+              legacy: False
+          # vector cond
+          - is_trainable: False
+            input_key: original_size_as_tuple
+            target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
+            params:
+              outdim: 256 # multiplied by two
+          # vector cond
+          - is_trainable: False
+            input_key: crop_coords_top_left
+            target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
+            params:
+              outdim: 256 # multiplied by two
+          # vector cond
+          - is_trainable: False
+            input_key: target_size_as_tuple
+            target: sgm.modules.encoders.modules.ConcatTimestepEmbedderND
+            params:
+              outdim: 256 # multiplied by two
+
+    first_stage_config:
+      target: sgm.models.autoencoder.AutoencoderKLInferenceWrapper
+      params:
+        embed_dim: 4
+        monitor: val/rec_loss
+        ddconfig:
+          attn_type: vanilla-xformers
+          double_z: true
+          z_channels: 4
+          resolution: 256
+          in_channels: 3
+          out_ch: 3
+          ch: 128
+          ch_mult: [1, 2, 4, 4]
+          num_res_blocks: 2
+          attn_resolutions: []
+          dropout: 0.0
+        lossconfig:
+          target: torch.nn.Identity
```
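The config above follows the `target`/`params` convention used by sgm-style YAML files: each node names a class by dotted path and passes `params` as constructor keyword arguments. A minimal sketch of how such a node could be instantiated; the helper name `instantiate_from_config` is an illustrative assumption here, not code from this diff, and the example target is a stdlib class rather than an `sgm` module:

```python
import importlib


def instantiate_from_config(node):
    """Resolve node["target"] to a class and call it with node.get("params", {})
    as keyword arguments -- a sketch of the target/params convention."""
    module_path, _, cls_name = node["target"].rpartition(".")
    cls = getattr(importlib.import_module(module_path), cls_name)
    return cls(**node.get("params", {}))


# Example with a stdlib target standing in for an sgm class:
obj = instantiate_from_config({
    "target": "fractions.Fraction",
    "params": {"numerator": 3, "denominator": 4},
})
# obj == Fraction(3, 4)
```

The same pattern recurses: a real loader would call the helper again for nested nodes such as `denoiser_config` or `first_stage_config`.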
```diff
@@ -7,6 +7,7 @@ import torch.nn as nn
 import torch.nn.functional as F

 from modules import sd_models, cache, errors, hashes, shared
+import modules.models.sd3.mmdit

 NetworkWeights = namedtuple('NetworkWeights', ['network_key', 'sd_key', 'w', 'sd_module'])

@@ -114,7 +115,10 @@ class NetworkModule:
         self.sd_key = weights.sd_key
         self.sd_module = weights.sd_module

-        if hasattr(self.sd_module, 'weight'):
+        if isinstance(self.sd_module, modules.models.sd3.mmdit.QkvLinear):
+            s = self.sd_module.weight.shape
+            self.shape = (s[0] // 3, s[1])
+        elif hasattr(self.sd_module, 'weight'):
             self.shape = self.sd_module.weight.shape
         elif isinstance(self.sd_module, nn.MultiheadAttention):
             # For now, only self-attn use Pytorch's MHA
```
```diff
@@ -1,6 +1,7 @@
 import torch

 import lyco_helpers
+import modules.models.sd3.mmdit
 import network
 from modules import devices

@@ -10,6 +11,13 @@ class ModuleTypeLora(network.ModuleType):
         if all(x in weights.w for x in ["lora_up.weight", "lora_down.weight"]):
             return NetworkModuleLora(net, weights)

+        if all(x in weights.w for x in ["lora_A.weight", "lora_B.weight"]):
+            w = weights.w.copy()
+            weights.w.clear()
+            weights.w.update({"lora_up.weight": w["lora_B.weight"], "lora_down.weight": w["lora_A.weight"]})
+
+            return NetworkModuleLora(net, weights)
+
         return None

@@ -29,7 +37,7 @@ class NetworkModuleLora(network.NetworkModule):
         if weight is None and none_ok:
             return None

-        is_linear = type(self.sd_module) in [torch.nn.Linear, torch.nn.modules.linear.NonDynamicallyQuantizableLinear, torch.nn.MultiheadAttention]
+        is_linear = type(self.sd_module) in [torch.nn.Linear, torch.nn.modules.linear.NonDynamicallyQuantizableLinear, torch.nn.MultiheadAttention, modules.models.sd3.mmdit.QkvLinear]
         is_conv = type(self.sd_module) in [torch.nn.Conv2d]

         if is_linear:
```
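The new `lora_A`/`lora_B` branch above normalizes PEFT-style key names into the `lora_up`/`lora_down` names the loader already understands (`lora_B` plays the role of the up projection, `lora_A` the down projection). A standalone sketch of that remapping, using a plain dict in place of the real weights object; the function name is illustrative:

```python
def normalize_lora_keys(w):
    """Rewrite PEFT-style lora_A/lora_B keys to lora_down/lora_up in place,
    mirroring the weights.w.clear()/update() steps in the diff above."""
    if all(k in w for k in ("lora_A.weight", "lora_B.weight")):
        old = w.copy()
        w.clear()
        w.update({"lora_up.weight": old["lora_B.weight"], "lora_down.weight": old["lora_A.weight"]})
    return w


weights = {"lora_A.weight": "A-tensor", "lora_B.weight": "B-tensor"}
normalize_lora_keys(weights)
# weights is now {"lora_up.weight": "B-tensor", "lora_down.weight": "A-tensor"}
```

Remapping in place (rather than returning a new dict) matters because the caller then hands the same `weights` object on to `NetworkModuleLora`.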
```diff
@@ -1,3 +1,4 @@
 from __future__ import annotations
+import gradio as gr
 import logging
 import os
@@ -19,6 +20,7 @@ from typing import Union

 from modules import shared, devices, sd_models, errors, scripts, sd_hijack
 import modules.textual_inversion.textual_inversion as textual_inversion
+import modules.models.sd3.mmdit

 from lora_logger import logger

@@ -165,11 +167,25 @@ def load_network(name, network_on_disk):

     keys_failed_to_match = {}
     is_sd2 = 'model_transformer_resblocks' in shared.sd_model.network_layer_mapping
+    if hasattr(shared.sd_model, 'diffusers_weight_map'):
+        diffusers_weight_map = shared.sd_model.diffusers_weight_map
+    elif hasattr(shared.sd_model, 'diffusers_weight_mapping'):
+        diffusers_weight_map = {}
+        for k, v in shared.sd_model.diffusers_weight_mapping():
+            diffusers_weight_map[k] = v
+        shared.sd_model.diffusers_weight_map = diffusers_weight_map
+    else:
+        diffusers_weight_map = None

     matched_networks = {}
     bundle_embeddings = {}

     for key_network, weight in sd.items():

-        key_network_without_network_parts, _, network_part = key_network.partition(".")
+        if diffusers_weight_map:
+            key_network_without_network_parts, network_name, network_weight = key_network.rsplit(".", 2)
+            network_part = network_name + '.' + network_weight
+        else:
+            key_network_without_network_parts, _, network_part = key_network.partition(".")

         if key_network_without_network_parts == "bundle_emb":
@@ -182,7 +198,11 @@ def load_network(name, network_on_disk):
             emb_dict[vec_name] = weight
         bundle_embeddings[emb_name] = emb_dict

-        key = convert_diffusers_name_to_compvis(key_network_without_network_parts, is_sd2)
+        if diffusers_weight_map:
+            key = diffusers_weight_map.get(key_network_without_network_parts, key_network_without_network_parts)
+        else:
+            key = convert_diffusers_name_to_compvis(key_network_without_network_parts, is_sd2)
         sd_module = shared.sd_model.network_layer_mapping.get(key, None)

         if sd_module is None:
@@ -346,6 +366,28 @@ def load_networks(names, te_multipliers=None, unet_multipliers=None, dyn_dims=No
     purge_networks_from_memory()


+def allowed_layer_without_weight(layer):
+    if isinstance(layer, torch.nn.LayerNorm) and not layer.elementwise_affine:
+        return True
+
+    return False
+
+
+def store_weights_backup(weight):
+    if weight is None:
+        return None
+
+    return weight.to(devices.cpu, copy=True)
+
+
+def restore_weights_backup(obj, field, weight):
+    if weight is None:
+        setattr(obj, field, None)
+        return
+
+    getattr(obj, field).copy_(weight)
+
+
 def network_restore_weights_from_backup(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn.GroupNorm, torch.nn.LayerNorm, torch.nn.MultiheadAttention]):
     weights_backup = getattr(self, "network_weights_backup", None)
     bias_backup = getattr(self, "network_bias_backup", None)
@@ -355,21 +397,15 @@ def network_restore_weights_from_backup(self: Union[torch.nn.Conv2d, torch.nn.Li

     if weights_backup is not None:
         if isinstance(self, torch.nn.MultiheadAttention):
-            self.in_proj_weight.copy_(weights_backup[0])
-            self.out_proj.weight.copy_(weights_backup[1])
+            restore_weights_backup(self, 'in_proj_weight', weights_backup[0])
+            restore_weights_backup(self.out_proj, 'weight', weights_backup[1])
         else:
-            self.weight.copy_(weights_backup)
+            restore_weights_backup(self, 'weight', weights_backup)

-    if bias_backup is not None:
-        if isinstance(self, torch.nn.MultiheadAttention):
-            self.out_proj.bias.copy_(bias_backup)
-        else:
-            self.bias.copy_(bias_backup)
+    if isinstance(self, torch.nn.MultiheadAttention):
+        restore_weights_backup(self.out_proj, 'bias', bias_backup)
     else:
-        if isinstance(self, torch.nn.MultiheadAttention):
-            self.out_proj.bias = None
-        else:
-            self.bias = None
+        restore_weights_backup(self, 'bias', bias_backup)


 def network_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn.GroupNorm, torch.nn.LayerNorm, torch.nn.MultiheadAttention]):
@@ -388,22 +424,22 @@ def network_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn

     weights_backup = getattr(self, "network_weights_backup", None)
     if weights_backup is None and wanted_names != ():
-        if current_names != ():
-            raise RuntimeError("no backup weights found and current weights are not unchanged")
+        if current_names != () and not allowed_layer_without_weight(self):
+            raise RuntimeError(f"{network_layer_name} - no backup weights found and current weights are not unchanged")

         if isinstance(self, torch.nn.MultiheadAttention):
-            weights_backup = (self.in_proj_weight.to(devices.cpu, copy=True), self.out_proj.weight.to(devices.cpu, copy=True))
+            weights_backup = (store_weights_backup(self.in_proj_weight), store_weights_backup(self.out_proj.weight))
         else:
-            weights_backup = self.weight.to(devices.cpu, copy=True)
+            weights_backup = store_weights_backup(self.weight)

         self.network_weights_backup = weights_backup

     bias_backup = getattr(self, "network_bias_backup", None)
     if bias_backup is None and wanted_names != ():
         if isinstance(self, torch.nn.MultiheadAttention) and self.out_proj.bias is not None:
-            bias_backup = self.out_proj.bias.to(devices.cpu, copy=True)
+            bias_backup = store_weights_backup(self.out_proj.bias)
         elif getattr(self, 'bias', None) is not None:
-            bias_backup = self.bias.to(devices.cpu, copy=True)
+            bias_backup = store_weights_backup(self.bias)
         else:
             bias_backup = None
@@ -411,6 +447,7 @@ def network_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn
         # Only report if bias is not None and current bias are not unchanged.
         if bias_backup is not None and current_names != ():
             raise RuntimeError("no backup bias found and current bias are not unchanged")

     self.network_bias_backup = bias_backup

     if current_names != wanted_names:
@@ -418,7 +455,7 @@ def network_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn

     for net in loaded_networks:
         module = net.modules.get(network_layer_name, None)
-        if module is not None and hasattr(self, 'weight'):
+        if module is not None and hasattr(self, 'weight') and not isinstance(module, modules.models.sd3.mmdit.QkvLinear):
             try:
                 with torch.no_grad():
                     if getattr(self, 'fp16_weight', None) is None:
@@ -478,6 +515,24 @@ def network_apply_weights(self: Union[torch.nn.Conv2d, torch.nn.Linear, torch.nn

                 continue

+        if isinstance(self, modules.models.sd3.mmdit.QkvLinear) and module_q and module_k and module_v:
+            try:
+                with torch.no_grad():
+                    # Send "real" orig_weight into MHA's lora module
+                    qw, kw, vw = self.weight.chunk(3, 0)
+                    updown_q, _ = module_q.calc_updown(qw)
+                    updown_k, _ = module_k.calc_updown(kw)
+                    updown_v, _ = module_v.calc_updown(vw)
+                    del qw, kw, vw
+                    updown_qkv = torch.vstack([updown_q, updown_k, updown_v])
+                    self.weight += updown_qkv
+
+            except RuntimeError as e:
+                logging.debug(f"Network {net.name} layer {network_layer_name}: {e}")
+                extra_network_lora.errors[net.name] = extra_network_lora.errors.get(net.name, 0) + 1
+
+            continue
+
         if module is None:
             continue
```
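The `QkvLinear` branch splits the fused QKV weight into three equal row blocks, computes a LoRA delta per projection, and stacks the deltas back into one matrix before adding them to the weight. The row arithmetic can be sketched without torch; `chunk_rows` stands in for `tensor.chunk(3, 0)` and `vstack` for `torch.vstack`, both simplified assumptions operating on lists of rows:

```python
def chunk_rows(matrix, n):
    """Split a matrix (a list of rows) into n equal row blocks,
    like tensor.chunk(n, 0) along dim 0."""
    step = len(matrix) // n
    return [matrix[i * step:(i + 1) * step] for i in range(n)]


def vstack(blocks):
    """Concatenate row blocks back into one matrix, like torch.vstack."""
    return [row for block in blocks for row in block]


# A fused QKV weight with 6 rows splits into three 2-row projections...
qkv = [[1], [2], [3], [4], [5], [6]]
qw, kw, vw = chunk_rows(qkv, 3)
assert qw == [[1], [2]] and vw == [[5], [6]]
# ...and re-fusing per-projection results preserves row order,
# which is why adding vstack([updown_q, updown_k, updown_v]) lines up
# with the original fused weight.
assert vstack([qw, kw, vw]) == qkv
```

This also explains the `(s[0] // 3, s[1])` shape reported for `QkvLinear` modules earlier in the diff: each projection sees one third of the fused rows.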
```diff
@@ -816,7 +816,7 @@ onUiLoaded(async() => {
                 // Increase or decrease brush size based on scroll direction
                 adjustBrushSize(elemId, e.deltaY);
             }
-        });
+        }, {passive: false});

     // Handle the move event for pan functionality. Updates the panX and panY variables and applies the new transform to the target element.
     function handleMoveKeyDown(e) {
```
```diff
@@ -1,7 +1,7 @@
 """
 Hypertile module for splitting attention layers in SD-1.5 U-Net and SD-1.5 VAE
 Warn: The patch works well only if the input image has a width and height that are multiples of 128
-Original author: @tfernd Github: https://github.com/tfernd/HyperTile
+Original author: @tfernd GitHub: https://github.com/tfernd/HyperTile
 """

 from __future__ import annotations
```
+6 -6
```diff
@@ -34,14 +34,14 @@ class ScriptPostprocessingAutosizedCrop(scripts_postprocessing.ScriptPostprocess
         with ui_components.InputAccordion(False, label="Auto-sized crop") as enable:
             gr.Markdown('Each image is center-cropped with an automatically chosen width and height.')
             with gr.Row():
-                mindim = gr.Slider(minimum=64, maximum=2048, step=8, label="Dimension lower bound", value=384, elem_id="postprocess_multicrop_mindim")
-                maxdim = gr.Slider(minimum=64, maximum=2048, step=8, label="Dimension upper bound", value=768, elem_id="postprocess_multicrop_maxdim")
+                mindim = gr.Slider(minimum=64, maximum=2048, step=8, label="Dimension lower bound", value=384, elem_id=self.elem_id_suffix("postprocess_multicrop_mindim"))
+                maxdim = gr.Slider(minimum=64, maximum=2048, step=8, label="Dimension upper bound", value=768, elem_id=self.elem_id_suffix("postprocess_multicrop_maxdim"))
             with gr.Row():
-                minarea = gr.Slider(minimum=64 * 64, maximum=2048 * 2048, step=1, label="Area lower bound", value=64 * 64, elem_id="postprocess_multicrop_minarea")
-                maxarea = gr.Slider(minimum=64 * 64, maximum=2048 * 2048, step=1, label="Area upper bound", value=640 * 640, elem_id="postprocess_multicrop_maxarea")
+                minarea = gr.Slider(minimum=64 * 64, maximum=2048 * 2048, step=1, label="Area lower bound", value=64 * 64, elem_id=self.elem_id_suffix("postprocess_multicrop_minarea"))
+                maxarea = gr.Slider(minimum=64 * 64, maximum=2048 * 2048, step=1, label="Area upper bound", value=640 * 640, elem_id=self.elem_id_suffix("postprocess_multicrop_maxarea"))
             with gr.Row():
-                objective = gr.Radio(["Maximize area", "Minimize error"], value="Maximize area", label="Resizing objective", elem_id="postprocess_multicrop_objective")
-                threshold = gr.Slider(minimum=0, maximum=1, step=0.01, label="Error threshold", value=0.1, elem_id="postprocess_multicrop_threshold")
+                objective = gr.Radio(["Maximize area", "Minimize error"], value="Maximize area", label="Resizing objective", elem_id=self.elem_id_suffix("postprocess_multicrop_objective"))
+                threshold = gr.Slider(minimum=0, maximum=1, step=0.01, label="Error threshold", value=0.1, elem_id=self.elem_id_suffix("postprocess_multicrop_threshold"))

         return {
             "enable": enable,
```
```diff
@@ -11,10 +11,10 @@ class ScriptPostprocessingFocalCrop(scripts_postprocessing.ScriptPostprocessing)

     def ui(self):
         with ui_components.InputAccordion(False, label="Auto focal point crop") as enable:
-            face_weight = gr.Slider(label='Focal point face weight', value=0.9, minimum=0.0, maximum=1.0, step=0.05, elem_id="postprocess_focal_crop_face_weight")
-            entropy_weight = gr.Slider(label='Focal point entropy weight', value=0.15, minimum=0.0, maximum=1.0, step=0.05, elem_id="postprocess_focal_crop_entropy_weight")
-            edges_weight = gr.Slider(label='Focal point edges weight', value=0.5, minimum=0.0, maximum=1.0, step=0.05, elem_id="postprocess_focal_crop_edges_weight")
-            debug = gr.Checkbox(label='Create debug image', elem_id="train_process_focal_crop_debug")
+            face_weight = gr.Slider(label='Focal point face weight', value=0.9, minimum=0.0, maximum=1.0, step=0.05, elem_id=self.elem_id_suffix("postprocess_focal_crop_face_weight"))
+            entropy_weight = gr.Slider(label='Focal point entropy weight', value=0.15, minimum=0.0, maximum=1.0, step=0.05, elem_id=self.elem_id_suffix("postprocess_focal_crop_entropy_weight"))
+            edges_weight = gr.Slider(label='Focal point edges weight', value=0.5, minimum=0.0, maximum=1.0, step=0.05, elem_id=self.elem_id_suffix("postprocess_focal_crop_edges_weight"))
+            debug = gr.Checkbox(label='Create debug image', elem_id=self.elem_id_suffix("train_process_focal_crop_debug"))

         return {
             "enable": enable,
```
+2 -2
```diff
@@ -35,8 +35,8 @@ class ScriptPostprocessingSplitOversized(scripts_postprocessing.ScriptPostproces
     def ui(self):
         with ui_components.InputAccordion(False, label="Split oversized images") as enable:
             with gr.Row():
-                split_threshold = gr.Slider(label='Threshold', value=0.5, minimum=0.0, maximum=1.0, step=0.05, elem_id="postprocess_split_threshold")
-                overlap_ratio = gr.Slider(label='Overlap ratio', value=0.2, minimum=0.0, maximum=0.9, step=0.05, elem_id="postprocess_overlap_ratio")
+                split_threshold = gr.Slider(label='Threshold', value=0.5, minimum=0.0, maximum=1.0, step=0.05, elem_id=self.elem_id_suffix("postprocess_split_threshold"))
+                overlap_ratio = gr.Slider(label='Overlap ratio', value=0.2, minimum=0.0, maximum=0.9, step=0.05, elem_id=self.elem_id_suffix("postprocess_overlap_ratio"))

         return {
             "enable": enable,
```
@@ -1,36 +1,69 @@
// Stable Diffusion WebUI - Bracket checker
// By Hingashi no Florin/Bwin4L & @akx
// Stable Diffusion WebUI - Bracket Checker
// By @Bwin4L, @akx, @w-e-w, @Haoming02
// Counts open and closed brackets (round, square, curly) in the prompt and negative prompt text boxes in the txt2img and img2img tabs.
// If there's a mismatch, the keyword counter turns red and if you hover on it, a tooltip tells you what's wrong.
// If there's a mismatch, the keyword counter turns red, and if you hover on it, a tooltip tells you what's wrong.

function checkBrackets(textArea, counterElt) {
    var counts = {};
    (textArea.value.match(/[(){}[\]]/g) || []).forEach(bracket => {
        counts[bracket] = (counts[bracket] || 0) + 1;
    });
    var errors = [];
function checkBrackets(textArea, counterElem) {
    const pairs = [
        ['(', ')', 'round brackets'],
        ['[', ']', 'square brackets'],
        ['{', '}', 'curly brackets']
    ];

    function checkPair(open, close, kind) {
        if (counts[open] !== counts[close]) {
            errors.push(
                `${open}...${close} - Detected ${counts[open] || 0} opening and ${counts[close] || 0} closing ${kind}.`
            );
    const counts = {};
    const errors = new Set();
    let i = 0;

    while (i < textArea.value.length) {
        let char = textArea.value[i];
        let escaped = false;
        while (char === '\\' && i + 1 < textArea.value.length) {
            escaped = !escaped;
            i++;
            char = textArea.value[i];
        }

        if (escaped) {
            i++;
            continue;
        }

        for (const [open, close, label] of pairs) {
            if (char === open) {
                counts[label] = (counts[label] || 0) + 1;
            } else if (char === close) {
                counts[label] = (counts[label] || 0) - 1;
                if (counts[label] < 0) {
                    errors.add(`Incorrect order of ${label}.`);
                }
            }
        }

        checkPair('(', ')', 'round brackets');
        checkPair('[', ']', 'square brackets');
        checkPair('{', '}', 'curly brackets');
        counterElt.title = errors.join('\n');
        counterElt.classList.toggle('error', errors.length !== 0);
        i++;
    }

    for (const [open, close, label] of pairs) {
        if (counts[label] == undefined) {
            continue;
        }

        if (counts[label] > 0) {
            errors.add(`${open} ... ${close} - Detected ${counts[label]} more opening than closing ${label}.`);
        } else if (counts[label] < 0) {
            errors.add(`${open} ... ${close} - Detected ${-counts[label]} more closing than opening ${label}.`);
        }
    }

    counterElem.title = [...errors].join('\n');
    counterElem.classList.toggle('error', errors.size !== 0);
}

function setupBracketChecking(id_prompt, id_counter) {
    var textarea = gradioApp().querySelector("#" + id_prompt + " > label > textarea");
    var counter = gradioApp().getElementById(id_counter);
    const textarea = gradioApp().querySelector(`#${id_prompt} > label > textarea`);
    const counter = gradioApp().getElementById(id_counter);

    if (textarea && counter) {
        textarea.addEventListener("input", () => checkBrackets(textarea, counter));
        onEdit(`${id_prompt}_BracketChecking`, textarea, 400, () => checkBrackets(textarea, counter));
    }
}
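The rewritten checker above replaces pure bracket counting with a single pass that skips backslash-escaped brackets and flags a close-before-open as incorrect order. A minimal Python sketch of that logic (function name hypothetical, not part of the upstream code):

```python
def check_brackets(text):
    """Sketch of the bracket-balance logic above: per-kind counts,
    backslash-escaped characters skipped; returns error strings."""
    pairs = [("(", ")", "round brackets"),
             ("[", "]", "square brackets"),
             ("{", "}", "curly brackets")]
    counts = {}
    errors = []
    i = 0
    while i < len(text):
        char = text[i]
        escaped = False
        # A run of backslashes toggles escaping, mirroring the JS loop.
        while char == "\\" and i + 1 < len(text):
            escaped = not escaped
            i += 1
            char = text[i]
        if escaped:
            i += 1
            continue
        for open_b, close_b, label in pairs:
            if char == open_b:
                counts[label] = counts.get(label, 0) + 1
            elif char == close_b:
                counts[label] = counts.get(label, 0) - 1
                if counts[label] < 0:
                    msg = f"Incorrect order of {label}."
                    if msg not in errors:  # the JS version uses a Set
                        errors.append(msg)
        i += 1
    for open_b, close_b, label in pairs:
        balance = counts.get(label, 0)
        if balance > 0:
            errors.append(f"{open_b} ... {close_b} - {balance} more opening than closing {label}.")
        elif balance < 0:
            errors.append(f"{open_b} ... {close_b} - {-balance} more closing than opening {label}.")
    return errors
```

Escaped brackets like `\(smile\)` are ignored, which the old match-and-count version could not do.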
+2 -2
@@ -1,7 +1,7 @@
<div>
<a href="{api_docs}">API</a>
<a href="{api_docs}" target="_blank">API</a>
•
<a href="https://github.com/AUTOMATIC1111/stable-diffusion-webui">Github</a>
<a href="https://github.com/AUTOMATIC1111/stable-diffusion-webui">GitHub</a>
•
<a href="https://gradio.app">Gradio</a>
•
@@ -104,7 +104,7 @@ var contextMenuInit = function() {
e.preventDefault();
}
});
});
}, {passive: false});
});
eventListenerApplied = true;
@@ -201,7 +201,7 @@ function setupExtraNetworks() {
    setupExtraNetworksForTab('img2img');
}

var re_extranet = /<([^:^>]+:[^:]+):[\d.]+>(.*)/;
var re_extranet = /<([^:^>]+:[^:]+):[\d.]+>(.*)/s;
var re_extranet_g = /<([^:^>]+:[^:]+):[\d.]+>/g;

var re_extranet_neg = /\(([^:^>]+:[\d.]+)\)/;
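The only change in this hunk is the `s` (dotAll) flag on `re_extranet`, so the trailing `(.*)` can capture prompt text that spans line breaks. Python's `re.DOTALL` gives the same behavior; the pattern below is a simplified analog of the JS regex, not the exact upstream one:

```python
import re

# Without DOTALL, ".*" stops at the first newline; with DOTALL it spans
# lines, mirroring the "s" flag added to re_extranet above.
pattern = re.compile(r"<([^:>]+:[^:]+):[\d.]+>(.*)", re.DOTALL)

text = "<lora:detail:0.8> first line\nsecond line"
match = pattern.search(text)
name, rest = match.group(1), match.group(2)
```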
@@ -13,6 +13,7 @@ function showModal(event) {
    if (modalImage.style.display === 'none') {
        lb.style.setProperty('background-image', 'url(' + source.src + ')');
    }
    updateModalImage();
    lb.style.display = "flex";
    lb.focus();

@@ -31,9 +32,8 @@ function negmod(n, m) {
    return ((n % m) + m) % m;
}

function updateOnBackgroundChange() {
function updateModalImage() {
    const modalImage = gradioApp().getElementById("modalImage");
    if (modalImage && modalImage.offsetParent) {
        let currentButton = selected_gallery_button();
        let preview = gradioApp().querySelectorAll('.livePreview > img');
        if (opts.js_live_preview_in_modal_lightbox && preview.length > 0) {
@@ -47,7 +47,14 @@ function updateOnBackgroundChange() {
        }
    }
}

function updateOnBackgroundChange() {
    const modalImage = gradioApp().getElementById("modalImage");
    if (modalImage && modalImage.offsetParent) {
        updateModalImage();
    }
}
const updateModalImageIfVisible = updateOnBackgroundChange;

function modalImageSwitch(offset) {
    var galleryButtons = all_gallery_buttons();
@@ -158,6 +165,7 @@ function modalLivePreviewToggle(event) {
    const modalToggleLivePreview = gradioApp().getElementById("modal_toggle_live_preview");
    opts.js_live_preview_in_modal_lightbox = !opts.js_live_preview_in_modal_lightbox;
    modalToggleLivePreview.innerHTML = opts.js_live_preview_in_modal_lightbox ? "🗇" : "🗆";
    updateModalImageIfVisible();
    event.stopPropagation();
}
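`negmod`, shown as context in the hunk above, is the usual trick for a modulo that never goes negative, which is what lets the lightbox wrap from the first image back to the last. A direct Python transcription:

```python
def negmod(n, m):
    """Modulo whose result is always in [0, m), even for negative n.
    (Python's % already behaves this way; JS's does not.)"""
    return ((n % m) + m) % m

# Stepping one image back from index 0 in a 5-image gallery wraps to 4.
prev_index = negmod(0 - 1, 5)
```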
@@ -79,11 +79,12 @@ function requestProgress(id_task, progressbarContainer, gallery, atEnd, onProgre
var wakeLock = null;

var requestWakeLock = async function() {
    if (!opts.prevent_screen_sleep_during_generation || wakeLock) return;
    if (!opts.prevent_screen_sleep_during_generation || wakeLock !== null) return;
    try {
        wakeLock = await navigator.wakeLock.request('screen');
    } catch (err) {
        console.error('Wake Lock is not supported.');
        wakeLock = false;
    }
};

@@ -189,7 +190,7 @@ function requestProgress(id_task, progressbarContainer, gallery, atEnd, onProgre
    livePreview.className = 'livePreview';
    gallery.insertBefore(livePreview, gallery.firstElementChild);
}

updateModalImageIfVisible();
livePreview.appendChild(img);
if (livePreview.childElementCount > 2) {
    livePreview.removeChild(livePreview.firstElementChild);
@@ -124,7 +124,7 @@
} else {
    R.screenX = evt.changedTouches[0].screenX;
}
});
}, {passive: false});
});

resizeHandle.addEventListener('dblclick', onDoubleClick);
@@ -6,6 +6,11 @@ git = launch_utils.git
index_url = launch_utils.index_url
dir_repos = launch_utils.dir_repos

if args.uv:
    from modules.uv_hook import patch
    patch()


commit_hash = launch_utils.commit_hash
git_tag = launch_utils.git_tag
+2 -2
@@ -113,7 +113,7 @@ def encode_pil_to_base64(image):
        image.save(output_bytes, format="PNG", pnginfo=(metadata if use_metadata else None), quality=opts.jpeg_quality)

    elif opts.samples_format.lower() in ("jpg", "jpeg", "webp"):
        if image.mode == "RGBA":
        if image.mode in ("RGBA", "P"):
            image = image.convert("RGB")
        parameters = image.info.get('parameters', None)
        exif_bytes = piexif.dump({
@@ -122,7 +122,7 @@ def encode_pil_to_base64(image):
        if opts.samples_format.lower() in ("jpg", "jpeg"):
            image.save(output_bytes, format="JPEG", exif = exif_bytes, quality=opts.jpeg_quality)
        else:
            image.save(output_bytes, format="WEBP", exif = exif_bytes, quality=opts.jpeg_quality)
            image.save(output_bytes, format="WEBP", exif = exif_bytes, quality=opts.jpeg_quality, lossless=opts.webp_lossless)

    else:
        raise HTTPException(status_code=500, detail="Invalid image format")
+17 -8
@@ -47,6 +47,22 @@ def wrap_gradio_gpu_call(func, extra_outputs=None):


def wrap_gradio_call(func, extra_outputs=None, add_stats=False):
    @wraps(func)
    def f(*args, **kwargs):
        try:
            res = func(*args, **kwargs)
        finally:
            shared.state.skipped = False
            shared.state.interrupted = False
            shared.state.stopping_generation = False
            shared.state.job_count = 0
            shared.state.job = ""
        return res

    return wrap_gradio_call_no_job(f, extra_outputs, add_stats)


def wrap_gradio_call_no_job(func, extra_outputs=None, add_stats=False):
    @wraps(func)
    def f(*args, extra_outputs_array=extra_outputs, **kwargs):
        run_memmon = shared.opts.memmon_poll_rate > 0 and not shared.mem_mon.disabled and add_stats
@@ -66,9 +82,6 @@ def wrap_gradio_call(func, extra_outputs=None, add_stats=False):
            arg_str += f" (Argument list truncated at {max_debug_str_len}/{len(arg_str)} characters)"
        errors.report(f"{message}\n{arg_str}", exc_info=True)

        shared.state.job = ""
        shared.state.job_count = 0

        if extra_outputs_array is None:
            extra_outputs_array = [None, '']

@@ -77,11 +90,6 @@ def wrap_gradio_call(func, extra_outputs=None, add_stats=False):

        devices.torch_gc()

        shared.state.skipped = False
        shared.state.interrupted = False
        shared.state.stopping_generation = False
        shared.state.job_count = 0

        if not add_stats:
            return tuple(res)

@@ -123,3 +131,4 @@ def wrap_gradio_call(func, extra_outputs=None, add_stats=False):
        return tuple(res)

    return f
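The refactor above splits `wrap_gradio_call` so that the job-state flags are reset in a `finally` block, guaranteeing cleanup even when the wrapped function raises. A minimal sketch of that pattern with a stand-in state object (names hypothetical, standing in for `shared.state`):

```python
from functools import wraps

class State:
    """Stand-in for shared.state: flags that must be reset after every call."""
    def __init__(self):
        self.interrupted = False
        self.job_count = 0

state = State()

def wrap_call(func):
    @wraps(func)
    def f(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        finally:
            # Runs on success *and* on exception, like the diff's finally block.
            state.interrupted = False
            state.job_count = 0
    return f

@wrap_call
def job():
    state.job_count = 3
    raise RuntimeError("boom")
```

Before this change, an exception path could leave stale job state behind; the `finally` makes the reset unconditional.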
@@ -126,3 +126,4 @@ parser.add_argument("--skip-load-model-at-start", action='store_true', help="if
parser.add_argument("--unix-filenames-sanitization", action='store_true', help="allow any symbols except '/' in filenames. May conflict with your browser and file system")
parser.add_argument("--filenames-max-length", type=int, default=128, help='maximal length of filenames of saved images. If you override it, it can conflict with your file system')
parser.add_argument("--no-prompt-history", action='store_true', help="disable read prompt from last generation feature; settings this argument will not create '--data_path/params.txt' file")
parser.add_argument("--uv", action='store_true', help="use the uv package manager")
+18 -4
@@ -1,7 +1,7 @@
import os

from modules import modelloader, errors
from modules.shared import cmd_opts, opts
from modules.shared import cmd_opts, opts, hf_endpoint
from modules.upscaler import Upscaler, UpscalerData
from modules.upscaler_utils import upscale_with_model

@@ -49,7 +49,18 @@ class UpscalerDAT(Upscaler):
        scaler.local_data_path = modelloader.load_file_from_url(
            scaler.data_path,
            model_dir=self.model_download_path,
            hash_prefix=scaler.sha256,
        )

        if os.path.getsize(scaler.local_data_path) < 200:
            # Re-download if the file is too small, probably an LFS pointer
            scaler.local_data_path = modelloader.load_file_from_url(
                scaler.data_path,
                model_dir=self.model_download_path,
                hash_prefix=scaler.sha256,
                re_download=True,
            )

    if not os.path.exists(scaler.local_data_path):
        raise FileNotFoundError(f"DAT data missing: {scaler.local_data_path}")
    return scaler
@@ -60,20 +71,23 @@ def get_dat_models(scaler):
    return [
        UpscalerData(
            name="DAT x2",
            path="https://github.com/n0kovo/dat_upscaler_models/raw/main/DAT/DAT_x2.pth",
            path=f"{hf_endpoint}/w-e-w/DAT/resolve/main/experiments/pretrained_models/DAT/DAT_x2.pth",
            scale=2,
            upscaler=scaler,
            sha256='7760aa96e4ee77e29d4f89c3a4486200042e019461fdb8aa286f49aa00b89b51',
        ),
        UpscalerData(
            name="DAT x3",
            path="https://github.com/n0kovo/dat_upscaler_models/raw/main/DAT/DAT_x3.pth",
            path=f"{hf_endpoint}/w-e-w/DAT/resolve/main/experiments/pretrained_models/DAT/DAT_x3.pth",
            scale=3,
            upscaler=scaler,
            sha256='581973e02c06f90d4eb90acf743ec9604f56f3c2c6f9e1e2c2b38ded1f80d197',
        ),
        UpscalerData(
            name="DAT x4",
            path="https://github.com/n0kovo/dat_upscaler_models/raw/main/DAT/DAT_x4.pth",
            path=f"{hf_endpoint}/w-e-w/DAT/resolve/main/experiments/pretrained_models/DAT/DAT_x4.pth",
            scale=4,
            upscaler=scaler,
            sha256='391a6ce69899dff5ea3214557e9d585608254579217169faf3d4c353caff049e',
        ),
    ]
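The new guard above re-downloads any model file smaller than 200 bytes, on the assumption that such a file is a Git LFS pointer stub rather than real weights (pointer files are short text records of roughly 130 bytes). A sketch of that size heuristic (helper name and threshold constant are hypothetical; the 200-byte value comes from the diff):

```python
import os
import tempfile

LFS_POINTER_MAX_SIZE = 200  # threshold used in the diff above

def looks_like_lfs_pointer(path):
    """True if the file is small enough to be an LFS pointer stub
    rather than actual model weights."""
    return os.path.getsize(path) < LFS_POINTER_MAX_SIZE

# A pointer stub is a short text file like this, not binary weights:
stub = b"version https://git-lfs.github.com/spec/v1\noid sha256:abcd\nsize 12345\n"
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(stub)
    path = f.name
```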
+1 -1
@@ -23,7 +23,7 @@ def run_pnginfo(image):
    info = ''
    for key, text in items.items():
        info += f"""
<div>
<div class="infotext">
<p><b>{plaintext_to_html(str(key))}</b></p>
<p>{plaintext_to_html(str(text))}</p>
</div>
+30 -2
@@ -1,7 +1,7 @@
import hashlib
import os.path

from modules import shared
from modules import shared, errors
import modules.cache

dump_cache = modules.cache.dump_cache
@@ -32,7 +32,7 @@ def sha256_from_cache(filename, title, use_addnet_hash=False):
    cached_sha256 = hashes[title].get("sha256", None)
    cached_mtime = hashes[title].get("mtime", 0)

    if ondisk_mtime > cached_mtime or cached_sha256 is None:
    if ondisk_mtime != cached_mtime or cached_sha256 is None:
        return None

    return cached_sha256
@@ -82,3 +82,31 @@ def addnet_hash_safetensors(b):

    return hash_sha256.hexdigest()


def partial_hash_from_cache(filename, *, ignore_cache: bool = False, digits: int = 8):
    """old hash that only looks at a small part of the file and is prone to collisions
    kept for compatibility, don't use this for new things
    """
    try:
        filename = str(filename)
        mtime = os.path.getmtime(filename)
        hashes = cache('partial-hash')
        cache_entry = hashes.get(filename, {})
        cache_mtime = cache_entry.get("mtime", 0)
        cache_hash = cache_entry.get("hash", None)
        if mtime == cache_mtime and cache_hash and not ignore_cache:
            return cache_hash[0:digits]

        with open(filename, 'rb') as file:
            m = hashlib.sha256()
            file.seek(0x100000)
            m.update(file.read(0x10000))
            partial_hash = m.hexdigest()
            hashes[filename] = {'mtime': mtime, 'hash': partial_hash}
            return partial_hash[0:digits]

    except FileNotFoundError:
        pass
    except Exception:
        errors.report(f'Error calculating partial hash for {filename}', exc_info=True)
    return 'NOFILE'
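`partial_hash_from_cache` above hashes only 64 KiB (`0x10000` bytes) starting at a 1 MiB (`0x100000`) offset, which is exactly why its docstring calls it collision-prone. The hashing step in isolation, over an in-memory stream instead of a file (function name hypothetical):

```python
import hashlib
import io

def partial_sha256(data: bytes, digits: int = 8) -> str:
    """Hash only 0x10000 bytes starting at offset 0x100000,
    mirroring the legacy partial hash above."""
    stream = io.BytesIO(data)
    stream.seek(0x100000)
    m = hashlib.sha256()
    m.update(stream.read(0x10000))
    return m.hexdigest()[:digits]

# Two blobs identical inside the sampled window get the same partial hash
# even though they differ elsewhere - the collision the docstring warns about.
base = bytes(0x110000)
variant = b"different header" + base[16:]
```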
@@ -409,6 +409,7 @@ class FilenameGenerator:
'generation_number': lambda self: NOTHING_AND_SKIP_PREVIOUS_TEXT if (self.p.n_iter == 1 and self.p.batch_size == 1) or self.zip else self.p.iteration * self.p.batch_size + self.p.batch_index + 1,
'hasprompt': lambda self, *args: self.hasprompt(*args),  # accepts formats:[hasprompt<prompt1|default><prompt2>..]
'clip_skip': lambda self: opts.data["CLIP_stop_at_last_layers"],
'randn_source': lambda self: opts.data["randn_source"],
'denoising': lambda self: self.p.denoising_strength if self.p and self.p.denoising_strength else NOTHING_AND_SKIP_PREVIOUS_TEXT,
'user': lambda self: self.p.user,
'vae_filename': lambda self: self.get_vae_filename(),
@@ -146,18 +146,19 @@ def connect_paste_params_buttons():
    destination_height_component = next(iter([field for field, name in fields if name == "Size-2"] if fields else []), None)

    if binding.source_image_component and destination_image_component:
        need_send_dementions = destination_width_component and binding.tabname != 'inpaint'
        if isinstance(binding.source_image_component, gr.Gallery):
            func = send_image_and_dimensions if destination_width_component else image_from_url_text
            func = send_image_and_dimensions if need_send_dementions else image_from_url_text
            jsfunc = "extract_image_from_gallery"
        else:
            func = send_image_and_dimensions if destination_width_component else lambda x: x
            func = send_image_and_dimensions if need_send_dementions else lambda x: x
            jsfunc = None

        binding.paste_button.click(
            fn=func,
            _js=jsfunc,
            inputs=[binding.source_image_component],
            outputs=[destination_image_component, destination_width_component, destination_height_component] if destination_width_component else [destination_image_component],
            outputs=[destination_image_component, destination_width_component, destination_height_component] if need_send_dementions else [destination_image_component],
            show_progress=False,
        )
+58 -10
@@ -9,6 +9,7 @@ import importlib.util
import importlib.metadata
import platform
import json
import shlex
from functools import lru_cache

from modules import cmd_args, errors
@@ -42,9 +43,7 @@ def check_python_version():
    supported_minors = [7, 8, 9, 10, 11]

    if not (major == 3 and minor in supported_minors):
        import modules.errors

        modules.errors.print_error_explanation(f"""
        errors.print_error_explanation(f"""
INCOMPATIBLE PYTHON VERSION

This program is tested with 3.10.6 Python, but you have {major}.{minor}.{micro}.
@@ -314,9 +313,43 @@ def requirements_met(requirements_file):
    return True


def get_cuda_comp_cap():
    """
    Returns float of CUDA Compute Capability using nvidia-smi
    Returns 0.0 on error
    CUDA Compute Capability
    ref https://developer.nvidia.com/cuda-gpus
    ref https://en.wikipedia.org/wiki/CUDA
    Blackwell consumer GPUs should return 12.0, data-center GPUs should return 10.0
    """
    try:
        return max(map(float, subprocess.check_output(['nvidia-smi', '--query-gpu=compute_cap', '--format=noheader,csv'], text=True).splitlines()))
    except Exception as _:
        return 0.0


def early_access_blackwell_wheels():
    """For Blackwell GPUs, use Early Access PyTorch Wheels provided by Nvidia"""
    print('deprecated early_access_blackwell_wheels')
    if all([
        os.environ.get('TORCH_INDEX_URL') is None,
        sys.version_info.major == 3,
        sys.version_info.minor in (10, 11, 12),
        platform.system() == "Windows",
        get_cuda_comp_cap() >= 10,  # Blackwell
    ]):
        base_repo = 'https://huggingface.co/w-e-w/torch-2.6.0-cu128.nv/resolve/main/'
        ea_whl = {
            10: f'{base_repo}torch-2.6.0+cu128.nv-cp310-cp310-win_amd64.whl#sha256=fef3de7ce8f4642e405576008f384304ad0e44f7b06cc1aa45e0ab4b6e70490d {base_repo}torchvision-0.20.0a0+cu128.nv-cp310-cp310-win_amd64.whl#sha256=50841254f59f1db750e7348b90a8f4cd6befec217ab53cbb03780490b225abef',
            11: f'{base_repo}torch-2.6.0+cu128.nv-cp311-cp311-win_amd64.whl#sha256=6665c36e6a7e79e7a2cb42bec190d376be9ca2859732ed29dd5b7b5a612d0d26 {base_repo}torchvision-0.20.0a0+cu128.nv-cp311-cp311-win_amd64.whl#sha256=bbc0ee4938e35fe5a30de3613bfcd2d8ef4eae334cf8d49db860668f0bb47083',
            12: f'{base_repo}torch-2.6.0+cu128.nv-cp312-cp312-win_amd64.whl#sha256=a3197f72379d34b08c4a4bcf49ea262544a484e8702b8c46cbcd66356c89def6 {base_repo}torchvision-0.20.0a0+cu128.nv-cp312-cp312-win_amd64.whl#sha256=235e7be71ac4e75b0f8e817bae4796d7bac8a67146d2037ab96394f2bdc63e6c'
        }
        return f'pip install {ea_whl.get(sys.version_info.minor)}'
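`get_cuda_comp_cap` above takes the maximum over `nvidia-smi --query-gpu=compute_cap` output, one value per line, one line per GPU. The parsing step can be exercised without a GPU; the sample output below is illustrative, not captured from real hardware:

```python
def parse_compute_cap(smi_output: str) -> float:
    """Max CUDA compute capability from nvidia-smi CSV output
    (one value per line, one line per GPU); 0.0 if none."""
    try:
        return max(map(float, smi_output.splitlines()))
    except ValueError:
        return 0.0

# Hypothetical two-GPU machine: an Ada card (8.9) and a Blackwell card (12.0).
sample = "8.9\n12.0\n"
```

Taking the max means a mixed-GPU machine is classified by its most capable card, which is what the `>= 10` Blackwell check above relies on.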
def prepare_environment():
    torch_index_url = os.environ.get('TORCH_INDEX_URL', "https://download.pytorch.org/whl/cu121")
    torch_command = os.environ.get('TORCH_COMMAND', f"pip install torch==2.1.2 torchvision==0.16.2 --extra-index-url {torch_index_url}")
    torch_index_url = os.environ.get('TORCH_INDEX_URL', "https://download.pytorch.org/whl/cu128")
    torch_command = os.environ.get('TORCH_COMMAND', f"pip install torch==2.7.0 torchvision==0.22.0 --extra-index-url {torch_index_url}")
    if args.use_ipex:
        if platform.system() == "Windows":
            # The "Nuullll/intel-extension-for-pytorch" wheels were built from IPEX source for Intel Arc GPU: https://github.com/intel/intel-extension-for-pytorch/tree/xpu-main
@@ -340,12 +373,12 @@ def prepare_environment():
    requirements_file = os.environ.get('REQS_FILE', "requirements_versions.txt")
    requirements_file_for_npu = os.environ.get('REQS_FILE_FOR_NPU', "requirements_npu.txt")

    xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.23.post1')
    xformers_package = os.environ.get('XFORMERS_PACKAGE', 'xformers==0.0.30')
    clip_package = os.environ.get('CLIP_PACKAGE', "https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip")
    openclip_package = os.environ.get('OPENCLIP_PACKAGE', "https://github.com/mlfoundations/open_clip/archive/bb6e834e9c70d9c27d0dc3ecedeebeaeb1ffad6b.zip")

    assets_repo = os.environ.get('ASSETS_REPO', "https://github.com/AUTOMATIC1111/stable-diffusion-webui-assets.git")
    stable_diffusion_repo = os.environ.get('STABLE_DIFFUSION_REPO', "https://github.com/Stability-AI/stablediffusion.git")
    stable_diffusion_repo = os.environ.get('STABLE_DIFFUSION_REPO', "https://github.com/w-e-w/stablediffusion.git")
    stable_diffusion_xl_repo = os.environ.get('STABLE_DIFFUSION_XL_REPO', "https://github.com/Stability-AI/generative-models.git")
    k_diffusion_repo = os.environ.get('K_DIFFUSION_REPO', 'https://github.com/crowsonkb/k-diffusion.git')
    blip_repo = os.environ.get('BLIP_REPO', 'https://github.com/salesforce/BLIP.git')
@@ -389,8 +422,24 @@ def prepare_environment():
    )
    startup_timer.record("torch GPU test")

    # Ensure build dependencies are installed before any package that might need them
    def ensure_build_dependencies():
        """Ensure essential build tools are available"""
        if not is_installed("wheel"):
            run_pip("install wheel", "wheel")
        # Check setuptools version compatibility
        try:
            setuptools_version = run(f'"{python}" -c "import setuptools; print(setuptools.__version__)"', None, None).strip()
            if setuptools_version >= "70":
                run_pip("install setuptools==69.5.1", "setuptools")
        except Exception:
            # If setuptools check fails, install compatible version
            run_pip("install setuptools==69.5.1", "setuptools")
    # Install build dependencies early
    ensure_build_dependencies()

    if not is_installed("clip"):
        run_pip(f"install {clip_package}", "clip")
        run_pip(f"install --no-build-isolation {clip_package}", "clip")
    startup_timer.record("install clip")

    if not is_installed("open_clip"):
@@ -445,7 +494,6 @@ def prepare_environment():
    exit(0)
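One caveat worth noting about `ensure_build_dependencies` above: `setuptools_version >= "70"` compares version strings lexicographically, which happens to work for today's two-digit majors but would misorder a single-digit major against `"70"`. A numeric-major comparison avoids the pitfall (helper name hypothetical, shown only to illustrate the difference):

```python
def major_at_least(version: str, minimum: int) -> bool:
    """Compare by numeric major version instead of string order."""
    return int(version.split(".")[0]) >= minimum
```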
def configure_for_tests():
    if "--api" not in sys.argv:
        sys.argv.append("--api")
@@ -461,7 +509,7 @@


def start():
    print(f"Launching {'API server' if '--nowebui' in sys.argv else 'Web UI'} with arguments: {' '.join(sys.argv[1:])}")
    print(f"Launching {'API server' if '--nowebui' in sys.argv else 'Web UI'} with arguments: {shlex.join(sys.argv[1:])}")
    import webui
    if '--nowebui' in sys.argv:
        webui.api_only()
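Switching `' '.join` to `shlex.join` in `start()` makes the logged command line unambiguous (and copy-pasteable) when arguments contain spaces:

```python
import shlex

argv = ["--ckpt", "my model.safetensors", "--api"]

naive = " ".join(argv)     # ambiguous: which spaces separate arguments?
quoted = shlex.join(argv)  # quotes only the arguments that need it
```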
+1 -24
@@ -10,6 +10,7 @@ import torch

from modules import shared
from modules.upscaler import Upscaler, UpscalerLanczos, UpscalerNearest, UpscalerNone
from modules.util import load_file_from_url  # noqa, backwards compatibility

if TYPE_CHECKING:
    import spandrel
@@ -17,30 +18,6 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)


def load_file_from_url(
    url: str,
    *,
    model_dir: str,
    progress: bool = True,
    file_name: str | None = None,
    hash_prefix: str | None = None,
) -> str:
    """Download a file from `url` into `model_dir`, using the file present if possible.

    Returns the path to the downloaded file.
    """
    os.makedirs(model_dir, exist_ok=True)
    if not file_name:
        parts = urlparse(url)
        file_name = os.path.basename(parts.path)
    cached_file = os.path.abspath(os.path.join(model_dir, file_name))
    if not os.path.exists(cached_file):
        print(f'Downloading: "{url}" to {cached_file}\n')
        from torch.hub import download_url_to_file
        download_url_to_file(url, cached_file, progress=progress, hash_prefix=hash_prefix)
    return cached_file


def load_models(model_path: str, model_url: str = None, command_path: str = None, ext_filter=None, download_name=None, ext_blacklist=None, hash_prefix=None) -> list:
    """
    A one-and done loader to try finding the desired models in specified directories.
@@ -175,6 +175,9 @@ class VectorEmbedder(nn.Module):
#################################################################################


class QkvLinear(torch.nn.Linear):
    pass

def split_qkv(qkv, head_dim):
    qkv = qkv.reshape(qkv.shape[0], qkv.shape[1], 3, -1, head_dim).movedim(2, 0)
    return qkv[0], qkv[1], qkv[2]
@@ -202,7 +205,7 @@ class SelfAttention(nn.Module):
        self.num_heads = num_heads
        self.head_dim = dim // num_heads

        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias, dtype=dtype, device=device)
        self.qkv = QkvLinear(dim, dim * 3, bias=qkv_bias, dtype=dtype, device=device)
        if not pre_only:
            self.proj = nn.Linear(dim, dim, dtype=dtype, device=device)
        assert attn_mode in self.ATTENTION_MODES
@@ -5,6 +5,8 @@ import math
from torch import nn
from transformers import CLIPTokenizer, T5TokenizerFast

from modules import sd_hijack


#################################################################################################
### Core/Utility
@@ -110,9 +112,9 @@ class CLIPEncoder(torch.nn.Module):


class CLIPEmbeddings(torch.nn.Module):
    def __init__(self, embed_dim, vocab_size=49408, num_positions=77, dtype=None, device=None):
    def __init__(self, embed_dim, vocab_size=49408, num_positions=77, dtype=None, device=None, textual_inversion_key="clip_l"):
        super().__init__()
        self.token_embedding = torch.nn.Embedding(vocab_size, embed_dim, dtype=dtype, device=device)
        self.token_embedding = sd_hijack.TextualInversionEmbeddings(vocab_size, embed_dim, dtype=dtype, device=device, textual_inversion_key=textual_inversion_key)
        self.position_embedding = torch.nn.Embedding(num_positions, embed_dim, dtype=dtype, device=device)

    def forward(self, input_tokens):
@@ -127,7 +129,7 @@ class CLIPTextModel_(torch.nn.Module):
        intermediate_size = config_dict["intermediate_size"]
        intermediate_activation = config_dict["hidden_act"]
        super().__init__()
        self.embeddings = CLIPEmbeddings(embed_dim, dtype=torch.float32, device=device)
        self.embeddings = CLIPEmbeddings(embed_dim, dtype=torch.float32, device=device, textual_inversion_key=config_dict.get('textual_inversion_key', 'clip_l'))
        self.encoder = CLIPEncoder(num_layers, embed_dim, heads, intermediate_size, intermediate_activation, dtype, device)
        self.final_layer_norm = nn.LayerNorm(embed_dim, dtype=dtype, device=device)
@@ -24,7 +24,7 @@ class SafetensorsMapping(typing.Mapping):
        return self.file.get_tensor(key)


CLIPL_URL = "https://huggingface.co/AUTOMATIC/stable-diffusion-3-medium-text-encoders/resolve/main/clip_l.safetensors"
CLIPL_URL = f"{shared.hf_endpoint}/AUTOMATIC/stable-diffusion-3-medium-text-encoders/resolve/main/clip_l.safetensors"
CLIPL_CONFIG = {
    "hidden_act": "quick_gelu",
    "hidden_size": 768,
@@ -33,16 +33,17 @@ CLIPL_CONFIG = {
    "num_hidden_layers": 12,
}

CLIPG_URL = "https://huggingface.co/AUTOMATIC/stable-diffusion-3-medium-text-encoders/resolve/main/clip_g.safetensors"
CLIPG_URL = f"{shared.hf_endpoint}/AUTOMATIC/stable-diffusion-3-medium-text-encoders/resolve/main/clip_g.safetensors"
CLIPG_CONFIG = {
    "hidden_act": "gelu",
    "hidden_size": 1280,
    "intermediate_size": 5120,
    "num_attention_heads": 20,
    "num_hidden_layers": 32,
    "textual_inversion_key": "clip_g",
}

T5_URL = "https://huggingface.co/AUTOMATIC/stable-diffusion-3-medium-text-encoders/resolve/main/t5xxl_fp16.safetensors"
T5_URL = f"{shared.hf_endpoint}/AUTOMATIC/stable-diffusion-3-medium-text-encoders/resolve/main/t5xxl_fp16.safetensors"
T5_CONFIG = {
    "d_ff": 10240,
    "d_model": 4096,
@@ -204,7 +205,10 @@ class SD3Cond(torch.nn.Module):
        self.t5xxl.transformer.load_state_dict(SafetensorsMapping(file), strict=False)

    def encode_embedding_init_text(self, init_text, nvpt):
        return torch.tensor([[0]], device=devices.device)  # XXX
        return self.model_lg.encode_embedding_init_text(init_text, nvpt)

    def tokenize(self, texts):
        return self.model_lg.tokenize(texts)

    def medvram_modules(self):
        return [self.clip_g, self.clip_l, self.t5xxl]
@@ -67,6 +67,7 @@ class BaseModel(torch.nn.Module):
        }
        self.diffusion_model = MMDiT(input_size=None, pos_embed_scaling_factor=None, pos_embed_offset=None, pos_embed_max_size=pos_embed_max_size, patch_size=patch_size, in_channels=16, depth=depth, num_patches=num_patches, adm_in_channels=adm_in_channels, context_embedder_config=context_embedder_config, device=device, dtype=dtype)
        self.model_sampling = ModelSamplingDiscreteFlow(shift=shift)
        self.depth = depth

    def apply_model(self, x, sigma, c_crossattn=None, y=None):
        dtype = self.get_dtype()

@@ -82,3 +82,15 @@ class SD3Inferencer(torch.nn.Module):

    def fix_dimensions(self, width, height):
        return width // 16 * 16, height // 16 * 16

    def diffusers_weight_mapping(self):
        for i in range(self.model.depth):
            yield f"transformer.transformer_blocks.{i}.attn.to_q", f"diffusion_model_joint_blocks_{i}_x_block_attn_qkv_q_proj"
            yield f"transformer.transformer_blocks.{i}.attn.to_k", f"diffusion_model_joint_blocks_{i}_x_block_attn_qkv_k_proj"
            yield f"transformer.transformer_blocks.{i}.attn.to_v", f"diffusion_model_joint_blocks_{i}_x_block_attn_qkv_v_proj"
            yield f"transformer.transformer_blocks.{i}.attn.to_out.0", f"diffusion_model_joint_blocks_{i}_x_block_attn_proj"

            yield f"transformer.transformer_blocks.{i}.attn.add_q_proj", f"diffusion_model_joint_blocks_{i}_context_block.attn_qkv_q_proj"
            yield f"transformer.transformer_blocks.{i}.attn.add_k_proj", f"diffusion_model_joint_blocks_{i}_context_block.attn_qkv_k_proj"
            yield f"transformer.transformer_blocks.{i}.attn.add_v_proj", f"diffusion_model_joint_blocks_{i}_context_block.attn_qkv_v_proj"
            yield f"transformer.transformer_blocks.{i}.attn.add_out_proj.0", f"diffusion_model_joint_blocks_{i}_context_block_attn_proj"
@@ -1259,6 +1259,9 @@ class StableDiffusionProcessingTxt2Img(StableDiffusionProcessing):
            if self.hr_checkpoint_info is None:
                raise Exception(f'Could not find checkpoint with name {self.hr_checkpoint_name}')

            if shared.sd_model.sd_checkpoint_info == self.hr_checkpoint_info:
                self.hr_checkpoint_info = None
            else:
                self.extra_generation_params["Hires checkpoint"] = self.hr_checkpoint_info.short_title

        if self.hr_sampler_name is not None and self.hr_sampler_name != self.sampler_name:
@@ -13,6 +13,7 @@ class ScriptPostprocessingForMainUI(scripts.Script):
        return scripts.AlwaysVisible

    def ui(self, is_img2img):
        self.script.tab_name = '_img2img' if is_img2img else '_txt2img'
        self.postprocessing_controls = self.script.ui()
        return self.postprocessing_controls.values()

@@ -33,7 +34,7 @@ def create_auto_preprocessing_script_data():

    for name in shared.opts.postprocessing_enable_in_main_ui:
        script = next(iter([x for x in scripts.postprocessing_scripts_data if x.script_class.name == name]), None)
        if script is None:
        if script is None or script.script_class.extra_only:
            continue

        constructor = lambda s=script: ScriptPostprocessingForMainUI(s.script_class())
@@ -1,3 +1,4 @@
+import re
import dataclasses
import os
import gradio as gr

@@ -59,6 +60,10 @@ class ScriptPostprocessing:
    args_from = None
    args_to = None

+   # define if the script should be used only in extras or main UI
+   extra_only = None
+   main_ui_only = None

    order = 1000
    """scripts will be ordered by this value in postprocessing UI"""
@@ -97,6 +102,31 @@ class ScriptPostprocessing:
    def image_changed(self):
        pass

+   tab_name = ''  # used by ScriptPostprocessingForMainUI
+   replace_pattern = re.compile(r'\s')
+   rm_pattern = re.compile(r'[^a-z_0-9]')
+
+   def elem_id(self, item_id):
+       """
+       Helper function to generate the id for an HTML element;
+       constructs the final id out of the script name and the user-supplied item_id:
+       'script_extras_{self.name.lower()}_{item_id}'
+       {tab_name} will be appended to the end of the id if set;
+       tab_name is set to '_img2img' or '_txt2img' when used by ScriptPostprocessingForMainUI
+
+       Extensions should use this function to generate element IDs
+       """
+       return self.elem_id_suffix(f'extras_{self.name.lower()}_{item_id}')
+
+   def elem_id_suffix(self, base_id):
+       """
+       Append tab_name to the base_id
+
+       Extensions that already have their own specific element IDs and wish to keep them the same when possible should use this function
+       """
+       base_id = self.rm_pattern.sub('', self.replace_pattern.sub('_', base_id))
+       return f'{base_id}{self.tab_name}'


def wrap_call(func, filename, funcname, *args, default=None, **kwargs):
    try:
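The two-regex sanitisation in `elem_id_suffix` above can be tried in isolation; this is a minimal standalone sketch (the function name `sanitize_elem_id` is hypothetical, the regexes are copied from the hunk):

```python
import re

# same patterns as ScriptPostprocessing: whitespace -> "_", then drop
# anything outside [a-z_0-9]
replace_pattern = re.compile(r'\s')
rm_pattern = re.compile(r'[^a-z_0-9]')


def sanitize_elem_id(base_id, tab_name=''):
    # mirror elem_id_suffix: normalise the id, then append the tab suffix
    base_id = rm_pattern.sub('', replace_pattern.sub('_', base_id))
    return f'{base_id}{tab_name}'
```

Note that uppercase characters are removed rather than lowercased, which is why `elem_id` lowercases the script name before calling this.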
@@ -119,10 +149,6 @@ class ScriptPostprocessingRunner:
        for script_data in scripts_data:
            script: ScriptPostprocessing = script_data.script_class()
            script.filename = script_data.path

-           if script.name == "Simple Upscale":
-               continue

            self.scripts.append(script)

    def create_script_ui(self, script, inputs):
@@ -152,7 +178,7 @@ class ScriptPostprocessingRunner:

            return len(self.scripts)

-       filtered_scripts = [script for script in self.scripts if script.name not in scripts_filter_out]
+       filtered_scripts = [script for script in self.scripts if script.name not in scripts_filter_out and not script.main_ui_only]
        script_scores = {script.name: (script_score(script.name), script.order, script.name, original_index) for original_index, script in enumerate(filtered_scripts)}

        return sorted(filtered_scripts, key=lambda x: script_scores[x.name])
@@ -76,7 +76,7 @@ class DisableInitialization(ReplaceHelper):
    def transformers_utils_hub_get_file_from_cache(original, url, *args, **kwargs):

        # this file is always 404, prevent making request
-       if url == 'https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/added_tokens.json' or url == 'openai/clip-vit-large-patch14' and args[0] == 'added_tokens.json':
+       if url == f'{shared.hf_endpoint}/openai/clip-vit-large-patch14/resolve/main/added_tokens.json' or url == 'openai/clip-vit-large-patch14' and args[0] == 'added_tokens.json':
            return None

        try:
@@ -359,13 +359,28 @@ class EmbeddingsWithFixes(torch.nn.Module):
                vec = embedding.vec[self.textual_inversion_key] if isinstance(embedding.vec, dict) else embedding.vec
                emb = devices.cond_cast_unet(vec)
                emb_len = min(tensor.shape[0] - offset - 1, emb.shape[0])
-               tensor = torch.cat([tensor[0:offset + 1], emb[0:emb_len], tensor[offset + 1 + emb_len:]])
+               tensor = torch.cat([tensor[0:offset + 1], emb[0:emb_len], tensor[offset + 1 + emb_len:]]).to(dtype=inputs_embeds.dtype)

            vecs.append(tensor)

        return torch.stack(vecs)


+class TextualInversionEmbeddings(torch.nn.Embedding):
+    def __init__(self, num_embeddings: int, embedding_dim: int, textual_inversion_key='clip_l', **kwargs):
+        super().__init__(num_embeddings, embedding_dim, **kwargs)
+
+        self.embeddings = model_hijack
+        self.textual_inversion_key = textual_inversion_key
+
+    @property
+    def wrapped(self):
+        return super().forward
+
+    def forward(self, input_ids):
+        return EmbeddingsWithFixes.forward(self, input_ids)


def add_circular_option_to_conv_2d():
    conv2d_constructor = torch.nn.Conv2d.__init__
@@ -54,7 +54,7 @@ class SdOptimizationXformers(SdOptimization):
    priority = 100

    def is_available(self):
-       return shared.cmd_opts.force_enable_xformers or (shared.xformers_available and torch.cuda.is_available() and (6, 0) <= torch.cuda.get_device_capability(shared.device) <= (9, 0))
+       return shared.cmd_opts.force_enable_xformers or (shared.xformers_available and torch.cuda.is_available() and (6, 0) <= torch.cuda.get_device_capability(shared.device) <= (12, 0))

    def apply(self):
        ldm.modules.attention.CrossAttention.forward = xformers_attention_forward
@@ -13,6 +13,7 @@ from urllib import request
import ldm.modules.midas as midas

from modules import paths, shared, modelloader, devices, script_callbacks, sd_vae, sd_disable_initialization, errors, hashes, sd_models_config, sd_unet, sd_models_xl, cache, extra_networks, processing, lowvram, sd_hijack, patches
+from modules.hashes import partial_hash_from_cache as model_hash  # noqa: F401 for backwards compatibility
from modules.timer import Timer
from modules.shared import opts
import tomesd

@@ -87,7 +88,7 @@ class CheckpointInfo:
        self.name = name
        self.name_for_extra = os.path.splitext(os.path.basename(filename))[0]
        self.model_name = os.path.splitext(name.replace("/", "_").replace("\\", "_"))[0]
-       self.hash = model_hash(filename)
+       self.hash = hashes.partial_hash_from_cache(filename)

        self.sha256 = hashes.sha256_from_cache(self.filename, f"checkpoint/{name}")
        self.shorthash = self.sha256[0:10] if self.sha256 else None
@@ -159,7 +160,7 @@ def list_models():
        model_url = None
        expected_sha256 = None
    else:
-       model_url = f"{shared.hf_endpoint}/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors"
+       model_url = f"{shared.hf_endpoint}/stable-diffusion-v1-5/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors"
        expected_sha256 = '6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa'

    model_list = modelloader.load_models(model_path=model_path, model_url=model_url, command_path=shared.cmd_opts.ckpt_dir, ext_filter=[".ckpt", ".safetensors"], download_name="v1-5-pruned-emaonly.safetensors", ext_blacklist=[".vae.ckpt", ".vae.safetensors"], hash_prefix=expected_sha256)
@@ -200,21 +201,6 @@ def get_closet_checkpoint_match(search_string):
    return None


-def model_hash(filename):
-    """old hash that only looks at a small part of the file and is prone to collisions"""
-
-    try:
-        with open(filename, "rb") as file:
-            import hashlib
-            m = hashlib.sha256()
-
-            file.seek(0x100000)
-            m.update(file.read(0x10000))
-            return m.hexdigest()[0:8]
-    except FileNotFoundError:
-        return 'NOFILE'


def select_checkpoint():
    """Raises `FileNotFoundError` if no checkpoints are found."""
    model_checkpoint = shared.opts.sd_model_checkpoint
@@ -423,6 +409,10 @@ def load_model_weights(model, checkpoint_info: CheckpointInfo, state_dict, timer

    set_model_type(model, state_dict)
    set_model_fields(model)
+   if 'ztsnr' in state_dict:
+       model.ztsnr = True
+   else:
+       model.ztsnr = False

    if model.is_sdxl:
        sd_models_xl.extend_sdxl(model)
@@ -661,7 +651,7 @@ def apply_alpha_schedule_override(sd_model, p=None):
            p.extra_generation_params['Downcast alphas_cumprod'] = opts.use_downcasted_alpha_bar
        sd_model.alphas_cumprod = sd_model.alphas_cumprod.half().to(shared.device)

-   if opts.sd_noise_schedule == "Zero Terminal SNR":
+   if opts.sd_noise_schedule == "Zero Terminal SNR" or (hasattr(sd_model, 'ztsnr') and sd_model.ztsnr):
        if p is not None:
            p.extra_generation_params['Noise Schedule'] = opts.sd_noise_schedule
        sd_model.alphas_cumprod = rescale_zero_terminal_snr_abar(sd_model.alphas_cumprod).to(shared.device)
@@ -783,7 +773,7 @@ def get_obj_from_str(string, reload=False):
    return getattr(importlib.import_module(module, package=None), cls)


-def load_model(checkpoint_info=None, already_loaded_state_dict=None):
+def load_model(checkpoint_info=None, already_loaded_state_dict=None, checkpoint_config=None):
    from modules import sd_hijack
    checkpoint_info = checkpoint_info or select_checkpoint()
@@ -801,6 +791,7 @@ def load_model(checkpoint_info=None, already_loaded_state_dict=None):
    else:
        state_dict = get_checkpoint_state_dict(checkpoint_info, timer)

+   if not checkpoint_config:
        checkpoint_config = sd_models_config.find_checkpoint_config(state_dict, checkpoint_info)
    clip_is_included_into_sd = any(x for x in [sd1_clip_weight, sd2_clip_weight, sdxl_clip_weight, sdxl_refiner_clip_weight] if x in state_dict)
@@ -974,7 +965,7 @@ def reload_model_weights(sd_model=None, info=None, forced_reload=False):
        if sd_model is not None:
            send_model_to_trash(sd_model)

-       load_model(checkpoint_info, already_loaded_state_dict=state_dict)
+       load_model(checkpoint_info, already_loaded_state_dict=state_dict, checkpoint_config=checkpoint_config)
        return model_data.sd_model

    try:
@@ -14,6 +14,7 @@ config_sd2 = os.path.join(sd_repo_configs_path, "v2-inference.yaml")
config_sd2v = os.path.join(sd_repo_configs_path, "v2-inference-v.yaml")
config_sd2_inpainting = os.path.join(sd_repo_configs_path, "v2-inpainting-inference.yaml")
config_sdxl = os.path.join(sd_xl_repo_configs_path, "sd_xl_base.yaml")
+config_sdxlv = os.path.join(sd_configs_path, "sd_xl_v.yaml")
config_sdxl_refiner = os.path.join(sd_xl_repo_configs_path, "sd_xl_refiner.yaml")
config_sdxl_inpainting = os.path.join(sd_configs_path, "sd_xl_inpaint.yaml")
config_depth_model = os.path.join(sd_repo_configs_path, "v2-midas-inference.yaml")

@@ -81,6 +82,9 @@ def guess_model_config_from_state_dict(sd, filename):
        if diffusion_model_input.shape[1] == 9:
            return config_sdxl_inpainting
        else:
+           if ('v_pred' in sd):
+               del sd['v_pred']
+               return config_sdxlv
            return config_sdxl

    if sd.get('conditioner.embedders.0.model.ln_final.weight', None) is not None:
@@ -120,6 +120,10 @@ class KDiffusionSampler(sd_samplers_common.Sampler):
        if scheduler.need_inner_model:
            sigmas_kwargs['inner_model'] = self.model_wrap

+       if scheduler.label == 'Beta':
+           p.extra_generation_params["Beta schedule alpha"] = opts.beta_dist_alpha
+           p.extra_generation_params["Beta schedule beta"] = opts.beta_dist_beta

        sigmas = scheduler.function(n=steps, **sigmas_kwargs, device=devices.cpu)

        if discard_next_to_last_sigma:
@@ -2,6 +2,7 @@ import dataclasses
import torch
import k_diffusion
import numpy as np
+from scipy import stats

from modules import shared
@@ -115,6 +116,20 @@ def ddim_scheduler(n, sigma_min, sigma_max, inner_model, device):
    return torch.FloatTensor(sigs).to(device)


+def beta_scheduler(n, sigma_min, sigma_max, inner_model, device):
+    # From "Beta Sampling is All You Need" [arXiv:2407.12173] (Lee et al., 2024)
+    alpha = shared.opts.beta_dist_alpha
+    beta = shared.opts.beta_dist_beta
+    curve = [stats.beta.ppf(x, alpha, beta) for x in np.linspace(1, 0, n)]
+
+    start = inner_model.sigma_to_t(torch.tensor(sigma_max))
+    end = inner_model.sigma_to_t(torch.tensor(sigma_min))
+    timesteps = [end + x * (start - end) for x in curve]
+    sigmas = [inner_model.t_to_sigma(ts) for ts in timesteps]
+    sigmas += [0.0]
+    return torch.FloatTensor(sigmas).to(device)


schedulers = [
    Scheduler('automatic', 'Automatic', None),
    Scheduler('uniform', 'Uniform', uniform, need_inner_model=True),
@@ -127,6 +142,7 @@ schedulers = [
    Scheduler('simple', 'Simple', simple_scheduler, need_inner_model=True),
    Scheduler('normal', 'Normal', normal_scheduler, need_inner_model=True),
    Scheduler('ddim', 'DDIM', ddim_scheduler, need_inner_model=True),
+   Scheduler('beta', 'Beta', beta_scheduler, need_inner_model=True),
]

schedulers_map = {**{x.name: x for x in schedulers}, **{x.label: x for x in schedulers}}
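The new `beta_scheduler` needs scipy's `stats.beta.ppf` and a live inner model, but its structure can be sketched without either. Below is a dependency-free skeleton (the name `beta_like_schedule` is hypothetical, and `math.log`/`math.exp` are toy stand-ins for the model's real `sigma_to_t`/`t_to_sigma` conversions): sample a curve over [1, 0], map it onto the timestep interval, convert back to sigmas, and append the terminal 0.0.

```python
import math


def beta_like_schedule(n, sigma_min, sigma_max, sigma_to_t, t_to_sigma, ppf=lambda x: x):
    # Same skeleton as beta_scheduler above.  With ppf=identity this
    # degenerates to a uniform schedule in t; pass scipy's
    # stats.beta(alpha, beta).ppf to reproduce the actual Beta curve.
    curve = [ppf(i / (n - 1)) for i in range(n - 1, -1, -1)]  # like np.linspace(1, 0, n)
    start, end = sigma_to_t(sigma_max), sigma_to_t(sigma_min)
    timesteps = [end + x * (start - end) for x in curve]
    return [t_to_sigma(t) for t in timesteps] + [0.0]
```

The beta distribution's percent-point function front-loads or back-loads the steps depending on alpha and beta (both default to 0.6 per the new options further down), while the surrounding interpolation is unchanged from the uniform case.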
@@ -69,3 +69,44 @@ def reload_gradio_theme(theme_name=None):
    # append additional values to gradio_theme
    shared.gradio_theme.sd_webui_modal_lightbox_toolbar_opacity = shared.opts.sd_webui_modal_lightbox_toolbar_opacity
    shared.gradio_theme.sd_webui_modal_lightbox_icon_opacity = shared.opts.sd_webui_modal_lightbox_icon_opacity


+def resolve_var(name: str, gradio_theme=None, history=None):
+    """
+    Attempt to resolve a theme variable name to its value
+
+    Parameters:
+        name (str): The name of the theme variable,
+            e.g. "background_fill_primary", "background_fill_primary_dark";
+            spaces and any asterisk (*) prefix are removed from the name before lookup
+        gradio_theme (gradio.themes.ThemeClass): The theme object to resolve the variable from;
+            leave blank to use the webui default shared.gradio_theme
+        history (list): A list of previously resolved variables to prevent circular references;
+            for regular use leave blank
+    Returns:
+        str: The resolved value
+
+    Error handling:
+        returns either #000000 or #ffffff depending on whether the initial name ends with "_dark"
+    """
+    try:
+        if history is None:
+            history = []
+        if gradio_theme is None:
+            gradio_theme = shared.gradio_theme
+
+        name = name.strip()
+        name = name[1:] if name.startswith("*") else name
+
+        if name in history:
+            raise ValueError(f'Circular references: name "{name}" in {history}')
+
+        if value := getattr(gradio_theme, name, None):
+            return resolve_var(value, gradio_theme, history + [name])
+        else:
+            return name
+
+    except Exception:
+        name = history[0] if history else name
+        errors.report(f'resolve_color({name})', exc_info=True)
+        return '#000000' if name.endswith("_dark") else '#ffffff'
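The resolution logic in `resolve_var` is independent of gradio: follow `*`-prefixed references until a literal value is reached, keeping a history to detect cycles. A standalone analogue over a plain dict (the name `resolve_theme_var` is hypothetical) shows the same recursion and cycle guard:

```python
def resolve_theme_var(name, theme, history=None):
    # Follow references in `theme` (a dict standing in for the gradio theme
    # object) until the value no longer names another variable.
    history = history or []
    name = name.strip().lstrip("*")  # the original strips a single leading "*"
    if name in history:
        raise ValueError(f'Circular references: name "{name}" in {history}')
    if (value := theme.get(name)) is not None:
        return resolve_theme_var(value, theme, history + [name])
    return name
```

Unlike the original, this sketch lets the `ValueError` propagate instead of reporting it and falling back to `#000000`/`#ffffff`.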
@@ -16,10 +16,12 @@ def dat_models_names():
    return [x.name for x in modules.dat_model.get_dat_models(None)]


-def postprocessing_scripts():
+def postprocessing_scripts(filter_out_extra_only=False, filter_out_main_ui_only=False):
    import modules.scripts

-   return modules.scripts.scripts_postproc.scripts
+   return list(filter(
+       lambda s: (not filter_out_extra_only or not s.extra_only) and (not filter_out_main_ui_only or not s.main_ui_only),
+       modules.scripts.scripts_postproc.scripts,
+   ))


def sd_vae_items():
@@ -123,7 +125,7 @@ def ui_reorder_categories():

def callbacks_order_settings():
    options = {
-       "sd_vae_explanation": OptionHTML("""
+       "callbacks_order_explanation": OptionHTML("""
    For categories below, callbacks added to dropdowns happen before others, in order listed.
    """),
@@ -33,12 +33,12 @@ categories.register_category("training", "Training")

options_templates.update(options_section(('saving-images', "Saving images/grids", "saving"), {
    "samples_save": OptionInfo(True, "Always save all generated images"),
-   "samples_format": OptionInfo('png', 'File format for images'),
+   "samples_format": OptionInfo('png', 'File format for images', ui_components.DropdownEditable, {"choices": ("png", "jpg", "jpeg", "webp", "avif")}).info("manual input of <a href='https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html' target='_blank'>other formats</a> is possible, but compatibility is not guaranteed"),
    "samples_filename_pattern": OptionInfo("", "Images filename pattern", component_args=hide_dirs).link("wiki", "https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Images-Filename-Name-and-Subdirectory"),
    "save_images_add_number": OptionInfo(True, "Add number to filename when saving", component_args=hide_dirs),
    "save_images_replace_action": OptionInfo("Replace", "Saving the image to an existing file", gr.Radio, {"choices": ["Replace", "Add number suffix"], **hide_dirs}),
    "grid_save": OptionInfo(True, "Always save all generated image grids"),
-   "grid_format": OptionInfo('png', 'File format for grids'),
+   "grid_format": OptionInfo('png', 'File format for grids', ui_components.DropdownEditable, {"choices": ("png", "jpg", "jpeg", "webp", "avif")}).info("manual input of <a href='https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html' target='_blank'>other formats</a> is possible, but compatibility is not guaranteed"),
    "grid_extended_filename": OptionInfo(False, "Add extended info (seed, prompt) to filename when saving grid"),
    "grid_only_if_multiple": OptionInfo(True, "Do not save grids consisting of one picture"),
    "grid_prevent_empty_spots": OptionInfo(False, "Prevent empty spots in grid (when set to autodetect)"),

@@ -64,6 +64,7 @@ options_templates.update(options_section(('saving-images', "Saving images/grids"
    "use_original_name_batch": OptionInfo(True, "Use original name for output filename during batch process in extras tab"),
    "use_upscaler_name_as_suffix": OptionInfo(False, "Use upscaler name as filename suffix in the extras tab"),
    "save_selected_only": OptionInfo(True, "When using 'Save' button, only save a single selected image"),
+   "save_write_log_csv": OptionInfo(True, "Write log.csv when saving images using 'Save' button"),
    "save_init_img": OptionInfo(False, "Save init images when using img2img"),

    "temp_dir": OptionInfo("", "Directory for temporary images; leave empty for default"),

@@ -127,6 +128,7 @@ options_templates.update(options_section(('system', "System", "system"), {
    "disable_mmap_load_safetensors": OptionInfo(False, "Disable memmapping for loading .safetensors files.").info("fixes very slow loading speed in some cases"),
    "hide_ldm_prints": OptionInfo(True, "Prevent Stability-AI's ldm/sgm modules from printing noise to console."),
    "dump_stacks_on_signal": OptionInfo(False, "Print stack traces before exiting the program with ctrl+c."),
+   "concurrent_git_fetch_limit": OptionInfo(16, "Number of simultaneous extension update checks", gr.Slider, {"step": 1, "minimum": 1, "maximum": 100}).info("reduce extension update check time"),
}))

options_templates.update(options_section(('profiler', "Profiler", "system"), {
@@ -230,7 +232,7 @@ options_templates.update(options_section(('img2img', "img2img", "sd"), {

options_templates.update(options_section(('optimizations', "Optimizations", "sd"), {
    "cross_attention_optimization": OptionInfo("Automatic", "Cross attention optimization", gr.Dropdown, lambda: {"choices": shared_items.cross_attention_optimizations()}),
-   "s_min_uncond": OptionInfo(0.0, "Negative Guidance minimum sigma", gr.Slider, {"minimum": 0.0, "maximum": 15.0, "step": 0.01}, infotext='NGMS').link("PR", "https://github.com/AUTOMATIC1111/stablediffusion-webui/pull/9177").info("skip negative prompt for some steps when the image is almost ready; 0=disable, higher=faster"),
+   "s_min_uncond": OptionInfo(0.0, "Negative Guidance minimum sigma", gr.Slider, {"minimum": 0.0, "maximum": 15.0, "step": 0.01}, infotext='NGMS').link("PR", "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/9177").info("skip negative prompt for some steps when the image is almost ready; 0=disable, higher=faster"),
    "s_min_uncond_all": OptionInfo(False, "Negative Guidance minimum sigma all steps", infotext='NGMS all steps').info("By default, NGMS above skips every other step; this makes it skip all steps"),
    "token_merging_ratio": OptionInfo(0.0, "Token merging ratio", gr.Slider, {"minimum": 0.0, "maximum": 0.9, "step": 0.1}, infotext='Token merging ratio').link("PR", "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/9256").info("0=disable, higher=faster"),
    "token_merging_ratio_img2img": OptionInfo(0.0, "Token merging ratio for img2img", gr.Slider, {"minimum": 0.0, "maximum": 0.9, "step": 0.1}).info("only applies if non-zero and overrides above"),
@@ -290,6 +292,7 @@ options_templates.update(options_section(('extra_networks', "Extra Networks", "s
    "textual_inversion_print_at_load": OptionInfo(False, "Print a list of Textual Inversion embeddings when loading model"),
    "textual_inversion_add_hashes_to_infotext": OptionInfo(True, "Add Textual Inversion hashes to infotext"),
    "sd_hypernetwork": OptionInfo("None", "Add hypernetwork to prompt", gr.Dropdown, lambda: {"choices": ["None", *shared.hypernetworks]}, refresh=shared_items.reload_hypernetworks),
+   "textual_inversion_image_embedding_data_cache": OptionInfo(False, 'Cache the data of image embeddings').info('potentially reduces TI load time at the cost of some disk space'),
}))

options_templates.update(options_section(('ui_prompt_editing', "Prompt editing", "ui"), {
@@ -403,13 +406,15 @@ options_templates.update(options_section(('sampler-params', "Sampler parameters"
    'uni_pc_order': OptionInfo(3, "UniPC order", gr.Slider, {"minimum": 1, "maximum": 50, "step": 1}, infotext='UniPC order').info("must be < sampling steps"),
    'uni_pc_lower_order_final': OptionInfo(True, "UniPC lower order final", infotext='UniPC lower order final'),
    'sd_noise_schedule': OptionInfo("Default", "Noise schedule for sampling", gr.Radio, {"choices": ["Default", "Zero Terminal SNR"]}, infotext="Noise Schedule").info("for use with zero terminal SNR trained models"),
-   'skip_early_cond': OptionInfo(0.0, "Ignore negative prompt during early sampling", gr.Slider, {"minimum": 0.0, "maximum": 1.0, "step": 0.01}, infotext="Skip Early CFG").info("disables CFG on a proportion of steps at the beginning of generation; 0=skip none; 1=skip all; can both improve sample diversity/quality and speed up sampling"),
+   'skip_early_cond': OptionInfo(0.0, "Ignore negative prompt during early sampling", gr.Slider, {"minimum": 0.0, "maximum": 1.0, "step": 0.01}, infotext="Skip Early CFG").info("disables CFG on a proportion of steps at the beginning of generation; 0=skip none; 1=skip all; can both improve sample diversity/quality and speed up sampling; XYZ plot: Skip Early CFG"),
+   'beta_dist_alpha': OptionInfo(0.6, "Beta scheduler - alpha", gr.Slider, {"minimum": 0.01, "maximum": 5.0, "step": 0.01}, infotext='Beta scheduler alpha').info('Default = 0.6; the alpha parameter of the beta distribution used in Beta sampling'),
+   'beta_dist_beta': OptionInfo(0.6, "Beta scheduler - beta", gr.Slider, {"minimum": 0.01, "maximum": 5.0, "step": 0.01}, infotext='Beta scheduler beta').info('Default = 0.6; the beta parameter of the beta distribution used in Beta sampling'),
}))

options_templates.update(options_section(('postprocessing', "Postprocessing", "postprocessing"), {
-   'postprocessing_enable_in_main_ui': OptionInfo([], "Enable postprocessing operations in txt2img and img2img tabs", ui_components.DropdownMulti, lambda: {"choices": [x.name for x in shared_items.postprocessing_scripts()]}),
-   'postprocessing_disable_in_extras': OptionInfo([], "Disable postprocessing operations in extras tab", ui_components.DropdownMulti, lambda: {"choices": [x.name for x in shared_items.postprocessing_scripts()]}),
-   'postprocessing_operation_order': OptionInfo([], "Postprocessing operation order", ui_components.DropdownMulti, lambda: {"choices": [x.name for x in shared_items.postprocessing_scripts()]}),
+   'postprocessing_enable_in_main_ui': OptionInfo([], "Enable postprocessing operations in txt2img and img2img tabs", ui_components.DropdownMulti, lambda: {"choices": [x.name for x in shared_items.postprocessing_scripts(filter_out_extra_only=True)]}),
+   'postprocessing_disable_in_extras': OptionInfo([], "Disable postprocessing operations in extras tab", ui_components.DropdownMulti, lambda: {"choices": [x.name for x in shared_items.postprocessing_scripts(filter_out_main_ui_only=True)]}),
+   'postprocessing_operation_order': OptionInfo([], "Postprocessing operation order", ui_components.DropdownMulti, lambda: {"choices": [x.name for x in shared_items.postprocessing_scripts(filter_out_main_ui_only=True)]}),
    'upscaling_max_images_in_cache': OptionInfo(5, "Maximum number of images in upscaling cache", gr.Slider, {"minimum": 0, "maximum": 10, "step": 1}),
    'postprocessing_existing_caption_action': OptionInfo("Ignore", "Action for existing captions", gr.Radio, {"choices": ["Ignore", "Keep", "Prepend", "Append"]}).info("when generating captions using postprocessing; Ignore = use generated; Keep = use original; Prepend/Append = combine both"),
}))
@@ -162,7 +162,7 @@ class State:
            errors.record_exception()

    def assign_current_image(self, image):
-       if shared.opts.live_previews_image_format == 'jpeg' and image.mode == 'RGBA':
+       if shared.opts.live_previews_image_format == 'jpeg' and image.mode in ('RGBA', 'P'):
            image = image.convert('RGB')
        self.current_image = image
        self.id_live_preview += 1
@@ -1,15 +1,13 @@
import json
import os
import sys

import subprocess
import platform
import hashlib
-import pkg_resources
-import psutil
+import re
from pathlib import Path

-import launch
-from modules import paths_internal, timer, shared, extensions, errors
+from modules import paths_internal, timer, shared_cmd_options, errors, launch_utils

checksum_token = "DontStealMyGamePlz__WINNERS_DONT_USE_DRUGS__DONT_COPY_THAT_FLOPPY"
environment_whitelist = {
@@ -69,14 +67,46 @@ def check(x):
    return h.hexdigest() == m.group(1)


-def get_dict():
-    ram = psutil.virtual_memory()
+def get_cpu_info():
+    cpu_info = {"model": platform.processor()}
+    try:
+        import psutil
+        cpu_info["count logical"] = psutil.cpu_count(logical=True)
+        cpu_info["count physical"] = psutil.cpu_count(logical=False)
+    except Exception as e:
+        cpu_info["error"] = str(e)
+    return cpu_info
+
+
+def get_ram_info():
+    try:
+        import psutil
+        ram = psutil.virtual_memory()
+        return {x: pretty_bytes(getattr(ram, x, 0)) for x in ["total", "used", "free", "active", "inactive", "buffers", "cached", "shared"] if getattr(ram, x, 0) != 0}
+    except Exception as e:
+        return str(e)
+
+
+def get_packages():
+    try:
+        return subprocess.check_output([sys.executable, '-m', 'pip', 'freeze', '--all']).decode("utf8").splitlines()
+    except Exception as pip_error:
+        try:
+            import importlib.metadata
+            packages = importlib.metadata.distributions()
+            return sorted([f"{package.metadata['Name']}=={package.version}" for package in packages])
+        except Exception as e2:
+            return {'error pip': pip_error, 'error importlib': str(e2)}
+
+
+def get_dict():
+    config = get_config()
    res = {
        "Platform": platform.platform(),
        "Python": platform.python_version(),
-       "Version": launch.git_tag(),
-       "Commit": launch.commit_hash(),
+       "Version": launch_utils.git_tag(),
+       "Commit": launch_utils.commit_hash(),
+       "Git status": git_status(paths_internal.script_path),
        "Script path": paths_internal.script_path,
        "Data path": paths_internal.data_path,
        "Extensions dir": paths_internal.extensions_dir,
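The refactor above moves the psutil calls behind lazy imports so that a missing or broken psutil degrades the sysinfo report instead of crashing it. A small sketch of the same pattern (the name `collect_cpu_info` is hypothetical; the stdlib `os.cpu_count()` fallback is an addition not present in the original):

```python
import os
import platform


def collect_cpu_info():
    # Collect what we can; never raise.  psutil is imported lazily so the
    # report still works when the dependency is absent.
    info = {"model": platform.processor()}
    try:
        import psutil  # optional third-party dependency
        info["count logical"] = psutil.cpu_count(logical=True)
        info["count physical"] = psutil.cpu_count(logical=False)
    except Exception as e:
        info["count logical"] = os.cpu_count()  # stdlib fallback
        info["error"] = str(e)
    return info
```

The original records the error string and omits the counts; recording a partial result plus the error keeps diagnostic reports useful either way.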
@@ -84,20 +114,14 @@ def get_dict():
        "Commandline": get_argv(),
        "Torch env info": get_torch_sysinfo(),
        "Exceptions": errors.get_exceptions(),
-       "CPU": {
-           "model": platform.processor(),
-           "count logical": psutil.cpu_count(logical=True),
-           "count physical": psutil.cpu_count(logical=False),
-       },
-       "RAM": {
-           x: pretty_bytes(getattr(ram, x, 0)) for x in ["total", "used", "free", "active", "inactive", "buffers", "cached", "shared"] if getattr(ram, x, 0) != 0
-       },
-       "Extensions": get_extensions(enabled=True),
-       "Inactive extensions": get_extensions(enabled=False),
+       "CPU": get_cpu_info(),
+       "RAM": get_ram_info(),
+       "Extensions": get_extensions(enabled=True, fallback_disabled_extensions=config.get('disabled_extensions', [])),
+       "Inactive extensions": get_extensions(enabled=False, fallback_disabled_extensions=config.get('disabled_extensions', [])),
        "Environment": get_environment(),
-       "Config": get_config(),
+       "Config": config,
        "Startup": timer.startup_record,
-       "Packages": sorted([f"{pkg.key}=={pkg.version}" for pkg in pkg_resources.working_set]),
+       "Packages": get_packages(),
    }

    return res
@@ -111,11 +135,11 @@ def get_argv():
    res = []

    for v in sys.argv:
-       if shared.cmd_opts.gradio_auth and shared.cmd_opts.gradio_auth == v:
+       if shared_cmd_options.cmd_opts.gradio_auth and shared_cmd_options.cmd_opts.gradio_auth == v:
            res.append("<hidden>")
            continue

-       if shared.cmd_opts.api_auth and shared.cmd_opts.api_auth == v:
+       if shared_cmd_options.cmd_opts.api_auth and shared_cmd_options.cmd_opts.api_auth == v:
            res.append("<hidden>")
            continue

@@ -123,6 +147,7 @@ def get_argv():

    return res


+re_newline = re.compile(r"\r*\n")
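`get_argv` redacts any argv token that exactly equals a configured credential before the command line lands in a sysinfo report. The same idea as a standalone sketch (the name `redact_argv` is hypothetical):

```python
def redact_argv(argv, secrets):
    # Replace any command-line token that exactly equals a known secret value
    # with "<hidden>", mirroring get_argv's handling of --gradio-auth and
    # --api-auth values.
    return ["<hidden>" if v in secrets else v for v in argv]
```

Matching the value rather than the flag means the redaction also works when the credential is passed as `--gradio-auth=user:pass` style only if the whole token matches, which is why the original compares whole argv entries.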
@@ -136,25 +161,55 @@ def get_torch_sysinfo():
        return str(e)


+def run_git(path, *args):
+    try:
+        return subprocess.check_output([launch_utils.git, '-C', path, *args], shell=False, encoding='utf8').strip()
+    except Exception as e:
+        return str(e)
+
+
+def git_status(path):
+    if (Path(path) / '.git').is_dir():
+        return run_git(paths_internal.script_path, 'status')
+
+
+def get_info_from_repo_path(path: Path):
+    is_repo = (path / '.git').is_dir()
+    return {
+        'name': path.name,
+        'path': str(path),
+        'commit': run_git(path, 'rev-parse', 'HEAD') if is_repo else None,
+        'branch': run_git(path, 'branch', '--show-current') if is_repo else None,
+        'remote': run_git(path, 'remote', 'get-url', 'origin') if is_repo else None,
+    }
+
+
-def get_extensions(*, enabled):
+def get_extensions(*, enabled, fallback_disabled_extensions=None):
    try:
+       from modules import extensions
+       if extensions.extensions:
            def to_json(x: extensions.Extension):
                return {
                    "name": x.name,
                    "path": x.path,
                    "version": x.version,
                    "commit": x.commit_hash,
                    "branch": x.branch,
                    "remote": x.remote,
                }

            return [to_json(x) for x in extensions.extensions if not x.is_builtin and x.enabled == enabled]
+       else:
+           return [get_info_from_repo_path(d) for d in Path(paths_internal.extensions_dir).iterdir() if d.is_dir() and enabled != (str(d.name) in fallback_disabled_extensions)]
    except Exception as e:
        return str(e)


def get_config():
    try:
        from modules import shared
        return shared.opts.data
    except Exception as _:
        try:
            with open(shared_cmd_options.cmd_opts.ui_settings_file, 'r') as f:
                return json.load(f)
        except Exception as e:
            return str(e)
@@ -12,7 +12,7 @@ import safetensors.torch

import numpy as np
from PIL import Image, PngImagePlugin

-from modules import shared, devices, sd_hijack, sd_models, images, sd_samplers, sd_hijack_checkpoint, errors, hashes
+from modules import shared, devices, sd_hijack, sd_models, images, sd_samplers, sd_hijack_checkpoint, errors, hashes, cache
import modules.textual_inversion.dataset
from modules.textual_inversion.learn_schedule import LearnRateScheduler
@@ -116,6 +116,7 @@ class EmbeddingDatabase:
self.expected_shape = -1
self.embedding_dirs = {}
self.previously_displayed_embeddings = ()
+self.image_embedding_cache = cache.cache('image-embedding')

def add_embedding_dir(self, path):
self.embedding_dirs[path] = DirWithTextualInversionEmbeddings(path)
@@ -154,6 +155,31 @@ class EmbeddingDatabase:
vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
return vec.shape[1]

+def read_embedding_from_image(self, path, name):
+try:
+ondisk_mtime = os.path.getmtime(path)
+
+if (cache_embedding := self.image_embedding_cache.get(path)) and ondisk_mtime == cache_embedding.get('mtime', 0):
+# the cache is only used if the file's modification time matches the cached one
+return cache_embedding.get('data', None), cache_embedding.get('name', None)
+
+embed_image = Image.open(path)
+if hasattr(embed_image, 'text') and 'sd-ti-embedding' in embed_image.text:
+data = embedding_from_b64(embed_image.text['sd-ti-embedding'])
+name = data.get('name', name)
+elif data := extract_image_data_embed(embed_image):
+name = data.get('name', name)
+
+if data is None or shared.opts.textual_inversion_image_embedding_data_cache:
+# embedding data is only cached if the textual_inversion_image_embedding_data_cache option is enabled;
+# results for images that are not embeddings are always cached to avoid unnecessary future disk reads
+self.image_embedding_cache[path] = {'data': data, 'name': None if data is None else name, 'mtime': ondisk_mtime}
+
+return data, name
+except Exception:
+errors.report(f"Error loading embedding {path}", exc_info=True)
+return None, None

def load_from_file(self, path, filename):
name, ext = os.path.splitext(filename)
ext = ext.upper()
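The mtime-gated cache in `read_embedding_from_image` above can be sketched in isolation. This is a minimal, hypothetical version: a plain dict stands in for webui's `cache.cache('image-embedding')`, and `load_fn` stands in for the actual image parsing.

```python
import os
import tempfile

# Hypothetical stand-in for webui's persistent cache store.
image_embedding_cache = {}

def read_cached(path, load_fn):
    ondisk_mtime = os.path.getmtime(path)
    # Reuse the cached result only when the file's mtime is unchanged.
    if (entry := image_embedding_cache.get(path)) and ondisk_mtime == entry.get('mtime', 0):
        return entry['data']
    data = load_fn(path)
    image_embedding_cache[path] = {'data': data, 'mtime': ondisk_mtime}
    return data

with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write('hello')
    path = f.name

calls = []
def load(p):
    calls.append(p)
    with open(p) as fh:
        return fh.read()

assert read_cached(path, load) == 'hello'
assert read_cached(path, load) == 'hello'  # second read served from cache
assert len(calls) == 1                     # loader ran only once
```

The same pattern is why the real code stores `mtime` alongside `data`: an edited file invalidates its entry automatically.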
@@ -163,17 +189,10 @@ class EmbeddingDatabase:
if second_ext.upper() == '.PREVIEW':
return

-embed_image = Image.open(path)
-if hasattr(embed_image, 'text') and 'sd-ti-embedding' in embed_image.text:
-data = embedding_from_b64(embed_image.text['sd-ti-embedding'])
-name = data.get('name', name)
-else:
-data = extract_image_data_embed(embed_image)
-if data:
-name = data.get('name', name)
-else:
+# if data is None, this is not an embedding, just a preview image
+data, name = self.read_embedding_from_image(path, name)
+if data is None:
+return

elif ext in ['.BIN', '.PT']:
data = torch.load(path, map_location="cpu")
elif ext in ['.SAFETENSORS']:
@@ -191,7 +210,6 @@ class EmbeddingDatabase:
else:
print(f"Unable to load Textual inversion embedding due to data issue: '{name}'.")


def load_from_dir(self, embdir):
if not os.path.isdir(embdir.path):
return
+8 −5
@@ -10,7 +10,7 @@ import gradio as gr
import gradio.utils
import numpy as np
from PIL import Image, PngImagePlugin  # noqa: F401
-from modules.call_queue import wrap_gradio_gpu_call, wrap_queued_call, wrap_gradio_call
+from modules.call_queue import wrap_gradio_gpu_call, wrap_queued_call, wrap_gradio_call, wrap_gradio_call_no_job  # noqa: F401

from modules import gradio_extensons, sd_schedulers  # noqa: F401
from modules import sd_hijack, sd_models, script_callbacks, ui_extensions, deepbooru, extra_networks, ui_common, ui_postprocessing, progress, ui_loadsave, shared_items, ui_settings, timer, sysinfo, ui_checkpoint_merger, scripts, sd_samplers, processing, ui_extra_networks, ui_toprow, launch_utils
@@ -44,6 +44,9 @@ mimetypes.add_type('application/javascript', '.mjs')
mimetypes.add_type('image/webp', '.webp')
mimetypes.add_type('image/avif', '.avif')

+# override potentially incorrect mimetypes
+mimetypes.add_type('text/css', '.css')
+
if not cmd_opts.share and not cmd_opts.listen:
# fix gradio phoning home
gradio.utils.version_check = lambda: None
@@ -622,8 +625,8 @@ def create_ui():
with gr.Column(elem_id="img2img_column_size", scale=4):
selected_scale_tab = gr.Number(value=0, visible=False)

-with gr.Tabs():
-with gr.Tab(label="Resize to", elem_id="img2img_tab_resize_to") as tab_scale_to:
+with gr.Tabs(elem_id="img2img_tabs_resize"):
+with gr.Tab(label="Resize to", id="to", elem_id="img2img_tab_resize_to") as tab_scale_to:
with FormRow():
with gr.Column(elem_id="img2img_column_size", scale=4):
width = gr.Slider(minimum=64, maximum=2048, step=8, label="Width", value=512, elem_id="img2img_width")
@@ -632,7 +635,7 @@ def create_ui():
res_switch_btn = ToolButton(value=switch_values_symbol, elem_id="img2img_res_switch_btn", tooltip="Switch width/height")
detect_image_size_btn = ToolButton(value=detect_image_size_symbol, elem_id="img2img_detect_image_size_btn", tooltip="Auto detect size from img2img")

-with gr.Tab(label="Resize by", elem_id="img2img_tab_resize_by") as tab_scale_by:
+with gr.Tab(label="Resize by", id="by", elem_id="img2img_tab_resize_by") as tab_scale_by:
scale_by = gr.Slider(minimum=0.05, maximum=4.0, step=0.05, label="Scale", value=1.0, elem_id="img2img_scale")

with FormRow():
@@ -889,7 +892,7 @@ def create_ui():
))

image.change(
-fn=wrap_gradio_call(modules.extras.run_pnginfo),
+fn=wrap_gradio_call_no_job(modules.extras.run_pnginfo),
inputs=[image],
outputs=[html, generation_info, html2],
)
@@ -3,6 +3,7 @@ import dataclasses
import json
import html
import os
+from contextlib import nullcontext

import gradio as gr

@@ -103,10 +104,11 @@ def save_files(js_data, images, do_make_zip, index):

# NOTE: ensure csv integrity when fields are added by
# updating headers and padding with delimiters where needed
-if os.path.exists(logfile_path):
+if shared.opts.save_write_log_csv and os.path.exists(logfile_path):
update_logfile(logfile_path, fields)

-with open(logfile_path, "a", encoding="utf8", newline='') as file:
+with (open(logfile_path, "a", encoding="utf8", newline='') if shared.opts.save_write_log_csv else nullcontext()) as file:
+if file:
at_start = file.tell() == 0
writer = csv.writer(file)
if at_start:
@@ -130,6 +132,7 @@ def save_files(js_data, images, do_make_zip, index):
filenames.append(os.path.basename(txt_fullfn))
fullfns.append(txt_fullfn)

+if file:
writer.writerow([parsed_infotexts[0]['Prompt'], parsed_infotexts[0]['Seed'], data["width"], data["height"], data["sampler_name"], data["cfg_scale"], data["steps"], filenames[0], parsed_infotexts[0]['Negative prompt'], data["sd_model_name"], data["sd_model_hash"]])

# Make Zip
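The `nullcontext()` trick in the hunk above lets one `with` block cover both the "logging on" and "logging off" cases: when logging is disabled, the context yields `None`, and the `if file:` guard skips the writes. A minimal sketch of the same pattern, with a hypothetical `save_row` helper:

```python
import csv
import os
import tempfile
from contextlib import nullcontext

def save_row(row, write_log, logfile_path):
    # nullcontext() yields None when logging is off, so `if file:` skips
    # the csv writes without needing a second code path.
    with (open(logfile_path, "a", newline='') if write_log else nullcontext()) as file:
        if file:
            csv.writer(file).writerow(row)

path = os.path.join(tempfile.mkdtemp(), "log.csv")
save_row(["a", "b"], False, path)  # logging off: file is never created
assert not os.path.exists(path)
save_row(["a", "b"], True, path)   # logging on: row appended
with open(path) as f:
    assert f.read().strip() == "a,b"
```

This keeps the rest of the function (filename collection, zip creation) on a single path regardless of the option.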
@@ -228,7 +231,7 @@ def create_output_panel(tabname, outdir, toprow=None):
)

save.click(
-fn=call_queue.wrap_gradio_call(save_files),
+fn=call_queue.wrap_gradio_call_no_job(save_files),
_js="(x, y, z, w) => [x, y, false, selected_gallery_index()]",
inputs=[
res.generation_info,
@@ -244,7 +247,7 @@ def create_output_panel(tabname, outdir, toprow=None):
)

save_zip.click(
-fn=call_queue.wrap_gradio_call(save_files),
+fn=call_queue.wrap_gradio_call_no_job(save_files),
_js="(x, y, z, w) => [x, y, true, selected_gallery_index()]",
inputs=[
res.generation_info,
@@ -91,6 +91,7 @@ class InputAccordion(gr.Checkbox):
Actually just a hidden checkbox, but creates an accordion that follows and is followed by the state of the checkbox.
"""

+accordion_id_set = set()
global_index = 0

def __init__(self, value, **kwargs):
@@ -99,6 +100,18 @@ class InputAccordion(gr.Checkbox):
self.accordion_id = f"input-accordion-{InputAccordion.global_index}"
InputAccordion.global_index += 1

+if not InputAccordion.accordion_id_set:
+from modules import script_callbacks
+script_callbacks.on_script_unloaded(InputAccordion.reset)
+
+if self.accordion_id in InputAccordion.accordion_id_set:
+count = 1
+while (unique_id := f'{self.accordion_id}-{count}') in InputAccordion.accordion_id_set:
+count += 1
+self.accordion_id = unique_id
+
+InputAccordion.accordion_id_set.add(self.accordion_id)
+
kwargs_checkbox = {
**kwargs,
"elem_id": f"{self.accordion_id}-checkbox",
@@ -143,3 +156,7 @@ class InputAccordion(gr.Checkbox):
def get_block_name(self):
return "checkbox"

+@classmethod
+def reset(cls):
+cls.global_index = 0
+cls.accordion_id_set.clear()
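The collision handling added to `InputAccordion.__init__` above is a small id-deduplication loop using the walrus operator. A standalone sketch of the same logic, with a hypothetical `make_unique_id` helper:

```python
def make_unique_id(base, taken):
    # Mirrors the loop above: if `base` is already taken, append -1, -2, ...
    # until an unused suffix is found, then register the result.
    if base in taken:
        count = 1
        while (unique := f"{base}-{count}") in taken:
            count += 1
        base = unique
    taken.add(base)
    return base

taken = set()
assert make_unique_id("input-accordion-0", taken) == "input-accordion-0"
assert make_unique_id("input-accordion-0", taken) == "input-accordion-0-1"
assert make_unique_id("input-accordion-0", taken) == "input-accordion-0-2"
```

The `reset` classmethod registered via `on_script_unloaded` clears the set so ids start fresh when the UI is rebuilt.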
+17 −10
@@ -1,5 +1,6 @@
import json
import os
+from concurrent.futures import ThreadPoolExecutor
import threading
import time
from datetime import datetime, timezone
@@ -106,19 +107,25 @@ def check_updates(id_task, disable_list):
exts = [ext for ext in extensions.extensions if ext.remote is not None and ext.name not in disabled]
shared.state.job_count = len(exts)

-for ext in exts:
-shared.state.textinfo = ext.name
+lock = threading.Lock()
+
+def _check_update(ext):
try:
ext.check_updates()
except FileNotFoundError as e:
if 'FETCH_HEAD' not in str(e):
raise
except Exception:
+with lock:
errors.report(f"Error checking updates for {ext.name}", exc_info=True)

+with lock:
+shared.state.textinfo = ext.name
shared.state.nextjob()

+with ThreadPoolExecutor(max_workers=max(1, int(shared.opts.concurrent_git_fetch_limit))) as executor:
+for ext in exts:
+executor.submit(_check_update, ext)
+
return extension_table(), ""
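The hunk above converts a sequential per-extension update check into a bounded thread pool, with a lock guarding shared progress state. A self-contained sketch of that structure, assuming dummy work in place of `ext.check_updates()`:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

results = []
lock = threading.Lock()

def check(name):
    # Simulated per-extension work; the lock serializes updates to shared
    # state, as the real code does for shared.state and error reporting.
    with lock:
        results.append(name)

names = [f"ext-{i}" for i in range(10)]
with ThreadPoolExecutor(max_workers=4) as executor:
    for name in names:
        executor.submit(check, name)

# Exiting the `with` block waits for all submitted tasks to finish.
assert sorted(results) == sorted(names)
```

`max_workers=max(1, int(...))` in the real code clamps the user-configurable `concurrent_git_fetch_limit` so a zero or negative setting cannot crash the executor.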
@@ -624,37 +631,37 @@ def create_ui():
)

install_extension_button.click(
-fn=modules.ui.wrap_gradio_call(install_extension_from_index, extra_outputs=[gr.update(), gr.update()]),
+fn=modules.ui.wrap_gradio_call_no_job(install_extension_from_index, extra_outputs=[gr.update(), gr.update()]),
inputs=[extension_to_install, selected_tags, showing_type, filtering_type, sort_column, search_extensions_text],
outputs=[available_extensions_table, extensions_table, install_result],
)

search_extensions_text.change(
-fn=modules.ui.wrap_gradio_call(search_extensions, extra_outputs=[gr.update()]),
+fn=modules.ui.wrap_gradio_call_no_job(search_extensions, extra_outputs=[gr.update()]),
inputs=[search_extensions_text, selected_tags, showing_type, filtering_type, sort_column],
outputs=[available_extensions_table, install_result],
)

selected_tags.change(
-fn=modules.ui.wrap_gradio_call(refresh_available_extensions_for_tags, extra_outputs=[gr.update()]),
+fn=modules.ui.wrap_gradio_call_no_job(refresh_available_extensions_for_tags, extra_outputs=[gr.update()]),
inputs=[selected_tags, showing_type, filtering_type, sort_column, search_extensions_text],
outputs=[available_extensions_table, install_result]
)

showing_type.change(
-fn=modules.ui.wrap_gradio_call(refresh_available_extensions_for_tags, extra_outputs=[gr.update()]),
+fn=modules.ui.wrap_gradio_call_no_job(refresh_available_extensions_for_tags, extra_outputs=[gr.update()]),
inputs=[selected_tags, showing_type, filtering_type, sort_column, search_extensions_text],
outputs=[available_extensions_table, install_result]
)

filtering_type.change(
-fn=modules.ui.wrap_gradio_call(refresh_available_extensions_for_tags, extra_outputs=[gr.update()]),
+fn=modules.ui.wrap_gradio_call_no_job(refresh_available_extensions_for_tags, extra_outputs=[gr.update()]),
inputs=[selected_tags, showing_type, filtering_type, sort_column, search_extensions_text],
outputs=[available_extensions_table, install_result]
)

sort_column.change(
-fn=modules.ui.wrap_gradio_call(refresh_available_extensions_for_tags, extra_outputs=[gr.update()]),
+fn=modules.ui.wrap_gradio_call_no_job(refresh_available_extensions_for_tags, extra_outputs=[gr.update()]),
inputs=[selected_tags, showing_type, filtering_type, sort_column, search_extensions_text],
outputs=[available_extensions_table, install_result]
)
@@ -667,7 +674,7 @@ def create_ui():
install_result = gr.HTML(elem_id="extension_install_result")

install_button.click(
-fn=modules.ui.wrap_gradio_call(lambda *args: [gr.update(), *install_extension_from_url(*args)], extra_outputs=[gr.update(), gr.update()]),
+fn=modules.ui.wrap_gradio_call_no_job(lambda *args: [gr.update(), *install_extension_from_url(*args)], extra_outputs=[gr.update(), gr.update()]),
inputs=[install_dirname, install_url, install_branch],
outputs=[install_url, extensions_table, install_result],
)
@@ -177,10 +177,8 @@ def add_pages_to_demo(app):
app.add_api_route("/sd_extra_networks/get-single-card", get_single_card, methods=["GET"])


-def quote_js(s):
-s = s.replace('\\', '\\\\')
-s = s.replace('"', '\\"')
-return f'"{s}"'
+def quote_js(s: str):
+return json.dumps(s, ensure_ascii=False)


class ExtraNetworksPage:
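The `quote_js` rewrite above replaces hand-rolled escaping with `json.dumps`, which covers cases the two `replace` calls missed, such as embedded newlines and other control characters (JSON string literals are valid JavaScript string literals for these inputs). A quick demonstration:

```python
import json

def quote_js(s: str) -> str:
    # json.dumps escapes backslashes, quotes, newlines, and control
    # characters; the old two-replace version only handled \ and ".
    return json.dumps(s, ensure_ascii=False)

assert quote_js('plain') == '"plain"'
assert quote_js('a"b') == '"a\\"b"'
assert quote_js('line1\nline2') == '"line1\\nline2"'  # newline now escaped
```

`ensure_ascii=False` keeps non-ASCII characters (e.g. in model names) readable instead of turning them into `\uXXXX` escapes.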
@@ -41,6 +41,11 @@ def css_html():
if os.path.exists(user_css):
head += stylesheet(user_css)

+from modules.shared_gradio_themes import resolve_var
+light = resolve_var('background_fill_primary')
+dark = resolve_var('background_fill_primary_dark')
+head += f'<style>html {{ background-color: {light}; }} @media (prefers-color-scheme: dark) {{ html {{background-color: {dark}; }} }}</style>'
+
return head
@@ -176,7 +176,7 @@ class UiLoadsave:
if new_value == old_value:
continue

-if old_value is None and new_value == '' or new_value == []:
+if old_value is None and (new_value == '' or new_value == []):
continue

yield path, old_value, new_value
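The one-character-looking change above is an operator-precedence fix: in Python, `and` binds tighter than `or`, so the old condition parsed as `(old_value is None and new_value == '') or new_value == []` and silently skipped every empty-list change, even when `old_value` was set. A small sketch of the two forms:

```python
def skip_unparenthesized(old_value, new_value):
    # Old form: parses as (old is None and new == '') or (new == [])
    return old_value is None and new_value == '' or new_value == []

def skip_fixed(old_value, new_value):
    # Fixed form: only skip when old is None AND new is an "empty" value
    return old_value is None and (new_value == '' or new_value == [])

# A real change from ['x'] to []: the old form wrongly skips it.
assert skip_unparenthesized(['x'], []) is True
assert skip_fixed(['x'], []) is False
# Both forms agree when old_value is None.
assert skip_unparenthesized(None, '') is True
assert skip_fixed(None, '') is True
```

With the parentheses, the generator correctly yields list-valued settings that were cleared.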
@@ -1,7 +1,7 @@
import gradio as gr

from modules import ui_common, shared, script_callbacks, scripts, sd_models, sysinfo, timer, shared_items
-from modules.call_queue import wrap_gradio_call
+from modules.call_queue import wrap_gradio_call_no_job
from modules.options import options_section
from modules.shared import opts
from modules.ui_components import FormRow
@@ -295,7 +295,7 @@ class UiSettings:

def add_functionality(self, demo):
self.submit.click(
-fn=wrap_gradio_call(lambda *args: self.run_settings(*args), extra_outputs=[gr.update()]),
+fn=wrap_gradio_call_no_job(lambda *args: self.run_settings(*args), extra_outputs=[gr.update()]),
inputs=self.components,
outputs=[self.text_settings, self.result],
)
+4 −3
@@ -56,8 +56,8 @@ class Upscaler:
dest_w = int((img.width * scale) // 8 * 8)
dest_h = int((img.height * scale) // 8 * 8)

-for _ in range(3):
-if img.width >= dest_w and img.height >= dest_h and scale != 1:
+for i in range(3):
+if img.width >= dest_w and img.height >= dest_h and (i > 0 or scale != 1):
break

if shared.state.interrupted:
@@ -93,13 +93,14 @@ class UpscalerData:
scaler: Upscaler = None
model: None

-def __init__(self, name: str, path: str, upscaler: Upscaler = None, scale: int = 4, model=None):
+def __init__(self, name: str, path: str, upscaler: Upscaler = None, scale: int = 4, model=None, sha256: str = None):
self.name = name
self.data_path = path
self.local_data_path = path
self.scaler = upscaler
self.scale = scale
self.model = model
+self.sha256 = sha256

def __repr__(self):
return f"<UpscalerData name={self.name} path={self.data_path} scale={self.scale}>"
@@ -41,7 +41,7 @@ def upscale_pil_patch(model, img: Image.Image) -> Image.Image:
"""
param = torch_utils.get_param(model)

-with torch.no_grad():
+with torch.inference_mode():
tensor = pil_image_to_torch_bgr(img).unsqueeze(0)  # add batch dimension
tensor = tensor.to(device=param.device, dtype=param.dtype)
with devices.without_autocast():
@@ -1,3 +1,4 @@
+from __future__ import annotations
import os
import re

@@ -211,3 +212,80 @@ Requested path was: {path}
subprocess.Popen(["explorer.exe", subprocess.check_output(["wslpath", "-w", path])])
else:
subprocess.Popen(["xdg-open", path])
+
+
+def load_file_from_url(
+url: str,
+*,
+model_dir: str,
+progress: bool = True,
+file_name: str | None = None,
+hash_prefix: str | None = None,
+re_download: bool = False,
+) -> str:
+"""Download a file from `url` into `model_dir`, using the file present if possible.
+Returns the path to the downloaded file.
+
+file_name: if specified, it is used as the filename; otherwise the filename is extracted from the url.
+The file is downloaded to {file_name}.tmp, then moved to its final location once the download completes.
+hash_prefix: sha256 hex string; if provided, the hash of the downloaded file is checked against this prefix.
+If the hash does not match, the temporary file is deleted and a ValueError is raised.
+re_download: forcibly re-download the file even if it already exists.
+"""
+from urllib.parse import urlparse
+import requests
+try:
+from tqdm import tqdm
+except ImportError:
+class tqdm:
+def __init__(self, *args, **kwargs):
+pass
+
+def update(self, n=1, *args, **kwargs):
+pass
+
+def __enter__(self):
+return self
+
+def __exit__(self, exc_type, exc_val, exc_tb):
+pass
+
+if not file_name:
+parts = urlparse(url)
+file_name = os.path.basename(parts.path)
+
+cached_file = os.path.abspath(os.path.join(model_dir, file_name))
+
+if re_download or not os.path.exists(cached_file):
+os.makedirs(model_dir, exist_ok=True)
+temp_file = os.path.join(model_dir, f"{file_name}.tmp")
+print(f'\nDownloading: "{url}" to {cached_file}')
+response = requests.get(url, stream=True)
+response.raise_for_status()
+total_size = int(response.headers.get('content-length', 0))
+with tqdm(total=total_size, unit='B', unit_scale=True, desc=file_name, disable=not progress) as progress_bar:
+with open(temp_file, 'wb') as file:
+for chunk in response.iter_content(chunk_size=1024):
+if chunk:
+file.write(chunk)
+progress_bar.update(len(chunk))
+
+if hash_prefix and not compare_sha256(temp_file, hash_prefix):
+print(f"Hash mismatch for {temp_file}. Deleting the temporary file.")
+os.remove(temp_file)
+raise ValueError(f"File hash does not match the expected hash prefix {hash_prefix}!")
+
+os.rename(temp_file, cached_file)
+return cached_file
+
+
+def compare_sha256(file_path: str, hash_prefix: str) -> bool:
+"""Check if the SHA256 hash of the file matches the given prefix."""
+import hashlib
+hash_sha256 = hashlib.sha256()
+blksize = 1024 * 1024
+
+with open(file_path, "rb") as f:
+for chunk in iter(lambda: f.read(blksize), b""):
+hash_sha256.update(chunk)
+return hash_sha256.hexdigest().startswith(hash_prefix.strip().lower())
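The `compare_sha256` helper added above verifies only a hash *prefix*, so callers can pin a model with a short hex string rather than the full 64-character digest. A self-contained sketch of the same check, exercised against a temporary file:

```python
import hashlib
import tempfile

def compare_sha256(file_path: str, hash_prefix: str) -> bool:
    # Same scheme as the helper above: hash the file in 1 MiB chunks and
    # compare only the caller-supplied hex prefix (case-insensitive).
    hash_sha256 = hashlib.sha256()
    with open(file_path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            hash_sha256.update(chunk)
    return hash_sha256.hexdigest().startswith(hash_prefix.strip().lower())

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name

full = hashlib.sha256(b"hello").hexdigest()
assert compare_sha256(path, full[:12])
assert compare_sha256(path, full[:12].upper())  # prefix is lowercased first
assert not compare_sha256(path, "deadbeef")
```

The trade-off of prefix matching is a small collision risk for very short prefixes; in `load_file_from_url` a mismatch deletes the `.tmp` file before raising, so a corrupted download is never renamed into place.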
@@ -0,0 +1,50 @@
+import sys
+import copy
+import shlex
+import subprocess
+from functools import wraps
+
+BAD_FLAGS = ("--prefer-binary", '-I', '--ignore-installed')
+
+
+def patch():
+if hasattr(subprocess, "__original_run"):
+return
+
+print("using uv")
+try:
+subprocess.run(['uv', '-V'])
+except FileNotFoundError:
+subprocess.run([sys.executable, '-m', 'pip', 'install', 'uv'])
+
+subprocess.__original_run = subprocess.run
+
+@wraps(subprocess.__original_run)
+def patched_run(*args, **kwargs):
+_kwargs = copy.copy(kwargs)
+if args:
+command, *_args = args
+else:
+command, _args = _kwargs.pop("args", ""), ()
+
+if isinstance(command, str):
+command = shlex.split(command)
+else:
+command = [arg.strip() for arg in command]
+
+if not isinstance(command, list) or "pip" not in command:
+return subprocess.__original_run(*args, **kwargs)
+
+cmd = command[command.index("pip") + 1:]
+
+cmd = [arg for arg in cmd if arg not in BAD_FLAGS]
+
+modified_command = ["uv", "pip", *cmd]
+
+cmd_str = shlex.join([*modified_command, *_args])
+result = subprocess.__original_run(cmd_str, **_kwargs)
+if result.returncode != 0:
+return subprocess.__original_run(*args, **kwargs)
+return result
+
+subprocess.run = patched_run
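The core of `patched_run` above is a pure command-line rewrite: normalize the command to a token list with `shlex`, take everything after `pip`, drop flags uv does not accept, and prepend `uv pip`. That transformation can be isolated and tested on its own; `rewrite_pip_command` here is a hypothetical helper, not part of the patch file:

```python
import shlex

BAD_FLAGS = ("--prefer-binary", "-I", "--ignore-installed")

def rewrite_pip_command(command: str) -> str:
    # Normalize to a token list, locate "pip", strip unsupported flags,
    # and re-join as a "uv pip ..." invocation.
    parts = shlex.split(command)
    if "pip" not in parts:
        return command  # not a pip invocation: leave untouched
    cmd = parts[parts.index("pip") + 1:]
    cmd = [arg for arg in cmd if arg not in BAD_FLAGS]
    return shlex.join(["uv", "pip", *cmd])

assert rewrite_pip_command("python -m pip install -I requests==2.32.0") == "uv pip install requests==2.32.0"
assert rewrite_pip_command("echo hello") == "echo hello"
```

The real patch also keeps the original `subprocess.run` as a fallback: if the rewritten uv command exits non-zero, it re-runs the untouched pip command, so a uv incompatibility degrades gracefully rather than breaking installs.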
@@ -22,7 +22,7 @@ protobuf==3.20.0
psutil==5.9.5
pytorch_lightning==1.9.4
resize-right==0.0.2
-safetensors==0.4.2
+safetensors==0.4.5
scikit-image==0.21.0
spandrel==0.3.4
spandrel-extra-arches==0.1.1
@@ -182,7 +182,7 @@ document.addEventListener('keydown', function(e) {
const lightboxModal = document.querySelector('#lightboxModal');
if (!globalPopup || globalPopup.style.display === 'none') {
if (document.activeElement === lightboxModal) return;
-if (interruptButton.style.display === 'block') {
+if (interruptButton?.style.display === 'block') {
interruptButton.click();
e.preventDefault();
}
@@ -12,8 +12,8 @@ class ScriptPostprocessingCodeFormer(scripts_postprocessing.ScriptPostprocessing):
def ui(self):
with ui_components.InputAccordion(False, label="CodeFormer") as enable:
with gr.Row():
-codeformer_visibility = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label="Visibility", value=1.0, elem_id="extras_codeformer_visibility")
-codeformer_weight = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label="Weight (0 = maximum effect, 1 = minimum effect)", value=0, elem_id="extras_codeformer_weight")
+codeformer_visibility = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label="Visibility", value=1.0, elem_id=self.elem_id_suffix("extras_codeformer_visibility"))
+codeformer_weight = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label="Weight (0 = maximum effect, 1 = minimum effect)", value=0, elem_id=self.elem_id_suffix("extras_codeformer_weight"))

return {
"enable": enable,
@@ -29,6 +29,10 @@ class ScriptPostprocessingCodeFormer(scripts_postprocessing.ScriptPostprocessing):
res = Image.fromarray(restored_img)

if codeformer_visibility < 1.0:
+if pp.image.size != res.size:
+res = res.resize(pp.image.size)
+if pp.image.mode != res.mode:
+res = res.convert(pp.image.mode)
res = Image.blend(pp.image, res, codeformer_visibility)

pp.image = res
@@ -11,7 +11,7 @@ class ScriptPostprocessingGfpGan(scripts_postprocessing.ScriptPostprocessing):

def ui(self):
with ui_components.InputAccordion(False, label="GFPGAN") as enable:
-gfpgan_visibility = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label="Visibility", value=1.0, elem_id="extras_gfpgan_visibility")
+gfpgan_visibility = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label="Visibility", value=1.0, elem_id=self.elem_id_suffix("extras_gfpgan_visibility"))

return {
"enable": enable,
@@ -26,6 +26,10 @@ class ScriptPostprocessingGfpGan(scripts_postprocessing.ScriptPostprocessing):
res = Image.fromarray(restored_img)

if gfpgan_visibility < 1.0:
+if pp.image.size != res.size:
+res = res.resize(pp.image.size)
+if pp.image.mode != res.mode:
+res = res.convert(pp.image.mode)
res = Image.blend(pp.image, res, gfpgan_visibility)

pp.image = res
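The guards added before `Image.blend` in both hunks above exist because `Image.blend` requires both images to have the same size and mode; a restorer that changes either would otherwise raise. A standalone sketch of the pattern, with a hypothetical `blend_restored` helper:

```python
from PIL import Image

def blend_restored(original: Image.Image, restored: Image.Image, visibility: float) -> Image.Image:
    # Image.blend raises unless both images share size and mode, hence the
    # resize/convert guards before blending.
    if visibility >= 1.0:
        return restored
    if original.size != restored.size:
        restored = restored.resize(original.size)
    if original.mode != restored.mode:
        restored = restored.convert(original.mode)
    return Image.blend(original, restored, visibility)

orig = Image.new("RGB", (64, 64), (0, 0, 0))
rest = Image.new("L", (32, 32), 255)  # different size AND mode
out = blend_restored(orig, rest, 0.5)
assert out.size == orig.size and out.mode == orig.mode
# 50/50 blend of black and white lands at the midpoint
assert all(c in (127, 128) for c in out.getpixel((0, 0)))
```

Visibility 1.0 skips the blend entirely, matching the `< 1.0` check in the diffs.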
@@ -30,31 +30,31 @@ class ScriptPostprocessingUpscale(scripts_postprocessing.ScriptPostprocessing):
def ui(self):
selected_tab = gr.Number(value=0, visible=False)

-with InputAccordion(True, label="Upscale", elem_id="extras_upscale") as upscale_enabled:
+with InputAccordion(True, label="Upscale", elem_id=self.elem_id_suffix("extras_upscale")) as upscale_enabled:
with FormRow():
-extras_upscaler_1 = gr.Dropdown(label='Upscaler 1', elem_id="extras_upscaler_1", choices=[x.name for x in shared.sd_upscalers], value=shared.sd_upscalers[0].name)
+extras_upscaler_1 = gr.Dropdown(label='Upscaler 1', elem_id=self.elem_id_suffix("extras_upscaler_1"), choices=[x.name for x in shared.sd_upscalers], value=shared.sd_upscalers[0].name)

with FormRow():
-extras_upscaler_2 = gr.Dropdown(label='Upscaler 2', elem_id="extras_upscaler_2", choices=[x.name for x in shared.sd_upscalers], value=shared.sd_upscalers[0].name)
-extras_upscaler_2_visibility = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label="Upscaler 2 visibility", value=0.0, elem_id="extras_upscaler_2_visibility")
+extras_upscaler_2 = gr.Dropdown(label='Upscaler 2', elem_id=self.elem_id_suffix("extras_upscaler_2"), choices=[x.name for x in shared.sd_upscalers], value=shared.sd_upscalers[0].name)
+extras_upscaler_2_visibility = gr.Slider(minimum=0.0, maximum=1.0, step=0.001, label="Upscaler 2 visibility", value=0.0, elem_id=self.elem_id_suffix("extras_upscaler_2_visibility"))

with FormRow():
-with gr.Tabs(elem_id="extras_resize_mode"):
-with gr.TabItem('Scale by', elem_id="extras_scale_by_tab") as tab_scale_by:
+with gr.Tabs(elem_id=self.elem_id_suffix("extras_resize_mode")):
+with gr.TabItem('Scale by', elem_id=self.elem_id_suffix("extras_scale_by_tab")) as tab_scale_by:
with gr.Row():
with gr.Column(scale=4):
-upscaling_resize = gr.Slider(minimum=1.0, maximum=8.0, step=0.05, label="Resize", value=4, elem_id="extras_upscaling_resize")
+upscaling_resize = gr.Slider(minimum=1.0, maximum=8.0, step=0.05, label="Resize", value=4, elem_id=self.elem_id_suffix("extras_upscaling_resize"))
with gr.Column(scale=1, min_width=160):
-max_side_length = gr.Number(label="Max side length", value=0, elem_id="extras_upscale_max_side_length", tooltip="If any of two sides of the image ends up larger than specified, will downscale it to fit. 0 = no limit.", min_width=160, step=8, minimum=0)
+max_side_length = gr.Number(label="Max side length", value=0, elem_id=self.elem_id_suffix("extras_upscale_max_side_length"), tooltip="If any of two sides of the image ends up larger than specified, will downscale it to fit. 0 = no limit.", min_width=160, step=8, minimum=0)

-with gr.TabItem('Scale to', elem_id="extras_scale_to_tab") as tab_scale_to:
+with gr.TabItem('Scale to', elem_id=self.elem_id_suffix("extras_scale_to_tab")) as tab_scale_to:
with FormRow():
-with gr.Column(elem_id="upscaling_column_size", scale=4):
-upscaling_resize_w = gr.Slider(minimum=64, maximum=8192, step=8, label="Width", value=512, elem_id="extras_upscaling_resize_w")
-upscaling_resize_h = gr.Slider(minimum=64, maximum=8192, step=8, label="Height", value=512, elem_id="extras_upscaling_resize_h")
-with gr.Column(elem_id="upscaling_dimensions_row", scale=1, elem_classes="dimensions-tools"):
-upscaling_res_switch_btn = ToolButton(value=switch_values_symbol, elem_id="upscaling_res_switch_btn", tooltip="Switch width/height")
-upscaling_crop = gr.Checkbox(label='Crop to fit', value=True, elem_id="extras_upscaling_crop")
+with gr.Column(elem_id=self.elem_id_suffix("upscaling_column_size"), scale=4):
+upscaling_resize_w = gr.Slider(minimum=64, maximum=8192, step=8, label="Width", value=512, elem_id=self.elem_id_suffix("extras_upscaling_resize_w"))
+upscaling_resize_h = gr.Slider(minimum=64, maximum=8192, step=8, label="Height", value=512, elem_id=self.elem_id_suffix("extras_upscaling_resize_h"))
+with gr.Column(elem_id=self.elem_id_suffix("upscaling_dimensions_row"), scale=1, elem_classes="dimensions-tools"):
+upscaling_res_switch_btn = ToolButton(value=switch_values_symbol, elem_id=self.elem_id_suffix("upscaling_res_switch_btn"), tooltip="Switch width/height")
+upscaling_crop = gr.Checkbox(label='Crop to fit', value=True, elem_id=self.elem_id_suffix("extras_upscaling_crop"))

def on_selected_upscale_method(upscale_method):
if not shared.opts.set_scale_by_when_changing_upscaler:
@@ -169,6 +169,7 @@ class ScriptPostprocessingUpscale(scripts_postprocessing.ScriptPostprocessing):
class ScriptPostprocessingUpscaleSimple(ScriptPostprocessingUpscale):
name = "Simple Upscale"
order = 900
+main_ui_only = True

def ui(self):
with FormRow():
+21
-14
@@ -20,7 +20,7 @@ import modules.sd_models
|
||||
import modules.sd_vae
|
||||
import re
|
||||
|
||||
from modules.ui_components import ToolButton
|
||||
from modules.ui_components import ToolButton, InputAccordion
|
||||
|
||||
fill_values_symbol = "\U0001f4d2" # 📒
|
||||
|
||||
@@ -118,10 +118,9 @@ def apply_size(p, x: str, xs) -> None:
|
||||
|
||||
|
||||
def find_vae(name: str):
|
||||
match name := name.lower().strip():
|
||||
case 'auto', 'automatic':
|
||||
if (name := name.strip().lower()) in ('auto', 'automatic'):
|
||||
return 'Automatic'
|
||||
case 'none':
|
||||
elif name == 'none':
|
||||
return 'None'
|
||||
return next((k for k in modules.sd_vae.vae_dict if k.lower() == name), print(f'No VAE found for {name}; using Automatic') or 'Automatic')
|
||||
|
||||
@@ -260,6 +259,9 @@ axis_options = [
     AxisOption("Schedule min sigma", float, apply_override("sigma_min")),
     AxisOption("Schedule max sigma", float, apply_override("sigma_max")),
     AxisOption("Schedule rho", float, apply_override("rho")),
+    AxisOption("Skip Early CFG", float, apply_override('skip_early_cond')),
+    AxisOption("Beta schedule alpha", float, apply_override("beta_dist_alpha")),
+    AxisOption("Beta schedule beta", float, apply_override("beta_dist_beta")),
     AxisOption("Eta", float, apply_field("eta")),
     AxisOption("Clip skip", int, apply_override('CLIP_stop_at_last_layers')),
     AxisOption("Denoising", float, apply_field("denoising_strength")),
@@ -283,7 +285,7 @@ axis_options = [
 ]


-def draw_xyz_grid(p, xs, ys, zs, x_labels, y_labels, z_labels, cell, draw_legend, include_lone_images, include_sub_grids, first_axes_processed, second_axes_processed, margin_size):
+def draw_xyz_grid(p, xs, ys, zs, x_labels, y_labels, z_labels, cell, draw_legend, include_lone_images, include_sub_grids, first_axes_processed, second_axes_processed, margin_size, draw_grid):
     hor_texts = [[images.GridAnnotation(x)] for x in x_labels]
     ver_texts = [[images.GridAnnotation(y)] for y in y_labels]
     title_texts = [[images.GridAnnotation(z)] for z in z_labels]
@@ -368,6 +370,7 @@ def draw_xyz_grid(p, xs, ys, zs, x_labels, y_labels, z_labels, cell, draw_legend
         print("Unexpected error: draw_xyz_grid failed to return even a single processed image")
         return Processed(p, [])

+    if draw_grid:
         z_count = len(zs)

         for i in range(z_count):
@@ -440,7 +443,6 @@ class Script(scripts.Script):

         with gr.Row(variant="compact", elem_id="axis_options"):
             with gr.Column():
-                draw_legend = gr.Checkbox(label='Draw legend', value=True, elem_id=self.elem_id("draw_legend"))
                 no_fixed_seeds = gr.Checkbox(label='Keep -1 for seeds', value=False, elem_id=self.elem_id("no_fixed_seeds"))
                 with gr.Row():
                     vary_seeds_x = gr.Checkbox(label='Vary seeds for X', value=False, min_width=80, elem_id=self.elem_id("vary_seeds_x"), tooltip="Use different seeds for images along X axis.")
@@ -448,9 +450,12 @@ class Script(scripts.Script):
                     vary_seeds_z = gr.Checkbox(label='Vary seeds for Z', value=False, min_width=80, elem_id=self.elem_id("vary_seeds_z"), tooltip="Use different seeds for images along Z axis.")
             with gr.Column():
                 include_lone_images = gr.Checkbox(label='Include Sub Images', value=False, elem_id=self.elem_id("include_lone_images"))
-                include_sub_grids = gr.Checkbox(label='Include Sub Grids', value=False, elem_id=self.elem_id("include_sub_grids"))
                 csv_mode = gr.Checkbox(label='Use text inputs instead of dropdowns', value=False, elem_id=self.elem_id("csv_mode"))
             with gr.Column():

+                with InputAccordion(True, label='Draw grid', elem_id=self.elem_id('draw_grid')) as draw_grid:
+                    with gr.Row():
+                        include_sub_grids = gr.Checkbox(label='Include Sub Grids', value=False, elem_id=self.elem_id("include_sub_grids"))
+                        draw_legend = gr.Checkbox(label='Draw legend', value=True, elem_id=self.elem_id("draw_legend"))
                 margin_size = gr.Slider(label="Grid margins (px)", minimum=0, maximum=500, value=0, step=2, elem_id=self.elem_id("margin_size"))

         with gr.Row(variant="compact", elem_id="swap_axes"):
@@ -532,9 +537,9 @@ class Script(scripts.Script):
             (z_values_dropdown, lambda params: get_dropdown_update_from_params("Z", params)),
         )

-        return [x_type, x_values, x_values_dropdown, y_type, y_values, y_values_dropdown, z_type, z_values, z_values_dropdown, draw_legend, include_lone_images, include_sub_grids, no_fixed_seeds, vary_seeds_x, vary_seeds_y, vary_seeds_z, margin_size, csv_mode]
+        return [x_type, x_values, x_values_dropdown, y_type, y_values, y_values_dropdown, z_type, z_values, z_values_dropdown, draw_legend, include_lone_images, include_sub_grids, no_fixed_seeds, vary_seeds_x, vary_seeds_y, vary_seeds_z, margin_size, csv_mode, draw_grid]

-    def run(self, p, x_type, x_values, x_values_dropdown, y_type, y_values, y_values_dropdown, z_type, z_values, z_values_dropdown, draw_legend, include_lone_images, include_sub_grids, no_fixed_seeds, vary_seeds_x, vary_seeds_y, vary_seeds_z, margin_size, csv_mode):
+    def run(self, p, x_type, x_values, x_values_dropdown, y_type, y_values, y_values_dropdown, z_type, z_values, z_values_dropdown, draw_legend, include_lone_images, include_sub_grids, no_fixed_seeds, vary_seeds_x, vary_seeds_y, vary_seeds_z, margin_size, csv_mode, draw_grid):
         x_type, y_type, z_type = x_type or 0, y_type or 0, z_type or 0  # if axis type is None, set it to 0

         if not no_fixed_seeds:
@@ -779,7 +784,8 @@ class Script(scripts.Script):
                 include_sub_grids=include_sub_grids,
                 first_axes_processed=first_axes_processed,
                 second_axes_processed=second_axes_processed,
-                margin_size=margin_size
+                margin_size=margin_size,
+                draw_grid=draw_grid,
             )

         if not processed.images:
@@ -788,14 +794,15 @@ class Script(scripts.Script):

         z_count = len(zs)

+        if draw_grid:
             # Set the grid infotexts to the real ones with extra_generation_params (1 main grid + z_count sub-grids)
             processed.infotexts[:1 + z_count] = grid_infotext[:1 + z_count]

         if not include_lone_images:
             # Don't need sub-images anymore, drop from list:
-            processed.images = processed.images[:z_count + 1]
+            processed.images = processed.images[:z_count + 1] if draw_grid else []

-        if opts.grid_save:
+        if draw_grid and opts.grid_save:
             # Auto-save main and sub-grids:
             grid_count = z_count + 1 if z_count > 1 else 1
             for g in range(grid_count):
@@ -805,7 +812,7 @@ class Script(scripts.Script):
                 if not include_sub_grids:  # if not include_sub_grids then skip saving after the first grid
                     break

-        if not include_sub_grids:
+        if draw_grid and not include_sub_grids:
            # Done with sub-grids, drop all related information:
             for _ in range(z_count):
                 del processed.images[1]
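The image-list bookkeeping in the two hunks above can be summarized as a small pure function (`trim_images` is a hypothetical helper written for illustration, not part of the actual script): when grids are drawn, the first `z_count + 1` slots hold the main grid plus the per-Z sub-grids, and the individual cell images follow.

```python
def trim_images(images, z_count, draw_grid, include_lone_images):
    """Mirror of the xyz_grid trimming: grids occupy the first z_count + 1
    slots (1 main grid + z_count sub-grids); cell images come after them."""
    if not include_lone_images:
        # Keep only the grids, or nothing at all when no grid was drawn
        return images[:z_count + 1] if draw_grid else []
    return images

# e.g. 1 main grid + 2 sub-grids + 4 cell images:
imgs = ["main", "sub1", "sub2", "a", "b", "c", "d"]
print(trim_images(imgs, 2, True, False))   # ['main', 'sub1', 'sub2']
print(trim_images(imgs, 2, False, False))  # []
```

This makes the new `draw_grid` flag's effect explicit: with the grid disabled and lone images excluded, the result set is empty, which is why the infotext and `grid_save` branches are now also gated on `draw_grid`.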
@@ -480,8 +480,10 @@ div.toprow-compact-tools{
 }

 #settings_result{
-    height: 1.4em;
+    min-height: 1.4em;
     margin: 0 1.2em;
+    max-height: calc(var(--text-md) * var(--line-sm) * 5);
+    overflow-y: auto;
 }

 table.popup-table{
@@ -600,6 +602,7 @@ table.popup-table .link{
     background: var(--background-fill-primary);
     width: 100%;
     height: 100%;
+    pointer-events: none;
 }

 .livePreview img{
@@ -4,7 +4,16 @@ if exist webui.settings.bat (
     call webui.settings.bat
 )

-if not defined PYTHON (set PYTHON=python)
+if not defined PYTHON (
+    for /f "delims=" %%A in ('where python ^| findstr /n . ^| findstr ^^1:') do (
+        if /i "%%~xA" == ".exe" (
+            set PYTHON=python
+        ) else (
+            set PYTHON=call python
+        )
+    )
+)

 if defined GIT (set "GIT_PYTHON_GIT_EXECUTABLE=%GIT%")
 if not defined VENV_DIR (set "VENV_DIR=%~dp0%venv")
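The batch hunk above takes the first result of `where python` (the `findstr /n . | findstr ^1:` pipe numbers the lines and keeps line 1) and checks its extension via `%%~xA`: a real `.exe` can be invoked directly, while anything else (e.g. a `.bat` shim such as a launcher wrapper) needs a `call` prefix so control returns to the calling script. The decision can be sketched in Python (`python_launch_command` is a hypothetical helper, not part of the repo):

```python
import os

def python_launch_command(first_match: str) -> str:
    # Mirror of the webui.bat logic: branch on the extension of the first
    # `where python` hit; extension comparison is case-insensitive, like `if /i`.
    ext = os.path.splitext(first_match)[1]
    return "python" if ext.lower() == ".exe" else "call python"

print(python_launch_command(r"C:\Python310\python.exe"))  # python
print(python_launch_command(r"C:\tools\python.BAT"))      # call python
```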
@@ -48,6 +57,7 @@ echo Warning: Failed to upgrade PIP version

 :activate_venv
 set PYTHON="%VENV_DIR%\Scripts\Python.exe"
+call "%VENV_DIR%\Scripts\activate.bat"
 echo venv %PYTHON%

 :skip_venv
@@ -45,6 +45,44 @@ def api_only():
     )


+def warning_if_invalid_install_dir():
+    """
+    Shows a warning if the webui is installed under a path that has a leading dot in any of its parent directories.
+
+    Gradio's '/file=' route blocks access to files that have a leading dot in any path segment.
+    We use this route to serve files such as JavaScript and CSS to the webpage;
+    if those files are blocked, the webpage will not function properly.
+    See https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/13292
+
+    This security feature was added in Gradio 3.32.0 and removed in later versions;
+    this function replicates Gradio's file-access blocking logic.
+
+    This check should be removed when it's no longer applicable.
+    """
+    from packaging.version import parse
+    from pathlib import Path
+    import gradio
+
+    if parse('3.32.0') <= parse(gradio.__version__) < parse('4'):
+
+        def abspath(path):
+            """modified from Gradio 3.41.2 gradio.utils.abspath()"""
+            if path.is_absolute():
+                return path
+            is_symlink = path.is_symlink() or any(parent.is_symlink() for parent in path.parents)
+            return Path.cwd() / path if (is_symlink or path == path.resolve()) else path.resolve()
+
+        webui_root = Path(__file__).parent
+        if any(part.startswith(".") for part in abspath(webui_root).parts):
+            print(f'''{"!"*25} Warning {"!"*25}
+WebUI is installed in a directory that has a leading dot (.) in one of its parent directories.
+This will prevent WebUI from functioning properly.
+Please move the installation to a different directory.
+Current path: "{webui_root}"
+For more information see: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/13292
+{"!"*25} Warning {"!"*25}''')
+
+
 def webui():
     from modules.shared_cmd_options import cmd_opts
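The core of the added check is a single path test: any component starting with `.` (a "hidden" directory) trips the `/file=` blocking in Gradio 3.32+. A minimal standalone sketch (`has_dotted_segment` is a hypothetical name chosen here):

```python
from pathlib import Path

def has_dotted_segment(path: str) -> bool:
    # Same test the warning uses: does any path component start with '.'?
    return any(part.startswith(".") for part in Path(path).parts)

print(has_dotted_segment("/home/user/.local/stable-diffusion-webui"))  # True
print(has_dotted_segment("/opt/webui"))                                # False
```

The real function additionally normalizes the path with its `abspath` helper first, so symlinked and relative installs are judged by the path Gradio would actually see.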
@@ -53,6 +91,8 @@ def webui():

     from modules import shared, ui_tempdir, script_callbacks, ui, progress, ui_extra_networks

+    warning_if_invalid_install_dir()
+
     while 1:
         if shared.opts.clean_temp_dir_at_start:
             ui_tempdir.cleanup_tmpdr()