Commit Graph

373 Commits

Author SHA1 Message Date
brkirch e3b53fd295 Add UI setting for upcasting attention to float32
Adds "Upcast cross attention layer to float32" option in Stable Diffusion settings. This allows for generating images using SD 2.1 models without --no-half or xFormers.

To make upcasting possible in the cross attention layer optimizations, it is necessary to indent several sections of code in sd_hijack_optimizations.py so that a context manager can be used to disable autocast. Also, even though Stable Diffusion (and Diffusers) only upcast q and k, my findings were that most of the cross attention layer optimizations could not function unless v is upcast as well.
2023-01-25 01:13:04 -05:00
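The upcast this commit describes can be sketched outside of PyTorch. The following is an illustrative NumPy version of single-head attention with q, k, and v all cast to float32 before the matmuls (the function name and shapes are assumptions, not the webui's actual code); note that v is upcast too, matching the finding above:

```python
import numpy as np

def upcast_attention(q, k, v):
    """Attention with q, k and v upcast to float32.

    Sketch of the idea behind the "Upcast cross attention layer to
    float32" option; illustrative, not the webui's implementation.
    """
    q, k, v = (t.astype(np.float32) for t in (q, k, v))
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

With float16 inputs, the softmax and accumulation then happen in full precision, which is what avoids the need for --no-half or xFormers on SD 2.1 models.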
brkirch 84d9ce30cb Add option for float32 sampling with float16 UNet
This also handles type casting so that ROCm and MPS torch devices work correctly without --no-half. One cast is required for deepbooru in deepbooru_model.py, and some explicit casting is required for img2img and inpainting. depth_model can't be converted to float16 or it won't work correctly on some systems (it's known to have issues on MPS), so in sd_models.py model.depth_model is removed for the model.half() call.
2023-01-25 01:13:02 -05:00
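The pattern this commit adds, running the half-precision model but keeping the sampler's arithmetic in float32, can be sketched like this (a toy NumPy stand-in under assumed names; the update rule is illustrative, not a real sampler step):

```python
import numpy as np

def sample_step(unet, x_f32):
    """One toy denoising step: float16 UNet, float32 sampler math.

    `unet` stands in for the half-precision model; its output is cast
    back to float32 so the sampler's accumulation stays in full precision.
    """
    eps = unet(x_f32.astype(np.float16)).astype(np.float32)
    return x_f32 - 0.1 * eps        # toy update rule, kept in float32
```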
InvincibleDude 44c0e6b993 Merge branch 'AUTOMATIC1111:master' into master 2023-01-24 15:44:09 +03:00
brkirch f64af77adc Fix different first gen with Approx NN previews
The loading of the model for approx nn live previews can change the internal state of PyTorch, resulting in a different image. This can be avoided by preloading the approx nn model in advance.
2023-01-23 22:49:20 -05:00
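The effect this commit fixes can be reproduced with Python's own RNG: a lazy model load (all names here are made up for illustration) perturbs global state exactly once, so the first generation with a given seed differs from every later one unless the model is preloaded in advance:

```python
import random

_approx_model = None

def load_approx_model():
    # Stand-in for loading the approx NN preview model. Loading it has a
    # side effect on the global RNG, the way loading the real model could
    # change PyTorch's internal state.
    global _approx_model
    if _approx_model is None:
        random.random()             # side effect: advances the RNG
        _approx_model = "model"
    return _approx_model

def generate(seed):
    random.seed(seed)
    load_approx_model()             # lazy load happens mid-generation
    return random.random()

# Without preloading, the first generation with a given seed differs from
# a repeat, because the lazy load consumed RNG state in between.
first = generate(42)
second = generate(42)
```

Calling load_approx_model() once at startup makes every generate(seed) deterministic, which is the fix described above.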
invincibledude 3bc8ee998d Gen params paste improvement 2023-01-22 16:35:42 +03:00
invincibledude 7f62300f7d Gen params paste improvement 2023-01-22 16:29:08 +03:00
invincibledude a5c2b5ed89 UI and PNG info improvements 2023-01-22 15:50:20 +03:00
invincibledude bbb1e35ea2 UI and PNG info improvements 2023-01-22 15:44:59 +03:00
invincibledude 34f6d66742 hr conditioning 2023-01-22 15:32:47 +03:00
invincibledude 125d5c8d96 hr conditioning 2023-01-22 15:31:11 +03:00
invincibledude 2ab2bce74d hr conditioning 2023-01-22 15:28:38 +03:00
invincibledude c5d4c87c02 hr conditioning 2023-01-22 15:17:43 +03:00
invincibledude 4e0cf7d4ed hr conditioning 2023-01-22 15:15:08 +03:00
invincibledude a9f0e7d536 hr conditioning 2023-01-22 15:12:00 +03:00
invincibledude f774a8d24e Hr-fix separate prompt experimentation 2023-01-22 14:52:01 +03:00
invincibledude 81e0723d65 Logging for debugging 2023-01-22 14:41:41 +03:00
invincibledude b331ca784a Fix 2023-01-22 14:35:34 +03:00
invincibledude 8114959e7e Hr separate prompt test 2023-01-22 14:28:53 +03:00
invincibledude 0f6862ef30 PLMS edge-case handling fix 5 2023-01-22 00:11:05 +03:00
invincibledude 6cd7bf9f86 PLMS edge-case handling fix 3 2023-01-22 00:08:58 +03:00
invincibledude 3ffe2e768b PLMS edge-case handling fix 2 2023-01-22 00:07:46 +03:00
invincibledude 9e1f49c4e5 PLMS edge-case handling fix 2023-01-22 00:03:16 +03:00
AUTOMATIC 78f59a4e01 enable compact view for train tab
prevent previews from ruining hypernetwork training
2023-01-22 00:02:51 +03:00
invincibledude 6c0566f937 Type mismatch fix 2023-01-21 23:25:36 +03:00
invincibledude 3bd898b6ce First test of different sampler for hi-res fix 2023-01-21 23:14:59 +03:00
AUTOMATIC 3deea34135 extract extra network data from prompt earlier 2023-01-21 19:36:08 +03:00
AUTOMATIC 92fb1096db make it so that extra networks are not removed from infotext 2023-01-21 16:41:25 +03:00
AUTOMATIC 40ff6db532 extra networks UI
rework of hypernets: rather than via settings, hypernets are added directly to the prompt as <hypernet:name:weight>
2023-01-21 08:36:07 +03:00
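Extracting <hypernet:name:weight> tokens from a prompt, as the extra networks rework does, can be sketched with a regular expression (the regex and function below are illustrative, not the webui's actual parser):

```python
import re

# Matches the <kind:name:weight> syntax of extra networks, e.g.
# <hypernet:anime:0.8>; a sketch, not the webui's grammar.
EXTRA_NETWORK_RE = re.compile(r"<(\w+):([^:>]+):([\d.]+)>")

def extract_extra_networks(prompt):
    """Return the cleaned prompt and a list of (kind, name, weight)."""
    found = [(kind, name, float(weight))
             for kind, name, weight in EXTRA_NETWORK_RE.findall(prompt)]
    cleaned = EXTRA_NETWORK_RE.sub("", prompt).strip()
    return cleaned, found
```

Extracting the tokens before prompt processing (as the "extract extra network data from prompt earlier" commit does) means the rest of the pipeline only ever sees the cleaned prompt.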
AUTOMATIC1111 a8322ad75b Merge pull request #6854 from EllangoK/master
Saves Extra Generation Parameters to params.txt
2023-01-18 23:25:56 +03:00
AUTOMATIC b186d44dcd use DDIM in hires fix if the sampler is PLMS 2023-01-18 23:20:23 +03:00
EllangoK 5e15a0b422 Changed params.txt save to after manual init call 2023-01-17 11:42:44 -05:00
AUTOMATIC e0e8005009 make StableDiffusionProcessing class not hold a reference to shared.sd_model object 2023-01-16 23:09:08 +03:00
AUTOMATIC 9991967f40 Add a check and explanation for tensor with all NaNs. 2023-01-16 22:59:46 +03:00
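The idea of the all-NaN check above is to fail loudly with an explanation instead of silently producing a broken image. A minimal stdlib sketch (function name and message are illustrative, not the webui's exact code):

```python
import math

def check_for_nans(values, where):
    # If every element is NaN, raise with an explanation rather than let
    # the pipeline emit a black image. Illustrative sketch only.
    if values and all(math.isnan(v) for v in values):
        raise ValueError(
            f"A tensor with all NaNs was produced in {where}. "
            "This could be caused by running the model in half precision; "
            "try --no-half to rule that out."
        )
```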
AUTOMATIC f9ac3352cb change hypernets to use sha256 hashes 2023-01-14 10:25:37 +03:00
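Hashing a model file with sha256, as the commit above switches hypernets to, is typically done by streaming the file in chunks so large checkpoints don't have to fit in memory (an illustrative helper, not the webui's hashing code):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    # Stream the file in 1 MiB chunks and return the sha256 hex digest.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```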
space-nuko 88416ab5ff Fix extension parameters not being saved to last used parameters 2023-01-12 13:46:59 -08:00
AUTOMATIC d4fd2418ef add an option to use old hiresfix width/height behavior
add a visual effect to inactive hires fix elements
2023-01-09 14:57:47 +03:00
noodleanon 50e2536279 Merge branch 'AUTOMATIC1111:master' into img2img-api-scripts 2023-01-07 14:18:09 +00:00
AUTOMATIC 1a5b86ad65 rework hires fix preview for #6437: move it to where it takes less space, make it actually account for all relevant sliders and calculate dimensions correctly 2023-01-07 09:56:37 +03:00
noodleanon b5253f0dab allow img2img api to run scripts 2023-01-05 21:21:48 +00:00
AUTOMATIC 847f869c67 experimental optimization 2023-01-05 21:00:52 +03:00
AUTOMATIC 2e30997450 move sd_model assignment to the place where we change the sd_model 2023-01-05 10:21:17 +03:00
Philpax 83ca8dd0c9 Merge branch 'AUTOMATIC1111:master' into fix-sd-arch-switch-in-override-settings 2023-01-05 05:00:58 +01:00
AUTOMATIC 99b67cff0b make hires fix not do anything if the user chooses the second pass resolution to be the same as first pass resolution 2023-01-05 01:25:52 +03:00
AUTOMATIC bc43293c64 fix incorrect display/calculation for number of steps for hires fix in progress bars 2023-01-04 23:56:43 +03:00
AUTOMATIC 8149078094 added the option to specify a target resolution for hires fix, with the possibility of truncating; also added sampling steps 2023-01-04 22:04:40 +03:00
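The "truncating" behavior this option mentions can be sketched as scaling the first-pass dimensions and then rounding each side down to the model's unit size (SD latents need pixel dimensions divisible by 8; the function below is a sketch under that assumption, not the actual implementation):

```python
def hires_target(width, height, scale, unit=8):
    # Upscale first-pass dimensions by `scale`, truncating each side down
    # to a multiple of `unit`. Illustrative only.
    return (int(width * scale) // unit * unit,
            int(height * scale) // unit * unit)
```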
AUTOMATIC 097a90b88b add XY plot parameters to grid image and do not add them to individual images 2023-01-04 19:19:11 +03:00
AUTOMATIC 525cea9245 use shared function from processing for creating dummy mask when training inpainting model 2023-01-04 17:58:07 +03:00
AUTOMATIC 4d66bf2c0d add infotext to "-before-highres-fix" images 2023-01-04 17:24:46 +03:00
AUTOMATIC1111 6281c1bdb4 Merge pull request #6299 from stysmmaker/feat/latent-upscale-modes
Add more latent upscale modes
2023-01-04 13:47:36 +03:00
MMaker 15fd0b8bc4 Update processing.py 2023-01-04 05:12:54 -05:00