
Conversation

@leejet (Owner) commented Nov 28, 2025

.\bin\Release\sd.exe --diffusion-model  ..\..\ComfyUI\models\diffusion_models\flux2-dev-Q4_K_S.gguf --vae ..\..\ComfyUI\models\vae\flux2_ae.safetensors  --llm ..\..\ComfyUI\models\text_encoders\Mistral-Small-3.2-24B-Instruct-2506-Q4_K_M.gguf -r .\kontext_input.png -p "change 'flux.cpp' to 'flux2-dev.cpp'" --cfg-scale 1.0 --sampling-method euler -v --diffusion-fa --offload-to-cpu
[output image attached]

TODO:

  • Flux2FlowDenoiser
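
For context on that TODO item: below is a minimal, self-contained sketch of the Euler flow-matching update a Flux-style flow denoiser drives (velocity prediction, x <- x + (sigma_next - sigma) * v). This is illustrative only; it does not use this repo's actual Denoiser interface, and the shift value in the schedule is an assumption, not taken from this PR.

// sketch: Euler step for a flow-matching (velocity-prediction) model
#include <cstdio>
#include <vector>

// One Euler step moves the latent from sigma to sigma_next using the
// model's predicted velocity v: x <- x + (sigma_next - sigma) * v.
static void euler_flow_step(std::vector<float>& x,
                            const std::vector<float>& v,
                            float sigma, float sigma_next) {
    const float dt = sigma_next - sigma;
    for (size_t i = 0; i < x.size(); ++i) {
        x[i] += dt * v[i];
    }
}

// Shifted linear sigma schedule often used for flux-family models
// (the shift value of 3.0 is an assumption for illustration).
static std::vector<float> flow_sigmas(int steps, float shift = 3.0f) {
    std::vector<float> sigmas(steps + 1);
    for (int i = 0; i <= steps; ++i) {
        float t   = 1.0f - (float)i / (float)steps;  // 1 -> 0
        sigmas[i] = shift * t / (1.0f + (shift - 1.0f) * t);
    }
    return sigmas;
}

int main() {
    const int steps = 4;
    std::vector<float> x(8, 1.0f);   // toy "latent"
    std::vector<float> v(8, -0.5f);  // toy "model output"
    std::vector<float> sigmas = flow_sigmas(steps);
    for (int i = 0; i < steps; ++i) {
        euler_flow_step(x, v, sigmas[i], sigmas[i + 1]);
    }
    printf("x[0] after %d steps: %f\n", steps, x[0]);
    return 0;
}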

@stduhpf (Contributor) commented Nov 29, 2025

It runs surprisingly fast with a Q2_K quant, but quality is pretty bad at that quantization. If someone has enough memory and disk space to make an iq3_xxs quant of this model, it would probably give much better quality than the Q2 while still fitting in 16 GB of VRAM.
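
(For anyone attempting that: something along the lines of stable-diffusion.cpp's convert mode should work, following the README's convert example. Whether this build accepts iq3_xxs as a --type value, and whether a standalone Flux.2 diffusion model converts cleanly this way, is an assumption I haven't verified; paths and the output name below are placeholders.)

.\bin\Release\sd.exe -M convert -m ..\..\ComfyUI\models\diffusion_models\flux2-dev.safetensors -o flux2-dev-iq3_xxs.gguf --type iq3_xxs -v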

@leejet leejet merged commit 52b67c5 into master Nov 30, 2025
10 checks passed
