Nunchaku Flux Kontext vs. Turbo LoRA
Description
Unlock the full potential of the Flux Kontext Dev model without the agonizing wait! This workflow solves the powerhouse model's biggest pain point: slow generation times. By leveraging the Nunchaku inference engine, you get 8x faster image generation (as low as 5 seconds on an RTX 4090) while preserving near-identical quality to the original model. Perfect for rapid prototyping, testing prompts, or batch generation!
YouTube Tutorial:
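The workflow loads the 4-bit Nunchaku build of Flux Kontext through the NunchakuFluxDiTLoader node. For readers who want the same idea outside ComfyUI, the sketch below uses Nunchaku's diffusers integration; it is illustrative only, and the class name, model repo IDs, guidance value, and step count are assumptions to verify against the Nunchaku README rather than values copied from this workflow.

```python
# Illustrative sketch of the Nunchaku branch at the script level (the ComfyUI
# workflow uses the NunchakuFluxDiTLoader node instead). Class name and repo
# IDs are assumptions -- check the Nunchaku project README before running.
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image
from nunchaku import NunchakuFluxTransformer2dModel  # assumed class name

# Assumed repo ID for the 4-bit (SVDQuant) Flux Kontext transformer.
transformer = NunchakuFluxTransformer2dModel.from_pretrained(
    "mit-han-lab/nunchaku-flux.1-kontext-dev"  # placeholder checkpoint ID
)

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")

# Kontext edits an existing image according to the prompt; in the workflow this
# role is played by the LoadImage -> FluxKontextImageScale -> ReferenceLatent chain.
image = load_image("input.png")
result = pipe(
    image=image,
    prompt="Make the jacket bright red",
    guidance_scale=2.5,          # the workflow's FluxGuidance value may differ
    num_inference_steps=20,      # the workflow's step count may differ
).images[0]
result.save("kontext_nunchaku.png")
```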
Versions (1)
- latest (2 months ago)
Node Details
Primitive Nodes (11)
- Anything Everywhere3 (2)
- Fast Groups Muter (rgthree) (1)
- FluxGuidance (1)
- FluxKontextImageScale (1)
- Note (1)
- NunchakuFluxDiTLoader (1)
- Prompts Everywhere (1)
- ReferenceLatent (1)
- UnetLoaderGGUF (2)
Custom Nodes (17)
ComfyUI
- VAEEncode (1)
- ConditioningZeroOut (1)
- VAELoader (1)
- DualCLIPLoader (1)
- KSampler (3)
- CLIPTextEncode (1)
- LoadImage (1)
- LoraLoaderModelOnly (1)
- VAEDecode (3)
- PreviewImage (3)
- easy seed (1)
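The UnetLoaderGGUF and LoraLoaderModelOnly nodes above form the comparison branch: a GGUF-quantized Flux Kontext UNet accelerated with the FLUX.1-Turbo-Alpha LoRA (listed under Model Details below), which trades sampler steps for speed instead of quantizing with Nunchaku. A rough diffusers-level sketch of that branch follows; the GGUF loading is omitted, and the LoRA repo ID, step count, and the assumption that the pipeline exposes diffusers' standard load_lora_weights are not taken from the workflow itself.

```python
# Illustrative sketch of the Turbo-LoRA comparison branch. The ComfyUI workflow
# loads a GGUF-quantized UNet via UnetLoaderGGUF, which is not reproduced here;
# the LoRA repo ID and 8-step setting follow the FLUX.1-Turbo-Alpha model card,
# not values read out of the workflow.
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Distillation LoRA that allows far fewer sampler steps
# (the workflow applies it with LoraLoaderModelOnly).
pipe.load_lora_weights("alimama-creative/FLUX.1-Turbo-Alpha")

image = load_image("input.png")
result = pipe(
    image=image,
    prompt="Make the jacket bright red",
    guidance_scale=2.5,
    num_inference_steps=8,  # turbo LoRAs typically target around 8 steps
).images[0]
result.save("kontext_turbo_lora.png")
```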
Model Details
Checkpoints (0)
LoRAs (1)
- flux/FLUX.1-Turbo-Alpha.safetensors
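To check the headline speed claim (8x faster, as low as 5 seconds on an RTX 4090) on your own hardware, you can time both branches with a small helper. The helper below only measures whatever pipeline you pass it; `pipe_nunchaku` and `pipe_turbo` are assumed to be the objects from the two sketches above.

```python
# Minimal GPU timing helper for comparing the two branches.
import time
import torch

def time_generation(pipe, warmup=1, runs=3, **kwargs):
    """Return mean wall-clock seconds per image for `pipe(**kwargs)`."""
    for _ in range(warmup):      # warm-up to exclude one-time compilation/caching cost
        pipe(**kwargs)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(runs):
        pipe(**kwargs)
    torch.cuda.synchronize()     # wait for all queued GPU work before stopping the clock
    return (time.perf_counter() - start) / runs
```

Example usage: `time_generation(pipe_nunchaku, image=image, prompt=prompt, num_inference_steps=20)` versus `time_generation(pipe_turbo, image=image, prompt=prompt, num_inference_steps=8)`.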