WAN 2.1 Self Forcing w/ LoRA T2V and VACE I2V (Low VRAM)
Description
I've put together a simple, custom two-part text-to-video and image-to-video workflow with VACE that uses the self_forcing_dmd model and Kijai's Wan 14B Lightx2v LoRA, which brings inference down to only 4 steps. To make it more accessible for those of us with less powerful GPUs, it also uses a lightweight UMT5 GGUF clip model.
If you would like a full breakdown of the workflow and the installation process, you can watch my YouTube tutorial below.
Link - https://youtu.be/TyiEHj8TtTE
Versions (1)
- latest (2 months ago)
Node Details
Primitive Nodes (9)
ClipLoaderGGUF (2)
EmptyHunyuanLatentVideo (1)
Fast Groups Bypasser (rgthree) (1)
ModelSamplingSD3 (2)
Note (1)
TrimVideoLatent (1)
WanVaceToVideo (1)
Custom Nodes (17)
ComfyUI
- CLIPTextEncode (4)
- VAELoader (2)
- VAEDecode (2)
- UNETLoader (2)
- LoraLoader (2)
- KSampler (2)
- LoadImage (1)
- VHS_VideoCombine (2)
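By way of illustration, the core text-to-video branch of a graph like this can be written in ComfyUI's API ("prompt") JSON format. This is only a hedged sketch: the node class names come from the list above, but every file name, the resolution, the shift value, and the sampler settings are placeholder assumptions, not values taken from the actual workflow.

```python
# Sketch of the T2V branch in ComfyUI's API ("prompt") JSON format.
# Node class names come from the node list above; all file names and
# sampler settings are hypothetical placeholders -- substitute your own.
import json

prompt = {
    # Diffusion model plus the Lightx2v step-distill LoRA (4-step inference)
    "1": {"class_type": "UNETLoader",
          "inputs": {"unet_name": "self_forcing_dmd.safetensors",  # placeholder
                     "weight_dtype": "default"}},
    "2": {"class_type": "CLIPLoaderGGUF",
          "inputs": {"clip_name": "umt5-xxl-encoder-Q5_K_M.gguf",  # placeholder
                     "type": "wan"}},
    "3": {"class_type": "LoraLoader",
          "inputs": {"model": ["1", 0], "clip": ["2", 0],
                     "lora_name": "Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32.safetensors",
                     "strength_model": 1.0, "strength_clip": 1.0}},
    "4": {"class_type": "ModelSamplingSD3",
          "inputs": {"model": ["3", 0], "shift": 8.0}},  # shift value is a guess
    "5": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["3", 1], "text": "a cat surfing a wave"}},
    "6": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["3", 1], "text": ""}},  # negative prompt
    "7": {"class_type": "EmptyHunyuanLatentVideo",
          "inputs": {"width": 832, "height": 480, "length": 81, "batch_size": 1}},
    "8": {"class_type": "KSampler",
          "inputs": {"model": ["4", 0], "seed": 0,
                     "steps": 4,   # distilled LoRA brings inference to 4 steps
                     "cfg": 1.0,   # distilled models typically run without CFG
                     "sampler_name": "euler", "scheduler": "simple",
                     "positive": ["5", 0], "negative": ["6", 0],
                     "latent_image": ["7", 0], "denoise": 1.0}},
    "9": {"class_type": "VAELoader",
          "inputs": {"vae_name": "wan_2.1_vae.safetensors"}},  # placeholder
    "10": {"class_type": "VAEDecode",
           "inputs": {"samples": ["8", 0], "vae": ["9", 0]}},
}

# Sanity check: every [node_id, output_index] link must resolve to a node.
for nid, node in prompt.items():
    for value in node["inputs"].values():
        if isinstance(value, list):
            assert value[0] in prompt, f"dangling link in node {nid}"

payload = json.dumps({"prompt": prompt})  # request body for a running ComfyUI's /prompt endpoint
```

A dict like this can be POSTed to a running ComfyUI instance's /prompt endpoint, which is how the graph built in the editor is submitted under the hood.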
Model Details
Checkpoints (0)
LoRAs (2)
Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32.safetensors