Mochi Text2Video Workflow - Beginner Friendly
Description
Prerequisites:
1) Mochi safetensor model
2) Mochi VAE model
3) T5-XXL CLIP text encoder (the same file used for Flux, if you already have it)
Download the models from:
https://huggingface.co/Kijai/Mochi_preview_comfy/tree/main
For more info about the nodes and their dependencies:
https://github.com/kijai/ComfyUI-MochiWrapper?tab=readme-ov-file
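If you prefer scripting the downloads, Hugging Face serves repo files at a predictable `resolve` URL. A minimal sketch of building such a URL (the example filename is a placeholder — pick the actual variant from the repo's file list):

```python
def mochi_file_url(filename: str,
                   repo: str = "Kijai/Mochi_preview_comfy",
                   revision: str = "main") -> str:
    """Build the direct-download URL for a file in a Hugging Face repo.

    Hugging Face exposes files at /{repo}/resolve/{revision}/{filename}.
    """
    return f"https://huggingface.co/{repo}/resolve/{revision}/{filename}"

# Placeholder filename -- browse the repo linked above for the real ones:
print(mochi_file_url("some_mochi_model.safetensors"))
```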
How to use the workflow:
1) Download all the model files listed above
2) Choose Text2Video or Img2Video as the input (refer to the notes in the workflow)
3) If you have limited VRAM, select a GGUF Q8 (or lower) model and the fp8 clip encoder
Diffuse away!
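Step 3 can be summarized as a rule of thumb; the VRAM thresholds below are illustrative assumptions of mine, not official guidance from the workflow author:

```python
def pick_mochi_variant(vram_gb: float) -> dict:
    """Map available VRAM (GB) to a model/encoder choice.

    Thresholds are illustrative assumptions -- adjust for your setup.
    """
    if vram_gb >= 24:
        # Plenty of VRAM: full-precision model and text encoder.
        return {"model": "bf16 safetensors", "clip": "fp16 T5-XXL"}
    if vram_gb >= 16:
        # Mid-range: quantized GGUF model, fp8 text encoder.
        return {"model": "GGUF Q8", "clip": "fp8 T5-XXL"}
    # Limited VRAM: drop to a lower GGUF quant, keep the fp8 encoder.
    return {"model": "GGUF Q4 or lower", "clip": "fp8 T5-XXL"}

print(pick_mochi_variant(12))
```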
Versions (2)
- latest (a year ago)
- v20241114-164041
Node Details
Primitive Nodes (13)
MochiDecodeSpatialTiling (1)
MochiImageEncode (1)
MochiModelLoader (1)
MochiSampler (1)
MochiSigmaSchedule (1)
MochiVAEEncoderLoader (1)
MochiVAELoader (1)
Note (4)
OverrideCLIPDevice (1)
UnloadAllModels (1)
Custom Nodes (6)
ComfyUI
- CLIPLoader (1)
- LoadImage (1)
- ImageScale (1)
- CLIPTextEncode (2)
- VHS_VideoCombine (1)
Model Details
Checkpoints (0)
LoRAs (0)