catvton flux beta
Description
A machine with 16 GB of RAM and 12 GB of VRAM can run this workflow.
I hope someone can perform GGUF quantization on this model, as my hardware cannot handle it.
You can find the model at: https://huggingface.co/xiaozaa/catvton-flux-beta
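If you would rather fetch the sharded weights from a script than through the website, here is a minimal sketch using huggingface_hub; the local_dir name and the allow_patterns filter are only examples, adjust them to taste.

from huggingface_hub import snapshot_download

# Download the weight shards from the repo linked above
snapshot_download(
    repo_id="xiaozaa/catvton-flux-beta",
    allow_patterns=["*.safetensors"],  # grab only the weight files; drop this to mirror the whole repo
    local_dir="catvton-flux-beta",     # example target folder
)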
Merge the shard files with the script below, then put the merged model file in the ComfyUI/models/diffusion_models/ folder.
python
from safetensors.torch import load_file, save_file

# List of shard files
shards = [
    "diffusion_pytorch_model-00001-of-00003.safetensors",
    "diffusion_pytorch_model-00002-of-00003.safetensors",
    "diffusion_pytorch_model-00003-of-00003.safetensors",
]

# Merge models
merged = {}
for shard in shards:
    print(f"Loading shard: {shard}")
    merged.update(load_file(shard))

# Save merged result
save_file(merged, "merged_model.safetensors")
print("Merge completed: merged_model.safetensors")
Versions (1)
- latest (6 months ago)
Node Details
Primitive Nodes (4)
- Anything Everywhere3 (1)
- DualCLIPLoaderGGUF (1)
- FluxGuidance (1)
- LayerUtility: ICMask (1)
Custom Nodes (15)
ComfyUI
- InpaintModelConditioning (1)
- VAEDecode (1)
- KSampler (1)
- ConditioningZeroOut (1)
- StyleModelApply (1)
- PreviewImage (1)
- CLIPVisionEncode (1)
- LoadImage (2)
- UNETLoader (1)
- VAELoader (1)
- CLIPTextEncode (1)
- CLIPVisionLoader (1)
- StyleModelLoader (1)
- DifferentialDiffusion (1)
Model Details
Checkpoints (0)
LoRAs (0)