ComfyUI workflow for Flux (simple)
4.0 · 6 reviews
Description
This is a simple workflow for Flux AI on ComfyUI.
EZ way: just download this one and run it like any other checkpoint ;) https://civitai.com/models/628682/flux-1-checkpoint-easy-to-use
Check out more detailed instructions here: https://maitruclam.com/flux-ai-la-gi/
It's just 20 GB, with no need to download a lot of separate files.
There was a bug when I tried to run Flux on A1111, but I was finally able to use it on ComfyUI :V
Old version:
You will need at least 30 GB of disk space to use these files :)
***
If you are a newbie like me, you will be less confused when trying to figure out how to use Flux on ComfyUI.
In addition to this workflow, you will also need:
Download Model:
1. Model: flux1-dev.safetensors (previously named flux1-dev.sft): 23.8 GB
Link: https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main
Location: ComfyUI/models/unet/
Download CLIP:
1. t5xxl_fp16.safetensors: 9.79 GB
2. clip_l.safetensors: 246 MB
3. (optional: use this instead if your machine has less than 32 GB of RAM) t5xxl_fp8_e4m3fn.safetensors: 4.89 GB
Link: https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main
Location: ComfyUI/models/clip/
Download VAE:
1. ae.safetensors (previously named ae.sft): 335 MB
Link: https://huggingface.co/black-forest-labs/FLUX.1-schnell/blob/main/ae.safetensors
Location: ComfyUI/models/vae/
If you are using an Ubuntu VPS like me, the command is as simple as this:
# Download t5xxl_fp16.safetensors to the directory ComfyUI/models/clip/
wget -P /home/ubuntu/ComfyUI/models/clip/ https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp16.safetensors
# Download clip_l.safetensors to ComfyUI/models/clip/
wget -P /home/ubuntu/ComfyUI/models/clip/ https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors
# (Optional) Download t5xxl_fp8_e4m3fn.safetensors to ComfyUI/models/clip/
wget -P /home/ubuntu/ComfyUI/models/clip/ https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp8_e4m3fn.safetensors
# Download ae.safetensors to ComfyUI/models/vae/
wget -P /home/ubuntu/ComfyUI/models/vae/ https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/ae.safetensors
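Once the downloads finish, a quick sanity check like the sketch below confirms each file landed in the right folder (the BASE path and filenames are assumptions based on the commands above; older downloads may use the .sft extension instead):

```shell
# Verify the downloaded model files are where ComfyUI expects them.
# BASE matches the wget commands above; adjust it to your own install.
BASE=/home/ubuntu/ComfyUI/models
missing=0
for f in unet/flux1-dev.safetensors clip/t5xxl_fp16.safetensors clip/clip_l.safetensors vae/ae.safetensors; do
  if [ -e "$BASE/$f" ]; then
    echo "OK       $f"
  else
    echo "MISSING  $f"
    missing=$((missing + 1))
  fi
done
echo "$missing file(s) missing"
```

Anything reported as MISSING needs to be re-downloaded (or moved) before the workflow will load.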
For the model itself, you will need to generate a Hugging Face access token and include it in the download request.
I don't know much about tokens, so look it up if you need more detail.
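As a sketch (hf_your_token_here is a placeholder; create a real token with read access under Settings > Access Tokens on huggingface.co), the gated flux1-dev weights can be fetched by passing the token to wget as a Bearer header:

```shell
# Placeholder token; generate your own read-access token on Hugging Face.
HF_TOKEN="hf_your_token_here"
MODEL_URL="https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/flux1-dev.safetensors"

# FLUX.1-dev is a gated repo, so an unauthenticated wget is rejected;
# the token goes into an Authorization header. Echoed here instead of run.
CMD="wget --header=\"Authorization: Bearer ${HF_TOKEN}\" -P /home/ubuntu/ComfyUI/models/unet/ ${MODEL_URL}"
echo "$CMD"
# Once your real token is in place, execute it with: eval "$CMD"
```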
Why don't I make a tutorial for Windows 10, 11, or XP? What do you expect from a laptop that can barely run Mario 64? :)
Original tutorial: https://comfyanonymous.github.io/ComfyUI_examples/flux/
Note: it works well with FLUX.1-Turbo-Alpha and human-face LoRAs.
Useful and FREE resources:
Free servers to make art with Flux: Shakker, Tensor Art, and Sea Art
More FLUX LoRAs? A list and detailed description of each LoRA I've made is here: https://maitruclam.com/lora
First time using FLUX? An explanation and tutorial for A1111/Forge offline and ComfyUI is here: https://maitruclam.com/flux-ai-la-gi/
How to train your own LoRA with Flux? My detailed instructions are here: https://maitruclam.com/training-flux/
Donate (I would be really surprised if you did!): https://maitruclam.com/donate
Find me / Contact for work on:
Facebook: @maitruclam4real
Discord: @maitruclam
Web: maitruclam.com
Node Diagram
Discussion
https://comfyanonymous.github.io/ComfyUI_examples/flux/ ???
Thanks. But if you click "load more" you will see it below.
How much system RAM do you have? I have 32GB and Comfy hits 100% usage while loading the models. It locks up my entire PC.
64 GB of RAM bro :))) and 24 GB of VRAM, and when it loads it eats all of the RAM and VRAM :(
Ran it with no problems on my 2080ti 11gb and 32gb RAM. It just generates slowly, 4-6 minutes per picture, but that's minor stuff.
If they don't figure out a way to optimize this model, it won't be a very popular one. Most people won't be able to run it locally.
I hope so, I also find it very annoying every time I load it
i5 9600K + 32GB RAM + 4060Ti 16GB
With fp8 weights, RAM is almost fully loaded... Prompt executed in 156 seconds (first run); after that, everything is under 30 seconds. One thing though, I don't know if it's just me, but model weights at fp8 with the clip at fp16 seem to run better and faster?
Same :) The first load into the UNet is really annoying; my VPS even tried a stronger configuration, but it still took 1-2 minutes to load the first time.
Hello, my Comfy can find the other models (VAE, CLIP), but not the UNet:
https://ibb.co/0qTf4w7
https://ibb.co/StjvRBG
Try changing .sft to .safetensors, or try this version I uploaded to Civitai: https://civitai.com/models/617609/flux1-dev
Also, if your PC can handle it, the dev version is recommended, because schnell creates relatively bad images as a trade-off for saving 20 GB of disk space.
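If renaming files by hand is tedious, a small loop works too. This sketch runs in a temporary directory with stand-in files so it is safe to try anywhere; in practice you would cd into ComfyUI/models/unet/ (or clip/, vae/) first. The bytes are untouched: .sft is just an alternate extension for the safetensors format.

```shell
# Demo in a throwaway directory so the sketch is safe to run anywhere.
cd "$(mktemp -d)"
touch flux1-dev.sft ae.sft   # stand-in files for the demo

# Flip every .sft extension to .safetensors.
for f in *.sft; do
  [ -e "$f" ] || continue             # skip the literal "*.sft" if no match
  mv -- "$f" "${f%.sft}.safetensors"
  echo "renamed: $f -> ${f%.sft}.safetensors"
done
```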
Any way to use negative prompts with this workflow?
No negative prompt is needed for this diffusion model.
So how do you explicitly tell the model to avoid certain elements of an image?
Hmm, hard to say; that model is really new ^^ but I think adding a negative prompt box to the ComfyUI workflow might fix the problem, I guess...
I followed the above instructions and also changed the file extensions from .sft to .safetensors, but I get the error 'KeyError: conv_in.weight'. Any ideas, please?
Can you take a screenshot? From just that, I don't know what operating system you're using, what device you have, or what the problem is.
Update Comfy and check that the type in the CLIPLoader is "flux".
My type dropdown doesn't show "flux". How do I set the type in the DualCLIPLoader?
Updated and it worked, thanks.
Renaming the safetensors file as-is won't work... run the code below.
from safetensors.torch import load_file, save_file
# The .sft file is already in safetensors format, so load it with safetensors
# (torch.load cannot read this format)
state_dict = load_file(r"C:\xxx.sft", device="cpu")
# Save the weights back out under the .safetensors extension
save_file(state_dict, r"C:\xxx.safetensors")
Replace xxx with the appropriate directory/filename. This method saves all the weights into the renamed safetensors file.
Use Load from the ComfyUI web UI
and choose the workflow file from there to load it.
starting again :(
Is there a Flux 1 (ComfyUI) workflow for enhancement and upscaling?
When executing, why does it display "press any key to restart"? What's going on?
I have a GTX 1060 6GB. Can it run this? And what is the wait time for generations?
I think you need 12 GB+ of VRAM.
Just started using Comfy after A1111. Where do the output images go, and is there a way to see the progress of each gen?
Hi! I have an RTX 2080 with 8GB of VRAM and 16GB of RAM. I keep running into Reconnecting...
and sometimes it gives me a TypeError.
Do I need more than 8GB of VRAM?
Prestartup times for custom nodes:
1.3 seconds: C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
Total VRAM 8192 MB, total RAM 16234 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2080 : cudaMallocAsync
Using pytorch cross attention
[Prompt Server] web root: C:\ComfyUI_windows_portable\ComfyUI\web
### Loading: ComfyUI-Manager (V2.48.5)
### ComfyUI Revision: 2480 [b334605a] | Released on '2024-08-06'
Import times for custom nodes:
0.0 seconds: C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
0.3 seconds: C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
Starting server
To see the GUI go to: http://127.0.0.1:8188
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
FETCH DATA from: C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
got prompt
C:\ComfyUI_windows_portable>pause
Press any key to continue . . .
After I press any key to continue, the cmd window closes.
Mine is a 4060 Ti (8GB) and I keep running into Reconnecting...
Please let me know if you found any solution.
Hello, you two. Maybe the main problem is that the amount of VRAM is too little. The suggested minimum is 12 GB of VRAM, but in reality, when I run it, it consumes 20 of my 24 GB.
Try to find the fp8 version and use it, or use the schnell version :V
Hello. I have used the fp8 version, since it's recommended if you have less than 12 GB of VRAM, but it still doesn't work. My friend and I were on Discord doing the setup at the same time, and he ran into the same issues even though he has 12 GB of VRAM.
I have kind of given up and might just build an entirely new rig with a 4090 in it, just so I can peacefully do my thing.
Hmm, I'm using a Linux VPS. I just tested this version, maybe you should try it :V https://civitai.com/models/628682 Or check my blog; maybe it can help in some way: https://maitruclam.com/flux-ai-la-gi/
I'll give it a try tomorrow ;p
Thank you very much for going out of your way though. I highly appreciate that!
Okay, I tried it. I have 16GB of RAM (NOT VRAM) and I think that's too little, because I did everything correctly and it just doesn't run. The green highlight reaches the model node and then it stops, even though I have everything else closed.
The minimum is 12 GB of VRAM, my friend :( not RAM; running it like that is very hard on the machine. You should try going back to 1.5 or XL, that will be better :(
Tried the schnell version with the fp8 clip on my 4GB VRAM GTX 1650 Ti laptop (16GB RAM), and it took 679 seconds to generate a 720x720 image with 20 steps.
The model file was renamed. It is now called flux1-dev.safetensors but the doc says:
Download Model:
1. Model: flux1-dev.sft: 23.8 GB
Download VAE:
1. ae.sft: 335 MB
Link: https://huggingface.co/black-forest-labs/FLUX.1-schnell/blob/main/ae.sft
Location: ComfyUI/models/vae/
The above link shows "404 entry not found"
Please help. Thank you.
Download VAE missing:
1. ae.sft
... found here as ae.safetensors:
https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main
Thank you for the installation instructions provided.
I'm running the workflow I downloaded from here, it runs smoothly without any issues. My laptop uses an Intel Core Ultra 9, 64GB RAM, and RTX 4070 Laptop GPU.
Image generation times by dimension:
1024x1024: ~145 seconds
768x1344: ~140 seconds
For the model, you will need to learn how to generate Huggingface Access Tokens and add them to download and use like this:??
What's the point of doing this? The token is generated as shown in the picture, but where do I use it?
Learn it, bro; I will show how to do it later. Or just download this one and use it like a checkpoint: https://civitai.com/models/628682
First of all, thank you very much for the tutorial. But when I try to use the workflow, I get this error: "Error occurred when executing VAEDecode: Given groups=1, weight of size [4, 4, 1, 1], expected input[1, 16, 128, 128] to have 4 channels, but got 16 channels instead". I would greatly appreciate some help :/ I use the flux1-dev model, the t5xxl_fp16 clip, and the ClearVAE_V2.3_fp16 VAE. My graphics card is a 4090.
Hello mates, can someone help me? My ComfyUI can't find the missing nodes: BasicGuider, SamplerCustomAdvanced, and RandomNoise are all missing.
It's all fully updated.
Can someone help me figure out why it can't find them, or give me a link for the update?
Thanks
Can you make a workflow that loads an input image?
I have set up ComfyUI on Ubuntu and am attempting to run the workflow downloaded from here for the first time. However, I get a clean_up_tokenization_spaces warning and then ComfyUI shuts down. Full console output below:
got prompt
model weight dtype torch.bfloat16, manual cast: None
model_type FLUX
/home/garrett/AI/ComfyUI/venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
warnings.warn(
./launch.sh: line 7: 47378 Killed python3 main.py
Where can you set the tokenizer for this?
Seems strange; I don't get this, so I'm not sure. How many GB of VRAM do you have? Have you tried the fp8 version, which requires less VRAM?
16 GB of RAM and 16 GB of swap. I am using the fp8 version. There is a bug report on GitHub about this (https://github.com/huggingface/transformers/issues/31884) and lots of people seem to be encountering the issue. Is there a way to trace errors in ComfyUI?
tried flux1-schnell with FP8 clip, to generate a 720x720 image with 20 steps on my laptop with:
- Ryzen 5 4600H 6 cores 3GHz
- GTX1650Ti 4GB VRAM
- 16GB Dual Channel RAM
and it took 11 min 19 sec to generate an image, with no other tasks running and only the Flux model loaded.
But hey, at least it works! :)
Runs like a charm. I have an Alienware R17 2 with the 3080 Ti with 16GB of VRAM and 64GB of RAM. I'd say image processing is kind of slow, about 106 seconds, which is almost 2 minutes, but it's not an issue; the quality is unmatched. It takes 80% of VRAM and about 70% of RAM. It doesn't slow down or freeze at any moment.
"YouTube thumbnail for a gaming channel with bold white text saying 'OH MY GOD ???!!!' on a bright red strip at the top. In the center, a strong, excited gamer character with an intense expression is holding a game controller and possibly wearing a headset. The background is a dark gradient, blending deep purples and dark blues with a subtle glow effect, giving a high-energy, futuristic vibe. Replace food doodles with gaming icons, like controllers and pixelated action symbols, scattered lightly for texture. The character has a thick white outline, and dynamic action lines around them add extra intensity."
young girl, a street in Paris
Is it possible to get the prompt as text? I want to connect it to the Extended Save File node.
Node Details
Primitive Nodes (0)
Custom Nodes (12)
ComfyUI
- SamplerCustomAdvanced (1)
- BasicGuider (1)
- KSamplerSelect (1)
- VAEDecode (1)
- RandomNoise (1)
- UNETLoader (1)
- VAELoader (1)
- EmptyLatentImage (1)
- CLIPTextEncode (1)
- BasicScheduler (1)
- DualCLIPLoader (1)
- SaveImage (1)
Model Details
Checkpoints (0)
LoRAs (0)