ComfyUI workflow for Flux (simple)

Description

This is a simple workflow for Flux AI on ComfyUI.

EZ way: just download this one and run it like any other checkpoint ;) https://civitai.com/models/628682/flux-1-checkpoint-easy-to-use

Check out more detailed instructions here: https://maitruclam.com/flux-ai-la-gi/

Just 20 GB, and no need to download a lot of separate files.

There was a bug when I tried to run Flux on A1111, but I was finally able to use it on ComfyUI :V

Old ver:

You will need at least 30 GB of disk space to use them :)

***

If you are a newbie like me, you will be less confused when trying to figure out how to use Flux on ComfyUI.

In addition to this workflow, you will also need:

Download Model:

1. Model: flux1-dev.sft (now named flux1-dev.safetensors on Hugging Face): 23.8 GB

Link:  https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main

Location: ComfyUI/models/unet/

Download CLIP:

1. t5xxl_fp16.safetensors: 9.79 GB

2. clip_l.safetensors: 246 MB

3. (optional: use this one if your machine has less than 32 GB of RAM TvT) t5xxl_fp8_e4m3fn.safetensors: 4.89 GB

Link: https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main

Location: ComfyUI/models/clip/

Download VAE:

1. ae.sft: 335 MB

Link: https://huggingface.co/black-forest-labs/FLUX.1-schnell/blob/main/ae.safetensors

Location: ComfyUI/models/vae/
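To sanity-check the placement after downloading, here is a minimal sketch in Python (assuming a default ComfyUI folder layout and the exact file names listed above; adjust COMFY_DIR to wherever your ComfyUI installation lives):

import os

COMFY_DIR = "ComfyUI"  # adjust to your installation path
expected = {
    "models/unet": ["flux1-dev.sft"],  # or flux1-dev.safetensors, depending on what you downloaded
    "models/clip": ["t5xxl_fp16.safetensors", "clip_l.safetensors"],
    "models/vae": ["ae.safetensors"],
}
for subdir, filenames in expected.items():
    for name in filenames:
        path = os.path.join(COMFY_DIR, subdir, name)
        print("OK     " if os.path.isfile(path) else "MISSING", path)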


If you are using an Ubuntu VPS like me, the command is as simple as this:

# Download t5xxl_fp16.safetensors to the directory ComfyUI/models/clip/

wget -P /home/ubuntu/ComfyUI/models/clip/ https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp16.safetensors

# Download clip_l.safetensors to ComfyUI/models/clip/

wget -P /home/ubuntu/ComfyUI/models/clip/ https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors

# (Optional) Download t5xxl_fp8_e4m3fn.safetensors to ComfyUI/models/clip/

wget -P /home/ubuntu/ComfyUI/models/clip/ https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp8_e4m3fn.safetensors

# Download ae.safetensors to ComfyUI/models/vae/

wget -P /home/ubuntu/ComfyUI/models/vae/ https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/ae.safetensors


For the model itself, you will need to generate a Hugging Face access token and supply it when downloading, because the FLUX.1-dev repository requires you to accept its license first.

I don't know much about tokens myself, so look into the details on your own.
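As a rough sketch of one way to use the token (not the exact method from the missing screenshot): assuming you have created a token at https://huggingface.co/settings/tokens and installed the huggingface_hub package, the gated dev model can be pulled from a script like this. The token string and target path below are placeholders:

from huggingface_hub import hf_hub_download

HF_TOKEN = "hf_xxx"  # placeholder: paste your own Hugging Face access token here

# FLUX.1-dev is gated behind a license agreement, so the token is required;
# the CLIP and VAE repos above are public and do not need it.
hf_hub_download(
    repo_id="black-forest-labs/FLUX.1-dev",
    filename="flux1-dev.safetensors",
    local_dir="/home/ubuntu/ComfyUI/models/unet/",
    token=HF_TOKEN,
)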


Why don't I make a tutorial for Windows 10, 11, or XP? What do you expect from a laptop that can barely run Mario 64 :)

Original tutorial: https://comfyanonymous.github.io/ComfyUI_examples/flux/

Note: It works well with FLUX.1-Turbo-Alpha and human-face LoRAs. πŸ‘€βœ¨

Useful and FREE resources:

❀️ Free servers to make art with Flux: Shakker and Tensor Art and Sea Art

✨ More FLUX LoRAs? A list and detailed description of each LoRA I make is here: https://maitruclam.com/lora πŸ“š

πŸ†• First time using FLUX? An explanation and tutorial for A1111 Forge offline and ComfyUI is here: https://maitruclam.com/flux-ai-la-gi/ 🌐

πŸ› οΈ How to train your own LoRA with Flux? My detailed instructions are here: https://maitruclam.com/training-flux/ πŸ“š

❀️ Donate to me (I would be really surprised if you did that! πŸ˜„): https://maitruclam.com/donate

Find me / Contact for work on:

πŸ“± Facebook: @maitruclam4real
πŸ’¬ Discord: @maitruclam
🌐 Web: maitruclam.com

Node Diagram
Discussion
A
Aderek8 months ago

https://comfyanonymous.github.io/ComfyUI_examples/flux/  ???

πŸ‘1
LΓ’m8 months ago

Thanks. But if you click load more you will see it below

d
dakhari8 months ago

How much system RAM do you have? I have 32GB and Comfy hits 100% usage while loading the models. Locks up my entire PC. 😭

LΓ’m8 months ago

64GB RAM bro :))) and 24GB VRAM, and when it loads it will eat all of the RAM and VRAM :(

K
KOGAN BOSS8 months ago

Ran it with no problems on my 2080ti 11gb and 32gb RAM. It just generates slowly, 4-6 minutes per picture, but that's minor stuff.

(Edited)
S
Sylvain8 months ago

If they don't figure out a way to optimize this model, it won't be a very popular one. Most people won't be able to run it locally.

LΓ’m8 months ago

I hope so, I also find it very annoying every time I load it

Have tested a few things; it follows the prompt very well, very fascinating! My 3080 is having a hard time but still works just fine.

y
yunick_8 months ago

change the weight dtype...

(Edited)
o
online render8 months ago

rtx 4090 will run smooth?


LΓ’m8 months ago

smooth!

A
Alex Benfica8 months ago

I have a 10GB RTX3080 and 64GB RAM, will it work?

LΓ’m8 months ago

try it, and let me know :) thanks

T
Tangerine8 months ago

i5 9600K + 32GB RAM + 4060Ti 16GB

fp8 weight, and RAM is almost fully loaded... Prompt executed in 156 seconds (first run), after that everything is under 30 seconds. But one thing, I don't know if it is just me: model weight at fp8 and clip at fp16 seems to run better and faster?

LΓ’m8 months ago

same :) the first load into the UNet is really annoying; I even tried a stronger VPS and it still took 1-2 minutes to load the first time

Hello, my Comfy can find the other models (VAE, CLIP), but not the UNet:

https://ibb.co/0qTf4w7

https://ibb.co/StjvRBG

(Edited)
LΓ’m8 months ago

try changing .sft to .safetensors, or try this version I uploaded to Civitai: https://civitai.com/models/617609/flux1-dev

Also, if your PC can handle it, the dev version is recommended, because schnell produces noticeably worse images in exchange for saving about 20 GB of disk space
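For reference, a minimal rename sketch in Python (this assumes the .sft files are ordinary safetensors data under a different extension, which is what the rename suggestion relies on; the paths below are placeholders):

import os

# Placeholder paths: point these at wherever you saved the .sft files.
for old_path in ("ComfyUI/models/unet/flux1-dev.sft", "ComfyUI/models/vae/ae.sft"):
    new_path = os.path.splitext(old_path)[0] + ".safetensors"
    if os.path.isfile(old_path):
        os.rename(old_path, new_path)  # only the extension changes, the data is untouched
        print("renamed", old_path, "->", new_path)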

E
Erik Wettergren8 months ago

Any way to use negative prompts with this workflow?


πŸ‘1
K
Kim Karnage8 months ago

no negative prompt needed for that Diffusion Model.

E
Erik Wettergren8 months ago

so how to explicitly tell the model to avoid certain elements of an image?

K
Kim Karnage8 months ago

hmmmm, hard to tell, in fact that model is really new ^^ but I think adding a negative prompt box to the ComfyUI workflow might fix the problem, I guess...


s
shan boe8 months ago

Followed the above instructions and also changed the file extensions from .sft to .safetensors, but I get the error 'KeyError: conv_in.weight'. Any ideas please...

LΓ’m8 months ago

Can you take a screenshot? From just that, I don't know what operating system you're using, what device you have, or what the problem is.

k
klezi8 months ago

update comfy + check if the type in the cliploader is "flux"

F
Filterophilic XX8 months ago

my type doesn't show "flux", how do I add the type in the DualCLIPLoader?

F
Filterophilic XX8 months ago

updated and it worked thanks

j
jayanth hema8 months ago

Renaming the file as-is won't work... run the code below.

from safetensors.torch import load_file, save_file

# Load the weights from the .sft file (it is safetensors data, so use load_file rather than torch.load)
state_dict = load_file("C:\\xxx.sft", device="cpu")

# Save the weights as a .safetensors file
save_file(state_dict, r"C:\xxx.safetensors")

Replace xxx with the appropriate directory/filename. This will write all the weights into the renamed .safetensors file.

S
S4iNT8 months ago

Working fine on an RTX 4070 Ti (12 GB VRAM) with 32 GB RAM and an AMD Ryzen 5900; having high-speed read/write M.2 SSD drives helps a lot.

(Edited)
L
LΓ’m8 months ago

On your computer, how long does it take for a 1024x1024 image with 20 simple steps?

where to put workflow?

L
LΓ’m8 months ago

Drag the json file directly into comfy or load > select the workflow file
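If you would rather queue the workflow from a script than drag it into the UI, here is a minimal sketch against ComfyUI's HTTP API. It assumes the server is running locally on port 8188 and that the JSON was exported with "Save (API Format)" (enable dev mode options in the ComfyUI settings); the file name below is a placeholder:

import json
import urllib.request

SERVER = "http://127.0.0.1:8188"  # assumed local ComfyUI address

# Placeholder file name: a workflow exported via "Save (API Format)", not the plain UI save.
with open("workflow-flux-api.json", "r") as f:
    workflow = json.load(f)

payload = json.dumps({"prompt": workflow}).encode("utf-8")
request = urllib.request.Request(
    SERVER + "/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))  # the server replies with the queued prompt id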

P
Piledriver 3518 months ago

I downloaded all the files to the right folders but it won't run (Windows 11)

S
S4iNT8 months ago

If you have ComfyUI open with the workflow, make sure all the CLIPs and models selected are the ones you downloaded and placed; I had to make those corrections in ComfyUI.


πŸ‘1
S
S4iNT8 months ago

also check that the VAE file is chosen correctly


P
Piledriver 3518 months ago

the workflow won't open


L
LΓ’m8 months ago

In fact, it can run without VAE, just like SD

S
Saint Slogic8 months ago

use load from the web UI of COMFY

and choose workflow file from there to load

P
Piledriver 3518 months ago

starting again :(


πŸ‘1
A
Abel Bustos8 months ago

Is there a flux 1 (Comfyui) workflow to do enhancer and upscaling?

When executing, why does it display "press any key to restart"? What is the cause?

H
Harfaoui Sami8 months ago

I have a GTX 1060 6GB, can it run? And what is the wait time for generations?

F
FrostyDelights8 months ago

i think you need 12gb+ Vram

F
FrostyDelights8 months ago

Just started using Comfy coming from A1111; where do the output images go, and is there a way to see the progress of each gen?

A
Andy Wei8 months ago

My problem is that my ComfyUI cannot load the "Load Diffusion Model" node; it always shows the UNet node there.

I already upgraded my ComfyUI to the latest version but the problem is still there.

I do not know why, or how I can resolve this issue.

Please, can anybody help?

Thanks a lot.

j
jie zhou8 months ago

me the same.

t
themudsy8 months ago

How do I fix these missing nodes? I can't find them anywhere:

  • SamplerCustomAdvanced
  • RandomNoise
  • BasicGuider
t
themudsy8 months ago

Never mind, it was a ComfyUI update issue.

L
LΓ’m8 months ago

Thanks bro, this will help people who have the same problem

D
Dave8 months ago

Hi! I have an RTX 2080 with 8GB of VRAM and 16GB of RAM. I keep running into Reconnecting...

and sometimes it gives me the Type Error.

Do I need more than 8GB of VRAM?

Prestartup times for custom nodes:

  1.3 seconds: C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager


Total VRAM 8192 MB, total RAM 16234 MB

pytorch version: 2.3.1+cu121

Set vram state to: NORMAL_VRAM

Device: cuda:0 NVIDIA GeForce RTX 2080 : cudaMallocAsync

Using pytorch cross attention

[Prompt Server] web root: C:\ComfyUI_windows_portable\ComfyUI\web

### Loading: ComfyUI-Manager (V2.48.5)

### ComfyUI Revision: 2480 [b334605a] | Released on '2024-08-06'


Import times for custom nodes:

  0.0 seconds: C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py

  0.3 seconds: C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager


Starting server


To see the GUI go to: http://127.0.0.1:8188

[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json

[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json

[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json

[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json

[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json

FETCH DATA from: C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]

got prompt


C:\ComfyUI_windows_portable>pause

Press any key to continue . . .


After I press any key to continue, the cmd window closes.

K
Kglay Kophyo8 months ago

Mine is a 4060 Ti (8GB) and it keeps running into Reconnecting...

Please let me know if you find any solution.

L
LΓ’m8 months ago

hello, 2 friends. Maybe the main problem is that the amount of VRAM is too little. The minimum suggestion is 12 GB of VRAM, but in reality, when I run it, it consumes 20 of my 24 GB of VRAM.

Try to find the fp8 version and use it, or use the schnell version :V

D
Dave8 months ago

hello. I have used the fp8 version, since it was recommended if you have less than 12GB of VRAM, but it still doesn't work. My friend and I were on Discord together doing the setup at the same time, and he was running into the same issues as me even though he has 12GB of VRAM.

I have kind of given up and might just build an entirely new rig with a 4090 in there just so I can peacefully do my thing.

L
LΓ’m8 months ago

Hmmm I am using linux vps, just tested this version  https://civitai.com/models/628682  maybe you should try it :V Or check my blog maybe it can help in some way:  https://maitruclam.com/flux-ai-la-gi/

(Edited)
πŸ‘1
D
Dave8 months ago

I'll give it a try tomorrow ;p

Thank you very much for going out of your way though. I highly appreciate that!

D
Dave8 months ago

Okay, I tried it. I have 16GB of RAM (NOT VRAM) and I think that's too little, because I did everything correctly and it just doesn't run. The green highlight reaches the model node and then it stops, even though I have everything else closed.

L
LΓ’m8 months ago

the minimum is 12 GB of VRAM my friend :( not RAM; running it like that will be very pitiful for the machine. You should try going back to 1.5, or XL would be better :(

Tried the schnell version with the fp8 clip on my 4GB VRAM GTX 1650 Ti laptop (16GB RAM) and it took me 679 seconds to generate a 720x720 image with 20 steps

T
Tony Bellomo8 months ago

The model file was renamed.  It is now called flux1-dev.safetensors but the doc says:

Download Model:

1. Model: flux1-dev.sft: 23.8 GB


F
Fu-Chen Tsai8 months ago

Download VAE:

1. ae.sft: 335 MB

Link: https://huggingface.co/black-forest-labs/FLUX.1-schnell/blob/main/ae.sft

Location: ComfyUI/models/vae/

The above link shows "404 entry not found"

Please help.  Thank you.

L
Luke Bubb8 months ago

Download VAE missing:

1. ae.sf

....found here as: ae.safetensors

https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main

L
LΓ’m8 months ago

they updated it, I just updated the link in the post

s
syaiful arifin8 months ago

Thank you for the installation instructions provided.

I'm running the workflow I downloaded from here and it runs smoothly without any issues. My laptop uses an Intel Core Ultra 9, 64GB RAM, and an RTX 4070 Laptop GPU.

Image generation times for dimensions:

1024x1024 is about ~145 seconds

768x1344 is about ~140 seconds

N
Natalia West8 months ago

For the model, you will need to learn how to generate Huggingface Access Tokens and add them to download and use like this:??

What's the point of doing this? The token is generated as shown in the picture, but where do I use it?

L
LΓ’m8 months ago

learn it bro, but I will show how to do it later. Or just download this one and use it like a checkpoint: https://civitai.com/models/628682

P
Peter Parker8 months ago

First of all, thank you very much for the tutorial. But when I try to use the workflow I get this error: "Error occurred when executing VAEDecode: Given groups=1, weight of size [4, 4, 1, 1], expected input[1, 16, 128, 128] to have 4 channels, but got 16 channels instead". I would greatly appreciate some help :/ I use the flux1-dev model, the t5xxl_fp16 clip, and the ClearVAE_V2.3_fp16 VAE. My graphics card is a 4090.

m
madrat07338 months ago

Hello mates, can someone help me? My ComfyUI can't find the missing nodes; BasicGuider, SamplerCustomAdvanced and RandomNoise are all missing.

It's all perfectly updated.

Can someone help me figure out why it can't find them, or give me the link to update them directly?

Thanks

can u make workflow to load input?

a
auspiciousandrew7 months ago

I don't have a unet directory in the models folder since I use StableSwarm. Therefore, I don't have a UNET loader node in Comfy. Anyone else have this problem?

(Edited)
L
LΓ’m7 months ago

update bro

C
CraftWibu7 months ago

How add Lora?

(Edited)
S
Sacso San7 months ago

Can you share a hint for the John Wick picture?

L
LΓ’m7 months ago

disney pixar style john wick and the last dog movie poster with the movie title "john wick and the last dog" below

πŸ‘1
S
Sacso San7 months ago

Thank you for your help, I hope that in the future there will be new working schemes from you.


I have set up ComfyUI on Ubuntu and am attempting to run the workflow downloaded from here for the first time. However, I am getting a clean_up_tokenization_spaces warning and then ComfyUI shuts down - full console output below:


got prompt
model weight dtype torch.bfloat16, manual cast: None
model_type FLUX
/home/garrett/AI/ComfyUI/venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
./launch.sh: line 7: 47378 Killed                  python3 main.py


Where can you set the tokenizer for this?

(Edited)
L
LΓ’m7 months ago

seems strange, I don't get this error so I'm not sure. How many GB of VRAM do you have? Have you tried the fp8 version, which requires less VRAM?

Garrett7 months ago

16GB of RAM and 16GB of swap. I am using the fp8 version. There is a bug tracker on GitHub about this (https://github.com/huggingface/transformers/issues/31884) and lots of people seem to be encountering the issue. Is there a way to trace errors in ComfyUI?

L
LΓ’m7 months ago

hmmm, I will follow it; if I can find the solution I will comment here. If you find out first, please share with everyone, thank you!

πŸ‘1

tried flux1-schnell with FP8 clip, to generate a 720x720 image with 20 steps on my laptop with:


  • Ryzen 5 4600H 6 cores 3GHz
  • GTX1650Ti 4GB VRAM
  • 16GB Dual Channel RAM


and it took 11 min 19 sec to generate an image, with no other tasks running and only the Flux model loaded.

but hey, at least it works! :)


Runs like a charm. I have an Alienware R17 2 with a 3080 Ti with 16GB of VRAM and 64GB of RAM. I could say the image processing is kind of slow, about 106 seconds, which is almost 2 minutes, but it's not an issue; the quality is unmatched. It takes 80% of VRAM and about 70% of RAM. It doesn't slow down or freeze at any moment.

L
LΓ’m5 months ago

hmmm, that is too slow for your PC; using around 20-30 steps is good, much more is not necessary

w
waleed galal5 months ago

"YouTube thumbnail for a gaming channel with bold white text saying 'OH MY GOD ???!!!' on a bright red strip at the top. In the center, a strong, excited gamer character with an intense expression is holding a game controller and possibly wearing a headset. The background is a dark gradient, blending deep purples and dark blues with a subtle glow effect, giving a high-energy, futuristic vibe. Replace food doodles with gaming icons, like controllers and pixelated action symbols, scattered lightly for texture. The character has a thick white outline, and dynamic action lines around them add extra intensity."

b
bruno gandon4 months ago

young girl, the street in Paris

P
Peter Brightman2 months ago

Is it possible to get the prompt as text? I want to connect it to the Extended Save File node.

Author

LΓ’m

Resources (2)

    workflow-flux-lam-comfyui.json (9.3 kB)
    workflow flux.json (9.3 kB)

Reviews

h

huy hieu

3 months ago

great

S

Sacso San

7 months ago

Great no fuss workflow that produces great results!

f

farsinuce

8 months ago

A good start

S

Stefan Segerqvist

8 months ago

LΓ’m

8 months ago

Please read the full content of this workflow from where it came from. Thanks.

#

#NeuraLunk

8 months ago

Looks like a straight copy of: https://comfyanonymous.github.io/ComfyUI_examples/flux/ But thanks for putting it here for all to use ;) #NeuraLunk (mod)

Versions (2)

  • latest (8 months ago)

  • v20240801-174718

Primitive Nodes (0)

Custom Nodes (12)

ComfyUI

  • SamplerCustomAdvanced (1)

  • BasicGuider (1)

  • KSamplerSelect (1)

  • VAEDecode (1)

  • RandomNoise (1)

  • UNETLoader (1)

  • VAELoader (1)

  • EmptyLatentImage (1)

  • CLIPTextEncode (1)

  • BasicScheduler (1)

  • DualCLIPLoader (1)

  • SaveImage (1)

Checkpoints (0)

LoRAs (0)