Product Photography Relight v3 - With internal Frequency Separation for keeping details
Description
Updated v4 here, with IP-Adapters, upscaler, color matching, and more: https://openart.ai/workflows/risunobushi/product-photo-relight-v4---from-photo-to-advertising-preserve-details-color-upscale-and-more/gCMFAhrxCMjqc3Xr3Zsj
Huge thanks to u/Powered_JJ on Reddit, who developed a group of nodes for using the Frequency Separation technique inside ComfyUI, allowing for complete detail preservation after relighting without ever leaving ComfyUI.
Video tutorial: https://youtu.be/3N0vvmAoKJA
And a link to u/Powered_JJ's Reddit thread: https://www.reddit.com/r/comfyui/comments/1cuuz1u/frequency_separation_union_workflow/
It's an all-in-one workflow, where the user can:
- either generate a product, or start from a pre-existing product photo
- segment out the product through a SAM group
- generate a new background
- blend the original product on top of the generated background
- relight through mask (either a pre-existing light mask or by masking the resulting original product + background image in the Preview Bridge node)
- keep finer details (such as text) by using a series of nodes that act as a frequency separation technique (a conceptual sketch of the technique follows this list)
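For readers curious about what those frequency separation nodes do under the hood, here is a minimal NumPy/OpenCV sketch of the technique. This is my own illustration of the general idea; the workflow's actual nodes, blur radius, and blend modes may differ:

```python
# Frequency separation sketch: split an image into a low-frequency layer
# (broad color and light) and a high-frequency layer (edges, texture, text),
# relight only the low frequencies, then paste the original details back.
import cv2
import numpy as np

def frequency_separate(img: np.ndarray, sigma: float = 5.0):
    """Return (low, high) layers such that low + high == img."""
    img = img.astype(np.float32)
    low = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma)  # low frequency: broad tones
    high = img - low                                   # high frequency: fine detail
    return low, high

def recombine(relit_low: np.ndarray, original_high: np.ndarray) -> np.ndarray:
    """Blend the original high-frequency details back onto the relit image."""
    out = relit_low.astype(np.float32) + original_high
    return np.clip(out, 0, 255).astype(np.uint8)

# usage sketch (relight() is a hypothetical stand-in for the IC-Light pass):
# low, high = frequency_separate(product_photo)
# relit = relight(product_photo)
# relit_low, _ = frequency_separate(relit)   # keep only the new lighting
# final = recombine(relit_low, high)         # restore the original details
```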
Want to support me? You can buy me a coffee here: https://ko-fi.com/risunobushi
Cheers!
Andrea
Node Diagram
Discussion
Sorry, I had uploaded a scuffed version of the workflow.
I've updated it with a tidier version now.
Cheers!
I know you have a better workflow than this one, but the pure simplicity of this one has helped me creatively, so please tell me what I'm doing wrong: for some reason the "Blend original subject on top of background" group has stopped working for me. The process stops at "IDLE" on that exact group, but I don't see any issues with it. I've also tried the "if_empty_mask" option.
https://i.ibb.co/bzrBxgd/25235235.png
Pretty good workflow, I love how extensive and detailed the memos included in this workflow are. But it seems the generated background depends on how clean the product image was, so a good clean-up in Photoshop beforehand is recommended!
Yeah, starting from a good image gets you better results, so either a clean, white backdrop or an image where the depth is what you want to get out of the relit picture would work best.
Although I'm quite satisfied with how well it works with bad iphone shots, all things considered.
Glad you like the memos, sometimes I think I leave wayyyy too many of them, but then again I never know who'll end up using my workflows, so I always try to think of any possible issue one might have with them.
Hi Andrea, love the demo. I'd like to test the workflow, but I am getting dependency conflicts. Do you think you could post a print of your package versions, please? (like 'python -m pip freeze' or something like that).
Would be much appreciated.
Thanks anyway!
My dependencies are a mess, I should really try erasing some of the stuff I don't need anymore. Here you go:
accelerate==0.29.3
aiohttp==3.9.3
aiosignal==1.3.1
albumentations==1.3.1
annotated-types==0.6.0
antlr4-python3-runtime==4.9.3
anyio==4.3.0
appdirs==1.4.4
attrs==23.2.0
beautifulsoup4==4.12.2
bitarray==2.9.1
certifi==2023.7.22
cffi==1.16.0
charset-normalizer==3.3.2
clip-interrogator==0.6.0
colorama==0.4.6
coloredlogs==15.0.1
contourpy==1.2.0
cryptography==42.0.5
cycler==0.12.1
Cython==3.0.8
cytoolz==0.12.2
dataclasses-json==0.6.4
Deprecated==1.2.14
diffusers==0.27.0.dev0
distro==1.9.0
easydict==1.11
ecdsa==0.18.0
einops==0.7.0
executing==2.0.1
ffmpeg==1.4
filelock==3.13.1
fire==0.5.0
flatbuffers==24.3.25
fonttools==4.47.2
frozendict==2.4.0
frozenlist==1.4.1
fsspec==2023.10.0
ftfy==6.2.0
gitdb==4.0.11
GitPython==3.1.41
h11==0.14.0
hexbytes==0.3.1
html5lib==1.1
httpcore==1.0.5
httpx==0.27.0
huggingface-hub==0.22.2
humanfriendly==10.0
idna==3.4
imageio==2.33.1
imageio-ffmpeg==0.4.9
importlib-metadata==6.8.0
insightface==0.7.3
Jinja2==3.1.2
joblib==1.3.2
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
kiui==0.2.7
kiwisolver==1.4.5
kornia==0.7.2
kornia_rs==0.1.3
lark-parser==0.12.0
lazy_loader==0.4
llvmlite==0.42.0
lxml==5.0.0
MarkupSafe==2.1.3
marshmallow==3.21.1
matplotlib==3.8.4
mnemonic==0.20
more-itertools==10.2.0
mpmath==1.3.0
msvc-runtime==14.34.31931
multidict==6.0.5
multitasking==0.0.11
mypy-extensions==1.0.0
networkx==3.2.1
numba==0.59.1
numpy==1.26.4
nvdiffrast @ git+https://github.com/NVlabs/nvdiffrast@c5caf7bdb8a2448acc491a9faa47753972edd380
objprint==0.2.3
omegaconf==2.3.0
onnx==1.15.0
onnxruntime==1.17.3
onnxruntime-gpu==1.17.1
open-clip-torch==2.24.0
openai==1.23.6
openai-whisper @ git+https://github.com/openai/whisper.git@ba3f3cd54b0e5b8ce1ab3de13e32122d0d5f98ab
opencv-contrib-python-headless==4.9.0.80
opencv-python==4.9.0.80
opencv-python-headless==4.9.0.80
packaging==23.2
pandas==2.1.4
parsimonious==0.9.0
peewee==3.17.0
piexif==1.1.3
Pillow==9.5.0
platformdirs==4.2.1
pooch==1.8.1
prettytable==3.9.0
protobuf==4.25.2
psutil==5.9.7
py-cpuinfo==9.0.0
pycparser==2.22
pycryptodome==3.19.1
pydantic==2.7.1
pydantic_core==2.18.2
pygltflib==1.16.2
PyMatting==1.1.12
pymeshlab==2023.12.post1
pyOpenSSL==24.1.0
pyparsing==3.1.1
pyreadline3==3.4.1
python-dateutil==2.8.2
pytz==2023.3.post1
pywin32==306
PyYAML==6.0.1
qudida==0.0.4
referencing==0.35.0
regex==2023.10.3
rembg==2.0.56
requests==2.31.0
rlp==4.0.0
rpds-py==0.18.0
safetensors==0.4.3
scikit-image==0.23.2
scikit-learn==1.3.2
scipy==1.13.0
segment-anything==1.0
sentencepiece==0.1.99
simple-lama-inpainting==0.1.2
six==1.16.0
smmap==5.0.1
sniffio==1.3.1
soupsieve==2.5
sympy==1.12
termcolor==2.4.0
threadpoolctl==3.2.0
tifffile==2023.12.9
tiktoken==0.6.0
timm==0.6.13
tokenizers==0.19.1
toolz==0.12.0
torch==2.1.2+cu121
torchaudio==2.1.2+cu121
torchsde==0.2.6
torchvision==0.16.2
tqdm==4.66.2
trampoline==0.1.2
transformers==4.40.1
trimesh==4.0.5
typing-inspect==0.9.0
typing_extensions==4.8.0
tzdata==2023.4
urllib3==2.0.7
varname==0.13.0
watchdog==4.0.0
wcwidth==0.2.13
webencodings==0.5.1
wrapt==1.16.0
xatlas==0.0.9
yarl==1.9.4
zipp==3.17.0
You legend! Thanks a lot!
Amazing Tree. I love it. I'm noticing that my final images are sometimes different shades of color after rendering. I'm wondering if there is some way of QCing color (color check, etc.) within this flow so the relight isn't destroying the initial colours of the images. Thank you again for this flow - it's super epic.
There is - sorry, I haven't had the time to release a new version, but we figured out that a color matching pass can do wonders on selective areas: either the whole subject, by employing the SAM mask, or custom areas based on light / shadow / color, with an RGB / CMYK / BW to mask node.
I will try to update it in the next few days.
Thanks a lot, having the same issue!
I've got this error when running it, any advice?
Error occurred when executing ImageResize+: not enough values to unpack (expected 4, got 3)

File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_essentials\image.py", line 254, in execute
  _, oh, ow, _ = image.shape
I'm not sure why the Image Resize node I'm using gives this error at random times. Going forward, I'll use another one.
You can just swap in another Image Resize node and it should work.
Also, you can refer to the updated v4 of this workflow, where we deal with color matching, upscaling, etc.
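For anyone hitting the same "expected 4, got 3" unpack error, here is a small sketch of what is going on in the quoted `_, oh, ow, _ = image.shape` line, plus a defensive fix. This is my reconstruction of the failure, not the node's actual implementation:

```python
# ComfyUI IMAGE tensors are normally 4-D batches of shape (B, H, W, C); the
# error means a 3-D (H, W, C) tensor reached the resize node instead.
import torch

image = torch.rand(512, 768, 3)   # a 3-D tensor without the batch dimension

# _, oh, ow, _ = image.shape      # ValueError: not enough values to unpack

if image.dim() == 3:              # defensive: restore the missing batch dim
    image = image.unsqueeze(0)    # now (1, 512, 768, 3)
_, oh, ow, _ = image.shape        # unpacks cleanly
```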
Hi, I get this error, any help?
Error occurred when executing MaskFromColor+: The size of tensor a (1024) must match the size of tensor b (3) at non-singleton dimension 3
It should mean there's a shape mismatch between your input image (1024 pixels along the dimension where the node expects the 3 color channels) and the color it's compared against inside the Mask From Color node. You can swap in another color-to-mask node if the error persists.
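One concrete way to reproduce that exact message (my reading of the error text, not the node's verified code): the node compares each pixel against an RGB color along the last tensor dimension, so a tensor whose last dimension isn't the 3 color channels fails at dimension 3:

```python
# The Mask From Color comparison broadcasts an RGB color against the image's
# last dimension; a channels-first (B, C, H, W) tensor puts the 1024-pixel
# width there instead of the 3 channels, giving "1024 vs 3 at dimension 3".
import torch

color = torch.tensor([1.0, 1.0, 1.0])    # the color to turn into a mask

bad = torch.rand(1, 3, 1024, 1024)       # channels-first layout
# bad == color                           # RuntimeError: 1024 vs 3 at dim 3

good = bad.permute(0, 2, 3, 1)           # channels-last: (1, 1024, 1024, 3)
mask = (good == color).all(dim=-1)       # (1, 1024, 1024) boolean mask
```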
Wish I understood what you are suggesting. I have this same problem and can't get past it in ComfyUI. The Mask From Color node has a purple outline, but the red outline is on the Image Resize node as it arrives in the relight section. I don't have an alternative Mask From Color node to use and am not sure what needs to be done to fix this issue.
EDIT: Fixed it by bypassing the color mask as suggested in your video, but then ran into a problem with the Image Resize node alone, apparently due to a ComfyUI update; selecting keep_proportion as the method of the Image Resize nodes solved it and I could get past this concern.
Now I'm just hitting a "math domain error" on the last step out of "Blend High Frequency Layers", but not getting much info other than that. Will persevere.
Change the values:
black_level = 80.0
mid_level = 130.0
white_level = 180.0
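For context on why those values clear the "math domain error": a typical levels adjustment derives its gamma from the midtone via a logarithm, which blows up whenever mid_level is not strictly between black_level and white_level. The formula below is my reconstruction of the usual levels math, not the node's verified source:

```python
import math

def levels_gamma(black: float, mid: float, white: float) -> float:
    midtone = (mid - black) / (white - black)  # normalized midtone position
    return math.log(0.5) / math.log(midtone)   # "math domain error" if midtone <= 0

# levels_gamma(83.0, 1.0, 172.0)   # mid below black -> log of a negative -> crash
print(levels_gamma(80.0, 130.0, 180.0))  # 1.0: mid safely between black and white
```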
Hi Andrea, this looks awesome. I am a product photographer and would like to integrate this into my business. I downloaded the workflow and after a while got it going. I tried v4, however it kept crashing Comfy for some reason, so I would like to get v3 up and going to my satisfaction first. After some guidance: have a look at the images below (before and after). As you can see, it has changed the product (cake) and plate. Are there settings to prevent this from happening? I have other questions but would like to fix one at a time. Hmm, looks like I can't upload any images here?
I can't see the images, but since you want to keep the whole subject intact you could just bypass the whole "re-background" group and plug the original picture into the relight group, skipping the background alteration entirely. You'd still need a way, like the SAM mask, to preserve the original details, though.
You also want to keep a lower CFG, say between 1.1 and 1.5, in the relight KSampler, so it doesn't change a ton of stuff when relighting.
Hi, amazing workflow, really love the demo video. I just started learning ComfyUI, but I am getting this error while testing it. Please help!
Error occurred when executing GroundingDinoSAMSegment (segment anything): Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated: 2.62 GiB
Requested: 768.00 MiB
Device limit: 4.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction): 17179869184.00 GiB

File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_segment_anything\node.py", line 325, in main
  (images, masks) = sam_segment(
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_segment_anything\node.py", line 243, in sam_segment
  predictor.set_image(image_np_rgb)
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\segment_anything\predictor.py", line 60, in set_image
  self.set_torch_image(input_image_torch, image.shape[:2])
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
  return func(*args, **kwargs)
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_segment_anything\sam_hq\predictor.py", line 56, in set_torch_image
  self.features = self.model.image_encoder(input_image)
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\segment_anything\modeling\image_encoder.py", line 112, in forward
  x = blk(x)
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\segment_anything\modeling\image_encoder.py", line 174, in forward
  x = self.attn(x)
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
  return forward_call(*args, **kwargs)
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\segment_anything\modeling\image_encoder.py", line 234, in forward
  attn = add_decomposed_rel_pos(attn, q, self.rel_pos_h, self.rel_pos_w, (H, W), (H, W))
File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\segment_anything\modeling\image_encoder.py", line 358, in add_decomposed_rel_pos
  attn.view(B, q_h, q_w, k_h, k_w) + rel_h[:, :, :, :, None] + rel_w[:, :, :, None, :]
SAM needs more VRAM than your hardware has, so it's giving you this error. For easy pictures, like a single subject over a backdrop, you could substitute the SAM group with a Remove Background node and still achieve the desired results. SAM is there as a catch-all for more complex images.
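If you want to try that substitution outside the graph first, here is a minimal sketch using rembg (which already appears in the dependency list above); the file names are placeholders:

```python
# Lightweight alternative to the GroundingDINO + SAM group for simple shots:
# rembg removes the background, and its alpha channel doubles as the subject mask.
from PIL import Image
from rembg import remove

product = Image.open("product.jpg")   # hypothetical input photo
cutout = remove(product)              # RGBA image with transparent background
mask = cutout.split()[-1]             # alpha channel = subject mask
mask.save("subject_mask.png")         # feed this in place of the SAM mask
```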
It shows a lot of conflicting nodes. Do you know a solution for this? I don't want to break anything.
Conflicting nodes don't mean that you can't install them; it's just a heads-up from the manager that a node may conflict with other nodes, specified in the conflicting-nodes info. I have a ton of potentially conflicting nodes in my install, and I've never had any issues.
If you don't want to trust a random internet stranger though, which is absolutely understandable, you could create a fresh ComfyUI install, create a separate venv, and run the workflow there. It's a bit time-consuming, but it's not too hard to do.
Error occurred when executing ImageResize+: not enough values to unpack (expected 4, got 3)
How to fix that?
It appears that the image tensor getting passed to the resize node has one fewer value (probably alpha?) than normal. Can you try swapping out the Image Resize node for another resize node and check whether you get the same error?
Error occurred when executing KSampler:
Input channels 8 does not match model in channels 12; the 'opt_background' latent input should be used with the IC-Light 'fbc' model, and only with it

File "E:\ComfyUI-aki-v1.3\execution.py", line 152, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\ComfyUI-aki-v1.3\execution.py", line 82, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\ComfyUI-aki-v1.3\execution.py", line 75, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\ComfyUI-aki-v1.3\nodes.py", line 1373, in sample
  return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "E:\ComfyUI-aki-v1.3\nodes.py", line 1343, in common_ksampler
  samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "E:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
  return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "E:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff
You either plugged a latent into the opt_background input for IC-Light, or you're using the FBC model instead of the FC model.
The model I use, FC, doesn't use the opt_background latent.
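A hedged summary of the channel math behind the error (the file names are from the lllyasviel/ic-light repository; the per-model breakdown is my summary of how the two variants are conditioned, not verified node code):

```python
# IC-Light "fc" concatenates the noise latent with a foreground latent (8 channels);
# "fbc" additionally concatenates a background latent via opt_background (12 channels).
IN_CHANNELS = {
    "iclight_sd15_fc.safetensors":  4 + 4,      # noise + foreground = 8
    "iclight_sd15_fbc.safetensors": 4 + 4 + 4,  # noise + foreground + background = 12
}

conditioning_channels = 8  # what this workflow's fc-style wiring produces
for model, expected in IN_CHANNELS.items():
    verdict = "OK" if conditioning_channels == expected else "channel mismatch"
    print(f"{model}: expects {expected} in-channels -> {verdict}")
# loading the fbc model with fc-style conditioning reproduces the error above
```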
I've just used this model, but I still get the warning...
iclight_sd15_fbc.safetensors
Download website: https://huggingface.co/lllyasviel/ic-light/tree/main
I keep getting this KSampler attribute error. Does anyone know how to fix it or what causes it?
Error occurred when executing KSampler: 'ModuleList' object has no attribute '1'

File "/Users/baslefeber/Documents/ComfyUI-master/execution.py", line 317, in execute
  output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/Users/baslefeber/Documents/ComfyUI-master/execution.py", line 192, in get_output_data
  return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/Users/baslefeber/Documents/ComfyUI-master/execution.py", line 169, in _map_node_over_list
  process_inputs(input_dict, i)
File "/Users/baslefeber/Documents/ComfyUI-master/execution.py", line 158, in process_inputs
  results.append(getattr(obj, func)(**inputs))
File "/Users/baslefeber/Documents/ComfyUI-master/nodes.py", line 1429, in sample
  return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "/Users/baslefeber/Documents/ComfyUI-master/nodes.py", line 1396, in common_ksampler
  samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "/Users/baslefeber/Documents/ComfyUI-master/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 9, in informative_sample
  return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "/Users/baslefeber/Documents/ComfyUI-master/comfy/sample.py", line 43, in sample
  samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/Users/baslefeber/Documents/ComfyUI-master/comfy/samplers.py", line 829, in sample
  return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/Users/baslefeber/Documents/ComfyUI-master/comfy/samplers.py", line 729, in sample
  return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "/Users/baslefeber/Documents/ComfyUI-master/comfy/samplers.py", line 716, in sample
  output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "/Users/baslefeber/Documents/ComfyUI-master/comfy/samplers.py", line 691, in inner_sample
  self.conds = process_conds(self.inner_model, noise, self.conds, device, latent_image, denoise_mask, seed)
File "/Users/baslefeber/Documents/ComfyUI-master/comfy/samplers.py", line 653, in process_conds
  pre_run_control(model, conds[k])
File "/Users/baslefeber/Documents/ComfyUI-master/comfy/samplers.py", line 501, in pre_run_control
  x['control'].pre_run(model, percent_to_timestep_function)
File "/Users/baslefeber/Documents/ComfyUI-master/comfy/controlnet.py", line 330, in pre_run
  super().pre_run(model, percent_to_timestep_function)
File "/Users/baslefeber/Documents/ComfyUI-master/comfy/controlnet.py", line 254, in pre_run
  super().pre_run(model, percent_to_timestep_function)
File "/Users/baslefeber/Documents/ComfyUI-master/comfy/controlnet.py", line 94, in pre_run
  self.previous_controlnet.pre_run(model, percent_to_timestep_function)
File "/Users/baslefeber/Documents/ComfyUI-master/comfy/controlnet.py", line 361, in pre_run
  comfy.utils.set_attr_param(self.control_model, k, self.control_weights[k].to(dtype).to(comfy.model_management.get_torch_device()))
File "/Users/baslefeber/Documents/ComfyUI-master/comfy/utils.py", line 591, in set_attr_param
  return set_attr(obj, attr, torch.nn.Parameter(value, requires_grad=False))
File "/Users/baslefeber/Documents/ComfyUI-master/comfy/utils.py", line 585, in set_attr
  obj = getattr(obj, name)
File "/Users/baslefeber/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1729, in __getattr__
  raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
Hi! First time hearing about this specific error, but I found this while searching for "no attribute '1'": https://github.com/LucianoCirino/efficiency-nodes-comfyui/issues/227 - hope it helps.
Love this workflow. I'm 5 days into ComfyUI and the whole SD model world, and stumbling upon your YT channel was the greatest thing that has happened. One error I'm running into with this workflow is the ValueError 308 in Image Levels Adjustment (attached image).
When I use the values provided by you (black level 83 / mid level 1 / white level 172) it gives me the error, but when I use the default values of the Image Levels Adjustment node (black level 0 / mid level 127.5 / white level 255) it renders through, though the image is washed out; and if I change the values to your settings it crashes with the error shown in the attached image. Any help or workaround would be greatly appreciated.
Hi! I haven't checked this workflow in a hot minute, as it's an old version of what I'm currently using, but you can swap out the Image Levels Adjustment node for any other levels-adjustment node, or try a newer version of the relighting workflows - or even try different versions and then create your own based on parts of the different versions.
Maybe the levels-adjustment node I was using got updated, and those values are not the right ones anymore.
Roger that, sir. Thank you for the tips, I will try newer versions!
I've learned EVERYTHING I know about ComfyUI through your videos.
Keep up the great work, subscriber forever haha
thank you for your kind words!
Did you solve this? I am running into it now.
Same error with both this and the new workflow, I'm afraid. Those Image Levels values are no longer valid and throw an error, and I have no idea what the levels should be using a new node. All washed-out colours atm...
Hi. I am getting this error. Could you please guide me as to how I can resolve it?
Error occurred when executing Image Levels Adjustment: math domain error
I am having this also, did you solve it?
Change the values:
black_level = 80.0
mid_level = 130.0
white_level = 180.0
ImageResize+: not enough values to unpack (expected 4, got 3)
Hello, I love this workflow but just can't seem to get it to work. I have successfully run many ComfyUI workflows on my MacBook Pro, but with this one I get the error below.
got prompt
Failed to validate prompt for output 76:
* ControlNetLoader 316:
- Value not in list: control_net_name: 'control_v11p_sd15_lineart.pth' not in ['diffusion_pytorch_model_promax.safetensors']
* ControlNetLoader 215:
- Value not in list: control_net_name: 'control_sd15_depth.pth' not in ['diffusion_pytorch_model_promax.safetensors']
I do have the 'diffusion_pytorch_model_promax.safetensors' model in my /models/controlnet/ folder, but I don't know where to find or install the indicated .pth files. Can anyone offer help? Thank you!
I got an error - Missing Node Types. When loading the graph, the following node types were not found: ImageGaussianBlur.
Does anybody know how to solve that issue?
I know this is unrelated, but do you know a way to fix this issue?
GroundingDinoSAMSegment (segment anything)
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
Since my computer can't run the Segment Anything models, how can I remove their nodes without affecting the rest? I want to manually upload the masked object instead of relying on SAM.
I've been trying to get this to work for a while now and I don't know what I am doing wrong, but it doesn't work. I have installed everything from the manager menu. When I run "Queue Prompt" it starts and actually does a few things, but I never get an end result. Some of the boxes, such as "Load and Apply IC-Light", get a red border, but I don't know what that means. There is no message, no report, nothing. It just stays idle and doesn't finish the process. Any ideas what it could be? Thank you in advance.
🚀 Now you can edit & run it online (fast & error-free):
https://www.runninghub.cn/post/1890226586728407041/?utm_source=openart
RunningHub - highly reliable cloud-based ComfyUI. Edit and run workflows online, no local installation required. Powered by RTX 4090 GPUs for faster performance and fully error-free node operation.
-------------------------------------
👋🏻 Hello andrea baioni,
I’m Spike from RunningHub. I hope this message finds you well! I wanted to kindly inform you that we’ve copied your workflow to RunningHub (with the original author clearly credited and a link to your OpenArt profile). The purpose is to allow more users who love your work to easily experience it online.
We’d love to hear from you at: spike@runninghub.ai
🎉 If you’re open to joining us, we can transfer the workflows we’ve uploaded to your RunningHub account and provide you with additional benefits as a token of our appreciation.
😞 However, if you’re not comfortable with this approach, we will promptly remove all the uploaded content.
Looking forward to your response!
Hey Andrea, awesome work.
I haven't had any experience with Stable Diffusion or ComfyUI until now. Your video inspired me to try it out.
I am using runcomfy.com. It took a moment, but I was able to install all missing nodes/models.
However, one node remains red when running the queue: the Image Resize node in the relight group.
ComfyUI Essentials (with ImageResize+) and Various ComfyUI Nodes by Type (with JWImageResize) are both installed.
Any hints as to what I am missing?
Node Details
Primitive Nodes (27)
Note (26)
Reroute (1)
Custom Nodes (74)
ComfyUI
- CLIPTextEncode (6)
- PreviewImage (9)
- VAEEncode (2)
- CheckpointLoaderSimple (2)
- MaskToImage (4)
- VAEDecode (3)
- KSampler (3)
- ImageInvert (4)
- EmptyLatentImage (2)
- SplitImageWithAlpha (2)
- ControlNetLoader (2)
- ControlNetApply (2)
- GrowMask (1)
- LoadImage (2)
- LoadImageMask (1)
- ImageResize+ (1)
- MaskFromColor+ (1)
- SAMLoader (1)
- PreviewBridge (5)
- ImpactGaussianBlurMask (2)
- DepthAnythingPreprocessor (1)
- AnimeLineArtPreprocessor (1)
- ImageGaussianBlur (2)
- ICLightConditioning (1)
- LoadAndApplyICLightUnet (1)
- GrowMaskWithBlur (1)
- RemapMaskRange (1)
- GroundingDinoModelLoader (segment anything) (1)
- GroundingDinoSAMSegment (segment anything) (1)
- JWImageResize (1)
- Image Blending Mode (5)
- Image Levels Adjustment (1)
- Image Blend by Mask (2)
Model Details
Checkpoints (2)
epicrealism_naturalSinRC1VAE.safetensors
LoRAs (0)