Simple Product Enhancement with IC-Light & IPAdapter

Description

This workflow creates a realistic blend between a subject and a background, including the lighting, using the power of IC-Light. IC-Light may shift your product's colors, so I recommend keeping the CLIP text prompts simple.


IC-Light:  https://huggingface.co/lllyasviel/ic-light

IPAdapter:  https://github.com/tencent-ailab/IP-Adapter


Node Diagram
Discussion
杨俊哲9 months ago

Your treatment of the latent image is brilliant. How did you come up with this approach?


R
Reverent Elusarca9 months ago

Thank you for your kind words. I came up with this idea on the ComfyOrg Discord channel: someone was asking for help blending a product and a background smoothly, and I knew the IC-Light library blends objects perfectly. After a few tries, I arrived at this simple yet effective workflow.

The following is a translation; I hope it's correct:

感谢你的夸奖。我是在ComfyOrg的Discord频道上想到这个方法的。有人在上面寻求帮助,想要将产品和背景平滑地融合在一起,而我知道IC-Light库可以完美地融合对象。经过几次尝试,我得到了这个简单但非常有效的工作流程。

杨俊哲9 months ago

It really does work. I had tried feeding the foreground into the KSampler's latent image before, but the result was always slightly off; combining it with a masked latent composite noticeably improves object consistency (see the sketch below). Essentially, the fc mode is background repainting built on the LayerDiffuse mechanism; many community examples feed a light-source mask image into the KSampler and overlook the importance of the foreground layer.
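A minimal PyTorch sketch of the masked latent composite idea described above; the tensor names and shapes are illustrative assumptions, not the actual outputs of the workflow's nodes:

```python
import torch
import torch.nn.functional as F

def composite_latents(fg_latent: torch.Tensor,
                      bg_latent: torch.Tensor,
                      subject_mask: torch.Tensor) -> torch.Tensor:
    """Paste the foreground latent over the background latent using the subject mask.

    fg_latent, bg_latent: [B, 4, H/8, W/8] VAE-encoded latents.
    subject_mask:         [B, 1, H, W] mask in [0, 1], white where the product is.
    """
    # Downscale the pixel-space mask to the latent resolution.
    mask = F.interpolate(subject_mask, size=fg_latent.shape[-2:], mode="bilinear")
    # Keep the foreground latent inside the mask and the background latent outside,
    # so the sampler starts from a latent that already contains the product.
    return mask * fg_latent + (1.0 - mask) * bg_latent
```

Feeding a composite like this into the KSampler, rather than an empty or background-only latent, is what the comment above credits for the improved object consistency.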

杨俊哲9 months ago

By the way, I used a Hyper checkpoint, which greatly improves both quality and speed; it's a very cost-effective thing to try.

R
Reverent Elusarca9 months ago

Please do share your workflows with me! Seems like you have great ideas

杨俊哲9 months ago

Please leave me an email address.

R
Reverent Elusarca9 months ago

ksmskt@gmail.com

s
senlin cheng8 months ago

Can you share your workflow with me? I want to give it a try. Thanks: 972705994@qq.com

Hi, love your idea, can you share the workflow with me, thanks!! Here's my email: hdmanh37@gmail.com

Y
YourDev5 months ago

Hi, can you also share the workflow with me? Thank you. Email: glidenexus@gmail.com

a
anti_ryou4 months ago

Can you share your workflow with me? Thanks! antiluang@gmail.com

frog_immaculate_167 months ago

Can you share your workflow with me? I want to give it a try. Thanks: 2159890279@qq.com

x
xu x6 months ago

Hi, when I generate product images with the downloaded workflow, the product always changes a little. Could you share the improved workflow? Thank you so much. Here's my email: 793225396@qq.com

L
Lim Davy16 days ago

Could you share your workflow? Much appreciated~~

limdavy28@gmail.com

李煌8 months ago

Can you share your workflow with me? I want to give it a try. Thanks: 389242151@qq.com

r
rayenbs_9273422 days ago

Nice work, bro. Can you share the workflow with me so I can try it? Thanks: rayenbensaid198@gmail.com

F
Frankie Smith7 months ago

Can you share your workflow with me? I want to give it a try. Thanks: 1456893152@qq.com

T
Tony Wu9 months ago

Error occurred like "Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 128, 128] to have 4 channels, but got 8 channels instead"

How could I resolve it?

R
Reverent Elusarca9 months ago

Hi Tony, the IC-Light node only works with Stable Diffusion 1.5 models. This error typically appears when you use an SDXL model instead of SD 1.5.
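If you are unsure which architecture a checkpoint uses, a quick way to tell is to inspect its tensor names. This is only a rough sketch using the safetensors API; the key prefixes are the usual SD 1.x / SDXL conventions and are not taken from this workflow:

```python
from safetensors import safe_open

def checkpoint_family(path: str) -> str:
    """Rough SD 1.x vs SDXL detection by looking at text-encoder key prefixes."""
    with safe_open(path, framework="pt", device="cpu") as f:
        keys = list(f.keys())
    if any(k.startswith("conditioner.embedders.1") for k in keys):
        return "SDXL"      # second (OpenCLIP bigG) text encoder is SDXL-only
    if any(k.startswith("cond_stage_model.transformer") for k in keys):
        return "SD 1.x"    # single CLIP text encoder under this prefix
    return "unknown"

# Path is illustrative; point it at the checkpoint you load in the workflow.
print(checkpoint_family("models/checkpoints/realisticVisionV20_v20.safetensors"))
```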

T
Tony Wu9 months ago

Thank you for the reply.

I switched to the SD 1.5 model, but the error still occurred.

Here is the model I used.

https://civitai.com/models/4201?modelVersionId=130072

R
Reverent Elusarca9 months ago

Hi, I just updated the workflow. Can you please re-download and try again?

T
Tony Wu9 months ago

Much appreciated

沈承志9 months ago

I used the same one as you, but I still got an error.

R
Reverent Elusarca9 months ago

Hi, I just updated the workflow. Can you please re-download and try again?

沈承志9 months ago

Thank you very much for sharing!!! The workflow has been smooth all the way this time

👍1
R
Reijo Loik9 months ago

Does this also work with pictures of people?

R
Reverent Elusarca9 months ago

Yes and no. If you upload your own photo, for example, it will not properly preserve your facial features, but if you generate a person and use that generated image as your base, the results will probably be better.

The main point is that the IC-Light library is really an environmental lighting tool, and I personally don't think it works well on people.

You can check the original repo for more examples:  https://github.com/lllyasviel/IC-Light

👍1
白忆9 months ago

Error occurred when executing easy ipadapterApply: CLIPVision model not found. How do I solve this, and where do I download the model from?

R
Reverent Elusarca9 months ago

There is a detailed explanation here: https://github.com/cubiq/ComfyUI_IPAdapter_plus

Download the following files into /ComfyUI/models/clip_vision (if you don't have the folder, just create it):

https://huggingface.co/h94/IP-Adapter/resolve/main/models/image_encoder/model.safetensors (rename the file to 'CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors')

https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/image_encoder/model.safetensors (rename the file to 'CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors')
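If you would rather script the two downloads above than rename files by hand, a small helper along these lines should work; it uses huggingface_hub, and the target folder assumes a default ComfyUI layout:

```python
import shutil
from pathlib import Path
from huggingface_hub import hf_hub_download

CLIP_VISION_DIR = Path("ComfyUI/models/clip_vision")  # adjust to your install
CLIP_VISION_DIR.mkdir(parents=True, exist_ok=True)

FILES = [
    # (subfolder inside the h94/IP-Adapter repo, name ComfyUI_IPAdapter_plus expects)
    ("models/image_encoder", "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors"),
    ("sdxl_models/image_encoder", "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors"),
]

for subfolder, target_name in FILES:
    cached = hf_hub_download("h94/IP-Adapter", "model.safetensors", subfolder=subfolder)
    shutil.copy(cached, CLIP_VISION_DIR / target_name)
    print("saved", CLIP_VISION_DIR / target_name)
```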

立方体9 months ago

Thanks!

!!! Exception during processing!!! not enough values to unpack (expected 2, got 1)

How can I solve this?

Please help!!! Thank you

B
Baseer Farooqui9 months ago

Hi, I'm using your latest workflow but still getting the same error: "'IPAdapter' object has no attribute 'apply_ipadapter'".

Is there any solution? Thanks.

O
Ofentse Nglazi9 months ago

I'm getting this error on the IPAdapter node, please assist. Thank you.


Error occurred when executing easy ipadapterApply: too many values to unpack (expected 1)
  File "C:\Users\ofent\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\Users\ofent\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\Users\ofent\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\Users\ofent\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\custom_nodes\ComfyUI-Easy-Use\py\easyNodes.py", line 2310, in apply
    model, = cls().apply_ipadapter(model, ipadapter, image, weight, start_at, end_at, weight_type='standard', attn_mask=attn_mask)
J
Jeriff Cheng9 months ago

What if the original image is not square? Stretching it to 1024x1024 will distort it.

R
Reverent Elusarca9 months ago

Enter your original image's resolution into the Empty Latent Image node.
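Note that the Empty Latent Image width and height must be divisible by 8; a tiny, purely illustrative helper for rounding an arbitrary resolution down to a valid size:

```python
def latent_friendly(width: int, height: int, multiple: int = 8) -> tuple[int, int]:
    """Round a resolution down to the nearest multiple the VAE/latent space accepts."""
    return (width // multiple) * multiple, (height // multiple) * multiple

print(latent_friendly(1918, 1078))  # -> (1912, 1072)
```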

J
Jung Peng9 months ago

Exception: Input channels 8 does not match model in_channels 12, 'opt_background' latent input should be used with the IC-Light 'fbc' model, and only with it

How could I resolve it?

🙏3
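For context on that message: the IC-Light 'fc' model expects the noise latent concatenated with a foreground latent (4 + 4 = 8 channels), while the 'fbc' model also expects a background latent (4 + 4 + 4 = 12). The sketch below only illustrates that bookkeeping, reconstructed from the error text; it is not the node's actual code:

```python
def iclight_in_channels(model_variant: str) -> int:
    """Expected UNet input channels for the IC-Light variants (fc=8, fbc=12)."""
    channels = {"fc": 4 + 4,        # noise latent + foreground latent
                "fbc": 4 + 4 + 4}   # noise + foreground + background latent
    return channels[model_variant]

# Connect 'opt_background' only when loading the fbc weights; with the fc weights
# leave it empty, otherwise the concatenated conditioning and the patched UNet disagree.
assert iclight_in_channels("fc") == 8
assert iclight_in_channels("fbc") == 12
```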
谢军9 months ago

The following error occurred during the first use

Error occurred when executing easy ipadapterApply: too many values to unpack (expected 1)

Could you help me? :(

v
visual9 months ago

Hi, how do I fix this error? Error occurred when executing easy ipadapterApply: [Error] To use ipadapterApply, you need to install 'ComfyUI_IPAdapter_plus'.

D
Duplicate Mate8 months ago

How can I solve this?

R
Reverent Elusarca8 months ago

Install IPAdapter Plus and restart your ComfyUI: https://github.com/cubiq/ComfyUI_IPAdapter_plus
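For reference, a manual install without ComfyUI-Manager is just a git clone into custom_nodes followed by a restart; a sketch, with the ComfyUI path as an assumption about your install:

```python
import subprocess
from pathlib import Path

custom_nodes = Path("ComfyUI/custom_nodes")  # adjust to your install location
subprocess.run(
    ["git", "clone", "https://github.com/cubiq/ComfyUI_IPAdapter_plus"],
    cwd=custom_nodes,
    check=True,
)
# Restart ComfyUI afterwards so the new nodes are registered.
```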

k
knowbody.noh9 months ago

Can I run this on a Mac M1 Max? I see the error below and it seems my laptop can't handle this workflow... any other solutions?

Error occurred when executing easy ipadapterApply: Error while deserializing header: HeaderTooLarge
  File "/Volumes/T7/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/Volumes/T7/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/Volumes/T7/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/Volumes/T7/ComfyUI/custom_nodes/ComfyUI-Easy-Use/py/easyNodes.py", line 2763, in apply
    model, ipadapter = self.load_model(model, preset, lora_strength, provider, clip_vision=None, optional_ipadapter=optional_ipadapter, cache_mode=cache_mode)
  File "/Volumes/T7/ComfyUI/custom_nodes/ComfyUI-Easy-Use/py/easyNodes.py", line 2674, in load_model
    clip_vision = load_clip_vision(clipvision_file)
  File "/Volumes/T7/ComfyUI/comfy/clip_vision.py", line 113, in load
    sd = load_torch_file(ckpt_path)
  File "/Volumes/T7/ComfyUI/comfy/utils.py", line 15, in load_torch_file
    sd = safetensors.torch.load_file(ckpt, device=device.type)
  File "/opt/miniconda3/envs/comfyui/lib/python3.10/site-packages/safetensors/torch.py", line 311, in load_file
    with safe_open(filename, framework="pt", device=device) as f:

d
dongxing liu8 months ago

Why connect the IPAdapter at all? After adding it, the generated images all take on an extremely warm, yellow cast; removing it actually makes them look normal. Did I configure something wrong?

s
senlin cheng8 months ago

They really do come out very yellow.

G
Gabi Dobre8 months ago

I've used the same models as you and my result is always a blank image


Requested to load BaseModel
Loading 1 new model
WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 8, 3, 3]) != torch.Size([320, 4, 3, 3])
IC-Light: Merged with diffusion_model.input_blocks.0.0.weight channel changed from torch.Size([320, 4, 3, 3]) to [320, 8, 3, 3]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [02:02<00:00,  4.89s/it]
Requested to load AutoencoderKL
Loading 1 new model
/Users/user/AI/ComfyUI/nodes.py:1435: RuntimeWarning: invalid value encountered in cast
  img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
 


👍1
j
jianfeng998 months ago

After reworking the workflow, I found that adding a module at the end, using the latest XL Tile + an XL model + SD upscaling, can generate more detailed, higher-definition product photography on top of the original result.

R
Reverent Elusarca8 months ago

Could you send me your workflow? I'd love to try it.

Make the foreground blend better with the newly generated background, resize the final image and foreground so that everything is the correct size, and try to keep the foreground intact, preserving its details and color.

王斌8 months ago

Error occurred when executing easy ipadapterApply: 'ModelPatcher' object has no attribute 'get_model_object'
  File "D:\ComfyUI-aki-v1.3\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\ComfyUI-aki-v1.3\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\ComfyUI-aki-v1.3\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-Easy-Use\py\easyNodes.py", line 3231, in apply
    model, images = cls().apply_ipadapter(model, ipadapter, image, weight, start_at, end_at, weight_type='standard', attn_mask=attn_mask)
  File "D:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 695, in apply_ipadapter
    return ipadapter_execute(model.clone(), ipadapter['ipadapter']['model'], ipadapter['clipvision']['model'], **ipa_args)
  File "D:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 445, in ipadapter_execute
    sigma_start = model.get_model_object("model_sampling").percent_to_sigma(start_at)

R
Richard Kane8 months ago

If the subject element is a fixed-ratio picture, do you need to change its perspective to place it into the background?

Do you need to add a 3D model to match the texture and then blend it into the background?

Or is it a highly sampled, textured 3D model blended back into the background?

Such a workflow would be more practical in real-world scenes.

D
Daniel Dash8 months ago

Error occurred when executing LoadAndApplyICLightUnet: Attempted to load SDXL model, IC-Light is only compatible with SD 1.5 models.

D
Daniel Dash8 months ago

How to fix that?

L
Luke Daddy7 months ago

Is it possible to resize to a 16:9 ratio instead of a square image? When I upload an image that’s not in a square aspect ratio, the image gets distorted.

On the Image Resize node, change the resolution to your image's resolution; but if it's too large, I suggest resizing to 768x1344.
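In the same spirit, a small helper (purely illustrative) that scales a non-square image down to roughly SD 1.5-friendly dimensions while keeping its aspect ratio; the one-megapixel budget is an assumption you can adjust:

```python
import math

def fit_resolution(width: int, height: int,
                   max_pixels: int = 1024 * 1024, multiple: int = 8) -> tuple[int, int]:
    """Scale (width, height) down to at most max_pixels, preserving aspect ratio."""
    scale = min(1.0, math.sqrt(max_pixels / (width * height)))
    w = int(width * scale) // multiple * multiple
    h = int(height * scale) // multiple * multiple
    return w, h

# A 16:9 source such as 1920x1080 comes out at roughly 1360x768.
print(fit_resolution(1920, 1080))
```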

F
Frankie Smith7 months ago

Error occurred like "Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 128, 128] to have 4 channels, but got 8 channels instead"

How could I resolve it?

R
Reverent Elusarca7 months ago

You are probably using an SDXL model instead of an SD 1.5 model.

n
niu_bee7 months ago

Error occurred when executing LoadAndApplyICLightUnet: IC-Light: Could not patch calculate_weight - IC-Light: The 'calculate_weight' function does not exist in 'lora'

n
niu_bee7 months ago

The model was downloaded correctly, but IC-Light still fails to run with this error.

H
Hoàng Versus7 months ago

Just press UPDATE ALL in the Manager menu; I solved the same problem that way after hours of looking for a solution...

y
yd d7 months ago


I encountered the same problem. How do I solve it?

n
niu_bee7 months ago

The person above already answered it.

c
chaojichuangyi7 months ago

I also encountered this problem, and it has been solved now.

It is mainly due to a version mismatch between ComfyUI and the IC-Light node. If your ComfyUI is the latest version, update IC-Light to the latest version as well; if ComfyUI is an older version, downgrade IC-Light to the June release.
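If you need to match versions as described above, the IC-Light custom node can be updated or pinned with git; a sketch, assuming the node lives in custom_nodes/ComfyUI-IC-Light and using an illustrative end-of-June cutoff date:

```python
import subprocess

NODE_DIR = "ComfyUI/custom_nodes/ComfyUI-IC-Light"  # assumption: the IC-Light node folder

def update_to_latest() -> None:
    """Use this when ComfyUI itself is up to date."""
    subprocess.run(["git", "-C", NODE_DIR, "pull"], check=True)

def pin_before(date: str = "2024-07-01") -> None:
    """Use this on an older ComfyUI: check out the last node commit before `date`."""
    rev = subprocess.run(
        ["git", "-C", NODE_DIR, "rev-list", "-1", f"--before={date}", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    subprocess.run(["git", "-C", NODE_DIR, "checkout", rev], check=True)
```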

s
smile7 months ago

Can you share your workflow with me? I want to give it a try. Thanks: 2159890279@qq.com

B
Brain wu7 months ago

Error occurred when executing KSampler: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 128, 128] to have 4 channels, but got 8 channels instead. I'm getting this error; I've tried many SD 1.5 models and none work... I also rebuilt the same node connections and it still fails.

A
Adrian Stratulat4 months ago

The object seems to levitate. How can I make it sit on surfaces?

P
Peng yang2 months ago

Hi! Would you be interested in a platform that turns your existing workflows into a fully functional SaaS tool, with website setup, registration, and subscription management, allowing you to monetize your creation? It seems powerful enough to be a professional e-commerce photo editing tool.

I'd be happy to get any replies or insights! I'll also offer an extra 100 USD for a 30-minute talk or message chat.

R
Reverent Elusarca2 months ago

No, thanks. I provide these workflows as open source and only accept donations.

P
Peng yang2 months ago

Sure, that's also great!! Thanks a lot for your kind reply.

称橙逞秤a month ago

🚀 Now you can edit & run it online (fast & error free):

https://www.runninghub.cn/post/1886980817795837953?utm_source=openart

RunningHub – a highly reliable cloud-based ComfyUI: edit and run workflows online, no local installation required. Powered by RTX 4090 GPUs for faster performance and fully error-free node operation.

-------------------------------------

👋🏻 Hello Reverent Elusarca,    
I’m Spike from RunningHub.  I hope this message finds you well! I wanted to kindly inform you that we’ve copied your workflow to RunningHub (with the original author clearly credited and a link to your OpenArt profile). The purpose is to allow more users who love your work to easily experience it online.  
We’d love to hear from you at: spike@runninghub.ai
🎉 If you’re open to joining us, we can transfer the workflows we’ve uploaded to your RunningHub account and provide you with additional benefits as a token of our appreciation.
😞 However, if you’re not comfortable with this approach, we will promptly remove all the uploaded content.    
Looking forward to your response!

u
unkindheada month ago

easy ipadapterApply

tuple index out of range

How to fix that?


u
unkindheada month ago


# ComfyUI Error Report

## Error Details

- **Node ID:** 58

- **Node Type:** easy ipadapterApply

- **Exception Type:** IndexError

- **Exception Message:** tuple index out of range

## Stack Trace

```

 File "e:\AI\ComfyUI\ComfyUI\execution.py", line 327, in execute

   output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

                                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


 File "e:\AI\ComfyUI\ComfyUI\execution.py", line 202, in get_output_data

   return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


 File "e:\AI\ComfyUI\ComfyUI\execution.py", line 174, in _map_node_over_list

   process_inputs(input_dict, i)


 File "e:\AI\ComfyUI\ComfyUI\execution.py", line 163, in process_inputs

   results.append(getattr(obj, func)(**inputs))

                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^


 File "E:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui-easy-use\py\easyNodes.py", line 3358, in apply

   model, ipadapter = self.load_model(model, preset, lora_strength, provider, clip_vision=None, optional_ipadapter=optional_ipadapter, cache_mode=cache_mode)

                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


 File "E:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui-easy-use\py\easyNodes.py", line 3252, in load_model

   clipvision_file = get_local_filepath(model_url, IPADAPTER_DIR, "clip-vit-h-14-laion2B-s32B-b79K.safetensors")

                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


 File "E:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui-easy-use\py\libs\utils.py", line 227, in get_local_filepath

   raise Exception(f'无法从 {url} 下载,错误信息:{str(err.args[0])}')

                                                       ~~~~~~~~^^^


```

## System Information

- **ComfyUI Version:** 0.3.13

- **Arguments:** ComfyUI\main.py

- **OS:** nt

- **Python Version:** 3.11.9 (tags/v3.11.9:de54cf5, Apr  2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]

- **Embedded Python:** true

- **PyTorch Version:** 2.6.0+cu124

## Devices


- **Name:** cuda:0 NVIDIA GeForce RTX 4070 : cudaMallocAsync

 - **Type:** cuda

 - **VRAM Total:** 12878086144

 - **VRAM Free:** 11389965706

 - **Torch VRAM Total:** 268435456

 - **Torch VRAM Free:** 92607882



C
Chinmay Kawale23 days ago

Hey, did you find any fix?

H
Hai Chen11 days ago

Hi, I'm getting: "Attempted to load SDXL model, IC-Light is only compatible with SD 1.5 models."


Versions (2)

  - latest (9 months ago)
  - v20240623-114901

Primitive Nodes (0)

Custom Nodes (24)

ComfyUI

  - VAEDecode (2)
  - EmptyLatentImage (1)
  - SaveImage (2)
  - ImageCompositeMasked (1)
  - PreviewImage (2)
  - ControlNetApply (1)
  - SplitImageWithAlpha (1)
  - KSampler (1)
  - LoadImage (1)
  - CLIPTextEncode (2)
  - CheckpointLoaderSimple (1)
  - easy imageRemBg (1)
  - easy ipadapterApply (1)
  - ImageResize+ (1)
  - DetailTransfer (1)
  - LoadAndApplyICLightUnet (1)
  - ICLightConditioning (1)
  - VAEEncodeArgMax (2)
  - ICLightApplyMaskGrey (1)

Checkpoints (1)

realisticVisionV20_v20.safetensors

LoRAs (0)