Simple Product Enhancement with IC-Relight & IPAdapter
Description
This workflow creates a realistic blend between a subject and a background, including the lighting, using the power of IC-Light. IC-Light may shift your product's colors, so I recommend using simple prompts in the CLIP text encoder.
IC-Light: https://huggingface.co/lllyasviel/ic-light
IPAdapter: https://github.com/tencent-ailab/IP-Adapter
Node Diagram
Discussion
Your handling of the latent image is brilliant. How did you come up with this approach?
Thank you for your kind words. I came up with this idea on the ComfyOrg Discord channel. Someone was asking for help to blend a product and background smoothly, and I was aware that the IC-Light library blends objects perfectly. After a few tries, I developed this simple yet great workflow.
It really does work. I had tried feeding the foreground into the KSampler's latent image before, but the result always fell slightly short; combining it with a masked composite latent noticeably improves subject consistency (see the sketch after this comment). Essentially, fc mode is background repainting built on the LayerDiffuse mechanism, and many community setups feed a light-source mask image into the KSampler while overlooking the importance of the foreground layer.
By the way, using a Hyper checkpoint gives a big boost to both quality and speed, so it's a very cost-effective option.
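For readers who want to see the core of that masked-latent idea, here is a minimal sketch, assuming hypothetical `fg_latent`, `bg_latent`, and `mask` tensors in ComfyUI's [batch, 4, H/8, W/8] latent layout; it illustrates the masked composite described above, not the workflow's actual node code:

```python
import torch

def composite_latents(fg_latent: torch.Tensor,
                      bg_latent: torch.Tensor,
                      mask: torch.Tensor) -> torch.Tensor:
    """Paste the foreground latent over the background latent using the mask,
    so the KSampler starts from a latent that already contains the subject."""
    mask = mask.to(fg_latent.dtype)
    if mask.dim() == 3:            # [B, H, W] -> [B, 1, H, W] for broadcasting
        mask = mask.unsqueeze(1)
    return fg_latent * mask + bg_latent * (1.0 - mask)
```

Keeping the foreground in the initial latent, instead of passing only a light mask, is what preserves subject consistency through sampling.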
Please do share your workflows with me! It seems like you have great ideas.
Leave me an email address, please.
ksmskt@gmail.com
Can you share your workflows with me? I want to have a try. Thanks! 972705994@qq.com
Can you share your workflows with me? I want to have a try. Thanks! 2159890279@qq.com
Nice work bro, can you share the workflow with me so I can try it? Thanks! rayenbensaid198@gmail.com
Can you share your workflows with me? I want to have a try. Thanks! 1456893152@qq.com
May I ask how to download the UNET model?
I got an error: "Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 128, 128] to have 4 channels, but got 8 channels instead"
How can I resolve it?
Hi Tony, the IC-Light node only works with Stable Diffusion 1.5 models. This error typically appears when you use an SDXL model instead of SD 1.5; a quick way to check which family a checkpoint belongs to is sketched below.
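If you are unsure about a checkpoint, you can inspect its tensor names: SDXL checkpoints store their text encoders under `conditioner.embedders.*`, while SD 1.5 uses `cond_stage_model.*`. A minimal sketch, assuming a local .safetensors checkpoint:

```python
from safetensors import safe_open

def looks_like_sdxl(path: str) -> bool:
    """Heuristic: SDXL checkpoints keep their text encoders under
    'conditioner.embedders.*'; SD 1.5 uses 'cond_stage_model.*'."""
    with safe_open(path, framework="pt") as f:
        return any(k.startswith("conditioner.embedders") for k in f.keys())

# Example: an SD 1.5 checkpoint such as Realistic Vision should return False.
print(looks_like_sdxl("realisticVisionV20_v20.safetensors"))
```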
Thank you for the reply.
I switched to the SD 1.5 model, but the error still occurred.
Here is the model I used.
https://civitai.com/models/4201?modelVersionId=130072
Does this also work with pictures of people?
Yes and no. If you upload your own photo, for example, it will not properly match your facial features, but if you generate a person and use that generated person as your base image, the results will probably be better.
The main point is that the IC-Light library is really an environmental lighting tool, and I personally don't think it works well with people.
You can check the original repo for more examples: https://github.com/lllyasviel/IC-Light
Error occurred when executing easy ipadapterApply: ClipVision model not found. How do I fix this, and where should I download the model?
There is an explanation here with details: https://github.com/cubiq/ComfyUI_IPAdapter_plus
Download the following into /ComfyUI/models/clip_vision (if you don't have the folder, just create it); a scripted version is sketched after the links:
https://huggingface.co/h94/IP-Adapter/resolve/main/models/image_encoder/model.safetensors (rename the file to 'CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors')
https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/image_encoder/model.safetensors (rename the file to 'CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors')
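If you would rather script the download and rename, here is a sketch using `huggingface_hub`; the `CLIP_VISION_DIR` path is a placeholder you should point at your own install:

```python
import os
import shutil
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

CLIP_VISION_DIR = "ComfyUI/models/clip_vision"  # adjust to your ComfyUI path
os.makedirs(CLIP_VISION_DIR, exist_ok=True)

# (file inside the h94/IP-Adapter repo, filename IPAdapter Plus expects)
FILES = [
    ("models/image_encoder/model.safetensors",
     "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors"),
    ("sdxl_models/image_encoder/model.safetensors",
     "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors"),
]

for repo_file, target_name in FILES:
    cached = hf_hub_download(repo_id="h94/IP-Adapter", filename=repo_file)
    shutil.copy(cached, os.path.join(CLIP_VISION_DIR, target_name))
```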
(Edited) !!! Exception during processing !!! not enough values to unpack (expected 2, got 1)
How can I solve this?
Please help! Thank you.
Hi, I'm using your latest workflow but still getting the same error: "'IPAdapter' object has no attribute 'apply_ipadapter'"
Is there any solution? Thanks.
I'm getting this error on the IPAdapter node, please assist. Thank you.
Error occurred when executing easy ipadapterApply: too many values to unpack (expected 1)
```
  File "C:\Users\ofent\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\Users\ofent\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\Users\ofent\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\Users\ofent\AppData\Roaming\StabilityMatrix\Packages\ComfyUI\custom_nodes\ComfyUI-Easy-Use\py\easyNodes.py", line 2310, in apply
    model, = cls().apply_ipadapter(model, ipadapter, image, weight, start_at, end_at, weight_type='standard', attn_mask=attn_mask)
```
What if the original image is not square? Stretching it to 1024x1024 will distort it.
Enter your original image's resolution into the Empty Latent node. If you want to preserve the aspect ratio automatically, see the sketch below.
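One way to do this programmatically is to scale to a pixel budget and round both sides to a multiple of 64, which SD-style UNets accept. A sketch; the `target_pixels` budget is an assumption, not a workflow setting:

```python
def latent_friendly_size(width: int, height: int,
                         target_pixels: int = 1024 * 1024,
                         multiple: int = 64) -> tuple[int, int]:
    """Scale (width, height) to roughly target_pixels total pixels,
    preserving aspect ratio and rounding to UNet-friendly multiples."""
    scale = (target_pixels / (width * height)) ** 0.5
    new_w = max(multiple, round(width * scale / multiple) * multiple)
    new_h = max(multiple, round(height * scale / multiple) * multiple)
    return new_w, new_h

print(latent_friendly_size(1920, 1080))  # -> (1344, 768) for a 16:9 photo
```

For a tall 9:16 image this yields 768x1344, matching the resolution suggested elsewhere in this thread.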
Hello, how do I fix this error? Error occurred when executing easy ipadapterApply: [Error] To use ipadapterApply, you need to install 'ComfyUI_IPAdapter_plus'.
How can I solve this?
Install IPAdapter Plus and restart your ComfyUI: https://github.com/cubiq/ComfyUI_IPAdapter_plus
Can I run this on a Mac M1 Max? I get the error below, and it seems my laptop can't handle this workflow. Are there any other solutions?
Error occurred when executing easy ipadapterApply: Error while deserializing header: HeaderTooLarge
```
  File "/Volumes/T7/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/Volumes/T7/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/Volumes/T7/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/Volumes/T7/ComfyUI/custom_nodes/ComfyUI-Easy-Use/py/easyNodes.py", line 2763, in apply
    model, ipadapter = self.load_model(model, preset, lora_strength, provider, clip_vision=None, optional_ipadapter=optional_ipadapter, cache_mode=cache_mode)
  File "/Volumes/T7/ComfyUI/custom_nodes/ComfyUI-Easy-Use/py/easyNodes.py", line 2674, in load_model
    clip_vision = load_clip_vision(clipvision_file)
  File "/Volumes/T7/ComfyUI/comfy/clip_vision.py", line 113, in load
    sd = load_torch_file(ckpt_path)
  File "/Volumes/T7/ComfyUI/comfy/utils.py", line 15, in load_torch_file
    sd = safetensors.torch.load_file(ckpt, device=device.type)
  File "/opt/miniconda3/envs/comfyui/lib/python3.10/site-packages/safetensors/torch.py", line 311, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
```
I've used the same models as you, and my result is always a blank image:
```
Requested to load BaseModel
Loading 1 new model
WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 8, 3, 3]) != torch.Size([320, 4, 3, 3])
IC-Light: Merged with diffusion_model.input_blocks.0.0.weight channel changed from torch.Size([320, 4, 3, 3]) to [320, 8, 3, 3]
100%| ... | 25/25 [02:02<00:00, 4.89s/it]
Requested to load AutoencoderKL
Loading 1 new model
/Users/user/AI/ComfyUI/nodes.py:1435: RuntimeWarning: invalid value encountered in cast
  img = Image.fromarray(np.clip(i, 0, 255).astype(np.uint8))
```
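As an aside, the two `input_blocks.0.0.weight` lines in that log show what the IC-Light patch does to the UNet: it widens the first convolution from 4 input channels (the noise latent) to 8 (noise plus the conditioning latent), which is also why an unpatched or SDXL model fails with the "expected ... 4 channels, but got 8" error quoted earlier. Conceptually the widening looks like this sketch (zero-initialized here for illustration; this is not the node's actual code):

```python
import torch
import torch.nn as nn

# Stand-in for the SD 1.5 UNet's first conv: 4 latent channels in, 320 out.
old_conv = nn.Conv2d(4, 320, kernel_size=3, padding=1)

# IC-Light feeds 8 channels: the noise latent concatenated with a conditioning
# latent. Widen the conv, keeping the original weights for the first 4 channels.
new_conv = nn.Conv2d(8, 320, kernel_size=3, padding=1)
with torch.no_grad():
    new_conv.weight.zero_()
    new_conv.weight[:, :4] = old_conv.weight
    new_conv.bias.copy_(old_conv.bias)

x = torch.randn(2, 8, 128, 128)  # the 8-channel input from the error above
print(new_conv(x).shape)         # torch.Size([2, 320, 128, 128])
```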
After sorting out the workflow, I found that adding a module at the end (the latest XL Tile + an XL model + SD upscaling) can generate more detailed, higher-definition product photography on top of the original.
Could you send me your workflow? I would love to try it.
Make the foreground blend better with the newly generated background, resize the final image and foreground so that everything is the correct size, and try to keep the foreground intact, preserving its details and color.
Error occurred when executing easy ipadapterApply: 'ModelPatcher' object has no attribute 'get_model_object'
```
  File "D:\ComfyUI-aki-v1.3\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\ComfyUI-aki-v1.3\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\ComfyUI-aki-v1.3\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-Easy-Use\py\easyNodes.py", line 3231, in apply
    model, images = cls().apply_ipadapter(model, ipadapter, image, weight, start_at, end_at, weight_type='standard', attn_mask=attn_mask)
  File "D:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 695, in apply_ipadapter
    return ipadapter_execute(model.clone(), ipadapter['ipadapter']['model'], ipadapter['clipvision']['model'], **ipa_args)
  File "D:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 445, in ipadapter_execute
    sigma_start = model.get_model_object("model_sampling").percent_to_sigma(start_at)
```
If the subject element is a fixed-ratio picture, do you need to change the perspective to place it into the background?
Do you need to add a 3D model to match the texture and then blend it into the background?
Or is it a highly sampled 3D model with a texture, blended back into the background?
Such a workflow would be more practical in real-world scenes.
Error occurred when executing LoadAndApplyICLightUnet: Attempted to load SDXL model, IC-Light is only compatible with SD 1.5 models.
How to fix that?
Is it possible to resize to a 16:9 ratio instead of a square image? When I upload an image that’s not in a square aspect ratio, the image gets distorted.
On the Image Resize node, change the resolution to your image's resolution, but if it's too large I suggest resizing to something like 768x1344.
I got an error: "Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 128, 128] to have 4 channels, but got 8 channels instead"
How can I resolve it?
You are probably using an SDXL model instead of an SD 1.5 model.
Error occurred when executing LoadAndApplyICLightUnet: IC-Light: Could not patch calculate_weight - IC-Light: The 'calculate_weight' function does not exist in 'lora'
The model was downloaded correctly, but IC-Light still fails to run and reports this error.
Just press UPDATE ALL in the Manager menu. I solved the same problem that way after hours of looking for a solution...
I also ran into this problem, and it is solved now.
It is mainly caused by a version incompatibility between ComfyUI and IC-Light. If your ComfyUI is the latest version, update IC-Light to the latest version as well; if your ComfyUI is an older version, roll IC-Light back to the June version.
The object seems to levitate. How can I make it sit on other objects?
Hi! Would you be interested in a platform that turns your existing workflows into a fully functional SaaS tool, with website setup, registration, and subscription management, allowing you to monetize your creation? It seems powerful enough to be a professional e-commerce photo-editing tool.
I would be happy to receive any replies or insights, and I'll offer an extra 100 USD for a 30-minute talk or message chat.
🚀 Now you can edit & run it online (fast & error-free):
https://www.runninghub.cn/post/1886980817795837953?utm_source=openart
RunningHub – highly reliable cloud-based ComfyUI. Edit and run workflows online, no local installation required. Powered by RTX 4090 GPUs for faster performance and fully error-free node operation.
-------------------------------------
👋🏻 Hello Reverent Elusarca,
I’m Spike from RunningHub. I hope this message finds you well! I wanted to kindly inform you that we’ve copied your workflow to RunningHub (with the original author clearly credited and a link to your OpenArt profile). The purpose is to allow more users who love your work to easily experience it online.
We’d love to hear from you at: spike@runninghub.ai
🎉 If you’re open to joining us, we can transfer the workflows we’ve uploaded to your RunningHub account and provide you with additional benefits as a token of our appreciation.
😞 However, if you’re not comfortable with this approach, we will promptly remove all the uploaded content.
Looking forward to your response!
easy ipadapterApply
tuple index out of range
How to fix that?
# ComfyUI Error Report
## Error Details
- **Node ID:** 58
- **Node Type:** easy ipadapterApply
- **Exception Type:** IndexError
- **Exception Message:** tuple index out of range
## Stack Trace
```
File "e:\AI\ComfyUI\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\AI\ComfyUI\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "e:\AI\ComfyUI\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "e:\AI\ComfyUI\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui-easy-use\py\easyNodes.py", line 3358, in apply
model, ipadapter = self.load_model(model, preset, lora_strength, provider, clip_vision=None, optional_ipadapter=optional_ipadapter, cache_mode=cache_mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui-easy-use\py\easyNodes.py", line 3252, in load_model
clipvision_file = get_local_filepath(model_url, IPADAPTER_DIR, "clip-vit-h-14-laion2B-s32B-b79K.safetensors")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI\ComfyUI\custom_nodes\comfyui-easy-use\py\libs\utils.py", line 227, in get_local_filepath
raise Exception(f'无法从 {url} 下载,错误信息:{str(err.args[0])}')
~~~~~~~~^^^
```
## System Information
- **ComfyUI Version:** 0.3.13
- **Arguments:** ComfyUI\main.py
- **OS:** nt
- **Python Version:** 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
- **Embedded Python:** true
- **PyTorch Version:** 2.6.0+cu124
## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 4070 : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 12878086144
- **VRAM Free:** 11389965706
- **Torch VRAM Total:** 268435456
- **Torch VRAM Free:** 92607882
Hey, did you find any fix?
Node Details
Primitive Nodes (0)
Custom Nodes (24)
ComfyUI
- VAEDecode (2)
- EmptyLatentImage (1)
- SaveImage (2)
- ImageCompositeMasked (1)
- PreviewImage (2)
- ControlNetApply (1)
- SplitImageWithAlpha (1)
- KSampler (1)
- LoadImage (1)
- CLIPTextEncode (2)
- CheckpointLoaderSimple (1)
- easy imageRemBg (1)
- easy ipadapterApply (1)
- ImageResize+ (1)
- DetailTransfer (1)
- LoadAndApplyICLightUnet (1)
- ICLightConditioning (1)
- VAEEncodeArgMax (2)
- ICLightApplyMaskGrey (1)
Model Details
Checkpoints (1)
realisticVisionV20_v20.safetensors
LoRAs (0)