Change the product background with an image of your choice using Style Transfer.
5.0 · 0 reviews
Description
Introducing our innovative workflow, designed to revolutionize the way you create product backgrounds. With this cutting-edge technology, you can transform any product image into a stunning visual masterpiece, complete with a background that perfectly captures the essence of your brand.
Our workflow takes two inputs: a product image and a reference image. The product image is the focal point, while the reference image serves as the inspiration for the background. Using IPAdapter-based style transfer, our workflow seamlessly combines the two, creating a new image that is both visually striking and true to your brand's aesthetic.
The result is a product image with a background that perfectly complements the product itself, creating a cohesive and professional-looking visual identity for your brand. Whether you're looking to elevate your e-commerce platform, create stunning marketing materials, or simply enhance your product's visual appeal, our workflow is the perfect solution.
You can put any image of your choice in the reference Load Image section. Just upload any product image with a white background and the background will be removed automatically. You can also play around with the base model; I have used the Lightning version of Juggernaut (Juggernaut XL v9 RD Photo 2 Lightning), which gives results in 10 to 15 seconds depending on your specs. Also play around with the Grow Mask option to get better fusion between the product and the generated background.
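If you are curious what the automatic background removal and the Grow Mask step are doing under the hood, here is a minimal stand-alone sketch using the rembg and Pillow Python libraries (the RemBGSession+ / ImageRemoveBackground+ and GrowMask nodes do roughly this inside ComfyUI). The file names and the dilation size are illustrative assumptions, not values taken from the workflow.

# Rough sketch of the background-removal + grow-mask idea, assuming the
# `rembg` and `Pillow` packages are installed. File names and the dilation
# size are placeholders, not settings from the workflow itself.
from rembg import remove, new_session
from PIL import Image, ImageFilter

# 1) Cut the product out of its white background (what the RemBGSession+ /
#    ImageRemoveBackground+ nodes handle in the workflow).
session = new_session("u2net")                      # downloads the u2net model on first use
product = Image.open("product_white_bg.png")        # hypothetical input file
cutout = remove(product, session=session)           # RGBA image with a transparent background

# 2) Take the alpha channel as a mask and "grow" it by a few pixels so the
#    generated background blends more smoothly around the product edges
#    (the role the GrowMask node plays).
mask = cutout.split()[-1]                           # alpha channel as an L-mode mask
grown_mask = mask.filter(ImageFilter.MaxFilter(7))  # dilate by ~3 px; tune to taste

cutout.save("product_cutout.png")
grown_mask.save("product_mask_grown.png")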
Ask me anything!
If you like my work and want to support me, buy me a coffee at this link: https://www.buymeacoffee.com/Darshanchauhan
Credits go to Latent Vision, the maker of IPAdapter.
Node Diagram
Discussion
Excellent workflow idea, thank you!
Hey, you are welcome! You can support me by following my Instagram page: https://www.instagram.com/darshmakes3d?igsh=MTdybzJiZGtqaWFhcg==
Great workflow! One issue I noticed: the text on the final output image sometimes comes out different from the original product picture. Can we fix this?
That is how IPAdapter works; the mask basically just protects the vibe of the image. I tried many different things but got no result. What you can do is put the original image back on top of the final image using Photoshop or something similar.
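If anyone wants to automate that last compositing step instead of doing it in Photoshop, a small Pillow sketch along these lines would paste the original product cutout (using its transparency) back on top of the generated result. The file names are placeholders, and both images are assumed to be the same resolution.

# Paste the original product back over the generated image so any text or
# fine detail on the product is preserved pixel-for-pixel. Assumes both
# images have the same dimensions; file names are placeholders.
from PIL import Image

generated = Image.open("generated_result.png").convert("RGBA")
product_cutout = Image.open("product_cutout.png").convert("RGBA")  # product on a transparent background

restored = Image.alpha_composite(generated, product_cutout)
restored.convert("RGB").save("final_with_original_product.png")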
May I ask which folder depth-anything_vitb14.pth should be placed in?
In the root folder of ComfyUI, inside the models folder, there is a controlnet aux folder. Or you can find it in ComfyUI Manager's Install Models section and it will automatically download to the right location!
Hello sir, how can I fix this problem?
Error occurred when executing IPAdapterUnifiedLoader:
IPAdapter model not found.
File "D:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu_2\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu_2\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu_2\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu_2\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 454, in load_models
raise Exception("IPAdapter model not found.")
Have you found any solution?
Error occurred when executing IPAdapterUnifiedLoader:
IPAdapter model not found.
File "C:\Ai\Data\Packages\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Ai\Data\Packages\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Ai\Data\Packages\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Ai\Data\Packages\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 393, in load_models
raise Exception("IPAdapter model not found.")
Have you found any solution?
There are some nodes that failed to install from GitHub; they might have changed names. Please advise:
RMBG node
ImageRemoveBackground+ (1)
RemBGSession+ (1)
Hello~ May I ask which folder I should place the U2net models in for RemBG Session? Every time during this process the download cannot finish and an error occurs. Thanks a lot~
Error occurred when executing RemBGSession+:
HTTPSConnectionPool(host='github.com', port=443): Read timed out. (read timeout=5)
File "F:\ComfyUI-aki-v1.3 new\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\ComfyUI-aki-v1.3 new\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\ComfyUI-aki-v1.3 new\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "F:\ComfyUI-aki-v1.3 new\custom_nodes\ComfyUI_essentials\essentials.py", line 1631, in execute
return (rembg_new_session(model, providers=[providers+"ExecutionProvider"]),)
Error occurred when executing KSampler:
'ModuleList' object has no attribute '1'
File "/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/ComfyUI/nodes.py", line 1344, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "/ComfyUI/nodes.py", line 1314, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "/ComfyUI/comfy/sample.py", line 37, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/ComfyUI/comfy/samplers.py", line 755, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/ComfyUI/comfy/samplers.py", line 657, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "/ComfyUI/comfy/samplers.py", line 644, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "/ComfyUI/comfy/samplers.py", line 619, in inner_sample
self.conds = process_conds(self.inner_model, noise, self.conds, device, latent_image, denoise_mask, seed)
File "/ComfyUI/comfy/samplers.py", line 581, in process_conds
pre_run_control(model, conds[k])
File "/ComfyUI/comfy/samplers.py", line 439, in pre_run_control
x['control'].pre_run(model, percent_to_timestep_function)
File "/ComfyUI/comfy/controlnet.py", line 306, in pre_run
comfy.utils.set_attr_param(self.control_model, k, self.control_weights[k].to(dtype).to(comfy.model_management.get_torch_device()))
File "/ComfyUI/comfy/utils.py", line 302, in set_attr_param
return set_attr(obj, attr, torch.nn.Parameter(value, requires_grad=False))
File "/ComfyUI/comfy/utils.py", line 296, in set_attr
obj = getattr(obj, name)
File "/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1695, in __getattr__
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
Do you have any recommendations on how to write the prompt for this workflow?
IPAdapterUnifiedLoader
IPAdapter model not found.
🚀 Now you can Edit & Run it online (Fast & Error Free):
https://www.runninghub.cn/post/1878682372546928641/?utm_source=openart
RunningHub – Highly reliable Cloud-Based ComfyUI, Edit and Run Workflows Online, no local installation required. Powered by RTX 4090 GPUs for faster performance and fully error-free node operation.
A large number of high-quality workflows are already published on the site and can be run with one click.
-------------------------------------
👋🏻 Hello darshan chauhan,
I’m Spike from RunningHub. I hope this message finds you well! I wanted to kindly inform you that we’ve copied your workflow to RunningHub (with the original author clearly credited and a link to your OpenArt profile). The purpose is to allow more users who love your work to easily experience it online.
We’d love to hear from you at: spike@runninghub.ai
🎉 If you’re open to joining us, we can transfer the workflows we’ve uploaded to your RunningHub account and provide you with additional benefits as a token of our appreciation.
😞 However, if you’re not comfortable with this approach, we will promptly remove all the uploaded content.
Looking forward to your response!
Node Details
Primitive Nodes (0)
Custom Nodes (24)
ComfyUI
- ControlNetLoader (1)
- PreviewImage (1)
- CheckpointLoaderSimple (1)
- DifferentialDiffusion (1)
- InvertMask (1)
- GrowMask (1)
- CLIPVisionLoader (1)
- KSampler (1)
- InpaintModelConditioning (1)
- ControlNetApplyAdvanced (1)
- CLIPTextEncode (2)
- VAEDecode (1)
- SaveImage (1)
- LoadImage (2)
- ImageRemoveBackground+ (1)
- MaskPreview+ (1)
- RemBGSession+ (1)
- ImageResize+ (1)
- DepthAnythingPreprocessor (1)
- IPAdapterUnifiedLoader (1)
- PrepImageForClipVision (1)
- IPAdapterAdvanced (1)
Model Details
Checkpoints (1)
juggernautXL_v9Rdphoto2Lightning.safetensors
LoRAs (0)