Background Replacer for Products & Portraits, Lighting Adjustment & Detail Preservation

Description

Table of Contents

  • New Features Compared to Previous Versions
  • Video Tutorial and Model Installation
  • Key Differences Between ‘Flux Background Replacer V2’ and ‘Flux Background Replacer V3’

Important: All four versions (V1–V4) can be downloaded by clicking the buttons on the right, ordered from top to bottom as V4, V3, V2, V1.

This ComfyUI workflow is a powerful tool designed to help creators effortlessly replace the background of an image while ensuring that the subject blends seamlessly with the new environment. Whether you’re working on portraits, products, or complex compositions, this workflow combines cutting-edge models like SDXL and Flux to deliver stunning results with minimal effort. It’s ideal for creators who want high-quality background replacement without the performance hit of traditional methods.

New Features Compared to Previous Versions

  • Faster, More Efficient: This version introduces a Lightning variant of SDXL, which cuts VRAM usage to just 6GB while retaining most of the detail. Lightning SDXL needs only 10 sampling steps, making the relighting process noticeably faster without sacrificing quality.
  • Enhanced Background Removal: The new workflow utilizes BiRefNet and RMBG-2.0 models, which provide more accurate and efficient background removal, especially for fine details like hair and intricate subject edges. Additionally, you can toggle the “process_detail” setting to fine-tune the result for complex subjects, making it more adaptable.
  • Realistic Background Generation: A Flux-based fine-tuned checkpoint enhances background generation, producing more realistic and detailed backgrounds with smoother transitions between the subject and the new environment.
  • Greater Masking Control: The updated version adds more customization options for masking and painting, including the ability to edit masks in real-time using the “Mask Editor” and paint over areas to refine details like hair and subject edges.
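
Independently of the specific nodes, the core of any background replacement is alpha-compositing the extracted subject over the newly generated scene. A minimal sketch of that step with NumPy (the mask here is a stand-in for what BiRefNet/RMBG-2.0 would produce; shapes and names are illustrative, not the workflow's actual API):

```python
import numpy as np

def composite(subject: np.ndarray, background: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Alpha-composite subject over background.

    subject, background: float arrays of shape (H, W, 3) in [0, 1]
    mask: float array of shape (H, W) in [0, 1], 1.0 where the subject is
    """
    alpha = mask[..., None]  # expand to (H, W, 1) so it broadcasts over RGB
    return alpha * subject + (1.0 - alpha) * background

# Toy example: white "subject" on the left half, composited over a black background
subject = np.ones((4, 4, 3))
background = np.zeros((4, 4, 3))
mask = np.zeros((4, 4))
mask[:, :2] = 1.0

out = composite(subject, background, mask)
```

A soft (non-binary) mask is what makes fine edges like hair blend smoothly, which is why the workflow exposes mask editing and the "process_detail" toggle.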

Video Tutorial and Model Installation

https://youtu.be/Dxv75kTYvbo

-----------------------------------------------------------

Key Differences Between ‘Flux Background Replacer V2’ and ‘Flux Background Replacer V3’:

  1. Node Count and Complexity: Flux Background Replacer V2: A highly complex workflow featuring over 108 nodes, ideal for scenarios where precise image manipulation and blending are essential. Flux Background Replacer V3: With a more streamlined design, V3 contains 80 nodes, providing a simpler and more efficient workflow without compromising on quality.
  2. ControlNet Model and Edge Handling: Flux Background Replacer V2: Utilizes the ControlNet Canny model from Xlabs, which is specifically designed for advanced edge detection and outline preservation. This, combined with its relighting function, allows for photorealistic integration of subjects into the new background. Flux Background Replacer V3: Uses the ControlNet Depth model and Upscaler model from Jasper AI, optimized to control edges and outlines without the need for additional specialized nodes. This makes it easy to use the ‘Apply ControlNet’ node, saving time and simplifying setup.

For a detailed walkthrough, watch the YouTube video tutorial:

https://youtu.be/0UG7MOCYh4I

Bilibili homepage: https://space.bilibili.com/3546611913329493

-------------------------------------------------------------------------------------------------------------------

V2 Update: Fixed a bug caused by an update to the Xlabs Sampler, which added a new denoising-strength parameter. Don't forget to set “image_to_image_strength” to 1.

------------------------------------------------------------------------------------------------------------------

Welcome to a revolutionary ComfyUI workflow designed to simplify the process of changing photo backgrounds, whether for products, people, or various objects. This workflow harnesses the power of Flux models to create seamless background replacements, allowing for creative and professional edits with just a few steps.

Ideal for portrait photographers, product photographers, digital artists, and content creators, this workflow enhances images while preserving essential details, offering versatility and precision.


Functionality

This ComfyUI workflow specializes in changing backgrounds without compromising the integrity of the subject. It operates through five distinct node groups, each dedicated to a specific task:

  1. Background Removal and Subject Placement: Isolates the subject and positions it on a gray reference background.
  2. Background Generation: Utilizes Flux models to create a new background that fills the area behind the subject.
  3. Relighting: Adjusts lighting and shadows on the subject to match the new background for a cohesive look.
  4. Repainting: Refines the subject’s appearance by restoring lost details and enhancing image quality.
  5. Detail Restoration: Ensures all fine details are crisp and natural, bringing out the best in both the subject and the background.

This workflow not only changes backgrounds but also manages lighting, shadows, and details, making it a comprehensive tool for creating high-quality images.
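
The five stages above form a fixed sequence, each consuming the previous stage's output. They can be sketched as a simple pipeline (every function here is a hypothetical placeholder standing in for one node group, not a real ComfyUI API):

```python
# Illustrative pipeline skeleton; each function stands in for one node group.
def remove_background(image):
    # 1. Isolate the subject and place it on a gray reference background.
    return {"subject": image, "background": "gray"}

def generate_background(state, prompt):
    # 2. Flux fills the area behind the subject with a new scene.
    state["background"] = prompt
    return state

def relight(state):
    # 3. Match the subject's lighting and shadows to the new scene.
    state["relit"] = True
    return state

def repaint(state):
    # 4. Restore details lost in the earlier steps.
    state["repainted"] = True
    return state

def restore_details(state):
    # 5. Final detail pass over both subject and background.
    state["final"] = True
    return state

def replace_background(image, prompt):
    state = remove_background(image)
    state = generate_background(state, prompt)
    return restore_details(repaint(relight(state)))

result = replace_background("portrait.png", "sunset beach")
```

The ordering matters: relighting must happen after the new background exists, and detail restoration comes last so it can repair anything the relight and repaint passes softened.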


Here are some examples:

[example images]
Watch the tutorial to see these capabilities in action and follow along to master this powerful workflow.

YouTube: https://youtu.be/dkkyrv53Rp8


By following these steps and utilizing the resources provided, you can create stunning images with new backgrounds effortlessly. Happy editing!


Node Diagram
Discussion
flutelrut (7 months ago)

Error loading ControlNet in Repaint:

```
Error occurred when executing ControlNetLoader:

'NoneType' object has no attribute 'lower'

  File "E:\ComfyUI\ComfyUI_\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "E:\ComfyUI\ComfyUI_\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "E:\ComfyUI\ComfyUI_\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "E:\ComfyUI\ComfyUI_\ComfyUI\nodes.py", line 720, in load_controlnet
    controlnet = comfy.controlnet.load_controlnet(controlnet_path)
  File "E:\ComfyUI\ComfyUI_\ComfyUI\comfy\controlnet.py", line 375, in load_controlnet
    controlnet_data = comfy.utils.load_torch_file(ckpt_path, safe_load=True)
  File "E:\ComfyUI\ComfyUI_\ComfyUI\comfy\utils.py", line 14, in load_torch_file
    if ckpt.lower().endswith(".safetensors") or ckpt.lower().endswith(".sft"):
```

XX XX (7 months ago)

Hi,

I tried to fix it myself before writing this issue by updating Comfy and all nodes to latest version. However, the issue persists.

ComfyUI Error Report

Error Details

  • Node Type: XlabsSampler
  • Exception Type: IndexError
  • Exception Message: list index out of range
stripealipe (7 months ago)

Looks like the X-Labs sampler got updated and broke something. I see in his video that his X-Labs sampler doesn't have a 'denoise' slider, whereas my latest version does. Hoping this gets updated!

Wei (7 months ago)

Thanks a lot for pointing that out! I’ve made the update to fix the issue. If you notice anything else or have more feedback, feel free to let me know!

Alex Jung (7 months ago)

Wonderful job, just tried two parts of this workflow.

Is it possible to share a sample photo?


ellisimo (6 months ago)

I haven't tried it yet but it looks amazing. Thanks for sharing your workflow!

Nickey CHUEH (6 months ago)

How do I fill in the missing nodes?

Cannot execute because a node is missing the class_type property.: Node ID '#167'

eleuth_slimy_2 (5 months ago)

ComfyUI Manager. Any video out there will show you how. It is pretty basic.

Brick Bricks (6 months ago)

I have an error, can you help me please!

XlabsSampler

Error while processing rearrange-reduction pattern "b c (h ph) (w pw) -> b (h w) (c ph pw)".
Input tensor shape: torch.Size([1, 16, 75, 150]). Additional info: {'ph': 2, 'pw': 2}.
Shape mismatch, can't divide axis of length 75 in chunks of 2


# ComfyUI Error Report
## Error Details
- **Node Type:** XlabsSampler
- **Exception Type:** einops.EinopsError
- **Exception Message:**  Error while processing rearrange-reduction pattern "b c (h ph) (w pw) -> b (h w) (c ph pw)".
 Input tensor shape: torch.Size([1, 16, 75, 150]). Additional info: {'ph': 2, 'pw': 2}.
 Shape mismatch, can't divide axis of length 75 in chunks of 2
## Stack Trace
```
  File "H:\Stable Diffusion\Comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "H:\Stable Diffusion\Comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "H:\Stable Diffusion\Comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "H:\Stable Diffusion\Comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "H:\Stable Diffusion\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\x-flux-comfyui\nodes.py", line 458, in sampling
    x = denoise_controlnet(
        ^^^^^^^^^^^^^^^^^^^

  File "H:\Stable Diffusion\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\x-flux-comfyui\sampling.py", line 258, in denoise_controlnet
    orig_image = rearrange(orig_image, "b c (h ph) (w pw) -> b (h w) (c ph pw)", ph=2, pw=2).to(img.device, dtype = img.dtype)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "H:\Stable Diffusion\Comfy UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\einops\einops.py", line 591, in rearrange
    return reduce(tensor, pattern, reduction="rearrange", **axes_lengths)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "H:\Stable Diffusion\Comfy UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\einops\einops.py", line 533, in reduce
    raise EinopsError(message + "\n {}".format(e))

```
## System Information
- **ComfyUI Version:** v0.2.2-85-gd985d1d
- **Arguments:** ComfyUI\main.py --windows-standalone-build
- **OS:** nt
- **Python Version:** 3.11.9 (tags/v3.11.9:de54cf5, Apr  2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
- **Embedded Python:** true
- **PyTorch Version:** 2.4.1+cu124
## Devices

- **Name:** cuda:0 NVIDIA GeForce RTX 2070 SUPER : cudaMallocAsync
  - **Type:** cuda
  - **VRAM Total:** 8589475840
  - **VRAM Free:** 109646336
  - **Torch VRAM Total:** 12079595520
  - **Torch VRAM Free:** 109646336
## Logs
```

2024-10-03 17:03:12,922 - root - INFO - loaded partially 2774.4004028320314 2774.0332641601562 0
2024-10-03 17:03:16,924 - root - INFO - Unloading models for lowram load.
2024-10-03 17:03:17,513 - root - INFO - 0 models unloaded.
2024-10-03 17:03:18,104 - root - ERROR - !!! Exception during processing !!!  Error while processing rearrange-reduction pattern "b c (h ph) (w pw) -> b (h w) (c ph pw)".
 Input tensor shape: torch.Size([1, 16, 75, 150]). Additional info: {'ph': 2, 'pw': 2}.
 Shape mismatch, can't divide axis of length 75 in chunks of 2
2024-10-03 17:03:18,142 - root - ERROR - Traceback (most recent call last):
  File "H:\Stable Diffusion\Comfy UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\einops\einops.py", line 523, in reduce
    return _apply_recipe(
           ^^^^^^^^^^^^^^
  File "H:\Stable Diffusion\Comfy UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\einops\einops.py", line 234, in _apply_recipe
    init_shapes, axes_reordering, reduced_axes, added_axes, final_shapes, n_axes_w_added = _reconstruct_from_shape(
                                                                                           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Stable Diffusion\Comfy UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\einops\einops.py", line 187, in _reconstruct_from_shape_uncached
    raise EinopsError(f"Shape mismatch, can't divide axis of length {length} in chunks of {known_product}")
einops.EinopsError: Shape mismatch, can't divide axis of length 75 in chunks of 2

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "H:\Stable Diffusion\Comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Stable Diffusion\Comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Stable Diffusion\Comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "H:\Stable Diffusion\Comfy UI\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Stable Diffusion\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\x-flux-comfyui\nodes.py", line 458, in sampling
    x = denoise_controlnet(
        ^^^^^^^^^^^^^^^^^^^
  File "H:\Stable Diffusion\Comfy UI\ComfyUI_windows_portable\ComfyUI\custom_nodes\x-flux-comfyui\sampling.py", line 258, in denoise_controlnet
    orig_image = rearrange(orig_image, "b c (h ph) (w pw) -> b (h w) (c ph pw)", ph=2, pw=2).to(img.device, dtype = img.dtype)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Stable Diffusion\Comfy UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\einops\einops.py", line 591, in rearrange
    return reduce(tensor, pattern, reduction="rearrange", **axes_lengths)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "H:\Stable Diffusion\Comfy UI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\einops\einops.py", line 533, in reduce
    raise EinopsError(message + "\n {}".format(e))
einops.EinopsError:  Error while processing rearrange-reduction pattern "b c (h ph) (w pw) -> b (h w) (c ph pw)".
 Input tensor shape: torch.Size([1, 16, 75, 150]). Additional info: {'ph': 2, 'pw': 2}.
 Shape mismatch, can't divide axis of length 75 in chunks of 2

2024-10-03 17:03:18,169 - root - INFO - Prompt executed in 246.72 seconds
```
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
```
Workflow too large. Please manually upload the workflow from local file system.
```

## Additional Context
(Please add any additional context or steps to reproduce the error here)
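
A likely cause of the einops error above: Flux packs latents into 2×2 patches, so both latent dimensions must be even. The reported latent shape (75 × 150) corresponds to a 600 × 1200 px image (the VAE downsamples by 8, and 600 / 8 = 75, which is odd). In practice this means image width and height should be multiples of 8 × 2 = 16. A quick illustrative check (the helper name is made up, not part of the workflow):

```python
def flux_safe_size(width: int, height: int, multiple: int = 16) -> tuple[int, int]:
    """Round an image size up to the nearest multiple of 16.

    Latents are 1/8 the pixel size and Flux packs them into 2x2 patches,
    so pixel dimensions should be divisible by 8 * 2 = 16.
    """
    round_up = lambda v: ((v + multiple - 1) // multiple) * multiple
    return round_up(width), round_up(height)

# The failing case: 1200 x 600 -> latent 150 x 75, and 75 can't be split into 2s
assert 600 // 8 == 75 and 75 % 2 == 1

print(flux_safe_size(1200, 600))  # (1200, 608): 608 / 8 = 76, which is even
```

Resizing or padding the input to such a size before sampling should avoid the "can't divide axis of length 75 in chunks of 2" failure.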



vincent.den.boer (6 months ago)

The results look great. Where can I find V2?

楽光 (6 months ago)

Hello, how do I open the Flux node in the bottom right corner of the "Position Your Subject" group in the options? I have installed the flux DEV fb16 model but I can't open it.

Wang C (5 months ago)

Cannot execute because a node is missing the class_type property.: Node ID '#97'



eleuth_slimy_2 (5 months ago)

Hi,

Is there any reason why you deleted v.2?

eleuth_slimy_2 (5 months ago)

I think one of the oldest versions, on the right side of this page, could be the second version of the workflow, although it is titled as v.1.

Wang C (5 months ago)

Cannot execute because a node is missing the class_type property.: Node ID '#167'

eleuth_slimy_2 (5 months ago)

I don't see the point of your message. Maybe you are trying to say that the older workflows do not work properly? Then... just say it, instead of posting cryptic messages to random users.

Anyway, that message suggests you are missing some custom node, which can be installed through the ComfyUI Manager.

Vatsal Savaliya (5 months ago)

Please provide the V2 workflow; it's not present here on OpenArt.

Wei (5 months ago)

All three versions have been uploaded. Please don't worry about the names.

Guoguozh (4 months ago)

🚀 Now you can Edit & Run it online (Fast & Error Free):


https://www.runninghub.ai/post/1864265045969666049/?utm_source=openart

https://www.runninghub.cn/post/1864226915614670850/?utm_source=openart


RunningHub – Highly reliable Cloud-Based ComfyUI, Edit and Run Workflows Online, no local installation required. Powered by RTX 4090 GPUs for faster performance and fully error-free node operation.




-------------------------------------


👋🏻 Hello Wei,    

I’m Spike from RunningHub.  I hope this message finds you well! I wanted to kindly inform you that we’ve copied your workflow to RunningHub (with the original author clearly credited and a link to your OpenArt profile). The purpose is to allow more users who love your work to easily experience it online.  

We’d love to hear from you at: spike@runninghub.ai

🎉 If you’re open to joining us, we can transfer the workflows we’ve uploaded to your RunningHub account and provide you with additional benefits as a token of our appreciation.

😞 However, if you’re not comfortable with this approach, we will promptly remove all the uploaded content.    

Looking forward to your response!

becausereasons (2 months ago)

I only see V1.

The SimpleCondition+ node is missing. It seems to have been removed, any ideas?

Piotr Siwek (24 days ago)

I also have a problem with the missing SimpleCondition+, any tips?


Reviews

mayo (3 months ago)

Works well.

Connie K (4 months ago)

It worked for me and did not break anything.

Richard Perry (4 months ago)

Installing the custom nodes in this workflow breaks a ROCm + Linux install.

Wei (3 months ago)

this is your problem, not mine

Versions (4)

  • latest (3 months ago)
  • v20241007-031401
  • v20240906-013051
  • v20240902-083659

Primitive Nodes (56)

Anything Everywhere (4)

Anything Everywhere3 (1)

DF_Image_scale_by_ratio (1)

Fast Groups Bypasser (rgthree) (7)

GetNode (9)

Image Comparer (rgthree) (6)

JWIntegerMin (1)

LayerColor: AutoAdjustV2 (1)

LayerMask: BiRefNetUltraV2 (2)

LayerMask: LoadBiRefNetModelV2 (1)

Note (1)

Prompts Everywhere (1)

Reroute (13)

SetNode (5)

SetUnionControlNetType (1)

SimpleCondition+ (1)

easy isMaskEmpty (1)

Custom Nodes (50)

  • CR Image Input Switch (1)

ComfyUI

  • CheckpointLoaderSimple (1)
  • InvertMask (1)
  • PreviewImage (6)
  • VAEDecode (2)
  • DifferentialDiffusion (1)
  • ControlNetApplyAdvanced (2)
  • ImageCompositeMasked (3)
  • InpaintModelConditioning (1)
  • ImageBlur (1)
  • KSampler (2)
  • LoadImage (1)
  • EmptyLatentImage (1)
  • CLIPTextEncode (2)
  • ControlNetLoader (1)
  • easy imageSize (3)
  • easy imageDetailTransfer (2)
  • MaskPreview+ (2)
  • SDXLEmptyLatentSizePicker+ (1)
  • MaskBlur+ (1)
  • PreviewBridge (1)
  • LamaRemover (1)
  • LayerMask: MaskGrow (4)
  • LayerUtility: ImageScaleByAspectRatio V2 (1)
  • LayerUtility: ImageBlendAdvance V2 (1)
  • LayerColor: Exposure (1)
  • LayerUtility: ColorImage V2 (1)
  • CannyEdgePreprocessor (1)
  • FloatSlider (3)
  • ImageAndMaskPreview (1)

Checkpoints (1)

realvisxlV50_v50LightningBakedvae.safetensors

LoRAs (0)