Product Photo Relighting v2 - From Pre-existing photo, generate background, relight, keep details
Description
IMPORTANT
This is a workflow we created during this livestream: https://www.youtube.com/watch?v=xjy3JyaPfHQ
It's an all-in-one workflow, where the user can:
- either generate a product, or start from a pre-existing product photo
- segment out the product through a SAM group
- generate a new background
- blend the original product on top of the generated background
- relight through a mask (either a pre-existing light mask, or by masking the combined original product + background image in the Preview Bridge node)
- OPTIONAL: keep finer details (such as text) via a series of masks and post-processing nodes
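The blend step above (original product composited over the generated background through the segmentation mask) can be sketched outside ComfyUI as simple alpha compositing. This is a minimal NumPy sketch; the array shapes and the mask convention (1 = keep product pixel) are assumptions:

```python
import numpy as np

def blend_by_mask(product, background, mask):
    """Composite the original product over a generated background.

    product, background: float arrays in [0, 1], shape (H, W, 3).
    mask: float array in [0, 1], shape (H, W); 1 keeps the product pixel.
    """
    alpha = mask[..., None]  # broadcast (H, W) -> (H, W, 1)
    return product * alpha + background * (1.0 - alpha)

# Tiny example: left column keeps the (white) product, right column
# shows the (black) generated background.
product = np.ones((2, 2, 3))
background = np.zeros((2, 2, 3))
mask = np.array([[1.0, 0.0], [1.0, 0.0]])
out = blend_by_mask(product, background, mask)
```

In the workflow this is what the Image Blend by Mask nodes do, with the SAM segmentation providing the mask.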
I personally would rather preserve details by using a frequency separation technique in Photoshop, as highlighted here: https://www.youtube.com/live/xjy3JyaPfHQ?si=joT19pYsm30Zs2X9&t=2834
But if you want to do everything inside ComfyUI, you can; it's just more tedious and time-consuming.
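The frequency-separation idea mentioned above splits the image into a low-frequency layer (color and lighting) and a high-frequency layer (fine detail such as text), so the detail layer can be pasted back after relighting. A minimal NumPy sketch, with a box blur standing in for Photoshop's Gaussian blur (kernel size and array layout are assumptions):

```python
import numpy as np

def box_blur(image, k=5):
    """Simple box blur used as the low-pass filter; k is the kernel size."""
    pad = k // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(image)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

def frequency_separation(image, k=5):
    """Split into low (color/lighting) and high (fine detail) layers.

    image: float array in [0, 1], shape (H, W, 3).
    Returns (low, high) such that low + high reconstructs the image.
    """
    low = box_blur(image, k)
    high = image - low
    return low, high
```

After relighting, adding the original `high` layer back onto the relit low-frequency result restores text and fine texture.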
Please watch the video to better understand how it all works.
Want to support me? You can buy me a coffee here: https://ko-fi.com/risunobushi
Cheers!
Andrea
Node Diagram
Discussion
Traceback (most recent call last):
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-post-processing-nodes\post_processing_nodes.py", line 556, in dodge_and_burn
    dodged_image = self.dodge(image, mask, intensity, mode)
  File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-post-processing-nodes\post_processing_nodes.py", line 574, in dodge
    return img / (1 - mask * intensity + 1e-7)
RuntimeError: The size of tensor a (1024) must match the size of tensor b (64) at non-singleton dimension 2
What's this error in response to?
Are you trying to relight from a custom mask in a preview bridge node or from a custom mask imported in a Load Image node?
Also, please refer to the complete rework v3 here: https://openart.ai/workflows/risunobushi/product-photography-relight-v3---with-internal-frequency-separation-for-keeping-details/YrTJ0JTwCX2S0btjFeEN
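For reference, that RuntimeError means the mask reaching the DodgeAndBurn node has a different spatial resolution than the image (64 vs 1024 along one dimension), so the element-wise division in `dodge` cannot broadcast. A minimal sketch of the usual fix, resizing the mask to the image before the node (the helper name and tensor layouts are assumptions; ComfyUI typically passes images as (B, H, W, C)):

```python
import torch
import torch.nn.functional as F

def resize_mask_to_image(mask, image):
    """Resize a (H, W) or (N, H, W) mask to match an image's spatial size.

    image: (B, H, W, C) float tensor.
    Returns the mask as (N, H, W) at the image's resolution.
    """
    h, w = image.shape[1], image.shape[2]
    m = mask if mask.dim() == 3 else mask.unsqueeze(0)  # -> (N, H, W)
    m = m.unsqueeze(1)  # -> (N, 1, H, W), the layout interpolate expects
    m = F.interpolate(m, size=(h, w), mode="bilinear", align_corners=False)
    return m.squeeze(1)

# Reproducing the mismatch from the traceback: 64x64 mask, 1024x1024 image.
image = torch.rand(1, 1024, 1024, 3)
mask = torch.rand(64, 64)
resized = resize_mask_to_image(mask, image)
```

In the workflow itself, an ImageResize+ or similar node between the mask source and DodgeAndBurn achieves the same thing.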
Node Details
Primitive Nodes (19)
Note (18)
Reroute (1)
Custom Nodes (56)
ComfyUI
- CLIPTextEncode (6)
- CheckpointLoaderSimple (2)
- VAEDecode (3)
- PreviewImage (9)
- VAEEncode (2)
- MaskToImage (4)
- ControlNetLoader (1)
- KSampler (3)
- EmptyLatentImage (2)
- ControlNetApply (1)
- LoadImageMask (1)
- ImageBlur (1)
- LoadImage (2)
- MaskFromRGBCMYBW+ (1)
- ImageResize+ (1)
- PreviewBridge (5)
- SAMLoader (1)
- ImpactGaussianBlurMask (1)
- ColorCorrect (1)
- ICLightConditioning (1)
- LoadAndApplyICLightUnet (1)
- DodgeAndBurn (1)
- GrowMaskWithBlur (1)
- RemapMaskRange (1)
- GroundingDinoSAMSegment (segment anything) (1)
- GroundingDinoModelLoader (segment anything) (1)
- Image Blend by Mask (2)
Model Details
Checkpoints (2)
epicrealism_naturalSinRC1VAE.safetensors
LoRAs (0)