Flux Outpaint Pro
Rating: 4.4 (7 reviews)

Description
This workflow is designed to help you achieve flawless outpainting results using the powerful combination of Flux models and SDXL checkpoints. Whether you’re extending a background or fixing human features like hands and feet, this process ensures smooth transitions and highly detailed outputs with minimal artifacts.
Functionality
This workflow utilizes a five-step node group system that allows you to:
- Seamlessly outpaint backgrounds and human features (hands, feet, etc.) with precision
- Fix artifacts and transitions left behind by the initial outpainting pass
- Restore original details like faces and key features to preserve image quality
- Upscale the final image to achieve high-resolution outputs, ready for professional use
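As a rough illustration of the first step, the core outpainting preparation (pad the canvas and build a mask over the newly added border for the model to fill) can be sketched in plain Python with Pillow and NumPy. This is a hypothetical standalone helper for understanding the idea, not code from the workflow itself:

```python
import numpy as np
from PIL import Image

def prepare_outpaint(image: Image.Image, pad: int = 128):
    """Pad the canvas on all sides and build an inpaint mask where
    white (255) marks the new border to fill and black (0) marks
    the original pixels to keep."""
    w, h = image.size
    canvas = Image.new("RGB", (w + 2 * pad, h + 2 * pad), "gray")
    canvas.paste(image, (pad, pad))
    mask = np.full((h + 2 * pad, w + 2 * pad), 255, dtype=np.uint8)
    mask[pad:pad + h, pad:pad + w] = 0  # protect the original image area
    return canvas, Image.fromarray(mask)

# Example: a 512x512 image padded by 128px on each side becomes 768x768.
img = Image.new("RGB", (512, 512), "white")
canvas, mask = prepare_outpaint(img, pad=128)
print(canvas.size)  # (768, 768)
```

In ComfyUI this padding/masking is handled by nodes (e.g. VAEEncodeForInpaint with a grown mask); the sketch only shows what those nodes conceptually do before the sampler fills the border.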
Demo
Reference
For a step-by-step walkthrough of this workflow, check out my video tutorial here:
Bilibili homepage: https://space.bilibili.com/3546611913329493
Watch the full breakdown of each node group and learn how to maximize your image quality using this process!
Node Diagram
Discussion
Hello teacher! Can we get a download link for the model used in the Load ControlNet node? Thank you.
OK, nice workflow
Unable to detect model type: F:\ComfyUI_windows_portable\ComfyUI\models\checkpoints\flux1-dev-fp8.safetensors
Same problem. I can load SDXL model, but not Flux. Any solution?
Hi Ming You, try this Flux model. It works fine for me. It's heavy, but it works:
https://civitai.com/models/637170?modelVersionId=712441
For people who cannot select "flux1-dev-fp8.safetensors" in the "Load Checkpoint" node, use this workflow instead:
(please add this workflow to your page)
(Edited) Hi teacher! Where can I download the upscale model? Thank you.
Power Lora Loader (强力LoRA加载器):
'PowerLoraLoaderHeaderWidget'
What is causing this error?
I am getting this error with this workflow; basically the ControlNet does not support the repaint type, even though I am using the ProMax model.
File "/workspace/ComfyUI/comfy/samplers.py", line 202, in calc_cond_batch
c['control'] = control.get_control(input_x, timestep_, c, len(cond_or_uncond))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/controlnet.py", line 259, in get_control
control = self.control_model(x=x_noisy.to(dtype), hint=self.cond_hint, timesteps=timestep.to(dtype), context=context.to(dtype), **extra)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/cldm/cldm.py", line 399, in forward
raise ValueError(
ValueError: Control type repaint(7) is out of range for the number of control types(6) supported.
Please consider using the ProMax ControlNet Union model.
https://huggingface.co/xinsir/controlnet-union-sdxl-1.0/tree/main
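The traceback above comes from a range check: the loaded Union ControlNet advertises a fixed number of control types, and the workflow requests the "repaint" type by an index outside that range (the ProMax variant supports more types than the standard Union model). A minimal sketch of that kind of check, with the type count and index taken from the error message itself (these are assumptions about the model internals, not verified constants):

```python
# Hypothetical reconstruction of the range check behind the error.
# The standard Union model in the traceback reports 6 control types,
# while the workflow asks for "repaint" at index 7.
SUPPORTED_TYPES = 6  # as reported by the error for the standard model

def check_control_type(type_id: int, num_supported: int = SUPPORTED_TYPES) -> bool:
    """Raise the same style of error as comfy/cldm/cldm.py when the
    requested control type index exceeds what the model supports."""
    if type_id >= num_supported:
        raise ValueError(
            f"Control type repaint({type_id}) is out of range for the "
            f"number of control types({num_supported}) supported."
        )
    return True

check_control_type(3)   # fine on the standard model
# check_control_type(7) # raises ValueError, matching the traceback
```

The practical fix is the one suggested above: load the ProMax ControlNet Union weights, which include the repaint type, rather than the standard Union checkpoint.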
Great work! Increasing the steps significantly helps a lot.
Great work! One question: will this workflow work for video outpainting with consistent content generation?
I dislike the slider; how do I turn it into a normal type-to-input field?
Neat workflow.
Super job!! Nice workflow.
I've been following the YouTube tutorial step by step, but it only outpaints the main character and not the background, even though the prompt describes both the main character and the background.
And by the way, I don't know where or how the final output image is saved; I can slide the comparison, but I can't get the image itself.
Thanks!
The Int-🔬 node is missing and I can't seem to find it anywhere.
Node Details
Primitive Nodes (39)
Fast Groups Muter (rgthree) (1)
FluxGuidance (1)
GetNode (17)
Image Comparer (rgthree) (8)
Int-🔬 (1)
LayerColor: AutoAdjustV2 (1)
Note (1)
Power Lora Loader (rgthree) (1)
SetNode (7)
SetUnionControlNetType (1)
Custom Nodes (52)
- CR Image Input Switch (3)
- CR Prompt Text (1)
ComfyUI
- ControlNetLoader (1)
- SetLatentNoiseMask (1)
- ControlNetApplyAdvanced (1)
- CheckpointLoaderSimple (2)
- CLIPTextEncode (4)
- ImageScaleBy (1)
- ImageUpscaleWithModel (1)
- UpscaleModelLoader (2)
- VAEEncode (2)
- KSampler (3)
- VAEDecode (3)
- InvertMask (1)
- GrowMask (1)
- MaskToImage (1)
- VAEEncodeForInpaint (1)
- LoadImage (1)
- easy imageSize (3)
- MaskBlur+ (1)
- MaskPreview+ (1)
- SDXLEmptyLatentSizePicker+ (1)
- ImpactSwitch (1)
- LayerUtility: ImageBlend (1)
- LayerUtility: ImageBlendAdvance V2 (1)
- FloatSlider (3)
- Preview Chooser (1)
- GrowMaskWithBlur (1)
- ReroutePrimitive|pysssss (2)
- Bus Node (2)
- Image Blank (1)
- Masks Subtract (1)
- Image Blend by Mask (2)
Model Details
Checkpoints (2)
FLUX1/flux1-dev-fp8.safetensors
juggernautXL_juggXIByRundiffusion.safetensors
LoRAs (0)