Change Background ++ (Redraw background, clothes, hair, face…and Generate foregrounds)
5.0
4 reviews
Description
Multi-function background-change workflow (updated to V329):
1. Generate a background for a photo's subject (person, animal, product, etc.) from a prompt or a reference image;
2. Local inpainting, e.g. changing clothes or hairstyles, or replacing objects;
3. Face repair, hand repair, and face-swap functions;
4. Foregrounds can be generated by drawing a mask with MaskEditor;
5. The generation method for the mask and the depth map is selectable, and the global influence of ControlNet, IPAdapter, and the mask can be adjusted.
Please switch the interface language to English before loading the workflow; otherwise the Chinese titles of many nodes will be lost.
Version updates:
(1031) Added a mask-editing group with three options: automatic background matting, mask extraction via semantic segmentation, and uploading a custom mask. The semantic-segmentation node can extract local-area masks, enabling operations such as changing clothing, hairstyles, and objects. Also added a pass that further refines the inpainted area with a Flux GGUF model, and fine-tuned the user interface.
(1016) Updated the node versions. The hand-repair group now uses a Flux GGUF model.
(1004) Restructured the interface layout and optimized the depth-map and high-resolution upscaling features.
(0803) Adopted the ClipVision Enhancer node newly added to IPAdapter Plus, which captures better background detail. Also added some efficiency nodes and adjusted some parameter settings. Removed the IC-Light section; use the separate Change Light workflow for relighting.
(0610) Changed the detail-restoration function to subject restoration; after the subject is restored, the face and hands can still be repaired. Replaced the matting node with BiRefNet, and the results are excellent!
(0609) Fixed occasional border artifacts and improved background generation.
(0607) Added a restore-subject-detail node (enable it when needed).
(0606) Mainly optimized foreground generation.
(0604) Slightly adjusted the interface layout, replaced some nodes, and added a few controllers.
(0602) Optimized IC-Light relighting and added a switch for transferring the original image's colors back to the result.
Because the workflow is continuously updated, the interface now differs considerably from what the tutorial videos show, so treat the videos as a reference only.
How-to video (V3 version)
https://youtu.be/42b5DBCzPX0?si=zy1Jz7ExuF9hgITf
https://www.bilibili.com/video/BV1MU411o7et/?vd_source=c52b6f7c72230aea1ae80463f968691a
How-to video (V2 version)
https://www.bilibili.com/video/BV1Cx4y167fc/?vd_source=c52b6f7c72230aea1ae80463f968691a
https://www.youtube.com/watch?v=yubqVCOf3bo
Node Diagram
Discussion
I encountered the following error with the node IPAdapter ClipVision Enhancer
Error occurred when executing IPAdapterClipVisionEnhancer:
shape '[6, 6, 35, 35]' is invalid for input of size 46080
File "/home/studio-lab-user/ComfyUI/execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/studio-lab-user/ComfyUI/execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/studio-lab-user/ComfyUI/execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/studio-lab-user/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 822, in apply_ipadapter
work_model, face_image = ipadapter_execute(work_model, ipadapter_model, clip_vision, **ipa_args)
File "/home/studio-lab-user/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 359, in ipadapter_execute
img_cond_embeds = encode_image_masked(clipvision, image, batch_size=encode_batch_size, tiles=enhance_tiles, ratio=enhance_ratio, clipvision_size=clipvision_size)
File "/home/studio-lab-user/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/utils.py", line 263, in encode_image_masked
embeds_split["image_embeds"] = merge_embeddings(embeds_split["image_embeds"], tiles)
File "/home/studio-lab-user/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/utils.py", line 224, in merge_embeddings
reshaped = embeds.reshape(grid_size, grid_size, tile_size, tile_size)
Has anyone ever encountered this error before and can you please give me a solution?
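The arithmetic in the message points at the cause: a 6 × 6 grid of 35 × 35 token maps needs 6 × 6 × 35 × 35 = 44,100 values, but the tensor actually holds 46,080, so the reshape can never succeed. A minimal reproduction of the failing pattern (NumPy stands in for torch here, purely for illustration):

```python
import numpy as np

# merge_embeddings tries to fold the tiled CLIP-Vision embeddings into a
# 6x6 grid of 35x35 token maps: that requires 6*6*35*35 = 44100 elements,
# but the incoming tensor carries 46080, so the reshape must fail.
embeds = np.zeros(46080)
try:
    embeds.reshape(6, 6, 35, 35)
except ValueError as e:
    print(e)  # cannot reshape array of size 46080 into shape (6,6,35,35)
```

A mismatch like this usually suggests the loaded CLIP-Vision model produces a different token grid than the enhancer's tile settings expect. Checking that the CLIP-Vision checkpoint matches the one the workflow was built with, and updating ComfyUI_IPAdapter_plus, are reasonable first steps; this is an educated guess from the traceback, not a confirmed fix.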
(Edited) I downloaded the latest workflow, but it is in Chinese (a bit difficult to use). Can you upload an English version similar to the one in the tutorial video?
I can't find the "Switch on human verification" node
Looks like a nice workflow, and it is working! Thanks.
But I couldn't get effective results because I am a beginner with ComfyUI and there is a lot of Chinese in the workflow. Do you offer private training for your workflow? I am working on my jewelry products, and the problem is that the product's shape and details change in the output picture. Here you can see https://hizliresim.com/19g6c0s
Excellent..Works great.
I translated everything into English
Worked smooth with no problems
how did you translate?
Probably the same way I did... by literally copying and pasting everything into Google Translate one by one, then pasting the results back in.
It was EXTREMELY TEDIOUS lol
Yes, we did it the same way 🤣 Importing a background reference is a very good function. It's just that the product's details change in the result image. Did you find a way to fix this?
Can you put that english workflow here?
I get this error
Error occurred when executing DWPreprocessor: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
Thank you for sharing! I have a question: when drawing and masking the foreground by keyword, the foreground image produces results that are inconsistent with the reference background. Do you have any way to handle that problem? Something like an additional IPAdapter reference input for the foreground mask.
Thanks Xiser, this workflow is awesome!
But I can't find the IC-Light panel in this workflow.
Where is it?
IC-Light is not inside this workflow; you have to use the IC-Light module separately. Are you working on product photography? If so, have you managed to preserve product details?
Nope, not all details are preserved correctly.
please help me :(
Error occurred when executing SimpleMath+: invalid syntax (, line 0)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 316, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 191, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 168, in _map_node_over_list
process_inputs(input_dict, i)
File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 157, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_essentials\misc.py", line 75, in execute
result = eval_(ast.parse(value, mode='eval').body)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "ast.py", line 50, in parse
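For what it's worth, `invalid syntax (, line 0)` is what `ast.parse` raises when handed an empty expression, so a likely culprit is a SimpleMath+ node whose expression field is blank (for example, fed by a disconnected input). A quick check:

```python
import ast

# SimpleMath+ evaluates its expression with ast.parse(value, mode='eval').
# An empty string is not a valid expression, so parsing raises SyntaxError,
# which surfaces as "invalid syntax (<unknown>, line 0)" on recent Pythons.
try:
    ast.parse("", mode="eval")
except SyntaxError as e:
    print(e)
```

This is a guess from the error text, not a confirmed diagnosis; inspecting every SimpleMath+ node's value widget would verify it.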
(IMPORT FAILED) Comfyui-ergouzi-Nodes
I am having this error, please help
Error occurred when executing Bounded Image Crop with Mask: index is out of bounds for dimension with size 0
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\WAS_Node_Suite.py", line 12223, in bounded_image_crop_with_mask
rmin, rmax = torch.where(rows)[0][[0, -1]]
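That `index is out of bounds for dimension with size 0` comes from the bounding-box step: the node looks up the first and last nonzero rows of the mask, and if the mask is completely empty there is nothing to index. An illustration of the failing pattern (NumPy in place of torch):

```python
import numpy as np

# The node computes the mask's bounding box by taking the first and last
# nonzero indices, mirroring: rmin, rmax = torch.where(rows)[0][[0, -1]].
# With an all-zero (empty) mask, where() yields an empty index array and
# the fancy index [0, -1] raises IndexError.
rows = np.zeros(64, dtype=bool)   # row profile of an empty mask
nonzero = np.where(rows)[0]       # -> empty array
try:
    rmin, rmax = nonzero[[0, -1]]
except IndexError as e:
    print(e)
```

In practice this usually means the mask feeding "Bounded Image Crop with Mask" is blank, so checking the upstream mask preview is a sensible first step.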
Error occurred when executing IPAdapterUnifiedLoader:
IPAdapter model not found.
File "/Volumes/DinkyHoang/ComfyUI-master/execution.py", line 316, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/DinkyHoang/ComfyUI-master/execution.py", line 191, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/DinkyHoang/ComfyUI-master/execution.py", line 168, in _map_node_over_list
process_inputs(input_dict, i)
File "/Volumes/DinkyHoang/ComfyUI-master/execution.py", line 157, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Volumes/DinkyHoang/ComfyUI-master/custom_nodes/ComfyUI_IPAdapter_plus-main/IPAdapterPlus.py", line 573, in load_models
raise Exception("IPAdapter model not found.")
Error occurred when executing IPAdapterClipVisionEnhancer:
shape '[6, 6, 35, 35]' is invalid for input of size 46080
File "D:\SDAI\ComfyUI-aki-v1.3\execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\SDAI\ComfyUI-aki-v1.3\execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\SDAI\ComfyUI-aki-v1.3\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\SDAI\ComfyUI-aki-v1.3\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "D:\SDAI\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 790, in apply_ipadapter
work_model, face_image = ipadapter_execute(work_model, ipadapter_model, clip_vision, **ipa_args)
File "D:\SDAI\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 336, in ipadapter_execute
img_cond_embeds = encode_image_masked(clipvision, image, batch_size=encode_batch_size, tiles=enhance_tiles, ratio=enhance_ratio, clipvision_size=clipvision_size)
File "D:\SDAI\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_IPAdapter_plus\utils.py", line 259, in encode_image_masked
embeds_split["image_embeds"] = merge_embeddings(embeds_split["image_embeds"], tiles)
File "D:\SDAI\ComfyUI-aki-v1.3\custom_nodes\ComfyUI_IPAdapter_plus\utils.py", line 220, in merge_embeddings
reshaped = embeds.reshape(grid_size, grid_size, tile_size, tile_size)
I don't know programming; how can I solve this problem?
Can I ask why the character's face changes (not retaining its original features) when enabling the high-resolution upscaling mode? When I disable it, the issue goes away, thank you!
When loading the graph, the following node types were not found: easy bookmark
You're missing that node. Message me and I'll fix it for you.
Message me on Telegram: https://t.me/s4ntiago11
I'm dying with this error. Can you help me, please? Thanks so much.
File "D:\test\ComfyUI_windows_portable\ComfyUI\execution.py", line 182, in <listcomp>
output.append([x for o in results for x in o[i]])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'ExecutionBlocker' object is not iterable
Prompt executed in 0.45 seconds
Stopped at the face-detailer node:
FaceDetailer
'ExecutionBlocker' object is not iterable
Create it, then try selecting the mask manually.
Message me on Telegram: https://t.me/s4ntiago11
Help me pls
When loading the graph, the following node types were not found: LayerColor: Brightness Contrast
The node is in the plugin at the following address, you can ask the author
https://github.com/chflame163/ComfyUI_LayerStyle
Where do I download BiRefNet-general-epoch_244.pth from, and in which folder should I put it?
Need help solving workflow problems? Contact me on WeChat: ailaohuyoukiss
How to fix this please:
Cannot execute because a node is missing the class_type property.: Node ID '#2004'
Where can I download BiRefNet-general-epoch_244.pth?
Here you can find:
The node is in the plugin at the following address, you can ask the author
I have installed the BiRefNet node (LayerMask: LoadBiRefNetModel), but it does not detect the model BiRefNet-general-epoch_244.pth or any other. What could be wrong? I'm going crazy; I've been stuck on this almost all day and don't know how to solve it.
https://github.com/ZhengPeng7/BiRefNet/releases
Download it and place it in YOURLOCATION\ComfyUI\models\BiRefNet\pth
i got same error, did you get any solution about this?
Which folder should BiRefNet-general-epoch_244.pth be placed in?
Output will be ignored
!!! Exception during processing !!! 'BiRefNet-general-epoch_244.pth'
Traceback (most recent call last):
File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_LayerStyle\py\birefnet_ultra_v2.py", line 49, in load_birefnet_model
IPAdapterUnifiedLoaderFaceID
IPAdapter model not found.
SELECTED: input1
!!! Exception during processing !!! IPAdapter model not found.
Traceback (most recent call last):
File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 573, in load_models
raise Exception("IPAdapter model not found.")
Exception: IPAdapter model not found.
Prompt executed in 1.51 seconds
Missing models, see: https://github.com/cubiq/ComfyUI_IPAdapter_plus
Is the V3 workflow from the video available anywhere? I came here looking for the lighting management and was surprised to find it was removed :(
Thank you for your work. It would be great if you made a video editing a single photo from start to finish with the current version.
PLS!!! Eng Vers
DWPreprocessor
'NoneType' object has no attribute 'get_providers'
I got this error, please help.
UnetLoaderGGUF
expected str, bytes or os.PathLike object, not NoneType
Hello, I got this error:
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py", line 258, in load_unet
sd = gguf_sd_loader(unet_path)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py", line 39, in gguf_sd_loader
reader = gguf.GGUFReader(path)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\gguf\gguf_reader.py", line 90, in __init__
self.data = np.memmap(path, mode = mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Administrator\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\numpy\core\memmap.py", line 229, in __new__
f_ctx = open(os_fspath(filename), ('r' if mode == 'c' else mode)+'b')
^^^^^^^^^^^^^^^^^^^
TypeError: expected str, bytes or os.PathLike object, not NoneType
I have already placed the flux1-dev-Q4_1 file in the C:\Users\Administrator\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\models\MODEL\FLUX directory.
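The `NoneType` in that traceback suggests the loader never found the file: the model lookup returns `None` for `unet_path`, and `np.memmap(None, ...)` then raises exactly this TypeError. ComfyUI-GGUF scans ComfyUI's `models/unet` folder, so a `.gguf` placed under a custom folder such as `models\MODEL\FLUX` is typically invisible to it. A sketch of the failure (the `get_full_path` helper here is a hypothetical stand-in for ComfyUI's folder registry):

```python
import numpy as np

# Hypothetical stand-in for ComfyUI's folder registry: returns the full
# path when the model name is registered, otherwise None.
def get_full_path(registered_models, name):
    return registered_models.get(name)

# The .gguf file sits in an unscanned folder, so the lookup finds nothing.
unet_path = get_full_path({}, "flux1-dev-Q4_1.gguf")   # -> None
try:
    np.memmap(unet_path, mode="r")  # the same call gguf_reader ultimately makes
except TypeError as e:
    print(e)  # expected str, bytes or os.PathLike object, not NoneType
```

Moving the file into ComfyUI\models\unet and re-selecting it in UnetLoaderGGUF should make the path resolve; this reading of the traceback is a best guess, not an official fix.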
Cannot execute because a node is missing the class_type property.: Node ID '#1412'
I have also encountered this problem. Have you solved it?
ControlNetLoader
Model in folder 'controlnet' with filename 'SD1.5/control_v11p_sd15_openpose_fp16.safetensors' not found.
Rename the file to the name shown in the error above.
Create an SD1.5 subfolder inside the controlnet folder, then put the renamed file there.
Still the same error; I'm stuck here.
Experts, please help!! Please help!!!
Did you solve it, friend? Same problem here.
The best background removal. Incredible results.
Installing it is not easy, but it is well worth it.
Great job
It's perfect. I'm very grateful for the effort you've put in and shared with the community.
I'm Vietnamese, and I greatly admire you!
Anyone know how to generate a background around the subject rather than using an existing one?
Anyone know how I can use Flux to generate the backgrounds instead of one of the SD1.5 models?
TypeError: Cannot read properties of null (reading 'length')
I have no idea what I did wrong.
I made a workflow translator app, for all of us that don't speak Chinese!
https://github.com/Clinteastman/comfy-ui-workflow-translator
This guy is the most powerful champion of ComfyUI. If you succeed in learning from Xiser, you will be on top. It's hard, but with a lot of effort and hard work you can achieve it.
Node Details
Primitive Nodes (382)
ACN_ScaledSoftControlNetWeights (1)
Any Switch (rgthree) (7)
DF_Float (19)
DF_Int_to_Float (3)
DF_Integer (7)
DF_Text (1)
DF_Text_Box (3)
DepthAnythingV2Preprocessor (1)
DownloadAndLoadFlorence2Model (1)
Fast Groups Bypasser (rgthree) (3)
Florence2Run (3)
GetNode (145)
IPAdapterClipVisionEnhancer (2)
Image Comparer (rgthree) (1)
Label (rgthree) (65)
LayerMask: BiRefNetUltraV2 (1)
LayerMask: LoadBiRefNetModel (1)
Lora Loader Stack (rgthree) (2)
Note (5)
Primitive boolean [Crystools] (5)
Primitive float [Crystools] (1)
Primitive integer [Crystools] (4)
Primitive string [Crystools] (1)
Reroute (7)
Seed (rgthree) (4)
SetNode (82)
Switch any [Crystools] (6)
UnetLoaderGGUF (1)
Custom Nodes (302)
- ImageEffectsAdjustment (1)
- CM_FloatBinaryOperation (1)
- CM_FloatUnaryOperation (1)
- CR Conditioning Input Switch (2)
- CR String To Combo (2)
- CR Image Input Switch (4 way) (2)
- CR Color Panel (4)
ComfyUI
- CLIPTextEncode (10)
- VAEDecode (5)
- LoadImage (6)
- PreviewImage (31)
- ControlNetLoader (6)
- EmptyImage (3)
- EmptyLatentImage (1)
- KSampler (5)
- InvertMask (7)
- UpscaleModelLoader (2)
- LatentUpscaleBy (2)
- VAELoader (2)
- CLIPSetLastLayer (1)
- MaskComposite (5)
- PerturbedAttentionGuidance (3)
- MaskToImage (8)
- ImageBlend (1)
- ImageToMask (4)
- ConditioningSetMask (2)
- GrowMask (4)
- SplitImageWithAlpha (1)
- ImageScale (2)
- SetLatentNoiseMask (2)
- VAEEncode (2)
- DifferentialDiffusion (2)
- ImageCompositeMasked (1)
- DualCLIPLoader (1)
- CheckpointLoaderSimple (1)
- easy imageColorMatch (1)
- easy cleanGpuUsed (2)
- MaskPreview+ (8)
- ImageCompositeFromMaskBatch+ (7)
- SimpleMath+ (23)
- MaskBlur+ (5)
- ImageResize+ (14)
- ImageCASharpening+ (3)
- GetImageSize+ (5)
- ImageCrop+ (1)
- BboxDetectorCombined_v2 (1)
- UltralyticsDetectorProvider (2)
- SAMLoader (1)
- ImpactSimpleDetectorSEGS (2)
- SegsToCombinedMask (2)
- PreviewBridge (1)
- ImpactSwitch (1)
- ImpactStringSelector (2)
- ImpactImageBatchToImageList (1)
- FaceDetailer (1)
- LayerColor: Brightness & Contrast (3)
- LayerColor: ColorAdapter (1)
- LayerUtility: TextBox (2)
- LayerMask: SegmentAnythingUltra V2 (1)
- LayerUtility: ImageBlend V2 (1)
- DWPreprocessor (1)
- DepthAnythingPreprocessor (1)
- InpaintPreprocessor (3)
- AnimeLineArtPreprocessor (1)
- LineArtPreprocessor (1)
- IPAdapterUnifiedLoaderFaceID (1)
- IPAdapterFaceID (1)
- IPAdapterUnifiedLoader (2)
- ACN_AdvancedControlNetApply (11)
- ScaledSoftControlNetWeights (2)
- EG_RY_HT (18)
- OutlineMask (2)
- SDXL Prompt Styler (JPS) (1)
- RemapMaskRange (5)
- GrowMaskWithBlur (3)
- ResizeMask (2)
- SplineEditor (1)
- CreateGradientFromCoords (1)
- ColorMatch (1)
- ConditioningMultiCombine (1)
- ShowText|pysssss (2)
- GetImageSize (1)
- UltimateSDUpscale (1)
- JWMaskResize (1)
- Bounded Image Crop with Mask (1)
- Text Concatenate (4)
- Image Blend by Mask (2)
- Image Blend (3)
- Mask Crop Region (2)
- Text Find and Replace (1)
- Logic Boolean Primitive (1)
Model Details
Checkpoints (1)
SD1.5/Realistic Vision_V6.0 B1.safetensors
LoRAs (0)