extended image (No prompts needed)
Description
The workflow has been updated with many improvements and upgrades. It is based on the XL model, and I recommend using the new workflow.
Link to the new workflow:
https://openart.ai/workflows/hornet_splendid_53/extended-outpaintxl-update/RbTrDOJifp89TcHNjo6Z
--------------------------------------------------------------------------------------------------------------------
The stock images I used in the demo are all from the author #NeuraLunk. His images are beautiful, and if you like his work, you can find him at the URL below.
https://openart.ai/workflows/profile/neuralunk?sort=latest
What this workflow does
Extends an existing image outward (outpainting), with no prompt required.
How to use this workflow
Just drag and drop your image in; there is no need to write a prompt.
How it works
ControlNet's inpaint model makes guesses about the extended regions. At the same time, a style model references the original picture, so that ControlNet won't guess wildly.
The style model can be either coadapter or IPAdapter; they reference the style in different ways. I prefer coadapter for extending images.
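Conceptually, extending an image means padding the canvas and masking the new border region so that the inpaint ControlNet fills only that area. Below is a minimal sketch of that padding step; the (B, H, W, C) tensor layout and the 0.5 gray fill are illustrative assumptions, not ComfyUI's exact node code:

```python
import torch

def pad_for_outpaint(image, left, top, right, bottom):
    """Pad an image on each side and build a mask marking the new area.

    image: (B, H, W, C) float tensor in [0, 1] (layout assumed for illustration).
    Returns the padded image and a mask where 1 = region to outpaint, 0 = keep.
    """
    b, h, w, c = image.shape
    # Fill new canvas with mid-gray; the sampler will repaint the masked area.
    padded = torch.full((b, h + top + bottom, w + left + right, c), 0.5)
    padded[:, top:top + h, left:left + w, :] = image
    mask = torch.ones(h + top + bottom, w + left + right)
    mask[top:top + h, left:left + w] = 0.0
    return padded, mask

img = torch.rand(1, 64, 64, 3)
padded, mask = pad_for_outpaint(img, left=32, top=0, right=32, bottom=0)
print(padded.shape)  # torch.Size([1, 64, 128, 3])
```

In the workflow itself this job is done by the ImagePadForOutpaint node (see the node list below); the mask then drives SetLatentNoiseMask and the inpaint preprocessor.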
I highly recommend the realisticVisionV60B1VAE (3.97 GB) model for its great extended-image results!
Model Download
CHECKPOINT
https://civitai.com/models/4201/realistic-vision-v60-b1 3.97G
Place it in the ComfyUI\models\checkpoints
coadapter
https://huggingface.co/TencentARC/T2I-Adapter/blob/main/models/coadapter-style-sd15v1.pth
Place it in the ComfyUI\models\style_models
IPAdapter
https://huggingface.co/h94/IP-Adapter/tree/main/models
Place it in the ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\models
IPAdapter clip vision
https://huggingface.co/h94/IP-Adapter/tree/main/models/image_encoder
Place it in the ComfyUI_windows_portable\ComfyUI\models\clip_vision\SD1.5
coadapter clip vision
https://huggingface.co/openai/clip-vit-large-patch14/blob/main/pytorch_model.bin
Place it in the ComfyUI_windows_portable\ComfyUI\models\clip_vision\SD1.5
Please make sure that all models match the SD1.5 base model.
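A small checker script can confirm that everything landed in the right folder before you load the workflow. The relative paths below come from the instructions above; the exact filenames are assumptions where the page only links a folder (e.g. the IPAdapter model name), so adjust the root and names to your install:

```python
import os

def check_models(root, relative_paths):
    """Return the subset of expected model files missing under `root`."""
    return [p for p in relative_paths if not os.path.exists(os.path.join(root, p))]

# Paths from the instructions above; some filenames are assumptions.
EXPECTED = [
    "models/checkpoints/realisticVisionV60B1_v60B1VAE.safetensors",
    "models/style_models/coadapter-style-sd15v1.pth",
    "models/clip_vision/SD1.5/pytorch_model.bin",
    "custom_nodes/ComfyUI_IPAdapter_plus/models/ip-adapter_sd15.safetensors",
]

for missing in check_models("ComfyUI", EXPECTED):
    print("missing:", missing)
```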
If you have any questions, please add my WeChat: knowknow0
Node Diagram
Discussion
I would try it. Thanks!
If you use my generated images for your workflow, that's OK... BUT you should and could !! have asked BEFORE !! using them, instead of just doing it and adding a lame disclaimer to avoid trouble. Bye now. END.
You're right. I'll change it. I'd like to apologize to you.
Respect, apologies accepted :)
Now that you agree, I don't mind my images being used.
It's just more polite to ask first.
Cool then I will change my review :)
I just tried to contact the original author of these images. What I didn't realize was that all of these beautiful images were produced by you! I hadn't paid attention before; I just carefully selected some of them, thinking they were by different authors. :)
Now that you've agreed, I'll gladly keep these pictures in, because I think they're beautiful.
All oke no worries ;)
Where did you try to contact me?
I am on the dev discord server all day: https://discord.gg/FeeaSdFj
Hi there, I would love to try out this workflow, but it keeps giving me the same error: "Error occurred when executing StyleModelApply: Sizes of tensors must match except in dimension 1. Expected size 1280 but got size 1024 for tensor number 1 in the list." I'm sorry for the newb question, but how can I fix it?
I have also encountered the same problem. I have been tinkering for two days, deleting all nodes and reinstalling the system, but it has been ineffective.
This is usually due to a mismatch between the models. My recommended CHECKPOINT model is realisticVisionV60, which is an SD1.5 model.
I have updated the download addresses for the various models so you can view the model names one by one in my workflow chart.
Your main problem is a StyleModelApply error, so you need to check the coadapter model as well as the clip vision model.
I have the same problem with that error. I checked the models, the coadapter model as well as the clip vision model; IPAdapter is working, but I still want to try the coadapter.
Can you give us more tips on how to make that work?
I'd actually like to help, but I'm not too good at this either. Would you double-check the clip vision model: is it pytorch_model.bin, 2.53 GB?
Holy moly, this is some crazy comfy magic! Amazing work friend!
I also encountered a "Sizes of tensors..." error, but when I removed the relevant modules such as "Apply Style Model" from the workflow and re-enabled IPAdapter, it worked correctly.
So I created a new workflow and applied the relevant modules such as "Apply Style Model" to the most basic txt2img. It's still not working.
I switched to a few more cloud environments to run the simplest txt2img workflow with the style model. It still doesn't work.
As you can see, the error is in the "apply style model" node.
Hi, amazing workflow! Sometimes the colors next to the edges don't follow them. Is there a setting whose value I can increase so the result follows those edge pixels more closely?
PS: Is anyone else loading the workflow and finding it set to another language? How can I fix this without remaking all the nodes and switching them?
I have this error with 'timestep_kf' in the Apply ControlNet node. Do you know where it comes from?
Works great and fast, thanks bro!
When loading the graph, the following node types were not found:
- IPAdapterApply
Nodes that have failed to load will show as red on the graph.
why error?
Error occurred when executing StyleModelApply: Sizes of tensors must match except in dimension 1. Expected size 1280 but got size 1024 for tensor number 1 in the list.
File "F:\Blender_ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\Blender_ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\Blender_ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "F:\Blender_ComfyUI\ComfyUI\nodes.py", line 937, in apply_stylemodel
  cond = style_model.get_cond(clip_vision_output).flatten(start_dim=0, end_dim=1).unsqueeze(dim=0)
File "F:\Blender_ComfyUI\ComfyUI\comfy\sd.py", line 341, in get_cond
  return self.model(input.last_hidden_state)
File "F:\Blender_ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
  return self._call_impl(*args, **kwargs)
File "F:\Blender_ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
  return forward_call(*args, **kwargs)
File "F:\Blender_ComfyUI\ComfyUI\comfy\t2i_adapter\adapter.py", line 214, in forward
  x = torch.cat([x, style_embedding], dim=1)
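For what it's worth, this traceback ends in a torch.cat of the CLIP vision features with the style embedding, which fails whenever the two models produce different hidden sizes, for example a clip vision encoder whose features are 1280 wide paired with a style model trained on 1024-wide features (or vice versa). A minimal reproduction of the same error class; the exact shapes here are illustrative assumptions:

```python
import torch

# Illustrative shapes: (batch, tokens, hidden). A hidden-size mismatch between
# the clip vision output and the style embedding makes torch.cat fail exactly
# like the traceback above.
clip_features = torch.randn(1, 257, 1024)   # e.g. a 1024-wide vision encoding
style_embedding = torch.randn(1, 8, 1280)   # e.g. a 1280-wide style embedding

try:
    torch.cat([clip_features, style_embedding], dim=1)
except RuntimeError as e:
    print(e)  # "Sizes of tensors must match except in dimension 1 ..."
```

The fix is to pair the clip vision model with the style model it was trained for (the download links above), rather than mixing encoders from different adapters.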
The repair edges group is not active, and its preview doesn't show anything. Has anyone else had this problem?
Any suggestions?
Working. It's a bit flawed, but I love it!
When loading the graph, the following node types were not found:
- IPAdapterApply
Nodes that have failed to load will show as red on the graph.
Only the side using the coadapter-style version works; the other side reports an error, which is hard on people with OCD.
Error prompted: Error occurred when executing ACN_AdvancedControlNetApply: AdvancedControlNetApply.apply_controlnet() got an unexpected keyword argument 'timestep_kf'
File "D:\program files\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\program files\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\program files\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
Node Details
Primitive Nodes (12)
IPAdapterApply (1)
Image scale to side (3)
Note (4)
Reroute (4)
Custom Nodes (44)
- Mask Contour (1)
ComfyUI
- StyleModelLoader (1)
- CLIPVisionEncode (1)
- CLIPVisionLoader (2)
- StyleModelApply (1)
- CLIPTextEncode (4)
- ControlNetLoader (1)
- SetLatentNoiseMask (2)
- VAEEncode (2)
- ImageToMask (1)
- GrowMask (2)
- MaskToImage (4)
- ImagePadForOutpaint (1)
- VAEDecode (2)
- PreviewImage (3)
- InvertMask (3)
- KSampler (2)
- LoadImage (1)
- CheckpointLoaderSimple (1)
- InpaintPreprocessor (1)
- IPAdapterModelLoader (1)
- ScaledSoftControlNetWeights (1)
- ACN_AdvancedControlNetApply (1)
- Paste By Mask (2)
- Mask Erode Region (1)
- Mask Gaussian Region (1)
- Mask Dilate Region (1)
Model Details
Checkpoints (1)
realisticVisionV60B1_v60B1VAE.safetensors
LoRAs (0)