extended image (No prompts needed)

Description

The workflow has been updated with many improvements and upgrades. It is based on the XL model, and I recommend using the new workflow.

Link to the new workflow:

https://openart.ai/workflows/hornet_splendid_53/extended-outpaintxl-update/RbTrDOJifp89TcHNjo6Z

--------------------------------------------------------------------------------------------------------------------

The stock images I used in the demo are all by the author #NeuraLunk. His images are beautiful, and if you like his work, you can find him at the URL below.

https://openart.ai/workflows/profile/neuralunk?sort=latest

What this workflow does

It extends (outpaints) an image beyond its original borders.

How to use this workflow

Just drag and drop your image in; there is no need to write a prompt.

How it works

The ControlNet inpaint model guesses what the extended regions should contain.

At the same time, a style model references the original picture so that ControlNet does not guess wildly.

The style model can be either CoAdapter or IPAdapter; they reference the style in different ways. I prefer CoAdapter for extending images.

I highly recommend the realisticVisionV60B1VAE model (3.97 GB) for its great extended-image results!
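For intuition, here is a rough sketch in plain PyTorch of what the outpaint-padding step produces before ControlNet and the style model take over. It is only an illustration of the idea behind the ImagePadForOutpaint node, not code taken from the workflow; the function name and sizes are made up for the example.

import torch

def pad_for_outpaint(image, left=0, top=0, right=0, bottom=0):
    # image: [H, W, C] float tensor with values in 0..1
    h, w, c = image.shape
    new_h, new_w = h + top + bottom, w + left + right

    padded = torch.full((new_h, new_w, c), 0.5)    # neutral gray fill for the new border
    padded[top:top + h, left:left + w, :] = image  # the original pixels stay untouched

    mask = torch.ones(new_h, new_w)                # 1.0 = area the ControlNet inpaint model fills in
    mask[top:top + h, left:left + w] = 0.0         # 0.0 = original image, kept as-is
    return padded, mask

# Example: extend a 512x512 picture by 256 px on the right and 128 px at the bottom.
img = torch.rand(512, 512, 3)
padded, mask = pad_for_outpaint(img, right=256, bottom=128)
print(padded.shape, mask.shape)  # torch.Size([640, 768, 3]) torch.Size([640, 768])

The masked area is what KSampler regenerates, while the style model (CoAdapter or IPAdapter) conditions that sampling on a CLIP vision embedding of the original picture, which is why the matching CLIP vision checkpoint in the list below matters.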


Model Download

Checkpoint

https://civitai.com/models/4201/realistic-vision-v60-b1 (3.97 GB)

Place it in ComfyUI\models\checkpoints

CoAdapter

https://huggingface.co/TencentARC/T2I-Adapter/blob/main/models/coadapter-style-sd15v1.pth

Place it in ComfyUI\models\style_models


IPAdapter

https://huggingface.co/h94/IP-Adapter/tree/main/models

Place it in ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\models


IPAdapter CLIP vision

https://huggingface.co/h94/IP-Adapter/tree/main/models/image_encoder

Place it in ComfyUI_windows_portable\ComfyUI\models\clip_vision\SD1.5


CoAdapter CLIP vision

https://huggingface.co/openai/clip-vit-large-patch14/blob/main/pytorch_model.bin

Place it in ComfyUI_windows_portable\ComfyUI\models\clip_vision\SD1.5


Please make sure that all models are compatible with the SD1.5 base model.
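For reference, after downloading everything the folder layout should look roughly like this (this assumes the Windows portable build; the IPAdapter file names are placeholders, so keep whatever names the downloads give you):

ComfyUI_windows_portable\ComfyUI\
    models\checkpoints\realisticVisionV60B1_v60B1VAE.safetensors
    models\style_models\coadapter-style-sd15v1.pth
    models\clip_vision\SD1.5\pytorch_model.bin                   (CoAdapter CLIP vision)
    models\clip_vision\SD1.5\<IPAdapter image encoder file>      (from the image_encoder folder)
    custom_nodes\ComfyUI_IPAdapter_plus\models\<IPAdapter model file>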


If you have any questions, please add my WeChat: knowknow0


Node Diagram
Discussion
László Isó (a year ago)

I would try it. Thanks!

#NeuraLunk (a year ago)

If you use my generated images for your workflow, that's OK... BUT you should, and could, have asked BEFORE using them, instead of just doing it and adding a lame disclaimer to avoid trouble. Bye now. END.

Ning (a year ago)

You're right. I'll change it. I'd like to apologize to you.

#NeuraLunk (a year ago)

Respect, apologies accepted :)

Now that you agree, I don't mind my images being used.

It's just more polite to ask first.

Cool, then I will change my review :)


Ning (a year ago)

I just tried to contact the original author of these images. What I didn't realize was that all of these beautiful images were produced by you! I hadn't paid attention before; I just carefully selected some of them, thinking they were by different authors. :)

Now that you've agreed, I'd better add these pictures, because I think they're beautiful.

#NeuraLunk (a year ago)

All OK, no worries ;)

Where did you try to contact me?

I am on the dev Discord server all day: https://discord.gg/FeeaSdFj

Kamil Nowak (a year ago)

Hi there, I would love to try out this workflow, but it keeps giving me the same error: "Error occurred when executing StyleModelApply: Sizes of tensors must match except in dimension 1. Expected size 1280 but got size 1024 for tensor number 1 in the list." I'm sorry for the newb question, but how can I fix it?

net wing (a year ago)

I have also encountered the same problem. I have been tinkering for two days, deleting all nodes and reinstalling the system, but it has been ineffective.

Ning (a year ago)

I know exactly how you feel, as I've been through this a million times as well. I've left a message below so you can see if it solves your problem.

Ning (a year ago)

This is usually due to a mismatch between the models. My recommended checkpoint is realisticVisionV60, which is an SD1.5 model.

I have updated the download addresses for the various models, so you can check the model names one by one against my workflow chart.

Your main problem is a StyleModelApply error, so you need to check the CoAdapter model as well as the CLIP vision model.
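For anyone wondering why the wrong CLIP vision file produces exactly this message: the style model concatenates its own style tokens onto the CLIP vision hidden states, and that only works when both have the same feature width. A rough, self-contained illustration in PyTorch (the 1024 and 1280 widths are assumptions chosen to match the error text, not values read from the checkpoints):

import torch

style_embedding = torch.zeros(1, 8, 1024)    # CoAdapter style tokens, assumed 1024-wide
vit_l_tokens    = torch.zeros(1, 257, 1024)  # hidden states from a ViT-L CLIP vision model (the pytorch_model.bin from the description)
vit_h_tokens    = torch.zeros(1, 257, 1280)  # hidden states from a ViT-H encoder, e.g. the IPAdapter image encoder

print(torch.cat([vit_l_tokens, style_embedding], dim=1).shape)  # works: feature widths match

try:
    torch.cat([vit_h_tokens, style_embedding], dim=1)            # feature widths differ
except RuntimeError as e:
    print(e)  # Sizes of tensors must match except in dimension 1. Expected size 1280 but got size 1024 ...

So if the StyleModelApply side throws this error, the CLIP Vision Loader feeding it is almost certainly pointing at the wrong file.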

58263687 (a year ago)

What's wrong in my case? I'm in the same situation as her.

Chiu Chin Pong (a year ago)

I have the same problem with that error, and I have checked the CoAdapter model as well as the CLIP vision model. IPAdapter is working, but I still want to try the CoAdapter.

Can you give us more tips on how to make that work?

Ning (a year ago)

I'd actually like to help, but I'm not too good at this either. Would you double-check the CLIP vision model? Is it pytorch_model.bin, 2.53 GB?

Chiu Chin Pong (a year ago)

Thanks for that, it is working now.

The model I downloaded through the ComfyUI Manager "Install Models" option was not working; the download from your link is working.


Ning (a year ago)

I've also updated the description. You can check it out.

Ning (a year ago)

A friend of mine may have found the problem.


coadapter clip vision


https://huggingface.co/openai/clip-vit-large-patch14/blob/main/pytorch_model.bin


Place it in the ComfyUI_windows_portable\ComfyUI\models\clip_vision\SD1.5

58263687 (a year ago)


x = torch.cat([x, style_embedding], dim=1)

RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 1280 but got size 1024 for tensor number 1 in the list.

Prompt executed in 5.38 seconds

Ning (a year ago)

Would you double-check the CLIP vision model? Is it pytorch_model.bin, 2.53 GB? I've put the download address in the description.

Ning (a year ago)

A friend of mine may have found the problem.


coadapter clip vision


https://huggingface.co/openai/clip-vit-large-patch14/blob/main/pytorch_model.bin


Place it in the ComfyUI_windows_portable\ComfyUI\models\clip_vision\SD1.5

Sweatington (a year ago)

Holy moly, this is some crazy comfy magic! Amazing work friend!

I also encountered a "Sizes of tensors..." error, but when I removed the relevant modules such as "Apply Style Model" from the workflow and re-enabled IPAdapter, it worked correctly.

So I created a new workflow and applied the relevant modules such as "Apply Style Model" to the most basic txt2img. It's still not working.

I switched to a few more cloud environments to run the simplest txt2img workflow with the style model. It still doesn't work.

As you can see, the error is in the "Apply Style Model" node.

dakhari (a year ago)

Is there any chance you could develop, or have developed, a similar outpainting workflow for SDXL checkpoints? Those are what I like to work with most often. Thank you.

Jams Drak (a year ago)

Hi, amazing workflow! Sometimes the colors next to the edges don't follow them. Is there a setting whose value I can increase so that it follows those edge pixels more closely?

PS: Is anyone else loading the workflow and finding it set to another language? How can I fix this without remaking all the nodes and switching them?

Top We (a year ago)

I have the same problem; it seems the edge-repair part doesn't work.


I have this error with 'timestep_kf' in the Apply ControlNet node; do you know where it comes from?

vu116_34247 (a year ago)

Works great and fast, thanks bro!

Jeff Thomann (a year ago)

When loading the graph, the following node types were not found:

  • IPAdapterApply

Nodes that have failed to load will show as red on the graph.
pet po (a year ago)

Why this error?

Error occurred when executing StyleModelApply: Sizes of tensors must match except in dimension 1. Expected size 1280 but got size 1024 for tensor number 1 in the list.

File "F:\Blender_ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\Blender_ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\Blender_ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "F:\Blender_ComfyUI\ComfyUI\nodes.py", line 937, in apply_stylemodel
    cond = style_model.get_cond(clip_vision_output).flatten(start_dim=0, end_dim=1).unsqueeze(dim=0)
File "F:\Blender_ComfyUI\ComfyUI\comfy\sd.py", line 341, in get_cond
    return self.model(input.last_hidden_state)
File "F:\Blender_ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "F:\Blender_ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "F:\Blender_ComfyUI\ComfyUI\comfy\t2i_adapter\adapter.py", line 214, in forward
    x = torch.cat([x, style_embedding], dim=1)

Miao Liu (9 months ago)

I have the same error.

Morteza Basij (a year ago)

Where can I download the control_v11p_sd15_inpaint.pth file?

Morteza Basij (a year ago)

I found that the file can be downloaded here:

https://huggingface.co/lllyasviel/ControlNet-v1-1/tree/main

Morteza Basij (a year ago)

The repair-edges group is not active and its preview doesn't show anything. Has anyone else had this problem?

Any suggestions?

瞿秋丰 (a year ago)

IPAdapter has been updated; the Apply IPAdapter node doesn't work.

When loading the graph, the following node types were not found:

  • IPAdapterTilesMasked
VVV VVV (a year ago)

same here!


Hyun Seong (a year ago)

Working. It's a bit flawed, but I love it!

nanaye_37626 (a year ago)

When loading the graph, the following node types were not found:

  • IPAdapterApply

Nodes that have failed to load will show as red on the graph.

Only the side using the coadapter-style version works; the other side reports an error, which is hard for people with OCD.

rat net (a year ago)

Error prompted:

Error occurred when executing ACN_AdvancedControlNetApply: AdvancedControlNetApply.apply_controlnet() got an unexpected keyword argument 'timestep_kf'

File "D:\program files\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\program files\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\program files\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))

rat net (a year ago)

Apply Style Model error


Author

Ning

Reviews

tan hoon (a year ago)

I like your work, wonderful!

瞿秋丰 (a year ago)

Frey (a year ago)

not working

Sweatington (a year ago)

Amazing comfy magic. 5/5

Looks really good

sen yun (a year ago)

Really nice! Really nice.

Bear xiong (a year ago)

👍🏻

#NeuraLunk (a year ago)

Nice !

Coco (a year ago)

Looks really good! Excited to try. Thanks for the credits to the openart site :)

Versions (4)

  • - latest (a year ago)

  • - v20231219-081903

  • - v20231219-081737

  • - v20231219-072611

Primitive Nodes (12)

IPAdapterApply (1)

Image scale to side (3)

Note (4)

Reroute (4)

Custom Nodes (44)

  • - Mask Contour (1)

ComfyUI

  • - StyleModelLoader (1)

  • - CLIPVisionEncode (1)

  • - CLIPVisionLoader (2)

  • - StyleModelApply (1)

  • - CLIPTextEncode (4)

  • - ControlNetLoader (1)

  • - SetLatentNoiseMask (2)

  • - VAEEncode (2)

  • - ImageToMask (1)

  • - GrowMask (2)

  • - MaskToImage (4)

  • - ImagePadForOutpaint (1)

  • - VAEDecode (2)

  • - PreviewImage (3)

  • - InvertMask (3)

  • - KSampler (2)

  • - LoadImage (1)

  • - CheckpointLoaderSimple (1)

  • - InpaintPreprocessor (1)

  • - IPAdapterModelLoader (1)

  • - ScaledSoftControlNetWeights (1)

  • - ACN_AdvancedControlNetApply (1)

  • - Paste By Mask (2)

  • - Mask Erode Region (1)

  • - Mask Gaussian Region (1)

  • - Mask Dilate Region (1)

Checkpoints (1)

realisticVisionV60B1_v60B1VAE.safetensors

LoRAs (0)