Vid2Vid Style Transfer with IPA & Hotshot XL

Description

Table of Contents

Checkpoint I used
HotshotXL download
A lower-memory VAE

I found that the "Strong Style Transfer" of IPAdapter performs exceptionally well in Vid2Vid.

Just update your IPAdapter and have fun~!


Checkpoint I used:

Any Turbo or Lightning model works well, e.g. DreamShaper XL Turbo/Lightning, Juggernaut XL Lightning, etc. (Remember to check each model's required sampler and lower your CFG accordingly; see the sketch below.)
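For reference, a minimal sketch of what those sampler settings look like for the checkpoint listed at the bottom of this page (dreamshaperXL_v21TurboDPMSDE), using the values commenters report further down; treat your model's own page as the authority:

# KSampler settings for an SDXL Turbo checkpoint (dreamshaperXL_v21TurboDPMSDE).
# Other Turbo/Lightning models may want a different sampler and step count.
ksampler_settings = {
    "steps": 8,                   # Turbo/Lightning models converge in few steps
    "cfg": 2.0,                   # keep CFG low; high CFG over-bakes the image
    "sampler_name": "dpmpp_sde",  # the sampler this DPMSDE Turbo variant expects
    "scheduler": "karras",
}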


HotshotXL download:

Download: https://huggingface.co/Kosinkadink/HotShot-XL-MotionModels/tree/main

Context_length: 8 is good (see the sketch below).
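For orientation, a sketch of the matching context settings. The workflow's node list includes ADE_StandardUniformContextOptions; the context_length comes from the author, while the overlap/stride values below are assumptions on my part, not the author's confirmed settings:

context_options = {
    "context_length": 8,   # HotshotXL generates 8 frames at a time
    "context_overlap": 4,  # assumed: frames shared between adjacent windows
    "context_stride": 1,   # assumed: plain sliding window, no dilation
}
# With more frames than context_length, AnimateDiff-Evolved slides this window
# across the whole video (see the "Sliding context window activated" log line
# quoted in the comments below).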


I also recommend taking a look at Inner_Reflections_AI's article, which provides very detailed explanations: https://civitai.com/articles/2601



A lower-memory VAE:

Download: https://civitai.com/models/140686?modelVersionId=155933



Node Diagram
Discussion
VVV VVV (a year ago)

Hey, very nice workflow! Where can I find the ControlNet models from your workflow? I have different ones, including one from 1.5. Can you please give me the link? I'm a little confused by ControlNet SDXL models: sometimes they are 200 MB, sometimes 2 or 3 GB like the old 1.5 A1111 ones.

瞿秋丰 (a year ago)

Could you share the workflow for the spinning-girl video on Weibo?

VVV VVV (a year ago)

Hey, I managed to find the ControlNets, but I get this message even though I have updated IPAdapter:

Error occurred when executing IPAdapterUnifiedLoader: IPAdapter model not found.

File "C:\Users\User\Desktop\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\User\Desktop\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\User\Desktop\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\User\Desktop\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 454, in load_models
    raise Exception("IPAdapter model not found.")


Me too. How do I solve it? It can't find the clip vision model, but it's obviously there in models/clip_vision.

Simon Lee (a year ago)

The clip vision models must be named exactly per the official naming, or they cannot be loaded:
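(For reference, the two officially-named files are the ones another commenter lists further down this thread; a sketch of the expected layout, assuming a default install:)

ComfyUI/models/clip_vision/
    CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors       # ViT-H: used by most presets, incl. the PLUS ones
    CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors    # ViT-bigG: used by some SDXL presets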


Wowwww! You're awesome! This is so super cool, I tried it out and I'm really really loving the results )))

alexone7777 (a year ago)

Where is "ttplanetSDXLControlnet_v10Fp16_tile.safetensors"?

Why does my workflow stop here:

[AnimateDiffEvo] - INFO - Sliding context window activated - latents passed in (374) greater than context_length 8.

[AnimateDiffEvo] - INFO - Using motion module mm_sdxl_v10_beta.ckpt:v1.

Requested to load ControlNet

Requested to load ControlNet

Requested to load AnimateDiffModel

Requested to load ControlNet

Requested to load SDXL

Loading 5 new models

0%|                                                                                            | 0/8 [00:00<?, ?it/s]

lin 2k (newold) (8 months ago)

Me too. Did anything help?

OK, maybe my VRAM is just too low to run it.
YJ S (7 months ago)

Same here. Did you find an answer?

xuxu (4 months ago)

I am having the exact same situation as well!

xuxu (4 months ago)

It went through in the end, but it took a very long time. Deleting one of the ControlNets made it go a lot faster.

JangHo Park (a year ago)

Error occurred when executing IPAdapterUnifiedLoader: IPAdapter model not found.

File "G:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 454, in load_models
    raise Exception("IPAdapter model not found.")

On this path, ComfyUI\models\clip_vision, I have:

CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors
CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors

I still get an error message. Even after updating, it's still a problem.

JangHo Park (a year ago)

I downloaded all the IPAdapter models and then it worked.
余承谦 (10 months ago)

How did you get the IPAdapter models?
raymond lee (10 months ago)

Go to Manager -> Download Model -> search for "ipadapter" -> download all SDXL-related IPAdapter models.

洪励航 (a year ago)

Where is diffusers_xl_depth_full.safetensors?

Thank you so much for your kindness and support.

洪励航 (a year ago)

Error occurred when executing ColorMatch: Can't import color-matcher, did you install requirements.txt? Manual install: pip install color-matcher

How do I solve this?
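(The error message itself names the fix: color-matcher has to be installed into the Python environment ComfyUI actually runs in. On a Windows portable build that is the embedded interpreter, so, assuming the default portable layout, the command would look like:)

python_embeded\python.exe -m pip install color-matcher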

JangHo Park (a year ago)

KSampler: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

File "G:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1344, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1314, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
    raise e
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 383, in motion_sample
    latents = orig_comfy_sample(model, noise, *args, **kwargs)
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control_reference.py", line 47, in refcn_sample
    return orig_comfy_sample(model, *args, **kwargs)

JangHo Park (a year ago)

Can't I run this with 8GB of VRAM???
Cc T (10 months ago)

Error occurred when executing ADE_AnimateDiffLoaderGen1: cumprod() received an invalid combination of arguments - got (numpy.ndarray, dim=int), but expected one of:
 * (Tensor input, int dim, *, torch.dtype dtype, Tensor out)
 * (Tensor input, name dim, *, torch.dtype dtype, Tensor out)

File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\nodes_gen1.py", line 95, in load_mm_and_inject_params
    new_model_sampling = BetaSchedules.to_model_sampling(beta_schedule, model)
File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\utils_model.py", line 167, in to_model_sampling
    return cls._to_model_sampling(alias=alias, model_type=model.model.model_type)
File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\utils_model.py", line 162, in _to_model_sampling
    ms_obj = evolve_model_sampling(cls.to_config(alias), model_type=model_type, alias=alias, original_timesteps=original_timesteps)
File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\utils_model.py", line 79, in evolve_model_sampling
    return model_sampling(model_config, model_type)
File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\comfy\model_base.py", line 45, in model_sampling
    return ModelSampling(model_config)
File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\comfy\model_sampling.py", line 50, in __init__
    self._register_schedule(given_betas=None, beta_schedule=beta_schedule, timesteps=1000, linear_start=linear_start, linear_end=linear_end, cosine_s=8e-3)
File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\comfy\model_sampling.py", line 60, in _register_schedule
    alphas_cumprod = torch.cumprod(alphas, dim=0)

群力 (9 months ago)

Error occurred when executing ADE_AnimateDiffLoaderGen1: join() argument must be str, bytes, or os.PathLike object, not 'NoneType'

File "D:\AIGC-666\ConfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\AIGC-666\ConfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\AIGC-666\ConfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\AIGC-666\ConfyUI-aki\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\nodes_gen1.py", line 50, in load_mm_and_inject_params
    motion_model = load_motion_module_gen2(model_name=model_name, motion_model_settings=ad_settings)
File "D:\AIGC-666\ConfyUI-aki\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 1229, in load_motion_module_gen2
    model_path = get_motion_model_path(model_name)
File "D:\AIGC-666\ConfyUI-aki\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\utils_model.py", line 346, in get_motion_model_path
    return folder_paths.get_full_path(Folders.ANIMATEDIFF_MODELS, model_name)
File "D:\AIGC-666\ConfyUI-aki\ComfyUI-aki-v1.3\folder_paths.py", line 179, in get_full_path
    filename = os.path.relpath(os.path.join("/", filename), "/")
File "D:\AIGC-666\ConfyUI-aki\ComfyUI-aki-v1.3\python\lib\ntpath.py", line 143, in join
    genericpath._check_arg_types('join', path, *paths)
File "D:\AIGC-666\ConfyUI-aki\ComfyUI-aki-v1.3\python\lib\genericpath.py", line 152, in _check_arg_types
    raise TypeError(f'{funcname}() argument must be str, bytes, or '

Tuatara (9 months ago)

A ComfyUI newbie asking for advice:

After fixing seven or eight errors I finally got the workflow running, but the result isn't great: the final output looks very noisy. Does anyone know how to adjust it?

Compared with the OP's original workflow, I changed the second ControlNet's AIO preprocessor to DepthAnything (originally Zoe DepthAnything);

changed the CFG from 2.0 to 1.0;

kept the sampler as dpmpp_sde;

kept the scheduler as karras.


Tuatara (9 months ago)

Also, running 8 steps on a 4070 Ti SUPER takes 21 minutes. Is that normal? Is there any way to optimize it?

Jamson Liu (9 months ago)

+1, it's so slow.

Tuatara (9 months ago)

You too? I was even starting to suspect my GPU had a problem 😂

What's a bit odd is that the source video matters too: an 11-second clip takes only 21 minutes at 8 steps, while a 7-second clip takes 30+ minutes at 6 steps. You could try different footage.
Jamson Liu (9 months ago)

Did you solve it in the end?

Tuatara (9 months ago)

No, see my reply below.

Tuatara (9 months ago)

I've been trying all day, and no matter how I tune the parameters I can't match the OP's results. I now suspect the ControlNet models I'm using are the problem. The last two of the OP's three ControlNets use diffusers_xl_depth_full.safetensors and diffusers_xl_canny_full.safetensors; for both I used diffusion_pytorch_model.safetensors instead (I really couldn't find those two ControlNets by searching, so I substituted the ones I found under the diffusers organization on Hugging Face).

Also, for the second ControlNet the OP's AIO node uses Zoe_DepthAnythingPreprocessor, while I use DepthAnythingV2Preprocessor, because the former errors out immediately (it seemingly still can't find that preprocessor, even though in theory it should download automatically; I don't know why it doesn't).

Does anyone know where the problem is? I can share my workflow and footage. Or could some kind soul share the two ControlNets mentioned above 👆?
Tuatara (9 months ago)

Finally solved it. The two ControlNets mentioned above are exactly the corresponding depth and canny models under the diffusers organization on Hugging Face. The OP must have renamed them (which is understandable, since both models share the same filename on Hugging Face). My results still aren't as good as the OP's, but they're acceptable now. (I still used DepthAnythingV2Preprocessor as the preprocessor.)
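(For anyone else hunting these files down, they appear to be the SDXL depth and canny ControlNets from the diffusers organization; my best guess at the sources, not confirmed by the OP:)

https://huggingface.co/diffusers/controlnet-depth-sdxl-1.0   (download diffusion_pytorch_model.safetensors, rename to diffusers_xl_depth_full.safetensors)
https://huggingface.co/diffusers/controlnet-canny-sdxl-1.0   (download diffusion_pytorch_model.safetensors, rename to diffusers_xl_canny_full.safetensors)

The same weights also appear to be mirrored under those exact filenames in https://huggingface.co/lllyasviel/sd_control_collection.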

Jamson Liu (9 months ago)

I used these two ControlNets, but there's still a lot of noise; it's far too unclear.

Tuatara (8 months ago)

Are your preprocessor settings all correct?

Jamson Liu (8 months ago)

They match the author's, unmodified. I've tried both DepthAnythingV2Preprocessor and Zoe_DepthAnythingPreprocessor that you mentioned; there's still a lot of noise.

Tuatara (8 months ago)

I don't know where the problem is either; I also get heavy noise. Now I'm off to reproduce another Harry Potter workflow, which also has problems... solving weird issues every day 😂

Jamson Liu (8 months ago)

Hahaha, a true fellow traveler. Same here, exactly the same.

daniel yehezkeli (8 months ago)

Though I managed to run the workflow, when it reaches the KSampler it gets stuck and won't progress further. It says it's at 85% and freezes after "loading 5 models". Any ideas on how to tackle this?

Crazy Chinese (8 months ago)

Where does the PLUS (high strength) model for the IPAdapter Unified Loader come from??? I've been looking for it the entire afternoon...
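(To my understanding, the Unified Loader presets are not separate downloads under that name; "PLUS (high strength)" selects the standard IP-Adapter "plus" weights. For an SDXL checkpoint that would be roughly the layout below, assuming the usual h94/IP-Adapter release:)

ComfyUI/models/ipadapter/
    ip-adapter-plus_sdxl_vit-h.safetensors   # from the sdxl_models folder of https://huggingface.co/h94/IP-Adapter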

aitomspa (8 months ago)

I'm running it online via Discord and have already replaced the video upload. Why does it still throw an error?

Error occurred when executing VAELoader: 'NoneType' object has no attribute 'lower'

File "/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/ComfyUI/nodes.py", line 689, in load_vae
    sd = comfy.utils.load_torch_file(vae_path)
File "/ComfyUI/comfy/utils.py", line 13, in load_torch_file
    if ckpt.lower().endswith(".safetensors"):

称橙逞秤 (4 months ago)

🚀 Now you can edit & run it online (fast & error-free):

https://www.runninghub.ai/post/1866056401050357762/?utm_source=openart

https://www.runninghub.cn/post/1866016887959142402/?utm_source=openart

RunningHub: highly reliable cloud-based ComfyUI. Edit and run workflows online with no local installation required, powered by RTX 4090 GPUs for faster performance and fully error-free node operation.

-------------------------------------

👋🏻 Hello Simon Lee,    
I’m Spike from RunningHub.  I hope this message finds you well! I wanted to kindly inform you that we’ve copied your workflow to RunningHub (with the original author clearly credited and a link to your OpenArt profile). The purpose is to allow more users who love your work to easily experience it online.  
We’d love to hear from you at: spike@runninghub.ai
🎉 If you’re open to joining us, we can transfer the workflows we’ve uploaded to your RunningHub account and provide you with additional benefits as a token of our appreciation.
😞 However, if you’re not comfortable with this approach, we will promptly remove all the uploaded content.    
Looking forward to your response!

Reinier Reynhout (2 months ago)

Hey! I love this flow! Great work, and thanks for sharing. How did you achieve such a noise-free result? I got it all working, but it feels noisy, not as clean as your examples.


Reviews

y d (7 months ago)

samon_07860 (10 months ago)

Versions (1)

  • latest (a year ago)

Primitive Nodes (1)

  • Anything Everywhere3 (1)

Custom Nodes (36)

  • ADE_AnimateDiffLoaderGen1 (1)
  • ADE_StandardUniformContextOptions (1)

ComfyUI

  • ControlNetApplyAdvanced (3)
  • VAELoader (1)
  • CLIPTextEncode (2)
  • VAEEncode (2)
  • KSampler (2)
  • VAEDecode (2)
  • LoraLoader (1)
  • UpscaleModelLoader (1)
  • CheckpointLoaderSimple (1)
  • ImageScaleBy (1)
  • ImageUpscaleWithModel (1)
  • ImageScale (2)
  • LoadImage (1)
  • AIO_Preprocessor (3)
  • IPAdapterUnifiedLoader (1)
  • IPAdapterAdvanced (1)
  • ControlNetLoaderAdvanced (3)
  • VHS_VideoCombine (4)
  • VHS_LoadVideoPath (1)
  • ColorMatch (1)

Checkpoints (1)

  • dreamshaperXL_v21TurboDPMSDE.safetensors

LoRAs (1)

  • OIL_ON_CANVAS_v3.safetensors