Vid2Vid Style Transfer with IPA & Hotshot XL
Description
I found that IPAdapter's "Strong Style Transfer" performs exceptionally well for Vid2Vid.
Just update your IPAdapter and have fun~!
Checkpoint I used:
Any Turbo or Lightning model will work well, e.g. DreamShaper XL Turbo/Lightning or Juggernaut XL Lightning. (Remember to check the required samplers and lower your CFG.)
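As a reference point, here is a minimal sketch of the sampler settings that typically suit these checkpoints, written as Python data you could merge into a KSampler node's inputs when driving ComfyUI through its API-format JSON. The preset names and exact values are illustrative assumptions; always defer to each model's card.

# Hypothetical sampler presets for Turbo/Lightning SDXL checkpoints.
# Values are illustrative; check the model card for the real requirements.
KSAMPLER_PRESETS = {
    "sdxl_turbo_dpmsde": {
        "steps": 8,                    # Turbo/Lightning models need very few steps
        "cfg": 2.0,                    # keep CFG low (roughly 1.0-2.5)
        "sampler_name": "dpmpp_sde",
        "scheduler": "karras",
    },
    "sdxl_lightning_4step": {
        "steps": 4,
        "cfg": 1.0,
        "sampler_name": "euler",
        "scheduler": "sgm_uniform",
    },
}

def apply_preset(ksampler_inputs: dict, preset: str) -> dict:
    """Merge a preset into the 'inputs' dict of a KSampler node (API-format JSON)."""
    return {**ksampler_inputs, **KSAMPLER_PRESETS[preset]}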
HotshotXL download:
Download: https://huggingface.co/Kosinkadink/HotShot-XL-MotionModels/tree/main
Context_length: 8 is good
I also recommend taking a look at Inner_Reflections_AI's article, which provides very detailed explanations: https://civitai.com/articles/2601
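To see why context_length matters for long clips: each sampling step denoises the latent batch in short overlapping windows instead of all frames at once, which is what keeps VRAM use flat regardless of video length. Below is a minimal sketch of the idea, not the exact AnimateDiff-Evolved scheduler (the function name and overlap value are assumptions).

# Rough sketch of uniform sliding context windows over a long latent batch.
def sliding_windows(num_frames: int, context_length: int = 8, overlap: int = 4):
    """Yield (start, end) pairs so that every frame falls inside some window."""
    if num_frames <= context_length:
        yield 0, num_frames
        return
    stride = context_length - overlap
    start = 0
    while start + context_length < num_frames:
        yield start, start + context_length
        start += stride
    yield num_frames - context_length, num_frames  # final window flush with the end

# A 374-frame clip with context_length=8 (as in the log quoted in the discussion below)
# is covered by 93 overlapping 8-frame windows per sampling step.
print(len(list(sliding_windows(374, 8, 4))))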
A lower-memory VAE:
Download: https://civitai.com/models/140686?modelVersionId=155933
Node Diagram
Discussion
Hey, I managed to find the ControlNets, but I get this message even though I've updated IPAdapter:
Error occurred when executing IPAdapterUnifiedLoader:

IPAdapter model not found.

File "C:\Users\User\Desktop\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\User\Desktop\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\User\Desktop\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\User\Desktop\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 454, in load_models
  raise Exception("IPAdapter model not found.")
Me too. How do I solve it? It can't find CLIP Vision, but it's obviously there in models/clip_vision.
Wowwww! You're awesome! This is so super cool, I tried it out and I'm really really loving the results )))
where is "ttplanetSDXLControlnet_v10Fp16_tile.safetensors" ?
https://civitai.com/models/330313/ttplanetsdxlcontrolnettilerealistic
thank you!
Why does my workflow stop here:
[AnimateDiffEvo] - INFO - Sliding context window activated - latents passed in (374) greater than context_length 8.
[AnimateDiffEvo] - INFO - Using motion module mm_sdxl_v10_beta.ckpt:v1.
Requested to load ControlNet
Requested to load ControlNet
Requested to load AnimateDiffModel
Requested to load ControlNet
Requested to load SDXL
Loading 5 new models
0%| | 0/8 [00:00<?, ?it/s]
Me too, any help?
Never mind, maybe my VRAM is too low to run it.
Error occurred when executing IPAdapterUnifiedLoader:

IPAdapter model not found.

File "G:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 454, in load_models
  raise Exception("IPAdapter model not found.")
In the path ComfyUI\models\clip_vision I have:
CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors
CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors
I still get the error message. Even after updating, it's still a problem.
I downloaded all the IPAdapter models and then it worked.
How did you get the IPAdapter models?
Go to Manager → Download Model → search "ipadapter" → download all SDXL-related IPAdapter models.
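If you prefer to fetch the files by hand instead of through the Manager, here is a minimal sketch using huggingface_hub. The repo and in-repo file names follow the h94/IP-Adapter layout on Hugging Face as I understand it, and the destination paths/names are what IPAdapterUnifiedLoader usually expects; verify both before relying on this.

# Sketch: download the SDXL IPAdapter weights and a CLIP Vision encoder into ComfyUI.
# Repo ids, in-repo filenames, and destination names are assumptions; double-check them.
import shutil
from pathlib import Path
from huggingface_hub import hf_hub_download

COMFY = Path(r"C:\ComfyUI_windows_portable\ComfyUI")  # adjust to your install

DOWNLOADS = [
    # (repo id, file in repo, destination folder, destination filename)
    ("h94/IP-Adapter", "sdxl_models/ip-adapter_sdxl_vit-h.safetensors",
     "models/ipadapter", "ip-adapter_sdxl_vit-h.safetensors"),
    ("h94/IP-Adapter", "sdxl_models/ip-adapter-plus_sdxl_vit-h.safetensors",   # PLUS (high strength)
     "models/ipadapter", "ip-adapter-plus_sdxl_vit-h.safetensors"),
    ("h94/IP-Adapter", "models/image_encoder/model.safetensors",               # ViT-H image encoder
     "models/clip_vision", "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors"),
]

for repo_id, remote, dest_dir, dest_name in DOWNLOADS:
    cached = hf_hub_download(repo_id=repo_id, filename=remote)  # lands in the HF cache
    target = COMFY / dest_dir / dest_name
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy(cached, target)
    print(f"-> {target}")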
KSampler: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

File "G:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1344, in sample
  return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1314, in common_ksampler
  samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
  raise e
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
  return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 383, in motion_sample
  latents = orig_comfy_sample(model, noise, *args, **kwargs)
File "G:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control_reference.py", line 47, in refcn_sample
  return orig_comfy_sample(model, *args, **kwargs)
Can't I run this with 8GB of VRAM???
Error occurred when executing ADE_AnimateDiffLoaderGen1:

cumprod() received an invalid combination of arguments - got (numpy.ndarray, dim=int), but expected one of:
* (Tensor input, int dim, *, torch.dtype dtype, Tensor out)
* (Tensor input, name dim, *, torch.dtype dtype, Tensor out)

File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 151, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 81, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 74, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\nodes_gen1.py", line 95, in load_mm_and_inject_params
  new_model_sampling = BetaSchedules.to_model_sampling(beta_schedule, model)
File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\utils_model.py", line 167, in to_model_sampling
  return cls._to_model_sampling(alias=alias, model_type=model.model.model_type)
File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\utils_model.py", line 162, in _to_model_sampling
  ms_obj = evolve_model_sampling(cls.to_config(alias), model_type=model_type, alias=alias, original_timesteps=original_timesteps)
File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\utils_model.py", line 79, in evolve_model_sampling
  return model_sampling(model_config, model_type)
File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\comfy\model_base.py", line 45, in model_sampling
  return ModelSampling(model_config)
File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\comfy\model_sampling.py", line 50, in __init__
  self._register_schedule(given_betas=None, beta_schedule=beta_schedule, timesteps=1000, linear_start=linear_start, linear_end=linear_end, cosine_s=8e-3)
File "D:\AI\ConfyUI-aki\ComfyUI-aki-v1.3\comfy\model_sampling.py", line 60, in _register_schedule
  alphas_cumprod = torch.cumprod(alphas, dim=0)
Error occurred when executing ADE_AnimateDiffLoaderGen1:

join() argument must be str, bytes, or os.PathLike object, not 'NoneType'

File "D:\AIGC-666\ConfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 151, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\AIGC-666\ConfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 81, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\AIGC-666\ConfyUI-aki\ComfyUI-aki-v1.3\execution.py", line 74, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\AIGC-666\ConfyUI-aki\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\nodes_gen1.py", line 50, in load_mm_and_inject_params
  motion_model = load_motion_module_gen2(model_name=model_name, motion_model_settings=ad_settings)
File "D:\AIGC-666\ConfyUI-aki\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 1229, in load_motion_module_gen2
  model_path = get_motion_model_path(model_name)
File "D:\AIGC-666\ConfyUI-aki\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\utils_model.py", line 346, in get_motion_model_path
  return folder_paths.get_full_path(Folders.ANIMATEDIFF_MODELS, model_name)
File "D:\AIGC-666\ConfyUI-aki\ComfyUI-aki-v1.3\folder_paths.py", line 179, in get_full_path
  filename = os.path.relpath(os.path.join("/", filename), "/")
File "D:\AIGC-666\ConfyUI-aki\ComfyUI-aki-v1.3\python\lib\ntpath.py", line 143, in join
  genericpath._check_arg_types('join', path, *paths)
File "D:\AIGC-666\ConfyUI-aki\ComfyUI-aki-v1.3\python\lib\genericpath.py", line 152, in _check_arg_types
  raise TypeError(f'{funcname}() argument must be str, bytes, or '
ComfyUI newbie asking for advice.
After fixing seven or eight errors I finally got the workflow to run, but the result isn't great: the final output looks very noisy. Does anyone know how to adjust it?
Compared with the OP's original workflow, I changed the second ControlNet's AIO preprocessor to DepthAnything (originally Zoe DepthAnything);
changed CFG from 2.0 to 1.0;
kept the sampler as dpmpp_sde;
kept the scheduler as karras.
I've been trying for a whole day and can't get close to the OP's results no matter how I tune the parameters. I now suspect the ControlNet models I'm using are the problem. Of the OP's three ControlNets, the last two use diffusers_xl_depth_full.safetensors and diffusers_xl_canny_full.safetensors; I used diffusion_pytorch_model.safetensors for both (I couldn't find those two ControlNets by searching, so I substituted the one I found under diffusers on Hugging Face).
Also, for the second ControlNet the OP's AIO uses Zoe_DepthAnythingPreprocessor, while I use DepthAnythingV2Preprocessor, because the former errors out directly (it seems the preprocessor still can't be found; in theory it should download automatically, but for some reason it doesn't).
Does anyone know where the problem is? I can provide my workflow and source material, or could some kind soul share the two ControlNets mentioned above 👆?
Finally solved it. The two ControlNets mentioned above are exactly the depth and canny models under the diffusers organization on Hugging Face; the OP must have renamed them (understandably, since both models share the same filename on Hugging Face). My results still aren't as good as the OP's, but they're acceptable now. (I still used DepthAnythingV2Preprocessor.)
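For anyone else hunting for the same files, a minimal sketch of fetching and renaming the two diffusers SDXL ControlNets mentioned above (the repo ids are assumed from this thread; verify them on Hugging Face first):

# Sketch: download the diffusers SDXL depth/canny ControlNets and rename them to the
# filenames used in this workflow. Both repos ship weights under the same generic name.
import shutil
from pathlib import Path
from huggingface_hub import hf_hub_download

CONTROLNET_DIR = Path("ComfyUI/models/controlnet")  # adjust to your install

RENAMES = {
    "diffusers/controlnet-depth-sdxl-1.0": "diffusers_xl_depth_full.safetensors",
    "diffusers/controlnet-canny-sdxl-1.0": "diffusers_xl_canny_full.safetensors",
}

for repo_id, local_name in RENAMES.items():
    cached = hf_hub_download(repo_id=repo_id, filename="diffusion_pytorch_model.safetensors")
    CONTROLNET_DIR.mkdir(parents=True, exist_ok=True)
    shutil.copy(cached, CONTROLNET_DIR / local_name)
    print(f"-> {CONTROLNET_DIR / local_name}")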
I used these two ControlNets, but there's still a lot of noise and the output isn't clear at all.
Are all your preprocessor settings correct?
Though I managed to run the workflow, it gets stuck when it reaches the KSampler and won't progress further. It says it's at 85% and freezes after "loading 5 models". Any ideas on how to tackle this?
Where does the PLUS (high strength) model for the IPAdapter Unified Loader come from??? I've been looking for it the entire afternoon and still can't find it...
Running it online via Discord; I've already replaced the video upload, so why does it still throw an error?
Error occurred when executing VAELoader:

'NoneType' object has no attribute 'lower'

File "/ComfyUI/execution.py", line 151, in recursive_execute
  output_data, output_ui = get_output_data(obj, input_data_all)
File "/ComfyUI/execution.py", line 81, in get_output_data
  return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/ComfyUI/execution.py", line 74, in map_node_over_list
  results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/ComfyUI/nodes.py", line 689, in load_vae
  sd = comfy.utils.load_torch_file(vae_path)
File "/ComfyUI/comfy/utils.py", line 13, in load_torch_file
  if ckpt.lower().endswith(".safetensors"):
🚀 Now you can edit & run it online (fast & error-free):
https://www.runninghub.ai/post/1866056401050357762/?utm_source=openart
https://www.runninghub.cn/post/1866016887959142402/?utm_source=openart
RunningHub – a highly reliable cloud-based ComfyUI: edit and run workflows online, no local installation required. Powered by RTX 4090 GPUs for faster performance and fully error-free node operation.
-------------------------------------
👋🏻 Hello Simon Lee,
I’m Spike from RunningHub. I hope this message finds you well! I wanted to kindly inform you that we’ve copied your workflow to RunningHub (with the original author clearly credited and a link to your OpenArt profile). The purpose is to allow more users who love your work to easily experience it online.
We’d love to hear from you at: spike@runninghub.ai
🎉 If you’re open to joining us, we can transfer the workflows we’ve uploaded to your RunningHub account and provide you with additional benefits as a token of our appreciation.
😞 However, if you’re not comfortable with this approach, we will promptly remove all the uploaded content.
Looking forward to your response!
Hey! I love this flow! Great work and thanks for sharing. How did you achieve such a noise-free result? I got it all working, but it feels noisy, not as clean as your examples.
Node Details
Primitive Nodes (1)
Anything Everywhere3 (1)
Custom Nodes (36)
- ADE_AnimateDiffLoaderGen1 (1)
- ADE_StandardUniformContextOptions (1)
ComfyUI
- ControlNetApplyAdvanced (3)
- VAELoader (1)
- CLIPTextEncode (2)
- VAEEncode (2)
- KSampler (2)
- VAEDecode (2)
- LoraLoader (1)
- UpscaleModelLoader (1)
- CheckpointLoaderSimple (1)
- ImageScaleBy (1)
- ImageUpscaleWithModel (1)
- ImageScale (2)
- LoadImage (1)
- AIO_Preprocessor (3)
- IPAdapterUnifiedLoader (1)
- IPAdapterAdvanced (1)
- ControlNetLoaderAdvanced (3)
- VHS_VideoCombine (4)
- VHS_LoadVideoPath (1)
- ColorMatch (1)
Model Details
Checkpoints (1)
dreamshaperXL_v21TurboDPMSDE.safetensors
LoRAs (1)
OIL_ON_CANVAS_v3.safetensors