AnimateDiff Flicker-Free Animation Video Workflow

Description

Thank you to the supporters who have joined my Patreon. However, some people try to game the system by subscribing and cancelling on the same day, and that causes Patreon's fraud detection system to mark the action as suspicious activity and block it automatically. If you try something shady on a system, don't come here to blame me or leave a bad-mouthing comment about it. I am just trying to focus on making workflows, improving things, and publishing them publicly to share with like-minded people. I don't have time to watch every person's activity on my Patreon.

What this workflow does

Just one workflow! Just one! And you can create amazing animations!

👉 Create amazing animations with a vid2vid method that generates a uniquely styled version of the source action video. I leverage the LCM LoRA to speed up the image-frame generation process, and on the other hand I use the IP-Adapter to reinforce the style of each frame; it is strong backup support for LCM's low sample steps.
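The workflow itself is ComfyUI node-based, but for readers who want to see the same LCM-LoRA + IP-Adapter idea in code, here is a minimal hedged sketch using the Hugging Face diffusers library (not the author's workflow; the model ID, adapter weights, step count, and guidance value are assumptions you can swap for your own):

```python
# Hedged sketch, not the workflow itself: LCM LoRA for fast sampling plus an
# IP-Adapter style reference, expressed with diffusers. Model/adapter names
# below are assumptions; any SD1.5 checkpoint in diffusers format works.
import torch
from diffusers import StableDiffusionPipeline, LCMScheduler
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V6.0_B1_noVAE",  # placeholder SD1.5 checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# LCM LoRA: lets us sample in ~4-8 steps instead of 20-30.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

# IP-Adapter: a style reference image props up quality at the low step count.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.8)

style_ref = load_image("style_reference.png")  # placeholder path
frame = pipe(
    "a dancer in a neon-lit street, anime style",
    ip_adapter_image=style_ref,
    num_inference_steps=6,
    guidance_scale=1.5,  # LCM works best with a low CFG value
).images[0]
frame.save("frame_0001.png")
```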

How to use this workflow

👉 Use AnimateDiff as the core for creating smooth, flicker-free animation. Simply load a source video, then write a travel prompt to style the animation (an example is sketched below). You can also use IPAdapter to re-skin parts of the video, such as the character, objects, or background.
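For reference, the travel prompt entered into the BatchPromptSchedule node is a frame-indexed schedule. The snippet below is a hedged illustration only: the frame numbers and prompt text are placeholders, and the exact field format should be checked against the FizzNodes documentation.

```python
# Hedged illustration of a "travel prompt" as pasted into the BatchPromptSchedule
# text field. Keys are frame indices; prompts and frame numbers are placeholders.
travel_prompt = '''
"0"  : "a woman dancing in a spring garden, cherry blossoms, soft light",
"48" : "a woman dancing in heavy rain at night, neon reflections",
"96" : "a woman dancing in a snowstorm, cold blue light"
'''
# AnimateDiff blends from one keyframe prompt to the next as the frame index advances.
```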


Tips about this workflow

👉 Workflow Version 10 Walkthrough : https://youtu.be/Sg3KgA3_fPU?si=YFykW0Dkpjz1D2Ii

👉 Workflow Version 8.5 Walkthrough :  https://youtu.be/j4BEWNvrYio

👉 Workflow Version 7 Walkthrough : https://youtu.be/md6YzGX741c

👉 Workflow Version 6 Walkthrough :  https://youtu.be/g9QYXvVxkkM

👉 Full tutorial on using this workflow: https://youtu.be/wFahkr-b7HI

👉 Tutorial Using LCM Checkpoint Mode : https://www.youtube.com/watch?v=AoUlADxSDAg


🎥 Video demo links

👉 V.10 Demo https://www.youtube.com/shorts/M6vcwuBn14s

👉 V.8.5 Demo  https://youtu.be/j4BEWNvrYio

👉 V.7 Demo (With Image Masks Enable for Background)  https://www.youtube.com/shorts/3V7bmNRwM2o

👉 V.6 Demo (With Mask Background) : https://www.youtube.com/shorts/EO2OQBugHU0

👉 Version 6 Demo : https://youtube.com/shorts/uBUaDZopjuw?feature=share

👉 Demo 1 : https://youtube.com/shorts/FJiP8qEzb4Q?feature=share

👉 Demo 2 : https://youtube.com/shorts/Pa0Fd7ezb5I?feature=share

👉 Demo 3 : https://youtube.com/shorts/LUn2LG0LW38?feature=share

👉 Demo 4 (Using Dreamshaper 8 LCM) : https://www.youtube.com/shorts/w0xROu1TqfM

👉 Demo 5 (AnimateDiff Evolution From Normal To Enhance Detail) : https://www.youtube.com/shorts/WINY9ODRhm8


Updates:

Version: 2024-01-18 (Version 10)

- Added GET/SET nodes to make the diagram cleaner.

- Fixed the Version 7 SEG group for background removal.

- Two IPAdapter groups: one for a single image, another for multi-image IPA.


Version: 2024-01-10

Explainer video: https://youtu.be/j4BEWNvrYio

- Added the IPAdapter Face Plus V2 feature.

- Three ControlNet groups; you can optionally run only two ControlNets, based on your needs.

- Minor fixes on nodes.


Version: 2023-12-24

- Cleaned up bypassed nodes and removed some nodes that are no longer needed.

- The Image Mask group for the video background has been removed from this workflow and will be built into a separate new AnimateDiff workflow. After testing, source videos with a solid background color give better results.


Version: 2023-12-19 (V.7)

- Added an Image Mask group to detect the character and remove the background before the LineArt ControlNet.

This group is suitable for character-focused animations; use IPAdapter to stylize the animation background.
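The workflow performs this step with its SEGS/SAM node chain. Purely as an illustration of the preprocessing idea (not the author's node chain), the same "character on a clean background" preparation could be sketched per frame with the rembg library; the library choice and directory names are assumptions.

```python
from pathlib import Path

from PIL import Image
from rembg import remove

# Illustration only: the workflow uses SEGS/SAM nodes for this step.
# Directory names are placeholders.
out_dir = Path("frames_no_bg")
out_dir.mkdir(exist_ok=True)

for frame_path in sorted(Path("frames").glob("*.png")):
    cutout = remove(Image.open(frame_path))          # RGBA, transparent background
    solid = Image.new("RGB", cutout.size, "black")   # solid backdrop performs best
    solid.paste(cutout, mask=cutout.split()[-1])     # composite using the alpha mask
    solid.save(out_dir / frame_path.name)            # feed these to LineArt ControlNet
```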


Version: 2023-12-14

- Fixed slow loading time on the OpenPose ControlNet.

This part now uses the DWPose preprocessor instead of the OpenPose preprocessor, because DWPose loads image frames much faster than OpenPose (I have tested with a 700-frame video).

Second, DWPose detects more detail on a character's hands and face.


Version : 2023-12-13 (2)

- Fixed the Detailer settings for LCM. Please be aware that the sampling method MUST be the same in the AnimateDiff KSampler and in "Detailer For AnimateDiff (SEGS/pipe)". If you use a different sampling method, you must set both to the same sampler name, seed number, and CFG (see the sketch after this list).

- Fixed the Faceswap nodes group after the Detailer. You can turn off the ReActor Faceswap node, or just bypass the group, if you do not need the face-swap feature.
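To make the "same sampler, seed, and CFG in both places" rule harder to forget, here is a small hedged helper that edits a workflow exported via ComfyUI's "Save (API Format)". The file name and the detailer's input key names are assumptions, hence the guards.

```python
import json

# Hedged helper, not part of the workflow: keep the Detailer's sampler settings
# in lockstep with the AnimateDiff KSampler. Assumes the graph was exported via
# "Save (API Format)"; picks the first node of each class it finds.
with open("animatediff_vid2vid_api.json") as f:
    graph = json.load(f)

ksampler = next(n for n in graph.values() if n["class_type"] == "KSampler")
detailer = next(n for n in graph.values()
                if n["class_type"] == "SEGSDetailerForAnimateDiff")

for key in ("sampler_name", "scheduler", "seed", "cfg"):
    value = ksampler["inputs"].get(key)
    # Only copy literal widget values, not node links (which are [id, slot] lists).
    if value is not None and not isinstance(value, list) and key in detailer["inputs"]:
        detailer["inputs"][key] = value

with open("animatediff_vid2vid_api_synced.json", "w") as f:
    json.dump(graph, f, indent=2)
```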


Version : 2023-12-13

- Added a Detailer to enhance image-frame quality.

- Added three Video Combine nodes for result comparison.


Node Diagram
Discussion
Paul Chen (a year ago)

There is a warning during the process, and in the end there is no output!

[AnimateDiffEvo] - WARNING - This warning can be ignored, you should not be using the deprecated AnimateDiff Combine node anyway. If you are, use Video Combine from ComfyUI-VideoHelperSuite instead. ffmpeg could not be found. Outputs that require it have been disabled

Benji (a year ago)

Then it's very clear you have not installed ffmpeg, and you are trying to save in MP4 format? :D
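For anyone hitting the same message: Video Combine (VideoHelperSuite) shells out to the ffmpeg binary for MP4/webm output, so a quick hedged check from Python is simply whether ffmpeg is on PATH at all.

```python
import shutil

# Quick sanity check for the warning above: Video Combine needs the ffmpeg
# binary on PATH to write MP4/webm. If this prints the fallback message,
# install ffmpeg and restart ComfyUI.
print(shutil.which("ffmpeg") or "ffmpeg not found on PATH")
```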


This looks very promising!

I am crawling my way slowly forward, like the Terminator at the end of the movie. 🤗 A lot of the models used by this workflow are missing. Would you mind including some installation instructions? For each model I am checking whether it exists in the ComfyUI installer, on HuggingFace, on Civitai... I guess I am not the only one who has to do that, so some instructions would help the community immensely!

I've been working a lot on this, especially to be able to increase the length of the video (targeting a 1024*1024 3min video atm).


I had a lot of RAM issues loading a huge number of frames with the Load Video node. Instead, I'm using Load Images from Advanced-ControlNet (marked deprecated atm) and feeding it the image sequence instead of the video.


Both RAM and VRAM consumption are down: around 1.5 GB less VRAM (16.5 down to 15) and 34-37 GB of RAM for 1024 frames, instead of saturating 64 GB plus 60 GB of swap, haha.


I have yet to try Load Image List From Dir (Inspire), since it's not deprecated. It's also useful for the detailer; I've split the process in two, so I can more easily try to find better detailer settings.

(repost from youtube if it helps anyone)
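A hedged sketch of the pre-extraction step described above, assuming ffmpeg is installed; the input path, fps, and frame-name pattern are placeholders.

```python
import subprocess
from pathlib import Path

# Turn the source clip into an image sequence so a directory-based loader can
# be used instead of decoding the whole video in RAM. Paths/fps are placeholders.
src = "dance_source.mp4"
out_dir = Path("frames")
out_dir.mkdir(exist_ok=True)

subprocess.run(
    ["ffmpeg", "-i", src, "-vf", "fps=24", str(out_dir / "frame_%05d.png")],
    check=True,
)
```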

Thomas Miller (a year ago)

I know you recently updated the workflow to tackle masking the background, but check this out: https://openart.ai/workflows/toad_shrill_9/openpose-to-region-map/OjiCSv3Bq2CEHB9Cwomu

Could you maybe make a workflow that uses multiple IPAdapter inputs, one with the mask of the person and the other with the mask for the background?

Benji (a year ago)

Yes, it's possible to use a regional mask + IPAdapter together in this workflow. I will try it in the next update. Thanks, Thomas :)

Benji (a year ago)

But one problem I am thinking of: what if the source video has a lot of people? In that case, everyone is masked in red and fed to the IPAdapter, so everyone will end up with the same style of outfit. :D The generated result is going to look like soldiers in uniform.

Thomas Miller (a year ago)

Yes, this is true. I'm primarily looking to use this in videos where there aren't other people in shot. However, I think the mask uses deep red for the primary subject and a less deep red for secondary subjects, so it may be possible to work around the potential problem you're talking about.

I'm working with your latest workflow now and seeing if I can understand it well enough to duplicate the mask section, so that I can mask a subject/person and the background and use two IPAdapters for them respectively. Appreciate the feedback and continued support.

Thomas Miller (a year ago)

How can we pick out different colours to select in the mask?

stevanisya (a year ago)

Hello, I really like your work and have been working on your tutorials for the last few days.

Just a small question: do you have a drive with all of the .pth and .ckpt files that you use for this workflow?

Thank you! Keep up the good work!


Hello, these workflows are really great, thank you. Where do you find the source videos? My tests with less clean TikTok videos give very poor results compared to yours. I am struggling to find source material as good as what I am seeing in your tutorials. Many thanks.

Benji (a year ago)

Yup, a solid background performs better. Just like how CG movies are made.


Is there a separate video removal node?

Mario (a year ago)

This is a bad person. I signed up for his Patreon, he offers a service, and when I asked him for help, he blocked me and did not provide the service. Be careful, he deceives.

Benji (a year ago)

Yes, that is my attitude. It depends on who is asking the questions.

I have already made tutorials on what needs to be downloaded and how to install it.

And I kick out f__kers who are lazy, won't learn or watch the tutorials, and just want a quick way to run sh_t.

I don't need supporters like this.

If you think joining my Patreon lets you act like a boss and ask stupid questions, then go F yourself.

Benji (a year ago)

Also, I am here to make workflows, not mainly to focus on being technical support,

answering basic questions like where to download this or how to install that.

If you have a problem with a custom node, ask its developer on GitHub.

How the F would I know that custom node? I am not even its developer.

Jean (9 months ago)

Hey, Mario. I'm sorry to hear that, but you shouldn't expect a western manner from a communist citizen.  

Hello Benji, Your Stable Diffusion Animation Create Consistent Character Youtube Dance Video (Tutorial Guide) video shows me that you have a very good understanding of the generative AI technical process. You understand the white papers. How did you learn this? Have you studied computer science? Your videos just get better and better. Many thanks

Benji (a year ago)

Yes, a CompSci background, then a transition into digital marketing. But I still have the foundations and concepts; although programming languages and syntax differ, the concepts are the same. So I can study it myself and pick things up after I test.

sdfromoc (a year ago)

I am getting this error on the latest workflow (V10, I believe, which is the one downloadable on this page):

"When loading the graph, the following node types were not found:

SetNode
GetNode

Nodes that have failed to load will show as red on the graph."

When I go into the ComfyUI Manager and click on "Install missing nodes", it is blank and does not show any missing nodes. I searched the internet and GitHub for these nodes but came up with nothing.

Anyone have any idea what is missing?

leneth lv (a year ago)

Paid remote installation available; add Vailfreedom.

I'm sorry, this may be a stupid question, but I'm new to this. How do I install this JSON file into my ComfyUI system files? Where do I put it?

Error occurred when executing SEGSDetailerForAnimateDiff:

type object 'VAEEncode' has no attribute 'vae_encode_crop_pixels'

Benji (a year ago)

Then connect your VAE! :)

Hao Lee (a year ago)

Hi, I hit the same issue. You need to update the Impact Pack.

Rajvir (3 months ago)

got prompt

Failed to validate prompt for output 54:

* DWPreprocessor 97:

 - Value not in list: bbox_detector: 'yolox_m.onnx' not in ['yolox_l.torchscript.pt', 'yolox_l.onnx', 'yolo_nas_l_fp16.onnx', 'yolo_nas_m_fp16.onnx', 'yolo_nas_s_fp16.onnx']

Output will be ignored

Failed to validate prompt for output 92:

Output will be ignored

Failed to validate prompt for output 90:

* ReActorFaceSwap 73:

 - Value not in list: face_restore_model: 'codeformer.pth' not in ['none', 'codeformer-v0.1.0.pth', 'GFPGANv1.3.pth', 'GFPGANv1.4.pth', 'GPEN-BFR-1024.onnx', 'GPEN-BFR-2048.onnx', 'GPEN-BFR-512.onnx']

Output will be ignored

Failed to validate prompt for output 156:

Output will be ignored

Prompt executed in 0.02 seconds

I got this error, please help.


Reviews

yueban5716 (a year ago)

skettalee337 (a year ago)

Error: Set node input undefined. Most likely you're missing custom nodes; the following node types were not found: InsightFaceLoader, PrepImageForInsightFace, IPAdapterApplyEncoded, IPAdapterApply, IPAdapterApplyFaceID. Nodes that have failed to load will show as red on the graph. They are also not found in "Install Missing Nodes".

.parkerhill (a year ago)

IDK why people are bad-mouthing Benji about being blocked. I'm a $30/month Patron subscriber, and he delivers excellent workflows and tips. Very quick to reply for help if you ask him.

Yoyo (a year ago)

Interesting workflow. I will try both Rave and this one :)

Darren Lithgo (a year ago)

Like a few others on here, I signed up and paid for a month's access and was blocked after just a day. I paid for a month's access and got one day of access. Be aware!

This new animation looks great, and I got a similar result. Thanks.

Mario (a year ago)

This is a bad person. I signed up for his Patreon, he offers a service, and when I asked him for help, he blocked me and did not provide the service. Be careful, he deceives.

SirJ (a year ago)

What is that and how do I fix it? Error occurred when executing CLIPSetLastLayer: 'NoneType' object has no attribute 'clone' File "E:\ComfyUI_windows_portable\ComfyUI\execution.py", line 154, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Really interesting workflow to work on :)

Coco (a year ago)

Truly impressive!

Versions (10)

  • - latest (a year ago)

  • - v20240109-164712

  • - v20231224-205959

  • - v20231218-215410

  • - v20231213-192653

  • - v20231213-160434

  • - v20231213-092524

  • - v20231213-090558

  • - v20231209-135038

  • - v20231206-185736

Primitive Nodes (50)

GetNode (21)

IPAdapterApply (1)

IPAdapterApplyEncoded (1)

IPAdapterApplyFaceID (1)

InsightFaceLoader (1)

Integer (3)

Note (1)

PrepImageForInsightFace (1)

Reroute (2)

SetNode (18)

Custom Nodes (64)

  • - ADE_AnimateDiffUniformContextOptions (1)

  • - CheckpointLoaderSimpleWithNoiseSelect (1)

  • - ADE_AnimateDiffLoaderWithContext (1)

  • - CR Seed (1)

ComfyUI

  • - ImageScale (1)

  • - EmptyImage (1)

  • - LoraLoader (1)

  • - ModelSamplingDiscrete (1)

  • - PreviewImage (5)

  • - GrowMask (1)

  • - ImageCompositeMasked (1)

  • - ControlNetApplyAdvanced (3)

  • - LoadImage (6)

  • - CLIPVisionLoader (3)

  • - LoraLoaderModelOnly (1)

  • - VAELoader (1)

  • - CLIPSetLastLayer (1)

  • - CLIPTextEncode (1)

  • - VAEEncode (1)

  • - VAEDecode (1)

  • - ImageToMask (1)

  • - KSampler (1)

  • - MaskBlur+ (1)

  • - MaskFromColor+ (1)

  • - ImageCASharpening+ (1)

  • - SAMLoader (1)

  • - SEGSPaste (1)

  • - ToBasicPipe (1)

  • - UltralyticsDetectorProvider (1)

  • - ImpactSimpleDetectorSEGS_for_AD (1)

  • - SEGSDetailerForAnimateDiff (1)

  • - OneFormer-COCO-SemSegPreprocessor (1)

  • - DWPreprocessor (1)

  • - CannyEdgePreprocessor (1)

  • - LineArtPreprocessor (1)

  • - IPAdapterModelLoader (3)

  • - PrepImageForClipVision (1)

  • - IPAdapterEncoder (1)

  • - ControlNetLoaderAdvanced (3)

  • - VHS_LoadVideo (1)

  • - VHS_VideoCombine (3)

  • - BatchPromptSchedule (1)

  • - ReActorFaceSwap (1)

  • - Image Resize (1)

Checkpoints (1)

SD1_5\realisticVisionV60B1_v60B1VAE.safetensors

LoRAs (2)

SD1-5\lcm-lora-sdv1-5_lora_weights.safetensors

ip-adapter-faceid-plusv2_sd15_lora.safetensors