AnimateDiff Flicker-Free Animation Video Workflow
Description
***Thank you to the supporters who have joined my Patreon. But some people try to game the system by subscribing and cancelling on the same day, which causes Patreon's fraud detection system to flag the action as suspicious activity and block it automatically. If you try something shady on a system, don't come here to blame me or leave bad-mouthing comments about it. I am just trying to focus on making workflows, improving things, and publishing them publicly to share with like-minded people. I have no time to watch every person's activity on my Patreon.
What this workflow does
Just 1 workflow! Just 1! And you are able to create amazing animations!
👉 Create amazing animation with the vid2vid method to generate a uniquely styled version of an existing action video. I leverage the LCM LoRA to speed up the image-frame generation process, and at the same time I use the IPAdapter to reinforce the style of each frame; it is strong backup support for LCM's low sample steps.
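For intuition, the LCM-LoRA speed-up looks like this outside ComfyUI, using the diffusers library (a minimal sketch; the base model ID is a stand-in, not the exact checkpoint shipped with this workflow):

import torch
from diffusers import StableDiffusionPipeline, LCMScheduler

# Load an SD1.5 base model (a stand-in for the workflow's checkpoint).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Swap in the LCM scheduler and attach the LCM LoRA weights.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

# LCM allows very few sampling steps at a low guidance scale.
image = pipe(
    "1girl dancing, anime style, detailed background",
    num_inference_steps=6,
    guidance_scale=1.5,
).images[0]
image.save("frame.png")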
How to use this workflow
👉 Use AnimateDiff as the core for creating smooth, flicker-free animation. Simply load a source video and create a travel prompt to style the animation; you can also use IPAdapter to skin the video's style, such as the character, objects, or background.
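For reference, a travel prompt for the BatchPromptSchedule node used in this workflow is a list of frame-keyed prompts that are blended between keyframes; a minimal sketch (frame numbers and prompts are made up):

"0" :"1girl, dancing on a sunny beach",
"48" :"1girl, dancing in a neon-lit city at night",
"96" :"1girl, dancing in a snowy forest"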
Tips about this workflow
👉 Workflow Version 10 Walkthrough : https://youtu.be/Sg3KgA3_fPU?si=YFykW0Dkpjz1D2Ii
👉 Workflow Version 8.5 Walkthrough : https://youtu.be/j4BEWNvrYio
👉 Workflow Version 7 Walkthrough : https://youtu.be/md6YzGX741c
👉 Workflow Version 6 Walkthrough : https://youtu.be/g9QYXvVxkkM
👉 Full Tutorial of using this workflow here : https://youtu.be/wFahkr-b7HI
👉 Tutorial Using LCM Checkpoint Mode : https://www.youtube.com/watch?v=AoUlADxSDAg
🎥 Video demo links
👉 V.10 Demo: https://www.youtube.com/shorts/M6vcwuBn14s
👉 V.8.5 Demo: https://youtu.be/j4BEWNvrYio
👉 V.7 Demo (With Image Masks Enabled for Background): https://www.youtube.com/shorts/3V7bmNRwM2o
👉 V.6 Demo (With Mask Background): https://www.youtube.com/shorts/EO2OQBugHU0
👉 Version 6 Demo: https://youtube.com/shorts/uBUaDZopjuw?feature=share
👉 Demo 1: https://youtube.com/shorts/FJiP8qEzb4Q?feature=share
👉 Demo 2: https://youtube.com/shorts/Pa0Fd7ezb5I?feature=share
👉 Demo 3: https://youtube.com/shorts/LUn2LG0LW38?feature=share
👉 Demo 4 (Using Dreamshaper 8 LCM): https://www.youtube.com/shorts/w0xROu1TqfM
👉 Demo 5 (AnimateDiff Evolution From Normal To Enhanced Detail): https://www.youtube.com/shorts/WINY9ODRhm8
Updates:
Version: 2024-01-18 (Version 10)
- Added GET/SET nodes, making the diagram cleaner.
- Fixed the Version 7 SEG group for background removal.
- 2 IPAdapter groups: one for a single image, another for multi-image IPA.
Version: 2024-01-10
Explainer video: https://youtu.be/j4BEWNvrYio
- Added the IPAdapter FaceID Plus V2 feature.
- 3 groups of ControlNet; you can optionally use just 2 ControlNets based on your needs.
- Minor fixes on nodes.
Version: 2023-12-24
- A little cleanup of bypass nodes; removed some nodes that are no longer needed.
- The Image Mask group for the video background has been removed from this workflow; it will be recreated in a separate new AnimateDiff workflow. After testing, results are better with a solid-background-color source video.
Version: 2023-12-19 (V.7)
- Added an Image Mask group to detect the character and remove the background before the LineArt ControlNet.
This group is suitable for character-focused animation, using IPAdapter to stylize the animation background.
Version: 2023-12-14
- Fixed slow loading time on the OpenPose ControlNet.
This part now uses the DW Pose preprocessor instead of the OpenPose preprocessor, because DWPose loads image frames much faster than OpenPose (I have tested with a 700-frame video).
Second, DWPose can detect more detail on a character's hands and face.
Version : 2023-12-13 (2)
- Fixed the Detailer settings for LCM. Please be aware the sampling method MUST be the same in the AnimateDiff KSampler and the "Detailer For AnimateDiff (SEGS/pipe)" node. If you use another sampling method, you must set both to the same sampler name, the same seed number, and the same CFG (see the sketch after this entry).
- Fixed the Faceswap node group after the Detailer; you can turn off the ReActor Faceswap node or just bypass the group if you do not need the face-swap feature.
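To make that constraint concrete, these are the three values that must be entered identically in both nodes; a minimal Python sketch with hypothetical values:

# Hypothetical values: enter the SAME sampler name, seed, and CFG in both
# the AnimateDiff KSampler and the Detailer For AnimateDiff (SEGS/pipe) node.
shared_sampling = {
    "sampler_name": "lcm",   # must match in both nodes
    "seed": 123456789,       # must match in both nodes
    "cfg": 1.5,              # must match in both nodes
}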
Version : 2023-12-13
- Added a Detailer to enhance image-frame quality.
- Added 3 Video Combine nodes for result comparison.
Node Diagram
Discussion
There is a warning in the process, and in the end no output!
[AnimateDiffEvo] - WARNING - This warning can be ignored, you should not be using the deprecated AnimateDiff Combine node anyway. If you are, use Video Combine from ComfyUI-VideoHelperSuite instead. ffmpeg could not be found. Outputs that require it have been disabled
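The last line of that message is the real problem: the Video Combine node needs ffmpeg on the system PATH to write video files. A quick way to check whether ComfyUI's Python can see it (a minimal sketch):

import shutil

# Prints the ffmpeg executable path if found, otherwise None;
# if None, install ffmpeg, add it to PATH, and restart ComfyUI.
print(shutil.which("ffmpeg"))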
This looks very promising!
I am crawling my way slowly forward, like the Terminator at the end of the movie. 🤗 A lot of the models used by this workflow are missing. Would you mind including some installation instructions? For each model I am checking whether it exists in the ComfyUI installer, on HuggingFace, on Civitai... I guess I am not the only one who has to do that, so some instructions would help the community immensely!
I've been working a lot on this, especially to increase the length of the video (targeting a 1024*1024, 3-minute video at the moment).
I had a lot of RAM issues loading a huge number of frames with the Load Video node. Instead I'm using Load Images from Advanced-ControlNet (marked deprecated at the moment) and feeding it an image sequence instead of the video.
Both RAM and VRAM consumption are down: around 1.5 GB less VRAM (16.5 down to 15) and 34-37 GB of RAM for 1024 frames, instead of saturating 64 GB plus 60 GB of swap, haha.
I have yet to try Load Image List From Dir (Inspire), since it's not deprecated. It's also useful for the detailer; I've split the process in two, so I can more easily try to find better detailer settings.
(repost from YouTube if it helps anyone)
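If anyone wants to try the image-sequence route from the comment above, here is a minimal OpenCV sketch for dumping a video into a PNG sequence (file and folder names are assumptions):

import os
import cv2

# Write each frame of the source video as a zero-padded PNG.
os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("input.mp4")
count = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of the video
    cv2.imwrite(f"frames/{count:05d}.png", frame)
    count += 1
cap.release()
print(f"wrote {count} frames to ./frames")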
I know you recently updated the workflow to tackle masking the background. But check this out: https://openart.ai/workflows/toad_shrill_9/openpose-to-region-map/OjiCSv3Bq2CEHB9Cwomu
Could you maybe make a workflow that uses multiple IPAdapter inputs, one with the mask of the person and the other with the mask for the background?
But one problem I am thinking of is: what if the source video has a lot of people? In that case, everyone masked in red goes to the IPAdapter, and everyone will have the same style of outfit. :D The generated result is going to look like soldiers in uniform.
Yes, this is true. I'm primarily looking to use this in videos where there aren't other people in the shot; however, I think the mask uses deep red for the primary subject and a less deep red for secondary subjects, so it may be possible to work around the potential problem you're talking about.
I'm working with your latest workflow now and seeing if I can understand it enough to duplicate the mask section, so that I can mask a subject/person and the background and use 2 IPAdapters for them respectively. Appreciate the feedback and continued support, etc.
How can we pick out different colours to select in the mask?
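Conceptually, picking a colour out of a segmentation map is a per-pixel comparison against the target colour; this is the idea behind the MaskFromColor+ node used in this workflow. A minimal numpy sketch (not the node's actual code; the file name and colour are assumptions):

import numpy as np
from PIL import Image

# Compare every pixel of a colour segmentation map to a target colour;
# the tolerance absorbs compression artifacts around region edges.
seg = np.array(Image.open("segmap.png").convert("RGB")).astype(np.int16)
target = np.array([128, 0, 0])  # e.g. the "deep red" primary subject
tolerance = 30
mask = (np.abs(seg - target).max(axis=-1) <= tolerance).astype(np.uint8) * 255
Image.fromarray(mask, mode="L").save("mask.png")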
Hello, I really like your work and have been working on your tutorials for the last few days.
Just a small question: do you have a drive with all of the .pth and .ckpt files that you use for this workflow?
Thank you! Keep up the good work!
Hello, these workflows are really great. Thank you. Where do you find your source videos? My tests with less clean TikTok videos make for very poor results in comparison to yours. I am struggling to find source material as good as what I see in your tutorials. Many thanks.
I found these if they help anyone with their tests; here are videos similar to Benji's:
https://www.youtube.com/watch?v=89nTun64v2A
https://www.youtube.com/watch?v=Q2VNm87e4Hg
https://www.youtube.com/channel/UClfV5dOMn4GIKg0sA6AbNXg
Is there a separate video removal node?
This is a bad person. I signed up for his Patreon; he offers a service, and when I asked him for help, he blocked me and did not provide the service. Be careful, he deceives.
Yes, that is my attitude. It depends on who is asking the questions.
I have already made tutorials on what to download and how to install it.
And I kick out f__kers who are too lazy to learn or watch the tutorials and just want a quick way to run sh_t.
I don't need supporters like this.
If you think joining my Patreon means you can act like a boss asking stupid questions, then go F yourself.
Also, I am here to make workflows, not mainly to be technical support,
answering basic questions like where to download this or how to install that.
If you have a problem with a custom node, ask its developer on GitHub.
How the F would I know the custom node? I am not even its developer.
Hello Benji, your "Stable Diffusion Animation Create Consistent Character Youtube Dance Video (Tutorial Guide)" video shows me that you have a very good understanding of the generative AI technical process. You understand the white papers. How did you learn this? Have you studied computer science? Your videos just get better and better. Many thanks.
I am getting this error on the latest workflow (V10, I believe, which is the one downloadable on this page):
"When loading the graph, the following node types were not found:
SetNode
GetNode
Nodes that have failed to load will show as red on the graph."
When I go into the ComfyUI Manager and click on "Install missing nodes", the list is blank and does not show any missing nodes. I searched the internet and GitHub for these nodes but came up with nothing.
Does anyone have any idea what is missing?
SetNode/GetNode are from here: https://github.com/kijai/ComfyUI-KJNodes
If you are interested in a newer way to make a consistent style, you can try this workflow: https://openart.ai/workflows/futurebenji/rave-animatediff-animation---text-prompt-consistency-styling-for-characters-and-background/AaH6b9J8oDPHmYenNJtS
I'm sorry, this may be a stupid question, but I'm new to this. How do I install this JSON file into my ComfyUI system files? Where do I put it?
got prompt
Failed to validate prompt for output 54:
* DWPreprocessor 97:
- Value not in list: bbox_detector: 'yolox_m.onnx' not in ['yolox_l.torchscript.pt', 'yolox_l.onnx', 'yolo_nas_l_fp16.onnx', 'yolo_nas_m_fp16.onnx', 'yolo_nas_s_fp16.onnx']
Output will be ignored
Failed to validate prompt for output 92:
Output will be ignored
Failed to validate prompt for output 90:
* ReActorFaceSwap 73:
- Value not in list: face_restore_model: 'codeformer.pth' not in ['none', 'codeformer-v0.1.0.pth', 'GFPGANv1.3.pth', 'GFPGANv1.4.pth', 'GPEN-BFR-1024.onnx', 'GPEN-BFR-2048.onnx', 'GPEN-BFR-512.onnx']
Output will be ignored
Failed to validate prompt for output 156:
Output will be ignored
Prompt executed in 0.02 seconds
I got this error, please help.
Node Details
Primitive Nodes (50)
GetNode (21)
IPAdapterApply (1)
IPAdapterApplyEncoded (1)
IPAdapterApplyFaceID (1)
InsightFaceLoader (1)
Integer (3)
Note (1)
PrepImageForInsightFace (1)
Reroute (2)
SetNode (18)
Custom Nodes (64)
- ADE_AnimateDiffUniformContextOptions (1)
- CheckpointLoaderSimpleWithNoiseSelect (1)
- ADE_AnimateDiffLoaderWithContext (1)
- CR Seed (1)
ComfyUI
- ImageScale (1)
- EmptyImage (1)
- LoraLoader (1)
- ModelSamplingDiscrete (1)
- PreviewImage (5)
- GrowMask (1)
- ImageCompositeMasked (1)
- ControlNetApplyAdvanced (3)
- LoadImage (6)
- CLIPVisionLoader (3)
- LoraLoaderModelOnly (1)
- VAELoader (1)
- CLIPSetLastLayer (1)
- CLIPTextEncode (1)
- VAEEncode (1)
- VAEDecode (1)
- ImageToMask (1)
- KSampler (1)
- MaskBlur+ (1)
- MaskFromColor+ (1)
- ImageCASharpening+ (1)
- SAMLoader (1)
- SEGSPaste (1)
- ToBasicPipe (1)
- UltralyticsDetectorProvider (1)
- ImpactSimpleDetectorSEGS_for_AD (1)
- SEGSDetailerForAnimateDiff (1)
- OneFormer-COCO-SemSegPreprocessor (1)
- DWPreprocessor (1)
- CannyEdgePreprocessor (1)
- LineArtPreprocessor (1)
- IPAdapterModelLoader (3)
- PrepImageForClipVision (1)
- IPAdapterEncoder (1)
ComfyUI_tinyterraNodes
- ttN text (1)
- ControlNetLoaderAdvanced (3)
- VHS_LoadVideo (1)
- VHS_VideoCombine (3)
- BatchPromptSchedule (1)
- ReActorFaceSwap (1)
- Image Resize (1)
Model Details
Checkpoints (1)
SD1_5\realisticVisionV60B1_v60B1VAE.safetensors
LoRAs (2)
SD1-5\lcm-lora-sdv1-5_lora_weights.safetensors
ip-adapter-faceid-plusv2_sd15_lora.safetensors