RAVE AnimateDiff Animation - Text Prompt Consistency Styling For Characters And Background
Description
This workflow uses RAVE and AnimateDiff.
RAVE: https://rave-video.github.io/
RAVE allows you to unleash your imagination and bring your wildest ideas to life. With just a few simple text prompts, you can completely change the style, theme, and even the characters in your videos. Imagine transforming a girl doing exercises into a firefighter, or morphing a car into a truck, train, or fire truck.
By combining the power of RAVE and AnimateDiff, you can achieve stunning visual effects and enhance the overall impact of your videos.
Walkthrough of this workflow: https://www.youtube.com/watch?v=2kCs6MglK70
This is the basic version of the RAVE + AnimateDiff workflow. It makes it easy to change the animation's character and background with just a text prompt, and produces a consistent output.
You do not need to run multiple workflows to split the video into image frames, process them, and upscale them in separate stages.
Everything happens in one workflow, generating a unique style of animation via video-to-video.
For those concerned about performance, please take a look at the requirements below.
If your computer does not have enough processing power, I suggest running this on the OpenArt A10 cloud server instead of waiting around while your computer freezes.
RAVE requires more computing power and VRAM than an AnimateDiff-only workflow: it uses extra memory for preprocessing and in the U-Net to create the consistent style.
----------------------------------------------------------------------------
You also need AnimateDiff: https://animatediff.github.io/
----------------------------------------------------------------------------
ComfyUI RAVE Required Nodes:
https://github.com/spacepxl/ComfyUI-RAVE/tree/main
https://github.com/BlenderNeko/ComfyUI_Noise
----------------------------------------------------------------------------
Other nodes we need:
ComfyUI-KJNodes (for the Get/Set nodes)
***You need this to connect the nodes. Search for "KJNodes" in ComfyUI Manager to download it,
or get it here: https://github.com/kijai/ComfyUI-KJNodes
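If you prefer to install the node packs manually instead of through ComfyUI Manager, here is a minimal Python sketch that clones the three repos listed above into custom_nodes. The ComfyUI install path is an assumption; adjust it to your own setup and restart ComfyUI afterwards.

```python
import subprocess
from pathlib import Path

COMFYUI_DIR = Path("ComfyUI")               # assumed install location; change to your path
CUSTOM_NODES = COMFYUI_DIR / "custom_nodes"

repos = [
    "https://github.com/spacepxl/ComfyUI-RAVE",
    "https://github.com/BlenderNeko/ComfyUI_Noise",
    "https://github.com/kijai/ComfyUI-KJNodes",
]

for url in repos:
    target = CUSTOM_NODES / url.rstrip("/").split("/")[-1]
    if not target.exists():
        # clone each node pack into ComfyUI/custom_nodes
        subprocess.run(["git", "clone", url, str(target)], check=True)
```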
----------------------------------------------------------------------------
ControlNet (copy into the models/controlnet folder)
Depth: can be downloaded from ComfyUI Manager
AnimateDiff Motion ControlNet: https://huggingface.co/crishhh/animatediff_controlnet/tree/main
AnimateDiff Motion Models: https://huggingface.co/guoyww/animatediff/tree/main
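If you would rather script the downloads than grab the files by hand, a sketch using huggingface_hub is shown below. The exact file names and target folders are assumptions based on a typical ComfyUI setup, so check each repo's file list and your own model paths before running.

```python
from huggingface_hub import hf_hub_download

# AnimateDiff motion module (assumed file name; pick the motion module you actually use)
hf_hub_download(
    repo_id="guoyww/animatediff",
    filename="mm_sd_v15_v2.ckpt",
    local_dir="ComfyUI/models/animatediff_models",
)

# AnimateDiff motion ControlNet (assumed file name; goes in models/controlnet)
hf_hub_download(
    repo_id="crishhh/animatediff_controlnet",
    filename="controlnet_checkpoint.ckpt",
    local_dir="ComfyUI/models/controlnet",
)
```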
----------------------------------------------------------------------------
System Requirements
Min. requirement: 12GB VRAM
Best frame range: 60-200 frames per generation
12GB VRAM max frame range: 60 frames per generation
16GB VRAM max frame range: 150 frames per generation
24GB VRAM max frame range: 250 frames per generation
Suggested frame width and height: 512, 560, 980, 960, 920 px. (Do not try to run HD or 4K resolution in LoadVideo. It will burn through your computer's memory in a second.)
Suggested ControlNets: Depth, DWPose, LineArt, AnimateDiff ControlNet.
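As a rough sanity check before loading a clip, the sketch below reads the GPU's total VRAM with PyTorch (which ComfyUI already depends on) and maps it to the frame caps listed above. The thresholds simply mirror the table and are a guideline, not a guarantee; the resulting number is what you might set as the frame cap (e.g. frame_load_cap) in the LoadVideo node.

```python
import torch

def suggested_max_frames() -> int:
    """Map total GPU VRAM to the frame caps listed above (rough guide only)."""
    if not torch.cuda.is_available():
        return 0  # CPU-only: not recommended for this workflow
    vram_gb = torch.cuda.get_device_properties(0).total_memory / (1024 ** 3)
    if vram_gb >= 24:
        return 250
    if vram_gb >= 16:
        return 150
    if vram_gb >= 12:
        return 60
    return 0  # below the 12GB minimum

print(f"Suggested frame cap: {suggested_max_frames()}")
```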
----------------------------------------------------------------------------
If you do not have a high-end graphics card, I suggest running this on the OpenArt A10 cloud server.
I have not tried other ComfyUI-compatible cloud servers, so I cannot comment on other companies' cloud services.
----------------------------------------------------------------------------
P.S.: Thank you to the supporters who have joined my Patreon. Some people try to game the system by subscribing and cancelling on the same day, which Patreon's fraud detection marks as suspicious activity and blocks automatically. If you try something shady on a system, please don't come here to blame me or leave bad-mouthing comments about it. I am just trying to focus on making workflows, improving things, and publishing them publicly to share with like-minded people; I have no time to monitor every person's activity on my Patreon.
Node Diagram
Discussion
What were the prompts you used for the other backgrounds (besides the beach one that's included in the current .json)?
best
I can run/convert wf to api for mobile app or website, tele @hongdthaui
Node Details
Primitive Nodes (54)
GetNode (26)
Integer (3)
Note (1)
SetNode (24)
Custom Nodes (60)
- ADE_AnimateDiffLoaderWithContext (1)
- ADE_AnimateDiffUniformContextOptions (1)
- CR Seed (1)
ComfyUI
- EmptyImage (1)
- ImageScale (1)
- ImageCompositeMasked (1)
- GrowMask (1)
- ImageUpscaleWithModel (1)
- PreviewImage (3)
- LoadImage (1)
- LoraLoader (2)
- ModelSamplingDiscrete (1)
- FlipSigmas (1)
- KSamplerSelect (1)
- VAEEncode (2)
- SamplerCustom (1)
- BasicScheduler (1)
- VAELoader (1)
- LoraLoaderModelOnly (1)
- FreeU_V2 (1)
- InvertMask (1)
- CLIPTextEncode (2)
- RescaleCFG (1)
- VAEDecode (3)
- CheckpointLoaderSimple (1)
- KSamplerAdvanced (1)
- MaskBlur+ (1)
- MaskFromColor+ (1)
- ImageResize+ (1)
- ImageCASharpening+ (1)
- RIFE VFI (1)
- SEGSPaste (1)
- SEGSDetailerForAnimateDiff (1)
- SAMLoader (1)
- ToBasicPipe (1)
- ImpactSimpleDetectorSEGS_for_AD (1)
- UltralyticsDetectorProvider (1)
- OneFormer-COCO-SemSegPreprocessor (1)
- PixelPerfectResolution (1)
- Zoe-DepthMapPreprocessor (1)
- ControlNetLoaderAdvanced (2)
- ACN_AdvancedControlNetApply (2)
- KSamplerRAVE (1)
- VHS_VideoCombine (5)
- VHS_LoadVideo (1)
- ReActorFaceSwap (1)
- Upscale Model Loader (1)
Model Details
Checkpoints (1)
SD1_5\realisticVisionV60B1_v60B1VAE.safetensors
LoRAs (3)
SD1-5\add_detail.safetensors
SD1-5\lcm-lora-sdv1-5_lora_weights.safetensors
v3_sd15_adapter.ckpt