
AnimateDiff LCM+SD WF to perfectly blend foreground and background (with IPAdapter)

Description


Developing workflows and tutorials takes both my time and resources. If you like this workflow, please consider a donation or using one of my affiliate links:

Help me with a ko-fi: https://ko-fi.com/koalanation

🚨 Use RunPod and I will get credits! https://runpod.io?ref=617ypn0k 🚨

😉👌🔥 Run ComfyUI without installation with:

ThinkDiffusion

RunDiffusion

What this workflow does

Creates a vid2vid animation in which your hero (foreground) blends seamlessly with the background (anything you want). The foreground and background are combined by first compositing a scene and then applying conditional masking with separate ControlNet streams for the foreground and the background. A combination of an LCM KSampler and a regular SD1.5 KSampler speeds up generation while keeping good frame detail.
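
To illustrate the masking idea, here is a conceptual sketch in plain PyTorch (not ComfyUI's actual API): ConditioningSetMask restricts each prompt/ControlNet stream to its mask region, and the combined result behaves roughly like the region-wise blend below. All names and shapes are hypothetical.

    # Conceptual sketch only: region-wise blending of two conditioning streams,
    # roughly what ConditioningSetMask + ConditioningCombine achieve inside the
    # sampler. Plain PyTorch; not ComfyUI internals. Shapes are hypothetical.
    import torch

    def blend_streams(pred_fg, pred_bg, fg_mask):
        # pred_fg/pred_bg: (B, C, H, W) latent predictions from the foreground
        # (openpose + hero prompt) and background (depth/MLSD + scene prompt).
        # fg_mask: (B, 1, H, W) in [0, 1]; 1 = hero region, 0 = background.
        return fg_mask * pred_fg + (1.0 - fg_mask) * pred_bg

    pred_fg = torch.randn(1, 4, 64, 64)  # SD1.5 latents are 1/8 of image size
    pred_bg = torch.randn(1, 4, 64, 64)
    mask = torch.zeros(1, 1, 64, 64)
    mask[..., 16:48, 20:44] = 1.0        # hypothetical hero region
    combined = blend_streams(pred_fg, pred_bg, mask)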


The resulting animation can be upscaled further (here I use FaceDetailer and video frame interpolation). Further upscaling/refining is also possible.
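
For intuition: frame interpolation synthesizes in-between frames so the animation plays more smoothly. The workflow uses a learned interpolator; the naive NumPy cross-fade below is only meant to convey the idea.

    # Naive illustration of frame interpolation: insert a synthetic in-between
    # frame to double the effective frame rate. The workflow uses a learned
    # interpolator; a plain cross-fade is shown here only to convey the idea.
    import numpy as np

    def midpoint_frame(frame_a, frame_b):
        # frame_a/frame_b: (H, W, 3) uint8 frames
        mid = 0.5 * frame_a.astype(np.float32) + 0.5 * frame_b.astype(np.float32)
        return mid.clip(0, 255).astype(np.uint8)

    a = np.zeros((256, 256, 3), dtype=np.uint8)
    b = np.full((256, 256, 3), 255, dtype=np.uint8)
    frames = [a, midpoint_frame(a, b), b]  # extra frame smooths playback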


Video tutorial links 🎥

👉 LCM + AD: https://youtu.be/QdQANF3YLuI

👉 AD only: https://youtu.be/gDUeqCErjt4


How to use this workflow

👉 Details can be found here: https://tinyurl.com/34wvyzbs

  • Load all the corresponding models to be used (AnimateDiff, IPAdapter, CLIP Vision, LCM, etc.).

If you are using OpenArt's runnable workflow, you can download the example assets by clicking here 👉 https://civitai.com/api/download/attachments/12274

  • Load your foreground (hero) and background images in the Load Images nodes in the Image Blending group (a minimal compositing sketch follows this list).
  • Adjust the images so the foreground image sits in the right position.
  • Load the images used for the ControlNets (foreground: OpenPose; background: Zoe depth and MLSD lines). In this workflow they are loaded directly, but they can also be generated in the workflow via preprocessors. The foreground requires OpenPose/DWPose; for the background, other preprocessors can be used.
  • Make sure the right models are selected in the different nodes. In OpenArt's runnable workflow they are all available, but some of them have different names.
  • Write a prompt that describes the animation.
  • Adjust the parameters of the workflow. The most critical are the foreground mask and the segmentation for the screen.
  • Run the workflow. Start with a small number of frames, e.g. 12, and then adjust the parameters of the different nodes. When everything looks good, run all the frames (or the whole video). In OpenArt's runnable workflow you may want to limit the run to about 32 frames (more or less, depending on complexity).
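
As referenced in the first step, here is a minimal sketch of what the Image Blending group does, written with PIL outside of ComfyUI; the file names and position are hypothetical placeholders.

    # Minimal PIL sketch of the blending step (ImageCompositeMasked-style):
    # paste the hero onto the background at a chosen position using a mask.
    # File names and the position are hypothetical placeholders.
    from PIL import Image

    def blend_scene(background_path, hero_path, mask_path, position=(0, 0)):
        bg = Image.open(background_path).convert("RGB")
        hero = Image.open(hero_path).convert("RGB")
        mask = Image.open(mask_path).convert("L")  # white = keep hero pixel
        bg.paste(hero, position, mask)
        return bg

    # scene = blend_scene("background.png", "hero.png", "hero_mask.png", (120, 40))
    # scene.save("blended_scene.png")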


Tips about this workflow

  • OpenPose/DWPose is used to create the masks, so these images are required.
  • SDXL would need some adjustments.
  • mm-Stabilized-mid has given the best results for movement.
  • Masks and automatic segmentation are always tricky, so some trial and error may be needed (see the mask-feathering sketch below).
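
On that last point, growing and feathering the mask often hides hard seams between the hero and the background. Below is a rough stand-in for what GrowMaskWithBlur does, assuming NumPy and SciPy are available; the parameter values are just starting points.

    # Rough stand-in for GrowMaskWithBlur: dilate the pose-derived mask and
    # feather its edge so the hero blends into the background without seams.
    # Assumes NumPy and SciPy; parameter values are just starting points.
    import numpy as np
    from scipy import ndimage

    def grow_and_blur(mask, grow_px=8, blur_sigma=4.0):
        # mask: (H, W) float array in [0, 1]
        grown = ndimage.binary_dilation(mask > 0.5, iterations=grow_px)
        return ndimage.gaussian_filter(grown.astype(np.float32), sigma=blur_sigma)

    m = np.zeros((128, 128), dtype=np.float32)
    m[40:90, 50:80] = 1.0
    soft = grow_and_blur(m)  # soft-edged mask blends better than a hard cutout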



Attachments

  • Images.zip (4.9 MB)

Versions

  • latest (8 months ago)
  • v20231203-185847

Primitive Nodes (23)

  • IPAdapterApply (1)
  • PrimitiveNode (2)
  • Reroute (20)

Custom Nodes (86)

AnimateDiff Evolved

  • ADE_AnimateDiffLoaderWithContext (1)
  • ADE_AnimateDiffModelSettings (1)
  • ADE_AnimateDiffUniformContextOptions (1)
  • ADE_AnimateDiffUnload (1)

ComfyUI

  • MaskComposite (1)
  • ControlNetApplyAdvanced (3)
  • ConditioningCombine (2)
  • CLIPVisionLoader (1)
  • LoadImage (3)
  • KSampler (3)
  • VAEDecode (2)
  • ImageUpscaleWithModel (1)
  • UpscaleModelLoader (1)
  • MaskToImage (1)
  • PreviewImage (5)
  • CheckpointLoaderSimple (1)
  • ConditioningSetMask (4)
  • EmptyLatentImage (1)
  • ImageCompositeMasked (1)
  • VAELoader (1)
  • CLIPVisionEncode (1)
  • CLIPSetLastLayer (1)
  • CLIPTextEncode (4)
  • unCLIPConditioning (1)
  • FreeU (1)
  • VAEEncode (1)
  • SetLatentNoiseMask (1)
  • ModelSamplingDiscrete (2)
  • LoraLoader (1)
  • ImageResize+ (1)
  • FromBasicPipe_v2 (3)
  • UltralyticsDetectorProvider (2)
  • SAMLoader (2)
  • ImpactSimpleDetectorSEGS (1)
  • ImpactSEGSToMaskList (1)
  • ToBasicPipe (1)
  • EditBasicPipe (1)
  • ImpactImageBatchToImageList (1)
  • FaceDetailer (1)
  • MaskListToMaskBatch (1)
  • ImageListToImageBatch (1)
  • IPAdapterModelLoader (1)
  • ControlNetLoaderAdvanced (3)
  • VHS_LoadImages (3)
  • VHS_VideoCombine (3)
  • ColorToMask (2)
  • ResizeMask (1)
  • BatchUncropAdvanced (1)
  • BatchCropFromMaskAdvanced (1)
  • GrowMaskWithBlur (1)
  • Image Resize (2)
  • Image Save (3)

Checkpoints (1)

Dreamshaperv8.safetensors

LoRAs (1)

LCM_SD15.safetensors