AnimateDiff LCM+SD WF to perfectly blend foreground and background (with IPAdapter)
Description
Tips
Workflow development and tutorials not only take up my time but also consume resources. If you like the workflow, please consider a donation or using one of my affiliate links:
Help me with a ko-fi: https://ko-fi.com/koalanation
Use Runpod and I will get credits! https://runpod.io?ref=617ypn0k
What this workflow does
Creates a vid2vid animation in which your hero (foreground) blends seamlessly with a background of your choice. Foreground and background are combined by first compositing a scene and then applying conditional masking with separate ControlNet streams for the foreground and the background. A combination of an LCM and a regular SD1.5 KSampler speeds up generation while still producing detailed frames.
The resulting animation can be upscaled further (this workflow includes FaceDetailer and video frame interpolation); additional upscaling/refining is also possible.
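
To make the conditional-masking idea concrete, here is a minimal sketch in ComfyUI's API ("prompt") format, written as a Python dict: each stream gets its own ControlNet, is restricted to its mask with ConditioningSetMask, and the two streams are merged with ConditioningCombine before reaching the sampler. Only the node class names come from the Node Details list below; the node IDs, wiring and parameter values are illustrative assumptions, not extracted from this workflow.

```python
# Sketch of the foreground/background conditional masking (ComfyUI API format).
# Node IDs, referenced inputs and parameter values are illustrative only.
masked_conditioning = {
    # Foreground stream: OpenPose ControlNet applied to the prompt conditioning
    "10": {"class_type": "ControlNetApplyAdvanced",
           "inputs": {"positive": ["2", 0], "negative": ["3", 0],
                      "control_net": ["4", 0], "image": ["5", 0],
                      "strength": 0.8, "start_percent": 0.0, "end_percent": 1.0}},
    # Restrict the foreground stream to the hero mask
    "11": {"class_type": "ConditioningSetMask",
           "inputs": {"conditioning": ["10", 0], "mask": ["6", 0],
                      "strength": 1.0, "set_cond_area": "default"}},
    # Background stream: depth/MLSD ControlNet applied to the same prompt
    "12": {"class_type": "ControlNetApplyAdvanced",
           "inputs": {"positive": ["2", 0], "negative": ["3", 0],
                      "control_net": ["7", 0], "image": ["8", 0],
                      "strength": 0.6, "start_percent": 0.0, "end_percent": 1.0}},
    # Restrict the background stream to the inverted (background) mask
    "13": {"class_type": "ConditioningSetMask",
           "inputs": {"conditioning": ["12", 0], "mask": ["9", 0],
                      "strength": 1.0, "set_cond_area": "default"}},
    # Merge both masked streams into one positive conditioning for the KSampler
    "14": {"class_type": "ConditioningCombine",
           "inputs": {"conditioning_1": ["11", 0], "conditioning_2": ["13", 0]}},
}
```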
Video tutorials
- LCM + AD: https://youtu.be/QdQANF3YLuI
- AD only: https://youtu.be/gDUeqCErjt4
How to use this workflow
Details can be found here: https://tinyurl.com/34wvyzbs
- Load all the required models (AnimateDiff, IPAdapter, CLIP Vision, LCM, etc.).
If you are using OpenArt's runnable workflow, you can download the example assets here: https://civitai.com/api/download/attachments/12274
- Load your foreground (hero) and background images in the Load Images node in the Image Blending Group.
- Adjust the images so that the foreground ends up in the right position.
- Load the images used by the ControlNets (foreground: OpenPose; background: Zoe depth, MLSD lines). In this workflow they are loaded directly, but they can also be generated within the workflow via preprocessors. The foreground requires OpenPose/DWPose; for the background, other ControlNets can be used.
- Make sure the right models are selected in the different nodes. In OpenArt's runnable workflow they are all available, but some of them have a different name.
- Write a prompt that describes the animation.
- Adjust the different parameters of the workflow. The most critical ones are the foreground mask and the segmentation for the screen.
- Run the workflow. Start with a small number of frames, e.g. 12, and adjust the parameters of the different nodes. When everything looks good, run all the frames (or the full video). In OpenArt's runnable workflow you may want to limit the frames to about 32 (more or less, depending on complexity); see the sketch below for one way to cap the frame count when running locally.
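
If you run ComfyUI locally and have exported the workflow in API format, a small script can queue a quick test pass with a capped frame count. This is a hypothetical helper, not part of the workflow: it assumes the default server address and that the frame source is the VHS_LoadImages node (its image_load_cap input); adjust names and values to your own export.

```python
# Hypothetical helper: queue an exported workflow (API format) against a local
# ComfyUI server while capping the number of frames loaded for a quick test run.
import json
import urllib.request

def queue_test_run(workflow_path, max_frames=12, server="http://127.0.0.1:8188"):
    with open(workflow_path) as f:
        prompt = json.load(f)

    # Cap the frames on every VHS_LoadImages node for a fast preview pass
    for node in prompt.values():
        if node.get("class_type") == "VHS_LoadImages":
            node["inputs"]["image_load_cap"] = max_frames

    data = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(f"{server}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # contains the prompt_id of the queued job

# Example: queue_test_run("animatediff_lcm_blend_api.json", max_frames=12)
```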
Tips about this workflow
- OpenPose/DWPose creates the masks, so these are required.
- SDXL would need some adjustments.
- mm-Stabilized-mid has given the best results for the movement.
- Automatic masking is always tricky, so some trial and error may be needed.
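
For reference, the speed-up described above comes from running part of the sampling with the LCM LoRA. A rough sketch of that branch in ComfyUI API format is shown below: LoraLoader loads the LCM LoRA, ModelSamplingDiscrete switches the model to "lcm" sampling, and the KSampler then runs with the lcm sampler at few steps and low CFG. Only the node class names and the LCM_SD15.safetensors LoRA name come from the lists below; the step/CFG/scheduler values and wiring are assumptions, not the workflow's actual settings.

```python
# Rough sketch of the LCM branch (ComfyUI API format as a Python dict).
# Step, CFG and scheduler values are illustrative, not taken from the workflow.
lcm_branch = {
    "20": {"class_type": "LoraLoader",
           "inputs": {"model": ["1", 0], "clip": ["1", 1],
                      "lora_name": "LCM_SD15.safetensors",
                      "strength_model": 1.0, "strength_clip": 1.0}},
    "21": {"class_type": "ModelSamplingDiscrete",
           "inputs": {"model": ["20", 0], "sampling": "lcm", "zsnr": False}},
    "22": {"class_type": "KSampler",
           "inputs": {"model": ["21", 0], "seed": 42, "steps": 8, "cfg": 1.5,
                      "sampler_name": "lcm", "scheduler": "sgm_uniform",
                      "positive": ["14", 0], "negative": ["3", 0],
                      "latent_image": ["15", 0], "denoise": 1.0}},
}
```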
Versions (2)
- latest (10 months ago)
- v20231203-185847
Node Details
Primitive Nodes (23)
IPAdapterApply
PrimitiveNode
Reroute
Custom Nodes (86)
- ADE_AnimateDiffLoaderWithContext
- ADE_AnimateDiffModelSettings
- ADE_AnimateDiffUniformContextOptions
- ADE_AnimateDiffUnload
ComfyUI
- MaskComposite
- ControlNetApplyAdvanced
- ConditioningCombine
- CLIPVisionLoader
- LoadImage
- KSampler
- VAEDecode
- ImageUpscaleWithModel
- UpscaleModelLoader
- MaskToImage
- PreviewImage
- CheckpointLoaderSimple
- ConditioningSetMask
- EmptyLatentImage
- ImageCompositeMasked
- VAELoader
- CLIPVisionEncode
- CLIPSetLastLayer
- CLIPTextEncode
- unCLIPConditioning
- FreeU
- VAEEncode
- SetLatentNoiseMask
- ModelSamplingDiscrete
- LoraLoader
- ImageResize+
- FILM VFI
- FromBasicPipe_v2
- UltralyticsDetectorProvider
- SAMLoader
- ImpactSimpleDetectorSEGS
- ImpactSEGSToMaskList
- ToBasicPipe
- EditBasicPipe
- ImpactImageBatchToImageList
- FaceDetailer
- MaskListToMaskBatch
- ImageListToImageBatch
- IPAdapterModelLoader
- ControlNetLoaderAdvanced
- VHS_LoadImages
- VHS_VideoCombine
- ColorToMask
- ResizeMask
- BatchUncropAdvanced
- BatchCropFromMaskAdvanced
- GrowMaskWithBlur
- Image Resize
- Image Save
Model Details
Checkpoints (1)
Dreamshaperv8.safetensors
LoRAs (1)
LCM_SD15.safetensors