Thicc Lines Comic Style
Description
What this workflow does
👉 Turns any video into a comic style inspired by GTA5 artwork and Archer. With thicc lines.
The main "trick" I found that works really well for this specific
style is to make use of comfies normal image manipulation nodes,
especially blending and sharpening. Before the frames are even sent to
the AI sampler, I blend
1) the output of the controlnet anime lineart preprocess applied to the normal video frame and
2) the output of a realistic lineart preprocessor applied to a midas depth map of the normal video frame and
3) invert the image.
This gives a black-lines-on-white-background "sketch" of the entire video. Fine details come out weaker because of how the anime lineart preprocessor works, but the outlines of characters and prominent objects are highlighted much more strongly.
I then blend this sketch version of each frame with the original video, twice (because if I only do it once the picture gets fairly bright from all the white in the sketch). This basically results in a rotoscoped version of the normal frames.
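For anyone who'd rather read it as code, here is a rough Python/PIL sketch of that per-frame pre-processing. The two lineart passes are assumed to already exist as images (they stand in for the AnimeLineArtPreprocessor and the LineArtPreprocessor run on the MiDaS depth map), and the 0.5 blend factors are assumptions, not the exact values used in the workflow:

```python
# Rough approximation of the pre-processing chain, using PIL instead of the
# actual ComfyUI ImageBlend / ImageInvert nodes. Blend factors are guesses.
from PIL import Image, ImageOps

def make_sketch(anime_lines: Image.Image, depth_lines: Image.Image) -> Image.Image:
    """Blend the anime-lineart pass with the lineart-of-depth-map pass,
    then invert so the result is black lines on a white background."""
    blended = Image.blend(anime_lines.convert("RGB"),
                          depth_lines.convert("RGB"), alpha=0.5)
    return ImageOps.invert(blended)

def rotoscope(frame: Image.Image, sketch: Image.Image) -> Image.Image:
    """Blend the sketch onto the original frame twice; a single blend
    leaves the image too bright because of all the white in the sketch."""
    frame = frame.convert("RGB")
    once = Image.blend(frame, sketch, alpha=0.5)   # first blend: still too bright
    return Image.blend(once, frame, alpha=0.5)     # second blend pulls it back toward the frame

# sketch = make_sketch(anime_lineart_img, depth_lineart_img)
# preprocessed = rotoscope(original_frame, sketch)  # this is what goes to the sampler
```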
THEN I send this "pre-processed" picture to the AI sampler. Since the
input now already contains "comic lines" to begin with, the AI manages
to pull off quite nice results with some caveats:
- Backgrounds are meh
- Far away characters are meh or terrible
These are the two main problems remaining with the workflow, although I haven't tried post-processing methods like adetailer yet.
How to use this workflow
👉 Choose your input video, set your frame divider (every nth frame) and resolution, and in the output set your framerate according to your divider.
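The framerate math is just the source fps divided by the divider. A quick sanity check, using example numbers (a 30 fps clip and a divider of 2 are assumptions, not values from the workflow):

```python
# Output framerate = source framerate / frame divider ("every nth frame").
source_fps = 30     # fps of the input clip (example value)
frame_divider = 2   # keep every 2nd frame

output_fps = source_fps / frame_divider
print(output_fps)   # 15.0 -> set this as the framerate in the video output node
```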
Tips about this workflow
👉 Tested with Comic Babes model: https://civitai.com/models/20294?modelVersionId=24129
👉 Has a bypassed quantize node at the end - enable it for color compression and a different kind of style (the sketch after these tips shows roughly what that does)
👉 Has both a normal VAE Decode and a Tiled VAE Decode in the workflow. The tiled one is disconnected in the uploaded version. If you run out of VRAM on VAE Decode, just connect the Tiled VAE Decode to the (bypassed) quantize node and set your tile size to whatever your hardware can handle (takes longer but saves VRAM).
👉 Depending on hardware and video length, you might run out of normal RAM before running out of VRAM (this happens for me with clips longer than 25 sec) at various sharpening/blending or even video save steps. After comfy gives you the out-of-memory error you can just queue again (if you haven't changed anything in between) and it will resume where it left off and usually succeed (unless you went wayyy overboard with your clip length or resolution). Normal RAM management seems to be way worse than VRAM management in comfy, so this happens quite often when I push it on length.
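On the quantize tip above: enabling it crushes each frame down to a small fixed palette, which flattens gradients into bands of solid color. A minimal PIL stand-in for that effect (not the ComfyUI ImageQuantize node itself; the file name and 16-color count are just example values):

```python
# Posterize a frame to a small palette to mimic the "color compression" look.
from PIL import Image

frame = Image.open("frame_0001.png").convert("RGB")     # hypothetical frame file
posterized = frame.quantize(colors=16).convert("RGB")   # 16-color palette (example)
posterized.save("frame_0001_quantized.png")
```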
Versions (1)
- latest (2 years ago)
Node Details
Primitive Nodes (3)
PrimitiveNode (3)
Custom Nodes (43)
- ADE_AnimateDiffLoaderWithContext (1)
- ADE_AnimateDiffUniformContextOptions (1)
- CheckpointLoaderSimpleWithNoiseSelect (1)
ComfyUI
- KSampler (1)
- CLIPTextEncode (2)
- CLIPSetLastLayer (1)
- LoraLoader (3)
- FreeU_V2 (1)
- HyperTile (1)
- ImageSharpen (2)
- VAELoader (1)
- ImageInvert (2)
- ImageQuantize (2)
- ImageBlur (1)
- ImageBlend (3)
- ControlNetApplyAdvanced (4)
- ImageScale (1)
- VAEDecode (1)
- VAEDecodeTiled (1)
- VAEEncode (1)
- MiDaS-DepthMapPreprocessor (1)
- LineArtPreprocessor (1)
- DWPreprocessor (1)
- AnimeLineArtPreprocessor (1)
- TilePreprocessor (1)
- ControlNetLoaderAdvanced (4)
- VHS_VideoCombine (2)
- VHS_LoadVideo (1)
Model Details
Checkpoints (1)
comicBabes_v1.safetensors
LoRAs (3)
GTA_Style.safetensors
SD1.5\animatediff\v3_sd15_adapter.ckpt
lcm_sd15.safetensors