【Update】Comfy Daily | Harry Potter Style Live Newspaper + Single-person poster

Description

ComfyDaily by Hyacinth & Simon Lee


ComfyDaily is a workflow for generating live portraits in the Harry Potter style.


Update:

Added a single-person poster version.


NOTE:

1. This workflow is extremely memory-hungry in a single run. If you run out of CUDA memory, you can split it into two steps: first generate the photo, then generate the live portrait.

2. Ollama is not required. If you haven't installed it, you can disable that node and write the prompt yourself.

3. You can record any facial expression video to replace face.mp4.

————————————————


    INSTRUCTIONS:


    Base Model - RealVisXL V4.0:

    https://civitai.com/models/139562/realvisxl-v40


    LoRA:

    Harry Potter Style / Uniforms XL Lora:

    https://civitai.com/models/301881/harry-potter-styleuniforms-xl-lora


    American Newspaper Front Page: https://civitai.com/models/555597/american-newspaper-front-page


    ————————————————


    Ollama:

    https://ollama.com/

    https://github.com/stavsap/comfyui-ollama


    InstantID:

    https://github.com/cubiq/ComfyUI_InstantID


    PuLID:

    https://github.com/cubiq/PuLID_ComfyUI


    Live Portrait:

    https://github.com/kijai/ComfyUI-LivePortrait


    ——————————————————


    License


    Since this workflow was created with InstantID and PuLID, which rely on InsightFace, it is available for non-commercial research purposes only.



    ❤ Have Fun!


    by Hyacinth & Simon Lee

    Node Diagram
    Discussion
    CyberDickLang (9 months ago)

    Brilliant idea

    XiaoHuangGua (9 months ago)

    Awesome.

    Anatomica (9 months ago)

    Pretty fun stuff.

    吴显 (9 months ago)

    Could you upload face2.mp4 so I can try the Live Portrait result?

    Nan Wei (9 months ago)

    Can it generate just one image at a time? Generating 4 at once runs out of VRAM, and setting the batch size to 1 throws an error. How do I change this?

    xi Xi (9 months ago)

    Just bypass the other three.

    gtbloody (9 months ago)

    The internet celebrity I followed years ago is now moving into OpenArt. I'm touched.

    许启腾 (9 months ago)

    What the boss means when he says "print me a GIF".

    胖子王 (9 months ago)

    face2.mp4?

    Simon Lee (9 months ago)

    Just record any facial-expression video. We recorded our own, so we didn't include it.

    李杰 (9 months ago)

    Awesome, my results came out great.

    Simon Lee (9 months ago)

    Glad you like it!

    Daniel Hofmann (9 months ago)

    Unfortunately, I don't understand what I have to connect to the single version.

    Lau Shine (9 months ago)

    !!! Exception during processing!!! Sizes of tensors must match except in dimension 2. Expected size 120 but got size 109 for tensor number 1 in the list.

    Traceback (most recent call last):

     File "D:\ComfyUI-aki-v1.3\execution.py", line 151, in recursive_execute

       output_data, output_ui = get_output_data(obj, input_data_all)

     File "D:\ComfyUI-aki-v1.3\execution.py", line 81, in get_output_data

       return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)

     File "D:\ComfyUI-aki-v1.3\execution.py", line 74, in map_node_over_list

       results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))

     File "D:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-KJNodes\nodes\image_nodes.py", line 1288, in combine

       image, = ImageConcanate.concanate(self, image, new_image, direction, match_image_size, first_image_shape=first_image_shape)

     File "D:\ComfyUI-aki-v1.3\custom_nodes\ComfyUI-KJNodes\nodes\image_nodes.py", line 257, in concanate

       concatenated_image = torch.cat((image1, image2_resized), dim=2)  # Concatenate along width

    RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 120 but got size 109 for tensor number 1 in the list.

    吴秉儒 (9 months ago)


    Have you solved it? I also encountered the same problem.


    Tuatara (8 months ago)

    See the answer below ⬇️

    Ber Yab (8 months ago)

    It seems to be an issue with the face video's length. I converted the downloaded 25 fps video to 60 fps with an 8-second duration, and then it ran through.

    Tuatara (8 months ago)

    I hit the same problem; it turned out my image dimensions were wrong. Check whether your image width/height (Image Scale Down) and the Image Crop width/height are multiples of each other.
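
    For context, here is a minimal PyTorch sketch (not part of the workflow) of why this mismatch breaks the ImageConcatMulti / ImageConcanate step: ComfyUI images are (batch, height, width, channels) tensors, the node concatenates along width (dim=2), and torch.cat requires every other dimension to match, so frames whose heights drift apart after cropping/scaling fail exactly as in the traceback above.

    import torch

    a = torch.zeros(1, 120, 512, 3)   # frame with height 120
    b = torch.zeros(1, 109, 512, 3)   # frame whose crop left it at height 109

    try:
        torch.cat((a, b), dim=2)       # concatenate along width, as the node does
    except RuntimeError as e:
        print(e)                       # "Sizes of tensors must match except in dimension 2 ..."

    # Resizing one frame to the other's height (i.e. making the Image Scale Down and
    # Image Crop dimensions agree, as suggested above) lets the concat go through:
    b_resized = torch.nn.functional.interpolate(
        b.movedim(-1, 1), size=(120, 512), mode="bilinear"
    ).movedim(1, -1)
    print(torch.cat((a, b_resized), dim=2).shape)  # torch.Size([1, 120, 1024, 3])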

    倪博洋 (8 months ago)

    !!! Exception during processing!!! Sizes of tensors must match except in dimension 2. Expected size 120 but got size 109 for tensor number 1 in the list.

    吴秉儒 (8 months ago)

    Why do all four images have the same expression?

    aa aa (8 months ago)

    Why can't my results shake and nod the head like the author's? I only get changes in the eyes and mouth, even though my own recorded video does have head movement.

    Tuatara (8 months ago)

    Same here. My expressions were already very exaggerated, but in the final result only the mouth moves.

    LIOR ROITER (8 months ago)

    WHAT IS THE PROBLEM?

    Prompt outputs failed validation
    ImageConcatMulti:
    - Return type mismatch between linked nodes: image_1, LP_OUT != IMAGE
    - Return type mismatch between linked nodes: image_2, LP_OUT != IMAGE
    ImageConcatMulti:
    - Return type mismatch between linked nodes: image_1, LP_OUT != IMAGE
    - Return type mismatch between linked nodes: image_2, LP_OUT != IMAGE

    baiqiuxiu yang (5 months ago)

    Me too! Have you solved it? What is the problem?

    LIOR ROITER (8 months ago)

    With PuLID-CPU:


    Error occurred when executing ApplyPulid: No operator found for `memory_efficient_attention_forward` with inputs: query : shape=(1, 577, 16, 64) (torch.float16) key : shape=(1, 577, 16, 64) (torch.float16) value : shape=(1, 577, 16, 64) (torch.float16) attn_bias :
    p : 0.0
    `decoderF` is not supported because:
    xFormers wasn't build with CUDA support
    attn_bias type is
    operator wasn't built - see `python -m xformers.info` for more info
    `flshattF@0.0.0` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
    `cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
    `smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    has custom scale
    operator wasn't built - see `python -m xformers.info` for more info
    unsupported embed per head: 64

    File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\PuLID_ComfyUI\pulid.py", line 383, in apply_pulid
    id_cond_vit, id_vit_hidden = eva_clip(face_features_image, return_all_features=False, return_hidden=True, shuffle=False)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\PuLID_ComfyUI\eva_clip\eva_vit_model.py", line 544, in forward
    x, hidden_states = self.forward_features(x, return_all_features, return_hidden, shuffle)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\PuLID_ComfyUI\eva_clip\eva_vit_model.py", line 531, in forward_features
    x = blk(x, rel_pos_bias=rel_pos_bias)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\PuLID_ComfyUI\eva_clip\eva_vit_model.py", line 293, in forward
    x = x + self.drop_path(self.attn(self.norm1(x), rel_pos_bias=rel_pos_bias, attn_mask=attn_mask))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\PuLID_ComfyUI\eva_clip\eva_vit_model.py", line 208, in forward
    x = xops.memory_efficient_attention(
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\xformers\ops\fmha\__init__.py", line 268, in memory_efficient_attention
    return _memory_efficient_attention(
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\xformers\ops\fmha\__init__.py", line 387, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\xformers\ops\fmha\__init__.py", line 403, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp, False)
    ^^^^^^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\xformers\ops\fmha\dispatch.py", line 125, in _dispatch_fw
    return _run_priority_list(
    ^^^^^^^^^^^^^^^^^^^
    File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\xformers\ops\fmha\dispatch.py", line 65, in _run_priority_list
    raise NotImplementedError(msg)

    LIOR ROITER (8 months ago)

    With PuLID-CUDA:


    return forward_call(*args, **kwargs)
    File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\PuLID_ComfyUI\eva_clip\eva_vit_model.py", line 293, in forward
    x = x + self.drop_path(self.attn(self.norm1(x), rel_pos_bias=rel_pos_bias, attn_mask=attn_mask))
    File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
    File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
    File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\PuLID_ComfyUI\eva_clip\eva_vit_model.py", line 208, in forward
    x = xops.memory_efficient_attention(
    File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\xformers\ops\fmha\__init__.py", line 268, in memory_efficient_attention
    return _memory_efficient_attention(
    File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\xformers\ops\fmha\__init__.py", line 387, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
    File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\xformers\ops\fmha\__init__.py", line 403, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp, False)
    File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\xformers\ops\fmha\dispatch.py", line 125, in _dispatch_fw
    return _run_priority_list(
    File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\xformers\ops\fmha\dispatch.py", line 65, in _run_priority_list
    raise NotImplementedError(msg)
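
    Both PuLID tracebacks above end the same way: xFormers is installed without CUDA kernels, so xops.memory_efficient_attention has no backend to dispatch to and raises NotImplementedError (reinstalling an xFormers wheel that matches the local PyTorch/CUDA build typically resolves this). As a hedged illustration only, not PuLID's actual code, a fallback of this shape avoids the hard failure by routing through PyTorch's built-in scaled_dot_product_attention:

    import torch
    import torch.nn.functional as F

    def attention_with_fallback(q, k, v):
        # Tensors use the xFormers layout (batch, seq, heads, head_dim).
        try:
            import xformers.ops as xops
            return xops.memory_efficient_attention(q, k, v)  # needs a CUDA-built xFormers
        except (ImportError, NotImplementedError):
            # SDPA (PyTorch >= 2.0) expects (batch, heads, seq, head_dim), so transpose in and out.
            q, k, v = (t.transpose(1, 2) for t in (q, k, v))
            return F.scaled_dot_product_attention(q, k, v).transpose(1, 2)

    # Shapes taken from the error above (float32 here for CPU portability).
    q = k = v = torch.randn(1, 577, 16, 64)
    print(attention_with_fallback(q, k, v).shape)  # torch.Size([1, 577, 16, 64])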

    Lord Lethris (7 months ago)

    This is an excellent concept, but...

    There are a huge number of unnecessary nodes here.


    One huge change I made was to remove the comfyui-ollama node, which is a big waste of resources. I replaced it with WAS Node Suite's "BLIP Analyze Image"; it does exactly the same thing without the need to run a separate application.


    Issues I found (not blaming the dev; this was probably due to ComfyUI changes):


    • No guide on how to install InstantID correctly
    • No guide on how to install PuLID correctly
    • The seed input was missing, so I added a Seed Generator node
    • "Use Everywhere" doesn't work.


    I also think "InstantID" and "PuLID" are unnecessary, as "IPAdapter Plus" does all of this natively. But I haven't tested this yet.


    Overall quite good, but it took over 4 hours to get working after fixing all the issues and removing redundant and unnecessary nodes.

    daren0922 (7 months ago)

    Error occurred when executing PulidEvaClipLoader: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
    File "G:\ComfyUI-aki-v1.3\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "G:\ComfyUI-aki-v1.3\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "G:\ComfyUI-aki-v1.3\execution.py", line 70, in map_node_over_list
    results.append(getattr(obj, func)())
    File "G:\ComfyUI-aki-v1.3\custom_nodes\PuLID_ComfyUI\pulid.py", line 259, in load_eva_clip
    model, _, _ = create_model_and_transforms('EVA02-CLIP-L-14-336', 'eva_clip', force_custom_clip=True)
    File "G:\ComfyUI-aki-v1.3\custom_nodes\PuLID_ComfyUI\eva_clip\factory.py", line 377, in create_model_and_transforms
    model = create_model(
    File "G:\ComfyUI-aki-v1.3\custom_nodes\PuLID_ComfyUI\eva_clip\factory.py", line 279, in create_model
    checkpoint_path = download_pretrained(pretrained_cfg, cache_dir=cache_dir)
    File "G:\ComfyUI-aki-v1.3\custom_nodes\PuLID_ComfyUI\eva_clip\pretrained.py", line 328, in download_pretrained
    target = download_pretrained_from_hf(model_id, filename=filename, cache_dir=cache_dir)
    File "G:\ComfyUI-aki-v1.3\custom_nodes\PuLID_ComfyUI\eva_clip\pretrained.py", line 300, in download_pretrained_from_hf
    cached_file = hf_hub_download(model_id, filename, revision=revision, cache_dir=cache_dir)
    File "G:\ComfyUI-aki-v1.3\python\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
    File "G:\ComfyUI-aki-v1.3\python\lib\site-packages\huggingface_hub\file_download.py", line 1371, in hf_hub_download
    raise LocalEntryNotFoundError(

    How should I fix this?
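
    The PulidEvaClipLoader error above means the EVA-CLIP checkpoint could not be fetched from the Hugging Face Hub and was not already in the local cache. A hedged workaround sketch, assuming the standard public EVA-CLIP release (verify the repo and filename against PuLID_ComfyUI/eva_clip/pretrained.py), is to pre-download the file on a machine with internet access so the loader finds it in the cache:

    from huggingface_hub import hf_hub_download

    # Repo and filename are assumptions for EVA02-CLIP-L-14-336, not taken from this page.
    path = hf_hub_download(
        repo_id="QuanSun/EVA-CLIP",
        filename="EVA02_CLIP_L_336_psz14_s6B.pt",
    )
    print(path)  # cached location that the next ComfyUI run should pick up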

    daren0922 (7 months ago)

    Error occurred when executing ApplyPulid: 'ModelPatcher' object has no attribute 'get_model_object'. Any idea how to fix this?

    CHUIZI GUOGUO (7 months ago)

    Why are my mask areas only white or grey?

    rice_zhang (6 months ago)

    Has anyone succeeded? I can't get it to work; an error always occurs.

    rice_zhang (6 months ago)

    I really love it, but it's so hard to handle.

    Alvin (5 months ago)

    Is there a reference image for the single-person version?

    Lio Kyiv (4 months ago)

    How are you doing

    aybar avcı (3 months ago)

    How much VRAM do we need?


    Resources (4)

      Single-person poster.zip (9.6 MB)
      BG_Canvas.jpg (2.4 MB)
      Openart_LiveportraitOnly.json (33.3 kB)
      MASK.png (14.5 kB)

    Reviews

    TN (8 months ago)

    The most thoughtfully built workflow I've seen so far!!!!

    akidio (8 months ago)

    nice

    Drift Crow (8 months ago)

    nice

    麦克斯 (9 months ago)

    yixiang9429 (9 months ago)

    Brilliant idea

    ERICK CHEN (9 months ago)

    Ring Hyacinth (9 months ago)

    黄春茂 (9 months ago)

    William Wue (9 months ago)

    linjun3739 (9 months ago)

    Versions (1)

    • - latest (9 months ago)

    Primitive Nodes (15)

    Anything Everywhere3 (2)

    DownloadAndLoadLivePortraitModels (1)

    ImageConcatMulti (3)

    LivePortraitProcess (4)

    Note (1)

    PrimitiveNode (3)

    Simple String (1)

    Custom Nodes (59)

    • - ImageCompositeAbsolute (1)

    • - CR Image Grid Panel (2)

    ComfyUI

    • - InvertMask (2)

    • - JoinImageWithAlpha (2)

    • - ImageScaleBy (1)

    • - LoadImage (3)

    • - CheckpointLoaderSimple (1)

    • - ImageScale (1)

    • - ControlNetLoader (1)

    • - DifferentialDiffusion (1)

    • - LatentUpscaleBy (1)

    • - EmptyLatentImage (1)

    • - LoraLoader (2)

    • - VAELoader (1)

    • - CLIPTextEncode (1)

    • - MaskToImage (1)

    • - SaveImage (1)

    • - CLIPSetLastLayer (1)

    • - easy imageScaleDown (3)

    • - easy imageRemBg (1)

    • - ImageCrop+ (4)

    • - InstantIDFaceAnalysis (1)

    • - InstantIDModelLoader (1)

    • - ApplyInstantID (1)

    • - LayerStyle: Stroke V2 (1)

    • - OllamaVision (1)

    • - PrepareImageAndMaskForInpaint (1)

    • - VHS_DuplicateMasks (1)

    • - VHS_VideoCombine (1)

    • - VHS_LoadVideoPath (4)

    • - BatchPromptSchedule (1)

    • - GrowMaskWithBlur (1)

    • - Image To Mask (2)

    • - PulidEvaClipLoader (1)

    • - PulidModelLoader (1)

    • - PulidInsightFaceLoader (1)

    • - ApplyPulid (1)

    • - ImageRGBA2RGB (1)

    • - StyleAlignedBatchAlign (1)

    • - Seed Everywhere (1)

    • - Masks Add (1)

    Checkpoints (1)

    realvisxlV40_v40LightningBakedvae.safetensors

    LoRAs (2)

    aninewspaper-sdxl.safetensors

    harry_potter_v1.safetensors