Openpose To Region Map (v2)
Description
(This template is used for the Workflow Contest)
What this workflow does
👉
1. This workflow demonstrates how to generate a Region Map from an Openpose Image and provides an example of using it to create an image with a Regional IP Adapter.
- Given an openpose image in which two people are interacting, it automatically generates a separate region map for each person and the background.
- If a photo capturing the interaction of two people is provided instead, it first converts the photo into an Openpose image and then likewise generates a region map automatically.
2. This workflow demonstrates a method of separating the backend and control parts.
- Backend workflow
- The backend workflow, positioned on the right side of the overall workflow, is a sub-workflow implemented for the actual execution.
- Section A is a sub-workflow that generates a Region Map from an Openpose image, and Section B is a workflow that utilizes the Regional IP Adapter to create images for three regions based on the Region Map.
- Frontend workflow
- The frontend workflow consists of user input and output sections.
- The user input section is composed of two switch nodes, which let you configure the input images for a run and determine the mode of operation.
- The output section consists of nodes that allow viewing the results of the workflow, including the Region Map generated by the workflow execution and the images produced by the Regional IP Adapter based on the Region Map.
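Conceptually, Section A boils down to compositing per-person masks into one colored region map. Below is a minimal NumPy sketch of that idea, assuming the boolean person masks already come from a segmentation detector; the function, colors, and toy masks are illustrative stand-ins, not the workflow's actual nodes.

```python
import numpy as np

def build_region_map(person_masks, shape, colors=((255, 0, 0), (0, 255, 0)),
                     background=(0, 0, 255)):
    """Composite per-person boolean masks into a single RGB region map.

    person_masks: list of HxW boolean arrays, one per detected person.
    Later masks are painted over earlier ones, mirroring ordered
    compositing where the two people overlap.
    """
    h, w = shape
    region = np.empty((h, w, 3), dtype=np.uint8)
    region[:] = background                # everything starts as background
    for mask, color in zip(person_masks, colors):
        region[mask] = color              # paint this person's region
    return region

# Toy example: two overlapping square "people" on an 8x8 canvas.
m1 = np.zeros((8, 8), bool); m1[1:5, 1:5] = True
m2 = np.zeros((8, 8), bool); m2[3:7, 3:7] = True
rmap = build_region_map([m1, m2], (8, 8))
```

The overlap is resolved simply by paint order here; the real workflow orders the detected persons with ImpactSEGSOrderedFilter before combining their masks.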
How to use this workflow
👉
1. Set the pose image for "Pose Picture (src)" and "Pose Skeleton (src)".
(The Pose Skeleton (src) node already contains example images.)
2. In the left "Reference Switch", choose either 1 or 2. Selecting 1 will use "Pose Picture (src)," while selecting 2 will use "Pose Skeleton (src)."
3. Set the reference images for Background, Person1, and Person2 for use in the IPAdapter. Each image corresponds to the respective area it will be applied to.
4. The "Section B Switch" on the right determines whether to execute Section B. If set to "pass," it will be executed; if set to "block," it will not be executed.
5. When you run the workflow, the results will appear in the "Show: ..." nodes. In "Show: Region Map," you'll see the Region Map for the Openpose image, and in "Show: Result," the final result image created based on the generated Region Map will be displayed.
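The two switches follow simple selection semantics. A hypothetical Python rendering of how the Reference Switch picks an input and how the Section B Switch gates execution (these functions only model the node behavior; they are not the nodes' actual code):

```python
def impact_switch(select, inputs):
    """Return the 1-indexed input chosen by `select`, like a switch node.
    (Illustrative stand-in; the real node evaluates only the chosen branch.)"""
    return inputs[select - 1]

def section_b_switch(mode, value):
    """'pass' forwards the value so Section B runs; 'block' halts it.
    (Hypothetical model of the pass/block gating behavior.)"""
    return value if mode == "pass" else None

# Reference Switch: 1 -> "Pose Picture (src)", 2 -> "Pose Skeleton (src)"
source = impact_switch(2, ["pose_picture.png", "pose_skeleton.png"])
result = section_b_switch("pass", source)
```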
Tips about this workflow
👉
1. To convert Openpose into a Region Map, a shortcut is used: a small image is generated with only 10 sampling steps. High quality is unnecessary for the region, and the mask and ControlNet images do not require high resolution.
2. SDXL has a limitation in that ControlNet guidance from Openpose is not applied well. Therefore, the intermediate image created while generating the Region is converted to Canny edges and used as the control image.
3. Choose the detection model for extracting a person's silhouette based on its capabilities. In my experiments, the person_yolov8-seg model proved the most effective for this task.
4. Utilizing the "Use Everywhere" node allows for an effective separation of Frontend and Backend.
5. However, to properly verify and manage the backend workflow's operation, its inputs and structure need to be clearly visible. To achieve this, use the "Preview Bridge" node to display the images used as inputs.
6. I configured ImageSender and ImageReceiver to enable checking the result images from both the backend and frontend.
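Tips 2 and the mask handling can be pictured with a small NumPy sketch. Both functions below are illustrative assumptions: a crude gradient edge in place of the real CannyEdgePreprocessor, and a 3x3 cross dilation mimicking what ImpactDilateMask does to grow a region over a silhouette.

```python
import numpy as np

def edge_map(img):
    """Crude gradient-edge stand-in for the CannyEdgePreprocessor node:
    mark any pixel whose left or upper neighbor differs."""
    gx = np.abs(np.diff(img.astype(int), axis=1, prepend=0))
    gy = np.abs(np.diff(img.astype(int), axis=0, prepend=0))
    return ((gx + gy) > 0).astype(np.uint8) * 255

def dilate(mask, iters=1):
    """Binary dilation with a 3x3 cross, mimicking ImpactDilateMask.
    (np.roll wraps at the border, which is fine for masks away from edges.)"""
    m = mask.astype(bool)
    for _ in range(iters):
        m = m | np.roll(m, 1, 0) | np.roll(m, -1, 0) \
              | np.roll(m, 1, 1) | np.roll(m, -1, 1)
    return m.astype(np.uint8)

# Low-res stand-in for the quick 10-step intermediate render.
img = np.zeros((16, 16), dtype=np.uint8)
img[4:12, 4:12] = 255
edges = edge_map(img)        # would be fed to the SDXL ControlNet as "Canny"
grown = dilate(img > 0, 2)   # region grown to comfortably cover the silhouette
```

Dilating the detected mask before compositing keeps the regional IP Adapter influence from clipping the person's outline.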
Make sure you are on the latest version of the Impact Pack (a bug has been fixed)!
🎥 Video demo link (optional)
👉 https://www.youtube.com/watch?v=VWpKZjNGaYU
Change Log:
v2: Changed the structure to use Remote Int and Remote Boolean nodes instead of connecting the Switches in the frontend to Primitive nodes. (Requires Impact Pack V4.44)
Versions (2)
- latest (2 years ago)
- v20231129-170339
Node Details
Primitive Nodes (15)
Anything Everywhere? (7)
Note (3)
PrimitiveNode (1)
Reroute (4)
Custom Nodes (73)
- KepStringLiteral (2)
ComfyUI
- EmptyImage (3)
- PreviewImage (5)
- ImageCompositeMasked (2)
- VAELoader (2)
- EmptyLatentImage (2)
- CLIPTextEncode (4)
- ControlNetApply (2)
- VAEDecode (2)
- CLIPVisionLoader (1)
- CheckpointLoaderSimple (2)
- ControlNetLoader (2)
- LoadImage (4)
- KSampler (1)
- ImageScaleToTotalPixels (1)
- SegsToCombinedMask (2)
- ImpactDilateMask (4)
- SegmDetectorSEGS (1)
- ImpactImageInfo (2)
- UltralyticsDetectorProvider (1)
- FromBasicPipe_v2 (1)
- ToBasicPipe (1)
- EditBasicPipe (1)
- ImpactSEGSOrderedFilter (1)
- ImpactKSamplerBasicPipe (1)
- PreviewBridge (5)
- ImageSender (2)
- ImpactSwitch (1)
- ImageReceiver (2)
- ImpactControlBridge (1)
- ImpactRemoteInt (1)
- ImpactRemoteBoolean (1)
- ToIPAdapterPipe //Inspire (1)
- RegionalIPAdapterColorMask //Inspire (3)
- ApplyRegionalIPAdapters //Inspire (1)
- LoadImage //Inspire (1)
- GlobalSeed //Inspire (1)
- OpenposePreprocessor (1)
- CannyEdgePreprocessor (1)
- IPAdapterModelLoader (1)
Model Details
Checkpoints (2)
SD1.5/majicmixRealistic_v6.safetensors
SDXL/MOHAWK_v18VAEBaked.safetensors
LoRAs (0)