IPAdapter + GroundingDino (Segments) to Change a Character's Clothes

Description

A Workflow for Segmented Style Transfers


You're likely familiar with the tedious process of changing outfits using inpainting and ControlNets. However, with the right combination of nodes, you can achieve remarkably accurate and hassle-free outfit changes with minimal post-processing.

This workflow leverages the IPAdapter, Grounding Dino, and Segment Anything models to transfer styles and segment objects with precision, simplifying the process of changing outfits so you can focus on creative experimentation.
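Under the hood, the clothing mask comes from text-prompted, zero-shot detection followed by segmentation. The workflow does this through the Grounding Dino and Segment Anything custom nodes; purely as an illustration of the idea, here is a rough sketch using the Hugging Face transformers ports of the same two models (the model IDs, prompt, and thresholds are assumptions for this sketch, not the exact checkpoints the workflow loads):

```python
import torch
from PIL import Image
from transformers import (AutoModelForZeroShotObjectDetection, AutoProcessor,
                          SamModel, SamProcessor)

image = Image.open("character.png").convert("RGB")

# 1) Grounding DINO: locate the region described by a text prompt (zero-shot detection).
dino_proc = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-base")
dino = AutoModelForZeroShotObjectDetection.from_pretrained("IDEA-Research/grounding-dino-base")
dino_in = dino_proc(images=image, text="a shirt.", return_tensors="pt")
with torch.no_grad():
    dino_out = dino(**dino_in)
boxes = dino_proc.post_process_grounded_object_detection(
    dino_out, dino_in.input_ids, box_threshold=0.3, text_threshold=0.25,
    target_sizes=[image.size[::-1]])[0]["boxes"]
assert len(boxes) > 0, "no region matched the text prompt"

# 2) SAM: turn the detected box into a pixel-accurate mask.
sam_proc = SamProcessor.from_pretrained("facebook/sam-vit-base")
sam = SamModel.from_pretrained("facebook/sam-vit-base")
sam_in = sam_proc(image, input_boxes=[[boxes[0].tolist()]], return_tensors="pt")
with torch.no_grad():
    sam_out = sam(**sam_in)
mask = sam_proc.image_processor.post_process_masks(
    sam_out.pred_masks, sam_in["original_sizes"], sam_in["reshaped_input_sizes"])[0][0, 0]
# `mask` is the boolean clothing mask that the inpainting stage repaints.
```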


Workflow Overview


The workflow is divided into three main groups: Basic Workflow, IPAdapter, and Segmentation.

  1. Basic Workflow: This section sets up the foundation for the entire process, using a good SDXL inpainting checkpoint (such as RealVisXL or ICBINP XL).
  2. IPAdapter: This node transfers style from a reference image to the target image. It requires the CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors vision encoder and the ip-adapter-plus_sdxl_vit-h.safetensors adapter model (a rough code sketch of how it pairs with the inpainting checkpoint follows this list).
  3. Segmentation: This section uses the Grounding Dino model to segment specific objects within an image. You provide a textual prompt naming the object you want to segment, such as a shirt or glasses.
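To make the interplay between the Basic Workflow and IPAdapter groups concrete, here is a rough, diffusers-based equivalent of the inpaint-plus-IPAdapter stage. It is only a conceptual sketch, assuming a clothing mask (for example, the one produced above) already exists on disk; the checkpoint ID, file names, and adapter scale are placeholders, and the actual workflow wires the same models together as ComfyUI nodes instead:

```python
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image
from transformers import CLIPVisionModelWithProjection

# The plus_sdxl_vit-h adapter expects the ViT-H CLIP vision encoder
# (the same encoder family the workflow's CLIPVisionLoader points at).
image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter", subfolder="models/image_encoder", torch_dtype=torch.float16)

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",  # stand-in for the RealVisXL inpaint checkpoint
    image_encoder=image_encoder,
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models",
                     weight_name="ip-adapter-plus_sdxl_vit-h.safetensors")
pipe.set_ip_adapter_scale(0.7)  # placeholder; tune like the IPAdapterAdvanced weight

character = load_image("character.png")
clothes_mask = load_image("shirt_mask.png")       # white = region to repaint (from the segmentation step)
outfit_ref = load_image("outfit_reference.png")   # image whose outfit/style is transferred

result = pipe(
    prompt="a person wearing a stylish shirt",
    image=character,
    mask_image=clothes_mask,
    ip_adapter_image=outfit_ref,
).images[0]
result.save("outfit_swapped.png")
```

Because the mask limits the repainted region, the rest of the character is left untouched, which is why this approach needs so little post-processing.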


How to Use This Workflow


This workflow is ideal for creating virtual try-on experiences, batch processing images, or simply experimenting with different styles and objects. With its zero-shot object detection capabilities, you can segment and retouch images with ease.

To get started, set up the nodes as described above and load your target image and style reference. You can then adjust the settings and parameters to achieve the desired outcome.
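For batch processing, one option is to export the workflow in API format ("Save (API Format)" in ComfyUI) and queue it repeatedly against a running ComfyUI server's /prompt endpoint. A minimal sketch, assuming a local server on the default port; the node IDs and file names are hypothetical and should be replaced with the ones from your own export:

```python
import copy
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"  # default local ComfyUI server

with open("clothes_swap_workflow_api.json") as f:  # exported via "Save (API Format)"
    base_workflow = json.load(f)

# Hypothetical node IDs -- look up the real ones in your exported JSON.
TARGET_IMAGE_NODE = "10"  # LoadImage node for the character photo
STYLE_IMAGE_NODE = "12"   # LoadImage node for the outfit reference

def queue_prompt(workflow):
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(COMFY_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Apply the same reference outfit to several character images.
for image_name in ["character_01.png", "character_02.png", "character_03.png"]:
    wf = copy.deepcopy(base_workflow)
    wf[TARGET_IMAGE_NODE]["inputs"]["image"] = image_name  # files must be in ComfyUI's input folder
    wf[STYLE_IMAGE_NODE]["inputs"]["image"] = "outfit_reference.png"
    print(queue_prompt(wf))
```

Each queued prompt runs the full segment-then-inpaint pass, so the same reference outfit is applied to every input image.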


Want to See This Workflow in Action?


If you're interested in learning more about this workflow and seeing it in action, check out the video tutorial and downloadable workflow on Prompting Pixels.

With this workflow, you can unlock the full potential of ComfyUI and take your AI art skills to the next level. Experiment with different styles, objects, and settings to achieve stunning results.


Primitive Nodes (1)

Note (1)

Custom Nodes (18)

ComfyUI

  • LoadImage (2)

  • PreviewImage (2)

  • MaskToImage (1)

  • FeatherMask (1)

  • VAEEncodeForInpaint (1)

  • CheckpointLoaderSimple (1)

  • KSampler (1)

  • VAEDecode (1)

  • CLIPTextEncode (2)

  • CLIPVisionLoader (1)

ComfyUI_IPAdapter_plus

  • IPAdapterUnifiedLoader (1)

  • IPAdapterAdvanced (1)

segment anything

  • SAMModelLoader (segment anything) (1)

  • GroundingDinoModelLoader (segment anything) (1)

  • GroundingDinoSAMSegment (segment anything) (1)

Checkpoints (1)

realvisxlV40_v30InpaintBakedvae.safetensors

LoRAs (0)