Old Photo Restoration XL

5.0 (1 review)
Discussion
wxwhj · 8 months ago

Liking this first, as a sign of respect.

zhirui li · 8 months ago

Really impressive!

ultraman3696 · 8 months ago

Works really well.

👏1
  • BOPBTL_ScratchMask 🔗
  • BOPBTL_RestoreOldPhotos 🔗
  • BOPBTL_LoadScratchMaskModel 🔗
  • BOPBTL_LoadRestoreOldPhotosModel 🔗


ComfyUI complains that these nodes are missing even though they are installed. What should I do?


Datou · 7 months ago

You may not have dlib installed correctly.
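A quick way to verify this is to check dlib from the same Python that ComfyUI runs on, not your system Python. A minimal sketch, assuming the portable Windows build (the python_embeded path is an example; adjust it to your install):

    REM run from the ComfyUI_windows_portable folder
    python_embeded\python.exe -m pip show dlib
    python_embeded\python.exe -c "import dlib; print(dlib.__version__)"
    REM if the import fails here, ComfyUI's Python doesn't have dlib even if your system Python does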

Ahmad Zaini · 7 months ago

I have the same problem. I installed Microsoft Visual Studio, CMake, and dlib, added them to PATH, and verified from cmd that they installed correctly, but the BOPBTL nodes are still red.

I'm waiting for someone to make a YouTube tutorial :)

许启腾 · 6 months ago

Me too, but I got it installed with the second method mentioned on GitHub: you can find the wheel files here and try installing from them: https://github.com/eddiehe99/dlib-whl
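For reference, a minimal sketch of that wheel-based install, assuming Python 3.10 on Windows and the portable ComfyUI layout (file name taken from that repo; pick the wheel matching your Python version, e.g. cp310 = 3.10, cp311 = 3.11):

    REM download dlib-19.24.2-cp310-cp310-win_amd64.whl from https://github.com/eddiehe99/dlib-whl first
    python_embeded\python.exe -m pip install dlib-19.24.2-cp310-cp310-win_amd64.whl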

Shawn Q · 6 months ago

Same problem. I clearly installed BOPBTL, both through the editor and directly with git, and either way the workflow still reports missing nodes on startup. Why isn't it recognized, or why won't it install?

Shawn Q · 6 months ago

I found it may be because some versions of ComfyUI add a "comfyui-" prefix to the path when importing nodes, which makes the node import paths inconsistent.
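If you want to check for this, a rough sketch (the folder names below are only illustrative; compare what is actually in custom_nodes against the name the import error mentions):

    cd ComfyUI\custom_nodes
    dir
    REM if the folder name doesn't match what ComfyUI expects, rename it, e.g.:
    ren comfyui-bringing-old-photos-back-to-life ComfyUI-Bringing-Old-Photos-Back-to-Life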

Oni · 6 months ago

Same problem. Can you specify which file?

Does anyone have a video tutorial? I have the same problem


Don zhu · 7 months ago

For the ControlNet, do I download diffusion_pytorch_model.safetensors and then rename it?

I couldn't find the corresponding safetensors in the link you provided.



Datou · 7 months ago

Just rename it manually yourself.

Don zhu · 7 months ago

Thanks very much.

Jack · 7 months ago

May I ask which models Ollama is recommended to run here?

Datou · 7 months ago

phi3 llava

yongguang huang · 5 months ago

How do I install the Ollama model? I downloaded it and put it in that node's folder, but nothing happens.

Tony mark · 4 months ago

Ollama is not a model; think of it as the runtime that loads large models. https://ollama.com/library?q=li

Go look up a tutorial; once it's set up you can use it in ComfyUI.
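For example, after installing the Ollama client you pull and test models from a terminal, and the Ollama nodes in ComfyUI then talk to the locally running Ollama service. A minimal sketch using the models mentioned above:

    ollama pull llava
    ollama pull phi3
    ollama run llava      REM quick interactive test, type /bye to exit
    ollama list           REM these model names are typically what you enter in the ComfyUI Ollama nodes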

sheeran · 7 months ago

Which parameters should I adjust to make the colors more vivid?

Datou · 7 months ago

Try adding words like vivid or hdr to the prompt.

sheeran · 7 months ago

Thanks a lot. Also, when my Ollama setup uses phi3 llava for all three models the results are not great, but when I swap the last two for llama3 the results are actually quite good. May I ask which combination of Ollama models you used for the example restorations above?

peter wang · 2 months ago

I found that the auto-detected prompt can't be modified. Is there a way to do that?


Datou · 2 months ago

Add a text node, copy the automatic prompt into it, edit it by hand, and connect it to the next step.

石增欢 · 7 months ago

Do you have any private community group?

Datou · 7 months ago

A Zhishi Xingqiu (Knowledge Planet) group without much substance: https://t.zsxq.com/vYyYJ

谢军 · 7 months ago

Will installing this pollute my environment? I tried installing the plugin and ended up with a pile of conflicts.

yong peng · 6 months ago

Mine too, all kinds of conflicts, and I don't know how to resolve them.

man pan · 7 months ago

Why won't my custom nodes install?

cun4308 · 7 months ago

The nodes are installed, but they keep failing to load.

(Edited)
Ethan Cassmeyer · 7 months ago

Where can I get the model weight files that start with bopbtl\ and end in .pt or .pth, e.g. bopbtl\mapping_Patch_Attention\latest_net_mapping_net.pth and bopbtl\detection\FT_Epoch_latest.pt?

zijian wang · 7 months ago

They're in that GitHub repository, i.e. the Bringing Old Photos Back to Life plugin under custom nodes.

Joe peng · 7 months ago
  • ReActor Node for ComfyUI 🔗
  • ComfyUI Bringing Old Photos Back to Life 🔗

These two missing nodes won't install.

bo · 7 months ago

I have the same problem. Have you solved it?

chi wang · 3 months ago

me too

王豪贵 · 7 months ago

I can't find the GitHub repositories for these three nodes any more:

InsightFaceLoader

Multiply

IPAdapterApplyFaceID

ocean yuri · 7 months ago

For the Load Restore Old Photos Model node, how can I get mapping_net to load from a folder path like yours? In my setup I can only pick from the files in my main checkpoint folder and can't enter a folder path. Thanks.


Datou · 7 months ago

I created subdirectories under the checkpoint directory.
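A sketch of that layout, assuming the default ComfyUI models folder and the file names mentioned earlier in this thread (the exact subfolder names are up to you):

    mkdir ComfyUI\models\checkpoints\bopbtl\detection
    mkdir ComfyUI\models\checkpoints\bopbtl\mapping_Patch_Attention
    REM then place the weights there, e.g.:
    REM   ComfyUI\models\checkpoints\bopbtl\detection\FT_Epoch_latest.pt
    REM   ComfyUI\models\checkpoints\bopbtl\mapping_Patch_Attention\latest_net_mapping_net.pth
    REM so the BOPBTL loader nodes can pick them by their bopbtl\... relative path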

cun4308 · 7 months ago

How do I fix this?

The above exception was the direct cause of the following exception:


Traceback (most recent call last):

 File "I:\ComfyUI-aki-v1.3\execution.py", line 152, in recursive_execute

   output_data, output_ui = get_output_data(obj, input_data_all)

 File "I:\ComfyUI-aki-v1.3\execution.py", line 82, in get_output_data

   return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)

 File "I:\ComfyUI-aki-v1.3\execution.py", line 75, in map_node_over_list

   results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))

 File "I:\ComfyUI-aki-v1.3\custom_nodes\ComfyUi-Ollama-YN\CompfyuiOllama.py", line 81, in ollama_vision

   response = client.generate(model=model, prompt=query, keep_alive=keep_alive, options=options, images=images_b64)

 File "I:\ComfyUI-aki-v1.3\python\lib\site-packages\ollama\_client.py", line 162, in generate

   return self._request_stream(

 File "I:\ComfyUI-aki-v1.3\python\lib\site-packages\ollama\_client.py", line 98, in _request_stream

   return self._stream(*args, **kwargs) if stream else self._request(*args, **kwargs).json()

 File "I:\ComfyUI-aki-v1.3\python\lib\site-packages\ollama\_client.py", line 69, in _request

   response = self._client.request(method, url, **kwargs)

 File "I:\ComfyUI-aki-v1.3\python\lib\site-packages\httpx\_client.py", line 827, in request

   return self.send(request, auth=auth, follow_redirects=follow_redirects)

 File "<enhanced_experience vendors.sentry_sdk.integrations.httpx>", line 84, in send

 File "I:\ComfyUI-aki-v1.3\python\lib\site-packages\httpx\_client.py", line 914, in send

   response = self._send_handling_auth(

 File "I:\ComfyUI-aki-v1.3\python\lib\site-packages\httpx\_client.py", line 942, in _send_handling_auth

   response = self._send_handling_redirects(

 File "I:\ComfyUI-aki-v1.3\python\lib\site-packages\httpx\_client.py", line 979, in _send_handling_redirects

   response = self._send_single_request(request)

 File "I:\ComfyUI-aki-v1.3\python\lib\site-packages\httpx\_client.py", line 1015, in _send_single_request

   response = transport.handle_request(request)

 File "I:\ComfyUI-aki-v1.3\python\lib\site-packages\httpx\_transports\default.py", line 232, in handle_request

   with map_httpcore_exceptions():

 File "I:\ComfyUI-aki-v1.3\python\lib\contextlib.py", line 153, in __exit__

   self.gen.throw(typ, value, traceback)

 File "I:\ComfyUI-aki-v1.3\python\lib\site-packages\httpx\_transports\default.py", line 86, in map_httpcore_exceptions

   raise mapped_exc(message) from exc httpx.ConnectError: [WinError 10061] 由于目标计算机积极拒绝,无法连接。 (No connection could be made because the target machine actively refused it.)

陈晓峰 · 7 months ago

Could this be opened up for online use?

Max Stepaniuk · 7 months ago

How does the Ollama Vision node work? I have installed the Ollama app, but I have no clue what I should do to make the node work.

Ahmad Zaini · 7 months ago

When I followed the GitHub instructions to install ComfyUI-BOPBTL, there was one I didn't understand: "Set device_ids as a comma separated list of device ids (i.e. 0 or 1,2). Use -1 for cpu."

Does anyone know how to set device_ids?

sichao hou · 7 months ago

What models are mapping_net in Load Restore Old Photos Model and scratch_model in Load Scratch Mask Model supposed to load?

Rick Zhang · 7 months ago

After a day of effort the downloaded workflow finally runs end to end, but for some reason the output images are actually lower quality. Could you advise?

Datou · 7 months ago

I can't tell how it's worse without seeing your image.

abnormal result...

https://iimg.su/i/IMj9e

https://iimg.su/i/ugsWQ

why?


Datou · 7 months ago

Bringing-Old-Photos-Back-to-Life is not working properly. It may be that the model is not set correctly.

xavier jam · 7 months ago

Using a locally deployed multimodal llama to generate prompts and constrain the output quality is a clever move. Do you have a recommended base model for Asian faces? Everything I generate with this one comes out looking Western.

Adam Davis · 7 months ago

Does the ComfyUI-Bringing-Old-Photos-Back-to-Life plugin require Python 3.11? What Python version are you using?

Tangerine · 6 months ago

Python 3.10, dlib 19.24.2, installed from the prebuilt wheel: https://github.com/eddiehe99/dlib-whl/blob/main/dlib-19.24.2-cp310-cp310-win_amd64.whl

Weipeng Huang · 7 months ago

Error occurred when executing DepthAnythingV2Preprocessor: No operator found for `memory_efficient_attention_forward` with inputs: query : shape=(1, 1925, 12, 64) (torch.float32) key : shape=(1, 1925, 12, 64) (torch.float32) value : shape=(1, 1925, 12, 64) (torch.float32) attn_bias :
p : 0.0
`cutlassF` is not supported because:
xFormers wasn't build with CUDA support
Operator wasn't built - see `python -m xformers.info` for more info
`flshattF` is not supported because:
xFormers wasn't build with CUDA support
dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
Operator wasn't built - see `python -m xformers.info` for more info
`tritonflashattF` is not supported because:
xFormers wasn't build with CUDA support
dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
Operator wasn't built - see `python -m xformers.info` for more info
triton is not available
requires A100 GPU
`smallkF` is not supported because:
xFormers wasn't build with CUDA support
max(query.shape[-1] != value.shape[-1]) > 32
Operator wasn't built - see `python -m xformers.info` for more info
unsupported embed per head: 64

File "D:\comfyui\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\comfyui\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\comfyui\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\comfyui\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\depth_anything_v2.py", line 20, in execute
out = common_annotator_call(model, image, resolution=resolution, max_depth=1)
File "D:\comfyui\ComfyUI\custom_nodes\comfyui_controlnet_aux\utils.py", line 85, in common_annotator_call
np_result = model(np_image, output_type="np", detect_resolution=detect_resolution, **kwargs)
File "D:\comfyui\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\depth_anything_v2\__init__.py", line 44, in __call__
depth = self.model.infer_image(cv2.cvtColor(input_image, cv2.COLOR_RGB2BGR), input_size=518, max_depth=max_depth)
File "C:\tools\Miniconda3\envs\comflowy\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "D:\comfyui\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\depth_anything_v2\dpt.py", line 189, in infer_image
depth = self.forward(image, max_depth)
File "D:\comfyui\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\depth_anything_v2\dpt.py", line 179, in forward
features = self.pretrained.get_intermediate_layers(x, self.intermediate_layer_idx[self.encoder], return_class_token=True)
File "D:\comfyui\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\depth_anything_v2\dinov2.py", line 308, in get_intermediate_layers
outputs = self._get_intermediate_layers_not_chunked(x, n)
File "D:\comfyui\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\depth_anything_v2\dinov2.py", line 277, in _get_intermediate_layers_not_chunked
x = blk(x)
File "C:\tools\Miniconda3\envs\comflowy\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\tools\Miniconda3\envs\comflowy\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "D:\comfyui\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\depth_anything_v2\dinov2_layers\block.py", line 247, in forward
return super().forward(x_or_x_list)
File "D:\comfyui\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\depth_anything_v2\dinov2_layers\block.py", line 105, in forward
x = x + attn_residual_func(x)
File "D:\comfyui\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\depth_anything_v2\dinov2_layers\block.py", line 84, in attn_residual_func
return self.ls1(self.attn(self.norm1(x)))
File "C:\tools\Miniconda3\envs\comflowy\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\tools\Miniconda3\envs\comflowy\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "D:\comfyui\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\depth_anything_v2\dinov2_layers\attention.py", line 76, in forward
x = memory_efficient_attention(q, k, v, attn_bias=attn_bias)
File "C:\tools\Miniconda3\envs\comflowy\lib\site-packages\xformers\ops\fmha\__init__.py", line 192, in memory_efficient_attention
.. code-block:: python
File "C:\tools\Miniconda3\envs\comflowy\lib\site-packages\xformers\ops\fmha\__init__.py", line 290, in _memory_efficient_attention
def memory_efficient_attention_forward(
File "C:\tools\Miniconda3\envs\comflowy\lib\site-packages\xformers\ops\fmha\__init__.py", line 306, in _memory_efficient_attention_forward
query=query,
File "C:\tools\Miniconda3\envs\comflowy\lib\site-packages\xformers\ops\fmha\dispatch.py", line 104, in _dispatch_fw
priority_list_ops.remove(flash.FwOp)
File "C:\tools\Miniconda3\envs\comflowy\lib\site-packages\xformers\ops\fmha\dispatch.py", line 79, in _run_priority_list
if not mqa_or_gqa:


How do I fix this? I'm on Windows and have been struggling with it all day.

xiaokun wen · 7 months ago

My direct download is also missing three nodes. Could you update it again?

Tangerine · 6 months ago

I'm running dlib 19.24.2 (installed from the wheel) and it's fine; 19.24.1 always failed to install and I don't know why.

I notice there are two Ollama models, but I can only run one at a time. How can I run two at once, or switch to the second model partway through generation?

张伟 · 6 months ago

You can try installing the ComfyUI old photo restoration node, which uses the same dependencies and files, so you can use that instead.

(Edited)
li keke · 6 months ago

Awesome!

yangzi · 6 months ago

My Ollama Vision node says "must provide a model". How do I fix it?

Mydcww · 6 months ago

Open cmd and run "ollama run <model>"; you can find models on the Ollama website.

yangzi · 6 months ago

OK, thanks.

nxl shirley · 6 months ago

What is this problem?

Mapping: You are using multi-scale patch attention, conv combine + mask input

!!! Exception during processing !!! unpickling stack underflow

Traceback (most recent call last):

 File "F:\shirley\Program\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Bringing-Old-Photos-Back-to-Life-master\Global\models\base_model.py", line 98, in load_network_from_path

   network.load_state_dict(torch.load(save_path))

                           ^^^^^^^^^^^^^^^^^^^^^

 File "F:\shirley\Program\ComfyUI_windows_portable\python_embeded\Lib\site-packages\ultralytics\utils\patches.py", line 86, in torch_load

   return _torch_load(*args, **kwargs)

          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 File "F:\shirley\Program\ComfyUI_windows_portable\python_embeded\Lib\site-packages\ultralytics\utils\patches.py", line 86, in torch_load

   return _torch_load(*args, **kwargs)

          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 File "F:\shirley\Program\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\serialization.py", line 1040, in load

   return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)

          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 File "F:\shirley\Program\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\serialization.py", line 1262, in _legacy_load

   magic_number = pickle_module.load(f, **pickle_load_args)

                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

_pickle.UnpicklingError: unpickling stack underflow


During handling of the above exception, another exception occurred:


Traceback (most recent call last):

 File "F:\shirley\Program\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute

   output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

                                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 File "F:\shirley\Program\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data

   return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 File "F:\shirley\Program\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list

   process_inputs(input_dict, i)

 File "F:\shirley\Program\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs

   results.append(getattr(obj, func)(**inputs))

                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 File "F:\shirley\Program\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Bringing-Old-Photos-Back-to-Life-master\nodes.py", line 222, in run

   return BOPBTL_LoadRestoreOldPhotosModel.load_models(

          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 File "F:\shirley\Program\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Bringing-Old-Photos-Back-to-Life-master\nodes.py", line 209, in load_models

   model = Restorer.load_model(opt)

           ^^^^^^^^^^^^^^^^^^^^^^^^

 File "F:\shirley\Program\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Bringing-Old-Photos-Back-to-Life-master\Global\test.py", line 110, in load_model

   model.initialize(opt)

 File "F:\shirley\Program\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Bringing-Old-Photos-Back-to-Life-master\Global\models\mapping_model.py", line 141, in initialize

   self.load_network(self.netG_A, "G", opt.use_vae_which_epoch, opt.load_pretrainA, test_path=opt.test_vae_a)

 File "F:\shirley\Program\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Bringing-Old-Photos-Back-to-Life-master\Global\models\base_model.py", line 76, in load_network

   BaseModel.load_network_from_path(network, test_path)

 File "F:\shirley\Program\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Bringing-Old-Photos-Back-to-Life-master\Global\models\base_model.py", line 101, in load_network_from_path

   pretrained_dict = torch.load(save_path)

                     ^^^^^^^^^^^^^^^^^^^^^

 File "F:\shirley\Program\ComfyUI_windows_portable\python_embeded\Lib\site-packages\ultralytics\utils\patches.py", line 86, in torch_load

   return _torch_load(*args, **kwargs)

          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 File "F:\shirley\Program\ComfyUI_windows_portable\python_embeded\Lib\site-packages\ultralytics\utils\patches.py", line 86, in torch_load

   return _torch_load(*args, **kwargs)

          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 File "F:\shirley\Program\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\serialization.py", line 1040, in load

   return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)

          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 File "F:\shirley\Program\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\serialization.py", line 1262, in _legacy_load

   magic_number = pickle_module.load(f, **pickle_load_args)

                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

_pickle.UnpicklingError: unpickling stack underflow

Snowfrost Wang · 6 months ago

I got it running, but the output photos stay close to the original colors and still look vintage. Can the colors be made more vivid? I added prompt words but they had no effect.

Datou · 6 months ago

Try switching to a different model.

51pwn · 5 months ago

When loading the graph, the following node types were not found:

  • DisplayText_Zho
  • ConcatText_Zho
  • easy cleanGpuUsed
  • ReActorFaceSwap
  • Automatic CFG
  • BOPBTL_ScratchMask
  • BOPBTL_RestoreOldPhotos
  • BOPBTL_LoadScratchMaskModel
  • ComfyUI_Image_Round__ImageRoundAdvanced
  • BOPBTL_LoadRestoreOldPhotosModel
  • ImageScaleToMegapixels
  • ComfyUI_Image_Round__ImageCropAdvanced
  • OllamaVision
  • OllamaGenerate
  • DepthAnythingV2Preprocessor

Nodes that have failed to load will show as red on the graph.


27149 · 5 months ago

Does BOPBTL use that much VRAM? Even 512px images run out of memory for me, on an A100 with 40 GB. How can 512 still blow up?

Datou · 5 months ago

No problem for me with 24 GB of VRAM.

李光远 · 5 months ago

Where can I get this model, the scratch model?

bopbtl\detection\FT_Epoch_latest.pt

yongguang huang · 5 months ago

How do I install the Ollama model? Thanks.

yongguang huang · 5 months ago

I installed the Ollama client and downloaded the model, but the Ollama model input box in ComfyUI won't accept any input. What's going on?

Tony mark · 4 months ago

I have 32 GB of RAM and a 4090, and it runs straight out of memory.

wenjingtiandi · 4 months ago

Hi, I use JoyCaption2 for prompt interrogation. For a black-and-white photo it produces "a black-and-white photo", and feeding that prompt straight into the workflow keeps the output black and white. Is there a way to automatically replace that phrase with something like "a color photo" inside the workflow?

Datou · 4 months ago

Add a language-model node and have it rewrite the prompt into a color-photo prompt.

wenjingtiandi · 4 months ago

OK, thanks. After adding a textjoin node the prompt can be rewritten.

mikefeng · 4 months ago

Offering the above workflow in a cloud environment, WeChat: abcxyz20231986

Hello. For some damaged old photos, the workflow can't automatically fill in and regenerate the damaged areas after it runs. What's the reason? Is it my choice of model?

Datou · 3 months ago

You'd have to look at exactly which node it gets stuck on and what the error message is.

There's no error. The photo has damaged areas, but after generation it doesn't recognize them; instead of filling in the damaged part, it generates something else there.

ni yan · 2 months ago

1. Opening the workflow reports:

Missing Node Types

When loading the graph, the following node types were not found

BOPBTL_ScratchMask

BOPBTL_RestoreOldPhotos

BOPBTL_LoadScratchMaskModel

BOPBTL_LoadRestoreOldPhotosModel

2. After installing the plugin it reports: (IMPORT FAILED) ComfyUI Bringing Old Photos Back to Life

3. Running the workflow reports: Cannot execute because a node is missing the class_type property.: Node ID '#85'

How can I solve this? Thank you.


peter wang · 2 months ago

Hi, do I need to set any particular keywords? After the run my photo is still black and white, not colorized, just sharper than the original.

Mexico · 5 days ago

Why does it sometimes give a fully black picture as output? I can't figure out the problem or a solution, so I need help, anyone!


Reviews

Tony mark · 4 months ago

Versions (4)

  • - latest (4 months ago)

  • - v20240717-030742

  • - v20240628-152158

  • - v20240628-070419

Primitive Nodes (3)

DepthAnythingV2Preprocessor (1)

DownloadAndLoadFlorence2Model (1)

Florence2Run (1)

Custom Nodes (29)

ComfyUI

  • - ControlNetLoader (3)

  • - CLIPTextEncode (2)

  • - PreviewImage (3)

  • - VAEEncode (1)

  • - ControlNetApplyAdvanced (3)

  • - VAEDecode (1)

  • - CheckpointLoaderSimple (1)

  • - KSampler (1)

  • - SaveImage (1)

  • - LoadImage (1)

  • - BOPBTL_ScratchMask (1)

  • - BOPBTL_RestoreOldPhotos (1)

  • - BOPBTL_LoadScratchMaskModel (1)

  • - BOPBTL_LoadRestoreOldPhotosModel (1)

  • - easy cleanGpuUsed (1)

  • - ImageScaleToMegapixels (1)

  • - Automatic CFG (1)

  • - DisplayText_Zho (1)

  • - ComfyUI_Image_Round__ImageRoundAdvanced (1)

  • - ComfyUI_Image_Round__ImageCropAdvanced (1)

  • - ReActorFaceSwap (1)

Checkpoints (1)

sdxl\juggernautXL_v9Rdphoto2Lightning.safetensors

LoRAs (0)