
comfyui_fictiverse_workflows's People

Contributors

fictiverse · if-ai


comfyui_fictiverse_workflows's Issues

Nice work! What is this error?

Hi, really nice work you're doing here. However, I'm facing a few issues.
Which CLIP model are you using? The JSON file just says model.safetensors.
I also get the following error when running the workflow with the CLIP vision model clip_vision_g.safetensors:
RuntimeError: Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]).
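For what it's worth, 1280 and 1664 are the feature widths of the CLIP ViT-H/14 and ViT-bigG/14 image encoders, so a checkpoint whose proj_in.weight is 768×1280 appears to have been trained against a ViT-H encoder, while clip_vision_g.safetensors is a bigG model; pairing the model with the matching ViT-H CLIP vision checkpoint is the first thing to try. A small diagnostic sketch (function name hypothetical, not the actual node source) that reports per-key shape mismatches before load_state_dict raises:

```python
import torch
import torch.nn as nn

# Compare checkpoint tensor shapes against the instantiated model before
# calling load_state_dict, so mismatches are reported per key instead of
# as one opaque RuntimeError.
def report_shape_mismatches(model: nn.Module, checkpoint: dict) -> list:
    mismatches = []
    model_state = model.state_dict()
    for key, tensor in checkpoint.items():
        if key in model_state and model_state[key].shape != tensor.shape:
            mismatches.append(
                (key, tuple(tensor.shape), tuple(model_state[key].shape))
            )
    return mismatches

# Toy reproduction of the error above: a projection built for 1664-dim
# (ViT-bigG) features receiving weights trained on 1280-dim (ViT-H) features.
model = nn.Linear(1664, 768)                  # proj_in in the current model
ckpt = {"weight": torch.zeros(768, 1280)}     # proj_in.weight in the checkpoint
print(report_shape_mismatches(model, ckpt))
# one mismatch: ('weight', (768, 1280), (768, 1664))
```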

Error with BLIP module?

The workflows look amazing; I can't wait to try them.
I was trying to use Fictiverse_Magnifake.json.
Any idea why I'm getting this error from the BLIP step? Everything before it seems to run perfectly.

Error occurred when executing BLIP Analyze Image:

The size of tensor a (6) must match the size of tensor b (36) at non-singleton dimension 0

File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\WAS_Node_Suite.py", line 10866, in blip_caption_image
caption = model.generate(tensor, sample=False, num_beams=6, max_length=74, min_length=20)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\modules\BLIP\blip_module.py", line 162, in generate
outputs = self.text_decoder.generate(input_ids=input_ids,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\generation\utils.py", line 1752, in generate
return self.beam_search(
^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\generation\utils.py", line 3091, in beam_search
outputs = self(
^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\modules\BLIP\blip_med.py", line 886, in forward
outputs = self.bert(
^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\modules\BLIP\blip_med.py", line 781, in forward
encoder_outputs = self.encoder(
^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\modules\BLIP\blip_med.py", line 445, in forward
layer_outputs = layer_module(
^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\modules\BLIP\blip_med.py", line 361, in forward
cross_attention_outputs = self.crossattention(
^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\modules\BLIP\blip_med.py", line 277, in forward
self_outputs = self.self(
^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\modules\BLIP\blip_med.py", line 178, in forward
attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

ERROR:root:Traceback (most recent call last):

Hi, thanks for your scripts, they're really great. I have a problem with Fictiverse_Magnifake.json:
when running the workflow it stops at CLIP processing, while CLIP is analyzing the image:

ERROR:root:Traceback (most recent call last):
File "D:\SD\extensions\sd-webui-comfyui\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\SD\extensions\sd-webui-comfyui\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\SD\extensions\sd-webui-comfyui\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\SD\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\was-node-suite-comfyui\WAS_Node_Suite.py", line 10866, in blip_caption_image
caption = model.generate(tensor, sample=False, num_beams=6, max_length=74, min_length=20)
File "D:\SD\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\was-node-suite-comfyui\modules\BLIP\blip_module.py", line 162, in generate
outputs = self.text_decoder.generate(input_ids=input_ids,
File "D:\SD\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\SD\venv\lib\site-packages\transformers\generation\utils.py", line 1611, in generate
return self.beam_search(
File "D:\SD\venv\lib\site-packages\transformers\generation\utils.py", line 2909, in beam_search
outputs = self(
File "D:\SD\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\SD\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\was-node-suite-comfyui\modules\BLIP\blip_med.py", line 886, in forward
outputs = self.bert(
File "D:\SD\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\SD\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\was-node-suite-comfyui\modules\BLIP\blip_med.py", line 781, in forward
encoder_outputs = self.encoder(
File "D:\SD\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\SD\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\was-node-suite-comfyui\modules\BLIP\blip_med.py", line 445, in forward
layer_outputs = layer_module(
File "D:\SD\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\SD\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\was-node-suite-comfyui\modules\BLIP\blip_med.py", line 361, in forward
cross_attention_outputs = self.crossattention(
File "D:\SD\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\SD\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\was-node-suite-comfyui\modules\BLIP\blip_med.py", line 277, in forward
self_outputs = self.self(
File "D:\SD\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\SD\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\was-node-suite-comfyui\modules\BLIP\blip_med.py", line 178, in forward
attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
RuntimeError: The size of tensor a (6) must match the size of tensor b (36) at non-singleton dimension 0

Has anyone had the same problem and knows how to solve it?
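A hint at the cause: 36 = 6 × 6, i.e. batch_size × num_beams (the node calls generate with num_beams=6). During beam search the decoder's batch dimension is expanded to batch_size * num_beams, and the image embeddings feeding cross-attention must be tiled the same way, or the attention matmul's batch dimensions disagree exactly as in the error. A shape-bookkeeping sketch (sizes illustrative, not the WAS/BLIP source):

```python
import torch

# Shape bookkeeping only: why beam search needs the encoder outputs tiled.
batch_size, num_beams = 6, 6
image_embeds = torch.randn(batch_size, 197, 768)   # (batch, patches, dim)

# Beam search expands the decoder's batch to batch_size * num_beams = 36.
# If the image embeddings stay at batch 6, cross-attention fails with
# "The size of tensor a (6) must match the size of tensor b (36)".
expanded = image_embeds.repeat_interleave(num_beams, dim=0)
print(tuple(expanded.shape))   # (36, 197, 768)
```

If the node is being fed a batch of images (e.g. video frames), captioning one image at a time is one way to sidestep this class of mismatch; version skew between the bundled BLIP code and newer transformers releases is another frequent culprit, so it may be worth checking the WAS Node Suite issue tracker for a known-good transformers version.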

Error: face swap

I can't use the workflow.
Error:
[WinError 1314] A required privilege is not held by the client: 'F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyLiterals\js' -> 'F:\AI\ComfyUI_windows_portable\ComfyUI\web\extensions\ComfyLiterals'
Failed to create symlink to F:\AI\ComfyUI_windows_portable\ComfyUI\web\extensions\ComfyLiterals. Please copy the folder manually.
Source: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyLiterals\js
Target: F:\AI\ComfyUI_windows_portable\ComfyUI\web\extensions\ComfyLiterals

ComfyUI-FaceSwap: Check dependencies

ComfyUI-FaceSwap: Check basic models

Traceback (most recent call last):
File "F:\AI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1735, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-FaceSwap\__init__.py", line 4, in <module>
from .FaceSwapNode import FaceSwapNode
File "F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-FaceSwap\FaceSwapNode.py", line 1, in <module>
import insightface
ModuleNotFoundError: No module named 'insightface'

Cannot import F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-FaceSwap module for custom nodes: No module named 'insightface'
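The face-swap failure itself is just the missing insightface package. With the portable build, the package has to be installed into the embedded interpreter that ComfyUI actually uses, not any system Python. A sketch of the usual command (run from the ComfyUI_windows_portable folder shown in the log; exact requirements of ComfyUI-FaceSwap may differ):

```shell
REM Install insightface into ComfyUI's embedded Python (portable build).
REM Note: pip may build insightface from source, which needs a C++ compiler;
REM if the build fails, a prebuilt wheel for your Python version is an option.
python_embeded\python.exe -m pip install insightface
```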

Loading: ComfyUI-Manager (V0.36.1)

ComfyUI Revision: 1646 [ae2acfc2] | Released on '2023-11-03'

[VideoHelperSuite] - INFO - ffmpeg could not be found. Using ffmpeg from imageio-ffmpeg.
Comfyroll Custom Nodes: Loaded
Imported node: FV_NodeGroup_1
Imported node: FV_NodeGroup_1
[comfy_mtb] | INFO -> loaded 48 nodes successfuly
[comfy_mtb] | INFO -> Some nodes (7) could not be loaded. This can be ignored, but go to http://127.0.0.1:8188/mtb if you want more information.
Efficiency Nodes: Attempting to add 'AnimatedDiff Script' Node (ComfyUI-AnimateDiff-Evolved add-on)...Success!
Total VRAM 24575 MB, total RAM 65485 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
VAE dtype: torch.bfloat16
WAS Node Suite: OpenCV Python FFMPEG support is enabled
WAS Node Suite Warning: ffmpeg_bin_path is not set in F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\was_suite_config.json config file. Will attempt to use system ffmpeg binaries if available.
WAS Node Suite: Finished. Loaded 197 nodes successfully.

    "Everything you've ever wanted is on the other side of fear." - George Addair

Import times for custom nodes:
0.0 seconds: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Image-Selector
0.0 seconds: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus
0.0 seconds: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_experiments
0.0 seconds: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyLiterals
0.0 seconds: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-various
0.0 seconds: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
0.0 seconds: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved
0.0 seconds (IMPORT FAILED): F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-FaceSwap
0.0 seconds: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-KJNodes
0.0 seconds: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui
0.0 seconds: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Frame-Interpolation
0.0 seconds: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\Derfuu_ComfyUI_ModdedNodes
0.1 seconds: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Fictiverse
0.1 seconds: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\CharacterFaceSwap
0.1 seconds: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\facerestore_cf
0.1 seconds: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
0.4 seconds: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Comfyroll_CustomNodes
0.5 seconds: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
1.1 seconds: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfy_mtb
1.7 seconds: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui

Starting server

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: F:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
[AnimateDiffEvo] - WARNING - ffmpeg could not be found. Outputs that require it have been disabled
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json

I tried deleting [MTB Nodes] and installing them again, but that didn't fix it.
Can you help me?
