xxadonesxx / nodegpt
ComfyUI Extension Nodes for Automated Text Generation.
License: GNU Affero General Public License v3.0
I'm using the TextGeneration node of your suite. It works great, but I'm not certain it honors the cache = false setting. Or perhaps I don't understand the intended behavior.
What I would want, or expect, from cache = false is that the node invokes the LLM even when the submitted input is identical to the previous run.
Instead, if nothing in the system changes (including the node's input), the node does nothing, regardless of the cache setting.
The only ways I can trigger a new generation are killing ComfyUI or flipping the cache value from false to true or from true to false. Irrespective of the actual setting, that change of state forces a new LLM generation.
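For what it's worth, ComfyUI decides whether to re-execute a node from its optional IS_CHANGED classmethod, so cache = false could be honored by returning a value that never compares equal to itself. A minimal sketch (class and widget names are hypothetical, not NodeGPT's actual code):

```python
class TextGenerationSketch:
    """Hypothetical node that re-runs even when its inputs are identical."""

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"prompt": ("STRING", {"multiline": True}),
                             "cache": ("BOOLEAN", {"default": True})}}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "execute"

    @classmethod
    def IS_CHANGED(cls, prompt, cache):
        # NaN never equals itself, so ComfyUI considers the node changed
        # on every queue and calls execute() again.
        if not cache:
            return float("NaN")
        return prompt  # unchanged prompt -> cached result is reused

    def execute(self, prompt, cache):
        return (prompt.upper(),)  # stand-in for the actual LLM call
```

With cache = true the node keeps ComfyUI's normal behavior; with cache = false every queue triggers a fresh generation.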
Issue:
Local models, such as those loaded via the llama-cpp node, behave unconventionally and lack the model dropdown that all the other Comfy nodes have. This will result in low adoption and more false bug reports as people struggle with NodeGPT's eccentricity here.
Solution:
I have asked comfyanonymous to add an 'llm' folder to 'models', and two lines to the folder_paths.py file, so that work with LLMs can follow the ComfyUI conventions used for other kinds of models.
Once that has been adopted (if it is), I would recommend the following changes to llama-cpp.py to use the normal model-dropdown style that everyone's familiar with:
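A sketch of what the dropdown-style loader could look like, assuming folder_paths gains an "llm" entry (class name, return type, and the "llm" key are assumptions; outside ComfyUI the list is simply empty):

```python
class LlamaCppLoaderSketch:
    """Hypothetical llama-cpp loader using the standard ComfyUI dropdown."""

    @classmethod
    def INPUT_TYPES(cls):
        try:
            import folder_paths  # ComfyUI's module, present at runtime
            models = folder_paths.get_filename_list("llm")
        except ImportError:
            models = []  # outside ComfyUI there is nothing to list
        # a list as the first tuple element renders as a combo/dropdown widget
        return {"required": {"model_name": (models,)}}

    RETURN_TYPES = ("LLM",)
    FUNCTION = "load"
```

This mirrors how checkpoint and LoRA loaders populate their dropdowns from the models folder.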
Programmatic access to GPT-4V would allow captioning an uploaded image better than the BLIP model does (which is the current approach, AFAIK). That would be useful for inpainting and upscaling tasks in ComfyUI.
Thanks for considering it.
I'm getting this error when trying to use the TextGeneration node. I think it's caused by this line in TextGeneration.py:
config_list = LLM['LLM'],
not adhering to the format required by https://microsoft.github.io/autogen/docs/reference/oai/completion/
I'm a little confused because it was working a week or so ago then it just stopped working with this error.
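For comparison, here is a minimal config_list in the shape autogen's Completion docs describe; every value is a placeholder, and the key set (base_url vs. the older api_base) varies between autogen versions. The bare "'model'" error below reads like a KeyError for a missing "model" key in one of these entries:

```python
# Placeholder values only; the "model" key is the one whose absence would
# surface as the bare "'model'" error.
config_list = [
    {
        "model": "gpt-3.5-turbo",                # required by autogen
        "api_key": "sk-PLACEHOLDER",
        "base_url": "http://localhost:1234/v1",  # e.g. a local LM Studio server
    }
]
```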
Error occurred when executing TextGeneration:
'model'
File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\TextGeneration.py", line 45, in execute
response = oai.ChatCompletion.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^
I would love to have a node that is a definable agent. The system message should be a text box so the agent can be customized beyond the defaults.
Right now, the TextGeneration node outputs text and, despite that, I couldn't get a single 3rd-party node that accepts text as input to connect to it. For example, as you can see in the screenshot below, it won't connect to a ttn textDebug node, even though that one accepts text as input.
It seems the only way to export the text is by using the Output2String node, which I'm doing:
It would be very helpful if the TextGeneration node were compatible out of the box with things like the ttn textDebug and ImpactSwitch nodes, without the need for extra conversion nodes.
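I don't know what type name TextGeneration declares, but ComfyUI's front end only allows a link when the two declared type strings match (or one side is the wildcard "*"), which would explain why a non-"STRING" output refuses to connect to "STRING" inputs. A toy illustration of that matching rule (simplified, assumed behavior):

```python
def links_connect(output_type: str, input_type: str) -> bool:
    """Approximation of ComfyUI's link-type check: exact string match,
    with "*" acting as a wildcard on either side."""
    return output_type == input_type or "*" in (output_type, input_type)

# A custom type like "TEXT" will not connect to third-party "STRING" inputs,
# even though both are conceptually text.
```

If that is indeed the cause, declaring the output as "STRING" would likely make the node interoperable without Output2String.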
Thanks!
The conflict is caused by:
pyautogen 0.2.7 depends on openai>=1.3
pymemgpt 0.1.6 depends on openai<0.29.0 and >=0.28.1
pyautogen 0.2.7 depends on openai>=1.3
pymemgpt 0.1.5 depends on openai<0.29.0 and >=0.28.1
pyautogen 0.2.7 depends on openai>=1.3
pymemgpt 0.1.4 depends on openai<0.29.0 and >=0.28.1
pyautogen 0.2.7 depends on openai>=1.3
pymemgpt 0.1.3 depends on openai<0.29.0 and >=0.28.1
pyautogen 0.2.7 depends on openai>=1.3
pymemgpt 0.1.2 depends on openai<0.29.0 and >=0.28.1
pyautogen 0.2.7 depends on openai>=1.3
pymemgpt 0.1.1 depends on openai<0.29.0 and >=0.28.1
pyautogen 0.2.7 depends on openai>=1.3
pymemgpt 0.1.0 depends on openai<0.29.0 and >=0.28.1
Since memgpt is on the TODO list anyway, I removed it and the installation finished successfully.
Hi, I get this error.
I've followed the installation steps, but I keep getting red nodes.
Traceback (most recent call last):
File "V:\comfyui portable\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1735, in load_custom_node
module_spec.loader.exec_module(module)
File "", line 940, in exec_module
File "", line 241, in _call_with_frames_removed
File "V:\comfyui portable\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\__init__.py", line 32, in
imported_module = importlib.import_module(".{}".format(module_name), name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "importlib\__init__.py", line 126, in import_module
File "", line 1204, in _gcd_import
File "", line 1176, in _find_and_load
File "", line 1140, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'NodeGPT.Output2String'
Cannot import V:\comfyui portable\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT module for custom nodes: No module named 'NodeGPT.Output2String'
Import times for custom nodes:
0.0 seconds: V:\comfyui portable\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved
0.4 seconds: V:\comfyui portable\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite
0.4 seconds (IMPORT FAILED): V:\comfyui portable\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT
0.5 seconds: V:\comfyui portable\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
Starting server
Background:
Because the llama-cpp node's model-path is different from all the other model loaders and I couldn't get it to load the .gguf I put in the ComfyUI/custom_nodes/NodeGPT/models folder, I tried using an absolute path.
Bug:
That results in the assert on line 46 tripping: the model is None and doesn't load, as per the images.
Desired Behavior:
For absolute paths to successfully load the model.
Transcending the problem though:
Make it function like other model loaders, for example the one for AnimateDiff: it doesn't accept a model_path string, but instead provides an easy-to-use dropdown if your model is in the correct folder.
Perhaps also a setting somewhere to define the path on a more permanent basis?
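In the meantime, the absolute-path case could be handled with a small resolver like this sketch (function and folder names are hypothetical, not the actual llama-cpp.py code): accept an absolute path as-is, otherwise look inside the models folder, and fail with a clear message instead of tripping a bare assert on a None model.

```python
import os

def resolve_model_path(model_path: str, models_dir: str) -> str:
    """Hypothetical helper: allow absolute paths, else search models_dir."""
    if os.path.isabs(model_path):
        candidate = model_path
    else:
        candidate = os.path.join(models_dir, model_path)
    if not os.path.isfile(candidate):
        # a readable error beats the downstream "model is None" assert
        raise FileNotFoundError(f"model not found: {candidate}")
    return candidate
```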
Hi there,
could you please share a workflow.json from comfy as I can't make this work.
I get this error:
got prompt
Error generating text: {"error":"This app has no endpoint /api/textgen/."}
Traceback (most recent call last):
File "G:\ComfyUI\ComfyUI\execution.py", line 184, in execute
executed += recursive_execute(self.server, prompt, self.outputs, x, extra_data)
File "G:\ComfyUI\ComfyUI\execution.py", line 62, in recursive_execute
input_data_all = get_input_data(inputs, class_def, unique_id, outputs, prompt, extra_data)
File "G:\ComfyUI\ComfyUI\execution.py", line 25, in get_input_data
obj = outputs[input_unique_id][output_index]
KeyError: 0
MyComfy.bat is:
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --listen
pause
My ooba bat is:
@echo off
@echo Starting the web UI...
cd /D "%~dp0"
set MAMBA_ROOT_PREFIX=%cd%\installer_files\mamba
set INSTALL_ENV_DIR=%cd%\installer_files\env
if not exist "%MAMBA_ROOT_PREFIX%\condabin\micromamba.bat" (
call "%MAMBA_ROOT_PREFIX%\micromamba.exe" shell hook >nul 2>&1
)
call "%MAMBA_ROOT_PREFIX%\condabin\micromamba.bat" activate "%INSTALL_ENV_DIR%" || ( echo MicroMamba hook not found. && goto end )
cd text-generation-webui
call python server.py --auto-devices --chat --listen --no-stream --model vicuna-13b-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type llama
:end
pause
Thank you in advance
I tried the update.bat
F:\comfyNodeGPT>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
** ComfyUI startup time: 2023-12-12 13:10:18.471906
** Platform: Windows
** Python version: 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)]
** Python executable: F:\comfyNodeGPT\python_embeded\python.exe
** Log path: F:\comfyNodeGPT\comfyui.log
Prestartup times for custom nodes:
0.0 seconds: F:\comfyNodeGPT\ComfyUI\custom_nodes\ComfyUI-Manager
Total VRAM 12288 MB, total RAM 64821 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
VAE dtype: torch.bfloat16
Using pytorch cross attention
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
FETCH DATA from: F:\comfyNodeGPT\ComfyUI\custom_nodes\NodeGPT\AutoUpdate.json
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
Traceback (most recent call last):
File "F:\comfyNodeGPT\ComfyUI\custom_nodes\NodeGPT\API_Nodes\llama-cpp.py", line 9, in
from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "F:\comfyNodeGPT\ComfyUI\nodes.py", line 1800, in load_custom_node
module_spec.loader.exec_module(module)
File "", line 940, in exec_module
File "", line 241, in call_with_frames_removed
File "F:\comfyNodeGPT\ComfyUI\custom_nodes\NodeGPT\__init__.py", line 127, in
imported_module = importlib.import_module(".{}".format(module_name), name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "importlib\__init__.py", line 126, in import_module
File "", line 1204, in _gcd_import
File "", line 1176, in _find_and_load
File "", line 1147, in _find_and_load_unlocked
File "", line 690, in _load_unlocked
File "", line 940, in exec_module
File "", line 241, in _call_with_frames_removed
File "F:\comfyNodeGPT\ComfyUI\custom_nodes\NodeGPT\API_Nodes\llama-cpp.py", line 21, in
from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'
Cannot import F:\comfyNodeGPT\ComfyUI\custom_nodes\NodeGPT module for custom nodes: No module named 'llama_cpp'
Import times for custom nodes:
0.0 seconds (IMPORT FAILED): F:\comfyNodeGPT\ComfyUI\custom_nodes\NodeGPT
0.3 seconds: F:\comfyNodeGPT\ComfyUI\custom_nodes\ComfyUI-Manager
Starting server
Good day. I keep getting this error message every time. Please help me; I can't find anything about it.
Error occurred when executing TextGeneration:
(Deprecated) The autogen.Completion class requires openai<1 and diskcache.
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\TextGeneration.py", line 32, in execute
response = oai.ChatCompletion.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\autogen\oai\completion.py", line 792, in create
raise ERROR
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\Chat.py", line 53, in execute
autogen.ChatCompletion.start_logging(conversations)
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\autogen\oai\completion.py", line 1180, in start_logging
raise ERROR
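The deprecation message names its own requirement: autogen's legacy Completion API needs openai<1 plus diskcache, so an openai 1.x install (pulled in by a recent pyautogen) triggers exactly this error. A quick environment check, with the "openai<1" pin taken straight from the error text (downgrading openai or porting NodeGPT to autogen's newer API are the usual ways out):

```python
from importlib.metadata import version, PackageNotFoundError

def legacy_autogen_ok() -> bool:
    """True if the installed openai package satisfies the openai<1 pin
    that autogen's deprecated Completion class asks for."""
    try:
        major = int(version("openai").split(".")[0])
    except PackageNotFoundError:
        return False  # openai not installed at all
    return major < 1
```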
It only returns the first letter of the generated prompt. I'm no coder, but changing the return to
return (text,)
seems to work for me :)
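That fix makes sense: ComfyUI expects each node to return a tuple matching its RETURN_TYPES, and a bare string is itself a sequence, so indexing the "tuple" at position 0 yields just the first character. A tiny demonstration of the difference:

```python
def execute_broken(text):
    return text        # a str is a sequence: index 0 is its first letter

def execute_fixed(text):
    return (text,)     # one-element tuple matching RETURN_TYPES = ("STRING",)

# a caller that reads output[0], as ComfyUI does, sees:
#   execute_broken("hello")[0] -> "h"
#   execute_fixed("hello")[0]  -> "hello"
```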
Traceback (most recent call last):
File "D:\dev\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1734, in load_custom_node
module_spec.loader.exec_module(module)
File "", line 940, in exec_module
File "", line 241, in _call_with_frames_removed
File "D:\dev\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\__init__.py", line 32, in
imported_module = importlib.import_module(".{}".format(module_name), name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "importlib\__init__.py", line 126, in import_module
File "", line 1204, in _gcd_import
File "", line 1176, in _find_and_load
File "", line 1147, in _find_and_load_unlocked
File "", line 690, in _load_unlocked
File "", line 940, in exec_module
File "", line 241, in _call_with_frames_removed
File "D:\dev\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT\Agents\Assistant.py", line 7, in
import autogen
ModuleNotFoundError: No module named 'autogen'
Cannot import D:\dev\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT module for custom nodes: No module named 'autogen'
Import times for custom nodes:
0.0 seconds: D:\dev\ComfyUI_windows_portable\ComfyUI\custom_nodes\Zho_Main_Nodes_Chinese.py
0.0 seconds (IMPORT FAILED): D:\dev\ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT
Issue:
When configuring nodes as such:
The DisplayText node uses the print() function, which prints to stdout on my machine and probably most other people's.
Desired Behavior:
For it to display it to the node contents, preferably in a multiline text widget.
Solution:
Not 100% sure yet, but the problem likely lies in this usage in DisplayText.py.
I'll update in the below comments if I come up with the necessary code.
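One possible direction, sketched under the assumption that ComfyUI's output-node "ui" mechanism applies here (a small JS extension is still needed on the front end to render the payload in a multiline widget, so this is not a complete fix on its own):

```python
class DisplayTextSketch:
    """Hypothetical replacement for a print()-based DisplayText node."""

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"text": ("STRING", {"forceInput": True})}}

    RETURN_TYPES = ()
    FUNCTION = "display"
    OUTPUT_NODE = True  # lets ComfyUI deliver the "ui" payload to the browser

    def display(self, text):
        # Instead of print(text), hand the string to the front end; the
        # browser receives it in the executed-message's output dict.
        return {"ui": {"text": [text]}}
```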
I added the NodeGPT suite to the AP Workflow 6.0. It's working very well, but some users are reporting a timeout error from ComfyUI when LM Studio doesn't respond quickly enough.
They solve the problem by increasing the timeout parameter in /ComfyUI/venv/lib/python3.11/site-packages/autogen/oai/completion.py. Which is great, but not exactly convenient.
I wonder if it's possible to expose that parameter in the LM_Studio node itself or, at least, increase its default for new installations.
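Exposing it could be as simple as one extra widget, along these lines (class name, default value, and the config key the node would pass to autogen are all assumptions to verify against the installed autogen version):

```python
class LMStudioSketch:
    """Hypothetical LM_Studio node variant exposing the request timeout."""

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            # surfaces the same value users currently patch in completion.py,
            # with a more generous default for slow local models
            "timeout": ("INT", {"default": 600, "min": 1, "max": 3600}),
        }}

    def build_config(self, timeout):
        # "request_timeout" is an assumed key name; check what the
        # installed autogen version actually accepts
        return {"request_timeout": timeout}
```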
Thank you.
Really cool project idea, NodeGPT!
I followed the installation instructions with a fresh, current macOS install of everything listed. However, when trying to load Task_Solving_with_Code_Generation.json within ComfyUI, there are errors such as:
TypeError: undefined is not an object (evaluating 'this.widgets_values.length')
onAfterGraphConfigured@http://127.0.0.1:8188/extensions/core/widgetInputs.js:322:46
@http://127.0.0.1:8188/scripts/app.js:1221:34
@http://127.0.0.1:8188/lib/litegraph.core.js:2260:20
@http://127.0.0.1:8188/scripts/app.js:1201:27
loadGraphData@http://127.0.0.1:8188/scripts/app.js:1451:24
@http://127.0.0.1:8188/scripts/app.js:1298:23
This may be due to the following script:
/extensions/core/widgetInputs.js
Any thoughts what is needed to solve it?
Thanks!
https://github.com/omar92/ComfyUI-QualityOfLifeSuit_Omar92 does this if examples are needed.
Happy to work on a PR at some point if no one gets to it in the next few weeks.
Installing "autogen" via pip did not work for me; the correct package on PyPI is called "pyautogen".
I tried a git clone with a manual requirements install, and a Manager install. Same error with the repo's workflow JSON in both cases.
Loading aborted due to error reloading workflow data
TypeError: widget[GET_CONFIG] is not a function
TypeError: widget[GET_CONFIG] is not a function
at #onFirstConnection (http://serverip/extensions/core/widgetInputs.js:389:54)
at PrimitiveNode.onAfterGraphConfigured (http://serverip/extensions/core/widgetInputs.js:318:29)
at app.graph.onConfigure (http://serverip/scripts/app.js:1221:34)
at LGraph.configure (http://serverip/lib/litegraph.core.js:2260:9)
at LGraph.configure (http://serverip/scripts/app.js:1201:22)
at ComfyApp.loadGraphData (http://serverip/scripts/app.js:1451:15)
at ComfyApp.setup (http://serverip/scripts/app.js:1298:10)
at async http://serverip/:14:4
This may be due to the following script:
/extensions/core/widgetInputs.js
I already implemented the NodeGPT suite in the AP Workflow 6.0, but I still cannot completely rely on it because the ChatGPT node exposes the API key directly:
I am not sure it's possible in ComfyUI, but the best implementation (even better than the Quality of Life Suite one) would let me choose a file from a dropdown menu inside the node, with that file containing my API key, just like the stock LoadImage node in ComfyUI.
That way, not only would I avoid showing my API key when I take screenshots of the workflow or export it, but I could also quickly switch between different API keys when needed.
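The LoadImage-style flow could look roughly like this (folder name, extension filter, and helper names are all hypothetical): list the files in a keys folder for the dropdown, and read the selected file only at execution time, so the key itself never lands in the workflow JSON or a screenshot.

```python
import os

KEY_DIR = "keys"  # hypothetical folder next to the node, like ComfyUI's input/

def list_key_files(key_dir=KEY_DIR):
    """Filenames shown in the dropdown, mirroring how LoadImage lists input/."""
    if not os.path.isdir(key_dir):
        return []
    return sorted(f for f in os.listdir(key_dir) if f.endswith(".txt"))

def read_api_key(name, key_dir=KEY_DIR):
    """Resolve the selected filename to the key it contains, at run time."""
    with open(os.path.join(key_dir, name), encoding="utf-8") as fh:
        return fh.read().strip()  # the key never appears in the workflow JSON
```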
Another thing that helps reproducibility is the capability to control the seed. It's less important than the previous suggestion, but it would be a very nice-to-have.
Thank you!
P.S.: we don't have this problem with the LM_Studio node yet, but I expect that in the future we will (for example, if the author creates a hosted version). So this same feature request would apply to that node too.
Hi there,
I am trying to install NodeGPT inside ComfyUI.
I already have ComfyUI installed and working (I updated it today with the python scripts).
I did git clone this repository, then:
cd NodeGPT
install.bat
pip install -r requirements.txt
(I have not installed LM Studio yet, but I don't think that's a problem at this step.)
Here is the folder structure: ComfyUI_windows_portable\ComfyUI\custom_nodes\NodeGPT
I have the venv folder inside, so it looks like everything was installed.
If I load one of your workflows, the nodes from NodeGPT appear in red; the other nodes are fine, though.
Here is the error message at startup
When loading the graph, the following node types were not found:
UserProxy
DisplayString
Chat
Assistant
PrimitiveNode
LM_Studio
Any idea what could be wrong? I am not very technical, so I am lost.
Thank you!
https://github.com/oobabooga/text-generation-webui is under the AGPL license, so anyone building on top of it, even when it is used over a network, needs to open-source their code and license it under the AGPL.
Hello,
Firstly, thank you for this repo. When I try to connect the Ollama node to the Mistral 7B model served locally by ollama serve, I get this error again and again. Is LM Studio the only option for using NodeGPT in ComfyUI? If so, how can I serve the models on an Ubuntu Server that has no GUI?