
comfyui-exllama-nodes's People

Contributors

scottnealon, zuellni


comfyui-exllama-nodes's Issues

ComfyUI cu121 version unable to load — fix for reference

portable version:
Method 1:
D:/AI/ComfyUI_cu121/python_embeded/python.exe -s -m pip install -r requirements.txt

Method 2:
D:/AI/ComfyUI_cu121/python_embeded/python.exe -s -m pip install exllamav2-0.0.6+cu121-cp311-cp311-win_amd64.whl --target=D:\AI\ComfyUI_cu121\python_embeded\Lib\site-packages
Download exllamav2-0.0.6+cu121-cp311-cp311-win_amd64.whl, put it into the "exllama nodes" folder, then run the command above.
I don't understand programming; I stumbled on this while working with the nodes and found that it solves the problem. Posted for the author's reference only.
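As a quick sanity check after either method (a minimal sketch; the D:/AI/ComfyUI_cu121 path is the one from above and may differ on your machine), you can ask the embedded interpreter whether the wheel is importable and whether its cp311/cu121 tags actually match the interpreter and torch build:

    # save as check_exllamav2.py and run with:
    # D:/AI/ComfyUI_cu121/python_embeded/python.exe check_exllamav2.py
    import sys
    import torch
    import exllamav2

    # The wheel's cp311/cu121 tags must match these values, otherwise the
    # compiled extension inside the wheel will fail to import.
    print("python   :", sys.version.split()[0])
    print("torch    :", torch.__version__, "| cuda:", torch.version.cuda)
    print("exllamav2:", exllamav2.__file__)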

Any plans to add more detailed instructions for using the nodes? Specifically the Replace node?

Hi, I'm starting to test these nodes in a workflow, but I can't seem to grasp how the Replace node is supposed to work. I'm also unsure exactly where my initial prompt is supposed to go.

I see the Generator node's text field, and how it's being used to instruct the LLM on how to behave and receive the appropriate response. When I try to define a variable like [a] in that text field, it doesn't seem to work as intended. Perhaps I'm just not understanding the default workflow you've provided. Are you able to clarify things such as:

  1. Where the user is supposed to put their initial prompt.
  2. The purpose of the Replace node, with an example of it being used to affect image generation.

That would be immensely helpful. My ultimate goal is to include LLM prompt enrichment in my workflow in order to create better images. Thanks!

Trevor

Nodes missing

There are no nodes after installation (through ComfyUI Manager or manual install). I don't know why; the repo seems to have been cloned properly.
"Install missing nodes" did nothing.

(screenshots attached)

No module named 'exllama_ext'

** ComfyUI start up time: 2023-09-19 18:59:09.101002

Prestartup times for custom nodes:
0.0 seconds: E:\AI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 11264 MB, total RAM 32725 MB
xformers version: 0.0.20
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2080 Ti : cudaMallocAsync
VAE dtype: torch.float32
Using xformers cross attention
Traceback (most recent call last):
File "E:\AI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1725, in load_custom_node
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in call_with_frames_removed
File "E:\AI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-ExLlama-Nodes_init
.py", line 1, in
from .nodes import Generator, Loader, Previewer
File "E:\AI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-ExLlama-Nodes\nodes.py", line 6, in
from exllama.alt_generator import ExLlamaAltGenerator
File "E:\AI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\exllama_init_.py", line 1, in
from . import cuda_ext, generator, model, tokenizer
File "E:\AI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\exllama\cuda_ext.py", line 9, in
import exllama_ext
ModuleNotFoundError: No module named 'exllama_ext'
Note: the Python runtime threw an exception. Please check the troubleshooting page.

Cannot import E:\AI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-ExLlama-Nodes module for custom nodes: No module named 'exllama_ext'

Loading: ComfyUI-Manager (V0.26.2)

ComfyUI Revision: 1465 [6d3dee9d]

Import times for custom nodes:
0.3 seconds: E:\AI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager
0.5 seconds (IMPORT FAILED): E:\AI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-ExLlama-Nodes

Starting server

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: E:\AI\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
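A likely cause is that the installed exllama wheel does not ship a compiled exllama_ext matching this Python/CUDA combination. A minimal diagnostic sketch, run with the same python_embeded interpreter that ComfyUI uses (the interpretation of a missing spec is an assumption, not something confirmed by the node author):

    import importlib.util
    import sys
    import torch

    # exllama imports its CUDA kernels from a separate compiled module named
    # exllama_ext; if find_spec returns None here, the installed wheel was
    # built for a different Python or CUDA version than this interpreter.
    spec = importlib.util.find_spec("exllama_ext")
    print("exllama_ext found:", spec is not None)
    print("python:", sys.version.split()[0], "| torch cuda:", torch.version.cuda)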

Error occurred after ComfyUI upgrade

Error occurred when executing ZuellniExLlamaGenerator:

sample_basic(): incompatible function arguments. The following argument types are supported:

  1. (arg0: torch.Tensor, arg1: float, arg2: int, arg3: float, arg4: float, arg5: float, arg6: torch.Tensor, arg7: torch.Tensor) -> None

Invoked with: tensor([[-9.0547, -8.7422, 7.3477, ..., -7.6328, -5.0000, -3.2949]]), 0.7, 20, 0.9, 0.98, 0.9980835713764517, tensor([[0]]), tensor([[0.]]), tensor([[True, True, True, ..., True, True, True]])

File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-ExLlama-Nodes\nodes.py", line 96, in generate
chunk, eos, _ = model.stream()
File "D:\AI\ComfyUI_windows_portable\python_embeded\lib\site-packages\exllamav2\generator\streaming.py", line 111, in stream
next_token, eos = self._gen_single_token(self.settings)
File "D:\AI\ComfyUI_windows_portable\python_embeded\lib\site-packages\exllamav2\generator\streaming.py", line 204, in _gen_single_token
token, _, eos = ExLlamaV2Sampler.sample(logits, gen_settings, self.sequence_ids, random.random(), self.tokenizer, prefix_token)
File "D:\AI\ComfyUI_windows_portable\python_embeded\lib\site-packages\exllamav2\generator\sampler.py", line 116, in sample
ext_c.sample_basic(logits,
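A signature mismatch like this typically points to the Python side of exllamav2 and its compiled extension coming from different releases after the upgrade. Before reinstalling, a minimal sketch (standard library plus exllamav2 itself) to confirm which version is actually being picked up:

    from importlib.metadata import version

    import exllamav2

    # If the installed version no longer matches what the nodes expect,
    # reinstalling the matching exllamav2 wheel usually resolves the
    # argument-count mismatch in sample_basic().
    print("exllamav2", version("exllamav2"))
    print("loaded from", exllamav2.__file__)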

cannot find nodes even after installing everything and no errors during startup

I'm getting an error about missing nodes even though I installed exllama and everything necessary. There are no errors during startup.

When loading the graph, the following node types were not found:
String Variable
GPTSampler
GPT Loader Simple
Integer Variable
Nodes that have failed to load will show as red on the graph.

This is the output from the start of Pinokio ComfyUI; it says ComfyUI-ExLlama was loaded.

C:\...\pinokio\api\comfyui.git\app>C:\...\pinokio\api\comfyui.git\app\env\Scripts\activate C:\...\pinokio\api\comfyui.git\app\env && python main.py --disable-xformers
** ComfyUI startup time: 2024-02-21 12:49:13.044990
** Platform: Windows
** Python version: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:27:34) [MSC v.1937 64 bit (AMD64)]
** Python executable: C:\...\pinokio\api\comfyui.git\app\env\Scripts\python.exe
** Log path: C:\...\pinokio\api\comfyui.git\app\comfyui.log

Prestartup times for custom nodes:
   0.0 seconds: C:\...\pinokio\api\comfyui.git\app\custom_nodes\rgthree-comfy
   0.1 seconds: C:\...\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-Manager

Total VRAM 16384 MB, total RAM 32469 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3080 Laptop GPU : cudaMallocAsync
VAE dtype: torch.bfloat16
Using pytorch cross attention
Adding C:\...\pinokio\api\comfyui.git\app\custom_nodes to sys.path
Loaded efficiency nodes from C:\...\pinokio\api\comfyui.git\app\custom_nodes\efficiency-nodes-comfyui
Could not find ControlNetPreprocessors nodes
Could not find AdvancedControlNet nodes
Could not find AnimateDiff nodes       
Could not find IPAdapter nodes
Could not find VideoHelperSuite nodes
### Loading: ComfyUI-Impact-Pack (V4.78)
### Loading: ComfyUI-Impact-Pack (Subpack: V0.4)
### Loading: ComfyUI-Manager (V2.7.2)
[Impact Pack] Wildcards loading done.
### ComfyUI Revision: 2005 [0d0fbabd] | Released on '2024-02-20'
Failed to auto update `Quality of Life Suit` 
QualityOfLifeSuit_Omar92_DIR: C:\...\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-QualityOfLifeSuit_Omar92

[rgthree] Loaded 34 extraordinary nodes.
[rgthree] Will use rgthree's optimized recursive execution.

[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json  
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
WAS Node Suite: OpenCV Python FFMPEG support is enabled
WAS Node Suite Warning: `ffmpeg_bin_path` is not set in `C:\Users\matus\pinokio\api\comfyui.git\app\custom_nodes\was-node-suite-comfyui\was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.
WAS Node Suite: Finished. Loaded 211 nodes successfully.

        "Art is the most intense mode of individualism that the world has known." - Oscar Wilde


Import times for custom nodes:
   0.0 seconds: C:\...\pinokio\api\comfyui.git\app\custom_nodes\efficiency-nodes-comfyui        
   0.0 seconds: C:\...\pinokio\api\comfyui.git\app\custom_nodes\sdxl_prompt_styler
   0.0 seconds: C:\...\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI_JPS-Nodes
   0.0 seconds: C:\...\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-post-processing-nodes   
   0.0 seconds: C:\...\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-QualityOfLifeSuit_Omar92
   0.0 seconds: C:\...\pinokio\api\comfyui.git\app\custom_nodes\ComfyMath
   0.0 seconds: C:\...\pinokio\api\comfyui.git\app\custom_nodes\comfy-image-saver
   0.0 seconds: C:\...\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-Custom-Scripts
   0.0 seconds: C:\...\pinokio\api\comfyui.git\app\custom_nodes\rgthree-comfy
   0.0 seconds: C:\...\pinokio\api\comfyui.git\app\custom_nodes\Derfuu_ComfyUI_ModdedNodes      
   0.1 seconds: C:\...\pinokio\api\comfyui.git\app\custom_nodes\comfyui-art-venture
   0.4 seconds: C:\...\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-Manager
   1.0 seconds: C:...\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-Impact-Pack
   1.6 seconds: C:\...\pinokio\api\comfyui.git\app\custom_nodes\was-node-suite-comfyui
   2.3 seconds: C:\...\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-ExLlama

Starting server

To see the GUI go to: http://127.0.0.1:8188

[Start proxy] Local Sharing http://127.0.0.1:8188
Proxy Started
FETCH DATA from: C:\...\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-Manager\extension-node-map.json
Error: OpenAI API key is invalid OpenAI features wont work for you
QualityOfLifeSuit_Omar92::NSP ready
FETCH DATA from: C:\...\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-Manager\.cache\1514988643_custom-node-list.json
FETCH DATA from: C:\...\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-Manager\.cache\1742899825_extension-node-map.json
FETCH DATA from: C:\...\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-Manager\extension-node-map.json
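The node types listed as missing (String Variable, GPTSampler, GPT Loader Simple, Integer Variable) do not look like names registered by ComfyUI-ExLlama-Nodes, so they are likely provided by a different extension. A minimal sketch that queries the running server's /object_info endpoint to see which of them are actually registered (endpoint and port taken from the log above):

    import json
    import urllib.request

    # ComfyUI lists every registered node class at /object_info.
    with urllib.request.urlopen("http://127.0.0.1:8188/object_info") as response:
        registered = json.load(response)

    for name in ("String Variable", "GPTSampler", "GPT Loader Simple", "Integer Variable"):
        status = "registered" if name in registered else "missing"
        print(f"{name}: {status}")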

Model undefined not found?

Sorry, I put the model in the right directory, but it still isn't found.

E:\ComfyUI_windows_portable\ComfyUI\models\llm\mistral-7b-exl2-b5

The model is not renamed; it's called output.safetensors.
What am I missing?

Thanks in advance...

Metamountain
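A minimal sketch to double-check the folder layout, assuming the loader scans ComfyUI\models\llm for model directories and expects the EXL2 files (output.safetensors, config.json, tokenizer files) directly inside the selected folder rather than nested one level deeper; that expectation is an assumption here, not confirmed above:

    from pathlib import Path

    llm_dir = Path(r"E:\ComfyUI_windows_portable\ComfyUI\models\llm")

    # Print each candidate model folder and the files the loader would see in it.
    for folder in sorted(llm_dir.iterdir()):
        if folder.is_dir():
            print(folder.name, "->", [f.name for f in folder.iterdir()])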

name 'exllamav2_ext' is not defined

File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-ExLlama-Nodes_init_.py", line 1, in
from . import exllama, text
File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-ExLlama-Nodes\exllama.py", line 6, in
from exllamav2 import *
File "F:\ComfyUI\python_embeded\Lib\site-packages\exllamav2_init_.py", line 3, in
from exllamav2.model import ExLlamaV2
File "F:\ComfyUI\python_embeded\Lib\site-packages\exllamav2\model.py", line 31, in
from exllamav2.config import ExLlamaV2Config
File "F:\ComfyUI\python_embeded\Lib\site-packages\exllamav2\config.py", line 5, in
from exllamav2.fasttensors import STFile
File "F:\ComfyUI\python_embeded\Lib\site-packages\exllamav2\fasttensors.py", line 6, in
from exllamav2.ext import exllamav2_ext as ext_c
File "F:\ComfyUI\python_embeded\Lib\site-packages\exllamav2\ext.py", line 281, in
ext_c = exllamav2_ext
^^^^^^^^^^^^^
NameError: name 'exllamav2_ext' is not defined

No module named 'exllama'

Traceback (most recent call last):
File "D:\Blender_ComfyUI\ComfyUI\nodes.py", line 1725, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI-ExLlama-Nodes\__init__.py", line 1, in <module>
from .nodes import Generator, Loader, Previewer
File "D:\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI-ExLlama-Nodes\nodes.py", line 6, in <module>
from exllama.alt_generator import ExLlamaAltGenerator
ModuleNotFoundError: No module named 'exllama'

Cannot import D:\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI-ExLlama-Nodes module for custom nodes: No module named 'exllama'

@Zuellni

language model install

Hello.
I installed this node to ComfyUI\custom_nodes\ComfyUI-ExLlama-Nodes-main and then installed the requirements.txt dependencies. Everything went smoothly. Then I copied over the PNG image from your readme and it loaded up perfectly. However, two things are missing. The loader for the language model is not actually a loader; it is a text field in which you have typed zephyr-7b-beta-5.0bpw-h6-exl2. I can see that model on Hugging Face (https://huggingface.co/LoneStriker/zephyr-7b-beta-5.0bpw-h6-exl2), but I am unsure how to load it. If I download the model, where do I put it? In what folder? Is this node automatically searching the Hugging Face cache? If I want to load a new model, say Llama 70B, how do I do so, and into what folder do I put the model itself?

Thanks
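If the loader reads model folders from ComfyUI/models/llm (an assumption here, not something confirmed above), a minimal sketch for pulling the whole EXL2 repository from Hugging Face into a folder the loader can then pick by name:

    from huggingface_hub import snapshot_download

    # Downloads config.json, the tokenizer files and output.safetensors into
    # one folder; the loader would then reference it by its folder name.
    snapshot_download(
        repo_id="LoneStriker/zephyr-7b-beta-5.0bpw-h6-exl2",
        local_dir="ComfyUI/models/llm/zephyr-7b-beta-5.0bpw-h6-exl2",
    )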

ImportError: DLL load failed while importing exllamav2_ext: The specified procedure could not be found.

E:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --port 8192
Total VRAM 11264 MB, total RAM 32725 MB
xformers version: 0.0.21
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 2080 Ti : cudaMallocAsync
VAE dtype: torch.float32
Using xformers cross attention
Traceback (most recent call last):
File "E:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1734, in load_custom_node
module_spec.loader.exec_module(module)
File "", line 883, in exec_module
File "", line 241, in call_with_frames_removed
File "E:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-ExLlama-Nodes_init
.py", line 1, in
from . import exllama, text
File "E:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-ExLlama-Nodes\exllama.py", line 7, in
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
File "E:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\exllamav2_init_.py", line 3, in
from exllamav2.model import ExLlamaV2
File "E:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\exllamav2\model.py", line 12, in
from exllamav2.linear import ExLlamaV2Linear
File "E:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\exllamav2\linear.py", line 5, in
from exllamav2 import ext
File "E:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\python_embeded\lib\site-packages\exllamav2\ext.py", line 14, in
import exllamav2_ext
ImportError: DLL load failed while importing exllamav2_ext: The specified procedure could not be found.

Cannot import E:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-ExLlama-Nodes module for custom nodes: DLL load failed while importing exllamav2_ext: The specified procedure could not be found.

Import times for custom nodes:
0.2 seconds (IMPORT FAILED): E:\ComfyUI_windows_portable_nvidia_cu118_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-ExLlama-Nodes

Starting server

To see the GUI go to: http://127.0.0.1:8192

ModuleNotFoundError: No module named 'exllamav2'

Hi, I'm very new to all this. I've been messing around trying to install your custom node but I keep getting this:
(screenshot attached: Screenshot 2024-02-21 155702)

Some more info that might help: I have Python 3.11.8, Torch 2.2.0+cu118, CUDA 11.8.
The last wheel I've tried: exllama-0.0.18+cu118-cp311-cp311-win_amd64.whl

MAX_TOKENS in error

Hello, nice work. But when I load the example workflow, this comes out:


  • ZuellniExLlamaGenerator 5:
    • Required input is missing: max_tokens
      Output will be ignored
      Failed to validate prompt for output 11:
      Output will be ignored
      invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': 'Required input is missing: max_tokens', 'extra_info': {}}

It seems max_tokens isn't set. Would you please check?
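This usually happens when a saved workflow predates a node update that added the max_tokens input; re-adding the generator node in the UI is the simplest fix. If you would rather patch a saved API-format prompt JSON directly, a minimal sketch (the structure assumes ComfyUI's API prompt export format, and 512 is an arbitrary placeholder value):

    import json

    with open("workflow_api.json", encoding="utf-8") as f:
        prompt = json.load(f)

    # Add the missing input to every generator node that lacks it.
    for node in prompt.values():
        if node.get("class_type") == "ZuellniExLlamaGenerator":
            node["inputs"].setdefault("max_tokens", 512)  # placeholder value

    with open("workflow_api.json", "w", encoding="utf-8") as f:
        json.dump(prompt, f, indent=2)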

IndexError: list index out of range

Hi, I'm trying to install this node on Linux.

I get this error:

Traceback (most recent call last):
  File "/comfyui/nodes.py", line 1866, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/comfyui/custom_nodes/ComfyUI-ExLlama-Nodes/__init__.py", line 1, in <module>
    from . import exllama, text
  File "/comfyui/custom_nodes/ComfyUI-ExLlama-Nodes/exllama.py", line 6, in <module>
    from exllamav2 import *
  File "/usr/local/lib/python3.10/site-packages/exllamav2/__init__.py", line 3, in <module>
    from exllamav2.model import ExLlamaV2
  File "/usr/local/lib/python3.10/site-packages/exllamav2/model.py", line 31, in <module>
    from exllamav2.config import ExLlamaV2Config
  File "/usr/local/lib/python3.10/site-packages/exllamav2/config.py", line 5, in <module>
    from exllamav2.fasttensors import STFile
  File "/usr/local/lib/python3.10/site-packages/exllamav2/fasttensors.py", line 6, in <module>
    from exllamav2.ext import exllamav2_ext as ext_c
  File "/usr/local/lib/python3.10/site-packages/exllamav2/ext.py", line 250, in <module>
    maybe_set_arch_list_env()
  File "/usr/local/lib/python3.10/site-packages/exllamav2/ext.py", line 43, in maybe_set_arch_list_env
    arch_list[-1] += '+PTX'
IndexError: list index out of range

Could you tell me how to fix it?

Thanks,
Jimmy
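The failing line appends '+PTX' to torch's CUDA architecture list, which comes out empty when torch cannot see a CUDA device (for example a CPU-only torch build, or a container started without GPU access). A minimal diagnostic sketch; if no device is visible here, installing a CUDA-enabled torch build or exposing the GPU to the environment is the likely fix (that conclusion is an inference from the traceback, not confirmed by the maintainer):

    import torch

    # The architecture list exllamav2 builds is derived from what torch can
    # see; an empty list triggers the IndexError shown above.
    print("torch cuda build :", torch.version.cuda)
    print("cuda available   :", torch.cuda.is_available())
    print("visible devices  :", torch.cuda.device_count())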
