comfyui-prompt-control's People

Contributors

asagi4, drjkl, haohaocreates, puggleslesser

comfyui-prompt-control's Issues

If the Lora file name has a . in it, the Lora is reported as not found and cannot be used

If the Lora file name contains the . character, it reports that the Lora was not found and the Lora cannot be used.
For example, with the file "PAseerGoldenWindSD1.5-V1.safetensors", when I type lora:PAseerGoldenWindSD1.5-V1:0.5 it shows PromptControl: Lora PAseerGoldenWindSD1.5-V1 not found.
When I rename "PAseerGoldenWindSD1.5-V1.safetensors" to "PAseerGoldenWindSD15-V1.safetensors" and type lora:PAseerGoldenWindSD15-V1:0.5, it works!

Embedding alt syntax does not work

Thanks for the hard work making this project, it's amazing.

I noticed that when I tried to use the alt syntax for embeddings, <emb:xyz>, I would get:

[ERROR] PromptControl: Prompt editing parse error: Error trying to process rule "embedding":

'str' object has no attribute 'value'

I was able to fix the error by modifying the embedding() function to:

def embedding(self, args):
    return "embedding:" + args[0]

I believe omitting the .value attribute works because the function is trying to access a value attribute on a string object, which doesn't exist. I don't know if this is the correct fix, or if the embedding() function is intended to be passed something other than a string.

EDIT: embedding weighting doesn't work with my solution, but doing like <emb:xyz:0.5> doesn't seem to work for me with the deployed version either, so I'm not sure what the right approach is.
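
(A more defensive variant of the workaround above, assuming lark passes either a Token, which carries a .value attribute, or a plain str depending on version; this is a sketch, not a confirmed fix:)

def embedding(self, args):
    arg = args[0]
    # lark Tokens carry the matched text in .value; plain strings do not
    text = arg.value if hasattr(arg, "value") else str(arg)
    return "embedding:" + text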

Does this work with SD 1.5?

I loaded up the example workflow and tried it with an SD 1.5 checkpoint, but it throws an error from prepareXL. Is this an XL-only node? (I didn't see anything in the docs that says so, but I could have easily missed it!)

Using AREA raises a RuntimeError with KSamplerAdvanced

When I use AREA in a prompt, I get:

ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/nodes.py", line 1270, in sample
    return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/nodes.py", line 1206, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 109, in animatediff_sample
    return orig_comfy_sample(model, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 22, in informative_sample
    raise e
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/sample.py", line 97, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_smZNodes/__init__.py", line 131, in KSampler_sample
    return _KSampler_sample(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 785, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler(), sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_smZNodes/__init__.py", line 139, in sample
    return _sample(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 690, in sample
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 630, in sample
    samples = getattr(k_diffusion_sampling, "sample_{}".format(sampler_name))(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **extra_options)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/k_diffusion/sampling.py", line 701, in sample_dpmpp_2m_sde_gpu
    return sample_dpmpp_2m_sde(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, eta=eta, s_noise=s_noise, noise_sampler=noise_sampler, solver_type=solver_type)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/k_diffusion/sampling.py", line 613, in sample_dpmpp_2m_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 323, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/k_diffusion/external.py", line 125, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/k_diffusion/external.py", line 151, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_smZNodes/smZNodes.py", line 795, in apply_model
    out = super().apply_model(x, timestep, cc, uu, cond_scale, cond_concat, model_options, seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 311, in apply_model
    out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 289, in sampling_function
    cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 265, in calc_cond_uncond_batch
    output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_smZNodes/modules/sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
                                                                 ^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_smZNodes/modules/sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/model_base.py", line 63, in apply_model
    return self.diffusion_model(xc, t, context=context, y=c_adm, control=control, transformer_options=transformer_options).float()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/SeargeSDXL/modules/custom_sdxl_ksampler.py", line 70, in new_unet_forward
    x0 = old_unet_forward(self, x, timesteps, context, y, control, transformer_options, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 625, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 61, in forward_timestep_embed
    x = layer(x)
        ^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Calculated padded input size per channel: (2 x 2). Kernel size: (3 x 3). Kernel size can't be greater than actual input size

EDIT: It seems to only happen when combining prompts, like "Prompt1 AND AREA(x1 x2, y1 y2, weight) another prompt". Am I just using the syntax incorrectly here?

Minimum `lark` version?

I kept getting "float() argument must be a string or a real number, not 'Tree'" until I updated lark from 1.1.7 to 1.1.9.
I haven't gone back to check whether 1.1.8 would also have worked.

Do you know what the minimum version should be in requirements?
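
(For anyone checking their own install, a quick sketch; it assumes nothing beyond lark exposing __version__:)

import lark
print(lark.__version__)  # per this report, 1.1.9 works and 1.1.7 does not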

`BREAK` support

There is already AND support, which is equivalent to ConditioningAverage; however, there is no BREAK support, which would map to ConditioningConcat. Furthermore, because of how Comfy handles this, this node cannot be combined with ConditioningConcat, because AND makes the prompt return multiple conds, of which only the first is taken.
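
(For reference, a minimal sketch of the mapping, assuming ComfyUI's [[tensor, metadata], ...] conditioning format and mirroring what the built-in ConditioningConcat node does: token embeddings concatenated along the sequence dimension, keeping the first prompt's metadata.)

import torch

def concat_conds(cond_to, cond_from):
    # What a BREAK would have to produce: append cond_from's token
    # embeddings to each entry of cond_to along the token dimension.
    from_tensor = cond_from[0][0]
    return [[torch.cat((tensor, from_tensor), dim=1), meta] for tensor, meta in cond_to]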

ModuleNotFoundError: No module named 'lark'

Hey there,

I'm probably just dumb, but Comfy asks for "lark" and I installed it via "pip install lark" in Comfy's directory, but now it's not being found by Comfy.

When I try to reinstall I get "Requirement already satisfied: lark in c:\python311\lib\site-packages (1.1.7)".

(programmer novice here, please excuse me if this is basic)
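
(A likely cause, given the "c:\python311" path above: pip installed lark into the system Python, while ComfyUI, especially the portable build, runs its own embedded interpreter. A quick check, sketched here with an illustrative path:)

# Paste into any module ComfyUI loads (or a Python console started the
# same way ComfyUI is) to see which interpreter it actually runs:
import sys
print(sys.executable)
# Then install lark for that exact interpreter in a terminal, e.g.:
#   C:\...\ComfyUI_windows_portable\python_embeded\python.exe -m pip install lark
# (the path above is illustrative; use whatever print() showed)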

Example SDXL workflow

Please provide an example SDXL workflow. I'm trying to integrate SDXL, and I'm having some trouble understanding what changes I need to make to use these nodes. Thanks!

[FR] Improved examples

Describe what's missing
A more visual guide for entry-level people. I'm really having a hard time understanding how scheduling LoRAs works here. More specific breakdowns in the README, with concrete examples, would really help.
I'm not expecting a YouTube tutorial, but concrete examples with images and values in the workspace would be very welcome. Very keen to learn.


[BUG] animatediff?

After extensive attempts with animatediff, I'm a little bit confused.

Sometimes it works, sometimes it doesn't.
AREA never works with animatediff; it fails with an error.
MASK sometimes works, sometimes doesn't. Sometimes you get only noise, sometimes you get a succession of images with zero temporal coherence, sometimes you get an animation but its content (relative to the mask) seems to be ignored, and sometimes it just works.

The most amazingly incomprehensible part is that this weird behavior seems to depend on changing any parameter in the workflow.

For example, using this workflow: weird.json
You get this:
AnimateDiff_00035.webm
which is pretty amazing.

But the prompt was:

couple, marriage proposal 

AND MASK(.65 1, 0 1, .75)
female, smile, (dress:1.3)

AND MASK(0 .35, 0 1, .75)
one_knee down, bearded male, casual clothes

So the male and female characters' positions are switched.

Then you hit Queue Prompt again without changing anything (so the only thing that changes is the seed), and you get this:
AnimateDiff_00039.webm
No temporal coherence.

Now you change the prompt by just adding a single space anywhere:
AnimateDiff_00040.webm

Queue prompt again, without any other change:
AnimateDiff_00041.webm

Change the sqrt_linear beta_schedule to lcm:
AnimateDiff_00044.webm

queue again:
AnimateDiff_00045.webm

Change the animatediff end_percent from 1 to 0.999:
AnimateDiff_00046.webm

etc...
Changing the resolution has no effect on this weird behavior.

What's going on here?

[BUG]

Describe the bug
I updated ComfyUI and it seems that it broke the node.

To Reproduce
Run the prompt scheduler with [ Dog:cat:0.35].
The example workflow doesn't work anymore.

"Error occurred when executing PromptToSchedule:

float() argument must be a string or a real number, not 'Tree'
"

[FR] Programmatic Features (without needing special syntax)

Describe what's missing
I'd like to extend these nodes and provide alternative methods that aren't as complex/error-prone as writing a string with special syntax for all the various features like scheduling, alternating, sequences, etc. At first I thought about generating the string from user-provided tokens, but then realized that I'd likely run into similar issues with the syntax getting messed up. This isn't so much a feature request as a dialogue about how I might contribute to this in a more programmatic way and avoid "special strings" and string parsing. Thanks for your time!

A very simple example is a node that takes two strings and either a slider or a number, and emits [cat|dog:0.05]; or rather, if I contributed directly to this repo, a way to take that information and "build up" the prompt/conditioning before eventually "finalizing" it (the Builder Pattern).
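
Something like this hypothetical sketch (names are illustrative, not part of the extension):

class PromptBuilder:
    # Accumulate scheduling operations programmatically instead of
    # hand-writing the special syntax.
    def __init__(self):
        self._parts = []

    def text(self, s):
        self._parts.append(s)
        return self

    def alternate(self, a, b, step=0.1):
        # Emits the [a|b:step] alternation syntax
        self._parts.append(f"[{a}|{b}:{step}]")
        return self

    def build(self):
        return " ".join(self._parts)

# PromptBuilder().text("a photo of a").alternate("cat", "dog", 0.05).build()
# -> "a photo of a [cat|dog:0.05]"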

Clarification on SDXL()

Regarding this section of the ReadMe:

https://github.com/asagi4/comfyui-prompt-control#sdxl

Is SDXL() automatically added if this node detects an SDXL model passed to it? What if I leave that function out (of the prompt)?

The reason I'm asking is that I'm trying to wire up a combined SD1.5/SDXL workflow that can intelligently switch a few things around depending on the name of the detected checkpoint. Here's a WIP workflow with some WIP nodes as well:

(screenshot)

What I'd like to do is use a node like Add CLIP SDXL Params on the conditioning, feeding the width/height information in from elsewhere. I think this will be easier than doing text manipulation to build a "function string" to pass along to PromptControlSimple.

(screenshot)

clarification on scheduling syntax

The readme describes the following:

a [b:c:0.3,0.7]

As

  • steps 0.0 - 0.3 (exclusive) = a
  • steps 0.3 (inclusive) - 0.7 = a b
  • steps 0.7 - 1.0 = c

I think there's a typo: shouldn't the last one be "a c" instead of just "c"?
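
(For reference, a tiny sketch of how that schedule resolves under the reading suggested above, with the two numbers delimiting the middle segment:)

def resolve(t):
    # Evaluates "a [b:c:0.3,0.7]" at normalized step t, with the last
    # segment reading "a c" as suggested above
    if t < 0.3:
        return "a"
    elif t < 0.7:
        return "a b"
    return "a c"

print([resolve(t) for t in (0.1, 0.5, 0.9)])  # ['a', 'a b', 'a c']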

[BUG] Weighting after some BREAKs is broken with comfy++ clip encoding

Hello, I've found a pretty weird bug(?) when using comfy++ in STYLE(). Weighting in some BREAK segments seems to get ignored when used with comfy++ CLIP encoding, for some reason.

To Reproduce:
Try something like this: STYLE(comfy++) A dog BREAK (a cat:1) BREAK (a snake:9).
The output should be all artifacts, but it's not. It doesn't change no matter what you put in place of "a snake", and no matter the weight.
The comfy, A1111, and compel styles all work fine.

Expected behavior:
Process all tokens properly and not ignore the weighting after some BREAKs.

CUT errors with 'not enough values to unpack'

Hi,

I have installed Cutoff and Prompt Control via ComfyUI Manager. When I use a prompt like man with [CUT:green eyes:green] and [CUT:purple hair:purple] on PromptControlSimple, I get the following error:

Error occurred when executing PromptControlSimple:

not enough values to unpack (expected 3, got 2)

File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_aio.py", line 33, in apply
pos_cond = pos_filtered = control_to_clip_common(self, clip, pos_sched, lora_cache, cond_cache)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 535, in control_to_clip_common
cond = encode(c)
^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 511, in encode
cond_cache[cachekey] = do_encode(clip, prompt)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 409, in do_encode
cond, pooled = encode_prompt(clip, prompts[0], style, normalization)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 254, in encode_prompt
return encode_regions(clip, tokens, regions, style, normalization)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 228, in encode_regions
(r,) = finalize_clip_regions(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Cutoff\cutoff.py", line 217, in finalize_clip_regions
base_embedding_full, pool = encode_from_tokens(clip, base_weighted_tokens, token_normalization, weight_interpretation, True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Cutoff\cutoff.py", line 184, in encode_from_tokens
embs_l, _ = advanced_encode_from_tokens(tokenized['l'],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Cutoff\adv_encode.py", line 162, in advanced_encode_from_tokens
tokens = [[t for t,_,_ in x] for x in tokenized]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Cutoff\adv_encode.py", line 162, in
tokens = [[t for t,_,_ in x] for x in tokenized]
^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Cutoff\adv_encode.py", line 162, in
tokens = [[t for t,_,_ in x] for x in tokenized]
^^^^^

If I add another semicolon to the end of each CUT block, it works, but the result isn't the same as using the Cutoff nodes directly.

Thanks,
Unai

[BUG] Using lora with prompt is much slower than with nodes

Description

When using a LoRA via the prompt, it is always slower than using a LoRA with Load LoRA nodes,
although in both cases the time taken per step is similar. Maybe because of loading/patching?

My Setup

CPU: AMD Ryzen R7 5700X
GPU: AMD RX 6700XT
ComfyUI args: --normalvram --use-pytorch-cross-attention --disable-xformers
ComfyUI is running on Linux, so it uses ROCm, not DirectML

To Reproduce

(workflow screenshots)
With this workflow, which loads 2 LoRAs via the prompt, the 1st queue takes 21s; afterwards it's around 15s.

Expected Behavior

(workflow screenshots)
With this workflow, which loads 2 LoRAs using Load LoRA nodes, the 1st queue takes 13s; afterwards it's around 8s.

Error occurred when executing PromptToSchedule:

Error occurred when executing PromptToSchedule:

float() argument must be a string or a real number, not 'Tree'

File "F:\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-0246\utils.py", line 381, in new_func
res_value = old_func(*final_args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_other.py", line 129, in parse
schedules = parse_prompt_schedules(text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\parser.py", line 382, in parse_prompt_schedules
return PromptSchedule(prompt)
^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\parser.py", line 277, in init
self.interpolations, self.parsed_prompt = self._parse()
^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\parser.py", line 289, in _parse
interpolation_steps, steps = get_steps(tree)
^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\parser.py", line 129, in get_steps
CollectSteps().visit(tree)
File "F:\ComfyUI\python_embeded\Lib\site-packages\lark\visitors.py", line 316, in visit
self._call_userfunc(subtree)
File "F:\ComfyUI\python_embeded\Lib\site-packages\lark\visitors.py", line 294, in _call_userfunc
return getattr(self, tree.data, self.default)(tree)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\parser.py", line 105, in scheduled
tree.children[i] = tostep(tree.children[i])
^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\parser.py", line 94, in tostep
w = float(s) * 100
^^^^^^^^

ERROR WHEN TRYING TO LOAD TWO LORAS AT DIFFERENT STEPS

Hello,

First of all, thank you for your work on these nodes. It has a lot of potential.

I am having issues when trying to load two loras at different steps of generation. The prompt I'm using is "[p1hf car<lora:p1hf car_V1:0.5>:cnptn car<lora:cnptn car_V1:0.5>:0.5]"

Here is a snapshot of the nodes:
Screenshot 2023-11-03 160813

(I USE ANYWHERE NODES, SO ALL INPUTS ARE CONNECTED PROPERLY EVEN IF THEY'RE NOT VISIBLE)

Error occurred when executing KSamplerAdvanced:

Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_addmm)

File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1271, in sample
return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)

File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1207, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\hijack.py", line 35, in pc_sample
r = cb(orig_sampler, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_lora.py", line 104, in sampler_cb
s = orig_sampler(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
raise e
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 100, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\hijack.py", line 79, in sample
return super().sample(
^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 694, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler(), sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 600, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 560, in sample
samples = getattr(k_diffusion_sampling, "sample_{}".format(sampler_name))(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 277, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 267, in forward
return self.apply_model(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 264, in apply_model
out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 252, in sampling_function
cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 226, in calc_cond_uncond_batch
output = model_function(input_x, timestep
, **c).chunk(batch_chunks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 140, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\custom_nodes\FreeU_Advanced\nodes.py", line 173, in __temp__forward
h = forward_timestep_embed(module, h, emb, context, transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 56, in forward_timestep_embed
x = layer(x, context, transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 557, in forward
x = self.proj_in(x)
^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 18, in forward
return torch.nn.functional.linear(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

BUG: apply_lora_for_step() missing 5 required positional arguments: 'step', 'total_steps', 'state', 'original_model', and 'lora_cache'

Traceback (most recent call last):
  File "D:\SDLocal\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\SDLocal\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\SDLocal\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\SDLocal\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1344, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "D:\SDLocal\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1314, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "D:\SDLocal\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\hijack.py", line 39, in pc_sample
    r = cb(orig_sampler, *args, **kwargs)
  File "D:\SDLocal\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_lora.py", line 67, in sampler_cb
    apply_lora_for_step(start_step)
TypeError: apply_lora_for_step() missing 5 required positional arguments: 'step', 'total_steps', 'state', 'original_model', and 'lora_cache'

[FR] How to change weights gradually?

Firstly, thank you very much for creating this incredible project!
I'm having a bit of difficulty making sure I'm doing things correctly, so if you have the time, could you please provide an example of a prompt that would gradually increase the weight from (yellow cat:1) to (yellow cat:2)?
Additionally, would it be possible to do this with the weights of a LoRA?
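
(One way to frame the question: a discrete approximation could chain the basic [a:b:t] scheduling syntax quoted elsewhere on this page. The helper below is purely illustrative and assumes nested scheduling works, which is unverified:)

def stepped_weight_schedule(text, w_start, w_end, steps=4):
    # Approximate a gradual weight change with discrete schedule edits;
    # steps must be >= 2
    ws = [w_start + (w_end - w_start) * i / (steps - 1) for i in range(steps)]
    prompt = f"({text}:{ws[-1]:.2f})"
    for i in range(steps - 2, -1, -1):
        prompt = f"[({text}:{ws[i]:.2f}):{prompt}:{(i + 1) / steps:.2f}]"
    return prompt

# stepped_weight_schedule("yellow cat", 1.0, 2.0) ->
# "[(yellow cat:1.00):[(yellow cat:1.33):[(yellow cat:1.67):(yellow cat:2.00):0.75]:0.50]:0.25]"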

[BUG] `Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!`

Before you start
Verify the following:

  • 1. That your ComfyUI is up to date
  • 2. That the extension is up to date
  • 3. That the issue isn't in the "Known Issues" section in the README

Describe the bug

Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

CPU/CUDA error when running ComfyUI with the --gpu-only flag, using the AIO node and the BlenderNeko Cutoff integration.

Hard to reproduce since it seems to happen when running close to the VRAM limit.

I recognize this might need to be addressed in the other repo, but figured I'd raise it here first.
I hit this intermittently; I can update this issue if I find more details that could help.

Error stack (Path pruned)

Error occurred when executing PromptControlSimple:

Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

  File "X:\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "X:\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "X:\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "X:\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_aio.py", line 32, in apply
    pos_cond = pos_filtered = control_to_clip_common(clip, pos_sched, lora_cache, cond_cache)
  File "X:\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 674, in control_to_clip_common
    cond = encode(c)
  File "X:\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 650, in encode
    cond_cache[cachekey] = do_encode(clip, prompt, schedules.defaults, schedules.masks)
  File "X:\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 570, in do_encode
    cond, pooled = encode_prompt(clip, prompt, style, normalization)
  File "X:\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 337, in encode_prompt
    return encode_regions(clip, tokens, regions, style, normalization)
  File "X:\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 246, in encode_regions
    (r,) = finalize_clip_regions(
  File "X:\ComfyUI\custom_nodes\ComfyUI_Cutoff\cutoff.py", line 225, in finalize_clip_regions
    base_embedding_full, pool = encode_from_tokens(clip, base_weighted_tokens, token_normalization, weight_interpretation, True)
  File "X:\ComfyUI\custom_nodes\ComfyUI_Cutoff\cutoff.py", line 192, in encode_from_tokens
    embs_l, _ = advanced_encode_from_tokens(tokenized['l'],
  File "X:\ComfyUI\custom_nodes\ComfyUI_Cutoff\adv_encode.py", line 203, in advanced_encode_from_tokens
    embs, pooled = from_masked(unweighted_tokens, weights, word_ids, base_emb, length, encode_func)
  File "X:\ComfyUI\custom_nodes\ComfyUI_Cutoff\adv_encode.py", line 107, in from_masked
    pooled = (pooled - pooled_start) * (ws - 1)

To Reproduce
I need a node that just fills the VRAM...
Tried a workflow that uses 4 different checkpoints, but it fails with "Error occurred when executing CheckpointLoaderSimple: Allocation on device 0 would exceed allowed memory. (out of memory)" instead.

Expected behavior
Prompt control conditioning with cutoff applied, no errors.

Example Workflow Doesn't Load

Quite possibly I am doing something stupid, but I can't get the example workflow to load on latest ComfyUI + latest comfyui-prompt-control.

Repro:

Expected Behavior:

  • Example workflow loads

Experienced Behavior:

  • ComfyUI UI remains blank, no error message in browser or console

PromptControlSimple not in Readme

Is PromptControlSimple part of this repo? If so, is it a deprecated node? It's not in the README, so I'm not sure what the expected behavior is.

(screenshot)

[FR] Custom mask support

Describe what's missing
A way to use nonrectangular masks.

Additional context
I know it might seem slightly against the ethos of the extension; however, it would be convenient to be able to feed custom masks (e.g. hand-drawn ones) into a node. Unless I'm missing something, the existing syntax is limited to rectangular masks, and specifying them is a touch clunky. If masks were taken as input, they could then be activated within the text, perhaps with a syntax like (MASK:mask1).

This would provide a similar functionality to Automatic1111's regional prompter extension's custom masks option.
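
(For reference, a sketch of what attaching an input mask to a conditioning might involve, assuming ComfyUI's [[tensor, metadata], ...] conditioning format and mirroring the built-in ConditioningSetMask node; the function name is illustrative:)

def attach_mask(cond, mask, strength=1.0):
    # Store the mask and its strength in each conditioning entry's
    # metadata, as ComfyUI's ConditioningSetMask node does
    out = []
    for tensor, meta in cond:
        meta = dict(meta)
        meta["mask"] = mask
        meta["mask_strength"] = strength
        meta["set_area_to_bounds"] = False
        out.append([tensor, meta])
    return out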

Does Prompt control work with SDXL?

Hello, I was wondering if Prompt Control works with SDXL. If it doesn't, that's fine; I'm just wondering if I have to set it up differently.
I was trying to use Prompt Control to load LoRAs from the prompt.
It works great for SD 1.5 models; I love it, it's amazing. I don't get any errors and it seems to load the LoRAs fine.
But I get an error message when trying to use the same setup with SDXL.
I'll put screenshots of the error I'm getting and how I have the nodes hooked up below. Thank you.

Screenshot 2023-12-13 023337
Screenshot 2023-12-13 022530

When using the LoRA schedule syntax, the second pass shows no LoRA effect at all

Hello. Your node was great enough to pull me back to ComfyUI from Forge.

I have a problem with the notation:
I noticed that when using the LoRA schedule syntax, the first pass of example.json worked as expected, but the second pass produced no LoRA effect at all.

It's difficult to show in an example, but we applied a LoRA that displays a realistic woman on top of a cartoon-type model (e.g. animagine), with extreme settings:
[<lora:realmodel_xl_v01:1.5:1.6>::0.99]

In this case, the LoRA should be active for 99% of the steps, producing a realistic woman, and the anime-style effect should only apply for the remaining 1%.

However, while the first pass produces a realistic woman as described, the second pass shows none of the 99% realistic-woman effect and produces an animated woman.

This also occurs with topologies other than example.json.
2024-03-10 22 08 03

[Feature Req] Limit LORA influence when using MASK or AREA

Hello and thanks for making these nodes!
They are really interesting and I've been playing around with them.

So I'm testing the MASK and AREA modes, and I've come to wonder whether it's already possible to limit the influence of a LoRA to only the MASK or AREA regions, without the LoRA baking the other parts of the generation.

If not, could something similar be implemented?
On Auto1111, the Lora Masks and Composable Lora extensions achieve the desired effect of only allowing a LoRA to work in a specific region without frying the rest of the generation.

Thanks a lot for your time!

Recommendations for composite SD1.5/SDXL workflow

I have a SD1.5 workflow that seems to work really well right now. I'd like to be able to use SDXL as well and seamlessly switch between the models without having to do much work. I've been told that using the same prompt for CLIP_L and CLIP_G is perfectly fine and still works pretty well. I was wondering if you have any recommendations or examples for a workflow that works well for both SD1.5 and SDXL without manual modifications when switching to an SDXL model?

I was considering doing some text manipulation: looking for sdxl or sd_xl in the checkpoint name and, based on that, duplicating the prompt and wrapping it with CLIP_L( .. ) CLIP_G( .. ). But before I go through all that effort, I was wondering if you have any recommendations.
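
(Roughly this, as an illustrative sketch only; the function name is made up:)

def adapt_prompt(checkpoint_name: str, prompt: str) -> str:
    # If the checkpoint name looks like an SDXL model, duplicate the
    # prompt into CLIP_L()/CLIP_G(); otherwise pass it through unchanged.
    name = checkpoint_name.lower()
    if "sdxl" in name or "sd_xl" in name:
        return f"CLIP_L({prompt}) CLIP_G({prompt})"
    return prompt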

Just looking to use AND?

I'm pretty new to ComfyUI, so sorry if this is a dumb question.

My understanding is that the AND keyword separates the prompt such that the two parts won't interfere with one another. Is that the case or am I misunderstanding?

If I want that functionality, do I still need to use the scheduling nodes, or should it work in the regular CLIP Text Encode upon install?

Alternating prompt produces similar results

When you prompt [A|B] and [B|A] in Automatic1111, you inevitably get vastly different results, since whether the first step is A or B drastically changes the flow of the whole picture. However, doing the same thing with this node produces completely identical pictures, which shouldn't be happening. Something is wrong, somewhere, somehow.

Reproducible workflow: https://hastebin.pw/WF91epqcW3
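
(For reference, the per-step expansion that A1111-style alternation implies; a sketch of the expected semantics, not this extension's code:)

def alternating(options, step):
    # [A|B] swaps the active prompt every sampler step
    return options[step % len(options)]

print([alternating(["A", "B"], s) for s in range(4)])  # ['A', 'B', 'A', 'B']
print([alternating(["B", "A"], s) for s in range(4)])  # ['B', 'A', 'B', 'A']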

nodes don't work with SDXL

Hi,

When I use the nodes with an SDXL model, I get the following error:

Error occurred when executing ScheduleToCond:

list indices must be integers or slices, not str

File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 148, in apply
r = (control_to_clip_common(self, clip, prompt_schedule),)
File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 330, in control_to_clip_common
cond = encode(c)
File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 302, in encode
cond_cache[cachekey] = do_encode(clip, prompt)
File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 221, in do_encode
cond, pooled = encode_prompt(clip, prompts[0], style, normalization)
File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 213, in encode_prompt
return clip.encode_from_tokens(tokens, return_pooled=True)
File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 586, in encode_from_tokens
cond, pooled = self.cond_stage_model.encode_token_weights(tokens)
File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\comfy\sdxl_clip.py", line 60, in encode_token_weights
token_weight_pairs_g = token_weight_pairs["g"]

Not sure if it is planned to make it support SDXL, but if not, a hint in the docs would be great ;)
thanks!

[BUG] float() argument must be a string or a real number, not 'Tree'

Before you start
Verify the following:

  1. That your ComfyUI is up to date - YES
  2. That the extension is up to date - YES
  3. That the issue isn't in the "Known Issues" section in the README - YES

Describe the bug
When attempting to schedule (whether a LoRA or just part of the prompt), I get the following error. This was working fine until recently. I'm not sure if it changed when I updated ComfyUI or when I installed other custom nodes. However, it still occurs even when I disable all other custom nodes through the manager (other than the manager itself).

I think this is related to lark, here are the versions I have installed in my venv:

lark 1.1.9
lark-parser 0.12.0

I've tried debugging this. The first time tostep is called (when the value of i in CollectSteps.scheduled is -1), the value of s is Token('NUMBER', '0.5') and it succeeds.

The second time tostep is called (when i = -2), the value of s is Tree('prompt', [Token('PLAIN', 'by akira toriyama')]).


Error occurred when executing PromptToSchedule:

float() argument must be a string or a real number, not 'Tree'

File "E:\comfyui2\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui2\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui2\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui2\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_other.py", line 152, in parse
schedules = parse_prompt_schedules(text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui2\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\parser.py", line 394, in parse_prompt_schedules
return PromptSchedule(prompt)
^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui2\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\parser.py", line 277, in init
self.interpolations, self.parsed_prompt = self._parse()
^^^^^^^^^^^^^
File "E:\comfyui2\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\parser.py", line 292, in _parse
interpolation_steps, steps = get_steps(tree)
^^^^^^^^^^^^^^^
File "E:\comfyui2\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\parser.py", line 129, in get_steps
CollectSteps().visit(tree)
File "e:\comfyui2\ComfyUI\venv\Lib\site-packages\lark\visitors.py", line 316, in visit
self._call_userfunc(subtree)
File "e:\comfyui2\ComfyUI\venv\Lib\site-packages\lark\visitors.py", line 294, in _call_userfunc
return getattr(self, tree.data, self.default)(tree)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui2\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\parser.py", line 105, in scheduled
tree.children[i] = tostep(tree.children[i])
^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfyui2\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\parser.py", line 94, in tostep
w = float(s) * 100
^^^^^^^^

To Reproduce
Load example.json from the /workflows folder and attempt to generate.

Support folder when loading lora?

In parser.py, line 29, we can see this:

FILENAME: /[^<>:\/\\]+/

This actually prevents the use of folders for organizing LoRAs.
Maybe change it to FILENAME: /[^<>:]+/ ?
Is there any reason for filtering out slashes and backslashes?
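
(For illustration, how the two rules differ on a subfolder path, using plain re as a stand-in for the lark terminal:)

import re

current = re.compile(r"[^<>:/\\]+")   # the rule quoted above
proposed = re.compile(r"[^<>:]+")     # the suggested relaxation

print(current.fullmatch("styles/mylora"))   # None, so the lora is "not found"
print(proposed.fullmatch("styles/mylora"))  # matches, so subfolders would work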
