
comfyui-prompt-control's Introduction

ComfyUI prompt control

Nodes for LoRA and prompt scheduling that make basic operations in ComfyUI completely prompt-controllable.

LoRA and prompt scheduling should produce identical output to the equivalent ComfyUI workflow using multiple samplers or the various conditioning manipulation nodes. If you find situations where this is not the case, please report a bug.

What can it do?

Things you can control via the prompt:

  • Prompt editing and filtering without multiple samplers
  • LoRA loading and scheduling (including LoRA block weights)
  • Prompt masking and area control, combining prompts and interpolation
  • SDXL parameters
  • Other miscellaneous things

This example workflow implements a two-pass setup illustrating most scheduling features.

The tools in this repository combine well with the macro and wildcard functionality in comfyui-utility-nodes.

Requirements

You need to have lark installed in your Python environment for parsing to work (if you reuse A1111's venv, it'll already be there).

If you use the portable version of ComfyUI on Windows with its embedded Python, you must open a terminal in the ComfyUI installation directory and run the command:

.\python_embeded\python.exe -m pip install lark

Then restart ComfyUI afterwards.

Notable changes

I try to avoid behavioural changes that break old prompts, but they may happen occasionally.

  • 2024-02-02 The node will now automatically enable offloading LoRA backup weights to the CPU if you run out of memory during LoRA operations, even when --highvram is specified. This change persists until ComfyUI is restarted.
  • 2024-01-14 Multiple CLIP_L instances are now joined with a space separator instead of concatenated.
  • 2024-01-09 AITemplate support dropped. I don't recommend or test AITemplate anymore. Use Stable-Fast instead (see below for info)
  • 2024-01-08 Prompt control now enables in-place weight updates on the model. This shouldn't affect anything, but increases performance slightly. You can disable this by setting the environment variable PC_NO_INPLACE_UPDATE to any non-empty value.
  • 2023-12-28 MASK now uses ComfyUI's mask_strength attribute instead of calculating it on its own. This changes its behaviour slightly.
  • 2023-12-06: Removed JinjaRender, SimpleWildcard, ConditioningCutoff, CondLinearInterpolate and StringConcat. For the first two, see this repository for mostly-compatible implementations.
  • 2023-10-04: STYLE:... syntax changed to STYLE(...)

Note on how schedules work

ComfyUI does not use the step number to determine whether to apply conds; instead, it uses the sampler's timestep value which is affected by the scheduler you're using. This means that when the sampler scheduler isn't linear, the schedules generated by prompt control will not be either.

Currently there doesn't seem to be a good way to change this.

You can try using the PCSplitSampling node to enable an alternative method of sampling.

Scheduling syntax

Syntax is like A1111 for now, but only fractions are supported for steps.

a [large::0.1] [cat|dog:0.05] [<lora:somelora:0.5:0.6>::0.5]
[in a park:in space:0.4]
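
Reading the example: [large::0.1] drops large from the prompt at 0.1, [cat|dog:0.05] alternates between cat and dog every 0.05, [<lora:somelora:0.5:0.6>::0.5] applies the LoRA only until 0.5, and [in a park:in space:0.4] switches from in a park to in space at 0.4.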

You can also use a [b:c:0.3,0.7] as a shortcut. The prompt will be a until 0.3, a b until 0.7, and then a c. [a:0.1,0.4] is equivalent to [a::0.1,0.4]

LoRA loading

LoRAs can be loaded by referring to the filename without extension; subdirectories will also be searched. For example, <lora:cats:1> will match both cats.safetensors and sd15/animals/cats.safetensors. If there are multiple LoRAs with the same name, the first match will be loaded.

Alternatively, the name can include the full directory path relative to ComfyUI's search paths, without extension: <lora:XL/sdxllora:0.5>. In this case, the full path must match.

If no match is found, the node will try to replace spaces with underscores and search again. That is, <lora:cats and dogs:1> will find cats_and_dogs.safetensors. This helps with some autocompletion scripts that replace underscores with spaces.

Finally, you can give the exact path (including the extension) as shown in LoRALoader.
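
To summarize the matching rules (filenames are illustrative):

<lora:cats:1> — filename search; matches cats.safetensors in any subdirectory
<lora:XL/sdxllora:0.5> — the full path relative to the search paths must match
<lora:cats and dogs:1> — falls back to cats_and_dogs.safetensors if no direct match exists
<lora:sd15/animals/cats.safetensors:1> — exact path, including the extension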

Alternating

Alternating syntax is [a|b:pct_steps], causing the prompt to alternate every pct_steps. pct_steps defaults to 0.1 if not specified. You can also have more than two options.
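
For example, [cat|dog|tiger:0.2] produces cat until 0.2, dog until 0.4, tiger until 0.6, then cycles back to cat, and so on.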

Sequences

The syntax [SEQ:a:N1:b:N2:c:N3] is shorthand for [a:[b:[c::N3]:N2]:N1], i.e. it switches from a to b to c to nothing at the specified points in sequence.
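
For example, [SEQ:cat:0.3:dog:0.6] expands to [cat:[dog::0.6]:0.3]: cat until 0.3, dog until 0.6, and nothing afterwards.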

Might be useful with Jinja templating (see https://github.com/asagi4/comfyui-utility-nodes). For example:

[SEQ<% for x in steps(0.1, 0.9, 0.1) %>:<lora:test:<= sin(x*pi) + 0.1 =>>:<= x =><% endfor %>]

generates a LoRA schedule based on a sine wave.

Tag selection

Using the FilterSchedule node, in addition to step percentages, you can use a tag to select part of an input:

a large [dog:cat<lora:catlora:0.5>:SECOND_PASS]

Set the tags parameter in the FilterSchedule node to filter the prompt. If the tag matches any tag in tags (comma-separated), the second option is returned (cat, in this case, with the LoRA). Otherwise, the first option is chosen (dog, without the LoRA).

The values in tags are case-insensitive, but the tags in the input must be uppercase A-Z and underscores only, or they won't be recognized. That is, [dog:cat:hr] will not work.

For example, a prompt

a [black:blue:X] [cat:dog:Y] [walking:running:Z] in space

with tags x,z would result in the prompt a blue cat running in space

Prompt interpolation

a red [INT:dog:cat:0.2,0.8:0.05] will attempt to interpolate the tensors for a red dog and a red cat between the specified range in as many steps of 0.05 as will fit.
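
Roughly, with the range 0.2 to 0.8 and a step of 0.05, the interpolation points fall at 0.2, 0.25, 0.3, and so on up to 0.8, each blending the two tensors progressively further from a red dog towards a red cat.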

SDXL

The nodes do not treat SDXL models specially, but there are some utilities that enable SDXL specific functionality.

You can use the function SDXL(width height, target_width target_height, crop_w crop_h) to set SDXL prompt parameters. SDXL() is equivalent to SDXL(1024 1024, 1024 1024, 0 0) unless the default values have been overridden by PCScheduleSettings.

To set the clip_l prompt, as with CLIPTextEncodeSDXL, use the function CLIP_L(prompt text goes here).
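
For example (illustrative values):

SDXL(832 1216, 832 1216, 0 0) CLIP_L(a cat) a detailed photo of a cat sitting on a wall

sets the SDXL size parameters, encodes a cat with clip_l, and uses the rest of the prompt as the clip_g prompt.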

Things to note:

  • Multiple instances of CLIP_L are joined with a space. That is, CLIP_L(foo)CLIP_L(bar) is the same as CLIP_L(foo bar)
  • BREAK isn't supported inside CLIP_L; it'll just parse as the plain word BREAK.
  • Similarly, AND inside CLIP_L does not do anything sensible; CLIP_L(foo AND bar) will parse as the two prompts CLIP_L(foo and bar).
  • CLIP_L and SDXL have no effect on SD 1.5.
  • The rest of the prompt becomes the clip_g prompt.
  • If there is no CLIP_L or SDXL, the prompts will work as with CLIPTextEncode.

Other syntax:

  • <emb:xyz> is alternative syntax for embedding:xyz to work around a syntax conflict with [embedding:xyz:0.5] which is parsed as a schedule that switches from embedding to xyz.

  • The keyword BREAK causes the prompt to be tokenized in separate chunks, which results in each chunk being individually padded to the text encoder's maximum token length. This is mostly equivalent to the ConditioningConcat node.
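
For example, a forest landscape BREAK a small red fox (illustrative prompt) encodes the two parts as separately padded chunks and concatenates the results, much like chaining ConditioningConcat after two CLIPTextEncode nodes.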

Combining prompts

AND can be used to combine prompts. It does a weighted sum of each prompt; you can set a weight at the end of each one:

cat :1 AND dog :2

Weights default to 1 and are normalized so that a:2 AND b:2 is equal to a AND b. AND is processed after schedule parsing, so you can change the weight mid-prompt: cat:[1:2:0.5] AND dog

If COMFYAND() appears in the prompt, AND behaves like ConditioningCombine instead, but in practice this seems to just be slower while producing the same output.

Functions

There are some "functions" that can be included in a prompt to do various things.

Functions have the form FUNCNAME(param1, param2, ...). How parameters are interpreted is up to the function. Note: Whitespace is not stripped from string parameters by default. Commas can be escaped with \,

Like AND, these functions are parsed after regular scheduling syntax has been expanded, allowing things like [AREA:MASK:0.3](...), in case that's somehow useful.

SHUFFLE and SHIFT

Default parameters: SHUFFLE(seed=0, separator=,, joiner=,), SHIFT(steps=0, separator=,, joiner=,)

SHIFT moves elements to the left by steps; the default is 0, so SHIFT() does nothing. SHUFFLE generates a random permutation with seed as its seed.

These functions are applied to each prompt chunk after BREAK, AND etc. have been parsed. The prompt is split by separator, the operation is applied, and it's then joined back by joiner.

Multiple instances of these functions are applied in the order they appear in the prompt.

NOTE: These functions are not smart about syntax and will break emphasis if the separator occurs inside parentheses. I might fix this at some point, but for now, keep this in mind.

For example:

  • SHIFT(1) cat, dog, tiger, mouse does a shift and results in dog, tiger, mouse, cat. (whitespace may vary)

  • SHIFT(1,;) cat, dog ; tiger, mouse results in tiger, mouse, cat, dog

  • SHUFFLE() cat, dog, tiger, mouse results in cat, dog, mouse, tiger

  • SHUFFLE() SHIFT(1) cat, dog, tiger, mouse results in dog, mouse, tiger, cat

  • SHIFT(1) cat,dog BREAK tiger,mouse results in dog,cat BREAK tiger,mouse

  • SHIFT(1) cat, dog AND SHIFT(1) tiger, mouse results in dog, cat AND mouse, tiger

Whitespace is not stripped and may also be used as a joiner or separator:

  • SHIFT(1,, ) cat,dog results in dog cat

NOISE

The function NOISE(weight, seed) adds some random noise into the prompt. The seed is optional, and if not specified, the global RNG is used. weight should be between 0 and 1.
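
For example, NOISE(0.2, 42) a cat (illustrative values) adds noise with weight 0.2 using seed 42, while NOISE(0.2) a cat draws from the global RNG instead.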

MASK and AREA

You can use MASK(x1 x2, y1 y2, weight, op) to specify a region mask for a prompt. Coordinates are specified either as percentages (floats between 0 and 1) or as absolute pixel values; the two can't be mixed. Note that 1 is interpreted as a percentage rather than a pixel value.

If multiple MASKs are specified, they are combined together with ComfyUI's MaskComposite node, with op specifying the operation to use (default multiply). In this case, the combined mask weight can be set with MASKW(weight) (defaults to 1.0).

Masks assume a size of (512, 512) unless overridden with PCScheduleSettings, and pixel values are relative to that size. ComfyUI will scale the mask to match the image resolution. You can also change the size manually by using MASK_SIZE(width, height) anywhere in the prompt.

These are handled per AND-ed prompt, so in prompt1 AND MASK(...) prompt2, the mask will only affect prompt2.

The default values are MASK(0 1, 0 1, 1), and you can omit trailing values; that is, MASK(0 0.5, 0.3) is equivalent to MASK(0 0.5, 0.3 1, 1)

Note that because the default values are percentages, MASK(0 256, 64 512) is valid, but MASK(0 200) will raise an error.
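
For example (illustrative), in

a forest background AND MASK(0 0.5, 0 1) a cat AND MASK(0.5 1, 0 1) a dog

the cat prompt is masked to the left half of the image, the dog prompt to the right half, and the background prompt is unmasked.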

Similarly, you can use AREA(x1 x2, y1 y2, weight) to specify an area for the prompt (see ComfyUI's area composition examples). The area is calculated by ComfyUI relative to your latent size.

Masking does not affect LoRA scheduling unless you set unet weights to 0 for a LoRA.

FEATHER

When you use MASK, you can also call FEATHER(left top right bottom) to apply feathering using ComfyUI's FeatherMask node. The values are in pixels and default to 0.

If multiple MASKs are used, FEATHER is applied before compositing in the order they appear in the prompt, and any leftovers are applied to the combined mask. If you want to skip feathering a mask while compositing, just use FEATHER() with no arguments.

For example:

MASK(1) MASK(2) MASK(3) FEATHER(1) FEATHER() FEATHER(3) weirdmask FEATHER(4)

gives you a mask that is a combination of 1, 2 and 3, where 1 and 3 are feathered before compositing and then FEATHER(4) is applied to the composite.

The order of the FEATHER and MASK calls doesn't matter; you can have FEATHER before MASK or even interleave them.

Schedulable LoRAs

The ScheduleToModel node patches a model so that when sampling, it'll switch LoRAs between steps. You can apply the LoRA's effect separately to CLIP conditioning and the unet (model).

Swapping LoRAs often can be quite slow without the --highvram switch because ComfyUI will shuffle things between the CPU and GPU. When things stay on the GPU, it's quite fast.

If you run out of VRAM during a LoRA swap, the node will attempt to save VRAM by enabling CPU offloading for future generations even in highvram mode. This persists until ComfyUI is restarted.

You can also set the PC_RETRY_ON_OOM environment variable to any non-empty value to automatically retry sampling once if VRAM runs out.
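
For example, on Linux or macOS you could launch ComfyUI with the retry behaviour enabled like this (illustrative):

PC_RETRY_ON_OOM=1 python main.py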

LoRA Block Weight

If you have ComfyUI Inspire Pack installed, you can use its Lora Block Weight syntax, for example:

a prompt <lora:cars:1:LBW=SD-OUTALL;A=1.0;B=0.0;>

The ; is optional if there is only 1 parameter. The syntax is the same as in the ImpactWildcard node, documented here.

Other integrations

Advanced CLIP encoding

You can use the syntax STYLE(weight_interpretation, normalization) in a prompt to affect how prompts are interpreted.

Without any extra nodes, only perp is available, which does the same as the ComfyUI_PerpWeight extension.

If you have Advanced CLIP Encoding nodes cloned into your custom_nodes, more options will be available.

The style can be specified separately for each AND-ed prompt, but the first prompt is special; later prompts will "inherit" it as the default. For example:

STYLE(A1111) a (red:1.1) cat with (brown:0.9) spots and a long tail AND an (old:0.5) dog AND a (green:1.4) (balloon:1.1)

will interpret everything as A1111, but

a (red:1.1) cat with (brown:0.9) spots and a long tail AND STYLE(A1111) an (old:0.5) dog AND a (green:1.4) (balloon:1.1)

will interpret the first prompt using the default ComfyUI behaviour, the second prompt with A1111, and the last prompt with the default again.

For things (i.e. the code imports) to work, the nodes must be cloned into a directory named exactly ComfyUI_ADV_CLIP_emb.

Cutoff node integration

If you have ComfyUI Cutoff cloned into your custom_nodes, you can use the CUT keyword to access cutoff functionality.

The syntax is

a group of animals, [CUT:white cat:white], [CUT:brown dog:brown:0.5:1.0:1.0:_]

The parameters in the CUT section are region_text:target_text:weight:strict_mask:start_from_masked:padding_token, of which only the first two are required. If strict_mask, start_from_masked or padding_token are specified in more than one section, the last one takes effect for the whole prompt.

Stable-Fast

The prompt control node works well with ComfyUI_stable_fast. However, you should apply ScheduleToModel after applying Apply StableFast Unet to prevent constant recompilations.

Nodes

PromptToSchedule

Parses a schedule from a text prompt. A schedule is essentially an array of (valid_until, prompt) pairs that the other nodes can use.
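
As a rough sketch (illustrative only; the real PromptSchedule object carries more state than this), a scheduled prompt can be pictured as a list of such pairs:

# Hypothetical illustration: "a [cat:dog:0.5]" conceptually parses into
schedule = [
    (0.5, "a cat"),  # valid until 0.5
    (1.0, "a dog"),  # valid for the rest of sampling
]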

FilterSchedule

Filters a schedule according to its parameters, removing any changes that do not occur within [start, end).

The node also does tag filtering if any tags are specified.

Always returns at least the last prompt in the schedule if everything would otherwise be filtered.

start=0, end=0 returns the prompt at the start and start=1.0, end=1.0 returns the prompt at the end.
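
For example, given the schedule for a [cat:dog:0.5], start=0, end=0 yields a cat, start=1.0, end=1.0 yields a dog, and start=0.4, end=1.0 keeps the change at 0.5, since it falls within [start, end).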

ScheduleToCond

Produces a combined conditioning for the appropriate timesteps from a schedule. Also applies LoRAs to the CLIP model according to the schedule.

ScheduleToModel

Produces a model that'll cause the sampler to reapply LoRAs at specific steps according to the schedule.

This depends on a callback handled by a monkeypatch of the ComfyUI sampler function, so it might not work with custom samplers, but it shouldn't interfere with them either.

PCSplitSampling

Causes sampling to be split into multiple sampler calls instead of relying on timesteps for scheduling. This makes the schedules more accurate, but seems to cause weird behaviour with SDE samplers. (Upstream bug?)

PCScheduleSettings

Returns an object representing default values for the SDXL function and allows configuring MASK_SIZE outside the prompt. You need to apply them to a schedule with PCApplySettings. Note that for the SDXL settings to apply, you still need to have SDXL() in the prompt.

The "steps" parameter currently does nothing; it's for future features.

PCApplySettings

Applies the given default values from PCScheduleSettings to a schedule.

PCPromptFromSchedule

Extracts a text prompt from a schedule; also logs it to the console. LoRAs are not included in the text prompt, though they are logged.

PromptControlSimple

This node exists purely for convenience. It's a combination of PromptToSchedule, ScheduleToCond, ScheduleToModel and FilterSchedule such that it provides as output a model, positive conds and negative conds, both with and without any specified filters applied.

This makes it handy for quick one- or two-pass workflows.

Older nodes

  • EditableCLIPEncode: A combination of PromptToSchedule and ScheduleToCond
  • LoRAScheduler: A combination of PromptToSchedule, FilterSchedule and ScheduleToModel

Known issues

  • If you use LoRA scheduling in a workflow with LoRALoader nodes, you might get inconsistent results. For now, just avoid mixing ScheduleToModel or LoRAScheduler with LoRALoader. See #36
  • Workflows using SamplerCustom will calculate LoRA schedules based on the number of sigmas given to the sampler instead of the number of steps, since that information isn't available.
  • CUT does not work with STYLE:perp
  • PCSplitSampling overrides ComfyUI's BrownianTreeNoiseSampler noise sampling behaviour so that each split segment doesn't add crazy amounts of noise to the result with some samplers.
  • Split sampling may have weird behaviour if your step percentages go below 1 step.
  • Interpolation is probably buggy and will likely change behaviour whenever code gets refactored.
  • If execution is interrupted and LoRA scheduling is used, your models might be left in an undefined state until you restart ComfyUI

comfyui-prompt-control's People

Contributors

asagi4, drjkl, haohaocreates, puggleslesser


comfyui-prompt-control's Issues

[FR] How to change weights gradually?

Firstly, thank you very much for creating this incredible project!
I'm having a bit of difficulty making sure I'm doing things correctly, so if you have the time, could you please provide an example of a prompt that would gradually increase the weight from (yellow cat:1) to (yellow cat:2)?
Additionally, would it be possible to do this with the weights of a LoRA?

nodes don't work with SDXL

Hi,

When I use the nodes with an SDXL model I get the following error:

Error occurred when executing ScheduleToCond:

list indices must be integers or slices, not str

File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 148, in apply
r = (control_to_clip_common(self, clip, prompt_schedule),)
File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 330, in control_to_clip_common
cond = encode(c)
File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 302, in encode
cond_cache[cachekey] = do_encode(clip, prompt)
File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 221, in do_encode
cond, pooled = encode_prompt(clip, prompts[0], style, normalization)
File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 213, in encode_prompt
return clip.encode_from_tokens(tokens, return_pooled=True)
File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 586, in encode_from_tokens
cond, pooled = self.cond_stage_model.encode_token_weights(tokens)
File "C:\AI-Images\ComfyUI_windows_portable\ComfyUI\comfy\sdxl_clip.py", line 60, in encode_token_weights
token_weight_pairs_g = token_weight_pairs["g"]

Not sure if it is planned to support SDXL, but if not, a hint in the docs would be great ;)
thanks!

[BUG]

Describe the bug
I updated ComfyUI and it seems that it broke the node.

To Reproduce
Run Prompt scheduler [ Dog:cat:0.35]
The example workflow doesn't work anymore.

"Error occurred when executing PromptToSchedule:

float() argument must be a string or a real number, not 'Tree'
"

Support folder when loading lora?

In parser.py, line 29 we can see this:

FILENAME: /[^<>:\/\\]+/

This actually prevents the use of folders for organizing LoRAs.
Maybe change it to FILENAME: /[^<>:]+/ ?
Is there any reason for filtering slashes and backslashes?

`BREAK` support

There is already AND support, which is equivalent to ConditioningAverage; however, there is no BREAK support, which would map to ConditioningConcat. Furthermore, because of how Comfy handles this, the node cannot be combined with ConditioningConcat, because AND makes the prompt return multiple conds, of which only the first is taken.

Embedding alt syntax does not work

Thanks for the hard work making this project, it's amazing.

I noticed that when I tried to use the alt syntax for embeddings, <emb:xyz>, I would get:

[ERROR] PromptControl: Prompt editing parse error: Error trying to process rule "embedding":

'str' object has no attribute 'value'

I was able to fix the error by modifying the embedding() function to:

def embedding(self, args):
    return "embedding:" + args[0]

I believe omitting the .value property works because the function is trying to access a value attribute on a string object, which does not exist. I don't know if this is the correct fix, or if the embedding() function is intended to be passed something else other than a string.

EDIT: embedding weighting doesn't work with my solution, but doing like <emb:xyz:0.5> doesn't seem to work for me with the deployed version either, so I'm not sure what the right approach is.

If the Lora file name has . symbol, it will show that Lora was not found and Lora cannot be used.

If the LoRA file name contains the . symbol, it will show that the LoRA was not found, and the LoRA cannot be used.
For example, with the file "PAseerGoldenWindSD1.5-V1.safetensors", when I type <lora:PAseerGoldenWindSD1.5-V1:0.5>, it shows PromptControl: Lora PAseerGoldenWindSD1.5-V1 not found.
When I rename "PAseerGoldenWindSD1.5-V1.safetensors" to "PAseerGoldenWindSD15-V1.safetensors" and type <lora:PAseerGoldenWindSD15-V1:0.5>, it works!

Just looking to use AND?

I'm pretty new to comfyui, so sorry if this is a dumb question.

My understanding is that the AND keyword separates the prompt so that the two parts won't interfere with one another. Is that the case, or am I misunderstanding?

If I want to get that functionality do I need to still use the scheduling nodes or should it work in the regular CLIP text encode upon install?

ERROR WHEN TRYING TO LOAD TWO LORAS AT DIFFERENT STEPS

Hello,

First of all, thank you for your work on these nodes. It has a lot of potential.

I am having issues when trying to load two loras at different steps of generation. The prompt I'm using is "[p1hf car<lora:p1hf car_V1:0.5>:cnptn car<lora:cnptn car_V1:0.5>:0.5]"

Here is a snapshot of the nodes (screenshot omitted):

(I USE ANYWHERE NODES, SO ALL INPUTS ARE CONNECTED PROPERLY EVEN IF THEY'RE NOT VISIBLE)

Error occurred when executing KSamplerAdvanced:

Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_addmm)

File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1271, in sample
return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)

File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1207, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\hijack.py", line 35, in pc_sample
r = cb(orig_sampler, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_lora.py", line 104, in sampler_cb
s = orig_sampler(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample
raise e
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 100, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\hijack.py", line 79, in sample
return super().sample(
^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 694, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler(), sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 600, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 560, in sample
samples = getattr(k_diffusion_sampling, "sample_{}".format(sampler_name))(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 277, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 267, in forward
return self.apply_model(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 264, in apply_model
out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 252, in sampling_function
cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 226, in calc_cond_uncond_batch
output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 140, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\custom_nodes\FreeU_Advanced\nodes.py", line 173, in __temp__forward
h = forward_timestep_embed(module, h, emb, context, transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 56, in forward_timestep_embed
x = layer(x, context, transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\attention.py", line 557, in forward
x = self.proj_in(x)
^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\yovan\ComfyUI_windows_portable_03\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 18, in forward
return torch.nn.functional.linear(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

[FR] Custom mask support

Describe what's missing
A way to use nonrectangular masks.

Additional context
I know it might seem slightly against the ethos of the extension, however, it would be convenient to be able to feed custom masks (e.g. hand drawn ones) into a node. Unless I'm missing something, the existing syntax limits to rectangular masks and specification is a touch clunky. If masks were taken as input, then they could be activated within the text [perhaps with a syntax like (MASK:mask1)]

This would provide a similar functionality to Automatic1111's regional prompter extension's custom masks option.

Using AREA raises a RuntimeError with KSamplerAdvanced

When I use AREA in a prompt, I get:

ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/nodes.py", line 1270, in sample
    return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/nodes.py", line 1206, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-AnimateDiff-Evolved/animatediff/sampling.py", line 109, in animatediff_sample
    return orig_comfy_sample(model, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 22, in informative_sample
    raise e
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/sample.py", line 97, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_smZNodes/__init__.py", line 131, in KSampler_sample
    return _KSampler_sample(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 785, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler(), sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_smZNodes/__init__.py", line 139, in sample
    return _sample(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 690, in sample
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 630, in sample
    samples = getattr(k_diffusion_sampling, "sample_{}".format(sampler_name))(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **extra_options)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/k_diffusion/sampling.py", line 701, in sample_dpmpp_2m_sde_gpu
    return sample_dpmpp_2m_sde(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, eta=eta, s_noise=s_noise, noise_sampler=noise_sampler, solver_type=solver_type)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/k_diffusion/sampling.py", line 613, in sample_dpmpp_2m_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 323, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, cond_concat=cond_concat, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/k_diffusion/external.py", line 125, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/k_diffusion/external.py", line 151, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_smZNodes/smZNodes.py", line 795, in apply_model
    out = super().apply_model(x, timestep, cc, uu, cond_scale, cond_concat, model_options, seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 311, in apply_model
    out = sampling_function(self.inner_model.apply_model, x, timestep, uncond, cond, cond_scale, cond_concat, model_options=model_options, seed=seed)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 289, in sampling_function
    cond, uncond = calc_cond_uncond_batch(model_function, cond, uncond, x, timestep, max_total_area, cond_concat, model_options)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/samplers.py", line 265, in calc_cond_uncond_batch
    output = model_function(input_x, timestep_, **c).chunk(batch_chunks)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_smZNodes/modules/sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
                                                                 ^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/ComfyUI_smZNodes/modules/sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/model_base.py", line 63, in apply_model
    return self.diffusion_model(xc, t, context=context, y=c_adm, control=control, transformer_options=transformer_options).float()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/custom_nodes/SeargeSDXL/modules/custom_sdxl_ksampler.py", line 70, in new_unet_forward
    x0 = old_unet_forward(self, x, timesteps, context, y, control, transformer_options, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 625, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/net/dj/code/clones/github.com/comfyanonymous/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 61, in forward_timestep_embed
    x = layer(x)
        ^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jeff/.virtualenvs/comfyui/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Calculated padded input size per channel: (2 x 2). Kernel size: (3 x 3). Kernel size can't be greater than actual input size

EDIT: It seems to only happen when combining prompts, like "Prompt1 AND AREA(x1 x2, y1 y2, weight) another prompt". Am I just using the syntax incorrectly here?

Minimum `lark` version?

I kept getting float() argument must be a string or a real number, not 'Tree' earlier, and had to update lark to 1.1.9 from 1.1.7.
I haven't gone back to check whether 1.1.8 would also have worked.

Do you know what the minimum version should be in requirements?

[FR] Improved examples

Describe what's missing
A more visual guide for entry-level people. I am really having a hard time understanding how LoRA scheduling works here. More specific breakdowns in the readme, with particular examples, would really help.
Not expecting a youtube tutorial, but some concrete examples with images and values in the workspace would be super welcome. Very keen to learn.


When using Lora's schedule syntax, the second pass has no effect on any Lora effects

Hello. Your node was great enough to pull me back to ComfyUI from Forge.

I have a problem with the notation: I noticed that when using the LoRA schedule syntax, the first pass of example.json worked as expected, but the second pass did not apply any LoRA effect at all.

To illustrate the problem, I applied a LoRA that produces a realistic woman to a cartoon-style model (e.g. animagine) with extreme settings:
[<lora:realmodel_xl_v01:1.5:1.6>::0.99]

In this case, the LoRA effect should produce a realistic woman for 99% of sampling, with the animated style only taking over for the remaining 1%.

However, the result is that the first pass is a realistic woman as described, but the second pass shows none of the 99% realistic-woman effect and produces an animated woman.

This also occurs with workflows other than example.json. (screenshot omitted)

CUT errors with 'not enough values to unpack'

Hi,

I have installed Cutoff and Prompt Control via ComfyUI Manager. When I use a prompt like man with [CUT:green eyes:green] and [CUT:purple hair:purple] on PromptControlSimple, I get the following error:

Error occurred when executing PromptControlSimple:

not enough values to unpack (expected 3, got 2)

File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_aio.py", line 33, in apply
pos_cond = pos_filtered = control_to_clip_common(self, clip, pos_sched, lora_cache, cond_cache)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 535, in control_to_clip_common
cond = encode(c)
^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 511, in encode
cond_cache[cachekey] = do_encode(clip, prompt)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 409, in do_encode
cond, pooled = encode_prompt(clip, prompts[0], style, normalization)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 254, in encode_prompt
return encode_regions(clip, tokens, regions, style, normalization)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_clip.py", line 228, in encode_regions
(r,) = finalize_clip_regions(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Cutoff\cutoff.py", line 217, in finalize_clip_regions
base_embedding_full, pool = encode_from_tokens(clip, base_weighted_tokens, token_normalization, weight_interpretation, True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Cutoff\cutoff.py", line 184, in encode_from_tokens
embs_l, _ = advanced_encode_from_tokens(tokenized['l'],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Cutoff\adv_encode.py", line 162, in advanced_encode_from_tokens
tokens = [[t for t,_,_ in x] for x in tokenized]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Cutoff\adv_encode.py", line 162, in
tokens = [[t for t,_,_ in x] for x in tokenized]
^^^^^^^^^^^^^^^^^^
File "C:\Users\Unai\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Cutoff\adv_encode.py", line 162, in
tokens = [[t for t,_,_ in x] for x in tokenized]
^^^^^

If I add another semicolon to the end of each CUT block, it works, but the result isn’t the same as using the Cutoff nodes directly.

Thanks,
Unai

Error occurred when executing PromptToSchedule:

Error occurred when executing PromptToSchedule:

float() argument must be a string or a real number, not 'Tree'

File "F:\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\ComfyUI-0246\utils.py", line 381, in new_func
res_value = old_func(*final_args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_other.py", line 129, in parse
schedules = parse_prompt_schedules(text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\parser.py", line 382, in parse_prompt_schedules
return PromptSchedule(prompt)
^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\parser.py", line 277, in init
self.interpolations, self.parsed_prompt = self._parse()
^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\parser.py", line 289, in _parse
interpolation_steps, steps = get_steps(tree)
^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\parser.py", line 129, in get_steps
CollectSteps().visit(tree)
File "F:\ComfyUI\python_embeded\Lib\site-packages\lark\visitors.py", line 316, in visit
self._call_userfunc(subtree)
File "F:\ComfyUI\python_embeded\Lib\site-packages\lark\visitors.py", line 294, in _call_userfunc
return getattr(self, tree.data, self.default)(tree)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\parser.py", line 105, in scheduled
tree.children[i] = tostep(tree.children[i])
^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\parser.py", line 94, in tostep
w = float(s) * 100
^^^^^^^^

[Feature Req] Limit LORA influence when using MASK or AREA

Hello and thanks for making these nodes!
They are really interesting and I've been playing around with them.

So I'm testing the MASK and AREA modes, and I've come to wonder whether it's already possible to limit the influence of a LoRA to only the MASK or AREA regions, without the LoRA baking the other parts of the generation.

If not, could it be possible to implement something similar to this?
As on Auto1111 the extensions Lora Masks and Composable Lora achieve the desired effect of only allowing a lora to work in a specific region without frying the rest of the generation.

Thanks a lot for your time!

PromptControlSimple not in Readme

Is PromptControlSimple part of this repo? If so, is it a deprecated node? It's not in the README, so I'm not sure what the expected behavior is.

Does this work with SD 1.5?

I loaded the example workflow and tried it with an SD 1.5 checkpoint, but it throws an error from prepareXL. Is this an XL-only node? (I didn't see anything in the docs that says so, but I could easily have missed it!)

ModuleNotFoundError: No module named 'lark'

Hey there,

I'm probably just dumb, but Comfy asks for "lark", and I installed it via "pip install lark" in Comfy's directory, but it's still not found by Comfy.

When I try to reinstall, I get "Requirement already satisfied: lark in c:\python311\lib\site-packages (1.1.7)".

(Programming novice here, please excuse me if this is basic.)
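
Judging from the path in that message, lark was installed into the system Python (c:\python311) while the portable ComfyUI build runs its own embedded interpreter with a separate site-packages. A quick diagnostic, run with whichever Python you think ComfyUI uses:

# Prints which interpreter is running and whether it can import lark.
import sys

print("Interpreter:", sys.executable)
try:
    import lark
    print("lark", lark.__version__, "found at", lark.__file__)
except ImportError:
    print("lark is NOT installed for this interpreter")

If the interpreter printed is not the one ComfyUI launches with, install lark through that exact executable (python.exe -m pip install lark) rather than the system pip.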

Alternating prompt produces similar results

When you prompt [A|B] and [B|A] in Automatic1111, you inevitably get vastly different results, since whether the first step uses A or B drastically changes the flow of the whole picture. However, doing the same thing with this node produces completely identical pictures, which shouldn't be happening. Something is wrong, somewhere, somehow.

Reproducible workflow: https://hastebin.pw/WF91epqcW3
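
For context, A1111-style alternation swaps the active prompt on every sampling step, so [A|B] and [B|A] differ at step 1 and should diverge immediately. A minimal sketch of the expected expansion (illustrative only, not this node's actual scheduling code):

# Illustrative expansion of [A|B]-style alternation per sampling step.
def alternate(options, total_steps):
    return [options[i % len(options)] for i in range(total_steps)]

print(alternate(["A", "B"], 6))  # ['A', 'B', 'A', 'B', 'A', 'B']
print(alternate(["B", "A"], 6))  # ['B', 'A', 'B', 'A', 'B', 'A'] -- differs at step 1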

BUG: apply_lora_for_step() missing 5 required positional arguments: 'step', 'total_steps', 'state', 'original_model', and 'lora_cache'

Traceback (most recent call last):
  File "D:\SDLocal\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\SDLocal\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\SDLocal\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\SDLocal\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1344, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "D:\SDLocal\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1314, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "D:\SDLocal\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\hijack.py", line 39, in pc_sample
    r = cb(orig_sampler, *args, **kwargs)
  File "D:\SDLocal\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-prompt-control\prompt_control\node_lora.py", line 67, in sampler_cb
    apply_lora_for_step(start_step)
TypeError: apply_lora_for_step() missing 5 required positional arguments: 'step', 'total_steps', 'state', 'original_model', and 'lora_cache'
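
The error itself is easy to reproduce in isolation: the callback passes a single positional argument to a function whose signature requires six, which typically means the caller and callee come from different versions of the code (for example, a stale or partially updated install). An illustration, with parameter names copied from the error message and the signature itself an assumption:

# Reproduction of the TypeError pattern above; not the project's actual code.
def apply_lora_for_step(start_step, step, total_steps, state, original_model, lora_cache):
    pass

apply_lora_for_step(0)
# TypeError: apply_lora_for_step() missing 5 required positional arguments:
# 'step', 'total_steps', 'state', 'original_model', and 'lora_cache'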

Does Prompt control work with SDXL?

Hello, I was wondering whether prompt control works with SDXL. If it doesn't, that's fine; I'm just wondering if I have to set it up differently.
I was trying to use prompt control to load LoRAs from the prompt.
It works great for SD 1.5 models; I love it, it's amazing. I don't get any errors and it loads the LoRAs fine,
but I get an error message when trying to use the same setup with SDXL.
I'll attach screenshots of the error I'm getting and how I have the nodes hooked up. Thank you.

(screenshots attached: the error message and the node setup)

Recommendations for composite SD1.5/SDXL workflow

I have an SD1.5 workflow that seems to work really well right now. I'd like to be able to use SDXL as well and seamlessly switch between the models without much extra work. I've been told that using the same prompt for CLIP_L and CLIP_G is perfectly fine and still works pretty well. I was wondering if you have any recommendations or examples for a workflow that works well for both SD1.5 and SDXL without manual modifications when switching to an SDXL model?

I was considering doing some text manipulation, like looking at the checkpoint name for sdxl or sd_xl, and based on that, taking the prompt, duplicating it, and wrapping it with CLIP_L( .. ) CLIP_G( .. ). But before I go through all that effort, I was wondering if you have any recommendations.
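
That approach is easy to prototype in plain Python before wiring it into nodes. A hypothetical helper along those lines (the function name and the name-matching heuristic are assumptions):

# Hypothetical helper: duplicate the prompt into CLIP_L()/CLIP_G() wrappers
# when the checkpoint name looks like an SDXL model.
def wrap_for_sdxl(prompt: str, checkpoint_name: str) -> str:
    name = checkpoint_name.lower()
    if "sdxl" in name or "sd_xl" in name:
        return f"CLIP_L({prompt}) CLIP_G({prompt})"
    return prompt

print(wrap_for_sdxl("a cat in a park", "sd_xl_base_1.0.safetensors"))
# CLIP_L(a cat in a park) CLIP_G(a cat in a park)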

Example Workflow Doesn't Load

Quite possibly I am doing something stupid, but I can't get the example workflow to load on the latest ComfyUI + the latest comfyui-prompt-control.

Repro:

Expected Behavior:

  • Example workflow loads

Experienced Behavior:

  • ComfyUI UI remains blank, no error message in browser or console

Clarification on SDXL()

Regarding this section of the ReadMe:

https://github.com/asagi4/comfyui-prompt-control#sdxl

Is SDXL() automatically added if this node detects an SDXL model passed to it? What if I leave that function out (of the prompt)?

The reason I'm asking is that I'm trying to wire up a combined SD1.5/SDXL workflow that can intelligently switch a few things around depending on the name of the checkpoint detected. Here's a WIP workflow with some WIP nodes as well:

(screenshot of the WIP workflow)

What I'd like to do is use a node like Add CLIP SDXL Params on the conditioning, feeding the width/height information in from elsewhere. I think this will be easier than doing text manipulation to build a "function string" to pass along to PromptControlSimple.

Example SDXL workflow

Please provide an example SDXL workflow. I'm trying to integrate SDXL and I'm having some trouble understanding what changes I need to make to use these nodes. Thanks!
