
animatediff-colab's Introduction

🐣 Please follow me for new updates https://twitter.com/camenduru
🔥 Please join our discord server https://discord.gg/k5BwmmvJJU
🥳 Please join my patreon community https://patreon.com/camenduru

🦒 Colab

Colab Info
Open In Colab AnimateDiff_motion_lora_colab
Open In Colab AnimateDiff_colab
Open In Colab AnimateDiff_gradio_colab

Tutorial

Please edit the /content/animatediff/configs/prompts/1-ToonYou.yaml file to change the prompts. It works with any LoRA 🥳
Output files are saved to /content/animatediff/samples/

ToonYou:
  base: ""
  path: "models/DreamBooth_LoRA/toonyou_beta3.safetensors"
  motion_module:
    - "models/Motion_Module/mm_sd_v14.ckpt"
    - "models/Motion_Module/mm_sd_v15.ckpt"

  seed:           [10788741199826055526, 6520604954829636163, 6519455744612555650, 16372571278361863751]
  steps:          25
  guidance_scale: 7.5

  prompt:
    - "best quality, masterpiece, 1girl, looking at viewer, blurry background, upper body, contemporary, dress"
    - "masterpiece, best quality, 1girl, solo, cherry blossoms, hanami, pink flower, white flower, spring season, wisteria, petals, flower, plum blossoms, outdoors, falling petals, white hair, black eyes,"
    - "best quality, masterpiece, 1boy, formal, abstract, looking at viewer, masculine, marble pattern"
    - "best quality, masterpiece, 1girl, cloudy sky, dandelion, contrapposto, alternate hairstyle,"

  n_prompt:
    - ""
    - "badhandv4,easynegative,ng_deepnegative_v1_75t,verybadimagenegative_v1.3, bad-artist, bad_prompt_version2-neg, teeth"
    - ""
    - ""

Main Repo

https://github.com/guoyww/animatediff/

Page

https://animatediff.github.io/

Paper

https://arxiv.org/abs/2307.04725

Output

(Example output GIFs generated from the ToonYou prompts above: 0-best-quality-masterpiece-1girl…, 1-masterpiece-best-quality-1girl-solo-cherry-blossoms…, 2-best-quality-masterpiece-1boy-formal-abstract…)

animatediff-colab's People

Contributors

camenduru


animatediff-colab's Issues

AttributeError: module 'jax.random' has no attribute 'KeyArray'


/content/animatediff
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/content/animatediff/scripts/animate.py", line 9, in <module>
    import diffusers
  File "/usr/local/lib/python3.10/dist-packages/diffusers/__init__.py", line 46, in <module>
    from .pipeline_utils import DiffusionPipeline
  File "/usr/local/lib/python3.10/dist-packages/diffusers/pipeline_utils.py", line 38, in <module>
    from .schedulers.scheduling_utils import SCHEDULER_CONFIG_NAME
  File "/usr/local/lib/python3.10/dist-packages/diffusers/schedulers/__init__.py", line 51, in <module>
    from .scheduling_ddpm_flax import FlaxDDPMScheduler
  File "/usr/local/lib/python3.10/dist-packages/diffusers/schedulers/scheduling_ddpm_flax.py", line 80, in <module>
    class FlaxDDPMScheduler(FlaxSchedulerMixin, ConfigMixin):
  File "/usr/local/lib/python3.10/dist-packages/diffusers/schedulers/scheduling_ddpm_flax.py", line 216, in FlaxDDPMScheduler
    key: random.KeyArray,
  File "/usr/local/lib/python3.10/dist-packages/jax/_src/deprecations.py", line 54, in __getattr__
    raise AttributeError(f"module {module!r} has no attribute {name!r}")
AttributeError: module 'jax.random' has no attribute 'KeyArray'

Can someone help me?
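This error typically means the Colab environment ships a jax release newer than the one the pinned diffusers version was written against: `jax.random.KeyArray` was removed from jax, but the old diffusers code still references it. One workaround that has been reported for this class of error (an assumption about the environment, not an official fix) is restoring the alias before diffusers is imported. A generic sketch:

```python
import types

def restore_keyarray(random_mod, array_type):
    """If the module lacks KeyArray (removed in newer jax releases),
    re-add it as an alias so older diffusers code can still import."""
    if not hasattr(random_mod, "KeyArray"):
        random_mod.KeyArray = array_type
    return random_mod

# In a Colab cell you would apply this to the real modules, e.g.:
#   import jax; restore_keyarray(jax.random, jax.Array); import diffusers
# Demonstration with a stand-in module object:
fake_random = types.SimpleNamespace()
restore_keyarray(fake_random, object)
print(hasattr(fake_random, "KeyArray"))  # True
```

Alternatively, pinning jax/jaxlib to an older release, or upgrading to a diffusers version that no longer references `KeyArray`, avoids the monkeypatch entirely.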

5-RealisticVision.yaml points to a wrong safetensor

I think that the Realistic Vision YAML is bugged.

It points to "realisticVisionV20_v20.safetensors" but the first cell creates a file called "realisticVisionV40_v20Novae.safetensors"

So I get this error when I unselect the default model and prompt file in the Colab:

/content/animatediff
2023-07-24 20:44:41.657011: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
loaded temporal unet's pretrained weights from /content/animatediff/models/StableDiffusion/unet ...
### missing keys: 560; 
### unexpected keys: 0;
### Temporal Module Parameters: 417.1376 M
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/content/animatediff/scripts/animate.py", line 159, in <module>
    main(args)
  File "/content/animatediff/scripts/animate.py", line 78, in main
    with safe_open(model_config.path, framework="pt", device="cpu") as f:
FileNotFoundError: No such file or directory: "models/DreamBooth_LoRA/realisticVisionV20_v20.safetensors"

I changed the filename myself and it generated a couple of GIFs.

Error executing "AnimateDiff_gradio_colab" in a Colab Pro account.

Hi, I am trying to execute the "AnimateDiff_gradio_colab" on a Colab Pro account, but it gets stuck at this point.

I haven't edited the Colab.

BTW, it would be wonderful if the models were newer ones, like epiCRealism or the latest Realistic Vision.

Thanks for your great work.

Download Results:
gid   |stat|avg speed  |path/URI
======+====+===========+=======================================================
bbd938|OK  |   244MiB/s|/content/animatediff-hf/models/DreamBooth_LoRA/realisticVisionV20_v20.safetensors

Status Legend:
(OK):download completed.
/content/animatediff-hf
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1696959203.286023    1663 tfrt_cpu_pjrt_client.cc:349] TfrtCpuClient created.
2023-10-10 17:33:35.361911: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
### Cleaning cached examples ...
loaded temporal unet's pretrained weights from models/StableDiffusion/stable-diffusion-v1-5/unet ...
### missing keys: 560; 
### unexpected keys: 0;
### Temporal Module Parameters: 417.1376 M
Downloading (…)lve/main/config.json: 100% 4.52k/4.52k [00:00<00:00, 11.1MB/s]
Downloading model.safetensors: 100% 1.71G/1.71G [00:14<00:00, 115MB/s]
Traceback (most recent call last):
  File "/content/animatediff-hf/app.py", line 229, in <module>
    controller = AnimateController()
  File "/content/animatediff-hf/app.py", line 134, in __init__
    self.update_motion_module(self.motion_module_list[0])
  File "/content/animatediff-hf/app.py", line 168, in update_motion_module
    _, unexpected = self.unet.load_state_dict(motion_module_state_dict, strict=False)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for UNet3DConditionModel:
	size mismatch for down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 320]) from checkpoint, the shape in current model is torch.Size([1, 24, 320]).
	size mismatch for down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 320]) from checkpoint, the shape in current model is torch.Size([1, 24, 320]).
	size mismatch for down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 320]) from checkpoint, the shape in current model is torch.Size([1, 24, 320]).
	size mismatch for down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 320]) from checkpoint, the shape in current model is torch.Size([1, 24, 320]).
	size mismatch for down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 640]) from checkpoint, the shape in current model is torch.Size([1, 24, 640]).
	size mismatch for down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 640]) from checkpoint, the shape in current model is torch.Size([1, 24, 640]).
	size mismatch for down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 640]) from checkpoint, the shape in current model is torch.Size([1, 24, 640]).
	size mismatch for down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 640]) from checkpoint, the shape in current model is torch.Size([1, 24, 640]).
	size mismatch for down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for up_blocks.0.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for up_blocks.0.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for up_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for up_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for up_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for up_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for up_blocks.1.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for up_blocks.1.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 1280]) from checkpoint, the shape in current model is torch.Size([1, 24, 1280]).
	size mismatch for up_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 640]) from checkpoint, the shape in current model is torch.Size([1, 24, 640]).
	size mismatch for up_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 640]) from checkpoint, the shape in current model is torch.Size([1, 24, 640]).
	size mismatch for up_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 640]) from checkpoint, the shape in current model is torch.Size([1, 24, 640]).
	size mismatch for up_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 640]) from checkpoint, the shape in current model is torch.Size([1, 24, 640]).
	size mismatch for up_blocks.2.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 640]) from checkpoint, the shape in current model is torch.Size([1, 24, 640]).
	size mismatch for up_blocks.2.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 640]) from checkpoint, the shape in current model is torch.Size([1, 24, 640]).
	size mismatch for up_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 320]) from checkpoint, the shape in current model is torch.Size([1, 24, 320]).
	size mismatch for up_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 320]) from checkpoint, the shape in current model is torch.Size([1, 24, 320]).
	size mismatch for up_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 320]) from checkpoint, the shape in current model is torch.Size([1, 24, 320]).
	size mismatch for up_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 320]) from checkpoint, the shape in current model is torch.Size([1, 24, 320]).
	size mismatch for up_blocks.3.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 320]) from checkpoint, the shape in current model is torch.Size([1, 24, 320]).
	size mismatch for up_blocks.3.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe: copying a param with shape torch.Size([1, 32, 320]) from checkpoint, the shape in current model is torch.Size([1, 24, 320]).
I0000 00:00:1696959283.281624    1663 tfrt_cpu_pjrt_client.cc:352] TfrtCpuClient destroyed.
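Every mismatched parameter here is a `pos_encoder.pe` buffer whose checkpoint shape has temporal length 32 while the model expects 24, which suggests the selected motion module was trained with a different temporal context length than the app is configured for; the usual fix is to pick a motion module that matches the configured video length. As a generic (and lossy) sketch only, not the project's fix, incompatible entries can also be dropped before loading, at the cost of losing those weights:

```python
from collections import namedtuple

def filter_compatible(checkpoint, model_state):
    """Keep only checkpoint entries whose key exists in the model with the
    same shape. With torch you would pass the two state dicts and then call
    unet.load_state_dict(filtered, strict=False)."""
    return {k: v for k, v in checkpoint.items()
            if k in model_state and v.shape == model_state[k].shape}

# Demonstration with stand-in "tensors" that only carry a shape:
T = namedtuple("T", "shape")
ckpt  = {"pos_encoder.pe": T((1, 32, 320)), "to_q.weight": T((320, 320))}
model = {"pos_encoder.pe": T((1, 24, 320)), "to_q.weight": T((320, 320))}
print(sorted(filter_compatible(ckpt, model)))  # ['to_q.weight']
```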

AttributeError: 'Row' object has no attribute 'style'

When running either of the two motion_lora Colabs I get:

Traceback (most recent call last):
  File "/content/animatediff/app.py", line 338, in <module>
    demo = ui()
  File "/content/animatediff/app.py", line 294, in ui
    with gr.Row().style(equal_height=False):
AttributeError: 'Row' object has no attribute 'style'
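The chained `.style(...)` call was removed from Gradio components in later releases, so an app written against an older Gradio fails this way; pinning an older 3.x Gradio release usually resolves it. Alternatively, as a rough illustrative sketch (simple cases only, it does not handle nested parentheses), the calls can be stripped from the app source:

```python
import re

def strip_style_calls(source):
    """Remove chained .style(...) calls (dropped in newer Gradio releases)
    from a source string, leaving the component constructor intact."""
    return re.sub(r"\.style\([^)]*\)", "", source)

line = "with gr.Row().style(equal_height=False):"
print(strip_style_calls(line))  # with gr.Row():
```

Keyword arguments that used to live in `.style()` (such as `equal_height`) generally moved to the component constructor in newer Gradio, so a stripped line may also need those arguments re-added by hand.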

This is what I get when I use my model:

2023-07-21 14:19:14.540079: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
loaded temporal unet's pretrained weights from /content/animatediff/models/StableDiffusion/unet ...

### missing keys: 560;
### unexpected keys: 0;
### Temporal Module Parameters: 417.1376 M

Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/content/animatediff/scripts/animate.py", line 159, in <module>
    main(args)
  File "/content/animatediff/scripts/animate.py", line 74, in main
    pipeline.unet.load_state_dict(state_dict)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for UNet3DConditionModel:
Missing key(s) in state_dict: "conv_in.weight", "conv_in.bias", "time_embedding.linear_1.weight", "time_embedding.linear_1.bias", "time_embedding.linear_2.weight", "time_embedding.linear_2.bias", "down_blocks.0.attentions.0.norm.weight", "down_blocks.0.attentions.0.norm.bias", "down_blocks.0.attentions.0.proj_in.weight", "down_blocks.0.attentions.0.proj_in.bias", "down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_q.weight", "down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_k.weight", "down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_v.weight", "down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_out.0.weight", "down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_out.0.bias", "down_blocks.0.attentions.0.transformer_blocks.0.norm1.weight", "down_blocks.0.attentions.0.transformer_blocks.0.norm1.bias", "down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_q.weight", "down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_k.weight", "down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_v.weight", "down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_out.0.weight", "down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_out.0.bias", "down_blocks.0.attentions.0.transformer_blocks.0.norm2.weight", "down_blocks.0.attentions.0.transformer_blocks.0.norm2.bias", "down_blocks.0.attentions.0.transformer_blocks.0.ff.net.0.proj.weight", "down_blocks.0.attentions.0.transformer_blocks.0.ff.net.0.proj.bias", "down_blocks.0.attentions.0.transformer_blocks.0.ff.net.2.weight", "down_blocks.0.attentions.0.transformer_blocks.0.ff.net.2.bias", "down_blocks.0.attentions.0.transformer_blocks.0.norm3.weight", "down_blocks.0.attentions.0.transformer_blocks.0.norm3.bias", "down_blocks.0.attentions.0.proj_out.weight", "down_blocks.0.attentions.0.proj_out.bias", "down_blocks.0.attentions.1.norm.weight", "down_blocks.0.attentions.1.norm.bias", "down_blocks.0.attentions.1.proj_in.weight", "down_blocks.0.attentions.1.proj_in.bias", 
"down_blocks.0.attentions.1.transformer_blocks.0.attn1.to_q.weight", "down_blocks.0.attentions.1.transformer_blocks.0.attn1.to_k.weight", "down_blocks.0.attentions.1.transformer_blocks.0.attn1.to_v.weight", "down_blocks.0.attentions.1.transformer_blocks.0.attn1.to_out.0.weight", "down_blocks.0.attentions.1.transformer_blocks.0.attn1.to_out.0.bias", "down_blocks.0.attentions.1.transformer_blocks.0.norm1.weight", "down_blocks.0.attentions.1.transformer_blocks.0.norm1.bias", "down_blocks.0.attentions.1.transformer_blocks.0.attn2.to_q.weight", "down_blocks.0.attentions.1.transformer_blocks.0.attn2.to_k.weight", "down_blocks.0.attentions.1.transformer_blocks.0.attn2.to_v.weight", "down_blocks.0.attentions.1.transformer_blocks.0.attn2.to_out.0.weight", "down_blocks.0.attentions.1.transformer_blocks.0.attn2.to_out.0.bias", "down_blocks.0.attentions.1.transformer_blocks.0.norm2.weight", "down_blocks.0.attentions.1.transformer_blocks.0.norm2.bias", "down_blocks.0.attentions.1.transformer_blocks.0.ff.net.0.proj.weight", "down_blocks.0.attentions.1.transformer_blocks.0.ff.net.0.proj.bias", "down_blocks.0.attentions.1.transformer_blocks.0.ff.net.2.weight", "down_blocks.0.attentions.1.transformer_blocks.0.ff.net.2.bias", "down_blocks.0.attentions.1.transformer_blocks.0.norm3.weight", "down_blocks.0.attentions.1.transformer_blocks.0.norm3.bias", "down_blocks.0.attentions.1.proj_out.weight", "down_blocks.0.attentions.1.proj_out.bias", "down_blocks.0.resnets.0.norm1.weight", "down_blocks.0.resnets.0.norm1.bias", "down_blocks.0.resnets.0.conv1.weight", "down_blocks.0.resnets.0.conv1.bias", "down_blocks.0.resnets.0.time_emb_proj.weight", "down_blocks.0.resnets.0.time_emb_proj.bias", "down_blocks.0.resnets.0.norm2.weight", "down_blocks.0.resnets.0.norm2.bias", "down_blocks.0.resnets.0.conv2.weight", "down_blocks.0.resnets.0.conv2.bias", "down_blocks.0.resnets.1.norm1.weight", "down_blocks.0.resnets.1.norm1.bias", "down_blocks.0.resnets.1.conv1.weight", 
"down_blocks.0.resnets.1.conv1.bias", "down_blocks.0.resnets.1.time_emb_proj.weight", "down_blocks.0.resnets.1.time_emb_proj.bias", "down_blocks.0.resnets.1.norm2.weight", "down_blocks.0.resnets.1.norm2.bias", "down_blocks.0.resnets.1.conv2.weight", "down_blocks.0.resnets.1.conv2.bias", "down_blocks.0.motion_modules.0.temporal_transformer.norm.weight", "down_blocks.0.motion_modules.0.temporal_transformer.norm.bias", "down_blocks.0.motion_modules.0.temporal_transformer.proj_in.weight", "down_blocks.0.motion_modules.0.temporal_transformer.proj_in.bias", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_q.weight", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_k.weight", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_v.weight", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.weight", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.bias", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_q.weight", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_k.weight", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_v.weight", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.weight", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.bias", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.0.weight", 
"down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.0.bias", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.1.weight", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.1.bias", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.0.proj.weight", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.0.proj.bias", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.2.weight", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.2.bias", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.ff_norm.weight", "down_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.ff_norm.bias", "down_blocks.0.motion_modules.0.temporal_transformer.proj_out.weight", "down_blocks.0.motion_modules.0.temporal_transformer.proj_out.bias", "down_blocks.0.motion_modules.1.temporal_transformer.norm.weight", "down_blocks.0.motion_modules.1.temporal_transformer.norm.bias", "down_blocks.0.motion_modules.1.temporal_transformer.proj_in.weight", "down_blocks.0.motion_modules.1.temporal_transformer.proj_in.bias", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_q.weight", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_k.weight", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_v.weight", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.weight", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.bias", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_q.weight", 
"down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_k.weight", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_v.weight", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.weight", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.bias", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.0.weight", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.0.bias", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.1.weight", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.1.bias", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.0.proj.weight", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.0.proj.bias", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.2.weight", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.2.bias", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.ff_norm.weight", "down_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.ff_norm.bias", "down_blocks.0.motion_modules.1.temporal_transformer.proj_out.weight", "down_blocks.0.motion_modules.1.temporal_transformer.proj_out.bias", "down_blocks.0.downsamplers.0.conv.weight", "down_blocks.0.downsamplers.0.conv.bias", "down_blocks.1.attentions.0.norm.weight", "down_blocks.1.attentions.0.norm.bias", "down_blocks.1.attentions.0.proj_in.weight", "down_blocks.1.attentions.0.proj_in.bias", "down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.weight", "down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_k.weight", 
"down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_v.weight", "down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.weight", "down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_out.0.bias", "down_blocks.1.attentions.0.transformer_blocks.0.norm1.weight", "down_blocks.1.attentions.0.transformer_blocks.0.norm1.bias", "down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_q.weight", "down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k.weight", "down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_v.weight", "down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.weight", "down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_out.0.bias", "down_blocks.1.attentions.0.transformer_blocks.0.norm2.weight", "down_blocks.1.attentions.0.transformer_blocks.0.norm2.bias", "down_blocks.1.attentions.0.transformer_blocks.0.ff.net.0.proj.weight", "down_blocks.1.attentions.0.transformer_blocks.0.ff.net.0.proj.bias", "down_blocks.1.attentions.0.transformer_blocks.0.ff.net.2.weight", "down_blocks.1.attentions.0.transformer_blocks.0.ff.net.2.bias", "down_blocks.1.attentions.0.transformer_blocks.0.norm3.weight", "down_blocks.1.attentions.0.transformer_blocks.0.norm3.bias", "down_blocks.1.attentions.0.proj_out.weight", "down_blocks.1.attentions.0.proj_out.bias", "down_blocks.1.attentions.1.norm.weight", "down_blocks.1.attentions.1.norm.bias", "down_blocks.1.attentions.1.proj_in.weight", "down_blocks.1.attentions.1.proj_in.bias", "down_blocks.1.attentions.1.transformer_blocks.0.attn1.to_q.weight", "down_blocks.1.attentions.1.transformer_blocks.0.attn1.to_k.weight", "down_blocks.1.attentions.1.transformer_blocks.0.attn1.to_v.weight", "down_blocks.1.attentions.1.transformer_blocks.0.attn1.to_out.0.weight", "down_blocks.1.attentions.1.transformer_blocks.0.attn1.to_out.0.bias", "down_blocks.1.attentions.1.transformer_blocks.0.norm1.weight", "down_blocks.1.attentions.1.transformer_blocks.0.norm1.bias", 
"down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_q.weight", "down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_k.weight", "down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_v.weight", "down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_out.0.weight", "down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_out.0.bias", "down_blocks.1.attentions.1.transformer_blocks.0.norm2.weight", "down_blocks.1.attentions.1.transformer_blocks.0.norm2.bias", "down_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.weight", "down_blocks.1.attentions.1.transformer_blocks.0.ff.net.0.proj.bias", "down_blocks.1.attentions.1.transformer_blocks.0.ff.net.2.weight", "down_blocks.1.attentions.1.transformer_blocks.0.ff.net.2.bias", "down_blocks.1.attentions.1.transformer_blocks.0.norm3.weight", "down_blocks.1.attentions.1.transformer_blocks.0.norm3.bias", "down_blocks.1.attentions.1.proj_out.weight", "down_blocks.1.attentions.1.proj_out.bias", "down_blocks.1.resnets.0.norm1.weight", "down_blocks.1.resnets.0.norm1.bias", "down_blocks.1.resnets.0.conv1.weight", "down_blocks.1.resnets.0.conv1.bias", "down_blocks.1.resnets.0.time_emb_proj.weight", "down_blocks.1.resnets.0.time_emb_proj.bias", "down_blocks.1.resnets.0.norm2.weight", "down_blocks.1.resnets.0.norm2.bias", "down_blocks.1.resnets.0.conv2.weight", "down_blocks.1.resnets.0.conv2.bias", "down_blocks.1.resnets.0.conv_shortcut.weight", "down_blocks.1.resnets.0.conv_shortcut.bias", "down_blocks.1.resnets.1.norm1.weight", "down_blocks.1.resnets.1.norm1.bias", "down_blocks.1.resnets.1.conv1.weight", "down_blocks.1.resnets.1.conv1.bias", "down_blocks.1.resnets.1.time_emb_proj.weight", "down_blocks.1.resnets.1.time_emb_proj.bias", "down_blocks.1.resnets.1.norm2.weight", "down_blocks.1.resnets.1.norm2.bias", "down_blocks.1.resnets.1.conv2.weight", "down_blocks.1.resnets.1.conv2.bias", "down_blocks.1.motion_modules.0.temporal_transformer.norm.weight", 
"down_blocks.1.motion_modules.0.temporal_transformer.norm.bias", "down_blocks.1.motion_modules.0.temporal_transformer.proj_in.weight", "down_blocks.1.motion_modules.0.temporal_transformer.proj_in.bias", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_q.weight", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_k.weight", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_v.weight", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.weight", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.bias", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_q.weight", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_k.weight", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_v.weight", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.weight", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.bias", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.0.weight", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.0.bias", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.1.weight", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.1.bias", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.0.proj.weight", 
"down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.0.proj.bias", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.2.weight", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.2.bias", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.ff_norm.weight", "down_blocks.1.motion_modules.0.temporal_transformer.transformer_blocks.0.ff_norm.bias", "down_blocks.1.motion_modules.0.temporal_transformer.proj_out.weight", "down_blocks.1.motion_modules.0.temporal_transformer.proj_out.bias", "down_blocks.1.motion_modules.1.temporal_transformer.norm.weight", "down_blocks.1.motion_modules.1.temporal_transformer.norm.bias", "down_blocks.1.motion_modules.1.temporal_transformer.proj_in.weight", "down_blocks.1.motion_modules.1.temporal_transformer.proj_in.bias", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_q.weight", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_k.weight", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_v.weight", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.weight", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.bias", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_q.weight", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_k.weight", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_v.weight", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.weight", 
"down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.bias", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.0.weight", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.0.bias", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.1.weight", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.1.bias", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.0.proj.weight", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.0.proj.bias", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.2.weight", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.2.bias", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.ff_norm.weight", "down_blocks.1.motion_modules.1.temporal_transformer.transformer_blocks.0.ff_norm.bias", "down_blocks.1.motion_modules.1.temporal_transformer.proj_out.weight", "down_blocks.1.motion_modules.1.temporal_transformer.proj_out.bias", "down_blocks.1.downsamplers.0.conv.weight", "down_blocks.1.downsamplers.0.conv.bias", "down_blocks.2.attentions.0.norm.weight", "down_blocks.2.attentions.0.norm.bias", "down_blocks.2.attentions.0.proj_in.weight", "down_blocks.2.attentions.0.proj_in.bias", "down_blocks.2.attentions.0.transformer_blocks.0.attn1.to_q.weight", "down_blocks.2.attentions.0.transformer_blocks.0.attn1.to_k.weight", "down_blocks.2.attentions.0.transformer_blocks.0.attn1.to_v.weight", "down_blocks.2.attentions.0.transformer_blocks.0.attn1.to_out.0.weight", "down_blocks.2.attentions.0.transformer_blocks.0.attn1.to_out.0.bias", "down_blocks.2.attentions.0.transformer_blocks.0.norm1.weight", 
"down_blocks.2.attentions.0.transformer_blocks.0.norm1.bias", "down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_q.weight", "down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_k.weight", "down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_v.weight", "down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_out.0.weight", "down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_out.0.bias", "down_blocks.2.attentions.0.transformer_blocks.0.norm2.weight", "down_blocks.2.attentions.0.transformer_blocks.0.norm2.bias", "down_blocks.2.attentions.0.transformer_blocks.0.ff.net.0.proj.weight", "down_blocks.2.attentions.0.transformer_blocks.0.ff.net.0.proj.bias", "down_blocks.2.attentions.0.transformer_blocks.0.ff.net.2.weight", "down_blocks.2.attentions.0.transformer_blocks.0.ff.net.2.bias", "down_blocks.2.attentions.0.transformer_blocks.0.norm3.weight", "down_blocks.2.attentions.0.transformer_blocks.0.norm3.bias", "down_blocks.2.attentions.0.proj_out.weight", "down_blocks.2.attentions.0.proj_out.bias", "down_blocks.2.attentions.1.norm.weight", "down_blocks.2.attentions.1.norm.bias", "down_blocks.2.attentions.1.proj_in.weight", "down_blocks.2.attentions.1.proj_in.bias", "down_blocks.2.attentions.1.transformer_blocks.0.attn1.to_q.weight", "down_blocks.2.attentions.1.transformer_blocks.0.attn1.to_k.weight", "down_blocks.2.attentions.1.transformer_blocks.0.attn1.to_v.weight", "down_blocks.2.attentions.1.transformer_blocks.0.attn1.to_out.0.weight", "down_blocks.2.attentions.1.transformer_blocks.0.attn1.to_out.0.bias", "down_blocks.2.attentions.1.transformer_blocks.0.norm1.weight", "down_blocks.2.attentions.1.transformer_blocks.0.norm1.bias", "down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_q.weight", "down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_k.weight", "down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_v.weight", "down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_out.0.weight", 
"down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_out.0.bias", "down_blocks.2.attentions.1.transformer_blocks.0.norm2.weight", "down_blocks.2.attentions.1.transformer_blocks.0.norm2.bias", "down_blocks.2.attentions.1.transformer_blocks.0.ff.net.0.proj.weight", "down_blocks.2.attentions.1.transformer_blocks.0.ff.net.0.proj.bias", "down_blocks.2.attentions.1.transformer_blocks.0.ff.net.2.weight", "down_blocks.2.attentions.1.transformer_blocks.0.ff.net.2.bias", "down_blocks.2.attentions.1.transformer_blocks.0.norm3.weight", "down_blocks.2.attentions.1.transformer_blocks.0.norm3.bias", "down_blocks.2.attentions.1.proj_out.weight", "down_blocks.2.attentions.1.proj_out.bias", "down_blocks.2.resnets.0.norm1.weight", "down_blocks.2.resnets.0.norm1.bias", "down_blocks.2.resnets.0.conv1.weight", "down_blocks.2.resnets.0.conv1.bias", "down_blocks.2.resnets.0.time_emb_proj.weight", "down_blocks.2.resnets.0.time_emb_proj.bias", "down_blocks.2.resnets.0.norm2.weight", "down_blocks.2.resnets.0.norm2.bias", "down_blocks.2.resnets.0.conv2.weight", "down_blocks.2.resnets.0.conv2.bias", "down_blocks.2.resnets.0.conv_shortcut.weight", "down_blocks.2.resnets.0.conv_shortcut.bias", "down_blocks.2.resnets.1.norm1.weight", "down_blocks.2.resnets.1.norm1.bias", "down_blocks.2.resnets.1.conv1.weight", "down_blocks.2.resnets.1.conv1.bias", "down_blocks.2.resnets.1.time_emb_proj.weight", "down_blocks.2.resnets.1.time_emb_proj.bias", "down_blocks.2.resnets.1.norm2.weight", "down_blocks.2.resnets.1.norm2.bias", "down_blocks.2.resnets.1.conv2.weight", "down_blocks.2.resnets.1.conv2.bias", "down_blocks.2.motion_modules.0.temporal_transformer.norm.weight", "down_blocks.2.motion_modules.0.temporal_transformer.norm.bias", "down_blocks.2.motion_modules.0.temporal_transformer.proj_in.weight", "down_blocks.2.motion_modules.0.temporal_transformer.proj_in.bias", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_q.weight", 
"down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_k.weight", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_v.weight", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.weight", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.bias", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_q.weight", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_k.weight", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_v.weight", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.weight", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.bias", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.0.weight", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.0.bias", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.1.weight", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.1.bias", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.0.proj.weight", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.0.proj.bias", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.2.weight", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.2.bias", 
"down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.ff_norm.weight", "down_blocks.2.motion_modules.0.temporal_transformer.transformer_blocks.0.ff_norm.bias", "down_blocks.2.motion_modules.0.temporal_transformer.proj_out.weight", "down_blocks.2.motion_modules.0.temporal_transformer.proj_out.bias", "down_blocks.2.motion_modules.1.temporal_transformer.norm.weight", "down_blocks.2.motion_modules.1.temporal_transformer.norm.bias", "down_blocks.2.motion_modules.1.temporal_transformer.proj_in.weight", "down_blocks.2.motion_modules.1.temporal_transformer.proj_in.bias", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_q.weight", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_k.weight", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_v.weight", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.weight", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.bias", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_q.weight", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_k.weight", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_v.weight", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.weight", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.bias", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.0.weight", 
"down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.0.bias", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.1.weight", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.1.bias", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.0.proj.weight", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.0.proj.bias", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.2.weight", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.2.bias", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.ff_norm.weight", "down_blocks.2.motion_modules.1.temporal_transformer.transformer_blocks.0.ff_norm.bias", "down_blocks.2.motion_modules.1.temporal_transformer.proj_out.weight", "down_blocks.2.motion_modules.1.temporal_transformer.proj_out.bias", "down_blocks.2.downsamplers.0.conv.weight", "down_blocks.2.downsamplers.0.conv.bias", "down_blocks.3.resnets.0.norm1.weight", "down_blocks.3.resnets.0.norm1.bias", "down_blocks.3.resnets.0.conv1.weight", "down_blocks.3.resnets.0.conv1.bias", "down_blocks.3.resnets.0.time_emb_proj.weight", "down_blocks.3.resnets.0.time_emb_proj.bias", "down_blocks.3.resnets.0.norm2.weight", "down_blocks.3.resnets.0.norm2.bias", "down_blocks.3.resnets.0.conv2.weight", "down_blocks.3.resnets.0.conv2.bias", "down_blocks.3.resnets.1.norm1.weight", "down_blocks.3.resnets.1.norm1.bias", "down_blocks.3.resnets.1.conv1.weight", "down_blocks.3.resnets.1.conv1.bias", "down_blocks.3.resnets.1.time_emb_proj.weight", "down_blocks.3.resnets.1.time_emb_proj.bias", "down_blocks.3.resnets.1.norm2.weight", "down_blocks.3.resnets.1.norm2.bias", "down_blocks.3.resnets.1.conv2.weight", "down_blocks.3.resnets.1.conv2.bias", "down_blocks.3.motion_modules.0.temporal_transformer.norm.weight", "down_blocks.3.motion_modules.0.temporal_transformer.norm.bias", 
"down_blocks.3.motion_modules.0.temporal_transformer.proj_in.weight", "down_blocks.3.motion_modules.0.temporal_transformer.proj_in.bias", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_q.weight", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_k.weight", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_v.weight", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.weight", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.bias", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_q.weight", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_k.weight", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_v.weight", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.weight", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.bias", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.0.weight", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.0.bias", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.1.weight", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.1.bias", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.0.proj.weight", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.0.proj.bias", 
"down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.2.weight", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.2.bias", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.ff_norm.weight", "down_blocks.3.motion_modules.0.temporal_transformer.transformer_blocks.0.ff_norm.bias", "down_blocks.3.motion_modules.0.temporal_transformer.proj_out.weight", "down_blocks.3.motion_modules.0.temporal_transformer.proj_out.bias", "down_blocks.3.motion_modules.1.temporal_transformer.norm.weight", "down_blocks.3.motion_modules.1.temporal_transformer.norm.bias", "down_blocks.3.motion_modules.1.temporal_transformer.proj_in.weight", "down_blocks.3.motion_modules.1.temporal_transformer.proj_in.bias", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_q.weight", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_k.weight", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_v.weight", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.weight", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.bias", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_q.weight", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_k.weight", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_v.weight", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.weight", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.bias", 
"down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.0.weight", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.0.bias", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.1.weight", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.1.bias", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.0.proj.weight", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.0.proj.bias", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.2.weight", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.2.bias", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.ff_norm.weight", "down_blocks.3.motion_modules.1.temporal_transformer.transformer_blocks.0.ff_norm.bias", "down_blocks.3.motion_modules.1.temporal_transformer.proj_out.weight", "down_blocks.3.motion_modules.1.temporal_transformer.proj_out.bias", "up_blocks.0.resnets.0.norm1.weight", "up_blocks.0.resnets.0.norm1.bias", "up_blocks.0.resnets.0.conv1.weight", "up_blocks.0.resnets.0.conv1.bias", "up_blocks.0.resnets.0.time_emb_proj.weight", "up_blocks.0.resnets.0.time_emb_proj.bias", "up_blocks.0.resnets.0.norm2.weight", "up_blocks.0.resnets.0.norm2.bias", "up_blocks.0.resnets.0.conv2.weight", "up_blocks.0.resnets.0.conv2.bias", "up_blocks.0.resnets.0.conv_shortcut.weight", "up_blocks.0.resnets.0.conv_shortcut.bias", "up_blocks.0.resnets.1.norm1.weight", "up_blocks.0.resnets.1.norm1.bias", "up_blocks.0.resnets.1.conv1.weight", "up_blocks.0.resnets.1.conv1.bias", "up_blocks.0.resnets.1.time_emb_proj.weight", "up_blocks.0.resnets.1.time_emb_proj.bias", "up_blocks.0.resnets.1.norm2.weight", "up_blocks.0.resnets.1.norm2.bias", "up_blocks.0.resnets.1.conv2.weight", 
"up_blocks.0.resnets.1.conv2.bias", "up_blocks.0.resnets.1.conv_shortcut.weight", "up_blocks.0.resnets.1.conv_shortcut.bias", "up_blocks.0.resnets.2.norm1.weight", "up_blocks.0.resnets.2.norm1.bias", "up_blocks.0.resnets.2.conv1.weight", "up_blocks.0.resnets.2.conv1.bias", "up_blocks.0.resnets.2.time_emb_proj.weight", "up_blocks.0.resnets.2.time_emb_proj.bias", "up_blocks.0.resnets.2.norm2.weight", "up_blocks.0.resnets.2.norm2.bias", "up_blocks.0.resnets.2.conv2.weight", "up_blocks.0.resnets.2.conv2.bias", "up_blocks.0.resnets.2.conv_shortcut.weight", "up_blocks.0.resnets.2.conv_shortcut.bias", "up_blocks.0.motion_modules.0.temporal_transformer.norm.weight", "up_blocks.0.motion_modules.0.temporal_transformer.norm.bias", "up_blocks.0.motion_modules.0.temporal_transformer.proj_in.weight", "up_blocks.0.motion_modules.0.temporal_transformer.proj_in.bias", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_q.weight", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_k.weight", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_v.weight", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.weight", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.bias", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_q.weight", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_k.weight", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_v.weight", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.weight", 
"up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.bias", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.0.weight", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.0.bias", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.1.weight", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.norms.1.bias", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.0.proj.weight", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.0.proj.bias", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.2.weight", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.ff.net.2.bias", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.ff_norm.weight", "up_blocks.0.motion_modules.0.temporal_transformer.transformer_blocks.0.ff_norm.bias", "up_blocks.0.motion_modules.0.temporal_transformer.proj_out.weight", "up_blocks.0.motion_modules.0.temporal_transformer.proj_out.bias", "up_blocks.0.motion_modules.1.temporal_transformer.norm.weight", "up_blocks.0.motion_modules.1.temporal_transformer.norm.bias", "up_blocks.0.motion_modules.1.temporal_transformer.proj_in.weight", "up_blocks.0.motion_modules.1.temporal_transformer.proj_in.bias", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_q.weight", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_k.weight", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_v.weight", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.weight", 
"up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.bias", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_q.weight", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_k.weight", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_v.weight", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.weight", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.bias", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.0.weight", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.0.bias", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.1.weight", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.norms.1.bias", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.0.proj.weight", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.0.proj.bias", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.2.weight", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.ff.net.2.bias", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.ff_norm.weight", "up_blocks.0.motion_modules.1.temporal_transformer.transformer_blocks.0.ff_norm.bias", "up_blocks.0.motion_modules.1.temporal_transformer.proj_out.weight", "up_blocks.0.motion_modules.1.temporal_transformer.proj_out.bias", "up_blocks.0.motion_modules.2.temporal_transformer.norm.weight", 
"up_blocks.0.motion_modules.2.temporal_transformer.norm.bias", "up_blocks.0.motion_modules.2.temporal_transformer.proj_in.weight", "up_blocks.0.motion_modules.2.temporal_transformer.proj_in.bias", "up_blocks.0.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_q.weight", "up_blocks.0.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_k.weight", "up_blocks.0.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_v.weight", "up_blocks.0.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.weight", "up_blocks.0.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.0.to_out.0.bias", "up_blocks.0.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.0.pos_encoder.pe", "up_blocks.0.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_q.weight", "up_blocks.0.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_k.weight", "up_blocks.0.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_v.weight", "up_blocks.0.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.weight", "up_blocks.0.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.1.to_out.0.bias", "up_blocks.0.motion_modules.2.temporal_transformer.transformer_blocks.0.attention_blocks.1.pos_encoder.pe", "up_blocks.0.motion_modules.2.temporal_transformer.transformer_blocks.0.norms.0.weight", "up_blocks.0.motion_modules.2.temporal_transformer.transformer_blocks.0.norms.0.bias", "up_blocks.0.motion_modules.2.temporal_transformer.transformer_blocks.0.norms.1.weight",

Prompt and Initial Image

Two questions; there is no Discussions tab in the GitHub repo, so I'm putting this under "Issues".

  1. What does "1girl" mean in the toons prompt? Is this an anime character or is "1" is typo?

  2. Is there a way to put in an initial image?

[Feature] Model Import

Add a Gradio tab for downloading models via a link, like the extension downloader in the A1111 web UI.
Would be very nice ^^

RuntimeError: Error(s) in loading state_dict for CLIPTextModel:

All the Colab notebooks stopped working on 07/18 and return the same error.
Not sure exactly what triggered it, but every AnimateDiff Colab notebook fails with the same error when running the last cell:

RuntimeError: Error(s) in loading state_dict for CLIPTextModel:
Unexpected key(s) in state_dict: "text_model.embeddings.position_ids".

here's the full thing:

Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/content/AnimateDiff/scripts/animate.py", line 176, in <module>
    main(args)
  File "/content/AnimateDiff/scripts/animate.py", line 99, in main
    pipeline.text_encoder = convert_ldm_clip_checkpoint(base_state_dict)
  File "/content/AnimateDiff/animatediff/utils/convert_from_ckpt.py", line 726, in convert_ldm_clip_checkpoint
    text_model.load_state_dict(text_model_dict)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for CLIPTextModel:
    Unexpected key(s) in state_dict: "text_model.embeddings.position_ids".

Any idea?
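This error typically shows up with newer `transformers` releases, where `text_model.embeddings.position_ids` is no longer a registered buffer on `CLIPTextModel`, so checkpoints that still carry the key fail a strict `load_state_dict`. A minimal sketch of the commonly suggested workaround (the dict contents below are illustrative; the real fix would go next to the `load_state_dict` call in the traceback above): drop the stale key before loading.

```python
# Illustrative checkpoint dict with the problematic key present.
state_dict = {
    "text_model.embeddings.position_ids": [0, 1, 2],              # stale buffer key
    "text_model.embeddings.token_embedding.weight": [[0.1, 0.2]],
}

# Remove the key that strict loading rejects; .pop with a default is a
# no-op when the checkpoint never had the key.
state_dict.pop("text_model.embeddings.position_ids", None)

# text_model.load_state_dict(state_dict)  # strict load now succeeds
print(sorted(state_dict.keys()))
```

Another workaround reported for this class of error is pinning an older `transformers` release in the notebook's install cell, though the key-stripping approach avoids touching the dependency set.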

Colab fps error

For those hitting the fps error in Colab, you have to edit the /content/animatediff/animatediff/utils/util.py file, changing 'fps' to 'duration'.
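A sketch of what that edit amounts to, assuming the error comes from recent imageio releases whose GIF writer takes a per-frame `duration` instead of `fps`. Note that whether `duration` is interpreted in seconds or milliseconds depends on the installed imageio version, so check which one your environment expects; the save calls below are illustrative, not the exact lines in util.py.

```python
fps = 8
seconds_per_frame = 1 / fps    # 0.125 s per frame at 8 fps
ms_per_frame = 1000 / fps      # 125 ms, for APIs that expect milliseconds

# Before: imageio.mimsave(path, frames, fps=fps)
# After:  imageio.mimsave(path, frames, duration=seconds_per_frame)
#         (or duration=ms_per_frame, depending on the imageio version)
print(seconds_per_frame, ms_per_frame)
```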
