chenyangqiqi / fatezero

[ICCV 2023 Oral] "FateZero: Fusing Attentions for Zero-shot Text-based Video Editing"

Home Page: http://fate-zero-edit.github.io/

License: MIT License

stable-diffusion text-driven-editing video-editing image-editing video-style-transfer

fatezero's Introduction

FateZero: Fusing Attentions for Zero-shot Text-based Video Editing (ICCV 2023 Oral)

Chenyang Qi, Xiaodong Cun, Yong Zhang, Chenyang Lei, Xintao Wang, Ying Shan, and Qifeng Chen

Open In Colab Hugging Face Spaces GitHub

"silver jeep ➜ posche car" "+ Van Gogh style" "squirrel,Carrot ➜ rabbit,eggplant"

🎏 Abstract

TL; DR: FateZero is the first zero-shot framework for text-driven video editing via pretrained diffusion models without training.

CLICK for the full abstract

Diffusion-based generative models have achieved remarkable success in text-based image generation. However, since the generation process contains enormous randomness, it is still challenging to apply such models to real-world visual content editing, especially in videos. In this paper, we propose FateZero, a zero-shot text-based editing method for real-world videos that requires no per-prompt training or user-specified mask. To edit videos consistently, we propose several techniques based on pre-trained models. First, in contrast to the straightforward DDIM inversion technique, our approach captures intermediate attention maps during inversion, which effectively retain both structural and motion information. These maps are directly fused in the editing process rather than generated during denoising. To further minimize semantic leakage from the source video, we then fuse self-attentions with a blending mask obtained from the cross-attention features of the source prompt. Furthermore, we reform the self-attention mechanism in the denoising UNet by introducing spatial-temporal attention to ensure frame consistency. Though succinct, our method is the first to demonstrate zero-shot text-driven video style and local attribute editing from a trained text-to-image model. We also show better zero-shot shape-aware editing based on a text-to-video model. Extensive experiments demonstrate superior temporal consistency and editing capability compared with previous works.

📋 Changelog

  • 2023.06.10 Added two examples with large motion and multiple objects.
  • 2023.04.18 Code refactoring and support for local blending using the blend_latents option.
  • 2023.04.04 Released enhanced Tune-A-Video configs and shape-editing ckpts, data and config.
  • 2023.03.31 Refined the Hugging Face demo.
  • 2023.03.17 Released code and paper!

🚧 Todo

Click for previous todos
  • Release the edit config and data for all results, Tune-A-Video optimization
  • Memory and runtime profiling and editing guidance documents
  • Colab and Hugging Face demos
  • Code refactoring
  • Time & memory optimization
  • Release more applications

🛑 Setup Environment

Our method is tested with CUDA 11, fp16 mixed precision via accelerate, and xformers on a single A100 or 3090.

conda create -n fatezero38 python=3.8
conda activate fatezero38

pip install -r requirements.txt

xformers is recommended on A100 GPUs to save memory and running time.

Click for xformers installation

We find its installation can be unstable. You may try the following wheel:

wget https://github.com/ShivamShrirao/xformers-wheels/releases/download/4c06c79/xformers-0.0.15.dev0+4c06c79.d20221201-cp38-cp38-linux_x86_64.whl
pip install xformers-0.0.15.dev0+4c06c79.d20221201-cp38-cp38-linux_x86_64.whl

Validate the installation with:

python test_install.py
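
If you want an additional quick check of the environment itself, a minimal sanity script along these lines may help (a sketch that is not part of the repo; it only verifies that CUDA, fp16 and the optional xformers package are usable, nothing FateZero-specific):

# check_env.py -- minimal environment sanity check (illustrative, not part of the repo)
import torch

assert torch.cuda.is_available(), "CUDA is not visible to PyTorch"
print("GPU:", torch.cuda.get_device_name(0))

# fp16 matmul on the GPU, as used by accelerate mixed precision
x = torch.randn(8, 8, device="cuda", dtype=torch.float16)
print("fp16 matmul OK:", (x @ x).shape)

# xformers is optional and only needed for memory-efficient attention
try:
    import xformers
    print("xformers version:", xformers.__version__)
except ImportError:
    print("xformers not installed (optional)")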

You may download all data and checkpoints using the following bash command:

bash download_all.sh

The above command takes several minutes and about 100 GB of disk space. Alternatively, you may download only the required data and checkpoints later, according to your interests.

Our environment is similar to Tune-A-Video (official, unofficial) and prompt-to-prompt. You may check them for more details.

βš”οΈ FateZero Editing

Style and Attribute Editing in Teaser

Download Stable Diffusion v1-4 (or another image diffusion model of interest) and put it in ./ckpt/stable-diffusion-v1-4.

Click for the bash command:
mkdir ./ckpt
cd ./ckpt
# download from Hugging Face; takes about 20 GB of space
git lfs install
git clone https://huggingface.co/CompVis/stable-diffusion-v1-4

Then, you could reproduce the style and attribute editing results in our teaser by running:

accelerate launch test_fatezero.py --config config/teaser/jeep_watercolor.yaml
# or CUDA_VISIBLE_DEVICES=0 python test_fatezero.py --config config/teaser/jeep_watercolor.yaml
The result is saved at `./result`. (Click for directory structure)
result
├── teaser
│   ├── jeep_posche
│   ├── jeep_watercolor
│           ├── cross-attention  # visualization of cross-attention during inversion
│           ├── sample           # result
│           ├── train_samples    # the input video

Editing 8 frames on an NVIDIA 3090 uses about 100 GB of CPU memory and 12 GB of GPU memory. We also provide some low-cost settings for style editing, via different hyper-parameters, that run on a 16 GB GPU. You may try these low-cost settings on Colab. Open In Colab

More speed and hardware benchmarks are here.
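
If you want to benchmark your own hardware, one rough way to measure wall time and peak GPU memory is to wrap the editing command in a small driver that polls nvidia-smi (a sketch; the config path and the one-second polling interval are illustrative choices, not part of the repo):

# profile_edit.py -- rough wall-time / peak-GPU-memory measurement (illustrative)
import subprocess
import threading
import time

CMD = ["accelerate", "launch", "test_fatezero.py",
       "--config", "config/teaser/jeep_watercolor.yaml"]

peak_mib = 0
done = False

def poll_gpu_memory():
    global peak_mib
    while not done:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"],
            capture_output=True, text=True,
        ).stdout
        if out.strip():
            peak_mib = max(peak_mib, max(int(v) for v in out.split()))
        time.sleep(1)

threading.Thread(target=poll_gpu_memory, daemon=True).start()
start = time.time()
subprocess.run(CMD, check=True)
done = True
print(f"wall time: {time.time() - start:.1f} s, peak GPU memory: {peak_mib} MiB")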

Shape and large motion editing with Tune-A-Video

Besides the style and attribute editing above, we also provide a Tune-A-Video checkpoint. You may download it from OneDrive or from the Hugging Face model repository, then move it to ./ckpt/jeep_tuned_200/.

Click for the bash command:
mkdir ./ckpt
cd ./ckpt
# download from Hugging Face; takes about 10 GB of space
git lfs install
git clone https://huggingface.co/chenyangqi/jeep_tuned_200
The directory structure should be like this: (Click for directory structure)
ckpt
├── stable-diffusion-v1-4
├── jeep_tuned_200
...
data
├── car-turn
│   ├── 00000000.png
│   ├── 00000001.png
│   ├── ...
video_diffusion

You could reproduce the shape editing result in our teaser by running:

accelerate launch test_fatezero.py --config config/teaser/jeep_posche.yaml

Reproduce other results in the paper

Download the data from OneDrive or from the GitHub release.

Click for wget bash command:
wget https://github.com/ChenyangQiQi/FateZero/releases/download/v0.0.1/attribute.zip
wget https://github.com/ChenyangQiQi/FateZero/releases/download/v0.0.1/style.zip
wget https://github.com/ChenyangQiQi/FateZero/releases/download/v0.0.1/shape.zip

Unzip and place them in './data'. Then use the commands in 'config/style' and 'config/attribute' to get the results.
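
If you prefer to batch over all provided configs rather than launching them one by one, a small driver script along these lines may help (a sketch; it simply shells out to the same accelerate command shown above and assumes the data and checkpoints are already in place):

# run_configs.py -- run test_fatezero.py for every style/attribute config (illustrative)
import glob
import subprocess

configs = sorted(
    glob.glob("config/style/**/*.yaml", recursive=True)
    + glob.glob("config/attribute/**/*.yaml", recursive=True)
)
for config in configs:
    print(f"=== {config} ===")
    subprocess.run(
        ["accelerate", "launch", "test_fatezero.py", "--config", config],
        check=True,
    )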

To reproduce other shape editing results, download the Tune-A-Video checkpoints from Hugging Face:

Click for the bash command:
mkdir ./ckpt
cd ./ckpt
# download from Hugging Face; takes about 10 GB of space
git lfs install
git clone https://huggingface.co/chenyangqi/man_skate_250
git clone https://huggingface.co/chenyangqi/swan_150

Then use the commands in 'config/shape'.

For the above Tune-A-Video checkpoints, we fine-tune Stable Diffusion with a synthetic negative-prompt dataset for regularization and low-rank convolution for temporally consistent generation, using the tuning config.

Click for the bash command example:
cd ./data
wget https://github.com/ChenyangQiQi/FateZero/releases/download/v0.0.1/negative_reg.zip
unzip negative_reg.zip
cd ..
accelerate launch train_tune_a_video.py --config config/tune/jeep.yaml

To evaluate our results quantitatively, we provide CLIP/frame_acc_tem_con.py to calculate frame accuracy and temporal consistency using a pretrained CLIP model.
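
For reference, the two metrics can be sketched with an off-the-shelf CLIP model as below (an illustration of the standard definitions, not the exact contents of CLIP/frame_acc_tem_con.py): frame accuracy is the fraction of edited frames whose CLIP image embedding is closer to the target prompt than to the source prompt, and temporal consistency is the mean cosine similarity between CLIP embeddings of consecutive frames.

# clip_metrics.py -- frame accuracy and temporal consistency (illustrative sketch)
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def clip_metrics(frame_paths, source_prompt, target_prompt):
    images = [Image.open(p).convert("RGB") for p in frame_paths]
    with torch.no_grad():
        img_emb = model.get_image_features(**processor(images=images, return_tensors="pt"))
        txt_emb = model.get_text_features(
            **processor(text=[source_prompt, target_prompt], return_tensors="pt", padding=True)
        )
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    sims = img_emb @ txt_emb.T                      # [num_frames, 2] (source, target)
    frame_acc = (sims[:, 1] > sims[:, 0]).float().mean().item()
    tem_con = (img_emb[:-1] * img_emb[1:]).sum(-1).mean().item()
    return frame_acc, tem_con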

Editing guidance for YOUR video

We provide editing guidance for in-the-wild videos here. This is still a work in progress; feedback in the issues is welcome.

Style Editing Results with Stable Diffusion

We show the difference between the source prompt and the target prompt in the box below each video.

Note mp4 and gif files in this GitHub page are compressed. Please check our Project Page for mp4 files of original video editing results.

"+ Ukiyo-e style" "+ watercolor painting" "+ Monet style"
"+ PokΓ©mon cartoon style" "+ Makoto Shinkai style" "+ watercolor painting"

Attribute Editing Results with Stable Diffusion

"rabbit, strawberry ➜ white rabbit, flower" "rabbit, strawberry ➜ squirrel, carrot" "rabbit, strawberry ➜ white rabbit, leaves"
"bear ➜ a red tiger" "bear ➜ a yellow leopard" "bear ➜ a brown lion"
"cat ➜ black cat, grass..." "cat ➜ red tiger" "cat ➜ Shiba-Inu"
"orange fish ➜ yellow fish" "squirrel ➜ robot squirrel" "squirrel, Carrot ➜ robot mouse, screwdriver"
"bus ➜ GPU" "gray dog ➜ yellow corgi" "gray dog ➜ robotic dog"
"white duck ➜ yellow rubber duck" "grass ➜ snow" "white fox ➜ grey wolf"

Shape and large motion editing with Tune-A-Video

"silver jeep ➜ posche car" "Swan ➜ White Duck" "Swan ➜ Pink flamingo"
"A man ➜ A Batman" "A man ➜ A Wonder Woman, With cowboy hat" "A man ➜ A Spider-Man"

🕹 Online Demo

Thanks to AK and the team from Hugging Face for providing computing resources to support our Hugging Face demo, which supports up to 30 DDIM steps. Hugging Face Spaces.

You may also run the Gradio UI for testing FateZero locally:

git clone https://huggingface.co/spaces/chenyangqi/FateZero
python app_fatezero.py
# we will merge the FateZero code on Hugging Face with the GitHub repo later

We also provide a Colab demo, which supports 10 DDIM steps. Open In Colab You may also launch the Colab notebook as a Jupyter notebook on your local machine. We will refine and optimize the above demos in the coming days.

📀 Demo Video

165a65fe9b83096a92a1bddb9bfff459.mp4

The video here is compressed due to the size limit of GitHub. The original full-resolution video is here.

πŸ“ Citation

@article{qi2023fatezero,
      title={FateZero: Fusing Attentions for Zero-shot Text-based Video Editing}, 
      author={Chenyang Qi and Xiaodong Cun and Yong Zhang and Chenyang Lei and Xintao Wang and Ying Shan and Qifeng Chen},
      year={2023},
      journal={arXiv:2303.09535},
}

💗 Acknowledgements

This repository borrows heavily from Tune-A-Video and prompt-to-prompt. Thanks to the authors for sharing their code and models.

🧿 Maintenance

This is the codebase for our research work. We are still working hard to update this repo, and more details will follow in the coming days. If you have any questions or ideas to discuss, feel free to contact Chenyang Qi or Xiaodong Cun.

fatezero's People

Contributors

chenyangqiqi, eltociear, mayuelala, vinthony


fatezero's Issues

Application Extension: Long video

Hi,

Is there a way to make the method more memory-efficient for running on longer videos? I'm trying to run on 25 frames with an RTX 8000 GPU (48 GB of GPU RAM), and I'm running out of memory.

Thanks.

error

python test_fatezero.py --config config/teaser/jeep_watercolor.yaml

The config attributes {'scaling_factor': 0.18215} were passed to AutoencoderKL, but are not expected and will be ignored. Please verify your config.json configuration file.
use fp16
down_blocks
Number of attention layer registered 32
Invert clean image to noise latents by DDIM and Unet
 64%|████████████████████████████████▉                  | 32/50 [00:58<00:32, 1.79s/it]
Killed

NVIDIA-SMI 525.89.02 Driver Version: 525.89.02 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-PCIE... On | 00000000:65:04.0 Off | Off |
| N/A 32C P0 25W / 250W | 0MiB / 32768MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+

Google Colab Not Working

I'm getting an error when running:
!accelerate launch test_fatezero.py --config=$CONFIG_NAME

It seems like you are missing a directory called car-turn in the data directory.

Error:

Traceback (most recent call last):
  File "/content/FateZero/test_fatezero.py", line 286, in <module>
    run()
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/content/FateZero/test_fatezero.py", line 259, in run
    test(config=config, **Omegadict)
  File "/content/FateZero/test_fatezero.py", line 141, in test
    video_dataset = ImageSequenceDataset(**dataset_config, prompt_ids=prompt_ids)
  File "/content/FateZero/video_diffusion/data/dataset.py", line 42, in __init__
    self.images = self.get_image_list(path)
  File "/content/FateZero/video_diffusion/data/dataset.py", line 143, in get_image_list
    for file in sorted(os.listdir(path)):
FileNotFoundError: [Errno 2] No such file or directory: 'data/car-turn'
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/accelerate_cli.py", line 45, in main
    args.func(args)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 979, in launch_command
    simple_launcher(args)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 628, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', 'test_fatezero.py', '--config=config/car-turn.yaml']' returned non-zero exit status 1.

evaluation metrics code

Hi, thanks for your good work. How can I reproduce the quantitative results in the paper, and where is the code for the evaluation metrics?

How to apply the original Tune-A-Video model on FateZero

I fine-tuned a model based on the Tune-A-Video architecture, and it can run inference in the Tune-A-Video framework, but when I try to use the pre-trained Tune-A-Video model with your code, it reports an error:
KeyError: '2d state_dict key down_blocks.0.attentions.0.transformer_blocks.0.attn_temp.to_q.weight does not exist in 3d model'
I wonder how to solve this problem. Did you make any changes to the original Tune-A-Video framework?
Do you provide code for fine-tuning a Tune-A-Video model that can be used in your framework?

Using DDIM inversion for images

Hey,

I was curious whether it is possible to reuse your inversion approach in a text-to-image pipeline. After briefly looking into the code, it seems that the image is inverted with the temporal SD model. I am only interested in editing images.

Thanks.

Hand + Face of Human Pose

Hi,
Is it possible to generate a single character from the pose for about 5 seconds?

I have a pose video (OpenPose + hands + face) and I was wondering whether it is possible to generate a 5-second output video with a consistent character/avatar that dances, etc., following the controlled (pose) input.

In short, I have a video of OpenPose + hands + face and I want to generate a human-like animation (no matter what it looks like, just a consistent character/avatar).
Sample Video

Thanks
Best regards

Style prompts are very restrictive

I tried the pokemon_rabbit example in FateZero and changed the target prompt from "pokemon cartoon of rabbit eating a watermelon" to "pokemon cartoon of an old rabbit eating a watermelon", since I want the rabbit to look old. Such prompts have practically no effect on the output. How can I ensure (by tweaking hyper-parameters) that I see some change?

Questions: resolution and supported prompt

Thanks for this!

  1. Is there a maximum output resolution for the video?
  2. Can we create any prompt/concept style through SD checkpoints, or only through the pretrained style YAMLs you've provided?

About sample.yml

Hi, your work is really great! Thanks so much for open-sourcing this.
I have a question regarding training FateZero with train_tune_a_video.py; it seems that sample.yml isn't in the config folder.
It would be great if you could share this as well :)

@click.command()
@click.option("--config", type=str, default="config/sample.yml")
def run(config):
    train(config=config, **OmegaConf.load(config))

if __name__ == "__main__":
    run()

How to produce the cross-attention map

Sorry to bother you.
I wonder whether there is any code you could share for creating the cross-attention maps shown in Fig. 4?
Or, at which stage are they produced?
Thank you!

CPU memory

Nice job.
Are there any ways to decrease the CPU memory usage (100 GB is unaffordable for me) while the shape can still be modified? My program was killed during DDIM inversion due to limited memory.

Google colab error

When I run this code block in Google Colab
!accelerate launch test_fatezero.py --config=$CONFIG_NAME
it raises an error:


Stable Diffusion 1-5 doesn't work

Error

C:\python\Projects\FateZero\test_fatezero.py:101 in test

     98         subfolder="vae",
     99     )
    100
❱  101     unet = UNetPseudo3DConditionModel.from_2d_model(
    102         os.path.join(pretrained_model_path, "unet"), model_config=model_config
    103     )
    104

C:\python\Projects\FateZero\video_diffusion\models\unet_3d_condition.py:477 in from_2d_model

    474             config.update(model_config)
    475         # else:
    476         # config['model_config'] = model_config
❱  477         model = cls(**config)
    478
    479         state_dict_path_condidates = glob.glob(os.path.join(model_path, "*.bin"))
    480         if state_dict_path_condidates:

    168                 model_config=kwargs
    169             )
    170         else:
❱  171             raise ValueError(f"unknown mid_block_type : {mid_block_type}")
    172
    173         # count how many layers upsample the images
    174         self.num_upsamplers = 0
ValueError: unknown mid_block_type : UNetMidBlock2DCrossAttn

I haven't tried SD 1-4 because it takes time to download the diffusers version from Hugging Face.

non-square videos

Hi!

I would like to run your method on non-square videos. However, it seems that simply changing the crop size in the dataset causes dimension errors afterwards. Are non-square videos not supported? Are you planning to support this?

Thanks!

Background style

I have noticed that the background color tone has changed in many examples. What could be the possible reasons for this?

New "concept" training

Hi,

What if I would like to edit the video with a brand-new concept that Stable Diffusion has never seen before? For example, my own image.

If not, can I use training techniques such as textual inversion for personalizing image generation?

https://huggingface.co/docs/diffusers/training/text_inversion

Sorry if I missed the answer in this repo or paper, and please point me to that resource. Your help is highly appreciated.

Thank you

The shape editing output figure.

Thanks for your good work.
I want to ask about the output figures. I know step_0_1_0 is the edited video, but I have no idea what step_0_0_0 is.
Is it the DDIM reconstruction video or not?

About "strength" in sd_ddim_pipeline function

Hello, I put a description about the parameter "strength" below. My understanding is that if we designate "strength = 0.0," it implies the denoising of a clean image. Conversely, by setting "strength = 1.0," it suggests denoising from pure noise. However, I observed that even when I varied this parameter as 0.1, 0.5, and 1.0 respectively, the resulting outputs were identical. I'm just wondering where I can generate an output image purely from noise. Thank you for your assistance!

strength (`float`, *optional*, defaults to 1.0):
Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
will be used as a starting point, adding more noise to it the larger the `strength`. The number of
denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
be maximum and the denoising process will run for the full number of iterations specified in
`num_inference_steps`. A value of 1, therefore, essentially ignores `image`.

def sd_ddim_pipeline(
    self,
    prompt: Union[str, List[str]],
    image: Union[torch.FloatTensor, PIL.Image.Image] = None,
    height: Optional[int] = None,
    width: Optional[int] = None,
    strength: float = None,
    num_inference_steps: int = 50,
    guidance_scale: float = 7.5,
    negative_prompt: Optional[Union[str, List[str]]] = None,
    num_images_per_prompt: Optional[int] = 1,
    eta: float = 0.0,
    generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
    latents: Optional[torch.FloatTensor] = None,
    output_type: Optional[str] = "pil",
    return_dict: bool = True,
    callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
    callback_steps: Optional[int] = 1,
    controller: attention_util.AttentionControl = None,
    **args
):

Strange Config Item.

Hi, I found that jeep_posche.yaml contains "watercolor", "painting", and "car", which are not present in the edited prompt.

eq_params:
  words: ["watercolor", "painting"]
  values: [10, 10]
blend_words: [['jeep',], ["car",]]
blend_self_attention: True

Unable to reproduce jeep watercolor example in demos/paper

Thank you for the excellent codebase and really useful research contributions. I was trying to replicate the jeep watercolor example given in demos/paper using the same hyperparameters (jeep_watercolor.yaml), but the result seems a bit degraded compared to what's reported. Can you verify that these hyperparameters are correct in the yaml?

I am able to reproduce some other examples closely including jeep posche.

DDIM inversion

Hi, thanks for your exciting work. I'm confused by a problem while reading the paper and want to ask for your help. In the DDIM inversion process, eps_theta(x_t) in Equation 3 still depends on x_t, so how can I obtain x_t from x_{t-1}?

I re-read Zero-shot Image-to-Image Translation and DDIM, but still confused. Can you help me with this question?
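
For background (this is the standard DDIM-inversion approximation used in implementations such as prompt-to-prompt, not necessarily a description of this repo's exact code): the noise prediction at the unknown latent is approximated by the prediction at the known one, \epsilon_\theta(x_t, t) \approx \epsilon_\theta(x_{t-1}, t), which is reasonable when the step size is small. Rearranging the deterministic DDIM update then gives an explicit inversion step:

x_t \approx \sqrt{\frac{\bar\alpha_t}{\bar\alpha_{t-1}}}\, x_{t-1}
      + \sqrt{\bar\alpha_t}\left(\sqrt{\frac{1}{\bar\alpha_t}-1}
      - \sqrt{\frac{1}{\bar\alpha_{t-1}}-1}\right)\epsilon_\theta(x_{t-1}, t)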

Colab Notebooks not working

Hi, unfortunately the Colab notebooks are not working: the environment dependencies are not pinned for all packages, which leads to dependency conflicts.

In the colab notebook:

%pip install -r requirements.txt
#%pip install -q -U --pre triton
#%pip install -q diffusers[torch]==0.11.1 transformers==4.26.0 bitsandbytes==0.35.4
#decord accelerate omegaconf einops ftfy gradio imageio-ffmpeg xformers

Whether I comment out the use of the requirements file or comment out the pip installations, neither option yields a consistent install of all the specified packages.

These dependency conflicts result in the following issue when using the test script:

20:49:24.079709: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
WARNING The following values were not passed to accelerate launch and

No other information is given, but the result directory is empty after this happens. What is confusing to me is that neither TensorRT nor TensorFlow appears in the requirements file.

How can I get these notebooks working and reproduce these results?

Stable Diffusion 3 for FateZero

Hi! I'm impressed by your excellent work, FateZero.

Stable Diffusion 3 emerged a few weeks ago, and I wonder if it can be used for FateZero to further improve performance. I have tried it but failed.

Could you provide some help? Thanks a lot!

Where is the Spatial-Temporal Self-Attention?

Thanks for your work! I wonder where you implemented the Spatial-Temporal Self-Attention. In Algorithms 1 and 2 of the appendix, only attention fusing is shown. I am also puzzled about whether attention fusing conflicts with Spatial-Temporal Self-Attention: if attention fusing is performed after Spatial-Temporal Self-Attention, it seems equivalent to not having performed Spatial-Temporal Self-Attention at all.
