omerbt / text2live

Official PyTorch implementation for "Text2LIVE: Text-Driven Layered Image and Video Editing" (ECCV 2022 Oral)

Home Page: https://text2live.github.io/

License: MIT License

Languages: Python 100.00%
Topics: eccv2022 image-editing text2live clip generative-model image-manipulation video-editing text-driven-editing single-image single-video

text2live's People

Contributors

dolev104, omerbt, rafailfridman

text2live's Issues

Inference script

Hi! Thanks for the great work.

Is there a script for inference? I'm curious to run your model on my data to change details of a video.

Regards,
Surya

What is the function of the class "Concat"?

Sorry, but I wonder: what is the function of the class "Concat" in Text2LIVE-main/models/backbone/common.py?
Thank you so much!

import numpy as np
import torch
import torch.nn as nn


class Concat(nn.Module):
    def __init__(self, dim, *args):
        super(Concat, self).__init__()
        self.dim = dim

        # register each branch module under a numeric name
        for idx, module in enumerate(args):
            self.add_module(str(idx), module)

    def forward(self, input):
        # run every branch on the same input
        inputs = []
        for module in self._modules.values():
            inputs.append(module(input))

        inputs_shapes2 = [x.shape[2] for x in inputs]
        inputs_shapes3 = [x.shape[3] for x in inputs]

        if np.all(np.array(inputs_shapes2) == min(inputs_shapes2)) and np.all(
            np.array(inputs_shapes3) == min(inputs_shapes3)
        ):
            # all branch outputs already share the same spatial size
            inputs_ = inputs
        else:
            # center-crop every branch output to the smallest height/width
            target_shape2 = min(inputs_shapes2)
            target_shape3 = min(inputs_shapes3)
            inputs_ = []
            for inp in inputs:
                diff2 = (inp.size(2) - target_shape2) // 2
                diff3 = (inp.size(3) - target_shape3) // 2
                inputs_.append(inp[:, :, diff2 : diff2 + target_shape2, diff3 : diff3 + target_shape3])

        # concatenate the (possibly cropped) branch outputs along self.dim
        return torch.cat(inputs_, dim=self.dim)
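In short, Concat runs each of the given sub-modules on the same input, center-crops all of their outputs to the smallest common spatial size, and concatenates them along dim (typically the channel dimension), so branches with slightly different output resolutions can still be merged. A minimal usage sketch, with made-up branch modules that are not taken from the repository:

import torch
import torch.nn as nn

# two hypothetical branches with different output channels and spatial sizes
branch_a = nn.Conv2d(3, 8, kernel_size=3)  # 32x32 input -> 8 x 30 x 30
branch_b = nn.Conv2d(3, 4, kernel_size=1)  # 32x32 input -> 4 x 32 x 32

merge = Concat(1, branch_a, branch_b)      # concatenate along the channel dimension

x = torch.randn(1, 3, 32, 32)
y = merge(x)
print(y.shape)  # torch.Size([1, 12, 30, 30]): the 32x32 output is center-cropped to 30x30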

More of a Question than Issue

So, for each image-text or video-text pair, Text2LIVE learns a GAN and can then essentially generate the desired output for that specific image.

My question is about generalization. If I learn a GAN model for the Giraffe-StainedGlass pair, would it generalize to other giraffe videos, or is it applicable only to the video on which the GAN was trained? And how practical is it to train a separate GAN for each image/attribute pair, e.g.,

Bear+Fire
Man+Smoke
Car+Fire
Coffee Cup + Latte art heart
etc.

HuggingFace Space Not Working

On the Hugging Face Space, all fields are disabled, so it doesn't work.

(attached screen recording: Screen-Recording-2023-01-27-at-1)

The HuggingFace Space is still linked from the README.md:

[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/weizmannscience/text2live)

It isn't working.

Output of PowerShell after running python train_image.py --example_config golden_horse.yaml:

Traceback (most recent call last):
  File "C:\Users\CodeSame\miniconda3\text2live\train_image.py", line 129, in <module>
    train_model(config)
  File "C:\Users\CodeSame\miniconda3\text2live\train_image.py", line 24, in train_model
    seed = np.random.randint(2 ** 32)
  File "mtrand.pyx", line 763, in numpy.random.mtrand.RandomState.randint
  File "_bounded_integers.pyx", line 1336, in numpy.random._bounded_integers._rand_int32
ValueError: high is out of bounds for int32

On my own, I was able to get the network running by changing line 24 of train_image.py from seed = np.random.randint(2 ** 32) to seed = np.random.randint(2 ** 31). But all the other errors still remain in the output. Could you specify the exact software versions and PC system requirements, if any? (A sketch of the seed workaround follows below.)
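For reference, a minimal sketch of the workaround described above. The first option is the reporter's change; the second is an alternative not mentioned in the thread that keeps the full range. On Windows, NumPy's legacy RandomState defaults to int32, which is why 2 ** 32 exceeds the allowed upper bound.

import numpy as np

# original line 24 of train_image.py (overflows the default int32 bound on Windows):
# seed = np.random.randint(2 ** 32)

# option 1: stay within the int32 range
seed = np.random.randint(2 ** 31)

# option 2: request a 64-bit dtype explicitly, keeping the full 2 ** 32 range
seed = np.random.randint(2 ** 32, dtype=np.int64)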

AssertionError: Torch not compiled with CUDA enabled

Well, I got past the TypeError and ran into this. I looked it up on Google, and the only solution I saw was to change the model to not use CUDA. It seems to be erroring out on the "background_mapping" line in video_dataset.py. (A quick diagnostic for the CUDA build is sketched after the traceback below.)

Traceback (most recent call last):
  File "T:\Tensorflow\Programs\depplearn\tools\Text2LIVE\train_video.py", line 102, in <module>
    train_model(config)
  File "T:\Tensorflow\Programs\depplearn\tools\Text2LIVE\train_video.py", line 28, in train_model
    dataset = AtlasDataset(config)
  File "T:\Tensorflow\Programs\depplearn\tools\Text2LIVE\datasets\video_dataset.py", line 32, in __init__
    self.original_video = load_video(
  File "C:\Users\Tha Killa\.conda\envs\text2live\lib\site-packages\torch\cuda\__init__.py", line 208, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
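Not a fix from the maintainers, but a quick diagnostic: if the snippet below prints False / None, the environment contains a CPU-only PyTorch build, and a CUDA-enabled wheel needs to be installed instead.

import torch

# True only if the installed torch wheel was built with CUDA support and a GPU is visible
print(torch.cuda.is_available())

# CUDA toolkit version the wheel was built against; None for CPU-only builds
print(torch.version.cuda)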

The generated videos have two backgrounds

Thanks for your excellent work.

I have trained the video generation model using the command "python train_video.py --example_config car-turn_winter.yaml", but there are two backgrounds overlapping in the full video. How can I correct this?

Videos_full_video.mp4

Python dependencies with specified version, not found by pip

The issue applies to torch and torchvision

pip install torch~=1.10.0

Defaulting to user installation because normal site-packages is not writeable
ERROR: Could not find a version that satisfies the requirement torch~=1.10.0 (from versions: 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1)
ERROR: No matching distribution found for torch~=1.10.0

pip install torchvision~=0.11.2

Defaulting to user installation because normal site-packages is not writeable
ERROR: Could not find a version that satisfies the requirement torchvision~=0.11.2 (from versions: 0.1.6, 0.1.7, 0.1.8, 0.1.9, 0.2.0, 0.2.1, 0.2.2, 0.2.2.post2, 0.2.2.post3, 0.12.0, 0.13.0, 0.13.1, 0.14.0, 0.14.1)
ERROR: No matching distribution found for torchvision~=0.11.2

possible solution

Would it be possible to switch to torch 1.11.0 and torchvision 0.12.0?
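A possible workaround, untested against this repository and assuming the pins come from requirements.txt: relax them to the nearest versions that are still published for your Python version, e.g.

pip install "torch~=1.11.0" "torchvision~=0.12.0"

Newer Python versions (3.10+) only have wheels for torch >= 1.11, which is the likely cause of the resolver errors above.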

Colab demo

Can you please add a Colab demo to test this?

ValueError: high is out of bounds for int32

Just installed to test, but it is giving me the error...
Traceback (most recent call last):
  File "T:\Tensorflow\Programs\depplearn\tools\Text2LIVE\train_image.py", line 129, in <module>
    train_model(config)
  File "T:\Tensorflow\Programs\depplearn\tools\Text2LIVE\train_image.py", line 24, in train_model
    seed = np.random.randint(2 ** 32)
  File "mtrand.pyx", line 763, in numpy.random.mtrand.RandomState.randint
  File "_bounded_integers.pyx", line 1336, in numpy.random._bounded_integers._rand_int32
ValueError: high is out of bounds for int32

Result is not as good as in the paper

Thank you for your great work!

I tried to train the cake example using the default config, but I see the eggs also get iced, while in the paper's result they do not. Can you give me some advice? This is my result.

Thank you!

(result image attached)

RuntimeError: CUDA out of memory

Running on Windows 10 with a 3060 Ti (8 GB).
Any clue what's going on? I've tried configuring the allocator with:
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:1024"
but I'm unsure where in the script to place this. I was putting it in train_model() in train_video.py. (A placement sketch follows the log below.)

Also unsure how to format this for readability on GitHub, but whatever.

Thanks for the help.

Model has 402945 params
  0%|          | 0/49 [00:00<?, ?it/s]
C:\Users\olive\anaconda3\envs\text2live\lib\site-packages\torch\functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ..\aten\src\ATen\native\TensorShape.cpp:2157.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
  0%|          | 0/3000 [00:00<?, ?it/s]
C:\Users\olive\anaconda3\envs\text2live\lib\site-packages\torch\nn\functional.py:3631: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  warnings.warn(
C:\Users\olive\anaconda3\envs\text2live\lib\site-packages\torch\nn\functional.py:3631: UserWarning: Default upsampling behavior when mode=bicubic is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  warnings.warn(
C:\Users\olive\anaconda3\envs\text2live\lib\site-packages\torch\nn\functional.py:3679: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
  warnings.warn(
  0%|          | 0/3000 [00:03<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\olive\Desktop\text2live\train_video.py", line 109, in <module>
    train_model(config)
  File "C:\Users\olive\Desktop\text2live\train_video.py", line 45, in train_model
    losses = criterion(outputs, inputs)
  File "C:\Users\olive\anaconda3\envs\text2live\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\olive\Desktop\text2live\util\atlas_loss.py", line 54, in forward
    losses["foreground"] = self.loss(outputs["foreground"], inputs)
  File "C:\Users\olive\anaconda3\envs\text2live\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\olive\Desktop\text2live\util\losses.py", line 92, in forward
    losses["loss_screen"] = self.calculate_clip_loss(all_outputs_greenscreen, self.target_greenscreen_e)
  File "C:\Users\olive\Desktop\text2live\util\losses.py", line 122, in calculate_clip_loss
    img_e = self.clip_extractor.get_image_embedding(img.unsqueeze(0))
  File "C:\Users\olive\Desktop\text2live\models\clip_extractor.py", line 100, in get_image_embedding
    image_embeds = self.encode_image(self.clip_normalize(views))
  File "C:\Users\olive\Desktop\text2live\models\clip_extractor.py", line 104, in encode_image
    return self.model.encode_image(x)
  File "C:\Users\olive\Desktop\text2live\CLIP\clip\model.py", line 388, in encode_image
    return self.visual(image.type(self.dtype))
  File "C:\Users\olive\anaconda3\envs\text2live\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\olive\Desktop\text2live\CLIP\clip\model.py", line 249, in forward
    x = self.transformer_first_blocks_forward(x)
  File "C:\Users\olive\Desktop\text2live\CLIP\clip\model.py", line 272, in transformer_first_blocks_forward
    x = self.transformer.resblocks[:-1](x)
  File "C:\Users\olive\anaconda3\envs\text2live\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\olive\anaconda3\envs\text2live\lib\site-packages\torch\nn\modules\container.py", line 141, in forward
    input = module(input)
  File "C:\Users\olive\anaconda3\envs\text2live\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\olive\Desktop\text2live\CLIP\clip\model.py", line 188, in forward
    x = x + self.mlp(self.ln_2(x))
  File "C:\Users\olive\anaconda3\envs\text2live\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\olive\anaconda3\envs\text2live\lib\site-packages\torch\nn\modules\container.py", line 141, in forward
    input = module(input)
  File "C:\Users\olive\anaconda3\envs\text2live\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\olive\Desktop\text2live\CLIP\clip\model.py", line 165, in forward
    return x * torch.sigmoid(1.702 * x)
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 8.00 GiB total capacity; 7.25 GiB already allocated; 0 bytes free; 7.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
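A placement sketch, not a guaranteed fix: the allocator reads PYTORCH_CUDA_ALLOC_CONF when CUDA memory is first allocated, so the safest place to set it is at the very top of train_video.py, before torch (or anything that imports torch) is loaded. If the model genuinely needs more than 8 GB, this only mitigates fragmentation and will not prevent the OOM.

# top of train_video.py (sketch; the smaller split size is illustrative)
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # import torch only after the environment variable is set
# ... rest of the original imports and training code ...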

Training script fails to run

Apparently PyTorch fails to allocate the required VRAM:

/home/maximilian/.local/lib/python3.10/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  ../aten/src/ATen/native/TensorShape.cpp:2228.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
Traceback (most recent call last):                                   
  File "/home/maximilian/Documents/git/Text2LIVE/train_video.py", line 102, in <module>
    train_model(config)
  File "/home/maximilian/Documents/git/Text2LIVE/train_video.py", line 28, in train_model
    dataset = AtlasDataset(config)
  File "/home/maximilian/Documents/git/Text2LIVE/datasets/video_dataset.py", line 57, in __init__
    self.background_reconstruction = reconstruct_video_layer(original_background_all_uvs, background_atlas_model)
  File "/home/maximilian/.local/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/maximilian/Documents/git/Text2LIVE/util/atlas_utils.py", line 143, in reconstruct_video_layer
    rgb = (atlas_model(uv_values[frame].reshape(-1, 2)) + 1) * 0.5
  File "/home/maximilian/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/maximilian/Documents/git/Text2LIVE/models/implicit_neural_networks.py", line 80, in forward
    x = F.relu(x)
  File "/home/maximilian/.local/lib/python3.10/site-packages/torch/nn/functional.py", line 1442, in relu
    result = torch.relu(input)
RuntimeError: CUDA out of memory. Tried to allocate 324.00 MiB (GPU 0; 5.78 GiB total capacity; 1.37 GiB already allocated; 250.06 MiB free; 1.65 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Could you provide the entire source code?

Thank you for your marvelous work!

I have seen the web version of your excellent demonstration.
Could you provide the entire source code, not only for inference but also for the other significant parts of the research process, such as the code for pre-training the model with VQGAN-CLIP, and so on?

Any help is appreciated.

Nothing happened after I modified log_images_freq in "image_config.yaml" and bootstrap_epoch in "golden_horse.yaml" and "ice_cake.yaml"

Thank you for your marvelous work.

However, I get no results when performing 'Run examples'.

Run examples
...
Video Editing
...
python train_video.py --example_config car-turn_winter.yaml

Image Editing
...
python train_image.py --example_config golden_horse.yaml

Intermediate results will be saved to results during optimization. The frequency of saving intermediate results is indicated in the log_images_freq flag of the configuration.

Nothing happened after I modified log_images_freq in "image_config.yaml" and bootstrap_epoch in "golden_horse.yaml" and "ice_cake.yaml".

python train_image.py --example_config golden_horse.yaml
python train_image.py --example_config ice_cake.yaml

Any help is appreciated.
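For reference, a minimal sketch of how the two flags mentioned above might look in the YAML configs; the layout and values here are illustrative guesses, not the repository defaults.

# image_config.yaml (illustrative)
log_images_freq: 100   # save intermediate results every 100 iterations

# golden_horse.yaml / ice_cake.yaml (illustrative)
bootstrap_epoch: 50    # hypothetical value for the bootstrapping phase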
