
genpromp's People


Forkers

cv-det wmeiqi sab148

genpromp's Issues

RuntimeError: CUDA out of memory.

Hello,

When I run

python main.py --function test --config configs/cub_stage2.yml --opt "{'test': {'load_token_path': 'ckpts/cub983/tokens/', 'load_unet_path': 'ckpts/cub983/unet/', 'save_log_path': 'ckpts/cub983/log.txt'}}"

I am encountering this error:
Traceback (most recent call last):
  File "/p/project/atmlaml/benassou1/ega/GenPromp/main.py", line 646, in <module>
    eval(args.function)(config)
  File "/p/project/atmlaml/benassou1/ega/GenPromp/main.py", line 300, in test
    noise_pred = unet(noisy_latents, timesteps, combine_embeddings).sample
  File "/p/software/juwelsbooster/stages/2023/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/p/project/atmlaml/benassou1/ega/GenPromp/sc_venv_template/venv/lib/python3.10/site-packages/diffusers/models/unet_2d_condition.py", line 615, in forward
    sample = upsample_block(
  File "/p/software/juwelsbooster/stages/2023/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/p/project/atmlaml/benassou1/ega/GenPromp/sc_venv_template/venv/lib/python3.10/site-packages/diffusers/models/unet_2d_blocks.py", line 1813, in forward
    hidden_states = attn(
  File "/p/software/juwelsbooster/stages/2023/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/p/project/atmlaml/benassou1/ega/GenPromp/sc_venv_template/venv/lib/python3.10/site-packages/diffusers/models/transformer_2d.py", line 265, in forward
    hidden_states = block(
  File "/p/software/juwelsbooster/stages/2023/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/p/project/atmlaml/benassou1/ega/GenPromp/sc_venv_template/venv/lib/python3.10/site-packages/diffusers/models/attention.py", line 321, in forward
    ff_output = self.ff(norm_hidden_states)
  File "/p/software/juwelsbooster/stages/2023/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/p/project/atmlaml/benassou1/ega/GenPromp/sc_venv_template/venv/lib/python3.10/site-packages/diffusers/models/attention.py", line 379, in forward
    hidden_states = module(hidden_states)
  File "/p/software/juwelsbooster/stages/2023/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/p/software/juwelsbooster/stages/2023/software/PyTorch/1.12.0-foss-2022a-CUDA-11.7/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 39.56 GiB total capacity; 7.06 GiB already allocated; 1.94 MiB free; 17.07 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I changed the batch size to 1, reduced the image size, and set max_split_size_mb, but it still does not work. Could you please help me fix this problem?
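
(For reference, a common set of mitigations for this class of error is sketched below. This is generic PyTorch advice rather than GenPromp-specific code; the variables in the last lines are the ones from the traceback above.)

import os

# PYTORCH_CUDA_ALLOC_CONF must be set before the first CUDA tensor is
# allocated, i.e. before the models are created.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

# Inference needs no autograd state; fp16 autocast roughly halves activation memory.
with torch.no_grad(), torch.autocast("cuda", dtype=torch.float16):
    noise_pred = unet(noisy_latents, timesteps, combine_embeddings).sample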

test visualization

Thanks for your great work and for sharing it.

How do you do the visualization of the attention activations?
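
(For reference, once a per-token cross-attention map has been extracted, a minimal overlay looks like the sketch below; the variable names are illustrative and this is not the repo's actual visualization code.)

import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt

def overlay_attention(image, attn, h, w):
    # image: (H, W, 3) array in [0, 1]; attn: flat (h*w,) cross-attention map
    # for one text token, e.g. averaged over heads and layers.
    attn = attn.reshape(1, 1, h, w).float()
    attn = F.interpolate(attn, size=image.shape[:2], mode="bilinear", align_corners=False)[0, 0]
    attn = (attn - attn.min()) / (attn.max() - attn.min() + 1e-8)  # normalize to [0, 1]
    plt.imshow(image)
    plt.imshow(attn.cpu().numpy(), cmap="jet", alpha=0.5)  # heatmap over the image
    plt.axis("off")
    plt.show()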

Thanks.

Google Drive link does not work

Thank you for your great work.
I want to use your pretrained weights, but the Google Drive link you provided does not work.
I also tried to download from Baidu, but the interface is all in Chinese and I failed to download the file.
Could you check the Google Drive link?

Thank you!

When running `train_unet`, why don't you use the pretrained token weights?

First and foremost, I'd like to express my profound gratitude for the outstanding paper and the code implementation. I have one point of curiosity.

When running train_unet, isn't it the case that the per-category token weights pretrained in train_token are not used?

In the code, train_unet is executed with split="train". Given this, just like when running train_token, wouldn't the initial weights of the concept_token in the text_encoder be initialized identically to the initial weights of the meta_token?

Since all parameters of the text_encoder are frozen during train_unet, wouldn't this mean that the unet is fine-tuned with the initial weights of both the meta_token and concept_token being the same?

The loss in Eq. (5) of the paper is depicted in the linked image, and it seems to use f* (the pretrained initial weights), hence my query.
[image: Eq. (5) from the paper]
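
(To make the question concrete: if the stage-1 tokens were meant to be reused, one would expect something like the sketch below to run before the text encoder is frozen. The per-category checkpoint layout here is a hypothetical illustration, not the repo's actual format.)

import os
import torch

def load_concept_tokens(text_encoder, tokenizer, cat2tokens, load_token_path):
    token_embeds = text_encoder.get_input_embeddings().weight.data
    for token in cat2tokens:
        concept_token = token['concept_token']
        concept_token_id = tokenizer.encode(concept_token, add_special_tokens=False)[0]
        # Hypothetical layout: one "<concept_token>.bin" file per category,
        # holding a {token: tensor} dict saved at the end of train_token.
        learned = torch.load(os.path.join(load_token_path, concept_token + '.bin'))
        token_embeds[concept_token_id] = learned[concept_token]
    return text_encoder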

Thank you always for your hard work.

The embeddings in the training process

Thanks for the great work!

I have some questions regarding the two types of embeddings, or tokens, mentioned in the paper.

Prior to the training process, the concept tokens are initialized using the meta tokens.
However, I would like to clarify what happens once the training commences.

Do the meta tokens remain static and not participate in the entire training process? Is it solely the concept tokens that are involved throughout the entire training process?
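
(One way to answer this empirically is to diff the embedding matrix around a single optimizer step; a minimal sketch, assuming the text_encoder from the repo's code:)

weight = text_encoder.get_input_embeddings().weight
before = weight.detach().clone()

# ... run one optimizer step of train_token here ...

# Rows with a nonzero delta are the ones being trained; if only the concept
# token ids show up, the meta tokens indeed stay static.
changed_rows = (weight.detach() - before).abs().sum(dim=-1).nonzero().flatten()
print(changed_rows)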

I'm confused about how to make only the concept token embedding learnable

Thank you for the excellent paper and the implemented code. I have a point of confusion. In Figure 3 of the paper, only v_r is colored in orange.

Does this mean that, among the embeddings for each word in "a photo of a [concept]", only the word embedding corresponding to the concept token is trainable?

However, I find the following part of your code confusing:

# in datasets/base.py
def init_embeddings(self, text_encoder):
    # Clone the current token embedding matrix.
    token_embeds = text_encoder.get_input_embeddings().weight.data.clone()
    for token in self.cat2tokens:
        # Look up the single-token ids of the meta and concept tokens.
        meta_token_id = self.tokenizer.encode(token['meta_token'], add_special_tokens=False)[0]
        concept_token_id = self.tokenizer.encode(token['concept_token'], add_special_tokens=False)[0]
        # Initialize each concept token with its meta token's embedding.
        token_embeds[concept_token_id] = token_embeds[meta_token_id]
    # Write the modified matrix back as a (trainable) nn.Parameter.
    text_encoder.get_input_embeddings().weight = torch.nn.Parameter(token_embeds)
    return text_encoder

This code sets token_embeds as trainable by making it a torch.nn.Parameter.
However, contrary to what is shown in Figure 3, this seems to make the entire token_embeds matrix trainable, not just the vector corresponding to the concept token.

Could you please clarify my confusion? Thank you very much.

P.S. If my understanding is correct, I believe the following lines of code would be necessary to make only the concept token vector trainable:

text_encoder.get_input_embeddings().weight.requires_grad = False
text_encoder.get_input_embeddings().weight[concept_token_id].requires_grad = True
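
(A note on the P.S.: requires_grad cannot be toggled on a slice of a Parameter, since indexing creates a new non-leaf tensor. A common workaround is to keep the whole matrix trainable and mask the gradient instead; the sketch below is a general PyTorch pattern, not necessarily what this repo intends.)

embeddings = text_encoder.get_input_embeddings()
grad_mask = torch.zeros_like(embeddings.weight)
grad_mask[concept_token_id] = 1.0  # only this row keeps its gradient
embeddings.weight.register_hook(lambda grad: grad * grad_mask)
# Caveat: use an optimizer without weight decay here, or the masked rows
# may still drift through the decay term.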

Questions about evaluation

Thanks for your great work! There is an issue during testing.
When using python main.py --function test --config configs/cub_stage2.yml --opt "{'test': {'load_token_path': 'ckpts/cub983/tokens/', 'load_unet_path': 'ckpts/cub983/unet/', 'save_log_path': 'ckpts/cub983/log.txt'}}" for evaluation, I found that self.step_store, self.attention_store, and self.attention_maps are all empty. Could you please tell me what is wrong?
Looking forward to your reply!

Clarification Needed on Model Selection Strategy Across Epochs

I am currently looking into the implementation details of the training process, particularly the model saving mechanism. In the if block on line 489, the model is saved at the end of every training epoch, but the method used to select the best model based on test/validation performance is unclear.

Could you clarify the criteria or algorithm used to identify the best epoch on the validation/test set? This would help in understanding the overall model selection strategy within the training loop.
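
(For comparison, a typical keep-the-best-epoch loop looks like the sketch below; the metric and helper functions are placeholders, since the repo's actual selection criterion is exactly what this issue asks about.)

best_score = float('-inf')
for epoch in range(num_epochs):
    train_one_epoch(model, train_loader)           # placeholder helpers
    score = evaluate(model, val_loader)             # e.g. localization accuracy on val
    if score > best_score:
        best_score = score
        torch.save(model.state_dict(), 'best.pth')  # keep only the best epoch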

Thank you for your assistance.

Google Drive download path is invalid.

I think your Google Drive download link is invalid.
Please check your README.md.
When I click the attached Google Drive download link, it only reloads the repo page.
Thank you :)
