mlnt's People

Contributors

lijunnan1992, tverous


mlnt's Issues

GPU runs out of memory with --batch_num=32

Could you please help me out?

When I use --batch_num=32, I cannot run the code on a single GPU (mine is a Tesla P100-SXM2 16G); I can only run it with --batch_num=1. Since the inner loop calls torch.autograd.grad with create_graph=True and retain_graph=True, and M=10, GPU memory gets allocated too fast. It is also very tricky to run MAML on multiple GPUs.

I am wondering: in your implementation, what was the batch size, and how did you address the growing GPU memory allocation in the inner loop? Did you use multiple GPUs or a single GPU?

Thanks a lot.
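Not an answer from the authors, but one standard way to flatten the memory growth across the M inner steps is a first-order approximation: dropping create_graph=True means no graph is retained from one inner step to the next, at the cost of ignoring second-order terms. A minimal sketch with a toy model (not the repository's actual code):

```python
import torch

# Toy model and batch to illustrate the memory difference in the inner loop.
model = torch.nn.Linear(8, 2)
x, y = torch.randn(4, 8), torch.randint(0, 2, (4,))

# Second-order (memory-hungry): create_graph=True keeps the full graph so
# gradients of gradients can flow; memory grows with each inner step.
loss = torch.nn.functional.cross_entropy(model(x), y)
grads_2nd = torch.autograd.grad(loss, model.parameters(), create_graph=True)

# First-order approximation (much cheaper): gradients come back detached,
# so memory stays flat across the M inner steps.
loss = torch.nn.functional.cross_entropy(model(x), y)
grads_1st = torch.autograd.grad(loss, model.parameters(), create_graph=False)

assert all(g.requires_grad for g in grads_2nd)
assert not any(g.requires_grad for g in grads_1st)
```

Whether the first-order variant preserves the paper's accuracy is an empirical question; reducing M or the batch size are the other obvious knobs.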

The true baseline should be Iterative training without Meta-learning?

Dear authors, your ideas are interesting and novel:

  1. Oracle/Mentor (consistency loss): to make the meta-test reliable, the teacher/mentor model should be reliable and robust to real noisy examples. Therefore, you apply iterative training and iterative data cleaning so that the meta-test consistency loss is a reliable optimisation oracle against real noise. (I suppose this should be the true baseline.)
  2. Unaffected by synthetic noise: meta-training sees synthetic noisy training examples. After training on them, meta-testing evaluates consistency with the oracle and aims to maximise it, i.e. to make the model unaffected by the synthetic noise. (I suppose this is the key meta-learning proposal.)

In this case, the baseline should be iterative training without meta-learning, i.e. without meta-learning on synthetic noisy examples.
It would be more interesting to see exactly how much the meta-learning proposal improves performance over this true baseline.

Could you please share something about this? Thanks so much.

After one iteration, training speed becomes very slow

[WeChat screenshot: 微信截图_20191107154400]
PyTorch version: 1.2.0
GPU: Tesla V100
CUDA: 10.0

I think your environment is PyTorch < 0.4.0, so I had to change the code in order to run it.
baseline.py has no problem, but main.py does.
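I cannot diagnose main.py from a screenshot, but a common cause of training getting progressively slower in PyTorch is keeping computation graphs alive across iterations, e.g. by accumulating loss tensors directly (or by retain_graph=True leaking graphs). A minimal sketch of the safe pattern with a toy model, offered as a guess rather than a confirmed fix:

```python
import torch

model = torch.nn.Linear(8, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(4, 8), torch.randint(0, 2, (4,))

running = 0.0
for _ in range(3):
    opt.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
    # Accumulate as a Python float. Writing `running += loss` instead would
    # keep every iteration's graph alive, making each epoch slower and
    # steadily increasing memory use.
    running += loss.item()

assert isinstance(running, float)
```

If the slowdown only appears after the first iterative-training round, checking for tensors carried over from round to round without .detach() would be my first step.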

I have two questions for your paper and code.

Dear Junnan Li,

I read your paper and was very impressed. I have two questions about your paper and code.
First, what is args.alpha in your code (main.py line 31)? I read your paper, but it does not seem to be described there. Could you tell me what this alpha is?
Second, how can I do the iterative learning? I could train for 1 epoch, but I couldn't do iterative learning for 3 epochs as in your paper. Could you help me reproduce your results?

Sorry to ask this of you when you are busy but I appreciate your help.
Thanks so much.

baseline

Your baseline model seems to involve no noise handling, so what is its role?

How can I run this code on CIFAR data?

Since the pretraining input size is 224 but the CIFAR image size is 32, directly using this code causes an output-size error. Could you please tell me how I can use CIFAR-10 data with this code?
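One workaround, assuming the mismatch is only the input resolution expected by the ImageNet-pretrained backbone: upsample the 32x32 CIFAR images to 224x224 before the forward pass. A minimal sketch on a fake batch (in practice this would live in the dataset transform, e.g. torchvision.transforms.Resize(224)):

```python
import torch
import torch.nn.functional as F

# A fake CIFAR-10 batch: 8 images, 3 channels, 32x32.
batch = torch.randn(8, 3, 32, 32)

# Upsample to the 224x224 resolution the pretrained model expects.
batch_224 = F.interpolate(batch, size=224, mode="bilinear", align_corners=False)

assert batch_224.shape == (8, 3, 224, 224)
```

The alternative is to swap in a backbone designed for 32x32 inputs (e.g. a CIFAR-style ResNet) and skip the ImageNet pretraining, which changes the experimental setup.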

How can I run the main code with my own dataset?

When I run the main code with my own dataset, I get this error:

line 116, in train
targets_fast[idx] = targets[neighbor[random.randint(1,num_neighbor)]]
IndexError: index 5 is out of bounds for dimension 0 with size 5

How should num_neighbor be selected? Is it based on batch_size, or on my dataset's labels?
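The traceback is consistent with Python's random.randint(a, b) being inclusive on both ends: if neighbor holds num_neighbor entries (valid indices 0..num_neighbor-1), then random.randint(1, num_neighbor) can return num_neighbor, which is out of range. A minimal illustration of the off-by-one (my reading of the error, not a verified patch for the repository, which may intentionally reserve index 0):

```python
import random

num_neighbor = 5
neighbor = list(range(num_neighbor))  # valid indices are 0..4

# random.randint(1, num_neighbor) can return 5 -> IndexError on neighbor[5].
samples = [random.randint(1, num_neighbor) for _ in range(1000)]
assert max(samples) == num_neighbor  # the out-of-range value does occur

# Sampling with random.randint(0, num_neighbor - 1) always stays in range.
for _ in range(1000):
    idx = random.randint(0, num_neighbor - 1)
    assert 0 <= idx < len(neighbor)
```

If the original code stores the sample itself at index 0 and its neighbors at 1..num_neighbor, the fix would instead be to make the neighbor array num_neighbor + 1 long.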

Can't detach views in-place.

RuntimeError: Can't detach views in-place. Use detach() instead
I have no idea why it can't detach views in-place. Is there any way to get around this problem? Thanks!

pytorch 1.3.1
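This error typically comes from calling the in-place detach_() on a tensor that is a view of another tensor, which recent PyTorch versions (including 1.3.1) forbid; the usual workaround is the out-of-place t = t.detach(). A general PyTorch note, not a patch verified against main.py:

```python
import torch

base = torch.randn(4, 4, requires_grad=True)
view = base[0]  # a view into `base`

# view.detach_() raises "Can't detach views in-place" on recent PyTorch;
# detach() returns a new tensor sharing storage but cut from the graph.
detached = view.detach()

assert not detached.requires_grad
assert view.requires_grad  # the original view is untouched
```

Grepping the code for detach_() calls and rebinding the result of detach() instead should clear the RuntimeError without changing behaviour.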
