
deep-co-training-for-semi-supervised-image-recognition's People

Contributors

alanchou, leftice

deep-co-training-for-semi-supervised-image-recognition's Issues

lower performance

Hey Alan,
Thanks for your great work.
I ran your code on Python 3.6 (you used 3.5, but I don't think that's a problem), and I could only get accuracy around 88%. I also tried adding .detach() to the perturbed example when creating the pseudo label, but the result is still below 89%.
I am wondering whether there are any other tricks needed to reach 89% accuracy?
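For what it's worth, this is roughly the change I tried (a sketch of my own modification with hypothetical names, not code from this repo):

def make_pseudo_label(model, x_perturbed):
    # Detach the perturbed example so that creating the pseudo label
    # does not backpropagate into the perturbation graph.
    logits = model(x_perturbed.detach())
    return logits.argmax(dim=1)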
Thanks again for your work.

Cheers

question about batch size

Hi,

I am reading your code, which is a great implementation.
But I have a question: in the following code, why is batch_size set to 100 for the testloader but to 1 for the trainloader?

testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size, shuffle=True, num_workers=2)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=1, shuffle=False, num_workers=2)
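For reference, here is a minimal sketch (using the standard torchvision CIFAR-10 datasets, not necessarily the exact setup in this repo) of what each batch_size setting yields per iteration:

import torch
import torchvision
import torchvision.transforms as transforms

transform = transforms.ToTensor()
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)

trainloader = torch.utils.data.DataLoader(trainset, batch_size=1, shuffle=False, num_workers=2)
testloader = torch.utils.data.DataLoader(testset, batch_size=100, shuffle=True, num_workers=2)

images, labels = next(iter(trainloader))  # images.shape == (1, 3, 32, 32): one image per step
images, labels = next(iter(testloader))   # images.shape == (100, 3, 32, 32): 100 images per step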

details on updating parameters

Hi Zhou,
Very happy to see your repo. I noticed something and want to ask about the details.

The main idea of the adversarial loss for the two views is to make a hard example for model1 easy for model2, so we minimize the loss KL( predict_model2(adversarial image targeted at model1) || predict_model1(image) ). If the two models were the same, predict_model1(image) should perform better than predict_model2(adversarial image targeted at model1). I am wondering why there is no .detach() operation on the target tensor. Without detach(), gradients also flow through the target branch, which would reduce the performance on the easy example.
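To make the question concrete, here is a minimal sketch (my own illustration with hypothetical names, not the code from this repo) of the KL term I am describing, with an optional detach on the target:

import torch.nn.functional as F

def view_difference_loss(model1, model2, x, x_adv1, detach_target=True):
    # Target: model1's prediction on the clean image (the "easy" example).
    p1_clean = F.softmax(model1(x), dim=1)
    if detach_target:
        # Treat the easy prediction purely as a target; no gradient flows
        # back into model1 through this branch.
        p1_clean = p1_clean.detach()
    # model2's prediction on the adversarial image targeted at model1.
    log_p2_adv = F.log_softmax(model2(x_adv1), dim=1)
    # KL( p1_clean || p2_adv ), averaged over the batch.
    return F.kl_div(log_p2_adv, p1_clean, reduction='batchmean')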

need normalization?

I noticed that data normalization is not used (the code is commented out) when using CIFAR-10. Is it needed?
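For reference, this is the kind of normalization I have in mind (a sketch using commonly cited CIFAR-10 channel statistics, not values taken from this repo):

import torchvision.transforms as transforms

normalize = transforms.Normalize(mean=(0.4914, 0.4822, 0.4465),
                                 std=(0.2470, 0.2435, 0.2616))
transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    normalize,  # the step that is commented out in the repo
])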
