promix's Issues

loss: nan

Hi, I'm Youngjae Kim.
I'm running an experiment on custom data.
From a certain point on (e.g. Epoch 37, Iter 5), the loss keeps coming out as NaN.
Has this ever happened to you?
If so, how did you solve it?
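
A common first check when the loss turns NaN mid-training is to guard the training step: skip batches whose loss is already non-finite and clip gradients before the optimizer step. The sketch below is a generic PyTorch guard, not ProMix code; the model, optimizer, and criterion names are placeholders.

    import torch

    def guarded_step(model, optimizer, criterion, inputs, targets, max_grad_norm=5.0):
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        # If the loss is already non-finite, skip the update so one bad batch
        # does not poison the weights, and inspect that batch's inputs/labels.
        if not torch.isfinite(loss):
            print("non-finite loss, skipping batch")
            return None
        loss.backward()
        # Clipping often prevents the gradient blow-ups that precede NaN losses.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
        optimizer.step()
        return loss.item()

If the divergence always starts around the same epoch, lowering the learning rate or extending the warm-up is the other usual fix.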

confusion

Hi there,
May I ask where the noise .pt file is?

Do you think this method is generally applicable, i.e. to different classification tasks and different classification models?

I'm looking for a method that can handle the following problem:
labels that are wrong because of human error, or ambiguous samples that could reasonably be assigned to either class, so the resulting dataset is flawed from the start.

However, the differences between my classes may not be very pronounced. It's a fine-grained classification: not categories like dog vs. teacup, but an OK vs. NG judgment. A slight crack may be acceptable while a large crack is a defect; a small missing piece is fine while a large one is not. Can your method be applied to this kind of task? For a binary classification like mine, would a regression-style approach work better? I hope to hear back from you.

A parameter for adjusting the learning rate is missing from your code. After I added it, training ran, but Train_cifarn.py then reported that the corresponding labels were empty:

cifar10:clean_label | Epoch [ 10/ 40] Iter[ 1/782] Net1 loss: 0.95 Net2 loss: 1.22
Traceback (most recent call last):
File "E:\pycharmproject\weak_supervised\ProMix-main\ProMix-main\Train_cifarn.py", line 463, in
train(epoch, dualnet.net1, dualnet.net2, optimizer1, total_trainloader, unlabeled_trainloader)
File "E:\pycharmproject\weak_supervised\ProMix-main\ProMix-main\Train_cifarn.py", line 181, in train
loss_fmix = fmix.loss(logits_fmix, (pseudo_label_c.detach()).long())
File "E:\pycharmproject\weak_supervised\ProMix-main\ProMix-main\fmix.py", line 94, in loss
return fmix_loss(y_pred, y, self.index, self.lam, train, self.reformulate)
File "E:\pycharmproject\weak_supervised\ProMix-main\ProMix-main\fmix.py", line 46, in fmix_loss
y1 = y1.max(1)[1]
RuntimeError: cannot perform reduction function max on tensor with no elements because the operation does not have an identity
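
The RuntimeError fires because y1.max(1) is called on an empty tensor, i.e. no samples were routed to the FMix term in that batch. A minimal workaround, assuming the empty selection is legitimate rather than a label-loading bug, is to skip the term when the pseudo-label tensor is empty; the wrapper below is only a sketch around the call shown in the traceback, not the repository's code.

    import torch

    def safe_fmix_loss(fmix, logits_fmix, pseudo_label_c):
        # pseudo_label_c can be empty when no sample passes the selection
        # threshold for a batch; fmix.loss would then crash inside max(1).
        if pseudo_label_c.numel() == 0:
            return torch.zeros((), device=logits_fmix.device)
        return fmix.loss(logits_fmix, pseudo_label_c.detach().long())

If the selection stays empty for many batches in a row, the added learning-rate parameter or the label loading is more likely the real problem than the reduction itself.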

strong vs. weak augmentation ambiguity

In Train_promix.py:

        # inputs_x: strong augmentation
        # inputs_x2: weak augmentation

The code comments and the actual code behavior are exactly opposite. Which one is correct? Is the comment wrong?
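
For reference, the usual FixMatch-style convention is to compute the pseudo-label target from the weak view and apply the consistency loss to the strong view, so a dataloader typically returns the pair in (weak, strong) order. The sketch below is illustrative only, using torchvision transforms rather than the repository's pipeline; whichever order the training script actually uses, the comments should be updated to match it.

    from torchvision import transforms

    # Placeholder weak/strong policies for 32x32 images (illustrative only).
    weak_aug = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomCrop(32, padding=4),
        transforms.ToTensor(),
    ])
    strong_aug = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomCrop(32, padding=4),
        transforms.RandAugment(),  # heavier, policy-based distortions
        transforms.ToTensor(),
    ])

    class TwoViewDataset:
        """Wraps a PIL-image dataset and returns (weak_view, strong_view, label)."""
        def __init__(self, base):
            self.base = base

        def __len__(self):
            return len(self.base)

        def __getitem__(self, idx):
            img, label = self.base[idx]
            return weak_aug(img), strong_aug(img), label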
