
Comments (6)

gkrasin commented on July 22, 2024

Hey @pengpaiSH

If I understand you correctly, you're asking if we plan to use the raters to find the false negatives.

A definite answer is hard to give, as I am asked to avoid any forward-looking statements, but in general I think there are better and cheaper ways to find false negatives than just using raters. In particular, a promising idea would be to train a classifier on the released annotations (we have one coming, see #3) and use it to annotate the validation set. Then, if the classifier assigns a label (even with relatively small confidence, such as 0.2) to an image that does not yet have any information about that label, it is a good candidate for sending to the raters to verify. That lets us review only a small subset of the possible image-label pairs (of which there are potentially num_images * num_labels).
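A minimal sketch of that mining idea, in case it helps. The function name, the annotation format, and the 0.2 threshold are illustrative assumptions, not the actual Open Images tooling:

```python
# Sketch only: mine candidate false negatives for human verification.
# The data structures here are assumptions for illustration.

def mine_false_negative_candidates(images, annotations, classifier, threshold=0.2):
    """Yield (image_id, label) pairs worth sending to human raters.

    images: dict mapping image_id -> image data.
    annotations: dict mapping image_id -> set of labels that already
        have a recorded verdict (positive or negative) for that image.
    classifier: callable returning {label: confidence} for an image.
    """
    for image_id, image in images.items():
        known = annotations.get(image_id, set())
        for label, confidence in classifier(image).items():
            # Only pairs with no existing verdict and at least modest
            # classifier confidence go to raters -- a small subset of
            # the num_images * num_labels possibilities.
            if label not in known and confidence >= threshold:
                yield image_id, label
```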

There are certainly even better ideas, and I am looking forward to seeing them come from the research community. One of the goals of the released dataset is to spur the development of better techniques for semi-automated data cleaning: the state of the art there is not as mature as it is for training models, even though real-world datasets are pretty noisy.


pengpaiSH commented on July 22, 2024

@gkrasin Thank you for your quick response! I think you might have missed my point. Let me give a toy example. Say an image I has ground-truth labels "dog, girl, beach, sea, sky" (you can imagine the beautiful scene), and the visual classifier (as you mentioned above) predicts the labels "dog, boy, beach, sea, sky" with unnormalized scores "0.9, 0.2, 0.9, 0.9, 0.9". Then, according to your suggestion above, a human rater will receive the image-label pair (I, boy) and will judge it to be a false-positive prediction by the classifier, right? Thus, the label list becomes "dog, beach, sea, sky" ("boy" removed by the rater). Now my question: will the rater add "girl" to the final label file published to us? Or is the final label set for this image "dog, beach, sea, sky", even though it is claimed to be verified by human raters?
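To make the toy example concrete, here is a hypothetical walk-through; `rater_verdict` and the variable names are made up for illustration and are not part of any released pipeline:

```python
# Classifier output for image I (labels with unnormalized scores).
predicted = {"dog": 0.9, "boy": 0.2, "beach": 0.9, "sea": 0.9, "sky": 0.9}
# What is really in the image (unknown to the pipeline).
true_scene = {"dog", "girl", "beach", "sea", "sky"}

def rater_verdict(label):
    # A rater only answers yes/no for the label shown to them;
    # they are never asked to propose a replacement.
    return label in true_scene

verified = {label for label in predicted if rater_verdict(label)}
print(sorted(verified))  # ['beach', 'dog', 'sea', 'sky'] -- "girl" was never asked about
```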


gkrasin commented on July 22, 2024

@pengpaiSH I believe we're talking about the same thing, just in different words (sorry). You're correct: the current state is "dog, beach, sea, sky", with "boy" removed and "girl" not added.

We didn't ask raters to suggest what should be added instead of "boy" because, unlike the boy/girl case, in real cases there is a significantly larger number of possible choices (think dog breeds or butterfly species), where it's not always easy to make a good guess unless the rater is super-tuned for that particular subset of the knowledge graph.

What I was trying to say is that we would rather have a program suggest that "girl" should be added to the list of labels, and then ask our raters to verify it. This way we keep the complexity of each question to raters uniform (assuming the raters are familiar with the categories they are asked about), which increases overall throughput.

Does that sound reasonable to you?


pengpaiSH commented on July 22, 2024

@gkrasin Thank you for your reply, again! I totally agree that verifying false-positive image-label pairs significantly reduces human labor and is much more feasible than asking raters to fill in all the missing labels. Let me share my motivation for opening this issue: if I develop a new model and want to compare its performance with Inception-ResNet-V3, the experimental results will be unfair if the ground truth is "dog, beach, sea, sky". What if my model predicts "dog, girl, beach, sea, sky"? My predicted label "girl" will then be counted as an incorrect prediction, right?
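A toy calculation of that unfairness, assuming a simple per-image precision metric (the metric choice and numbers are illustrative, not the benchmark's actual protocol):

```python
# Incomplete ground truth penalizes a correct prediction.
ground_truth = {"dog", "beach", "sea", "sky"}            # "girl" is missing
model_prediction = {"dog", "girl", "beach", "sea", "sky"}

true_positives = model_prediction & ground_truth
precision = len(true_positives) / len(model_prediction)
print(precision)  # 0.8 -- "girl" counts against the model despite being correct
```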


gkrasin commented on July 22, 2024

> My predicted label "girl" will then be counted as an incorrect prediction, right?

That's right. And that's why I believe that (semi-automatic) annotation cleanup is the most important thing that needs to happen to the dataset before it becomes truly useful.


pengpaiSH commented on July 22, 2024

@gkrasin Thanks for your time, again! I have learned a lot from this discussion!

