Comments (8)

Thartvigsen commented on May 21, 2024

Hi @DUT-lujunyu, thanks for your interest in our work, and sorry for the delayed response.

Here are some answers:

  1. The annotated files include annotations from human experts, while the main toxigen file does not. The train file contains the annotations we collected first, which made it into the original paper submission; the test file contains the annotations collected afterwards (by the same annotators). Together, they comprise ~10k human-annotated samples (see the loading sketch after this list).
  2. Where are you getting the label column from in annotated_train.csv? I do not see it in the original dataset on huggingface.
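
For reference, something like this should pull both annotated splits together. This is just a sketch: the `annotated` config name is my assumption, so double-check it against the dataset page on the Hub.

```python
# Sketch only: load the ~10k human-annotated samples via the datasets library.
# The repo id and the "annotated" config name are assumptions based on this
# thread; adjust them if the Hub layout has changed.
from datasets import load_dataset

annotated = load_dataset("skg/toxigen-data", name="annotated")
train, test = annotated["train"], annotated["test"]
print(len(train), len(test))  # roughly 8960 and 940, per the files above
```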

DUT-lujunyu commented on May 21, 2024

Thanks for your detailed answers!
I downloaded annotated_train.csv from Hugging Face (https://huggingface.co/datasets/skg/toxigen-data/blob/main/annotated_train.csv) and got the data shown below. The "label" column does not seem to agree with the calculation method in the paper. So what does the label refer to?

[screenshot: sample rows of annotated_train.csv, including the label column]

Thartvigsen commented on May 21, 2024

Sorry for the slow response; this is a strange problem. The annotated_train.csv file indeed has that label field, but I don't see it when downloading the dataset through huggingface. I believe this label may indicate whether the original intention was to generate hate or non-hate for this instance.
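
One quick way to sanity-check that guess, using the file from the Hub link above (a sketch; adjust the path to wherever you saved the CSV):

```python
import pandas as pd

# Sketch: inspect the unexplained "label" column in the local CSV.
df = pd.read_csv("annotated_train.csv")
print(df["label"].value_counts())
# If the values split cleanly into two generation intents (hate vs. neutral),
# that would support the interpretation above.
```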

AmenRa commented on May 21, 2024

Hi @Thartvigsen,

I downloaded the dataset from HuggingFace.
However, this version of the dataset differs from the one described in the paper.

The paper reports a total of 274186 generated prompts.
However, the dataset available on HuggingFace contains 8960, 940, and 250951 prompts in annotated_train.csv, annotated_test.csv, and toxigen.csv, respectively.
Why is that? Am I missing something here?

Also, from your previous responses, I do not understand a few things:

  1. Which is the test set used in the paper?
  2. Are annotated_train.csv and annotated_test.csv also present in toxigen.csv?
  3. Which field of annotated_train.csv and annotated_test.csv should we consider the ground truth?

Could you clarify?

Thank you.

Thartvigsen commented on May 21, 2024

Hi @AmenRa thanks for your interest in our work!

I believe the 274k vs 260k discrepancy comes from duplicate removal, but the original resources were made unavailable, so I can't go back and check to be certain, unfortunately. A rough way to probe the duplication hypothesis is sketched just below.
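
A sketch (the name of the column holding the generated text in toxigen.csv is an assumption):

```python
import pandas as pd

# Sketch: estimate how much exact-duplicate removal could explain the gap
# between the paper's 274k and the ~251k rows in toxigen.csv.
df = pd.read_csv("toxigen.csv")
text_col = "generation"  # assumption: adjust to the actual text column name
print(len(df), df[text_col].nunique())  # total rows vs. unique generations
```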

  1. The original test set is the 940 annotations in annotated_test.csv.
  2. I don't believe annotated_train.csv and annotated_test.csv are present in toxigen.csv, though this can be double-checked by looking for overlap (second sketch below).
  3. We compute the ground truth by aggregating annotator toxicity scores into binary labels, as introduced in the Convert human scores to binary labels section of this notebook (first sketch below).
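
First, a minimal sketch of the binarization in point 3. The toxicity_ai/toxicity_human column names and the > 5.5 combined cutoff are assumptions on my part; please defer to the notebook for the exact rule.

```python
import pandas as pd

# Sketch: turn annotator toxicity scores into binary labels.
# Column names and the 5.5 threshold are assumptions; the authoritative
# rule is in the "Convert human scores to binary labels" notebook section.
df = pd.read_csv("annotated_test.csv")
df["is_toxic"] = (df["toxicity_ai"] + df["toxicity_human"]) > 5.5
print(df["is_toxic"].mean())  # fraction of the test set labeled toxic
```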

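Second, a quick overlap check for point 2 (a sketch; it assumes all three files expose the statement text under a shared "text" column, which may not hold):

```python
import pandas as pd

# Sketch: check whether annotated statements also appear in toxigen.csv.
# The "text" column name is an assumption; adjust per file if it differs.
annotated = pd.concat([pd.read_csv("annotated_train.csv"),
                       pd.read_csv("annotated_test.csv")])
full = pd.read_csv("toxigen.csv")
overlap = set(annotated["text"]) & set(full["text"])
print(f"{len(overlap)} of {len(annotated)} annotated statements appear in toxigen.csv")
```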

AmenRa commented on May 21, 2024

Thanks for the fast reply!
However, I am still a bit confused.

The paper reports "We selected 792 statements from TOXIGEN to include in our test set".
The shared test set, which you are telling me is the original one, comprises 940 samples.

Could you clarify?

Thanks.

Thartvigsen commented on May 21, 2024

This is a good question and I'm not sure. I don't have access to some of the original internal docs, so this confusion is likely irreducible for us both, though I will try to hunt it down. I suspect the root issue is that at the time of the original submission we had annotations for <1k samples, and by the time of paper acceptance we had annotations for ~10k samples, resulting in two versions of the dataset for which we conducted splits. The 792 may be an artifact of the original numbers rather than of the larger annotated set. The 8960-sample annotated_train.csv should include the annotations collected in the second wave post-submission, but this may also have affected the 792 count somehow.

from toxigen.

AmenRa commented on May 21, 2024

Ok, thanks!
