Update 2022.03.26: We're close to launch! Hackathon is scheduled for April 2nd - April 15th. We're excited to have you!
Huggingface's BigScience🌸 initiative is an open scientific collaboration of nearly 600 researchers from 50 countries and 250 institutions who collaborate on various projects within the natural language processing (NLP) space to broaden the accessibility of language datasets while working on challenging scientific questions around language modeling.
We are running a Biomedical Datasets hackathon to centralize many NLP datasets in the biological and medical space. Biological data is diverse, so a unified location that joins multiple sources while preserving the data closest to its original form can greatly improve accessibility.
Our goal is to enable easy programmatic access to these datasets using Huggingface's (🤗) `datasets` library. To do this, we propose a unified schema for dataset extraction, with the intention of implementing as many biomedical datasets as possible to enable reproducibility in data processing.
There are two broad licensing categories for biomedical datasets:
We will accept data-loading scripts for either type; please see the FAQs for more explicit details on what we propose.
Biomedical language data is highly specialized, requiring expert curation and annotation. Many great initiatives have created different language datasets across a variety of biological domains. A centralized source that allows users to access relevant information reproducibly greatly increases the accessibility of these datasets and promotes research.
Our unified schema allows researchers and practitioners to access the same type of information across a variety of datasets with fixed keys. This enables researchers to iterate quickly and write scripts without worrying about pre-processing nuances specific to a dataset.
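To illustrate the idea of fixed keys, here is a sketch of two records from different source datasets normalized to the same KB-style structure. The field names below are illustrative only, not the authoritative `bigbio` schema:

```python
# Illustrative sketch: two records from different source datasets,
# normalized to the same fixed keys (hypothetical field names).
record_a = {
    "id": "pubmed-001",
    "passages": [{"text": "BRCA1 is a tumor suppressor gene."}],
    "entities": [{"text": "BRCA1", "type": "Gene"}],
}
record_b = {
    "id": "clinical-042",
    "passages": [{"text": "Patient presented with hypertension."}],
    "entities": [{"text": "hypertension", "type": "Disease"}],
}

# Downstream code can use the same keys for any dataset,
# without per-dataset pre-processing logic.
def entity_types(record):
    return [e["type"] for e in record["entities"]]

for rec in (record_a, record_b):
    print(rec["id"], entity_types(rec))
```

Because the keys are fixed, the same `entity_types` helper works unchanged on both records.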
To be considered a contributor, participants must have an accepted data-loading script in the bigscience-biomedical collection for at least 3 datasets.
Explicit instructions are found in the next section, but the steps for getting a data-loading script accepted are as follows:
- Fork this repo and write a data-loading script in a new branch
- PR your branch back to this repo and ping the admins
- An admin will review and approve your PR or ping you for changes
Details for contributor acknowledgements and rewards can be found here
There are two ways to choose a dataset to implement; you may pick either, but we recommend Option A.
Option A: Assign yourself a dataset from our curated list
- Choose a dataset from the list of Biomedical datasets.
- Assign yourself an issue by clicking the dataset in the project list, and comment `#self-assign` under the issue. Please only assign yourself to issues with no other collaborators assigned. You should see your GitHub username associated with the issue within 1-2 minutes of making a comment.
- Search to see if the dataset exists in the 🤗 Hub. If it exists, please use the current implementation as the source and focus on implementing the task-specific `bigbio` schema.
Option B: Implement a new dataset not on the list
If you have a biomedical or clinical dataset you would like to propose for this collection, you are welcome to make a new issue. Choose Add Dataset and fill out the relevant information. Make sure that your dataset does not already exist in the 🤗 Hub.
If an admin approves it, then you are welcome to implement this dataset and it will count toward contribution credit.
Check out our step-by-step guide to implementing a dataloader with the bigbio schema.
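The heart of any dataloader script is the step that yields `(key, example)` pairs from the raw files. As a rough sketch, assuming a hypothetical tab-separated input format (one `doc_id<TAB>text` line per document; real datasets each need their own parsing logic, and a real script should follow the step-by-step guide and template):

```python
# Hypothetical sketch of the example-generation step of a dataloader.
# The input format here is invented for illustration; each real dataset
# has its own files and parsing rules.
def generate_examples(lines):
    for key, line in enumerate(lines):
        doc_id, text = line.rstrip("\n").split("\t", 1)
        # Yield a unique key plus a dict whose fields match the schema.
        yield key, {"id": doc_id, "document_id": doc_id, "text": text}

sample = ["d1\tBRCA1 is a gene.", "d2\tAspirin is a drug."]
examples = dict(generate_examples(sample))
```

In an actual `datasets` loading script, this logic lives in the builder's `_generate_examples` method, and the yielded dicts must match the declared features.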
Please do not upload the data directly; if you have a specific question or request, reach out to an admin.
We welcome contributions from a wide variety of backgrounds; we are more than happy to guide you through the process. For instructions on how to get involved or ask for help, check out the following options:
Please join the BigScience initiative here; there is a Google Form to fill out to gain access to the biomedical working group Slack. Once you have filled out this form, you'll get access to BigScience's Google Drive. There is a document where you can add your name next to a working group; be sure to add your name to the "Biomedical" group.
Alternatively, you can ping us on the Biomedical Discord Server. The Discord server can be used to share information quickly or ask code-related questions.
For quick questions and clarifications, you can make an issue via Github.
You are welcome to use any of the above resources as necessary.
We understand that some biomedical datasets require external licensing. To respect the license agreement, we recommend implementing a dataloader script that works if the user has a locally downloaded copy of the data. You can find an example here and follow the local/private dataset-specific instructions in the template.
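For such gated datasets, the dataloader typically expects the user to point it at their locally downloaded copy and fails with a clear message otherwise. A minimal sketch of that check (the expected file name and error messages below are illustrative, not part of any real dataset):

```python
import os

# Sketch: resolve a user-supplied data directory for a dataset that
# requires a manual download. The expected file name is hypothetical.
def resolve_local_file(data_dir, expected_file="corpus.zip"):
    if data_dir is None:
        raise ValueError(
            "This dataset requires a manual download; "
            "please pass the directory containing the downloaded files."
        )
    path = os.path.join(data_dir, expected_file)
    if not os.path.exists(path):
        raise FileNotFoundError(f"Expected {expected_file} in {data_dir}")
    return path
```

Failing early with an actionable message spares users a confusing stack trace deep inside the extraction step.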
Eventually, your dataloader script will need to run using only the dependencies supplied by the `datasets` package. If you find a well-supported package that makes your implementation easier (e.g. `bioc`), feel free to use it.
We will address the specifics during review of your PR to the BigScience biomedical repo and find a way to make it usable in the final submission to the huggingface bigscience-biomedical collection.
No. Please do not upload your dataset directly. This is not the goal of the hackathon, and many datasets have external licensing agreements. If the dataset is public (i.e. can be downloaded without credentials or a signed data user agreement), include a downloading component in your dataset loader script. Otherwise, include only an "extraction from local files" component in your dataset loader script. You can see examples of both in the examples directory. If you have a custom dataset you would like to submit, please make an issue and an admin will get back to you.
In some cases, a single dataset will support multiple tasks with different bigbio schemas. For example, the `muchmore` dataset can be used for a translation task (supported by the Text to Text (T2T) schema) and a named entity recognition task (supported by the Knowledge Base (KB) schema). In this case, please implement one config for each supported schema and name the config `<datasetname>_bigbio_<schema>`. In the `muchmore` example, this would mean one config called `muchmore_bigbio_t2t` and one config called `muchmore_bigbio_kb`.
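The naming convention can be expressed as a tiny helper (a sketch for illustration; in a real loading script the config names are declared on the builder's config objects rather than computed like this):

```python
def bigbio_config_name(dataset_name: str, schema: str) -> str:
    """Build a config name following the <datasetname>_bigbio_<schema> convention."""
    return f"{dataset_name}_bigbio_{schema}"

# The muchmore example from above:
print(bigbio_config_name("muchmore", "t2t"))  # muchmore_bigbio_t2t
print(bigbio_config_name("muchmore", "kb"))   # muchmore_bigbio_kb
```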
Full details on how to handle offsets and text in the bigbio kb schema can be found in the schema documentation.
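A common pitfall is emitting entity offsets that do not actually match the text. A quick sanity check looks like the sketch below, which assumes offsets are simple character indices into the document text; consult the schema documentation for the authoritative offset rules (e.g. how multi-span entities and passage boundaries are handled):

```python
# Sketch: verify that each entity's offsets slice out its surface text.
# Assumes offsets are [start, end) character indices into the full text;
# see the bigbio schema documentation for the authoritative definition.
def check_offsets(text, entities):
    mismatches = []
    for ent in entities:
        start, end = ent["offsets"]
        if text[start:end] != ent["text"]:
            mismatches.append(ent)
    return mismatches

doc = "BRCA1 is a tumor suppressor gene."
ents = [{"text": "BRCA1", "offsets": [0, 5]}]
assert check_offsets(doc, ents) == []  # offsets agree with the text
```

Running a check like this inside your tests catches off-by-one errors before review.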
Yes! Please join the hack-a-thon Biomedical Discord Server and ask for help.
Yes! Some datasets are easier to write dataloader scripts for than others. If you find yourself working on a dataset that you cannot make progress on, please leave a comment in the associated issue, ask to be unassigned from the issue, and start the search for a new unclaimed dataset.
We greatly appreciate your help!
The artifacts of this hackathon will be described in a forthcoming academic paper targeting a machine learning or NLP audience. Implementing 3 or more dataset loaders will guarantee authorship. We recognize that some datasets require more effort than others, so please reach out if you have questions. Our goal is to be inclusive with credit!
This hackathon guide was heavily inspired by the BigScience Datasets Hackathon.