
This project is a fork of huggingface/olm-datasets.


Pipeline for pulling and processing online language model pretraining data from the web

License: Apache License 2.0


Online Language Modelling Dataset Pipeline

This repo enables you to pull a large and up-to-date text corpus from the web. It uses state-of-the-art processing methods to produce a clean text dataset that you can immediately use to pretrain a large language model like BERT, GPT, or BLOOM. The main use case for this repo is the Online Language Modelling Project, where we want to keep a language model up to date by pretraining it on the latest Common Crawl and Wikipedia dumps every month or so. You can see the models for the OLM project here: https://huggingface.co/olm. They actually get better performance than their original static counterparts.

Specifically, this repo has modular Python commands that enable you to do the following (a rough sketch of these stages in code follows the list):

  • Pull data from specified Common Crawl web snapshots or Wikipedia snapshots.
  • Filter the data for a particular language, like English or French.
  • Run the OSCAR filters used by BigScience for the BLOOM language model. These filters enforce a baseline of text quality and reduce pornographic content.
  • Deduplicate the data.
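
The repo's actual entry points live under pipeline_scripts/, but the stages above can be sketched with the Hugging Face datasets library. Everything below is a toy illustration with assumed column names and placeholder heuristics, not the pipeline's real filters:

    import hashlib

    from datasets import Dataset

    # Toy corpus standing in for a pulled Common Crawl or Wikipedia snapshot.
    dataset = Dataset.from_dict({"text": [
        "The quick brown fox jumps over the lazy dog and keeps running all day.",
        "The quick brown fox jumps over the lazy dog and keeps running all day.",
        "too short",
    ]})

    # Language filtering. The real pipeline uses a proper language-ID model;
    # this ASCII check is a placeholder heuristic only.
    dataset = dataset.filter(lambda ex: ex["text"].isascii())

    # Quality filtering, standing in for the OSCAR/BigScience-style filters.
    dataset = dataset.filter(lambda ex: len(ex["text"].split()) > 5)

    # Exact deduplication by hashing each document's text. Run single-process
    # so the `seen` set stays consistent across examples.
    seen = set()

    def unseen(example):
        digest = hashlib.sha1(example["text"].encode("utf-8")).hexdigest()
        if digest in seen:
            return False
        seen.add(digest)
        return True

    dataset = dataset.filter(unseen)
    print(dataset.num_rows)  # 1: the duplicate and the short doc are dropped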

This code is also fairly parallelized, although it can certainly be improved further. It can process over a terabyte of Common Crawl data in a day or two, and all of English Wikipedia in less than an hour (the parallelism pattern is sketched after this list), provided you have:

  • A machine with a lot of CPUs and memory.
  • A fast internet connection.
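
Much of the parallelism comes down to fanning dataset operations out across CPU cores. Here is a minimal sketch of that pattern with the Hugging Face datasets library; the toy data and the num_proc value are assumptions, not the pipeline's actual settings:

    import os

    from datasets import Dataset

    # Toy dataset; the real pipeline operates on millions of web documents.
    dataset = Dataset.from_dict({"text": ["hello world"] * 1000})

    # Fan the map out across every available CPU core.
    dataset = dataset.map(
        lambda ex: {"num_words": len(ex["text"].split())},
        num_proc=os.cpu_count(),
    )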

Setup

  1. If you want to use this repo to generate a decent amount of data, get a machine with lots of CPUs and memory. We use an n2d-standard-224 running Ubuntu 20.04 LTS on GCP. Add terabytes of disk space too. You may need an even larger machine if you want to process close to 100% of a Common Crawl snapshot or several snapshots, particularly because of how much memory the deduplication process uses. Alternatively, you can specify in the deduplication arguments that you want to deduplicate the dataset in chunks, so memory usage stays bounded.
  2. Clone with submodules: git clone --recursive git@github.com:huggingface/olm-datasets.git
  3. Install cargo (the Rust package manager) with curl https://sh.rustup.rs -sSf | sh. Then install Ungoliant with cargo install ungoliant. You may need to install gcc and cmake first.
  4. Set up a Python 3.9 environment, and run pip install -r requirements.txt
  5. Run huggingface-cli login. This CLI should have been installed from requirements.txt. To log in, you need to paste a token from your account at https://huggingface.co. This step is necessary for the pipeline to push the generated datasets to your Hugging Face account.
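
If you prefer to authenticate from Python rather than the CLI, the huggingface_hub library (installed via requirements.txt) offers an equivalent login flow. A minimal sketch; the printed field assumes the token is valid:

    from huggingface_hub import HfApi, login

    # Prompts for the same access token that huggingface-cli login uses
    # and caches it locally for later pushes to the Hub.
    login()

    # Sanity-check that the cached token works.
    print(HfApi().whoami()["name"])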

Getting a clean and up-to-date Common Crawl corpus

Follow the instructions at pipeline_scripts/common_crawl.

Here is the output dataset to expect from a 20% random segment sample of the August 2022 Common Crawl Snapshot: https://huggingface.co/datasets/Tristan/olm-CC-MAIN-2022-33-sampling-ratio-0.20
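
Once published, a corpus like this can be pulled straight into a training job with the datasets library. A minimal sketch, assuming the dataset has a train split with a text column:

    from datasets import load_dataset

    # Download the published OLM Common Crawl corpus from the Hub.
    dataset = load_dataset(
        "Tristan/olm-CC-MAIN-2022-33-sampling-ratio-0.20", split="train"
    )

    print(dataset.num_rows)
    print(dataset[0]["text"][:200])  # first document, truncated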

Getting a clean and up-to-date Wikipedia corpus

Follow the instructions at pipeline_scripts/wikipedia.

Here is the output dataset to expect from a September 2022 snapshot of Wikipedia: https://huggingface.co/datasets/Tristan/olm-wikipedia-20220920
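
For corpora this size you may not want to download everything up front. The datasets library can also stream records on demand; a minimal sketch, again assuming a train split with a text column:

    from datasets import load_dataset

    # Stream the Wikipedia corpus rather than downloading it all at once.
    stream = load_dataset(
        "Tristan/olm-wikipedia-20220920", split="train", streaming=True
    )

    for i, example in enumerate(stream):
        print(example["text"][:80])
        if i == 2:  # peek at the first few articles only
            break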

Analyzing the corpora

Follow the instructions at analysis_scripts.

Here is a tweet thread which utilizes these scripts: https://twitter.com/TristanThrush/status/1582356055794733057

Here is another tweet thread that dives a little deeper: https://twitter.com/TristanThrush/status/1588156731909029889

And here is a colab where you can quickly run some of the analysis yourself! https://colab.research.google.com/drive/18Wv7ghW2rRjEe3oWDqh2iz9qqO8O6XcX?usp=sharing

Citation

@misc{thrush2022pipeline,
    title={Online Language Modelling Data Pipeline},
    author={Tristan Thrush and Helen Ngo and Nathan Lambert and Douwe Kiela},
    year={2022},
    howpublished={\url{https://github.com/huggingface/olm-datasets}}
}

Contributors

tristanthrush, stas00
