
Lilac

Better data, better AI

🔗 Try the Lilac web demo!

Site · Discord · Apache 2.0 License · Follow on Twitter

Lilac is a tool for exploration, curation and quality control of datasets for training, fine-tuning and monitoring LLMs.

Lilac is used by companies like Cohere and Databricks to visualize, quantify and improve the quality of pre-training and fine-tuning data.

Lilac runs on-device using open-source LLMs with a UI and Python API.

🆒 New

  • Lilac Garden is our hosted platform for blazing fast dataset-level computations. Sign up to join the pilot.
  • Cluster & title millions of documents with the power of LLMs. Explore and search over 36,000 clusters of 4.3M documents in OpenOrca.

Why use Lilac?

  • Explore your data interactively with LLM-powered search, filtering, clustering and annotation.
  • Curate AI data, applying best practices like removing duplicates, PII and obscene content to reduce dataset size and lower training cost and time.
  • Inspect and collaborate with your team on a single, centralized dataset to improve data quality.
  • Understand how data changes over time.

Lilac can offload expensive computations to Lilac Garden, our hosted platform for blazing fast dataset-level computations.


See our 3-minute walkthrough video.

🔥 Getting started

💻 Install

pip install lilac[all]
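Note: some shells (zsh, for example) treat square brackets as glob patterns. If the command fails with a "no matches found" error, quote the argument: pip install 'lilac[all]'.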

If you prefer no local installation, you can duplicate our Spaces demo by following the documentation here.

For more detailed instructions, see our installation guide.

๐ŸŒ Start a webserver

Start a Lilac webserver with our lilac CLI:

lilac start ~/my_project

Or start the Lilac webserver from Python:

import lilac as ll

ll.start_server(project_dir='~/my_project')

This will start a webserver at http://localhost:5432/ where you can load datasets and explore them.

Lilac Garden

Lilac Garden is our hosted platform for running dataset-level computations. We utilize powerful GPUs to accelerate expensive signals like Clustering, Embedding, and PII. Sign up to join the pilot.

  • Cluster and title a million data points in 20 mins
  • Embed your dataset at half a billion tokens per min
  • Run your own signal

📊 Load data

Datasets can be loaded directly from HuggingFace, Parquet, CSV, JSON, LangSmith from LangChain, SQLite, LlamaHub, Pandas, and more. More documentation here.

import lilac as ll

ll.set_project_dir('~/my_project')
dataset = ll.from_huggingface('imdb')
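To sanity-check the load, we can read a few rows back. A minimal sketch, assuming select_rows accepts limit on its own (it is only shown alongside other arguments later in this README):

# Peek at the first rows of the freshly loaded dataset.
rows = dataset.select_rows(limit=3)
print(list(rows))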

If you prefer, you can load datasets directly from the UI without writing any Python:


🔎 Explore

Note

🔗 Explore OpenOrca and its clusters before installing!

Once we've loaded a dataset, we can explore it from the UI and get a sense of what's in the data. More documentation here.


✨ Clustering

Cluster any text column to get automated dataset insights:

dataset = ll.get_dataset('local', 'imdb')
dataset.cluster('text') # add `use_garden=True` to offload to Lilac Garden
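Cluster results are written back to the dataset as a new column. As a sketch of reading them back, assuming the output column is named text__cluster (a hypothetical name, not confirmed by this README):

# 'text__cluster' below is an assumed output column name.
rows = dataset.select_rows(columns=['text', 'text__cluster'], limit=5)
print(list(rows))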

Tip

Clustering on device can be slow or impractical, especially on machines without a powerful GPU or large memory. Offloading the compute to Lilac Garden, our hosted data processing platform, can speed up clustering by more than 100x.


⚡ Annotate with Signals (PII, Text Statistics, Language Detection, Near Duplicates, etc.)

Annotating data with signals will produce another column in your data.

dataset = ll.get_dataset('local', 'imdb')
dataset.compute_signal(ll.LangDetectionSignal(), 'text') # Detect language of each doc.

# [PII] Find emails, phone numbers, IP addresses, and secrets.
dataset.compute_signal(ll.PIISignal(), 'text')

# [Text Statistics] Compute readability scores, number of chars, TTR, non-ASCII chars, etc.
dataset.compute_signal(ll.TextStatisticsSignal(), 'text')

# [Near Duplicates] Compute near-duplicate clusters based on MinHash LSH.
dataset.compute_signal(ll.NearDuplicateSignal(), 'text')

# Print the resulting manifest, with the new field added.
print(dataset.manifest())
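Because signal outputs are ordinary fields, they can drive filters. A minimal sketch, assuming select_rows accepts the same filter tuples used with add_labels later in this README, and that the language signal writes a lang_detection field under text (both assumptions):

# Select rows the language signal tagged as English.
rows = dataset.select_rows(
  columns=['text'],
  filters=[(('text', 'lang_detection'), 'equals', 'en')],  # field path and 'equals' op are assumptions
  limit=5)
print(list(rows))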

We can also compute signals from the UI:


🔎 Search

Semantic and conceptual search requires computing an embedding first:

dataset.compute_embedding('gte-small', path='text')
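Embedding is one of the computations Lilac Garden accelerates. If compute_embedding accepts the same use_garden flag shown for cluster() above (an assumption, not confirmed by this README), the offloaded call would look like:

dataset.compute_embedding('gte-small', path='text', use_garden=True)  # `use_garden` here is an assumption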

Semantic search

In the UI, we can search by semantic similarity or by classic keyword search to find chunks of documents similar to a query:


We can run the same search in Python:

rows = dataset.select_rows(
  columns=['text', 'label'],
  searches=[
    ll.SemanticSearch(
      path='text',
      embedding='gte-small')
  ],
  limit=1)

print(list(rows))
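For the keyword variant, a sketch assuming a ll.KeywordSearch class with path and query parameters (an assumption; only SemanticSearch and ConceptSearch appear in this README):

rows = dataset.select_rows(
  columns=['text', 'label'],
  searches=[
    ll.KeywordSearch(  # hypothetical class name
      path='text',
      query='great movie')
  ],
  limit=1)

print(list(rows))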

Conceptual search

Conceptual search is a much more controllable and powerful version of semantic search, where "concepts" can be taught to Lilac by providing positive and negative examples of that concept.

Lilac provides a set of built-in concepts, but you can create your own for very specific use cases.


We can create a concept in Python with a few examples, and search by it:

db = ll.DiskConceptDB()
db.create(namespace='local', name='spam')
# Add examples of spam and not-spam.
db.edit('local', 'spam', ll.concepts.ConceptUpdate(
  insert=[
    ll.concepts.ExampleIn(label=False, text='This is normal text.'),
    ll.concepts.ExampleIn(label=True, text='asdgasdgkasd;lkgajsdl'),
    ll.concepts.ExampleIn(label=True, text='11757578jfdjja')
  ]
))

# Search by the spam concept.
rows = dataset.select_rows(
  columns=['text', 'label'],
  searches=[
    ll.ConceptSearch(
      path='text',
      concept_namespace='local',
      concept_name='spam',
      embedding='gte-small')
  ],
  limit=1)

print(list(rows))

๐Ÿท๏ธ Labeling

Lilac allows you to label individual points or slices of data:

We can also label all data matching a filter. In this case, we add the label "short" to all text with a small number of characters. This field was produced by the automatic text_statistics signal.


We can do the same in Python:

dataset.add_labels(
  'short',
  filters=[
    (('text', 'text_statistics', 'num_characters'), 'less', 1000)
  ]
)

Labels can be exported for downstream tasks. Detailed documentation here.
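As a minimal sketch of such an export, assuming the dataset exposes a to_pandas() method (an assumption; see the export documentation for the supported formats):

# Pull the labeled rows into a DataFrame for downstream use.
df = dataset.to_pandas()  # to_pandas() is an assumed export method
print(df.head())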

💬 Contact

For bugs and feature requests, please file an issue on GitHub.

For general questions, please visit our Discord.
