
A unified interface for computing surprisal (log probabilities) from language models! Supports neural, symbolic, and black-box API models.

Home Page: https://aalok-sathe.github.io/surprisal/

License: MIT License

Topics: language-modeling, large-language-models, log-likelihood, surprisal, gpt, next-word-prediction


surprisal

Compute surprisal from language models!

surprisal supports most causal language models (GPT-2- and GPT-Neo-like models) from Hugging Face or a local checkpoint, as well as GPT-3 models from OpenAI using their API! We also support KenLM N-gram language models via the KenLM Python interface.

Masked language models (BERT-like models) are in the pipeline and will be supported in a future release (see #9).

Usage

The snippet below computes per-token surprisals for a list of sentences:

from surprisal import AutoHuggingFaceModel, KenLMModel

sentences = [
    "The cat is on the mat",
    "The cat is on the hat",
    "The cat is on the pizza",
    "The pizza is on the mat",
    "I told you that the cat is on the mat",
    "I told you the cat is on the mat",
]

m = AutoHuggingFaceModel.from_pretrained('gpt2')
m.to('cuda') # optionally move your model to GPU!

k = KenLMModel(model_path='./literature.arpa')

for result in m.surprise(sentences):
    print(result)
for result in k.surprise(sentences):
    print(result)

and produces output of this sort (gpt2):

       The       Ġcat        Ġis        Ġon       Ġthe       Ġmat  
     3.276      9.222      2.463      4.145      0.961      7.237  
       The       Ġcat        Ġis        Ġon       Ġthe       Ġhat  
     3.276      9.222      2.463      4.145      0.961      9.955  
       The       Ġcat        Ġis        Ġon       Ġthe     Ġpizza  
     3.276      9.222      2.463      4.145      0.961      8.212  
       The     Ġpizza        Ġis        Ġon       Ġthe       Ġmat  
     3.276     10.860      3.212      4.910      0.985      8.379  
         I      Ġtold       Ġyou      Ġthat       Ġthe       Ġcat        Ġis        Ġon       Ġthe       Ġmat 
     3.998      6.856      0.619      2.443      2.711      7.955      2.596      4.804      1.139      6.946 
         I      Ġtold       Ġyou       Ġthe       Ġcat        Ġis        Ġon       Ġthe       Ġmat  
     3.998      6.856      0.619      4.115      7.612      3.031      4.817      1.233      7.033 

Extracting surprisal over a substring

A Surprisal object can be aggregated over the subset of tokens that best matches a span of words or characters. Word boundaries are inherited from the model's standard tokenizer and may not be consistent across models, so slicing by character spans is the default and recommended option. Surprisals are log probabilities, so they are summed over tokens during aggregation. For example:

>>> [s] = m.surprise("The cat is on the mat")
>>> s[3:6, "word"] 
12.343366384506226
Ġon Ġthe Ġmat
>>> s[3:6, "char"]
9.222099304199219
Ġcat
>>> s[3:6]
9.222099304199219
Ġcat
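
Because surprisals are log probabilities, the word-level slice above is just the sum of the per-token values from the gpt2 output earlier. A quick sanity check, using the rounded values from the table:

# per-token surprisals for "Ġon", "Ġthe", "Ġmat" (gpt2 output above)
total = 4.145 + 0.961 + 7.237
print(total)  # ~12.343, matching s[3:6, "word"]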

GPT-3 using OpenAI API

⚠ NOTE: As of recently, OpenAI no longer returns log probabilities for most of their models. See #15. To use a GPT-3 model from OpenAI's API, you will need to obtain your organization ID and a user-specific API key from your account. Then, use OpenAIModel the same way as a Hugging Face model.

m = surprisal.OpenAIModel(model_id='text-davinci-002',
                          openai_api_key="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", 
                          openai_org="org-xxxxxxxxxxxxxxxxxxxxxxxx")

These values can also be passed via the environment variables OPENAI_API_KEY and OPENAI_ORG, set before calling the script.
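
For example, a minimal sketch that sets the variables from within Python before constructing the model, assuming OpenAIModel falls back to them when the keyword arguments are omitted (the credential values are placeholders):

import os
import surprisal

# placeholder credentials; substitute your own values
os.environ["OPENAI_API_KEY"] = "sk-..."
os.environ["OPENAI_ORG"] = "org-..."

m = surprisal.OpenAIModel(model_id='text-davinci-002')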

You can also call Surprisal.lineplot() to visualize the surprisals:

from matplotlib import pyplot as plt
f, a = None, None
for result in m.surprise(sentences):
    f, a = result.lineplot(f, a)

plt.show()

surprisal also has a minimal CLI:

python -m surprisal -m distilgpt2 "I went to the train station today."
      I      Ġwent        Ġto       Ġthe     Ġtrain   Ġstation     Ġtoday          . 
  4.984      5.729      0.812      1.723      7.317      0.497      4.600      2.528 

python -m surprisal -m distilgpt2 "I went to the space station today."
      I      Ġwent        Ġto       Ġthe     Ġspace   Ġstation     Ġtoday          . 
  4.984      5.729      0.812      1.723      8.425      0.707      5.182      2.574

Installing

Because surprisal is used by people from different communities for different purposes, core dependencies related to language modeling are marked optional by default. Depending on your use case, install surprisal with the appropriate extras.

Installing from PyPI (latest stable release)

Use a command like pip install surprisal[optional], replacing [optional] with whatever optional support you need. For multiple optional extras, use a comma-separated list:

pip install surprisal[kenlm,transformers]
# the above is equivalent to
pip install surprisal[all]

Possible options include: transformers, kenlm, openai, petals

If you use poetry for your existing project, use the -E option to add surprisal together with the desired optional dependencies:

poetry add surprisal -E transformers -E kenlm
# the above is equivalent to
poetry add surprisal -E all

To also install openai and petals, you can do

poetry add surprisal -E transformers -E kenlm -E openai -E petals
# the above is equivalent to 
poetry add surprisal -E allplus

Installing from GitHub (bleeding edge)

The -e flag allows an editable install, so you can make changes to surprisal.

git clone https://github.com/aalok-sathe/surprisal.git
pip install -e .[transformers]

Acknowledgments

Inspired by the now-inactive lm-scorer; thanks to folks from CPLlab and EvLab for comments and help.

License

MIT License. (C) 2022-23, contributors.


Contributors

aalok-sathe, benlipkin, smeylan


Issues

compute surprisal for Chinese characters

Is there any way to compute surprisal for Chinese sentences? Right now, Chinese characters are processed in an odd way, and the output does not match the number of Chinese characters in the input.

Slicing in SurprisalArray is not fully Pythonic

Need to either [1] make a note somewhere, [2] add a warning, or [3] add a workaround implementation, because slicing doesn't work exactly the same way as it does with Python lists or NumPy arrays.

  • [0:None] has undefined behavior
  • [:] has undefined behavior
  • [x:-1] has undefined behavior

What does work: providing actual or overshooting indices to characters or words within the stimulus/input (see the sketch after this list).

  • [1:3, 'char'] works fine and returns surprisal over all tokens overlapping with chars 1:3
  • [0:99, 'char'] works fine and returns surprisal over all tokens that appear within the first 99 chars
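
For illustration, a sketch of the supported and unsupported patterns, reusing the gpt2 model m from the Usage section above:

[s] = m.surprise("The cat is on the mat")

s[1:3, "char"]   # works: tokens overlapping characters 1-3
s[0:99, "char"]  # works: overshooting returns tokens within the first 99 chars
# s[:]           # undefined behavior
# s[0:None]      # undefined behavior
# s[2:-1]        # undefined behavior (negative indices)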

Dependency issues in Python 3.12

Hello,

I installed your surprisal package with Python 3.12. Upon running a script that was essentially your test examples (the OpenAI variant), I received the message "ModuleNotFoundError: No module named 'torch'". Looking further into the issue, I found that PyTorch has not yet been released for Python 3.12. Could you verify whether your package works on Python 3.12, and if not, which version of Python you recommend for using surprisal?

Error when using Python-based tokenizers

Traceback (most recent call last):
  File "/net/vast-storage/scratch/vast/evlab/asathe/code/composlang/lmsurprisal/notebooks/extract_surprisals.py", line 73, in <module>
    main()
  File "/net/vast-storage/scratch/vast/evlab/asathe/code/composlang/lmsurprisal/notebooks/extract_surprisals.py", line 57, in main
    surprisals = [
  File "/net/vast-storage/scratch/vast/evlab/asathe/code/composlang/lmsurprisal/surprisal/model.py", line 133, in extract_surprisal
    surprisals = self.surprise([*textbatch])
  File "/net/vast-storage/scratch/vast/evlab/asathe/code/composlang/lmsurprisal/surprisal/model.py", line 184, in surprise
    tokens=tokenized[b], surprisals=-logprobs[b, :].numpy()
  File "/home/asathe/om2-home/anaconda3/envs/surprisal/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 240, in __getitem__
    raise KeyError(
KeyError: 'Indexing with integers (to access backend Encoding for a given batch index) is not available when using Python based tokenizers'

Support GPU

Currently, batch evaluation is implemented, but there is no support for using a GPU. This will likely require a modification in surprisal/model.py at initialization of a HuggingFaceModel instance.

Conflating causal LM and "gpt" model class

The current implementation of AutoHuggingFaceModel.from_pretrained takes a model_class argument, where passing gpt as model_class redirects to the CausalHuggingFaceModel constructor. This is confusing, because users may want surprisals from other causal LMs such as LLaMA or Mistral.

Maybe the from_pretrained function (https://github.com/aalok-sathe/surprisal/blob/main/surprisal/model.py#L470) could take more abstract options for model_class: for example, causal or masked instead of gpt and bert? A sketch follows.
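
A sketch of what that might look like; the causal option and the Mistral example are hypothetical, not current behavior:

# current: architecture-family names
m = AutoHuggingFaceModel.from_pretrained("gpt2", model_class="gpt")

# proposed: abstract LM-type names that would cover LLaMA, Mistral, etc.
m = AutoHuggingFaceModel.from_pretrained(
    "mistralai/Mistral-7B-v0.1", model_class="causal"
)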

Make the CI stuff work

(non-breaking: currently the CI stuff is added just as an in-progress TODO. surprisal is released manually to PyPI at the moment and works fine regardless of CI/CD tests)

  • pylint
  • automatically push tagged releases to PyPI

Are surprisal values across different batch sizes slightly different?

Observed small differences in results for batch_size=1 vs. larger batch sizes and tried a number of things to get to the bottom of it. Padding/attention masks didn't solve it. For now, batch size is set to 1 so that results are perfectly deterministic across runs (it still runs fast enough), and this issue is deferred to the future.
contributed by @benlipkin
