Traceback (most recent call last):
  File "/net/vast-storage/scratch/vast/evlab/asathe/code/composlang/lmsurprisal/notebooks/extract_surprisals.py", line 73, in <module>
    main()
  File "/net/vast-storage/scratch/vast/evlab/asathe/code/composlang/lmsurprisal/notebooks/extract_surprisals.py", line 57, in main
    surprisals = [
  File "/net/vast-storage/scratch/vast/evlab/asathe/code/composlang/lmsurprisal/surprisal/model.py", line 133, in extract_surprisal
    surprisals = self.surprise([*textbatch])
  File "/net/vast-storage/scratch/vast/evlab/asathe/code/composlang/lmsurprisal/surprisal/model.py", line 184, in surprise
    tokens=tokenized[b], surprisals=-logprobs[b, :].numpy()
  File "/home/asathe/om2-home/anaconda3/envs/surprisal/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 240, in __getitem__
    raise KeyError(
KeyError: 'Indexing with integers (to access backend Encoding for a given batch index) is not available when using Python based tokenizers'
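The error comes from transformers: indexing a BatchEncoding with an integer (tokenized[b] in surprisal/model.py above) only works when the tokenizer is a fast, Rust-backed one. A minimal sketch of two possible workarounds, assuming the relevant checkpoint ships a fast tokenizer (the checkpoint name here is illustrative):

from transformers import AutoTokenizer

# workaround 1: request a fast (Rust-backed) tokenizer explicitly, so that
# integer indexing into the BatchEncoding is supported
tokenizer = AutoTokenizer.from_pretrained("gpt2", use_fast=True)
tokenized = tokenizer(["a batch", "of sentences"])
enc = tokenized[0]  # Encoding for batch index 0; raises KeyError with a slow tokenizer

# workaround 2: with a slow (Python-based) tokenizer, index the underlying
# lists instead of the BatchEncoding itself
input_ids_b = tokenized["input_ids"][0]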
Is there any way to compute surprisal for Chinese sentences? Right now, the Chinese characters are processed in a weird way and the output does not match the number of Chinese characters in the input.
Need to either: [1] make a note somewhere, [2] add a warning, or [3] add a workaround implementation, because slicing doesn't work exactly the same way as it does with Python lists or numpy arrays.
[0:None] has undefined behavior
[:] has undefined behavior
[x:-1] has undefined behavior
What does work: providing actual or overshooting indices to characters or words within the stimulus/input (see the sketch after these examples).
[1:3, 'char'] works fine and returns surprisal over all tokens overlapping with characters 1:3
[0:99, 'char'] works fine and returns surprisal over all tokens that appear within the first 99 characters
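A minimal sketch of the working usage described above, assuming surprise() returns one surprisal object per input sentence (as the traceback above suggests); the model and sentence are illustrative:

from surprisal import AutoHuggingFaceModel

m = AutoHuggingFaceModel.from_pretrained("gpt2", model_class="gpt")
[result] = m.surprise(["The cat sat on the mat."])

# in-bounds slice: surprisal over all tokens overlapping with characters 1:3
print(result[1:3, "char"])

# overshooting slice: surprisal over all tokens within the first 99 characters
print(result[0:99, "char"])

# avoid bare slices like result[:], result[0:None], or result[x:-1];
# per the notes above, their behavior is undefined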
The current implementation of AutoHuggingFaceModel.from_pretrained takes a model_class argument, where passing "gpt" as model_class redirects to the CausalHuggingFaceModel constructor. This is a bit confusing, because users may want to get surprisals from other causal LMs like LLaMA or Mistral.
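A sketch of the call pattern in question (the Mistral checkpoint name is illustrative): model_class="gpt" is what selects the causal-LM path even for models that have nothing GPT-like about them.

from surprisal import AutoHuggingFaceModel

# as described above, model_class="gpt" routes to CausalHuggingFaceModel,
# even though this checkpoint is not a GPT model
m = AutoHuggingFaceModel.from_pretrained(
    "mistralai/Mistral-7B-v0.1", model_class="gpt"
)

A less surprising spelling might be something like model_class="causal".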
Observed small differences in results for batch_size=1 vs. larger batch sizes, and tried a number of things to get to the bottom of it; padding/attention masks didn't solve it. I just set batch size to 1 so that it's perfectly deterministic across runs (since it still runs fast enough) and deferred this issue to the future.
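One candidate explanation, independent of padding and attention masks, is that matmul kernels can take different reduction paths depending on batch shape, so the same input row can come out with tiny floating-point differences. A standalone illustration in plain torch (not the surprisal codebase); whether a nonzero difference actually shows up depends on hardware and backend:

import torch

torch.manual_seed(0)
layer = torch.nn.Linear(768, 768)
x = torch.randn(8, 768)

with torch.no_grad():
    alone = layer(x[:1])    # first row, processed as a batch of 1
    batched = layer(x)[:1]  # same row, processed inside a batch of 8

# the results are close, but the max difference can be on the order of
# 1e-7 rather than exactly 0, since kernels may differ by batch shape
print((alone - batched).abs().max().item())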
contributed by @benlipkin
(non-breaking: the CI configuration is currently included only as an in-progress TODO. surprisal is released to PyPI manually at the moment and works fine regardless of CI/CD tests.)
I installed your Surprisal package with Python 3.12. Upon running a script that was essentially your test examples (the OpenAI variant), I received the message "ModuleNotFoundError: No module named 'torch'." Looking further into the issue, I found that PyTorch has not yet been released for Python 3.12. Could you verify that your package works on Python 3.12, and if not, which version of Python do you recommend installing to use Surprisal?
Batch evaluation is currently implemented, but there is no support for using a GPU. This will likely require a modification in surprisal/model.py at initialization of a HuggingFaceModel instance.
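A minimal sketch of what that modification might look like; the device parameter, attribute names, and the surprise_batch helper below are assumptions about one possible design, not the package's current API:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

class HuggingFaceModel:
    # hypothetical: accept a device at initialization, defaulting to a GPU
    # when one is available
    def __init__(self, model_id: str, device=None):
        self.device = device or ("cuda" if torch.cuda.is_available() else "cpu")
        self.tokenizer = AutoTokenizer.from_pretrained(model_id)
        if self.tokenizer.pad_token is None:
            self.tokenizer.pad_token = self.tokenizer.eos_token
        self.model = AutoModelForCausalLM.from_pretrained(model_id).to(self.device)
        self.model.eval()

    def surprise_batch(self, texts):
        # inputs must be moved to the same device as the model
        batch = self.tokenizer(texts, return_tensors="pt", padding=True)
        batch = {k: v.to(self.device) for k, v in batch.items()}
        with torch.no_grad():
            return self.model(**batch).logits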
OpenAI no longer provides logprobs for the prompt, making it impossible to use the API as a probability-over-a-string function. It does, however, continue to provide logprobs over its own completions.
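For reference, a sketch with the v1-style openai Python client (model name illustrative): logprobs can be requested for completion tokens, but there is no longer an echo-style option that returns them for the prompt itself.

from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": "The cat sat on the"}],
    logprobs=True,          # applies to the *completion* only
    max_tokens=5,
)

# per-token logprobs are available only for tokens the model generated;
# nothing comparable is returned for the prompt tokens
for tok in resp.choices[0].logprobs.content:
    print(tok.token, tok.logprob)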