allenai / papermage

library supporting NLP and CV research on scientific papers

Home Page: https://papermage.org

License: Apache License 2.0

Languages: Python 99.57%, Shell 0.43%
Topics: computer-vision, machine-learning, multimodal, natural-language-processing, pdf-processing, scientific-papers, python

papermage's Introduction

papermage

Setup

conda create -n papermage python=3.11
conda activate papermage

If you're installing from source:

pip install -e '.[dev,predictors,visualizers]'

If you're installing from PyPI:

pip install 'papermage[dev,predictors,visualizers]'

(You may need to add or remove the quotes depending on your command-line shell.)

If you're on macOS, you'll also want to run:

conda install poppler

Unit testing

python -m pytest

To re-run only the most recently failed tests:

python -m pytest --lf --no-cov -n0

To run a specific test or test class by name:

python -m pytest -k 'TestPDFPlumberParser' --no-cov -n0

Quick start

1. Create a Document for the first time from a PDF

from papermage.recipes import CoreRecipe

recipe = CoreRecipe()
doc = recipe.run("tests/fixtures/papermage.pdf")

2. Understanding the output: the Document class

What is a Document? At minimum, it is some text, saved under the .symbols layer, which is just a <str>. For example:

> doc.symbols
"PaperMage: A Unified Toolkit for Processing, Representing, and\nManipulating Visually-..."

But this library is really useful when you have multiple different ways of segmenting .symbols. For example, segmenting the paper into Pages, and then each page into Rows:

for page in doc.pages:
    print(f'\n=== PAGE: {page.id} ===\n\n')
    for row in page.rows:
        print(row.text)
        
...
=== PAGE: 5 ===

4
Vignette: Building an Attributed QA
System for Scientific Papers
How could researchers leverage papermage for
their research? Here, we walk through a user sce-
nario in which a researcher (Lucy) is prototyping
an attributed QA system for science.
System Design.
Drawing inspiration from Ko
...

This shows two nice aspects of this library:

  • Document provides iterables for different segmentations of symbols. Options include things like pages, tokens, rows, sentences, sections, .... Not every Parser will provide every segmentation, though.

  • Each one of these segments (in our library, we call them Entity objects) is aware of (and can access) other segment types. For example, you can call page.rows to get all Rows that intersect a particular Page. Or you can call sent.tokens to get all Tokens that intersect a particular Sentence. Or you can call sent.rows to get the Row(s) that intersect a particular Sentence. These indexes are built dynamically when the Document is created and each time a new Entity type is added. In the extreme, as long as those layers are available in the Document, you can write:

for page in doc.pages:
    for sent in page.sentences:
        for row in sent.rows: 
            ...

You can check which layers are available in a Document via:

> doc.layers
['tokens',
 'rows',
 'pages',
 'words',
 'sentences',
 'blocks',
 'vila_entities',
 'titles',
 'authors',
 'abstracts',
 'keywords',
 'sections',
 'lists',
 'bibliographies',
 'equations',
 'algorithms',
 'figures',
 'tables',
 'captions',
 'headers',
 'footers',
 'footnotes',
 'symbols',
 'images',
 'metadata',
 'entities',
 'relations']
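Each layer name doubles as an attribute on the Document. As a minimal sketch (assuming the CoreRecipe above has been run and the relevant layers are non-empty), you can read layers off the Document directly:

# Assumes `doc` came from CoreRecipe above and that these layers are non-empty.
print(doc.titles[0].text)        # detected paper title
print(doc.abstracts[0].text)     # detected abstract

# Every layer is an iterable of Entity objects.
for section in doc.sections:
    print(section.text[:80])     # first 80 characters of each detected section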

3. Understanding intersection of Entities

Note that Entity objects don't necessarily nest perfectly within one another. For example, what happens if you run:

for sent in doc.sentences:
    for row in sent.rows:
        print([token.text for token in row.tokens])

Tokens that are outside each sentence can still be printed. This is because when we jump from a sentence to its rows, we are looking for all rows that have any overlap with the sentence. Rows can extend beyond sentence boundaries, and as such, can contain tokens outside that sentence.

A key aspect of using this library is understanding how these different layers are defined & anticipating how they might interact with each other. We try to make decisions that are intuitive, but we do ask users to experiment with layers to build up familiarity.
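For instance, if you only want the tokens that fall strictly inside a sentence (rather than every token of every overlapping row), one way is to compare character offsets via the .spans attribute described in the next section. This is a minimal sketch and assumes each token carries a single span:

def tokens_inside(sentence):
    # Keep only tokens whose character span lies within one of the sentence's spans.
    kept = []
    for token in sentence.tokens:
        t = token.spans[0]
        if any(t.start >= s.start and t.end <= s.end for s in sentence.spans):
            kept.append(token)
    return kept

for sent in doc.sentences:
    print([token.text for token in tokens_inside(sent)])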

4. What's in an Entity?

Each Entity object stores information about its contents and position:

  • .spans: List[Span], A Span is a pointer into Document.symbols (that is, Span(start=0, end=5) corresponds to symbols[0:5]). By default, when you iterate over an Entity, you iterate over its .spans.

  • .boxes: List[Box], A Box represents a rectangular region on the page. Each Span is associated with a Box.

  • .metadata: Metadata, A free-form, dictionary-like object for storing extra metadata about the Entity. It is usually empty.
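Putting these together, a quick way to get a feel for an Entity is to print its pieces. A minimal sketch, using the fields listed above:

sent = doc.sentences[0]

print(sent.text)        # the symbols covered by this Entity
print(sent.spans)       # pointers into doc.symbols
print(sent.boxes)       # rectangular page regions backing those spans
print(sent.metadata)    # free-form metadata; usually empty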

5. How can I manually create my own Document?

A Document is created by stitching together 3 types of tools: Parsers, Rasterizers and Predictors.

  • Parsers take a PDF as input and return a Document composed of .symbols and other layers. The example one we use is a wrapper around the PDFPlumber (MIT License) utility.

  • Rasterizers take a PDF as input and return an Image per page that is added to Document.images. The example one we use is PDF2Image (MIT License).

  • Predictors take a Document and apply some operation to compute a new set of Entity objects that we can insert into our Document. These are all built in-house and can be either simple heuristics or full machine-learning models.
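As a rough sketch of how those three pieces fit together: the parser call below matches the usage shown in the issues further down, while the rasterizer class and method names are assumptions and may differ in your installed version. The CoreRecipe used in the Quick start wraps all of these steps for you.

from papermage.parsers.pdfplumber_parser import PDFPlumberParser
from papermage.rasterizers import PDF2ImageRasterizer  # assumed class name

pdf_path = "tests/fixtures/papermage.pdf"

# 1. Parse: build .symbols plus layout-derived layers (pages, rows, tokens, ...).
parser = PDFPlumberParser()
doc = parser.parse(input_pdf_path=pdf_path)

# 2. Rasterize: one image per page; `rasterize` and the attach step are assumptions.
rasterizer = PDF2ImageRasterizer()
images = rasterizer.rasterize(input_pdf_path=pdf_path, dpi=72)

# 3. Predict: run Predictors to add new Entity layers; see papermage.recipes
#    for how the bundled predictors are chained together.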

6. How can I save my Document?

import json
with open('filename.json', 'w') as f_out:
    json.dump(doc.to_json(), f_out, indent=4)

will produce something akin to:

{
    "symbols": "PaperMage: A Unified Toolkit for Processing, Representing, an...",
    "entities": {
        "rows": [...],
        "tokens": [...],
        "words": [...],
        "blocks": [...],
        "sentences": [...]
    },
    "metadata": {...}
}

7. How can I load my Document?

These can be used to reconstruct a Document again via:

import json
from papermage.magelib import Document  # assuming Document is exported from papermage.magelib

with open('filename.json') as f_in:
    doc_dict = json.load(f_in)
    doc = Document.from_json(doc_dict)

Note: A common pattern for adding layers to a document is to load in a previously saved document, run some additional Predictors on it, and save the result.
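A minimal sketch of that pattern (the predictor-specific prediction and annotation calls vary, so that step is left as a comment; the Document import path is assumed to be papermage.magelib):

import json
from papermage.magelib import Document

# 1. Load a previously saved Document.
with open("filename.json") as f_in:
    doc = Document.from_json(json.load(f_in))

# 2. Run additional Predictors here and attach their Entities to `doc`
#    (see papermage/predictors/README.md for the predictor-specific APIs).

# 3. Save the enriched Document back out.
with open("filename_enriched.json", "w") as f_out:
    json.dump(doc.to_json(), f_out, indent=4)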

See papermage/predictors/README.md for more information about training custom predictors on your own data.

See papermage/examples/quick_start_demo.ipynb for a notebook walking through some more usage patterns.

papermage's People

Contributors

amanpreet692, bnewm0609, eltociear, josephcc, kyleclo, mdr223, soldni


papermage's Issues

`Layer` definition with `Slice` compatibility

Define Layer explicitly and make it such that doc.sentences: Layer has reasonable slicing behavior, so that doc.sentences[3:10] produces a Slice with a reasonable repr and the ability to perform Entity-level operations on it in a distributed manner, e.g. doc.sentences[3:10].text instead of [s.text for s in doc.sentences[3:10]].
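A rough sketch of the kind of behavior being proposed (illustrative only, not the library's actual implementation):

from typing import List

class Slice:
    # A view over a contiguous run of Entities in a Layer.
    def __init__(self, entities: List):
        self.entities = entities

    def __repr__(self):
        return f"Slice(n={len(self.entities)})"

    @property
    def text(self) -> List[str]:
        # distribute the Entity-level operation over every Entity in the slice
        return [e.text for e in self.entities]

class Layer:
    def __init__(self, entities: List):
        self.entities = entities

    def __getitem__(self, key):
        if isinstance(key, slice):
            return Slice(self.entities[key])
        return self.entities[key]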

Pip install failing

pip install papermage.[dev,predictors,visualizers] 
zsh: no matches found: papermage.[dev,predictors,visualizers]

Ubuntu 22.04, Python 3.12.2

Examples mentioned in Papermage paper don't work

I was reading through the paper "PaperMage: A Unified Toolkit for Processing, Representing, and Manipulating Visually-Rich Scientific Documents" and wanted to try the examples from it. The one below is not working.

It seems there is no such parser.

Code I am running:

import papermage as pm
parser = pm.PDF2TextParser()

Error I am getting:
AttributeError: module 'papermage' has no attribute 'PDF2TextParser'

Merging documents / Standardizing protocols

Rasterizers, DocMetadataExtractors, Parsers, etc.

All of these should emit Documents, and we should have some sort of Doc.update() or merge() functionality to combine Documents.

Unable to run quick_start_demo

Hi,

When I run the second cell, I get the following error:

---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[2], line 5
2 from papermage.recipes import CoreRecipe
3 fixture_path = pathlib.Path(pwd).parent / "tests/fixtures"
----> 5 recipe = CoreRecipe()
6 doc = recipe.run(fixture_path / "papermage.pdf")

File ~/workspace/papermage/papermage/recipes/core_recipe.py:94, in CoreRecipe.__init__(self, ivila_predictor_path, bio_roberta_predictor_path, svm_word_predictor_path, dpi)
92 with warnings.catch_warnings():
93 warnings.simplefilter("ignore")
---> 94 self.word_predictor = SVMWordPredictor.from_path(svm_word_predictor_path)
96 self.publaynet_block_predictor = LPEffDetPubLayNetBlockPredictor.from_pretrained()
97 self.ivila_predictor = IVILATokenClassificationPredictor.from_pretrained(ivila_predictor_path)

File ~/workspace/papermage/papermage/predictors/word_predictors.py:227, in SVMWordPredictor.from_path(cls, tar_path)
225 @classmethod
226 def from_path(cls, tar_path: str):
--> 227 classifier = SVMClassifier.from_path(tar_path=tar_path)
228 predictor = SVMWordPredictor(classifier=classifier)
229 return predictor

File ~/workspace/papermage/papermage/predictors/word_predictors.py:107, in SVMClassifier.from_path(cls, tar_path)
105 with tarfile.open(tar_path, "r:gz") as tar:
106 tar.extractall(path=tmp_dir)
--> 107 return cls.from_directory(tmp_dir)

File ~/workspace/papermage/papermage/predictors/word_predictors.py:111, in SVMClassifier.from_directory(cls, dir)
109 @classmethod
110 def from_directory(cls, dir: str):
--> 111 classifier = SVMClassifier.from_paths(
112 ohe_encoder_path=os.path.join(dir, "svm_word_predictor/ohencoder.joblib"),
113 scaler_path=os.path.join(dir, "svm_word_predictor/scaler.joblib"),
114 estimator_path=os.path.join(dir, "svm_word_predictor/hyphen_clf.joblib"),
115 unigram_probs_path=os.path.join(dir, "svm_word_predictor/unigram_probs.pkl"),
116 )
117 return classifier

File ~/workspace/papermage/papermage/predictors/word_predictors.py:128, in SVMClassifier.from_paths(cls, ohe_encoder_path, scaler_path, estimator_path, unigram_probs_path)
119 @classmethod
120 def from_paths(
121 cls,
(...)
125 unigram_probs_path: str,
126 ):
127 ohe_encoder = load(ohe_encoder_path)
--> 128 scaler = load(scaler_path)
129 estimator = load(estimator_path)
130 unigram_probs = load(unigram_probs_path)

File ~/anaconda3/envs/papermage/lib/python3.11/site-packages/joblib/numpy_pickle.py:587, in load(filename, mmap_mode)
581 if isinstance(fobj, str):
582 # if the returned file object is a string, this means we
583 # try to load a pickle file generated with an version of
584 # Joblib so we load it with joblib compatibility function.
585 return load_compatibility(fobj)
--> 587 obj = _unpickle(fobj, filename, mmap_mode)
588 return obj

File ~/anaconda3/envs/papermage/lib/python3.11/site-packages/joblib/numpy_pickle.py:506, in _unpickle(fobj, filename, mmap_mode)
504 obj = None
505 try:
--> 506 obj = unpickler.load()
507 if unpickler.compat_mode:
508 warnings.warn("The file '%s' has been generated with a "
509 "joblib version less than 0.10. "
510 "Please regenerate this pickle file."
511 % filename,
512 DeprecationWarning, stacklevel=3)

File ~/anaconda3/envs/papermage/lib/python3.11/pickle.py:1213, in _Unpickler.load(self)
1211 raise EOFError
1212 assert isinstance(key, bytes_types)
-> 1213 dispatch[key[0]](self)
1214 except _Stop as stopinst:
1215 return stopinst.value

KeyError: 173

Running quick_start_demo.ipynb reported error OSError

Hello, I'm glad to read your paper, but I ran into a problem while running the code:
Running quick_start_demo.ipynb reported error OSError: [Errno 22] Invalid argument: 'D:\\Users\ure/.torch/iopath_cache\s/ukbw5s673633hsw\publaynet-tf_ efficientdet_d0.pth.tar?dl=1.lock'

How to extract figures from a PDF?

After setup, I tried:

  1. doc.figures
  2. json.dump

but the results showed only each figure box's position and its metadata. How can I get the actual figure from the PDF?

API fix - Empty layers

  1. To avoid @soldni type checking horrors, let's add something akin to .get()
  2. We may still want to represent fields as None to separate them from [] or Layer(entities=[])

`ModuleNotFoundError: No module named 'decontext'` trying to import `CoreRecipe`

Hi! First of all, thank you for the white paper and the library. The concept of layers is interesting. It's a smart way to represent a document for different purposes.

Unfortunately, I face an issue trying to follow the steps from the README.
Steps to reproduce:

  1. pip install -U papermage
  2. ipython
  3. from papermage.recipes import CoreRecipe => ModuleNotFoundError: No module named 'decontext'

Installing the latest version from the source code fixes the issue.

A small error in README.md?

When I run setup and follow the README.md with pip install -e '.[dev,predictors,visualizers]', an error occurs:

ERROR: '.[dev,predictors,visualizers]' is not a valid editable requirement. It should either be a path to a local project or a VCS URL (beginning with bzr+http, bzr+https, bzr+ssh, bzr+sftp, bzr+ftp, bzr+lp, bzr+file, git+http, git+https, git+ssh, git+git, git+file, hg+file, hg+http, hg+https, hg+ssh, hg+static-http, svn+ssh, svn+http, svn+https, svn+svn, svn+file).

After changing to pip install -e .[dev,predictors,visualizers] (without the quotes), no errors are reported. I'm not sure if this is an error in the README.md.

How to extract author name, institution, and country from the "authors" box

Hi,

At the moment it is possible to derive author information via doc.authors. Is it also possible to further fine-grain this information and retrieve each author's name, institution, and country? doc.authors returns all the author information in a single string, and I don't know how to retrieve the individual entities.

Enable Python 3.11

Some dependencies require python<3.11. Heard there's speedups in 3.11 though, so could be worth figuring out.

How to extract figures from the PDF?

Hi there,

Thank you so much for the nice package!

Can I ask how to extract the figures from the pdf? I have tried:

recipe = CoreRecipe()
doc = recipe.run("papermage/tests/fixtures/2020.acl-main.447.pdf")
doc.figures

But it seems that this is not returning the figure data. Is the figure extraction achievable with your package?

Best,
Bowen

How to improve detection of sections?

Hi,

Congrats for your great work and beautiful API!

I'm especially interested in using it to create a hierarchical document based on the original PDF.
My issue is that some sections are not correctly identified.

For example, in your papermage.pdf file, the second section is mixed in with section 2.1, and the title of section 3.3 is only partially identified.

I have similar issues on some of my documents.

I would like to know how this could be improved. Could the model be trained further if there were a training set of documents with the correct sections pre-identified?

Let me know how I could help, the topic is really interesting!

The parser stability check usually fails.

When running pytest tests/test_parsers/test_pdf_plumber_parser.py, the test_parser_stability test usually fails. A workaround is to overwrite the test fixture JSON (tests/fixtures/2304.02623v1.json) with the document parse before running the test:

import json
from papermage.parsers.pdfplumber_parser import PDFPlumberParser

parser = PDFPlumberParser()
doc = parser.parse(input_pdf_path="tests/fixtures/2304.02623v1.pdf")
with open("tests/fixtures/2304.02623v1.json", "w") as f:
    json.dump(doc.to_json(), f)

However, this defeats the point of having a stability test---the PDF parses won't be stable between runs. Can we make a better stability test?

How to use doc.blocks

Hello, could you show me how to use this? Please share some examples. Thanks!

Blank pages in pdf lead to the wrong number of pages

I was dealing with a document that triggered this error in papermage/rasterizers/rasterizer.py:

raise ValueError(f"Failed to attach. {len(images)} images != {len(pages)} pages in doc.")

After some digging, I found that the cause is a blank page in my PDF: the code below, in papermage/parsers/pdfplumber_parser.py, determines the number of pages by iterating over the objects that exist on each page, which skips the blank page, so the number of page objects in the page_annos list ends up less than the actual number of pages.

for page_id, tups in itertools.groupby(iterable=tokens_with_group_ids, key=lambda tup: tup[2]):
    page_tokens = [token for token, _, _ in tups]
    page_w, page_h, page_unit = dims[page_id]
    page = Entity(
        spans=[
            Span(
                start=page_tokens[0].spans[0].start,
                end=page_tokens[-1].spans[0].end,
            )
        ],
        boxes=[Box.create_enclosing_box(boxes=[box for t in page_tokens for box in t.boxes])],
        metadata=Metadata(width=page_w, height=page_h, user_unit=page_unit),
    )
    page_annos.append(page)

Some further modifications may be needed here to deal with this rare case. Thank you.

`TestEntityClassificationPredictorTrainer` fails on Mac M1 chip

FAILED tests/test_trainers/test_entity_classification_predictor_trainer.py::TestEntityClassificationPredictorTrainer::test_train - RuntimeError: Placeholder storage has not been allocated on MPS device!

Googling around, it looks like it's an M1 chip issue. We need to modify the statement where we send .to() a device in torch:

https://stackoverflow.com/questions/74724120/pytorch-on-m1-mac-runtimeerror-placeholder-storage-has-not-been-allocated-on-m

Not that high priority

problems when parsing older paper in PDF format

Hi, thanks for this great toolkit!
I tried papermage with several PDF files. It works really well with recent papers, but when I tried to parse some papers published in 1980 or 1989, papermage failed to parse the sentences correctly.

doc = recipe.run("1980.pdf")
for sen in doc.sentences:
    print(sen.text)
'''
output:
Received
January
1978;
revised
October
1979;
accepted
December 1979
References
1.
Avery,
K.
R.
,
and
Avery,
C.
A.
Design
and
development
of an interactive
statistical
system
(SIPS).
Proc.
Comptr.
Sci.
and
Statistics: 8th
Ann.
Symp.
on
'''
