
compression artifacts in GT (assets) · OPEN · 10 comments

ocr-d commented on July 24, 2024
compression artifacts in GT


Comments (10)

tboenig commented on July 24, 2024

Hi @bertsky,

Thank you very much.
In order to understand the GroundTruth, one has to look at the background of how the data was created.
The GroundTruth data is based on the German Text Archive (DTA). The texts were transcribed manually from very legible, high-resolution images; the image quality had to permit high magnification, so that the transcriber could capture the text completely as full text.

The listed objects come from different libraries.
Because these libraries did not provide the German Text Archive with TIFF files, the JPG files they supplied had to be used. Even when some libraries were asked directly, no TIFF files could be provided for the titles in question, and the DTA project could not afford the costs of subsequent re-digitisation. See for example:
https://www.sub.uni-goettingen.de/fileadmin/media/texte/benutzung/Preisliste_Reproductions_20150306.pdf

Even today, TIFF images cannot simply be downloaded.

TIFF header:
The files were originally JPG files, so they cannot have a correct TIFF header that would conform to guidelines such as https://www.slub-dresden.de/fileadmin/groups/slubsite/SLUBArchiv/SLUBArchiv_Hanreichung_TIFF_v1.3.pdf.
As far as I know, there is no uniform rule among libraries as to which header data to use, so heterogeneity must always be expected.
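
A minimal sketch of how that header heterogeneity can be inspected, assuming Pillow is available (the filename is a placeholder):

```python
from PIL import Image
from PIL.TiffTags import TAGS

with Image.open("page_0001.tif") as img:
    # Pillow exposes the TIFF tags of the first IFD via tag_v2
    for tag_id, value in img.tag_v2.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")
    # XResolution/YResolution are only meaningful together with
    # ResolutionUnit (2 = inches, 3 = centimetres); files converted
    # from JPG often carry defaults or nothing at all.
    print("dpi:", img.info.get("dpi"))  # None if not annotated
```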

Why are there such data in GroundTruth?
It is not unrealistic that such data, despite all due care, end up in library repositories and have to be converted into full text. The goal of OCR-D should be that the programs and algorithms are robust enough to handle such artifacts gracefully.

However, we know that training requires the best data, available in large quantity and variety. We are still working on increasing the amount of training data.


bertsky commented on July 24, 2024

Thanks @tboenig for this thorough investigation and explanation!

If those files are there to stay, and for good reasons too, then I recommend at least marking them as degenerate in the GT repos (or even splitting GT into a "good" and a "robust" set).

Also, under these circumstances, I think we should give binarization a closer look (effective DPI, artifacts).


tboenig commented on July 24, 2024

> splitting GT into a "good" and a "robust" set

That's a really good idea, @bertsky. I'll see how I can implement it.


kba commented on July 24, 2024

@tboenig will provide those lists and we will evaluate how to integrate automated checks (image characterization) into workspace validation in core.
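
Such a check might look roughly like the following sketch (hypothetical code, not the actual core validation API; the flagged values and thresholds are illustrative assumptions):

```python
from PIL import Image

SUSPICIOUS_DPI = {0, 72, 96}  # common placeholder/default densities

def characterize(path):
    """Collect warnings about image metadata that GT validation could flag."""
    warnings = []
    with Image.open(path) as img:
        xdpi, _ = img.info.get("dpi", (0, 0))
        if round(xdpi) in SUSPICIOUS_DPI:
            warnings.append(f"{path}: reported density {xdpi} DPI looks like a default")
        # a TIFF container with JPEG compression betrays the lossy provenance
        if img.format == "TIFF" and img.info.get("compression") == "jpeg":
            warnings.append(f"{path}: TIFF with JPEG compression (likely converted)")
    return warnings
```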


cneud commented on July 24, 2024

I strongly favour keeping the above part of assets for testing purposes, as it reflects the real-life scenarios for which the OCR-D stack should be made robust (what @tboenig said).


kba commented on July 24, 2024

> I strongly favour keeping the above part of assets for testing purposes, as it reflects the real-life scenarios for which the OCR-D stack should be made robust (what @tboenig said).

@bertsky was referring to the GT we offer for training not the assets repo itself.


bertsky commented on July 24, 2024

What's the status of the work on a good vs robust split of GT data?

On a related but independent note: for those datasets which have wrong resolution metadata (e.g. praetorius_syntagma02_1619_teil2 and glauber_opera01_1658, which report 72 DPI whereas they are in fact 600 DPI), shouldn't their header information at least be corrected? (Remember, we now rely on pixel density – where annotated – in core and other processors.)
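
If the true density has been verified, the header could be rewritten with a few lines of Pillow (a sketch only; the path is a placeholder, and re-saving should use a lossless compression to avoid further degradation):

```python
from PIL import Image

with Image.open("OCR-D-IMG/page_0001.tif") as img:
    fixed = img.copy()  # load the pixel data before the handle closes

# re-saving writes XResolution/YResolution/ResolutionUnit for the given DPI
fixed.save("OCR-D-IMG/page_0001.tif", dpi=(600, 600), compression="tiff_deflate")
```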

(The images in the 2 mentioned bags also contain a digital footer added to the scan – this is clearly wrong, isn't it?)

[Image: dfgviewer]


bertsky commented on July 24, 2024

> (Remember, we now rely on pixel density – where annotated – in core and other processors.)

To illustrate, this is what happens during ocrd-cis-ocropy-dewarp in a sensible preprocessing pipeline:

[Image: OCR-D-IMG-DEWARP_0001_TextRegion_1479909781070_10_tl_27]

Thus, because

  • the actual 600 DPI got interpreted as the reported 72 DPI (see the arithmetic sketch below),
  • the region was deemed too large for line segmentation in ocrd-cis-ocropy-resegment,
  • the GT line segmentation (which has large overlaps) was applied unchanged,
  • intruders from the neighbouring lines interfered with center line estimation,
  • dewarping actually warps (deteriorates) the line images even more.
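
The arithmetic behind the first step, with an illustrative pixel height:

```python
region_px = 3000                 # region height in pixels (made-up example)
true_dpi, reported_dpi = 600, 72

print(region_px / true_dpi)      # 5.0 inches: the real physical height
print(region_px / reported_dpi)  # ~41.7 inches: what a DPI-aware processor sees
# every physical-size threshold is off by a factor of 600/72 ≈ 8.3
```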


kba commented on July 24, 2024

@tboenig, being the GT guru, should answer this.

Pragmatically, I would relax the requirements on pixel density, since we simply cannot rely on image metadata for this. Unfortunately. Cf. OCR-D/spec#129 and OCR-D/core#339.
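
One way to relax the requirement without discarding the metadata entirely could be a guarded lookup like the following sketch (function name, bounds, and default are assumptions, not what core implements):

```python
import logging

LOG = logging.getLogger("pixel-density")

def effective_dpi(reported, default=300):
    """Trust the annotated density only if it is plausible for book scans."""
    if reported and 150 <= reported <= 1200:
        return reported
    LOG.warning("implausible density %s, falling back to %s DPI", reported, default)
    return default
```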


bertsky commented on July 24, 2024

Thanks @kba for addressing this quickly. This is a real problem for our workflows – for preprocessing (as can be seen above) as well as for segmentation and OCR (e.g. Tesseract's DPI variable).

I am a bit surprised by your stance, though. When @wrznr and I brought this up at the last developer workshop, we encouraged module projects to make their components DPI-aware/relative. Why was there no objection at the time?

However, if you want to do it this way, please do it better. I took the liberty of adding reviews on both your spec PR (for a better definition of exceptions) and core PR (for a more manageable reaction). I know it's much more work, but I believe we risk losing big time in overall achievable quality if we just let this slip through.

