Comments (10)
Hi @bertsky,
Thank you very much.
To understand the GroundTruth data, one has to look at the background of its creation.
The GroundTruth data is based on the German Text Archive (Deutsches Textarchiv, DTA). The texts were transcribed manually from very legible, high-resolution images. The image quality should allow the transcriber enough magnification to capture the text 100% as full text.
The listed objects come from different libraries.
Because these libraries did not provide the German Text Archive with TIFF files, the JPG files they supplied had to be used. Even upon explicit request, some libraries could not provide TIFF files for the titles in question. Nor could the DTA project afford the costs of re-digitisation. See for example:
https://www.sub.uni-goettingen.de/fileadmin/media/texte/benutzung/Preisliste_Reproductions_20150306.pdf
Even today, TIFF images cannot simply be downloaded.
TIFF header:
The files were originally JPG files, so no correct TIFF header can be expected that would conform to guidelines such as: https://www.slub-dresden.de/fileadmin/groups/slubsite/SLUBArchiv/SLUBArchiv_Hanreichung_TIFF_v1.3.pdf.
As far as I know, there is no uniform rule for libraries as to which header data to use. Heterogeneity must therefore always be expected.
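Header heterogeneity like this can at least be surveyed automatically. A minimal sketch, assuming Pillow is available (`report_dpi` is an illustrative name, not part of any OCR-D tool):

```python
from pathlib import Path

from PIL import Image  # assumed dependency (Pillow)


def report_dpi(directory):
    """List the pixel density annotated in each image header, if any."""
    results = {}
    for path in sorted(Path(directory).iterdir()):
        if path.suffix.lower() not in {".tif", ".tiff", ".jpg", ".jpeg"}:
            continue
        with Image.open(path) as img:
            # the 'dpi' key is absent from many JPEG-derived files
            results[path.name] = img.info.get("dpi")
    return results
```

Running this over a bag quickly shows which files carry no resolution annotation at all and which carry an implausible one.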
Why are there such data in GroundTruth?
It is not unrealistic that such data, despite all due care, end up stored in libraries and have to be converted into full text. The goal of OCR-D should be to make the programs and algorithms stable enough to handle such artifacts gracefully.
However, we know that training requires the best data, available in large numbers and great variety. We are still trying to increase the amount of training data.
from assets.
Thanks @tboenig for this thorough investigation and explanation!
If those files are there to stay, and for good reasons too, then I recommend at least marking them as degenerate in the GT repos (or even splitting GT into a "good" and a "robust" set).
Also, under these circumstances, I think we should give binarization a closer look (effective DPI, artifacts).
@bertsky:
> splitting GT into a "good" and a "robust" set

That's a really good idea. I'll see how I can implement it.
@tboenig will provide those lists and we will evaluate how to integrate automated checks (image characterization) into workspace validation in core.
I strongly opt for keeping the above part of assets for testing purposes, as this well reflects real-life scenarios for which the OCR-D stack should be made robust (what @tboenig said).
> I strongly opt for keeping the above part of assets for testing purposes, as this well reflects real-life scenarios for which the OCR-D stack should be made robust (what @tboenig said).

@bertsky was referring to the GT we offer for training, not the assets repo itself.
What's the status of the work on a "good" vs "robust" split of the GT data?
And, related but independent: those datasets which have wrong resolution metadata (e.g. praetorius_syntagma02_1619_teil2 and glauber_opera01_1658 reporting 72 DPI, whereas they are in fact 600 DPI) – shouldn't their header information be corrected at least? (Remember, we now rely on pixel density – where annotated – in core and other processors.)
(The images in the 2 mentioned bags also contain a digital footer added to the scan – this is clearly wrong, isn't it?)
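If correcting the headers is the way to go, the rewrite could be sketched like this (assuming Pillow; `fix_dpi` is an illustrative name, not part of any OCR-D tool):

```python
from PIL import Image  # assumed dependency (Pillow)


def fix_dpi(path, out_path, actual_dpi=600):
    """Re-save an image with the correct pixel density annotated.

    Only the resolution metadata changes; note, however, that saving
    re-encodes the file (lossy for JPEG), so for in-place tag editing a
    metadata-only tool such as libtiff's tiffset would be preferable.
    """
    with Image.open(path) as img:
        img.save(out_path, dpi=(actual_dpi, actual_dpi))
```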
> (Remember, we now rely on pixel density – where annotated – in core and other processors.)

To illustrate, this is what happens during ocrd-cis-ocropy-dewarp in a sensible preprocessing pipeline:
Thus, because
- 600 actual DPI got interpreted as 72 reported DPI,
- the region was deemed too large for line segmentation in ocrd-cis-ocropy-resegment,
- the GT line segmentation (which has large overlaps) was applied unchanged,
- intruders from the neighbouring lines interfered with center line estimation,
- dewarping actually warps (deteriorates) the line images even more.
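The first step of that cascade is simple arithmetic: any threshold expressed in physical units behaves very differently once 600 DPI pixels are read as 72 DPI. A made-up illustration (the numbers are hypothetical, not ocropy's actual thresholds):

```python
def height_in_inches(height_px, reported_dpi):
    """Physical height a processor infers from pixel count and DPI."""
    return height_px / reported_dpi


line_px = 300  # a 0.5-inch text line scanned at the true 600 DPI
print(height_in_inches(line_px, 600))  # 0.5 inch: a plausible text line
print(height_in_inches(line_px, 72))   # over 4 inches: "too large" to resegment
```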
@tboenig being the GT guru should answer this.
Pragmatically, I would relax the requirements on pixel density, since we just cannot rely on image metadata for this. Unfortunately. Cf. OCR-D/spec#129 and OCR-D/core#339.
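One pragmatic relaxation could look like this sketch (all names and the fallback value are assumptions for illustration, not what core implements):

```python
DEFAULT_DPI = 300                 # assumed fallback, not an OCR-D constant
PLAUSIBLE_DPI = range(150, 2401)  # assumed plausibility window for scans


def effective_dpi(annotated_dpi):
    """Trust the annotated pixel density only if it is plausible."""
    if annotated_dpi is None or int(annotated_dpi) not in PLAUSIBLE_DPI:
        return DEFAULT_DPI  # e.g. the common bogus 72 DPI is ignored
    return int(annotated_dpi)
```

This keeps processors DPI-aware where the metadata is credible, without letting a bogus 72 DPI header derail the whole pipeline.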
Thanks @kba for addressing this quickly. This is a real problem for our workflows – for preprocessing (as can be seen above) just like segmentation and OCR (e.g. Tesseract's DPI variable).
I am a bit surprised by your stance, though. When @wrznr and I brought this up on the last developer workshop, we encouraged module projects to make their components DPI-aware/relative. Why was there no objection at the time?
However, if you want to do it this way, please do it better. I took the liberty of adding reviews on both your spec PR (for a better definition of exceptions) and core PR (for a more manageable reaction). I know it's much more work, but I believe we risk losing big time in overall achievable quality if we just let this slip through.