Comments (18)
Thanks for your interest in our work; my email is [email protected]. One thing to note is that we only work on public datasets (or datasets that can be released later), so that the public can benefit from our research.
Alternatively, if you want to keep the dataset private, you could consider cloud computing services such as Amazon EC2, Google GCE, or Microsoft Azure, which provide GPU instances paid by the hour.
from im2markup.
Thanks @da03 for the quick reply. I am sending you an email for further discussion.
@da03 What machine configuration (RAM, disk, GPU) is required for 20k training images? How many hours will it take if we train on CPU, and what is the difference when using a GPU?
And how many training examples are required to get a decent result, like the results you have shown on your website?
Or can you tell me the minimum configuration I should use to train the model on roughly 20k images? I am asking so that I can pick that configuration directly on AWS; otherwise I will end up choosing one that is too small or too large and wasting money (because of the hourly charge).
Regarding hardware, I think it's almost impossible to train on CPU; it would probably take forever. On GPU, training would take less than a day even with 100k images. On AWS, any GPU configuration is probably fine, since your dataset of 20k images is small.
Regarding dataset size, I think 20k is a bit small. Combining it with im2latex-100k might give some reasonable results, but ideally you would need around 100k real images to train. Besides, are your images of roughly the same font size? If not, standard image normalization techniques (such as denoising and resizing to the same font size) might produce better results.
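The resizing part of that normalization can be sketched with Pillow (a sketch under assumptions: it presumes Pillow is installed, and the 64 px target height is an arbitrary illustrative choice, not a value from this repo):

```python
from PIL import Image  # assumes Pillow is installed

def normalize_height(img, target_height=64):
    """Resize an image so every formula ends up at roughly the same
    glyph height; 64 px is an assumed target, not a repo default."""
    w, h = img.size
    new_w = max(1, round(w * target_height / h))
    return img.resize((new_w, target_height), Image.LANCZOS)

# Stand-in for a cropped formula image (grayscale, 400 x 128).
img = Image.new("L", (400, 128), color=255)
print(normalize_height(img).size)  # (200, 64)
```

Applying the same target height to every image keeps glyph sizes consistent, which is what matters for the CNN encoder; the exact value just trades resolution against memory.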
BTW, if you get a GPU instance, I would recommend using this Dockerfile to save you the trouble of installing Lua Torch: https://github.com/OpenNMT/OpenNMT/blob/master/Dockerfile
Thanks a lot @da03 for helping me out
@da03 One last question: I don't have the LaTeX, I have the OCR of the images, like this: (5+2sqrt3)/(7+4sqrt3) = a-b sqrt3. Will that work? I have 150k such images (and even more). Will that work, or do I need LaTeX only?
Cool, that will work if you do a proper tokenization: the label should be something like "( 5 + 2 sqrt 3 ) / ( 7 + 4 sqrt 3 ) = a - b sqrt 3" (separated by blanks). The algorithm should work with whatever output format.
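A minimal tokenizer along those lines might look like this (a sketch, not the repo's actual preprocessing script: the regex keeps letter runs such as `sqrt` as single tokens and splits everything else into one symbol per token):

```python
import re

def tokenize_label(label):
    """Split an OCR label into space-separated tokens: multi-letter
    names (e.g. 'sqrt') stay whole, every other non-space symbol
    and every digit becomes its own token."""
    tokens = re.findall(r"[A-Za-z]+|\d|\S", label)
    return " ".join(tokens)

print(tokenize_label("(5+2sqrt3)/(7+4sqrt3) = a-b sqrt3"))
# -> ( 5 + 2 sqrt 3 ) / ( 7 + 4 sqrt 3 ) = a - b sqrt 3
```

Splitting digits individually keeps the target vocabulary tiny, which is usually what you want for a seq2seq decoder; if your labels contain multi-digit numbers that should stay whole, adjust the regex accordingly.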
OK, thanks @da03, you are helping a lot.
Hello @da03,
I have one quick question: how much disk space will 150k training examples require? I allocated 250 GB, but it filled up while creating demo.train.1.pt and similar shards (during onmt_preprocess), using the default parameters given in the doc.
That's surprising. What are the sizes of those images?
(187, 720, 3)
(2448, 3264, 3)
(2209, 1752, 3)
(1275, 4160, 3)
(3456, 4608, 3)
(1821, 4657, 3)
(226, 1080, 3)
(388, 2458, 3)
(3264, 2448, 3)
(625, 4100, 3)
(379, 2640, 3)
(1011, 4110, 3)
Like this, @da03.
How much disk space do I need, @da03? Any rough idea?
I am using OpenNMT-py to do this. Should I use the main repo instead, which uses Lua?
@vyaslkv Have you had any progress on your data?
I agree that working towards a public model is important.
@vyaslkv Sorry for the delay. The images you are using seem to be huge: for example, an image at 3264 x 2448 resolution has ~7M pixels, and if we use a dataset containing 10k such images (we need at least thousands of training instances to learn a reasonable model), it would take 280 GB (7M x 10k x 4 bytes). The dataset used in this repo, im2latex-450k, is much smaller, since the images are much smaller (they are mostly single math formulas), and we've downsampled them during preprocessing to make them even smaller.
I think you need to crop your images to contain ONLY the useful parts, cutting off any padding, and downsample them as much as you can (while humans can still identify the formulas at the reduced resolution).
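That arithmetic can be checked with a small back-of-the-envelope helper (a sketch under an assumption: it treats each image as an uncompressed single-channel float32 tensor, 4 bytes per pixel, which matches the 7M x 10k x 4 estimate above but is not necessarily the exact on-disk format of the .pt shards):

```python
def dataset_size_gb(height, width, n_images, channels=1, bytes_per_value=4):
    """Rough size of an uncompressed float32 image dataset, in GB."""
    return height * width * channels * bytes_per_value * n_images / 1e9

# One 3264 x 2448 grayscale image as float32 is ~32 MB,
# so 10k of them need ~320 GB before any cropping or downsampling.
print(dataset_size_gb(3264, 2448, 10_000))   # ~319.6

# Downsampling by 4x in each dimension cuts storage by 16x.
print(dataset_size_gb(3264 // 4, 2448 // 4, 10_000))   # ~20.0
```

The quadratic payoff is the point: halving each dimension quarters the footprint, which is why aggressive cropping and downsampling matter far more than adding disk.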