Show, Edit and Tell: A Framework for Editing Image Captions, CVPR 2020
Calculating Evaluation Metric Scores......
loading annotations into memory...
0:00:00.743513
creating index...
index created!
Loading and preparing results...
DONE (t=0.07s)
creating index...
index created!
tokenization...
Traceback (most recent call last):
File "/home/chenzhanghui/.pycharm_helpers/pydev/pydevd.py", line 1741, in
main()
File "/home/chenzhanghui/.pycharm_helpers/pydev/pydevd.py", line 1735, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "/home/chenzhanghui/.pycharm_helpers/pydev/pydevd.py", line 1135, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/chenzhanghui/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/chenzhanghui/code/showEditAndTell/editnet.py", line 828, in
word_map = word_map)
File "/home/chenzhanghui/code/showEditAndTell/editnet.py", line 727, in evaluate
cocoEval.evaluate()
File "coco-caption/pycocoevalcap/eval.py", line 36, in evaluate
gts = tokenizer.tokenize(gts)
File "coco-caption/pycocoevalcap/tokenizer/ptbtokenizer.py", line 54, in tokenize
stdout=subprocess.PIPE)
File "/home/chenzhanghui/anaconda3/envs/py36/lib/python3.6/subprocess.py", line 729, in init
restore_signals, start_new_session)
File "/home/chenzhanghui/anaconda3/envs/py36/lib/python3.6/subprocess.py", line 1295, in _execute_child
restore_signals, start_new_session, preexec_fn)
File "/home/chenzhanghui/.pycharm_helpers/pydev/_pydev_bundle/pydev_monkey.py", line 424, in new_fork_exec
return getattr(_posixsubprocess, original_name)(args, *other_args)
OSError: [Errno 12] Cannot allocate memory
When running editnet.py, this error occurs.
It seems to be caused by this line of code in the PTBTokenizer class:
p_tokenizer = subprocess.Popen(cmd, cwd=path_to_jar_dirname,
stdout=subprocess.PIPE)
But I don't know how to solve it.
Can you help me with this?
When I run dcnet.py, it works normally!
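For reference, [Errno 12] during Popen usually means fork() could not duplicate the parent process's address space, which can happen when the training script already holds a lot of memory; the tokenizer subprocess itself is not the problem. A minimal sketch of a wrapper that surfaces this case (the function name run_tokenizer and the suggested remedies are my own, not from the repository):

```python
import errno
import subprocess

def run_tokenizer(cmd, cwd):
    """Launch the PTB tokenizer subprocess, surfacing ENOMEM clearly.

    [Errno 12] here typically means fork() could not duplicate the
    parent's address space, not that the tokenizer itself is large.
    """
    try:
        return subprocess.Popen(cmd, cwd=cwd, stdout=subprocess.PIPE)
    except OSError as e:
        if e.errno == errno.ENOMEM:
            raise RuntimeError(
                "fork() failed with ENOMEM; try freeing model memory "
                "before evaluation, running evaluation in a separate "
                "lighter process, or enabling kernel overcommit "
                "(vm.overcommit_memory=1)"
            ) from e
        raise
```

Other common workarounds are evaluating in a fresh process that does not hold the model, or reducing batch size so less memory is resident when the tokenizer is spawned.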
I see the textual alignment plots in your paper, but how can I generate them? Thank you.
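One way such alignment plots are commonly produced is by rendering the decoder's attention weights as a heatmap over the source and target tokens. A minimal matplotlib sketch (the token lists and the random weight matrix are placeholders for illustration, not the paper's actual method or data):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Placeholder tokens and weights; a real plot would use the model's
# attention over the existing caption vs. the edited caption.
src = ["a", "man", "riding", "a", "horse"]
tgt = ["a", "person", "riding", "a", "horse"]
attn = np.random.rand(len(tgt), len(src))

fig, ax = plt.subplots()
ax.imshow(attn, cmap="viridis")
ax.set_xticks(range(len(src)))
ax.set_xticklabels(src)
ax.set_yticks(range(len(tgt)))
ax.set_yticklabels(tgt)
fig.savefig("alignment.png")
```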
Following the instructions, I downloaded the 'trainval_36' file, unzipped it, and placed it in 'bottom-up_features'.
I then ran "python bottom-up_features/tsv.py" and it raised an error: no such file or directory: '../data/train2014'. Is there anything else I need to do before running the script, besides placing the 'trainval_36' file in the 'bottom-up_features' folder?
*I'm using Google Colab
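For what it's worth, the error message suggests tsv.py expects the raw MSCOCO 2014 images under a sibling data/ directory. A sketch of the layout it appears to want (paths inferred only from the error, not confirmed by the repository):

```python
import os

# Create the directories the script seems to look for; the MSCOCO 2014
# train/val images would then be extracted into them.
for split in ("train2014", "val2014"):
    os.makedirs(os.path.join("data", split), exist_ok=True)
```

The images themselves can be downloaded from the official MSCOCO site and unzipped into these folders before rerunning the script.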
Excuse me, where can I download the existing captioning file for the MSCOCO 2014 test set? Could you please upload it? Thank you!
Can you give the paper link for this code?
Where is the Supplementary material about DCNet?
Excuse me, where can I download the existing captions to be edited? And how should they be organized into a list of dictionaries — what should each dictionary contain? Thank you!
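If it helps, such a list of dictionaries might look like the following; the field names here are assumptions for illustration, not the repository's confirmed schema:

```python
# Hypothetical structure: one dictionary per image, pairing an image id
# with the existing caption to be edited (field names are assumed).
existing_captions = [
    {"image_id": 391895, "caption": "a man riding a motorcycle on a dirt road"},
    {"image_id": 522418, "caption": "a woman cutting a cake with a knife"},
]
```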
How can I input a picture and print out its caption? Thanks!
Sorry to bother you. I want to test the performance of the meshed decoder from the harvardnlp transformer code, following the snippet you mentioned in aimagelab/meshed-memory-transformer#4,
but the memory I get has shape [batch_size, num_boxes, d_model], which does not include the num_layers dimension. For your Transformer model, is there anything else important needed to make it work?
Thanks a lot for your help!
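A common way to obtain a memory tensor with a num_layers dimension is to keep each encoder layer's output and stack them, rather than using only the final layer. A minimal PyTorch sketch (the layer type and dimensions are illustrative, not the repository's actual encoder):

```python
import torch

batch, num_boxes, d_model, num_layers = 2, 5, 32, 3
x = torch.randn(batch, num_boxes, d_model)

# Collect the output of every encoder layer instead of only the last one.
layers = torch.nn.ModuleList(
    torch.nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
    for _ in range(num_layers)
)
outs = []
for layer in layers:
    x = layer(x)
    outs.append(x)

# Stack along a new layer axis: [batch, num_layers, num_boxes, d_model]
memory = torch.stack(outs, dim=1)
```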
Hello, thanks for your excellent work on Show, Edit and Tell. It has really helped me a lot.
I'm choosing a COCO evaluation tool for my project, and you chose https://github.com/mtanti/coco-caption as your eval tool. But I also saw your issue on https://github.com/salaniz/pycocoevalcap.
I'd like to know why you didn't choose salaniz/pycocoevalcap. Is there anything wrong with it?
Thanks a lot.
Is train_style_transfer.py the code I should use?
And how do I apply the trained model to my own dataset?
Thank you for the open-source code.
Can I use the pretrained models provided at https://drive.google.com/drive/folders/1kPoRVsUuj57Scon-SbUJXNl555ee6sjo to generate captions for new data?
Hello! Thanks for your work. Your code is quite clear and easy to understand. Thus, I'm doing some experiments based on it.
However, I ran into some problems while training with CIDEr optimization. When I use the self-critical strategy to train my pre-trained model, the CIDEr score drops by about 5 points after the first epoch. It then takes quite a few epochs for the model to recover the score it achieved with the XE loss, and only after that does it begin to outperform the pre-trained one.
I checked my code and found that I didn't save the optimizer's state dict while training with the XE loss. So when I start training with the self-critical strategy, I just initialize a new optimizer with a learning rate of about 2e-5 or 5e-5. Is this the reason for the problems described above?
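Restarting Adam without its moment estimates can indeed destabilize the first self-critical epochs; one common mitigation is to checkpoint the optimizer state during XE training and restore it before lowering the learning rate. A sketch (the tiny model and the checkpoint file name are placeholders):

```python
import torch

# Placeholder model standing in for the captioning network.
model = torch.nn.Linear(4, 4)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)

# During XE training: save model AND optimizer state together.
torch.save({"model": model.state_dict(),
            "optimizer": optimizer.state_dict()}, "ckpt.pth")

# Before self-critical training: restore both, then lower the LR so
# Adam keeps its moment estimates instead of starting cold.
ckpt = torch.load("ckpt.pth")
model.load_state_dict(ckpt["model"])
optimizer.load_state_dict(ckpt["optimizer"])
for group in optimizer.param_groups:
    group["lr"] = 5e-5
```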
How can I download the existing captions to be edited? Or how can I download the AoANet caption results? Thanks!