jayleicn / tvretrieval
[ECCV 2020] PyTorch code for XML on TVRetrieval dataset - TVR: A Large-Scale Dataset for Video-Subtitle Moment Retrieval
Home Page: https://tvr.cs.unc.edu
License: MIT License
Hi,
I've been trying to finetune the language model, but I'm getting a segmentation fault while doing so.
I get the same issue when I run lm_finetuning_on_single_sentences.py directly, and it happens even when I comment out the invocation of main(), like this:
if __name__ == "__main__":
# main()
The download link for the video features in the TVR dataset (https://drive.google.com/uc?id=1j4mVkXjKCgafW3ReNjZ2Rk6CKx0Fk_n5) is unavailable because recent downloads have exceeded the Google Drive quota. Is there any other way to get the extracted video features? Thanks a lot!
Hi there!
Thanks for sharing your great work.
It seems you conducted experiments on the DiDeMo dataset without using subtitle information, to check the performance of your method.
I have a couple of questions to ask you about it.
clip length of the input features (in this case ResNet)
In the main experiments in your paper, TVR features are divided and fed into the model with a clip length of 1.5 sec.
Is it also the case with the DiDeMo dataset?
Or did you treat the feature in a different way from the TVR dataset?
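For context on the clip-length question: turning frame-level features into fixed-length clips usually amounts to mean-pooling a window of frames per clip. The sketch below is only an illustration of that idea, not the repo's actual preprocessing; the function name and the `feat_fps` parameter are hypothetical:

```python
import numpy as np

def pool_clips(frame_feats, feat_fps, clip_len=1.5):
    """Mean-pool frame-level features into fixed-length clips.

    frame_feats: (n_frames, d) array of per-frame features.
    feat_fps: feature frames per second of video.
    clip_len: target clip length in seconds (1.5 s in the TVR paper).
    """
    frames_per_clip = max(1, int(round(feat_fps * clip_len)))
    n_clips = int(np.ceil(len(frame_feats) / frames_per_clip))
    return np.stack([
        frame_feats[i * frames_per_clip:(i + 1) * frames_per_clip].mean(axis=0)
        for i in range(n_clips)
    ])
```

Under these assumptions, a 30-second video with features extracted at 2 fps (60 frames) would yield 20 clips of 1.5 s each.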
how to deal with the timestamp information during both training and inference (for training, also regarding TEF)
In the DiDeMo dataset, the moment timestamp information is given in the form of segment indices (0-5).
Did you translate it into the form of seconds, i.e., (0 sec - 30 sec)?
Or did you use the index as the timestamp information as it is?
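On the index-to-seconds question: DiDeMo annotates moments as 5-second segment indices over 30-second videos, so if the indices are converted at all, the mapping is straightforward. Here is a minimal sketch of that conversion; the function name is hypothetical and the 5-second segment length is an assumption taken from the DiDeMo annotation scheme, not from this repo:

```python
def didemo_index_to_seconds(start_idx, end_idx, seg_len=5.0):
    """Convert DiDeMo segment indices (0-5, end-inclusive) to a
    (start, end) window in seconds. Each segment covers seg_len
    seconds, so indices (0, 5) span the whole 30-second video."""
    return start_idx * seg_len, (end_idx + 1) * seg_len
```

For example, a moment annotated as segments (2, 2) would correspond to the 10 s - 15 s window.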
If there is any information I missed about the DiDeMo dataset, please also let me know.
Thank you in advance!
Hi, in the data collection part, what automatic tool did you use to check the quality of the annotations in the automatic check step?
Hi,
When I run "bash baselines/crossmodal_moment_localization/scripts/inference.sh MODEL_DIR_NAME val" everything works as expected.
However when I run "bash baselines/crossmodal_moment_localization/scripts/inference.sh MODEL_DIR_NAME test_public" I get the following error:
2020-04-09 17:27:16.745:INFO:__main__ - CUDA enabled.
2020-04-09 17:27:16.756:INFO:__main__ - Starting inference...
2020-04-09 17:27:16.757:INFO:__main__ - Computing scores
Computing query2video scores: 100%|█████████████████████████████████████████████████| 6/6 [00:02<00:00, 2.23it/s]
2020-04-09 17:27:22.153:INFO:__main__ - Inference with full-script.
Traceback (most recent call last):
  File "baselines/crossmodal_moment_localization/inference.py", line 584, in <module>
    start_inference()
  File "baselines/crossmodal_moment_localization/inference.py", line 578, in start_inference
    tasks=opt.tasks, max_after_nms=100)
  File "baselines/crossmodal_moment_localization/inference.py", line 486, in eval_epoch
    eval_submission_raw = get_eval_res(model, eval_dataset, opt, tasks, max_after_nms=max_after_nms)
  File "baselines/crossmodal_moment_localization/inference.py", line 456, in get_eval_res
    tasks=tasks)
  File "baselines/crossmodal_moment_localization/inference.py", line 277, in compute_query2ctx_info
    eval_dataset.load_gt_vid_name_for_query(is_svmr)
  File "/home/kevin/TVRetrieval/baselines/crossmodal_moment_localization/start_end_dataset.py", line 241, in load_gt_vid_name_for_query
    assert "vid_name" in self.query_data[0]
AssertionError
I notice that "data/tvr_val_release.jsonl" has a different format than "data/tvr_test_public_release.jsonl", so I suspect this is the culprit and needs to be handled differently in the inference code.
P.S. kudos for all the code and clear documentation provided in this repository.
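One way to confirm the suspected format difference is to compare the key sets of the first record in each JSONL file. This is only a diagnostic sketch (the helper name is made up, and the commented paths are taken from the issue above); if the test-public file really omits ground-truth fields such as "vid_name", that would explain the assertion failure:

```python
import json

def jsonl_first_keys(path):
    """Return the set of keys of the first record in a .jsonl file."""
    with open(path, encoding="utf-8") as f:
        return set(json.loads(next(f)).keys())

# Hypothetical usage, comparing the two release files:
# val_keys = jsonl_first_keys("data/tvr_val_release.jsonl")
# test_keys = jsonl_first_keys("data/tvr_test_public_release.jsonl")
# print(val_keys - test_keys)  # fields present only in the val file
```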
Where can we download the raw videos of TVR dataset.
Hi there.
I was trying to use multiple GPUs for training, so I put the GPU ids in '--device_ids' in baselines/crossmodal_moment_localization/config.py.
I changed the code as below:
if opt.train_span_start_epoch != -1 and epoch_i >= opt.train_span_start_epoch:
    model.set_train_st_ed(opt.lw_st_ed)  ->  model.module.set_train_st_ed(opt.lw_st_ed)
Then I added the following code at the top of my script:
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3,4,5"
But it is still not working. What should I do?
I have used
Thank you for your time.
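A common pitfall behind the `.module` change above: `nn.DataParallel` only parallelizes `forward()`, so custom methods defined on the model (like `set_train_st_ed`) live on the wrapped `.module`, not on the wrapper. A minimal sketch with a toy stand-in module (the class and values here are hypothetical, not the repo's real model):

```python
import torch
import torch.nn as nn

class ToyXML(nn.Module):
    """Stand-in for the real model; only the custom setter matters here."""
    def __init__(self):
        super().__init__()
        self.train_st_ed = False

    def set_train_st_ed(self, lw_st_ed):
        self.train_st_ed = True
        self.lw_st_ed = lw_st_ed

    def forward(self, x):
        return x

model = ToyXML()
if torch.cuda.device_count() > 1:
    # forward() is parallelized; custom methods are not forwarded
    model = nn.DataParallel(model)

# Unwrap before calling methods that nn.Module itself does not define.
core = model.module if isinstance(model, nn.DataParallel) else model
core.set_train_st_ed(0.01)
```

The `isinstance` check keeps the same call working in both the single-GPU and multi-GPU code paths.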
Is it possible to download the original video clips to extract the audio?
When I follow the example to submit my zip of JSON results to CodaLab, it gives me an error:
Traceback (most recent call last):
  File "/worker/worker.py", line 323, in run
    bundles = get_bundle(root_dir, 'run', bundle_url)
  File "/worker/worker.py", line 180, in get_bundle
    metadata[k] = get_bundle(bundle_path, k, v)
  File "/worker/worker.py", line 180, in get_bundle
    metadata[k] = get_bundle(bundle_path, k, v)
  File "/worker/worker.py", line 171, in get_bundle
    metadata = yaml.load(mf)
  File "/usr/local/lib/python2.7/dist-packages/yaml/__init__.py", line 69, in load
    loader = Loader(stream)
  File "/usr/local/lib/python2.7/dist-packages/yaml/loader.py", line 34, in __init__
    Reader.__init__(self, stream)
  File "/usr/local/lib/python2.7/dist-packages/yaml/reader.py", line 85, in __init__
    self.determine_encoding()
  File "/usr/local/lib/python2.7/dist-packages/yaml/reader.py", line 135, in determine_encoding
    self.update(1)
  File "/usr/local/lib/python2.7/dist-packages/yaml/reader.py", line 165, in update
    exc.encoding, exc.reason)
ReaderError: 'utf8' codec can't decode byte #xaf: invalid start byte
  in "/tmp/codalab/tmpOjOaDx/run/input/res/metadata", position 11
How can I fix the error?
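For context, the ReaderError means some file inside the submission bundle is not valid UTF-8 (often OS metadata such as __MACOSX entries that Finder adds when zipping a folder, or a results file written in a non-UTF-8 encoding). One way to sidestep both causes is to write the JSON as ASCII-safe UTF-8 and build the zip programmatically with the file at the archive root. This is only a sketch; the filenames and the shape of the `results` dict are placeholders, not the exact layout the TVR scorer expects:

```python
import json
import zipfile

# Placeholder predictions dict -- replace with your real results.
results = {"VCMR": [], "SVMR": [], "VR": []}

# ensure_ascii=True keeps the file pure ASCII, a safe subset of UTF-8.
with open("submission.json", "w", encoding="utf-8") as f:
    json.dump(results, f, ensure_ascii=True)

# Put the file at the archive root: no parent folder, no OS metadata files.
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("submission.json", arcname="submission.json")
```

Building the archive this way guarantees it contains exactly one cleanly encoded file, so the worker's YAML/JSON readers have nothing undecodable to choke on.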