
MetaTrackers

0. Prerequisites

PyTorch >= v0.2.0

0. Dataset download

Download the datasets (ILSVRC VID, OTB, VOT) and link them into the $(meta_trackers_root)/dataset/ directory, as laid out below (a small sketch for creating the links follows the list).

$(meta_trackers_root)/dataset/VID
$(meta_trackers_root)/dataset/OTB
$(meta_trackers_root)/dataset/vot2013
$(meta_trackers_root)/dataset/vot2015
$(meta_trackers_root)/dataset/vot2016
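A minimal sketch for creating these links with Python, assuming the datasets have already been downloaded to hypothetical locations under /data (adjust the paths to your environment):

import os

# Hypothetical extraction locations -> link names under $(meta_trackers_root)/dataset/
datasets = {
    '/data/ILSVRC2015': 'dataset/VID',
    '/data/OTB': 'dataset/OTB',
    '/data/vot2013': 'dataset/vot2013',
    '/data/vot2015': 'dataset/vot2015',
    '/data/vot2016': 'dataset/vot2016',
}

for src, dst in datasets.items():
    if not os.path.exists(dst):
        os.symlink(src, dst)  # create the symbolic link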

0. Prepare dataset meta files

I already prepared all necessary meta files for the ILSVRC VID, OTB, and VOT datasets. You can either use them directly or regenerate them via the scripts in the $(meta_trackers_root)/dataset/ directory.

$(meta_trackers_root)/dataset/ilsvrc_train.json # meta file for loading the ILSVRC VID dataset for meta-training
$(meta_trackers_root)/dataset/vot-otb.pkl # meta file for loading the VOT dataset for meta-training (OTB experiments)
$(meta_trackers_root)/dataset/otb-vot.pkl # meta file for loading the OTB dataset for meta-training (VOT experiments)
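If you want to inspect these meta files, a minimal sketch (their internal structure is defined by the generation scripts, so no particular keys are assumed here):

import json
import pickle

# The ILSVRC VID meta file is JSON.
with open('dataset/ilsvrc_train.json', 'r') as f:
    ilsvrc_meta = json.load(f)

# The VOT/OTB meta files are pickled Python objects.
with open('dataset/vot-otb.pkl', 'rb') as f:
    vot_otb_meta = pickle.load(f)

print(type(ilsvrc_meta), type(vot_otb_meta))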

You also need to download the ImageNet-pretrained models (our base feature extractors) into $(meta_trackers_root)/meta_crest/models/ and $(meta_trackers_root)/meta_sdnet/models/. We used the same networks as the original trackers: imagenet-vgg-m.mat for meta_sdnet and imagenet-vgg-verydeep-16.mat for meta_crest.
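As a quick sanity check that a downloaded .mat file is readable (the repository's own loading code handles the actual conversion into PyTorch layers), a hedged sketch using scipy:

import scipy.io

# MatConvNet model files are regular MATLAB files; print the top-level entries.
mat = scipy.io.loadmat('meta_sdnet/models/imagenet-vgg-m.mat')
print(mat.keys())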

1. Meta-Training

You can skip this step, download the pretrained models (see step 2 below), and use them to test the trackers. If you want to meta-train the MetaCREST tracker:

$(meta_trackers_root)/meta_crest/meta_pretrain$> python train_meta_init.py -e OTB # for OTB experiments, for VOT use -e VOT

To meta-train the MetaSDNet tracker:

$(meta_trackers_root)/meta_sdnet/meta_pretrain$> python train_meta_init.py -e OTB # for OTB experiments, for VOT use -e VOT

2. Downloading pretrained models

We provide pretrained models for both meta-trackers for your convenience. You can download them from the following links and place them in the corresponding models directories (a small loading sketch follows the list).

$(meta_trackers_root)/meta_sdnet/models/meta_init_vot_ilsvrc.pth (~35M)

$(meta_trackers_root)/meta_sdnet/models/meta_init_otb_ilsvrc.pth (~35M)

$(meta_trackers_root)/meta_crest/models/meta_init_vot_ilsvrc.pth (~59K)

$(meta_trackers_root)/meta_crest/models/meta_init_otb_ilsvrc.pth (~59K)
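A minimal sketch for verifying a downloaded meta-initialization file; it is a standard PyTorch serialized object, and exactly what it contains is determined by the training scripts:

import torch

# Load the meta-learned initialization for the OTB experiments.
meta_init = torch.load('meta_crest/models/meta_init_otb_ilsvrc.pth')

# If it is a dict of tensors, list the parameter names and shapes.
if isinstance(meta_init, dict):
    for name, value in meta_init.items():
        print(name, getattr(value, 'shape', type(value)))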

3. Testing MetaTrackers

$(meta_trackers_root)/meta_crest/meta_tracking$>python run_tracker.py # meta_crest tracker for OTB experiments
$(meta_trackers_root)/meta_sdnet/meta_tracking$>python run_tracker.py # meta_sdnet tracker for OTB experiments

To run VOT2016 experiments, I provide the following VOT integration files. You can run them via the VOT2016 toolkit; please refer to the VOT homepage for details.

$(meta_trackers_root)/meta_crest/meta_tracking/run_tracker_vot.py
$(meta_trackers_root)/meta_sdnet/meta_tracking/run_tracker_vot.py

4. Evaluations

If you used the pretrained models, you should get the same results reported in the paper (or small variations due to randomness in the trackers). If you meta-trained the models yourself, you should also get similar results.

$(meta_trackers_root)/meta_crest$> python eval_otb.py 
$(meta_trackers_root)/meta_sdnet$> python eval_otb.py
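eval_otb.py computes the standard OTB metrics. For reference, a minimal sketch of the OTB success score (mean success rate over overlap thresholds, i.e. the area under the success plot); this is background only, not the repository's exact implementation:

import numpy as np

def overlap_ratio(pred, gt):
    # Intersection-over-union of boxes given as (x, y, w, h).
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    left = np.maximum(pred[:, 0], gt[:, 0])
    top = np.maximum(pred[:, 1], gt[:, 1])
    right = np.minimum(pred[:, 0] + pred[:, 2], gt[:, 0] + gt[:, 2])
    bottom = np.minimum(pred[:, 1] + pred[:, 3], gt[:, 1] + gt[:, 3])
    inter = np.maximum(0, right - left) * np.maximum(0, bottom - top)
    union = pred[:, 2] * pred[:, 3] + gt[:, 2] * gt[:, 3] - inter
    return inter / union

def success_score(pred_boxes, gt_boxes):
    # Fraction of frames whose IoU exceeds each threshold, averaged over thresholds.
    iou = overlap_ratio(pred_boxes, gt_boxes)
    thresholds = np.arange(0, 1.05, 0.05)
    return np.mean([np.mean(iou > t) for t in thresholds])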

Similarly, please refer to the VOT homepage for VOT evaluations. I also provide all raw results used in the paper for both OTB and VOT experiments (meta_crest_result, meta_sdnet_result).

Acknowledgments

Many parts of this code are adapted from other related works (pytorch-maml, py-MDNet, CREST).


meta_trackers's Issues

The doubt about random results

Hi @silverbottlep, thanks for your excellent work. I just ran your code (meta_sdnet/run_tracker.py) and noticed that you fix the seeds in the experiment, e.g. np.random.seed(1), torch.manual_seed(2), and torch.cuda.manual_seed(3). But I still find that the results vary between runs. Could you explain this?
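Not an authoritative answer, but a common source of such variation is cuDNN's non-deterministic convolution kernels, which fixed seeds do not control. On recent PyTorch versions (the flags below did not exist in 0.2.0) one can additionally request deterministic kernels; a hedged sketch:

import random
import numpy as np
import torch

def set_deterministic(seed=1):
    # Seed every RNG the tracker might touch.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Ask cuDNN for deterministic kernels and disable autotuning
    # (available in newer PyTorch releases, not in 0.2.0).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False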

inconsistent result with python 2.7

Someone tried meta_crest with the pretrained model under Python 2.7 and reported that it gave different results. After switching to Python 3.5, they could reproduce the results. At this point I have no explanation; hopefully I will have a chance to look at it later. FYI, my environment was Python 3.6, PyTorch 0.2.0+3f6fccd.

Change the CNN Model

Dear @silverbottlep,
Thank you for your fantastic work.
I want to replace the base CNN model in your code with another one (e.g., ResNet-18). Do you have any idea how I can do that?
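Not an official answer, but a minimal sketch of how a torchvision ResNet-18 could be exposed as a convolutional feature extractor; adapting the rest of the tracker (layer sizes, per-layer learning rates, and the meta-training code) would require changes beyond this:

import torch
import torch.nn as nn
import torchvision.models as models

class ResNet18Features(nn.Module):
    # ImageNet-pretrained ResNet-18 truncated to a convolutional feature map.
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(pretrained=True)
        # Keep everything up to and including layer3; drop layer4, avgpool, fc.
        self.features = nn.Sequential(*list(backbone.children())[:-3])

    def forward(self, x):
        return self.features(x)

# Example: a 107x107 crop (MDNet's input size) -> feature map of shape (1, 256, 7, 7)
feats = ResNet18Features()(torch.randn(1, 3, 107, 107))
print(feats.shape)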

Simple example

Can you add a simple example that works on a custom sequence/video with a user-provided initialization?

My interpretation of this project?

As far as MetaSDNet is concerned, I think meta-learning plays the role of learning good initial parameters so that the network can adapt to changes in future frames. In addition, because meta-learned parameters are used, only one iteration is required in the first frame and fewer samples are needed, which brings a speed advantage (MDNet iterates 30 times on the first frame with 500 positive and 5000 negative samples). For the tracking of subsequent frames, the same settings as MDNet are used. The main purpose of the paper is to use meta-learning to obtain good initialization parameters. Is my understanding correct?
Thank you for your contribution!
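For reference, the idea described above (meta-learning an initialization that adapts well after very few gradient steps on the first frame) follows the MAML-style inner/outer loop. A minimal, generic sketch of one meta-update on a single task, not the repository's actual training code:

import torch

def maml_step(meta_params, loss_fn, support_batch, query_batch, inner_lr=1e-2):
    # meta_params: list of tensors with requires_grad=True (the meta-initialization)
    # loss_fn(params, batch) -> scalar loss for a model built from `params`
    # Inner loop: one gradient step on the support set (the first frame).
    support_loss = loss_fn(meta_params, support_batch)
    grads = torch.autograd.grad(support_loss, meta_params, create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(meta_params, grads)]

    # Outer loop: evaluate the adapted parameters on the query set (a future frame)
    # and backpropagate through the inner step into the meta-initialization.
    query_loss = loss_fn(adapted, query_batch)
    query_loss.backward()
    return query_loss.item()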

KeyError: 'conv1.weight'

When I run run_tracker in meta_crest, it fails with KeyError: 'conv1.weight'. I am confused; can you tell me how to solve it?
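Not an official fix, but a hedged way to narrow such an error down is to check which parameter names the downloaded checkpoint actually contains and compare them with what the tracker expects:

import torch

ckpt = torch.load('meta_crest/models/meta_init_otb_ilsvrc.pth')
print(sorted(ckpt.keys()) if isinstance(ckpt, dict) else type(ckpt))

# A KeyError such as 'conv1.weight' means the code looked up a parameter name
# that is missing from the loaded object, e.g. because a wrong or incompatible
# file was placed in the models directory (see also the Python 2.7 issue above).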
