
S3D in PyTorch

The S3D network is described in the ECCV 2018 paper Rethinking Spatiotemporal Feature Learning: Speed-Accuracy Trade-offs in Video Classification. Xie et al. show that replacing standard 3D convolutions with spatially and temporally separable 3D convolutions 1) reduces the total number of parameters, 2) is more computationally efficient, and 3) even improves accuracy.
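To illustrate the factorization, here is a minimal sketch of a spatio-temporally separable convolution block in PyTorch. The class name, the BatchNorm/ReLU placement, and the default arguments are assumptions made for illustration, not necessarily the exact block used in this repository.

```python
import torch
import torch.nn as nn

class SepConv3d(nn.Module):
    """A k x k x k 3D convolution factorized into a spatial (1, k, k)
    convolution followed by a temporal (k, 1, 1) convolution."""

    def __init__(self, in_planes, out_planes, k=3, stride=1, padding=1):
        super().__init__()
        # Spatial convolution: acts only on height and width.
        self.conv_s = nn.Conv3d(in_planes, out_planes, kernel_size=(1, k, k),
                                stride=(1, stride, stride),
                                padding=(0, padding, padding), bias=False)
        self.bn_s = nn.BatchNorm3d(out_planes)
        # Temporal convolution: acts only on the time dimension.
        self.conv_t = nn.Conv3d(out_planes, out_planes, kernel_size=(k, 1, 1),
                                stride=(stride, 1, 1),
                                padding=(padding, 0, 0), bias=False)
        self.bn_t = nn.BatchNorm3d(out_planes)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (batch, channels, frames, height, width)
        x = self.relu(self.bn_s(self.conv_s(x)))
        x = self.relu(self.bn_t(self.conv_t(x)))
        return x
```

For a 3x3x3 kernel this replaces 27·C_in·C_out weights with 9·C_in·C_out + 3·C_out·C_out, which is where the parameter and compute savings come from.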

Overview

In this repository, we release a pretrained S3D network in PyTorch. We pretrained S3D on the Kinetics-400 dataset; it achieves 72.08% top-1 accuracy (top-5: 90.35%) on the validation set.

| Network | Top-1 (%) | Top-5 (%) |
| --- | --- | --- |
| I3D | 71.1 | 89.3 |
| S3D (reported by the authors) | 72.2 | 90.6 |
| S3D (our implementation) | 72.1 | 90.4 |

Weight file & Sample code

First, clone this repository and download this weight file. Then, just run the code using

$ python main.py

This will output the top-5 Kinetics classes predicted by the model, together with their probabilities:

| Top-5 class | Probability |
| --- | --- |
| riding a bike | 0.9937429 |
| biking through snow | 0.0041600233 |
| riding mountain bike | 0.0010456557 |
| riding unicycle | 0.00088055385 |
| motorcycling | 0.00014455811 |
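For reference, main.py roughly follows the steps sketched below. The module path, class name, weight file name, clip shape, and pre-processing here are assumptions made for illustration; consult main.py in this repository for the exact script.

```python
import torch
from model import S3D  # assumed module/class name from this repository

# Build the model with 400 output classes (Kinetics-400).
model = S3D(400)

# Load the released weights (the file name here is an assumption).
state_dict = torch.load('S3D_kinetics400.pt', map_location='cpu')
model.load_state_dict(state_dict, strict=False)
model.eval()

# A video clip as a float tensor of shape (batch, channels, frames, height, width),
# with pixel values scaled to [0, 1]. A random clip is used here as a placeholder.
clip = torch.rand(1, 3, 32, 224, 224)

with torch.no_grad():
    logits = model(clip)
    probs = torch.softmax(logits, dim=1)

top5 = torch.topk(probs, k=5, dim=1)
for idx, p in zip(top5.indices[0].tolist(), top5.values[0].tolist()):
    print(idx, p)  # map idx to a class name with the Kinetics label file
```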

Notes

We implemented and released this repository together with our TASED-Net project (ICCV 2019). If you find it useful, please consider citing our work:

@inproceedings{min2019tased,
  title={TASED-Net: Temporally-Aggregating Spatial Encoder-Decoder Network for Video Saliency Detection},
  author={Min, Kyle and Corso, Jason J},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  pages={2394--2403},
  year={2019}
}


s3d's Issues

Query for Pre-training Details

Hi Kyle Min, thank you for releasing the code and pre-trained model.

Could you kindly provide the training details like below for reproducing the accuracy on Kinetics? Thank you.

  1. Image size: is it resized to 224*384 (height*width)?
  2. Number of video frames: is it 32, 64, or all frames of the video?
  3. Normalization: do you normalize with the ImageNet mean ([0.485, 0.456, 0.406]) and std ([0.229, 0.224, 0.225])?

Right now, without normalization and following your code (i.e., the pre-processing in main.py), I get around 55% accuracy on Kinetics-400 with an image size of 224*384 and 32 frames.
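A minimal sketch of the pre-processing described above (32 frames, resized to 224*384, values scaled to [0, 1]); the tensor layout, interpolation mode, and frame-sampling strategy are assumptions, and whether additional mean/std normalization should be applied is exactly what this issue asks.

```python
import torch
import torch.nn.functional as F

def preprocess(frames):
    """frames: uint8 tensor of shape (T, H, W, C) holding the decoded video,
    with T >= 32. Returns a float clip of shape (1, C, 32, 224, 384)."""
    clip = frames[:32].float() / 255.0        # take the first 32 frames, scale to [0, 1]
    clip = clip.permute(0, 3, 1, 2)           # (T, C, H, W)
    clip = F.interpolate(clip, size=(224, 384), mode='bilinear', align_corners=False)
    clip = clip.permute(1, 0, 2, 3)           # (C, T, 224, 384)
    return clip.unsqueeze(0)                  # (1, C, T, 224, 384)
```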

Thank you in advance; I appreciate your excellent work.

Recommended Dataclass

Thanks for sharing this work. I've found surprisingly different ways of loading the Kinetics dataset. Is there a Dataset/DataLoader implementation you could recommend? I'm trying to use your weights for transfer learning and would prefer to stay as close as possible to your training regimen.

Mean and std for normalization?

Thanks for your implementation!
Would you mind sharing the mean and std used for video normalization during training? In main.py you use 0.5; is it the same in training?

ImageNet Weights

Thanks for providing the PyTorch version of this model! To get the 72.2% result on Kinetics, did you start from ImageNet-pretrained weights? Would you mind sharing those weights (or perhaps a conversion script for the i3d-rgb weights, to map them to your model)?

At the moment it seems I would need to write a custom script that loads the (1,1,1) kernel weights as-is and then splits the (3,3,3) kernel weights using ...
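One hypothetical way to perform such a split is sketched below. Summing the dense kernel over its temporal axis and initializing the temporal kernel as a channel-wise identity are heuristics assumed here for illustration; this is not a conversion script provided by the repository.

```python
import torch

def split_3x3x3(weight):
    """Heuristically split a dense (C_out, C_in, 3, 3, 3) kernel into a
    spatial (1, 3, 3) kernel and a temporal (3, 1, 1) kernel."""
    c_out = weight.shape[0]
    # Spatial kernel: sum over the temporal axis, so the response to a
    # temporally constant input is preserved.
    w_spatial = weight.sum(dim=2, keepdim=True)          # (C_out, C_in, 1, 3, 3)
    # Temporal kernel: identity over channels at the centre time step.
    w_temporal = torch.zeros(c_out, c_out, 3, 1, 1)
    w_temporal[:, :, 1, 0, 0] = torch.eye(c_out)
    return w_spatial, w_temporal
```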

The training setup

For the weights linked in this repo:
Which GPUs (and how many) was the model trained on?
And how long did the training take?
