
The Group Loss for Deep Metric Learning

Official PyTorch implementation of the paper The Group Loss for Deep Metric Learning, published at the European Conference on Computer Vision (ECCV) 2020.

Installation

To use our code, first clone the repository:

git clone https://github.com/dvl-tum/The_Group_Loss_for_Deep_Metric_Learning.git

To install the dependencies:

pip install -r requirements.txt

Datasets

The code assumes that the CUB-200-2011 dataset is given in the format:

CUB_200_2011/images/001
CUB_200_2011/images/002
CUB_200_2011/images/003
...
CUB_200_2011/images/200

The code assumes that the CARS-196 dataset is given in the format:

CARS/images/001
CARS/images/002
CARS/images/003
...
CARS/images/198

The code assumes that the Stanford Online Products dataset is given in the format:

Stanford/images/00001
Stanford/images/00002
Stanford/images/00003
...
Stanford/images/22634

where 001, 002, ..., N are the IDs of the folders for each class in the dataset.
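
If you want to sanity-check a dataset folder before training, the snippet below is a minimal sketch (not part of this repository; the actual data loading lives in the training scripts) that reads the layout above with torchvision's ImageFolder, which expects exactly this one-folder-per-class structure:

# Minimal check that a dataset folder matches the expected layout.
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

cub = datasets.ImageFolder("CUB_200_2011/images", transform=transform)

print(len(cub.classes))  # 200 class folders for CUB-200-2011
print(cub.classes[:3])   # e.g. ['001', '002', '003']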

Training

In order to train, evaluate and save a model, run the following command:

python train.py

For convenience, we provide models pretrained on the classification task for the three datasets (CUB-200-2011, CARS-196, and Stanford Online Products), in addition to the ImageNet-pretrained weights. They can be found at:

net/bn_inception_weights_pt04.pt
net/finetuned_cub_bn_inception.pth
net/finetuned_cars_bn_inception.pth
net/finetuned_Stanford_bn_inception.pth

Please see the file:

train_finetune.py

for details on how to pretrain the networks on the classification task (if you want to use some other type of network). For DenseNets, please email us and we will send you the pretrained networks (bear in mind, though, that the difference in performance is minimal, so you can skip the pretraining).
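
As a rough illustration of what this classification pretraining step looks like, here is a minimal, generic sketch that fine-tunes a torchvision ResNet-50 (chosen purely as an example of "some other type of network"); see train_finetune.py for the actual script, backbone, and hyperparameters used in this repository:

# Generic classification pretraining sketch (example backbone, example hyperparameters).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("CUB_200_2011/images", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(pretrained=True)                          # ImageNet weights
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new classification head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()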

For convenience (in case you only want to use the networks for feature extraction), we also provide networks trained with the Group Loss that reach results similar to those reported in the paper; a short usage sketch follows the list below. They can be found at:

net/trained_cub_bn_inception.pth
net/trained_cars_bn_inception.pth
net/trained_stanford_bn_inception.pth
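
The following is a minimal sketch of loading one of these checkpoints and extracting embeddings; the constructor name and import path are assumptions made for illustration, so check the net/ package of this repository for the exact API:

# Feature extraction sketch; the bn_inception import below is an assumed helper.
import torch
from net import bn_inception  # assumption: adjust to the actual module in net/

model = bn_inception()                              # build the backbone
state_dict = torch.load("net/trained_cub_bn_inception.pth", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()

with torch.no_grad():
    images = torch.randn(8, 3, 224, 224)            # dummy batch; replace with real images
    embeddings = model(images)                      # use the outputs as features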

Citation

If you find this code useful, please consider citing the following paper:

@InProceedings{Elezi_2020_ECCV,
author = {Elezi, Ismail and Vascon, Sebastiano and Torcinovich, Alessandro and Pelillo, Marcello and Leal-Taixe, Laura},
title = {The Group Loss for Deep Metric Learning},
booktitle = {European Conference on Computer Vision (ECCV)},
month = {August},
year = {2020}
}
