matteo-dunnhofer / trek-150-toolkit

Official code repository to download the TREK-150 benchmark dataset and run experiments on it.
trek-150-toolkit's Introduction

The TREK-150 Benchmark Dataset and Toolkit

arXiv: 2209.13502, arXiv: 2108.13665

TREK-150

The understanding of human-object interactions is fundamental in First Person Vision (FPV). Visual tracking algorithms that follow the objects manipulated by the camera wearer can provide useful information to effectively model such interactions. In recent years, the computer vision community has significantly improved the performance of tracking algorithms for a large variety of target objects and scenarios. Despite a few previous attempts to exploit trackers in the FPV domain, a methodical analysis of the performance of state-of-the-art trackers is still missing. This research gap raises the question of whether current solutions can be used "off-the-shelf" or whether more domain-specific investigations should be carried out. This paper aims to provide answers to these questions. We present the first systematic investigation of single object tracking in FPV. Our study extensively analyses the performance of 42 algorithms, including generic object trackers and baseline FPV-specific trackers. The analysis is carried out by focusing on different aspects of the FPV setting, introducing new performance measures, and in relation to FPV-specific tasks. The study is made possible through the introduction of TREK-150, a novel benchmark dataset composed of 150 densely annotated video sequences. Our results show that object tracking in FPV poses new challenges to current visual trackers. We highlight the factors causing such behavior and point out possible research directions. Despite these difficulties, we prove that trackers bring benefits to FPV downstream tasks requiring short-term object tracking. We expect that generic object tracking will gain popularity in FPV as new and FPV-specific methodologies are investigated.

Authors

Matteo Dunnhofer (1), Antonino Furnari (2), Giovanni Maria Farinella (2), Christian Micheloni (1)

  • (1) Machine Learning and Perception Lab, University of Udine, Italy
  • (2) Image Processing Laboratory, University of Catania, Italy

Contact: [email protected]

Citing

When using the dataset or toolkit, please reference:

@Article{TREK150ijcv,
author = {Dunnhofer, Matteo and Furnari, Antonino and Farinella, Giovanni Maria and Micheloni, Christian},
title = {Visual Object Tracking in First Person Vision},
journal = {International Journal of Computer Vision (IJCV)},
year = {2022}
}

@InProceedings{TREK150iccvw,
author = {Dunnhofer, Matteo and Furnari, Antonino and Farinella, Giovanni Maria and Micheloni, Christian},
title = {Is First Person Vision Challenging for Object Tracking?},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2021}
}

The TREK-150 Dataset

The annotations produced for this dataset are contained in this archive (you will find a zip archive for every sequence in TREK-150). The video frames of the TREK-150 sequences cannot be re-distributed directly due to the EK-55 policy, so you will not find them in the annotation archives; they will be downloaded automatically for you.

The full TREK-150 dataset can be built simply by running:

pip install got10k
git clone https://github.com/matteo-dunnhofer/TREK-150-toolkit
cd TREK-150-toolkit
python download.py

This will download the original EK-55 MP4 videos, extract the frames of interest using ffmpeg, and prepare the annotation files extracted from the zip archives. After the whole process is completed, you will find 150 directories in the dataset folder, each defining a video sequence.
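
For reference, the extraction step can be pictured roughly as follows. This is only an illustrative sketch of what download.py does, not its actual code: the paths, the output naming, and the assumption that the indices in frames.txt match ffmpeg's 1-based frame numbering are ours.

import os
import shutil
import subprocess
import tempfile

def extract_sequence_frames(video_path, frames_file, out_dir):
    # Frame indices of the sequence with respect to the full EK-55 video
    with open(frames_file) as f:
        wanted = [int(line.strip()) for line in f if line.strip()]

    # Dump every frame of the MP4 to a temporary folder with ffmpeg
    tmp_dir = tempfile.mkdtemp()
    subprocess.run(
        ["ffmpeg", "-i", video_path, "-qscale:v", "2",
         os.path.join(tmp_dir, "%010d.jpg")],
        check=True)

    # Keep only the frames of interest (add +1 here if frames.txt is 0-based)
    os.makedirs(out_dir, exist_ok=True)
    for i, frame_idx in enumerate(wanted, start=1):
        src = os.path.join(tmp_dir, "%010d.jpg" % frame_idx)
        dst = os.path.join(out_dir, "%08d.jpg" % i)
        shutil.copyfile(src, dst)

    # Discard the temporary, not relevant frames
    shutil.rmtree(tmp_dir)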

Each sequence folder contains a directory

  • img/: Contains the video frames of the sequence as *.jpg files.

and the following *.txt files:

  • groundtruth_rect.txt: Contains the ground-truth trajectory of the target object. The comma-separated values on each line represent the bounding-box location [x,y,w,h] (coordinates of the top-left corner, and width and height) of the target object at each respective frame (1st line -> target location for the 1st frame, last line -> target location for the last frame). A line with values -1,-1,-1,-1 specifies that the target object is not visible in that frame. A short parsing sketch is given after this list.
  • action_target.txt: Contains the labels for the action performed by the camera wearer (as verb-noun pair) and the target object category. The file reports 3 line-separated numbers. The first value is the action verb label, the second is the action noun label, the third is the noun label for the target object (action noun and target noun do not coincide on some sequences). The verb labels are obtained considering the verb_id indices of this file. The noun labels and target noun labels are obtained considering the noun_id indices of this file.
  • attributes.txt: Contains the tracking attributes of the sequence. The file reports line-separated strings that depend on the tracking situations happening in the sequence. The strings are acronyms and explanations can be found in Table 2 of the main paper.
  • frames.txt: Contains the frame indices of the sequence with respect to the full EK-55 video.
  • anchors.txt: Contains the frame indices of the starting points (anchors) and the direction of evaluation (0 -> forward in time, 1 -> backward in time) to implement the MSE (multi-start evaluation) protocol.
  • lh_rect.txt: Contains the ground-truth bounding-boxes of the camera wearer's left hand. The comma-separated values on each line represent the bounding-box locations [x,y,w,h] (coordinates of the top-left corner, and width and height) of the hand at each respective frame (1st line -> hand location for the 1st frame, last line -> hand location for the last frame). A line with values -1,-1,-1,-1 specifies that the hand is not visible in that frame.
  • rh_rect.txt: Contains the ground-truth bounding-boxes of the camera wearer's right hand. The comma-separated values on each line represent the bounding-box locations [x,y,w,h] (coordinates of the top-left corner, and width and height) of the hand at each respective frame (1st line -> hand location for the 1st frame, last line -> hand location for the last frame). A line with values -1,-1,-1,-1 specifies that the hand is not visible in that frame.
  • lhi_labels.txt: Contains the ground-truth labels expressing whether the camera wearer's left hand is in contact with the target object. The binary values on each line represent the presence of contact (0 -> no contact, 1 -> contact) between hand and object at each respective frame (1st line -> interaction for the 1st frame, last line -> interaction for the last frame).
  • rhi_labels.txt: Contains the ground-truth labels expressing whether the camera wearer's right hand is in contact with the target object. The binary values on each line represent the presence of contact (0 -> no contact, 1 -> contact) between hand and object at each respective frame (1st line -> interaction for the 1st frame, last line -> interaction for the last frame).
  • bhi_labels.txt: Contains the ground-truth labels expressing whether both camera wearer's hands are in contact with the target object. The binary values on each line represent the presence of contact (0 -> no contact, 1 -> contact) between hands and object at each respective frame (1st line -> interaction for the 1st frame, last line -> interaction for the last frame).
  • anchors_hoi.txt: Contains the frame indices of the starting and ending points (anchors) and the type of interaction (0 -> left hand interaction, 1 -> right hand interaction, 2 -> both hands interaction) to implement the HOI (hand-object interaction evaluation) protocol.
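
As a minimal sketch of how these files can be read (the sequence folder name follows the P03-P03_02-56 pattern and is only an example):

import numpy as np

seq_dir = "TREK-150/P03-P03_02-56"  # example sequence folder

# groundtruth_rect.txt: one comma-separated [x, y, w, h] row per frame;
# [-1, -1, -1, -1] marks frames where the target is not visible
boxes = np.loadtxt(seq_dir + "/groundtruth_rect.txt", delimiter=",")
visible = ~np.all(boxes == -1, axis=1)
print("%d / %d frames with a visible target" % (visible.sum(), len(boxes)))

# rhi_labels.txt: one binary value per line (1 -> right hand in contact with the target)
rhi = np.loadtxt(seq_dir + "/rhi_labels.txt", dtype=int)
print("right-hand contact in %d frames" % rhi.sum())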

The code was tested with Python 3.7.9 and ffmpeg 4.0.2. All the temporary files (e.g. the *.MP4 videos and frames not belonging to any sequence) generated during the download procedure are removed automatically after the process is completed. The download process can be resumed from the last downloaded sequence if prematurely stopped.

The download process could take up to 24h to complete.

Toolkit

The code available in this repository allows you to replicate the experiments and results presented in our paper. Our code is built upon the got10k-toolkit and inherits the same tracker definition. Please refer to that GitHub repository to learn how to use our toolkit. The only difference is that you have to change the name of the toolkit when importing the Python sources (e.g. you have to use from toolkit.experiments import ExperimentTREK150 instead of from got10k.experiments import ExperimentTREK150). Otherwise, you can integrate the original got10k-toolkit with the sources of this repository (it should be easy).
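
As an illustration, a custom tracker can be plugged in as follows. This is a minimal sketch assuming the got10k-style Tracker interface and that ExperimentTREK150 mirrors the got10k experiment API (a constructor taking the dataset root and output directories, plus run and report methods); check the toolkit sources for the exact signatures.

from got10k.trackers import Tracker
from toolkit.experiments import ExperimentTREK150

class ConstantTracker(Tracker):
    """Dummy tracker that simply reports the initial box for every frame."""

    def __init__(self):
        super(ConstantTracker, self).__init__(name='ConstantTracker')

    def init(self, image, box):
        # box is the initial target location as [x, y, w, h]
        self.box = box

    def update(self, image):
        # A real tracker would localize the target in the new frame here
        return self.box

if __name__ == '__main__':
    experiment = ExperimentTREK150(
        root_dir='TREK-150',    # dataset root (hypothetical path)
        result_dir='results',   # where raw tracker outputs are stored
        report_dir='reports')   # where performance reports are stored
    tracker = ConstantTracker()
    experiment.run(tracker)
    experiment.report([tracker.name])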

In the following, we provide example code to run an evaluation of the SiamFC tracker on the TREK-150 benchmark.

git clone https://github.com/matteo-dunnhofer/TREK-150-toolkit
cd TREK-150-toolkit

# Install the dependencies, clone the PyTorch SiamFC implementation, and download the GOT-10k pre-trained weights
pip install torch opencv-python got10k
git clone https://github.com/matteo-dunnhofer/siamfc-pytorch.git siamfc_pytorch
wget -nc --no-check-certificate "https://drive.google.com/uc?export=download&id=1UdxuBQ1qtisoWYFZxLgMFJ9mJtGVw6n4" -O siamfc_pytorch/pretrained/siamfc_alexnet_e50.pth
      
python example_trek150.py

This will also download and prepare the TREK-150 dataset if you have not done so before.

You can proceed similarly to perform experiments on the OTB benchmarks using our performance measures.

git clone https://github.com/matteo-dunnhofer/TREK-150-toolkit
cd TREK-150-toolkit

# Install the dependencies, clone the PyTorch SiamFC implementation, and download the GOT-10k pre-trained weights
pip install torch opencv-python got10k
git clone https://github.com/matteo-dunnhofer/siamfc-pytorch.git siamfc_pytorch
wget -nc --no-check-certificate "https://drive.google.com/uc?export=download&id=1UdxuBQ1qtisoWYFZxLgMFJ9mJtGVw6n4" -O siamfc_pytorch/pretrained/siamfc_alexnet_e50.pth

python example_otb100.py

Tracker Results

The raw results of the trackers benchmarked in our paper can be downloaded from this link.

License

All files in this dataset are copyright by us and published under the Creative Commons Attribution-NonCommercial 4.0 International License, found here. This means that you must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. You may not use the material for commercial purposes.

Copyright © Machine Learning and Perception Lab - University of Udine - 2021 - 2022


trek-150-toolkit's Issues

BBoxes from TREK-150: confused as to format

Hi there, thanks so much for the super helpful dataset!

I wanted to see what would happen if I prompted SAM with the boxes from TREK-150 (i.e. whether it is possible to get a mask of the object within each of the boxes), so following the current documentation/README I set up TREK-150 and visualized the boxes and corresponding masks for P03_02 and the P03_02-56 sequence. However, although the boxes are listed as xywh (where x and y are the top-left coordinates), when converting to xyxy the corresponding boxes don't seem to match the objects when visualized (instead they're just parts of the environment, drift to objects, or are somewhat ambiguous with multiple objects). Any idea as to what's going on?

Here are a couple of screenshots (attached to the original issue, not reproduced here).

My initial suspicion is that I wasn't handling the coordinate system correctly, but after a few permutations, I'm unsure if there's a clear "missing piece" here. My initial code is a bit sequential as I'm figuring out how this works before using utilities like np.loadtxt.

My code looks like the following (mostly taken from the SAM demo and reading through the TREK-150 readme):

from tracker_modules.bbox_to_mask import SamBBoxToMask
import argparse
from decord import VideoReader

import os
import numpy as np
import torch
import matplotlib.pyplot as plt
import cv2
from tqdm import tqdm

def show_mask(mask, ax, random_color=False):
    if random_color:
        color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0)
    else:
        color = np.array([30/255, 144/255, 255/255, 0.6])
    h, w = mask.shape[-2:]
    mask_image = mask.reshape(h, w, 1) * color.reshape(1, 1, -1)
    ax.imshow(mask_image)
    
def show_points(coords, labels, ax, marker_size=375):
    pos_points = coords[labels==1]
    neg_points = coords[labels==0]
    ax.scatter(pos_points[:, 0], pos_points[:, 1], color='green', marker='*', s=marker_size, edgecolor='white', linewidth=1.25)
    ax.scatter(neg_points[:, 0], neg_points[:, 1], color='red', marker='*', s=marker_size, edgecolor='white', linewidth=1.25)   
    
def show_box(box, ax):
    x0, y0 = box[0], box[1]
    w, h = box[2] - box[0], box[3] - box[1]
    ax.add_patch(plt.Rectangle((x0, y0), w, h, edgecolor='green', facecolor=(0,0,0,0), lw=2))    

# These are symlinked
video_path = "assets/epic-kitchens-55-torrent/videos/train/P03/P03_02.MP4"
trek_folder_path = "assets/epic-kitchens-trek-150/P03/P03_02/P03_02-56"

# Load video
video = VideoReader(video_path)

# Load trek zip folder
frame_file = os.path.join(trek_folder_path, "frames.txt")
gt_file = os.path.join(trek_folder_path, "groundtruth_rect.txt")
frame_id_mask_lst = []
bbox_to_mask = SamBBoxToMask()

# Making bboxes and masks
with open(frame_file, "r") as ff, open(gt_file, "r") as gf:
    frame_ids = ff.readlines()
    gts = gf.readlines()
    frame_ids = [int(frame.strip()) for frame in frame_ids]
    gt_bboxes = [list(map(int, gt.strip().split(","))) for gt in gts]
    frames = video.get_batch(frame_ids).asnumpy()
    gt_bboxes = [np.array([bbox[0], bbox[1], bbox[0] + bbox[2], bbox[1] + bbox[3]]) if bbox != [-1, -1, -1, -1] else None for bbox in gt_bboxes]
    for i, (frame_id, frame, bbox) in enumerate(tqdm(zip(frame_ids, frames, gt_bboxes))):
        if bbox is None:
            continue
        mask = bbox_to_mask.create_masks_from_bbox(frame, bbox)
        frame_id_mask_lst.append((frame_id, frame, bbox, mask))

# Visualization code
for frame_id, frame, bbox, mask in frame_id_mask_lst:
    print("Frame id: ", frame_id)
    plt.figure(figsize=(10,10))
    plt.imshow(frame)
    show_mask(mask[0], plt.gca())
    show_box(bbox, plt.gca())
    plt.axis('off')
    plt.show()
    plt.close()

Problems with incomplete annotation of datasets

The following problem occurred when I tested my tracker
FileNotFoundError: /TREK-150-toolkit/TREK-150/P03-P03_02-56/anchors_hoi.txt not found.
I checked, and the annotations for this sequence are indeed missing this .txt file; p03-p03_04-57 is also missing anchors_hoi.txt. Perhaps the annotations of other video sequences have the same problem. I hope you can look into this for me.

Is TREK-150/sequences.txt missing?

I got the following error when I executed python download.py.

Checking and downloading TREK-150. This process might take a while...
100% [......................................................] 764679 / 764679
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/kumatheworld/datasets/TREK-150-toolkit/toolkit/datasets/trek150.py", line 28, in __init__
    self._download(self.root_dir)
  File "/Users/kumatheworld/datasets/TREK-150-toolkit/toolkit/datasets/trek150.py", line 83, in _download
    assert os.path.exists(seqs_file)
AssertionError

I have successfully downloaded TREK-150-annotations.zip and it was extracted to TREK-150-annotations/, but it seems that TREK-150/sequences.txt is still missing.

$ ls TREK-150/
TREK-150-annotations     TREK-150-annotations.zip __MACOSX

Understanding the GSR Metric

@matteo-dunnhofer Hi there, congratulations on the great work. I am hoping to understand a little more about the GSR metric you proposed. I see that in the paper you mention that (for a range of thresholds combined) it measures the normalized extent of a tracking sequence before a failure, and that failure here is defined by a variable threshold on bounding-box overlap. Is this bounding-box overlap calculated between the ground-truth bbox and the predicted bbox, or between the previous bbox and the current bbox? Thank you very much!

Problem during inference under HOI protocol

Hi, I got an error when running inference under the HOI protocol.

NameError: name 'direction' is not defined

By the way, it seems that "dir_str" is not used. Could you help me fix this?
Thanks in advance!
