adipandas / multi-object-tracker

Multi-object trackers in Python

Home Page: https://adipandas.github.io/multi-object-tracker/

License: MIT License

Languages: Python 100.00%
Topics: multi-object-tracking, computer-vision, caffe, object-tracking, object-detection, deep-learning, python3, tracker, opencv, yolov3

multi-object-tracker's Introduction

Multi-object trackers in Python

An easy-to-use implementation of various multi-object tracking algorithms.


Demos: YOLOv3 + CentroidTracker tracking cars (video source: link) and TF-MobileNetSSD + CentroidTracker tracking cows (video source: link).

Available Multi Object Trackers

  • CentroidTracker
  • IOUTracker
  • CentroidKF_Tracker
  • SORT

Available OpenCV-based object detectors:

  • detector.TF_SSDMobileNetV2
  • detector.Caffe_SSDMobileNet
  • detector.YOLOv3

Installation

A pip-installable package (working with OpenCV version 3.4.3 or later) is available on PyPI and can be installed with the following command:

pip install motrackers

Alternatively, you can install the package from GitHub:

git clone https://github.com/adipandas/multi-object-tracker
cd multi-object-tracker
pip install [-e] .
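(The square brackets indicate that the -e flag is optional; with -e, pip installs the package in editable/development mode.)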

Note: using neural-network models with a GPU
To use the OpenCV DNN-based object detection modules provided in this repository with a GPU, you may have to compile a CUDA-enabled version of OpenCV from source.

  • To build OpenCV from source, refer to the following links: [link-1], [link-2]
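If your OpenCV build does have CUDA support, the DNN module can be pointed at the GPU explicitly. Below is a minimal sketch, assuming a YOLOv3 model loaded through cv2.dnn (the file paths are placeholders); the backend and target constants are part of OpenCV's public API:

import cv2

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")  # placeholder paths

# Route inference through the CUDA backend. This only takes effect if OpenCV
# itself was compiled with CUDA support; a CPU-only build will not use the GPU.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)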

How to use?: Examples

The interface for each tracker is simple and similar. Please refer to the example template below.

from motrackers import CentroidTracker # or IOUTracker, CentroidKF_Tracker, SORT
input_data = ...
detector = ...
tracker = CentroidTracker(...) # or IOUTracker(...), CentroidKF_Tracker(...), SORT(...)
while True:
    done, image = <read(input_data)>
    if done:
        break
    detection_bboxes, detection_confidences, detection_class_ids = detector.detect(image)
    # NOTE: 
    # * `detection_bboxes` are numpy.ndarray of shape (n, 4) with each row containing (bb_left, bb_top, bb_width, bb_height)
    # * `detection_confidences` are numpy.ndarray of shape (n,);
    # * `detection_class_ids` are numpy.ndarray of shape (n,).
    output_tracks = tracker.update(detection_bboxes, detection_confidences, detection_class_ids)
    # `output_tracks` is a list with each element containing tuple of
    # (<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>)
    for track in output_tracks:
        frame, id, bb_left, bb_top, bb_width, bb_height, confidence, x, y, z = track
        assert len(track) == 10
        print(track)
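For concreteness, the sketch below instantiates the template with cv2.VideoCapture and a stub detector that returns no detections; it is an illustration only, and any detector returning the (bboxes, confidences, class_ids) triple described above can be swapped in:

import cv2
import numpy as np
from motrackers import CentroidTracker

def detect_stub(image):
    # Placeholder detector: returns zero detections. Replace with a real
    # detector returning (n, 4) xywh bboxes, (n,) confidences, (n,) class_ids.
    return np.empty((0, 4)), np.empty((0,)), np.empty((0,), dtype=int)

tracker = CentroidTracker()
cap = cv2.VideoCapture("video.mp4")  # placeholder path
while True:
    ok, image = cap.read()
    if not ok:
        break
    bboxes, confidences, class_ids = detect_stub(image)
    for track in tracker.update(bboxes, confidences, class_ids):
        print(track)
cap.release()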

Please refer to the examples folder of this repository for more details. You can clone the repository and run the examples.

Pretrained object detection models

You will have to download the pretrained weights for the neural-network models. The shell scripts for downloading them are provided under the respective folders. Please refer to DOWNLOAD_WEIGHTS.md for more details.

Notes

  • There are some variations in these implementations compared to what appeared in the original papers of SORT and the IoU Tracker.
  • If you find any bugs in an algorithm, I will be happy to accept your pull request, or you can create an issue to point them out.

References, Credits and Contributions

Please see REFERENCES.md and CONTRIBUTING.md.

Citation

If you use this repository in your work, please consider citing it with:

@misc{multiobjtracker_amd2018,
  author = {Deshpande, Aditya M.},
  title = {Multi-object trackers in Python},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/adipandas/multi-object-tracker}},
}

multi-object-tracker's People

Contributors

adipandas, cansik, edavalosanaya, partheee


multi-object-tracker's Issues

Error while loading the model

Hi, while running the TensorFlow tracker I am unable to load the TensorFlow model using OpenCV as written in the code. The following is the error I get when loading a model: C:\projects\opencv-python\opencv\modules\dnn\src\tensorflow\tf_graph_simplifier.cpp:960: error: (-215:Assertion failed) nodesMapIt != nodesMap.end() in function 'cv::dnn::dnn4_v20191202::sortByExecutionOrder'. How can I solve this error? Please help me with this.

Project dependencies may have API risk issues

Hi, in multi-object-tracker, inappropriate dependency versioning constraints can cause risks.

Below are the dependencies and version constraints that the project is using:

numpy
scipy
matplotlib
opencv-contrib-python
pandas
motmetrics
setuptools
ipyfilechooser

The version constraint == introduces a risk of dependency conflicts because the dependency scope is too strict.
The version constraints "no upper bound" and * introduce a risk of missing-API errors, because the latest version of a dependency may remove some APIs.

After further analysis, in this project:
The version constraint of the dependency numpy can be changed to >=1.8.0,<=1.23.0rc3.
The version constraint of the dependency scipy can be changed to >=0.12.0,<=1.8.1.

The above modifications can reduce dependency conflicts as much as possible
while adopting the latest versions as much as possible without calling errors in the project.
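Expressed as a requirements file, the suggestion above would look like this sketch (the exact bounds are the issue's recommendation, not something verified here):

numpy>=1.8.0,<=1.23.0rc3
scipy>=0.12.0,<=1.8.1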

The invocation of the current project includes all the following methods.

Calling methods from numpy:
numpy.linalg.inv
Calling methods from scipy:
scipy.spatial.distance.cdist
Calling methods from all other modules:
numpy.sqrt
motrackers.detectors.Caffe_SSDMobileNet.detect
numpy.arange
numpy.minimum
iou
format
motrackers.utils.misc.get_centroid
scipy.spatial.distance.cdist
matplotlib.pyplot.plot
motrackers.track.KFTrackCentroid
class_ids.np.array.astype.append
numpy.random.randint.tolist
x.flatten.flatten
self.update
cv2.dnn.NMSBoxes
numpy.random.random_integers
assign_tracks2detection_centroid_distances
unmatched_tracks.append
json.load.items
motrackers.CentroidKF_Tracker
os.path.join
motrackers.detectors.TF_SSDMobileNetV2.detect
scores.astype.astype
track.output
KFTracker1D
sys.path.insert
cv2.imshow
print
select_caffemodel_prototxt
motrackers.utils.misc.load_labelsjson
motrackers.detectors.YOLOv3.detect
int
numpy.random.randint
image.self.forward.squeeze.squeeze
bboxes.np.array.xyxy2xywh.tolist
motrackers.tracker.Tracker.preprocess_input
ValueError
self.kf.update
self.net.forward
numpy.amin
test_KFTracker2D
argparse.ArgumentParser
ord
motrackers.IOUTracker
max
metrics_motchallenge_files
centroid_distances.np.amin.argsort
vars
confidences.append
motmetrics.metrics.create.compute_many
select_pbtxt
dict2jsonfile
motrackers.kalman_tracker.KFTracker4D
updated_detections.append
Tracker.preprocess_input
motrackers.track.Track
dict
KFTracker2D.predict
numpy.ones_like
cv2.destroyAllWindows
self.kf.predict
motmetrics.utils.compare_to_groundtruth
numpy.empty.append
numpy.hstack
select_yolo_weights
kwargs.items
motrackers.detectors.YOLOv3.draw_bboxes
assign_tracks2detection_iou
min
super.update
test_KFTracker1D
numpy.random.randn
numpy.dot
vars.OPENCV_OBJECT_TRACKERS
cv2.MultiTracker_create.update
cv2.putText
y.x.xywh.np.concatenate.astype
wh.xymin.np.concatenate.astype
yb.xr.y.x.np.array.astype
cv2.dnn.readNetFromCaffe
height.width.top.left.np.array.astype
enumerate
self.IOUTracker.super.__init__
range
self.forward
open
argparse.ArgumentParser.add_argument
track_id.self.tracks.update
matplotlib.pyplot.xlim
cv2.dnn.readNetFromDarknet
setattr
create_filechooser
setuptools.setup
numpy.random.random_integers.reshape
numpy.concatenate
numpy.asarray
motrackers.kalman_tracker.KFTracker2D
create_data
len
float
motrackers.kalman_tracker.KFTrackerSORT
numpy.array.append
cv2.circle
cv2.resize
scipy.optimize.linear_sum_assignment
indices.bboxes.np.array.astype
motrackers.IOUTracker.update
motmetrics.io.loadtxt
numpy.ones
json.dump
self.nms_threshold.self.confidence_threshold.confidences.bboxes.cv.dnn.NMSBoxes.flatten
cv2.selectROI
h.w.h.ymid.w.xmid.np.array.astype
motmetrics.io.render_summary
self._update_track
self.net.setPreferableTarget
self.object_names.keys
image.self.forward.squeeze
numpy.array.astype
self.net.setInput
numpy.where
updated_tracks.append
list
matplotlib.pyplot.grid
matplotlib.pyplot.show
zip
numpy.random.seed
main
numpy.sin
cv2.getTextSize
cv2.rectangle
cv2.VideoCapture
self._remove_track
track_id.self.tracks.predict
self.net.getUnconnectedOutLayers
self.tracks.keys
KFTracker2D.update
cv2.VideoCapture.read
os.path.abspath
motrackers.track.KFTrack4DSORT
motrackers.SORT
unmatched_detections.append
select_caffemodel_weights
collections.OrderedDict
cv2.MultiTracker_create
bboxes.np.array.xyxy2xywh.tolist.append
KFTracker2D
motrackers.utils.misc.xyxy2xywh
self.net.getLayerNames
super.__init__
numpy.eye
super
motrackers.detectors.TF_SSDMobileNetV2
cv2.dnn.readNetFromTensorflow
numpy.maximum
matplotlib.pyplot.legend
numpy.argmax
get_transition_matrix
numpy.array
pick.append
numpy.empty
motrackers.detectors.Caffe_SSDMobileNet.draw_bboxes
self.net.setPreferableBackend
get_centroid
motrackers.utils.misc.iou_xywh
get_process_covariance_matrix
boxes.astype.astype
motrackers.detectors.TF_SSDMobileNetV2.draw_bboxes
motrackers.utils.draw_tracks
cv2.dnn.blobFromImage
compute_motchallenge
self._get_tracks.append
select_yolo_config
cv2.MultiTracker_create.add
select_coco_labels
numpy.argsort
self._add_track
self._get_tracks
numpy.linalg.inv
cv2.waitKey
motrackers.detectors.Caffe_SSDMobileNet
bboxes.append
class_ids.np.array.astype
time.sleep
h.w.h.ymid.w.xmid.np.array.astype.astype
json.load
cv2.VideoCapture.release
setuptools.find_packages
argparse.ArgumentParser.parse_args
cv2.resize.copy
motrackers.detectors.YOLOv3
numpy.zeros
motmetrics.metrics.create
bbox.copy
motrackers.CentroidTracker
select_tfmobilenet_weights
ipyfilechooser.FileChooser
numpy.delete
tracks.items
h.w.ymin.xmin.np.array.astype

@developer
Could you please help me check this issue?
May I submit a pull request to fix it?
Thank you very much.

PyPI Package

Hello @adipandas

Thanks for developing and maintaining a fantastic project! Are you interested in making a PyPI package for multi-object-tracker? If so, I can help with directions (as you will need to create a PyPI account) and create a GitHub actions workflow for automatically publishing your package when a new version is ready.

Bug: SORT always uses the same iou_threshold

Hi there,

I think there is a small bug here:
iou_threshold is hard-coded to 0.3. I think you wanted something more like:

matches, unmatched_detections, unmatched_tracks = assign_tracks2detection_iou(
                bbox_tracks, bbox_detections, iou_threshold=self.iou_threshold)

Getting the exact same result when changing tracker type

Hello
I tried changing the tracker type in mot_YOLOv3.py to all the options, including ['CentroidTracker', 'CentroidKF_Tracker', 'SORT', 'IOUTracker'], but I keep getting the exact same result; there is not even a fraction of difference between the trackers. Why is that?

Is there any tracker prediction?

Thanks a lot for your code; I use your tracker with my own object detection model. Tell me if I am wrong, but the tracker "updates" the bbox pixels, class id, etc.; can it not predict, i.e. calculate the bbox location in the next frame, without object detection?
And if it can, which function do you use to do it? Thanks.

class_ids is not considered in the tracking

Hi, I tried the library and it seems that a track id can be shared between class_ids. If it's a multi-object tracker, is it normal that the track id doesn't reset when it encounters different objects (i.e. with different class_ids)? Could you please clarify?

(
class_ids (numpy.ndarray or list): List of class_ids (int) corresponding to labels of the detected object. Default is None.

)
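For readers hitting the same question: one pragmatic workaround (a sketch, not a feature of the library) is to keep a separate tracker per class, so track IDs are never shared across class_ids:

import numpy as np
from motrackers import CentroidTracker

# One tracker per class id (the ids 0..2 here are illustrative).
trackers = {class_id: CentroidTracker() for class_id in (0, 1, 2)}

def update_per_class(bboxes, confidences, class_ids):
    # Split the detections (numpy arrays) by class and update each
    # class's tracker separately.
    outputs = {}
    for cid, tracker in trackers.items():
        mask = class_ids == cid
        outputs[cid] = tracker.update(bboxes[mask], confidences[mask], class_ids[mask])
    return outputs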

Fix cv2 version checking in motrackers.detectors.yolo.py

Hello, wondering if line 26 below from motrackers/detectors/yolo.py causes issues for anyone else.

        if cv2.__version__ == '4.6.0':

  1. Because the module does import cv2 as cv, I get NameError: name 'cv2' is not defined when using this file. I believe changing the line to use cv instead of cv2 should fix the issue.
  2. I believe the line should check for greater than or equal to 4.6.0. My version is 4.8.0 and I get IndexError: invalid index to scalar variable. when using the file as-is.

TL;DR proposed fix:

        if cv.__version__ >= '4.6.0':

Thanks!
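A slightly more robust variant of the proposed fix (a sketch, with net and layer_names standing in for the module's actual variables) compares numeric version tuples instead of strings, since string comparison misorders versions such as '4.10.0':

import cv2 as cv

major, minor = (int(p) for p in cv.__version__.split(".")[:2])
if (major, minor) >= (4, 6):
    # Newer OpenCV: getUnconnectedOutLayers() returns plain integer indices.
    output_layer_names = [layer_names[i - 1] for i in net.getUnconnectedOutLayers()]
else:
    # Older OpenCV: it returns one-element arrays.
    output_layer_names = [layer_names[i[0] - 1] for i in net.getUnconnectedOutLayers()]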

saving object tracking info to file

I rely on this object tracker repo as an example:

https://github.com/nwojke/deep_sort

After it runs through a video, it outputs a text file (hypotheses.txt) with the following format: it has as many rows as objects detected in the video, and each row has these columns:

frame #, object ID, x, y, width, height, other info

This is quite useful for further analysis and assessment against ground truth.

It'd be great if you could do something similar in the notebooks.
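Since the tracker's mot_challenge output rows already carry the fields deep_sort writes, a small sketch like the one below could dump them to a text file; cap, detector and tracker are assumed to be set up as in the README template:

with open("hypotheses.txt", "w") as f:
    while True:
        ok, image = cap.read()
        if not ok:
            break
        bboxes, confidences, class_ids = detector.detect(image)
        for track in tracker.update(bboxes, confidences, class_ids):
            # track == (frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z)
            f.write(",".join(str(v) for v in track) + "\n")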

AttributeError: 'SORT' object has no attribute 'track'

Hello,
I was following the "How to use?: Examples" section step by step and I got this error.

I am using RetinaNet for object detection. I get my detections, scores and labels. I tried to integrate SORT into my code as follows:

from motrackers import SORT

tracker = SORT(max_lost=3)
boxes, scores, labels = model.predict_on_batch(np.expand_dims(image, axis=0))  #  get prediction
output_tracks = tracker.track(boxes[0], scores[0], labels[0])

boxes[0] has shape (300, 4), and scores[0] and labels[0] have shape (300,). But while running this code I got this error:

File "tired2try.py", line 68, in <module>
output_tracks = tracker.track(boxes[0], scores[0], labels[0])
AttributeError: 'SORT' object has no attribute 'track'

Could someone help me with this error, please?
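Judging from the README template earlier on this page, the tracker interface exposes update rather than track; a minimal correction under that assumption (model and image are the poster's own objects):

from motrackers import SORT
import numpy as np

tracker = SORT(max_lost=3)
boxes, scores, labels = model.predict_on_batch(np.expand_dims(image, axis=0))
# The tracker is driven via update(), not track(). Also note the tracker
# expects (bb_left, bb_top, bb_width, bb_height) rows, so xyxy boxes may
# first need converting (e.g. with motrackers.utils.misc.xyxy2xywh).
output_tracks = tracker.update(boxes[0], scores[0], labels[0])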

Please add requirements

Hello.

Can you add the previous version of your project in a branch? Thank you! It will save me time.

How to maintain the same ID for a detected object across many frames?

Hi,

I am trying to use your tracking model such that when I have detected an object (for example, the two pedestrians in the back of the frame) and it gets lost or occluded over the next couple of frames, it maintains the same ID it had before once it is detected again. I am using the SORT method.

[screenshot]

In the subsequent frames these detected pedestrians get occluded and then appear again. The goal is that they have the same ID as they had before disappearing.

However, I haven't been able to find a solution for this. Is it useful to change the value of max_lost?
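One knob worth trying is indeed max_lost, which controls how many missed frames a track survives before it is deleted; a sketch with an illustrative value:

from motrackers import SORT

# Keep tracks alive for up to 30 missed frames (the value is illustrative;
# tune it to your frame rate and typical occlusion length).
tracker = SORT(max_lost=30, iou_threshold=0.3)

Note that SORT matches purely on bounding-box overlap and has no appearance model, so a long occlusion can still produce a new ID even with a large max_lost.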

RuntimeError: OrderedDict mutated during iteration

Hey, great work. But there seems to be a bug in the track-yolo-model notebook. I just pointed video_src to a video of mine. Halfway through processing the video, this occurs at

objects = tracker.update(detections_bbox)

So it doesn't fail right away.

Full error message below:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-15-f82dd16210ab> in <module>
     46                       (x, y-5), cv.FONT_HERSHEY_SIMPLEX, 0.5, clr, 2)
     47 
---> 48     objects = tracker.update(detections_bbox)           # update tracker based on the newly detected objects
     49 
     50     for (objectID, centroid) in objects.items():

<ipython-input-2-174d5f2aa75d> in update(self, detections)
     25 
     26         if len(detections) == 0:   # if no object detected in the frame
---> 27             for objectID in self.lost.keys():
     28                 self.lost[objectID] +=1
     29                 if self.lost[objectID] > self.maxLost: self.removeObject(objectID)

RuntimeError: OrderedDict mutated during iteration
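The usual fix for this class of error is to iterate over a snapshot of the keys, so that removing entries does not mutate the dict mid-iteration; a sketch of the loop from the traceback:

# Iterate over a copied list of keys; removeObject() can then safely
# delete entries from the underlying OrderedDict.
for objectID in list(self.lost.keys()):
    self.lost[objectID] += 1
    if self.lost[objectID] > self.maxLost:
        self.removeObject(objectID)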

Print label and id of detected object

How do I print the object ID and label of a detected object? I'm trying to get output like this (for example: ID0 car, ID1 car, ID2 person, ID3 motorcycle).
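The mot_challenge output rows do not include the class id, but each Track object stores one, so a sketch along these lines could work (labels is assumed to be your own id-to-name mapping, and the track is assumed to still be active in the tracker):

for track in output_tracks:
    frame, track_id = track[0], track[1]
    # Look the class up on the live Track object kept by the tracker.
    class_id = tracker.tracks[track_id].class_id
    print(f"ID{track_id} {labels[class_id]}")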

To predict a detection without a detection by a detector

Hello
How are you?
Thanks for contributing to this project.
I am using your method with my custom detector.
My detector is sometimes NOT able to detect an object.
At a certain time, let's suppose that there is NOT a detection of an object by my detector.
Then your method should be able to predict a virtual detection even though there is no detection by the detector.
Indeed, I found that the track's info is updated without a corresponding detection.
The Tracker method "_get_tracks" is finally called in the method "update" of Tracker.

[screenshot]

But this method "_get_tracks" does NOT return any track whose variable "lost" is greater than 0.

[screenshot]

So a track with at least one lost detection is NOT contained in the final tracked objects.
I think that this is an issue.
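If that behaviour is unwanted, one possible relaxation is sketched below; it is only an illustration, and the real method's signature and attribute names should be checked against the source:

def _get_tracks(self, tracks):
    outputs = []
    for track_id, track in tracks.items():
        # Was: skip any track with lost > 0. Instead, keep tracks that have
        # missed at most max_lost consecutive detections.
        if track.lost <= self.max_lost:
            outputs.append(track.output())
    return outputs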

Bug in detector_YOLOv3.py

Executing:

python detector_YOLOv3.py

gives the following error:

Traceback (most recent call last):
File "detector_YOLOv3.py", line 67, in
use_gpu=args.gpu
File "d:\source\third-party-repos\multi-object-tracker\motrackers\detectors\yolo.py", line 29, in init
self.layer_names = [layer_names[i[0] - 1] for i in self.net.getUnconnectedOutLayers()]
File "d:\source\third-party-repos\multi-object-tracker\motrackers\detectors\yolo.py", line 29, in <listcomp>
self.layer_names = [layer_names[i[0] - 1] for i in self.net.getUnconnectedOutLayers()]
IndexError: invalid index to scalar variable.

Tracking objects when there are no objects detected

Hello, while using the SORT object from sort_tracker.py I realized that, after I stop detecting objects, if I send empty boxes, confidences and classes to the update function I still get one or more objects reported as being tracked. I thought the max_lost parameter would prevent this? The paper indicates that tracks should be terminated if they are not detected for T_lost frames, so shouldn't an empty detection count as all tracks not being detected in that frame?

Adding something like this should take care of the tracks if no object is detected in a frame:

for t in unmatched_tracks:
    track_id = track_ids[t]
    bbox = bbox_tracks[t, :]
    confidence = self.tracks[track_id].detection_confidence
    cid = self.tracks[track_id].class_id
    self._update_track(track_id, self.frame_count, bbox, detection_confidence=confidence, class_id=cid, lost=1)
    
    if self.tracks[track_id].lost > self.max_lost:
        self._remove_track(track_id)

# In case there are no detections, update tracks with the current prediction, if lost > max_lost remove track.
if len(bboxes) == 0:
    for i in range(len(bbox_tracks)):
        track_id = track_ids[i]
        bbox = bbox_tracks[i, :]
        confidence = self.tracks[track_id].detection_confidence
        cid = self.tracks[track_id].class_id
        self._update_track(track_id, self.frame_count, bbox, detection_confidence=confidence, class_id=cid, lost=1)
        if self.tracks[track_id].lost > self.max_lost:
            self._remove_track(track_id)

Also, talking about the "lost" parameter of the track class, isn't that line odd? If you set max_lost to, say, 5, then the condition lost > max_lost can never be reached, since the lost parameter is always reset to 1. In my case I commented that line out to make it work.

Here's a little demo that I wrote. I recommend running it (without the changes I made) with a video where, after the tracked object leaves the screen, there are no more objects to detect, to see what I mean.

...

detector = Yolov4Detector()
tracker = CentroidTracker()

for i in range(len(images)):
    detection_bboxes = []
    detection_confidences = []
    detection_class_ids = []

    im = Image.open(images[i])
    transform = detector.get_transform()
    new_im = transform(im)
    new_im.unsqueeze_(0)
    dets = detector.detect(new_im)[0]

    for det in dets:
        xmin = int(det['bbox']['x1'] * im.size[0])
        ymin = int(det['bbox']['y1'] * im.size[1])
        xmax = int(det['bbox']['x2'] * im.size[0])
        ymax = int(det['bbox']['y2'] * im.size[1])

        width = abs(xmin - xmax)
        height = abs(ymin - ymax)

        detection_bboxes.append([xmin, xmax, width, height])
        detection_confidences.append(det['confidence'])
        detection_class_ids.append(0) #There's only one class

    detection_bboxes = np.array(detection_bboxes)
    detection_confidences = np.array(detection_confidences)
    detection_class_ids = np.array(detection_class_ids)

    output_tracks = tracker.update(detection_bboxes, detection_confidences, detection_class_ids)


    for track in output_tracks:
        frame, id, bb_left, bb_top, bb_width, bb_height, confidence, x, y, z = track
        assert len(track) == 10
        print(track)

Looks like someone else had the same issue #21 (comment)

Question about Yolov3+Trackers

Hi,
Thank you for the great work. I am using YOLOv3 + DeepSORT but it is very slow. My target is to use detection + tracking in real time.
1. What kind of trackers are you using? Are they traditional methods or deep-learning-based trackers?
2. Can I use this project on an Nvidia Xavier NX? If yes, are there any additional requirements?
3. What about the speed of yolov3 + SimpleTracker and yolov3 + SimpleTracker2?
4. What is the difference between SimpleTracker and SimpleTracker2?

Using Hyperparameter tuning library (Guild.ai or Ray tune)

hi @adipandas

First of all, congratulations on such great work. This repository has helped me a lot, not only in a very specific tracking job, but also in gaining a much better understanding of the general tracking process as a whole.

As a request, I wanted to ask if you could guide me on using a hyperparameter tuning library like Guild.ai or Ray Tune, in case you have any experience with them or any other, for that matter. I'm very curious as to how these tuners can improve my tracking tasks and reduce errors (which I seem to have many of in my use case) by tuning the hyperparameters and observing the different results.

Thanks in advance 🙂

Speed is very slow

Hi sir,
I am using tracking_yolo_model on a GTX 1080 Ti, but the inference speed is very, very slow, around 2 FPS. Is it running in CPU mode?

x, y, z values always give -1, -1, -1

Hi, thanks for the nice work.

I tried all the trackers with my own data and my own model, and every return gave the results below. Actually, I can calculate the x, y (centroid) myself, but I worry it may cause a wrong calculation of distance. Here below is one of the results:

<class 'list'> 40
1 0 1232 880 187.0 57.0 0.99609375 -1 -1 -1
1 1 1197 144 192.0 57.0 0.9921875 -1 -1 -1
1 2 1197 207 186.0 52.0 0.984375 -1 -1 -1
1 3 1218 316 178.0 47.0 0.984375 -1 -1 -1
1 4 1237 833 183.0 56.0 0.98046875 -1 -1 -1
1 5 1232 616 187.0 45.0 0.9765625 -1 -1 -1
1 6 1230 724 191.0 46.0 0.9765625 -1 -1 -1
1 7 1726 304 184.0 88.0 0.9765625 -1 -1 -1
1 8 1197 97 186.0 54.0 0.96875 -1 -1 -1
1 9 1031 172 163.0 55.0 0.96484375 -1 -1 -1
1 10 1211 257 175.0 48.0 0.95703125 -1 -1 -1
1 11 1418 854 253.0 59.0 0.953125 -1 -1 -1
1 12 1211 362 181.0 48.0 0.9453125 -1 -1 -1
1 13 1235 782 187.0 50.0 0.93359375 -1 -1 -1
1 14 1048 370 159.0 45.0 0.9296875 -1 -1 -1
1 15 1081 852 153.0 50.0 0.9296875 -1 -1 -1
1 16 1034 232 156.0 46.0 0.91015625 -1 -1 -1
1 17 1048 427 159.0 41.0 0.91015625 -1 -1 -1
1 18 1383 51 257.0 62.0 0.91015625 -1 -1 -1
1 19 1223 581 187.0 44.0 0.890625 -1 -1 -1
1 20 1068 613 156.0 45.0 0.87890625 -1 -1 -1
1 21 1224 411 179.0 46.0 0.8515625 -1 -1 -1
1 22 1230 521 179.0 44.0 0.8359375 -1 -1 -1
1 23 1791 18 122.0 65.0 0.8125 -1 -1 -1
1 24 1084 812 149.0 51.0 0.75390625 -1 -1 -1
1 25 1038 151 162.0 57.0 0.74609375 -1 -1 -1
1 26 1249 678 183.0 47.0 0.74609375 -1 -1 -1
1 27 982 700 82.0 37.0 0.734375 -1 -1 -1
1 28 1056 457 151.0 46.0 0.734375 -1 -1 -1
1 29 1085 762 146.0 48.0 0.72265625 -1 -1 -1
1 30 1050 324 151.0 45.0 0.7109375 -1 -1 -1
1 31 1049 270 148.0 47.0 0.67578125 -1 -1 -1
1 32 980 561 80.0 41.0 0.6484375 -1 -1 -1
1 33 1067 516 158.0 39.0 0.6484375 -1 -1 -1
1 34 1067 570 158.0 39.0 0.625 -1 -1 -1
1 35 1069 657 153.0 42.0 0.625 -1 -1 -1
1 36 977 476 73.0 39.0 0.52734375 -1 -1 -1
1 37 985 609 81.0 38.0 0.52734375 -1 -1 -1
1 38 973 204 73.0 47.0 0.41796875 -1 -1 -1
1 39 987 802 86.0 46.0 0.40234375 -1 -1 -1

EDIT:
At first I thought the x, y values had become correct, so I closed this, but then I replaced the print call as below and the result was again -1, -1, -1, so I reopened it.
The iteration looks like:

for tr in trdata:
      frame, id, xmin, ymin, w, h, scor, cx, cy, cz = tr
      xmin, ymin = int(xmin), int(ymin)
      xmax, ymax = int(xmin + w), int(ymin + h)

      print(frame, id, xmin, ymin, w, h, scor, cx, cy, cz)  # --> 1 39 987 802 86.0 46.0 0.40234375 -1 -1 -1
      cx,cy = xmin + int(w/2), ymin + int(h/2)

SORT Tracker reassigns new ID to an already tracked object

Hello, I am trying to use the tracker to track custom classes (tree, wooden post, metal post). I am running the mot_yolov3.py file with minor changes. I observe that, a few frames after an object is detected, two things happen:

  • the tracker loses track of the object
  • when the object reappears, the tracker assigns it a new ID

[screenshot]

If you take a look at the image you will see that the metal post is ID 10 at frame 43 and ID 11 at frame 54. In between these two frames, the tracker either lost track (did not display the object) or tracked it with the correct ID.

How can this issue be fixed?
Thanks in advance!

EDIT: I am also facing the issue pointed out in #22. I will implement the change @cnavarrete mentioned and see if that fixes it.

Does it use GPU?

I ran the tracking-yolo-model notebook and noticed my GPU utilization is 0% throughout. So the question is: how do I make it use the GPU during inference?

How did you choose the Process & Measurement noise?

I've been using the SORT tracker since it fits my problem, but I have the following issue: even though all my objects move from top to bottom in a mostly straight line (no exceptions), in some frames the predictions generate a bbox that moves up, which I find very odd. This kind of bbox is too far away from the detected bbox, so it counts as a new object to track. Is there a way to bias the predictions towards a direction? How did you choose your parameters for the Kalman filter?

Check this snippet: all it does is simulate a bbox that moves 45-50 px down (I had to simulate some noise) for 1000 frames. The IOU threshold is 0.2, so I think it should work, but it does not. To print the predicted bbox I just added a line to sort_tracker.py after:

bb = self.tracks[track_id].predict()

This is only to demonstrate that in some frames the prediction goes up. The first 2-3 frames are critical for my app, which is a problem since most of the time this frame's bbox Y coordinate is incorrectly predicted as going up.

from motrackers import SORT
from argparse import Namespace
import numpy as np
import random

# x, y, w, h
bbox = [100, 100, 200, 100]

params = {
    'max_lost': 0,
    'tracker_output_format': 'mot_challenge',
    'iou_threshold': 0.2,
    'process_noise_scale': 1,
    'measurement_noise_scale': 1,
    'time_step': 1
}

params = Namespace(**params)

tracker = SORT(
            max_lost=params.max_lost,
            tracker_output_format=params.tracker_output_format,
            iou_threshold=params.iou_threshold,
            process_noise_scale=params.process_noise_scale,
            measurement_noise_scale=params.measurement_noise_scale,
            time_step=1
        )

for i in range(0, 1000):
    detection_bboxes = []
    detection_confidences = []
    detection_class_ids = []

    detection_bboxes.append(bbox)
    detection_confidences.append(1)
    detection_class_ids.append(0)

    detection_bboxes = np.array(detection_bboxes)
    detection_confidences = np.array(detection_confidences)
    detection_class_ids = np.array(detection_class_ids)
    
    tracks = tracker.update(detection_bboxes, detection_confidences, detection_class_ids)

    print(tracks)

    bbox = [bbox[0] + random.randint(-1, 1), bbox[1] + random.randint(45, 50), 200, 100]

Also, I'm pretty surprised that, in my actual app, some predicted bboxes occasionally contain negative values for width and/or height; too bad I couldn't simulate it here.
