zhongdao / towards-realtime-mot
Joint Detection and Embedding for fast multi-object tracking
License: MIT License
I get the error [INFO]: invalid load key, '\x00' when running demo.py,
inside the JDETracker constructor while loading the pretrained weights:
tracker = JDETracker(opt, frame_rate=frame_rate)
  File "tracker/multitracker.py", line 158, in __init__
    self.model.load_state_dict(torch.load(opt.weights, map_location='cpu')['model'], strict=False)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 386, in load
    return _load(f, map_location, pickle_module, **pickle_load_args)
  File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 563, in _load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '\x00'.
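For what it's worth, `invalid load key, '\x00'` usually means the weights file is corrupted or was only partially downloaded (e.g. an error page saved as the .pt file). A quick sanity check, independent of this repo, is to look at the file's first bytes before handing it to torch.load:

```python
def looks_like_checkpoint(path):
    """Rough check: zip-based torch.save files start with b"PK", legacy
    pickle-based ones with the pickle protocol marker b"\x80"; leading
    null bytes point to a truncated or corrupted download."""
    with open(path, "rb") as f:
        magic = f.read(2)
    return magic == b"PK" or magic[:1] == b"\x80"
```

If this returns False for your weights file, re-download it from the link in the README rather than debugging the loader.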
I'm sorry, this may be a very naive question, but I couldn't run the demo because utils.py couldn't import maskrcnn_benchmark.layers.nms as nms.
$ python demo.py --input-video path/to/your/input/video --weights path/to/model/weights
Traceback (most recent call last):
  File "demo.py", line 8, in <module>
    from tracker.multitracker import JDETracker
  File "/home/hoge/work/fisheye/Towards-Realtime-MOT/tracker/multitracker.py", line 10, in <module>
    from utils.utils import *
  File "/home/hoge/work/fisheye/Towards-Realtime-MOT/utils/utils.py", line 14, in <module>
    import maskrcnn_benchmark.layers.nms as nms
ModuleNotFoundError: No module named 'maskrcnn_benchmark'
I tried to install it by following the INSTALL.md instructions at
https://github.com/facebookresearch/maskrcnn-benchmark/blob/master/INSTALL.md
and I managed to install it properly.
But how should I edit the import statement, i.e. import maskrcnn_benchmark.layers.nms as nms?
My current working tree structure is like this:
dir(fisheye)/
├── Towards-Realtime-MOT (git cloned dir)
├── maskrcnn-benchmark (git cloned dir)
In that case I tried to edit the utils.py import path for maskrcnn myself, with beginner Python knowledge, like below:
from ..maskrcnn-benchmark import maskrcnn_benchmark.layers.nms as nms
However, the import failed with this error message:
from ..maskrcnn-benchmark import maskrcnn_benchmark.layers.nms as nms
SyntaxError: invalid syntax
Can someone help me please?
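A relative import like `from ..maskrcnn-benchmark import ...` can never work: package names cannot contain hyphens, and relative imports only see packages already on Python's import path. With the two clones side by side as in the tree above, one workaround, sketched here under the assumption that utils.py lives at Towards-Realtime-MOT/utils/utils.py, is to put the sibling clone on sys.path and leave the original import line unchanged:

```python
import os
import sys

# utils.py sits at <fisheye>/Towards-Realtime-MOT/utils/utils.py, so the
# sibling clone is two directories up, then into maskrcnn-benchmark
# (adjust the relative hops if your tree differs).
HERE = os.path.dirname(os.path.abspath(__file__))
MASKRCNN_DIR = os.path.abspath(os.path.join(HERE, "..", "..", "maskrcnn-benchmark"))
sys.path.insert(0, MASKRCNN_DIR)

# With the package root on sys.path, the original line works as-is:
# import maskrcnn_benchmark.layers.nms as nms
```

Note that a successful `pip install` of maskrcnn-benchmark into the same Python environment makes this unnecessary, so it is also worth checking that the install landed in the interpreter you run demo.py with.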
Thanks for this great job! When I run the demo, I get an error:
python3 demo.py --input-video /home/lyp/Videos/deploy1-155175756,155175757.mp4 --weights /home/lyp/project/mot-project/towards-realtime-mot/Towards-Realtime-MOT/jde.uncertainty.pt --output-format video --output-root /home/lyp/project/mot-project/towards-realtime-mot/
/usr/local/lib/python3.5/dist-packages/sklearn/utils/linear_assignment_.py:21: DeprecationWarning: The linear_assignment_ module is deprecated in 0.21 and will be removed from 0.23. Use scipy.optimize.linear_sum_assignment instead.
  DeprecationWarning)
Traceback (most recent call last):
  File "demo.py", line 8, in <module>
    from tracker.multitracker import JDETracker
  File "/home/lyp/project/mot-project/towards-realtime-mot/Towards-Realtime-MOT/tracker/multitracker.py", line 14, in <module>
    from tracker import matching
  File "/home/lyp/project/mot-project/towards-realtime-mot/Towards-Realtime-MOT/tracker/matching.py", line 7, in <module>
    from utils.cython_bbox import bbox_ious
ImportError: cannot import name 'bbox_ious'
I guess my Python version does not match the cython_bbox.cpython-36m-x86_64-linux-gnu.so build.
I found a Python 3.5 build at https://github.com/microsoft/CNTK/tree/master/Examples/Image/Detection/utils/cython_modules, but that file does not have the bbox_ious function.
Can you tell me which directory cython_bbox.cpython-36m-x86_64-linux-gnu.so belongs in? I want a Python 3.5 build, thanks!
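While looking for a matching build, a pure-NumPy stand-in for `bbox_ious` may be enough to unblock the demo. This sketch assumes the compiled version takes two float arrays of (x1, y1, x2, y2) boxes and returns the pairwise IoU matrix (the usual cython_bbox contract, including the +1 pixel convention), so treat it as a guess at the semantics rather than the repo's exact code:

```python
import numpy as np

def bbox_ious(boxes, query_boxes):
    """Pairwise IoU between (x1, y1, x2, y2) boxes; a slow, pure-NumPy
    stand-in for the compiled utils.cython_bbox.bbox_ious."""
    n, k = boxes.shape[0], query_boxes.shape[0]
    ious = np.zeros((n, k), dtype=np.float32)
    for j in range(k):
        # area of the query box (inclusive pixel coordinates)
        qa = ((query_boxes[j, 2] - query_boxes[j, 0] + 1) *
              (query_boxes[j, 3] - query_boxes[j, 1] + 1))
        for i in range(n):
            iw = min(boxes[i, 2], query_boxes[j, 2]) - max(boxes[i, 0], query_boxes[j, 0]) + 1
            if iw > 0:
                ih = min(boxes[i, 3], query_boxes[j, 3]) - max(boxes[i, 1], query_boxes[j, 1]) + 1
                if ih > 0:
                    ba = ((boxes[i, 2] - boxes[i, 0] + 1) *
                          (boxes[i, 3] - boxes[i, 1] + 1))
                    ious[i, j] = iw * ih / float(qa + ba - iw * ih)
    return ious
```

Dropping a function like this into utils/ (or rebuilding the extension with `python setup.py build_ext --inplace` against your own interpreter) avoids depending on a prebuilt .so for one specific Python version.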
I ran into the following error:
x/$ python demo.py --input-video input/MOT16-11.mp4 --weights weights/jde.1088x608.uncertainty.pt --output-format video --output-root output/
/home/x/anaconda3/envs/Towards_MOT/lib/python3.6/site-packages/sklearn/utils/linear_assignment_.py:21: DeprecationWarning: The linear_assignment_ module is deprecated in 0.21 and will be removed from 0.23. Use scipy.optimize.linear_sum_assignment instead.
DeprecationWarning)
Namespace(cfg='cfg/yolov3.cfg', conf_thres=0.5, img_size=(1088, 608), input_video='input/MOT16-11.mp4', iou_thres=0.5, min_box_area=200, nms_thres=0.4, output_format='video', output_root='output/', track_buffer=30, weights='weights/jde.1088x608.uncertainty.pt')
2019-10-28 20:05:44 [INFO]: start tracking...
The value: vw=960, vh=540 dw=1088 dh=608
Lenth of the video: 900 frames
2019-10-28 20:05:46 [INFO]: Processing frame 0 (100000.00 fps)
2019-10-28 20:05:47 [INFO]: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
ffmpeg: error while loading shared libraries: libopencv_core.so.2.4: cannot open shared object file: No such file or directory
How can I solve this? Thank you!
Hi,
I'm really thankful for this wonderful work.
I tested on MOT16-01 and found that the reported fps gradually increases, like 7.66, 10.00, 11.49, 12.32...
Is this related to the fps computation method or to some other reason? Do you have any ideas?
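The ramp-up is most likely just the reporting method: if the displayed fps is `1 / average_time` over all frames so far, the first frame (which includes model and CUDA warm-up) drags the average down, and the readout climbs toward the steady-state rate even though per-frame speed is constant. A minimal sketch of that effect (this Timer is hypothetical, not the repo's class):

```python
class Timer:
    """Running average of per-frame time; the fps readout is 1 / average."""
    def __init__(self):
        self.total = 0.0
        self.calls = 0

    def add(self, seconds):
        self.total += seconds
        self.calls += 1

    @property
    def average_time(self):
        return self.total / max(1, self.calls)

timer = Timer()
timer.add(1.0)                  # first frame: includes warm-up cost
readouts = []
for _ in range(40):
    timer.add(0.05)             # steady state: 20 fps per frame
    readouts.append(1.0 / timer.average_time)
# readouts rises monotonically toward 20 fps with no real speed change
```

So a climbing fps curve on a short sequence does not necessarily mean the tracker itself is speeding up.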
When I run demo.py, I get this:
from utils.cython_bbox import bbox_ious
ModuleNotFoundError: No module named 'utils.cython_bbox'
Is there anything I have to download or compile?
Hoping for your help.
Hi there, excellent work with real time MOT!
How can we run demo.py using 864x408 images as input? Do we need another trained model, something like JDE-864x408-uncertainty?
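One thing to check first, independent of whether a retrained model is needed: the YOLOv3-style backbone downsamples by a factor of 32, so both input dimensions normally have to be multiples of 32, and 408 is not. A quick check (the stride value here is the standard YOLOv3 assumption, not something verified against this repo's cfg):

```python
def valid_input_size(width, height, stride=32):
    """True if both dimensions divide evenly by the network stride."""
    return width % stride == 0 and height % stride == 0

valid_input_size(1088, 608)  # True  - the shipped model's size
valid_input_size(864, 408)   # False - 408 is not a multiple of 32
valid_input_size(864, 480)   # True  - a nearby size that fits
```

Even with a valid size, the anchors in the cfg are tuned for the training resolution, so accuracy at other sizes may differ.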
The file "CUHKSYSU/images/s6933.jpg" does not exist, but "CUHKSYSU/labels_with_ids/s6933.txt" does, causing a mismatch. Can you solve this problem, please?
When I run demo.py, I hit an issue like this:
Traceback (most recent call last):
  File "demo.py", line 8, in <module>
    from tracker.multitracker import JDETracker
  File "/home/thanhpham/PycharmProjects/HumanDetection01/venv/MOT/tracker/multitracker.py", line 13, in <module>
    from models import *
  File "/home/thanhpham/PycharmProjects/HumanDetection01/venv/MOT/models.py", line 8, in <module>
    from utilss.syncbn import SyncBN
ImportError: cannot import name 'SyncBN'
Following the author's advice, I renamed the utils folder to utilss in my code.
The imports in my demo.py file:
from tracker.multitracker import JDETracker
from utilss import visualization as vis
from utilss.utilss import *
from utilss.io import read_results
from utilss.log import logger
from utilss.timer import Timer
from utilss.evaluation import Evaluator
import utilss.datasets as datasets
import torch
from track import eval_seq
When I run the command line
bash compile.sh
to install syncbn, I face an issue like this:
Traceback (most recent call last):
  File "setup.py", line 2, in <module>
    from torch.utilss.cpp_extension import CUDAExtension, BuildExtension
ModuleNotFoundError: No module named 'torch'
~/PycharmProjects/HumanDetection01/venv/MOT/utilss/syncbn
However, I have already installed torch.
Please give me some advice.
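A frequent cause of `No module named 'torch'` inside a build script, even though torch is installed, is that compile.sh invokes a different interpreter than the one your virtualenv uses (for example the system `python` instead of the venv's). A quick diagnostic you can run both from your shell and from inside the script, assuming nothing about this repo:

```python
import importlib.util
import sys

# Which interpreter is running, and can it see torch?
print("interpreter:", sys.executable)
print("torch found:", importlib.util.find_spec("torch") is not None)
```

If the script reports a different interpreter than `which python` in your activated environment, edit compile.sh to call that interpreter explicitly when running setup.py.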
Hi, thanks for your awesome code, but when I tried demo.py I got this error, and I found that there is nothing in the syncbn folder.
I ran the following command, but server errors occurred.
python demo.py --input-video ./results/test.mp4 --weights ./jde.1088x608.uncertainty.pt --output-format video --output-root ./results/
Unable to init server: Could not connect: Connection refused
Unable to init server: Could not connect: Connection refused
(demo.py:14943): Gdk-CRITICAL **: 01:48:57.208: gdk_cursor_new_for_display: assertion 'GDK_IS_DISPLAY (display)' failed
Namespace(cfg='./cfg/yolov3.cfg', conf_thres=0.5, img_size=(1088, 608), input_video='./results/test.mp4', iou_thres=0.5, min_box_area=200, nms_thres=0.4, output_format='video', output_root='./results/', track_buffer=30, weights='./jde.1088x608.uncertainty.pt')
2019-10-28 01:48:57 [INFO]: start tracking...
Lenth of the video: 1500 frames
2019-10-28 01:48:57 [INFO]: 'module' object is not callable
ffmpeg version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)
configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
libavutil 55. 78.100 / 55. 78.100
libavcodec 57.107.100 / 57.107.100
libavformat 57. 83.100 / 57. 83.100
libavdevice 57. 10.100 / 57. 10.100
libavfilter 6.107.100 / 6.107.100
libavresample 3. 7. 0 / 3. 7. 0
libswscale 4. 8.100 / 4. 8.100
libswresample 2. 9.100 / 2. 9.100
libpostproc 54. 7.100 / 54. 7.100
[image2 @ 0x55815dee38c0] Could find no file with path './results/frame/%05d.jpg' and index in the range 0-4
./results/frame/%05d.jpg: No such file or directory
Hello! When I wanted to run the demo, I found there is no weights file. Is the weights file generated by training? Can I generate the weights file by running train now, and then run the demo?
I can run demo.py successfully, but result.mp4 is the same as the input video; there is no detection or tracking. Why does this happen?
Firstly, thanks for your great work. When I tried demo.py I got this error; could you give me some advice? Thank you very much.
Hi, thanks for your great contribution. I wrote a simpler demo, which is good for newcomers, but I am wondering: is there any Faster R-CNN detection model, or another model, that is better than YOLOv3?
import os.path as osp
import cv2
import logging
import argparse
import motmetrics as mm
import numpy as np
import torch

from tracker.multitracker import JDETracker
from utils import visualization as vis
from utils.log import logger
from utils.timer import Timer
from utils.evaluation import Evaluator
import utils.datasets as datasets
from utils.utils import *


class opt_c(object):
    def __init__(self):
        self.img_size = (1088, 608)
        self.cfg = "cfg/yolov3.cfg"
        self.weights = "/home/apptech/Towards-Realtime-MOT/jde.1088x608.uncertainty.pt"
        self.conf_thres = 0.5
        self.track_buffer = 30
        self.nms_thres = 0.4
        self.min_box_area = 200

opt = opt_c()


def letterbox(img, height=608, width=1088, color=(127.5, 127.5, 127.5)):
    # resize a rectangular image to a padded rectangular
    shape = img.shape[:2]  # shape = [height, width]
    ratio = min(float(height) / shape[0], float(width) / shape[1])
    new_shape = (round(shape[1] * ratio), round(shape[0] * ratio))  # new_shape = [width, height]
    dw = (width - new_shape[0]) / 2  # width padding
    dh = (height - new_shape[1]) / 2  # height padding
    top, bottom = round(dh - 0.1), round(dh + 0.1)
    left, right = round(dw - 0.1), round(dw + 0.1)
    img = cv2.resize(img, new_shape, interpolation=cv2.INTER_AREA)  # resized, no border
    img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # padded rectangular
    return img, ratio, dw, dh


def eval_seq(opt, save_dir=None, show_image=True, frame_rate=30):
    tracker = JDETracker(opt, frame_rate=frame_rate)
    results = []
    frame_id = 0
    cam = cv2.VideoCapture(0)
    while True:
        _, img0 = cam.read()
        img, _, _, _ = letterbox(img0)
        # Normalize RGB
        img = img[:, :, ::-1].transpose(2, 0, 1)
        img = np.ascontiguousarray(img, dtype=np.float32)
        img /= 255.0
        # run tracking
        blob = torch.from_numpy(img).cuda().unsqueeze(0)
        online_targets = tracker.update(blob, img0)
        online_tlwhs = []
        online_ids = []
        for t in online_targets:
            tlwh = t.tlwh
            tid = t.track_id
            vertical = tlwh[2] / tlwh[3] > 1.6
            if tlwh[2] * tlwh[3] > opt.min_box_area and not vertical:
                online_tlwhs.append(tlwh)
                online_ids.append(tid)
        # save results
        results.append((frame_id + 1, online_tlwhs, online_ids))
        online_im = vis.plot_tracking(img0, online_tlwhs, online_ids, frame_id=frame_id)
        cv2.imshow('online_im', online_im)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
        frame_id += 1
    cam.release()
    cv2.destroyAllWindows()


def main():
    # run tracking
    eval_seq(opt)


if __name__ == '__main__':
    main()
Thank you for your work. I encountered some errors while running demo.py; can you help me?
`File "/usr/local/lib/python3.5/dist-packages/apex/interfaces.py", line 10, in <module>
    class ApexImplementation(object):
  File "/usr/local/lib/python3.5/dist-packages/apex/interfaces.py", line 14, in ApexImplementation
    implements(IApex)
  File "/usr/local/lib/python3.5/dist-packages/zope/interface/declarations.py", line 483, in implements
    raise TypeError(_ADVICE_ERROR % 'implementer')
TypeError: Class advice impossible in Python3. Use the @implementer class decorator instead.`
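This particular traceback usually means the `apex` that got imported is the unrelated PyPI package of the same name, not NVIDIA's mixed-precision library (which provides `apex.amp`). A hedged way to tell them apart, using only the import machinery:

```python
import importlib.util

def nvidia_apex_installed():
    """NVIDIA's apex exposes an `amp` submodule; the unrelated
    PyPI "apex" package does not."""
    if importlib.util.find_spec("apex") is None:
        return False
    try:
        # Resolving the submodule imports the parent package, which can
        # itself raise when the wrong "apex" is installed.
        return importlib.util.find_spec("apex.amp") is not None
    except Exception:
        return False
```

If this returns False while `pip show apex` reports a package, uninstalling that package and building NVIDIA apex from its GitHub repository is the usual fix.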
@Zhongdao Hi, thank you for your sharing. I have been struggling with how little information there is on multi-object tracking. Could you point me to your blog, or to some researchers or blogs about multi-object tracking? Thank you very much!
Hi, this is wonderful work on MOT, but when I run demo.py I only get 3-4 fps. The video I ran is TownCentreXVID.avi, which is 1080p. I checked the code and there is a resize, so I don't know where the difference comes from.
2019-11-03 16:49:03 [INFO]: start tracking...
Lenth of the video: 900 frames
2019-11-03 16:49:05 [INFO]: CUDA error: out of memory
ffmpeg: error while loading shared libraries: libopencv_core.so.2.4: cannot open shared object file: No such file or directory
Help, please!
Hello, I would like to ask whether your published results can be reproduced by training on the training data you released. Thank you!
flake8 testing of https://github.com/Zhongdao/Towards-Realtime-MOT on Python 3.8.0
$ flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
./utils/utils.py:421:12: F821 undefined name 'cpu_soft_nms'
keep = cpu_soft_nms(np.ascontiguousarray(dets, dtype=np.float32),
^
1 F821 undefined name 'cpu_soft_nms'
1
E901,E999,F821,F822,F823 are the "showstopper" flake8 issues that can halt the runtime with a SyntaxError, NameError, etc. These 5 are different from most other flake8 issues, which are merely "style violations": useful for readability, but they do not affect runtime safety.
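Since `cpu_soft_nms` is referenced in utils/utils.py without a visible import, one defensive pattern (a sketch, not the repo's actual fix; the module path below is hypothetical) is a guarded import, so a missing compiled extension degrades gracefully instead of raising NameError at call time:

```python
try:
    from nms.cpu_nms import cpu_soft_nms  # hypothetical compiled extension
except ImportError:
    cpu_soft_nms = None

def soft_nms_available():
    """Callers can branch to plain NMS when the extension is absent."""
    return cpu_soft_nms is not None
```

With this guard, flake8's F821 disappears and the soft-NMS code path can check availability explicitly.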
I wonder what hardware specification was used in the experiments to get the fps the author claims.
Hello, I get the following problem when loading the model:
2019-10-11 13:38:53 [INFO]: "filename 'storages' not found"
ffmpeg version 2.8.15-0ubuntu0.16.04.1 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.10) 20160609
configuration: --prefix=/usr --extra-version=0ubuntu0.16.04.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv
libavutil 54. 31.100 / 54. 31.100
libavcodec 56. 60.100 / 56. 60.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 40.101 / 5. 40.101
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
[image2 @ 0x9bf940] Could find no file with path '/media/xuemin/CE1E49B3007246A9/results/frame/%05d.jpg' and index in the range 0-4
/media/xuemin/CE1E49B3007246A9/results/frame/%05d.jpg: No such file or directory
Do you know what went wrong here? Thanks!
The author's pedestrian tracking algorithm is excellent, better than YOLOv3 + Deep SORT, but the detection rate does not seem to be high?
Hi all,
I spent two weeks getting demo.py to run, and I have a little bit of experience now. If you need help, please feel free to contact me via skype: quangthanh1987.
Thanks and Best Regards,
Thanks for your work and in your project I see some import errors, such as SyncBN in models.py, _C in nms.py, and maskrcnn_benchmark in utils.py. Can you provide these files?
Cannot open DATASET_ZOO.MD.
Thanks for this code. I'm trying to run demo.py following all the given instructions, but I got the following two errors. Can you please help me with this?
2019-10-09 11:56:11 [INFO]: start tracking...
Lenth of the video: 4706 frames
2019-10-09 11:56:11 [INFO]: [Errno 2] No such file or directory: 'weights/latest.pt'
ffmpeg version 4.2 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7.3.0 (crosstool-NG 1.23.0.449-a04d0)
configuration: --prefix=******* --cc=/home/conda/feedstock_root/build_artifacts/ffmpeg_1566210161358/_build_env/bin/x86_64-conda_cos6-linux-gnu-cc --disable-doc --disable-openssl --enable-avresample --enable-gnutls --enable-gpl --enable-hardcoded-tables --enable-libfreetype --enable-libopenh264 --enable-libx264 --enable-pic --enable-pthreads --enable-shared --enable-static --enable-version3 --enable-zlib --enable-libmp3lame
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
[image2 @ 0x55b2a58bdcc0] Could find no file with path 'results/frame/%05d.jpg' and index in the range 0-4
results/frame/%05d.jpg: No such file or directory
def eval_seq(opt, dataloader, data_type, result_filename, save_dir=None, show_image=True, frame_rate=30):
    if save_dir:
        mkdir_if_missing(save_dir)
    tracker = JDETracker(opt, frame_rate=frame_rate)
    timer = Timer()
    results = []
    frame_id = 0
    for path, img, img0 in dataloader:
        if frame_id % 20 == 0:
            logger.info('Processing frame {} ({:.2f} fps)'.format(frame_id, 1. / max(1e-5, timer.average_time)))
The dataloader can't be iterated over.
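For reference, `eval_seq` only needs `dataloader` to be an iterable yielding `(path, img, img0)` triples; if the loop body never runs, the iterable is most likely empty (for example, the video failed to open). A minimal stand-in showing the assumed contract (shapes and preprocessing here are illustrative, not the repo's exact ones):

```python
import numpy as np

def fake_dataloader(n_frames=3, height=608, width=1088):
    """Yields (path, img, img0) triples like the repo's loader is expected to:
    img0 is the raw HWC BGR frame, img the normalized CHW network input."""
    for i in range(n_frames):
        img0 = np.zeros((height, width, 3), dtype=np.uint8)
        img = img0[:, :, ::-1].transpose(2, 0, 1).astype(np.float32) / 255.0
        yield "frame_%05d.jpg" % i, img, img0

frames = list(fake_dataloader())
```

Swapping such a stand-in for the real loader is a quick way to confirm whether the problem is in `eval_seq` itself or in the video-reading side.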
Since the project requires maskrcnn-benchmark, it only works on GPU, will there be a CPU version?
Will you support for vehicle dataset too, such as UA-DETRAC?
By the way, I created a UA-DETRAC to Caltech Pedestrian dataset converter, here:
https://github.com/kikirizki/DETRAC_dataset2Caltech_dataset
Traceback (most recent call last):
  File "demo.py", line 8, in <module>
    from tracker.multitracker import JDETracker
  File "/HDD/lq/data/Towards-Realtime-MOT/tracker/multitracker.py", line 10, in <module>
    from utils.utils import *
  File "/HDD/lq/data/Towards-Realtime-MOT/utils/utils.py", line 13, in <module>
    import maskrcnn_benchmark.layers.nms as nms
  File "/HDD/lq/data/Towards-Realtime-MOT/maskrcnn-benchmark/maskrcnn_benchmark/layers/__init__.py", line 10, in <module>
    from .nms import nms
  File "/HDD/lq/data/Towards-Realtime-MOT/maskrcnn-benchmark/maskrcnn_benchmark/layers/nms.py", line 5, in <module>
    from apex import amp
  File "/usr/local/lib/python3.6/dist-packages/apex/__init__.py", line 18, in <module>
    from apex.interfaces import (ApexImplementation,
  File "/usr/local/lib/python3.6/dist-packages/apex/interfaces.py", line 10, in <module>
    class ApexImplementation(object):
  File "/usr/local/lib/python3.6/dist-packages/apex/interfaces.py", line 14, in ApexImplementation
    implements(IApex)
  File "/usr/lib/python3/dist-packages/zope/interface/declarations.py", line 485, in implements
    raise TypeError(_ADVICE_ERROR % 'implementer')
TypeError: Class advice impossible in Python3. Use the @implementer class decorator instead.
Is there a way somehow to change the tracking from pedestrian detection to vehicle detection without retraining the network?
Hi, this is a wonderful work on MOT.
I have some questions to the architecture.
The paper indicates that the network uses FPN as the base architecture, and when I studied Feature Pyramid Networks (Lin et al. 2017) the backbone was ResNet. But the implemented backbone seems to be YOLOv3.
Does "choose FPN as base architecture" refer just to the concept, or to the specific network?
I see that YOLOv3 has feature maps at three different scales; is that what is meant by FPN here?
I can't find where the feature maps at different scales are fused by skip connections; can somebody point to where that code is?
Another question: are the anchors chosen per YOLO layer, at different scales, like RPN anchors?
thanks
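On the fusion question: in YOLOv3 the "FPN part" is the cfg's upsample and route layers: the coarse feature map is 2x nearest-neighbour upsampled and channel-concatenated with a finer backbone map, which is the top-down pathway plus lateral connection of FPN (with concatenation instead of element-wise addition). A shape-level sketch in NumPy (channel counts are illustrative, not the repo's exact values):

```python
import numpy as np

coarse = np.zeros((1, 256, 19, 34), dtype=np.float32)  # stride-32 feature map
fine = np.zeros((1, 128, 38, 68), dtype=np.float32)    # stride-16 feature map

# 2x nearest-neighbour upsample of the coarse map (the cfg's "upsample" layer)
up = coarse.repeat(2, axis=2).repeat(2, axis=3)

# channel concatenation with the finer map (the cfg's "route" layer)
fused = np.concatenate([up, fine], axis=1)  # shape (1, 384, 38, 68)
```

So looking for explicit skip-connection code in the Python source can be misleading: the fusion is expressed declaratively in cfg/yolov3.cfg via those layer types.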
I get the result frames, but my player can't open result.mp4.
import motmetrics as mm
ModuleNotFoundError: No module named 'motmetrics'
Hello! The above error occurred while I was running demo.py, but I cannot find the motmetrics module. Why?
I think the loss function does not consider a multi-class loss in the YOLOLayer. How can I achieve multi-class training?
Traceback (most recent call last):
  File "demo.py", line 8, in <module>
    from tracker.multitracker import JDETracker
  File "/home/fan60526/Towards-Realtime-MOT/tracker/multitracker.py", line 10, in <module>
    from utils.utils import *
  File "/home/fan60526/Towards-Realtime-MOT/utils/utils.py", line 13, in <module>
    import maskrcnn_benchmark.layers.nms as nms
  File "/home/fan60526/maskrcnn-benchmark/maskrcnn_benchmark/layers/__init__.py", line 10, in <module>
    from .nms import nms
  File "/home/fan60526/maskrcnn-benchmark/maskrcnn_benchmark/layers/nms.py", line 3, in <module>
    from maskrcnn_benchmark import _C
ImportError: /home/fan60526/maskrcnn-benchmark/maskrcnn_benchmark/_C.cpython-37m-x86_64-linux-gnu.so: undefined symbol: THCudaFree
Hello, I encountered the above error when running demo.py. Please help me solve it; I am using Python 3.7 with CUDA 10.1.
Can I reduce the image size without retraining the model?
opencv-python
ffmpeg
scikit-learn
numba
motmetrics
Hey, thanks for your excellent work. I want to train this repo on my own dataset; could you give instructions on how the training dataset should be composed? Hoping for an answer!
python demo.py --input-video test/MOT16-11.mp4 --weights weights/jde.uncertainty.pt --output-format text --output-root results/
Namespace(cfg='cfg/yolov3.cfg', conf_thres=0.5, img_size=(1088, 608), input_video='test/MOT16-11.mp4', iou_thres=0.5, min_box_area=200, nms_thres=0.4, output_format='text', output_root='results/', track_buffer=30, weights='weights/jde.uncertainty.pt')
2019-10-15 10:35:17 [INFO]: start tracking...
Lenth of the video: 900 frames
2019-10-15 10:35:21 [INFO]: Processing frame 0 (100000.00 fps)
2019-10-15 10:35:21 [INFO]: too many indices for array
No result was generated. Why is there "too many indices for array"?
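The `too many indices for array` error at frame 0 often means a frame with zero detections produced a 1-D (or empty) array that downstream code slices as 2-D. A generic guard, sketched without knowledge of the repo's exact detection shapes (the 6-column layout is an assumption):

```python
import numpy as np

def as_detection_matrix(dets):
    """Coerce a possibly-empty detection result to shape (N, 6) so later
    code can slice dets[:, :4] without an IndexError."""
    dets = np.asarray(dets, dtype=np.float32)
    if dets.ndim != 2 or dets.size == 0:
        return np.zeros((0, 6), dtype=np.float32)
    return dets
```

Checking whether your input video actually yields detections on the first frame (e.g. by lowering --conf-thres) would confirm or rule out this cause.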
Traceback (most recent call last):
  File "demo.py", line 8, in <module>
    from tracker.multitracker import JDETracker
  File "/data/shareJ/YDS/Towards-Realtime-MOT-master/tracker/multitracker.py", line 10, in <module>
    from utils.utils import *
  File "/data/shareJ/YDS/Towards-Realtime-MOT-master/utils/utils.py", line 14, in <module>
    import maskrcnn_benchmark.layers.nms as nms
  File "/data/shareJ/YDS/Towards-Realtime-MOT-master/maskrcnn-benchmark-master/maskrcnn_benchmark/layers/__init__.py", line 10, in <module>
    from .nms import nms
  File "/data/shareJ/YDS/Towards-Realtime-MOT-master/maskrcnn-benchmark-master/maskrcnn_benchmark/layers/nms.py", line 3, in <module>
    from maskrcnn_benchmark import _C
ImportError: /data/shareJ/YDS/Towards-Realtime-MOT-master/maskrcnn-benchmark-master/maskrcnn_benchmark/_C.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe26detail37_typeMetaDataInstance_preallocated_32E
I got this error at runtime...
Traceback (most recent call last):
  File "demo.py", line 8, in <module>
    from tracker.multitracker import JDETracker
  File "/data/share7/Towards-Realtime-MOT-master/tracker/multitracker.py", line 13, in <module>
    from models import *
  File "/data/share7/Towards-Realtime-MOT-master/models.py", line 8, in <module>
    from utils.syncbn import SyncBN
ImportError: cannot import name 'SyncBN'
When I run the demo, I get this error; I see that there is no such file under the directory.
This error occurs when I run demo.py; does anybody know why?
Thanks for your work. Do you have a plan to upload the cfg file and weights for yolov3-tiny?
When I run demo.py, I am facing this issue:
2019-10-30 14:06:51 [INFO]: start tracking...
Lenth of the video: 43190 frames
2019-10-30 14:06:58 [INFO]: Processing frame 0 (100000.00 fps)
Segmentation fault (core dumped)
Can anybody help me?
I want to test the model on my own video, so I ran demo.py with it, but I see no effect.
Could anybody give me a tutorial on the order in which to run the code?