Coder Social home page Coder Social logo

artlabss / tennis-tracking Goto Github PK

View Code? Open in Web Editor NEW
374.0 13.0 92.0 164.99 MB

Open-source Monocular Python HawkEye for Tennis

Home Page: https://www.artlabs.tech

License: The Unlicense

Python 62.17% Jupyter Notebook 37.83%
deep-learning python video yolo tennis line-detection ball-tracking machine-learning tennis-tracking

tennis-tracking's People

Stargazers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

Watchers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

tennis-tracking's Issues

-215:Assertion failed) count > 0 in function 'fitLine2D_wods'

Hi,

First of all, this is great work!

This is the error I get when I am trying to run on a tennis video. After printing model summary, this follows.

BOXES []
BOXES []
BOXES [array([166.0948 , 679.272 , 195.11845, 705.8905 ], dtype=float32)]
BIGGEST [166. 679. 195. 706.]
CAMERA ... Court tracking failed, adding 5 pixels to dist
CAMERA ... Court tracking failed, adding 5 pixels to dist
CAMERA ... Court tracking failed, adding 5 pixels to dist
Traceback (most recent call last):
File "predict_video.py", line 113, in
lines = court_detector.track_court(frame)
File "/content/drive/My Drive/TennisTracking/NewTracking/tennis-tracking/court_detector.py", line 445, in track_court
return self.track_court(frame)
File "/content/drive/My Drive/TennisTracking/NewTracking/tennis-tracking/court_detector.py", line 445, in track_court
return self.track_court(frame)
File "/content/drive/My Drive/TennisTracking/NewTracking/tennis-tracking/court_detector.py", line 445, in track_court
return self.track_court(frame)
File "/content/drive/My Drive/TennisTracking/NewTracking/tennis-tracking/court_detector.py", line 421, in track_court
[vx, vy, x, y] = cv2.fitLine(new_points, cv2.DIST_L2, 0, 0.01, 0.01)
cv2.error: OpenCV(4.1.2) /io/opencv/modules/imgproc/src/linefit.cpp:50: error: (-215:Assertion failed) count > 0 in function 'fitLine2D_wods'

Help: Failed to get convolution algorithm

2021-09-22 08:06:14.234343: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
fps : 60
2021-09-22 08:06:16.272909: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-09-22 08:06:16.294530: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-22 08:06:16.295305: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: 
pciBusID: 0000:00:04.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2021-09-22 08:06:16.295393: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-09-22 08:06:16.299668: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11
2021-09-22 08:06:16.299764: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11
2021-09-22 08:06:16.301051: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10
2021-09-22 08:06:16.301799: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10
2021-09-22 08:06:16.305778: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11
2021-09-22 08:06:16.306866: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11
2021-09-22 08:06:16.307181: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8
2021-09-22 08:06:16.307336: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-22 08:06:16.308216: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-22 08:06:16.309021: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-09-22 08:06:16.309402: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-09-22 08:06:16.309722: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-22 08:06:16.310485: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: 
pciBusID: 0000:00:04.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2021-09-22 08:06:16.310635: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-22 08:06:16.311429: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-22 08:06:16.312205: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-09-22 08:06:16.312300: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-09-22 08:06:16.884681: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-09-22 08:06:16.884740: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264]      0 
2021-09-22 08:06:16.884767: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0:   N 
2021-09-22 08:06:16.885046: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-22 08:06:16.886096: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-22 08:06:16.887106: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-22 08:06:16.887864: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2021-09-22 08:06:16.887939: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10800 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7)
layer24 output shape: 256 360 640
Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, 3, 360, 640)]     0         
_________________________________________________________________
conv2d (Conv2D)              (None, 64, 360, 640)      1792      
_________________________________________________________________
activation (Activation)      (None, 64, 360, 640)      0         
_________________________________________________________________
batch_normalization (BatchNo (None, 64, 360, 640)      2560      
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 64, 360, 640)      36928     
_________________________________________________________________
activation_1 (Activation)    (None, 64, 360, 640)      0         
_________________________________________________________________
batch_normalization_1 (Batch (None, 64, 360, 640)      2560      
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 64, 180, 320)      0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 128, 180, 320)     73856     
_________________________________________________________________
activation_2 (Activation)    (None, 128, 180, 320)     0         
_________________________________________________________________
batch_normalization_2 (Batch (None, 128, 180, 320)     1280      
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 128, 180, 320)     147584    
_________________________________________________________________
activation_3 (Activation)    (None, 128, 180, 320)     0         
_________________________________________________________________
batch_normalization_3 (Batch (None, 128, 180, 320)     1280      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 128, 90, 160)      0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 256, 90, 160)      295168    
_________________________________________________________________
activation_4 (Activation)    (None, 256, 90, 160)      0         
_________________________________________________________________
batch_normalization_4 (Batch (None, 256, 90, 160)      640       
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 256, 90, 160)      590080    
_________________________________________________________________
activation_5 (Activation)    (None, 256, 90, 160)      0         
_________________________________________________________________
batch_normalization_5 (Batch (None, 256, 90, 160)      640       
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 256, 90, 160)      590080    
_________________________________________________________________
activation_6 (Activation)    (None, 256, 90, 160)      0         
_________________________________________________________________
batch_normalization_6 (Batch (None, 256, 90, 160)      640       
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 256, 45, 80)       0         
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 512, 45, 80)       1180160   
_________________________________________________________________
activation_7 (Activation)    (None, 512, 45, 80)       0         
_________________________________________________________________
batch_normalization_7 (Batch (None, 512, 45, 80)       320       
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 512, 45, 80)       2359808   
_________________________________________________________________
activation_8 (Activation)    (None, 512, 45, 80)       0         
_________________________________________________________________
batch_normalization_8 (Batch (None, 512, 45, 80)       320       
_________________________________________________________________
conv2d_9 (Conv2D)            (None, 512, 45, 80)       2359808   
_________________________________________________________________
activation_9 (Activation)    (None, 512, 45, 80)       0         
_________________________________________________________________
batch_normalization_9 (Batch (None, 512, 45, 80)       320       
_________________________________________________________________
up_sampling2d (UpSampling2D) (None, 512, 90, 160)      0         
_________________________________________________________________
conv2d_10 (Conv2D)           (None, 256, 90, 160)      1179904   
_________________________________________________________________
activation_10 (Activation)   (None, 256, 90, 160)      0         
_________________________________________________________________
batch_normalization_10 (Batc (None, 256, 90, 160)      640       
_________________________________________________________________
conv2d_11 (Conv2D)           (None, 256, 90, 160)      590080    
_________________________________________________________________
activation_11 (Activation)   (None, 256, 90, 160)      0         
_________________________________________________________________
batch_normalization_11 (Batc (None, 256, 90, 160)      640       
_________________________________________________________________
conv2d_12 (Conv2D)           (None, 256, 90, 160)      590080    
_________________________________________________________________
activation_12 (Activation)   (None, 256, 90, 160)      0         
_________________________________________________________________
batch_normalization_12 (Batc (None, 256, 90, 160)      640       
_________________________________________________________________
up_sampling2d_1 (UpSampling2 (None, 256, 180, 320)     0         
_________________________________________________________________
conv2d_13 (Conv2D)           (None, 128, 180, 320)     295040    
_________________________________________________________________
activation_13 (Activation)   (None, 128, 180, 320)     0         
_________________________________________________________________
batch_normalization_13 (Batc (None, 128, 180, 320)     1280      
_________________________________________________________________
conv2d_14 (Conv2D)           (None, 128, 180, 320)     147584    
_________________________________________________________________
activation_14 (Activation)   (None, 128, 180, 320)     0         
_________________________________________________________________
batch_normalization_14 (Batc (None, 128, 180, 320)     1280      
_________________________________________________________________
up_sampling2d_2 (UpSampling2 (None, 128, 360, 640)     0         
_________________________________________________________________
conv2d_15 (Conv2D)           (None, 64, 360, 640)      73792     
_________________________________________________________________
activation_15 (Activation)   (None, 64, 360, 640)      0         
_________________________________________________________________
batch_normalization_15 (Batc (None, 64, 360, 640)      2560      
_________________________________________________________________
conv2d_16 (Conv2D)           (None, 64, 360, 640)      36928     
_________________________________________________________________
activation_16 (Activation)   (None, 64, 360, 640)      0         
_________________________________________________________________
batch_normalization_16 (Batc (None, 64, 360, 640)      2560      
_________________________________________________________________
conv2d_17 (Conv2D)           (None, 256, 360, 640)     147712    
_________________________________________________________________
activation_17 (Activation)   (None, 256, 360, 640)     0         
_________________________________________________________________
batch_normalization_17 (Batc (None, 256, 360, 640)     2560      
_________________________________________________________________
reshape (Reshape)            (None, 256, 230400)       0         
_________________________________________________________________
permute (Permute)            (None, 230400, 256)       0         
_________________________________________________________________
activation_18 (Activation)   (None, 230400, 256)       0         
=================================================================
Total params: 10,719,104
Trainable params: 10,707,744
Non-trainable params: 11,360
_________________________________________________________________
2021-09-22 08:06:17.330092: W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open WeightsTracknet/model.1: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
Using device cuda
Detecting the court and the players...
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /pytorch/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
BOXES  [array([452.1675 , 735.3489 , 572.7479 , 954.95905], dtype=float32)]
BIGGEST  [452. 735. 573. 955.]
Finished!
Tracking the ball: 0.0
2021-09-22 08:20:51.717046: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:176] None of the MLIR Optimization Passes are enabled (registered 2)
2021-09-22 08:20:51.724503: I tensorflow/core/platform/profile_utils/cpu_utils.cc:114] CPU Frequency: 2299995000 Hz
2021-09-22 08:20:52.243946: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8
2021-09-22 08:20:54.014311: E tensorflow/stream_executor/cuda/cuda_dnn.cc:352] Loaded runtime CuDNN library: 8.0.5 but source was compiled with: 8.1.0.  CuDNN library needs to have matching major version and equal or higher minor version. If using a binary install, upgrade your CuDNN library.  If building from sources, make sure the library loaded at runtime is compatible with the version specified during compile configuration.
2021-09-22 08:20:54.015598: E tensorflow/stream_executor/cuda/cuda_dnn.cc:352] Loaded runtime CuDNN library: 8.0.5 but source was compiled with: 8.1.0.  CuDNN library needs to have matching major version and equal or higher minor version. If using a binary install, upgrade your CuDNN library.  If building from sources, make sure the library loaded at runtime is compatible with the version specified during compile configuration.
2021-09-22 08:20:54.015950: W tensorflow/core/framework/op_kernel.cc:1767] OP_REQUIRES failed at conv_ops_fused_impl.h:698 : Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
Traceback (most recent call last):
  File "predict_video.py", line 155, in <module>
    pr = m.predict(np.array([X]))[0]
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py", line 1727, in predict
    tmp_batch_outputs = self.predict_function(iterator)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 889, in __call__
    result = self._call(*args, **kwds)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py", line 957, in _call
    filtered_flat_args, self._concrete_stateful_fn.captured_inputs)  # pylint: disable=protected-access
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 1961, in _call_flat
    ctx, args, cancellation_manager=cancellation_manager))
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py", line 596, in call
    ctx=ctx)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
    inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.UnknownError:  Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
	 [[node model_1/activation/Relu (defined at predict_video.py:155) ]] [Op:__inference_predict_function_1776]

Function call stack:
predict_function

๐Ÿ˜ณ๐Ÿ˜ณ๐Ÿ˜ณ๐Ÿ˜ณ

Mapping labels to input videos

I am having difficulty understanding how the labels in the bigDF.csv and tracking_players.csv files correspond to the frames of the input video files. This is particularly confusing because some of the input videos are missing.

Could you please provide some clarification on this matter?

Bouncing point

Hey i wanted to know how does you predict the bouncing point of the ball so correctly. I was trying to understand your code to implement something similar to your bounce detection. I understand the part where you are saving x, y and v data in csv format and you are using pretrained model to detect the bounce but i am unable to understand how does this part of the code works:
Xs = test_df[['lagX_20', 'lagX_19', 'lagX_18', 'lagX_17', 'lagX_16',
'lagX_15', 'lagX_14', 'lagX_13', 'lagX_12', 'lagX_11', 'lagX_10',
'lagX_9', 'lagX_8', 'lagX_7', 'lagX_6', 'lagX_5', 'lagX_4', 'lagX_3',
'lagX_2', 'lagX_1']]
Xs = from_2d_array_to_nested(Xs.to_numpy())
can you please explain in detail.

`tensorflow.python.framework.errors_impl.InvalidArgumentError`

Hello, I have problems when I execute the following command

python3 predict_video.py --input_video_path=VideoInput/video_input3.mp4 --output_video_path=VideoOutput/video_output.mp4 --minimap=0 --bounce=0

Console output:

fps : 30
2023-04-03 08:02:41.981694: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-04-03 08:02:41.985775: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusolver.so.11'; dlerror: libcusolver.so.11: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/lg/conda/TensorRT-8.0.1.6/targets/x86_64-linux-gnu/lib:/usr/local/cuda-11.0/lib64:
2023-04-03 08:02:41.986361: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1835] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
2023-04-03 08:02:41.986675: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
layer24 output shape: 256 360 640
Model: "model_1"


Layer (type) Output Shape Param #

input_1 (InputLayer) [(None, 3, 360, 640)] 0


conv2d (Conv2D) (None, 64, 360, 640) 1792


activation (Activation) (None, 64, 360, 640) 0


batch_normalization (BatchNo (None, 64, 360, 640) 2560


conv2d_1 (Conv2D) (None, 64, 360, 640) 36928


activation_1 (Activation) (None, 64, 360, 640) 0


batch_normalization_1 (Batch (None, 64, 360, 640) 2560


max_pooling2d (MaxPooling2D) (None, 64, 180, 320) 0


conv2d_2 (Conv2D) (None, 128, 180, 320) 73856


activation_2 (Activation) (None, 128, 180, 320) 0


batch_normalization_2 (Batch (None, 128, 180, 320) 1280


conv2d_3 (Conv2D) (None, 128, 180, 320) 147584


activation_3 (Activation) (None, 128, 180, 320) 0


batch_normalization_3 (Batch (None, 128, 180, 320) 1280


max_pooling2d_1 (MaxPooling2 (None, 128, 90, 160) 0


conv2d_4 (Conv2D) (None, 256, 90, 160) 295168


activation_4 (Activation) (None, 256, 90, 160) 0


batch_normalization_4 (Batch (None, 256, 90, 160) 640


conv2d_5 (Conv2D) (None, 256, 90, 160) 590080


activation_5 (Activation) (None, 256, 90, 160) 0


batch_normalization_5 (Batch (None, 256, 90, 160) 640


conv2d_6 (Conv2D) (None, 256, 90, 160) 590080


activation_6 (Activation) (None, 256, 90, 160) 0


batch_normalization_6 (Batch (None, 256, 90, 160) 640


max_pooling2d_2 (MaxPooling2 (None, 256, 45, 80) 0


conv2d_7 (Conv2D) (None, 512, 45, 80) 1180160


activation_7 (Activation) (None, 512, 45, 80) 0


batch_normalization_7 (Batch (None, 512, 45, 80) 320


conv2d_8 (Conv2D) (None, 512, 45, 80) 2359808


activation_8 (Activation) (None, 512, 45, 80) 0


batch_normalization_8 (Batch (None, 512, 45, 80) 320


conv2d_9 (Conv2D) (None, 512, 45, 80) 2359808


activation_9 (Activation) (None, 512, 45, 80) 0


batch_normalization_9 (Batch (None, 512, 45, 80) 320


up_sampling2d (UpSampling2D) (None, 512, 90, 160) 0


conv2d_10 (Conv2D) (None, 256, 90, 160) 1179904


activation_10 (Activation) (None, 256, 90, 160) 0


batch_normalization_10 (Batc (None, 256, 90, 160) 640


conv2d_11 (Conv2D) (None, 256, 90, 160) 590080


activation_11 (Activation) (None, 256, 90, 160) 0


batch_normalization_11 (Batc (None, 256, 90, 160) 640


conv2d_12 (Conv2D) (None, 256, 90, 160) 590080


activation_12 (Activation) (None, 256, 90, 160) 0


batch_normalization_12 (Batc (None, 256, 90, 160) 640


up_sampling2d_1 (UpSampling2 (None, 256, 180, 320) 0


conv2d_13 (Conv2D) (None, 128, 180, 320) 295040


activation_13 (Activation) (None, 128, 180, 320) 0


batch_normalization_13 (Batc (None, 128, 180, 320) 1280


conv2d_14 (Conv2D) (None, 128, 180, 320) 147584


activation_14 (Activation) (None, 128, 180, 320) 0


batch_normalization_14 (Batc (None, 128, 180, 320) 1280


up_sampling2d_2 (UpSampling2 (None, 128, 360, 640) 0


conv2d_15 (Conv2D) (None, 64, 360, 640) 73792


activation_15 (Activation) (None, 64, 360, 640) 0


batch_normalization_15 (Batc (None, 64, 360, 640) 2560


conv2d_16 (Conv2D) (None, 64, 360, 640) 36928


activation_16 (Activation) (None, 64, 360, 640) 0


batch_normalization_16 (Batc (None, 64, 360, 640) 2560


conv2d_17 (Conv2D) (None, 256, 360, 640) 147712


activation_17 (Activation) (None, 256, 360, 640) 0


batch_normalization_17 (Batc (None, 256, 360, 640) 2560


reshape (Reshape) (None, 256, 230400) 0


permute (Permute) (None, 230400, 256) 0


activation_18 (Activation) (None, 230400, 256) 0

Total params: 10,719,104
Trainable params: 10,707,744
Non-trainable params: 11,360


2023-04-03 08:02:42.336100: W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open WeightsTracknet/model.1: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
Using device cuda
Detecting the court and the players...
/root/miniconda3/envs/test/lib/python3.6/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
BOXES [array([1014.6489 , 625.9299 , 1083.3445 , 844.51373], dtype=float32)]
BIGGEST [1015. 626. 1083. 845.]
Finished!
Tracking the ball: 0.0
2023-04-03 08:05:12.988603: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
Traceback (most recent call last):
File "predict_video.py", line 155, in
pr = m.predict(np.array([X]))[0]
File "/root/miniconda3/envs/test/lib/python3.6/site-packages/keras/engine/training.py", line 1751, in predict
tmp_batch_outputs = self.predict_function(iterator)
File "/root/miniconda3/envs/test/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 885, in call
result = self._call(*args, **kwds)
File "/root/miniconda3/envs/test/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 957, in _call
filtered_flat_args, self._concrete_stateful_fn.captured_inputs) # pylint: disable=protected-access
File "/root/miniconda3/envs/test/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1964, in _call_flat
ctx, args, cancellation_manager=cancellation_manager))
File "/root/miniconda3/envs/test/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 596, in call
ctx=ctx)
File "/root/miniconda3/envs/test/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Default MaxPoolingOp only supports NHWC on device type CPU
[[node model_1/max_pooling2d/MaxPool (defined at predict_video.py:155) ]] [Op:__inference_predict_function_1776]

Function call stack:
predict_function

requirements.txt has been installed, may I ask what caused it?

cant' load weights and predict

1.The model will have the following problems when loading parameters๏ผš
2022-04-04 12:43:19.257086: W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open WeightsTracknet\model.1: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?

2.When I change model.1 to model.h5, the above error message will disappear, but the model will break when predicting.

I am sorry for taking your time and hope you can help!

Hi, could you provide the training code for the bounce detection ?

Describe the bug
A clear and concise description of what the bug is.

To Reproduce
Steps to reproduce the behavior:

  1. Go to '...'
  2. Click on '....'
  3. Scroll down to '....'
  4. See error

Expected behavior
A clear and concise description of what you expected to happen.

Screenshots
If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):

  • OS: [e.g. iOS]
  • Browser [e.g. chrome, safari]
  • Version [e.g. 22]

Smartphone (please complete the following information):

  • Device: [e.g. iPhone6]
  • OS: [e.g. iOS8.1]
  • Browser [e.g. stock browser, safari]
  • Version [e.g. 22]

Additional context
Add any other context about the problem here.

Use of GPU on Colab

Hi,

I have tried running the script on Colab following the instructions on the README file and installing the missing dependencies.

It does work but it doesn't seem to use the GPU despite the option being set up.

How can I ensure the GPU is used? Is it something to do with tensorflow? Any advice?

Kind regards.

Error post tracking ball

Describe the bug
Node: 'model_1/max_pooling2d/MaxPool
Default MaxPoolingOp only supports NHWC on device type CPU
[[{{node model_1/max_pooling2d/MaxPool}}]] [Op:__inference_predict_function_1861]

To Reproduce
Steps to reproduce the behavior:

  1. Go to '...'
  2. Click on '....'
  3. Scroll down to '....'
  4. See error

Expected behavior
A clear and concise description of what you expected to happen.

Screenshots
If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):
Windows 10
Python 3.9
All most upto date libraries as per requirements

Smartphone (please complete the following information):

  • Device: [e.g. iPhone6]
  • OS: [e.g. iOS8.1]
  • Browser [e.g. stock browser, safari]
  • Version [e.g. 22]

Additional context
Add any other context about the problem here.

Can this be done for a padel court?

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

This isn't really a bug in the code. I noticed it works smoothly for tennis, but less so for tracking padel. Do you think it's going to be on the roadmap to extend this feature into the padel-sport?

Describe the solution you'd like
A clear and concise description of what you want to happen.
Ideally, the option to use this codebase out of the box for padel footage.
Realistically, even some guidelines on how to get started with this code to extend it for the padel sport would already be an awesome addition

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

Not Enough Non-NaN Values for Interpolation

Traceback (most recent call last):
File "/content/drive/MyDrive/tennis/tennis-tracking-main/predict_video.py", line 274, in
coords = interpolation(coords)
File "/content/drive/MyDrive/tennis/tennis-tracking-main/detection.py", line 469, in interpolation
xxx[nons]= np.interp(yy(nons), yy(~nons), xxx[~nons])
File "/usr/local/lib/python3.10/dist-packages/numpy/lib/function_base.py", line 1599, in interp
return interp_func(x, xp, fp, left, right)
ValueError: array of sample points is empty

How is the court detection model working?

Hi,

I want to know if there's any documentation on how the court line detection model works? I want to label all the intersection points of the court lines for camera calibration.

OpenCV(4.5.4-dev) /tmp/pip-req-build-h45n7_hz/opencv/modules/dnn/src/darknet/darknet_io.cpp:933: error

2021-10-26 14:43:32.932732: W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open WeightsTracknet/model.1: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
Traceback (most recent call last):
File "predict_video.py", line 87, in
net = cv2.dnn.readNet(yolo_weights, yolo_config)
cv2.error: OpenCV(4.5.4-dev) /tmp/pip-req-build-h45n7_hz/opencv/modules/dnn/src/darknet/darknet_io.cpp:933: error: (-213:The function/feature is not implemented) Transpose the weights (except for convolutional) is not implemented in function 'ReadDarknetFromWeightsStream'

Help: error: (-212:Parsing error) Failed to parse NetParameter file: Yolov3/yolov3.weights in function 'readNetFromDarknet'

2021-09-22 07:22:45.863212: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
fps : 30
2021-09-22 07:22:47.816475: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcuda.so.1
2021-09-22 07:22:47.837632: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-22 07:22:47.838418: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: 
pciBusID: 0000:00:04.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2021-09-22 07:22:47.838515: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-09-22 07:22:47.841400: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublas.so.11
2021-09-22 07:22:47.841528: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcublasLt.so.11
2021-09-22 07:22:47.842627: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcufft.so.10
2021-09-22 07:22:47.842984: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcurand.so.10
2021-09-22 07:22:47.845648: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusolver.so.11
2021-09-22 07:22:47.846309: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcusparse.so.11
2021-09-22 07:22:47.846562: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudnn.so.8
2021-09-22 07:22:47.846694: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-22 07:22:47.847449: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-22 07:22:47.848167: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-09-22 07:22:47.848560: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-09-22 07:22:47.848854: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-22 07:22:47.849664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: 
pciBusID: 0000:00:04.0 name: Tesla K80 computeCapability: 3.7
coreClock: 0.8235GHz coreCount: 13 deviceMemorySize: 11.17GiB deviceMemoryBandwidth: 223.96GiB/s
2021-09-22 07:22:47.849820: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-22 07:22:47.850577: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-22 07:22:47.851298: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0
2021-09-22 07:22:47.851385: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-09-22 07:22:48.399972: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-09-22 07:22:48.400039: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264]      0 
2021-09-22 07:22:48.400070: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0:   N 
2021-09-22 07:22:48.400323: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-22 07:22:48.401158: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-22 07:22:48.402034: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-09-22 07:22:48.402750: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2021-09-22 07:22:48.402816: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10800 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7)
layer24 output shape: 256 360 640
Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, 3, 360, 640)]     0         
_________________________________________________________________
conv2d (Conv2D)              (None, 64, 360, 640)      1792      
_________________________________________________________________
activation (Activation)      (None, 64, 360, 640)      0         
_________________________________________________________________
batch_normalization (BatchNo (None, 64, 360, 640)      2560      
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 64, 360, 640)      36928     
_________________________________________________________________
activation_1 (Activation)    (None, 64, 360, 640)      0         
_________________________________________________________________
batch_normalization_1 (Batch (None, 64, 360, 640)      2560      
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 64, 180, 320)      0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 128, 180, 320)     73856     
_________________________________________________________________
activation_2 (Activation)    (None, 128, 180, 320)     0         
_________________________________________________________________
batch_normalization_2 (Batch (None, 128, 180, 320)     1280      
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 128, 180, 320)     147584    
_________________________________________________________________
activation_3 (Activation)    (None, 128, 180, 320)     0         
_________________________________________________________________
batch_normalization_3 (Batch (None, 128, 180, 320)     1280      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 128, 90, 160)      0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 256, 90, 160)      295168    
_________________________________________________________________
activation_4 (Activation)    (None, 256, 90, 160)      0         
_________________________________________________________________
batch_normalization_4 (Batch (None, 256, 90, 160)      640       
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 256, 90, 160)      590080    
_________________________________________________________________
activation_5 (Activation)    (None, 256, 90, 160)      0         
_________________________________________________________________
batch_normalization_5 (Batch (None, 256, 90, 160)      640       
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 256, 90, 160)      590080    
_________________________________________________________________
activation_6 (Activation)    (None, 256, 90, 160)      0         
_________________________________________________________________
batch_normalization_6 (Batch (None, 256, 90, 160)      640       
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 256, 45, 80)       0         
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 512, 45, 80)       1180160   
_________________________________________________________________
activation_7 (Activation)    (None, 512, 45, 80)       0         
_________________________________________________________________
batch_normalization_7 (Batch (None, 512, 45, 80)       320       
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 512, 45, 80)       2359808   
_________________________________________________________________
activation_8 (Activation)    (None, 512, 45, 80)       0         
_________________________________________________________________
batch_normalization_8 (Batch (None, 512, 45, 80)       320       
_________________________________________________________________
conv2d_9 (Conv2D)            (None, 512, 45, 80)       2359808   
_________________________________________________________________
activation_9 (Activation)    (None, 512, 45, 80)       0         
_________________________________________________________________
batch_normalization_9 (Batch (None, 512, 45, 80)       320       
_________________________________________________________________
up_sampling2d (UpSampling2D) (None, 512, 90, 160)      0         
_________________________________________________________________
conv2d_10 (Conv2D)           (None, 256, 90, 160)      1179904   
_________________________________________________________________
activation_10 (Activation)   (None, 256, 90, 160)      0         
_________________________________________________________________
batch_normalization_10 (Batc (None, 256, 90, 160)      640       
_________________________________________________________________
conv2d_11 (Conv2D)           (None, 256, 90, 160)      590080    
_________________________________________________________________
activation_11 (Activation)   (None, 256, 90, 160)      0         
_________________________________________________________________
batch_normalization_11 (Batc (None, 256, 90, 160)      640       
_________________________________________________________________
conv2d_12 (Conv2D)           (None, 256, 90, 160)      590080    
_________________________________________________________________
activation_12 (Activation)   (None, 256, 90, 160)      0         
_________________________________________________________________
batch_normalization_12 (Batc (None, 256, 90, 160)      640       
_________________________________________________________________
up_sampling2d_1 (UpSampling2 (None, 256, 180, 320)     0         
_________________________________________________________________
conv2d_13 (Conv2D)           (None, 128, 180, 320)     295040    
_________________________________________________________________
activation_13 (Activation)   (None, 128, 180, 320)     0         
_________________________________________________________________
batch_normalization_13 (Batc (None, 128, 180, 320)     1280      
_________________________________________________________________
conv2d_14 (Conv2D)           (None, 128, 180, 320)     147584    
_________________________________________________________________
activation_14 (Activation)   (None, 128, 180, 320)     0         
_________________________________________________________________
batch_normalization_14 (Batc (None, 128, 180, 320)     1280      
_________________________________________________________________
up_sampling2d_2 (UpSampling2 (None, 128, 360, 640)     0         
_________________________________________________________________
conv2d_15 (Conv2D)           (None, 64, 360, 640)      73792     
_________________________________________________________________
activation_15 (Activation)   (None, 64, 360, 640)      0         
_________________________________________________________________
batch_normalization_15 (Batc (None, 64, 360, 640)      2560      
_________________________________________________________________
conv2d_16 (Conv2D)           (None, 64, 360, 640)      36928     
_________________________________________________________________
activation_16 (Activation)   (None, 64, 360, 640)      0         
_________________________________________________________________
batch_normalization_16 (Batc (None, 64, 360, 640)      2560      
_________________________________________________________________
conv2d_17 (Conv2D)           (None, 256, 360, 640)     147712    
_________________________________________________________________
activation_17 (Activation)   (None, 256, 360, 640)     0         
_________________________________________________________________
batch_normalization_17 (Batc (None, 256, 360, 640)     2560      
_________________________________________________________________
reshape (Reshape)            (None, 256, 230400)       0         
_________________________________________________________________
permute (Permute)            (None, 230400, 256)       0         
_________________________________________________________________
activation_18 (Activation)   (None, 230400, 256)       0         
=================================================================
Total params: 10,719,104
Trainable params: 10,707,744
Non-trainable params: 11,360
_________________________________________________________________
2021-09-22 07:22:48.849317: W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open WeightsTracknet/model.1: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
Traceback (most recent call last):
  File "predict_video.py", line 87, in <module>
    net = cv2.dnn.readNet(yolo_weights, yolo_config)
cv2.error: OpenCV(4.1.2) /io/opencv/modules/dnn/src/darknet/darknet_importer.cpp:214: error: (-212:Parsing error) Failed to parse NetParameter file: Yolov3/yolov3.weights in function 'readNetFromDarknet'

The python dependencies could not install

Describe the bug
Could not install the dependencies on macbook m1 pro

fl@bogon tennis-tracking % pip3 install -r requirements.txt
Defaulting to user installation because normal site-packages is not writeable
Collecting requests==2.23.0 (from -r requirements.txt (line 1))
  Using cached requests-2.23.0-py2.py3-none-any.whl (58 kB)
ERROR: Could not find a version that satisfies the requirement opencv_contrib_python==4.1.2.30 (from versions: 3.4.11.45, 3.4.13.47, 3.4.14.51, 3.4.15.55, 3.4.16.57, 3.4.16.59, 3.4.17.61, 3.4.17.63, 3.4.18.65, 4.4.0.46, 4.5.1.48, 4.5.2.52, 4.5.3.56, 4.5.4.58, 4.5.4.60, 4.5.5.62, 4.5.5.64, 4.6.0.66, 4.7.0.68, 4.7.0.72, 4.8.0.74, 4.8.0.76)
ERROR: No matching distribution found for opencv_contrib_python==4.1.2.30

fl@bogon tennis-tracking % python3 --version
Python 3.9.6

Getting Error: tensorflow.python.framework.errors_impl.InvalidArgumentError

Traceback (most recent call last):
File "predict_video.py", line 107, in
X = np.rollaxis(X, 2, 0);print(m.predict( np.array([X])))
File "\anaconda3\envs\Q\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1727, in predict
tmp_batch_outputs = self.predict_function(iterator)
File "anaconda3\envs\Q\lib\site-packages\tensorflow\python\eager\def_function.py", line 889, in call
result = self._call(*args, **kwds)
File "\anaconda3\envs\Q\lib\site-packages\tensorflow\python\eager\def_function.py", line 956, in _call
return self._concrete_stateful_fn._call_flat(
File "\anaconda3\envs\Q\lib\site-packages\tensorflow\python\eager\function.py", line 1960, in _call_flat
return self._build_call_outputs(self._inference_function.call(
File "\anaconda3\envs\Q\lib\site-packages\tensorflow\python\eager\function.py", line 591, in call
outputs = execute.execute(
File "anaconda3\envs\Q\lib\site-packages\tensorflow\python\eager\execute.py", line 59, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Default MaxPoolingOp only supports NHWC on device type CPU
[[node model_1/max_pooling2d/MaxPool (defined at predict_video.py:107) ]] [Op:__inference_predict_function_1776]

Function call stack:
predict_function

Question about speed

Hi, I have used your algorithm in the official video but it is really slow like 2-3 fps in 3070. I wonder if the high resolution like 1080p video may influence the speed? And it seems that low resolution input video will affect the precison seriously, so what is the suggested resolution balance between speed and accuracy?
BTW, which step do you think is the bottleneck that takes the most time?

Speed of the ball after a shot

Hi, this is a great project, thanks a lot for sharing!
I think a nice addition would be to show the estimated speed of the ball after each shot in the minimap maybe.
Is this possible?

Question about training data

To predict bounce points machine learning library for time series sktime was used. Specifically, TimeSeriesForestClassifier was trained on 3 variables: x, y coordinates of the ball and V for velocity (V2-V1/t2-t1). Data for training the model - df.csv

Hello! @shukkkur

I have a few questions about the training data of the bounce model.

  • According to the above content, you gave a label csv for the training data of the bounce model. What's the corresponding videos or images of that? Can you share the data?
  • I want to retrain the bounce model, as this issue offered the training script, but the link is out of date now. Can you upload the script again?
  • For training the tennis trajectory model (I.e. TrackNet), have you tried training it before? Do you have the training data of the original TrackNet?

Thanks!

Default MaxPoolingOp only supports NHWC on device type CPU

2021-10-26 22:09:37.627546: W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open WeightsTracknet/model.1: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'
Using device cuda
Detecting the court and the players...
/home/azuryl/anaconda3/envs/tennistrack/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /opt/conda/conda-bld/pytorch_1623448234945/work/c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
BOXES [array([452.16757, 735.3489 , 572.748 , 954.95917], dtype=float32)]
BIGGEST [452. 735. 573. 955.]
Finished!
Tracking the ball: 0.0
2021-10-26 22:14:17.447722: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)
Traceback (most recent call last):
File "predict_video.py", line 155, in
pr = m.predict(np.array([X]))[0]
File "/home/azuryl/anaconda3/envs/tennistrack/lib/python3.8/site-packages/keras/engine/training.py", line 1751, in predict
tmp_batch_outputs = self.predict_function(iterator)
File "/home/azuryl/anaconda3/envs/tennistrack/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 885, in call
result = self._call(*args, **kwds)
File "/home/azuryl/anaconda3/envs/tennistrack/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 956, in _call
return self._concrete_stateful_fn._call_flat(
File "/home/azuryl/anaconda3/envs/tennistrack/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1963, in _call_flat
return self._build_call_outputs(self._inference_function.call(
File "/home/azuryl/anaconda3/envs/tennistrack/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 591, in call
outputs = execute.execute(
File "/home/azuryl/anaconda3/envs/tennistrack/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Default MaxPoolingOp only supports NHWC on device type CPU
[[node model_1/max_pooling2d/MaxPool (defined at predict_video.py:155) ]] [Op:__inference_predict_function_1776]

Function call stack:
predict_function
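For context, the error above means the TrackNet graph is requesting NCHW (channels-first) pooling, and TensorFlow's CPU MaxPool kernel only implements NHWC, so the prediction step needs a visible GPU. A minimal guard, assuming the unmodified predict_video.py is being run:

import tensorflow as tf

# TF's CPU MaxPool kernel only supports NHWC, so a channels-first model
# like this TrackNet can only run its pooling layers on a GPU.
if not tf.config.list_physical_devices('GPU'):
    raise RuntimeError('No GPU visible to TensorFlow - enable a GPU runtime '
                       '(Colab: Runtime > Change runtime type > GPU) before running predict_video.py')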

no version of Torch

Describe the bug
Create a new conda env with Python 3.8, run pip install -r requirements.txt, and get:

ERROR: Could not find a version that satisfies the requirement torch==1.9.0+cu102 (from versions: 1.4.0, 1.5.0, 1.5.1, 1.6.0, 1.7.0, 1.7.1, 1.8.0, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2, 1.11.0)
ERROR: No matching distribution found for torch==1.9.0+cu102

Does "cu" stand for CUDA? If so, can I run this with a CPU-only version of Torch instead?
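Yes, the +cu102 suffix is the CUDA 10.2 build of that wheel, which cannot be resolved from PyPI alone. On a CPU-only machine, one option (an assumption, not tested against the rest of requirements.txt) is to install the matching CPU wheels from PyTorch's archive first and then install the remaining requirements without the torch pins:

pip install torch==1.9.0+cpu torchvision==0.10.0+cpu -f https://download.pytorch.org/whl/torch_stable.html

The detection steps that use torch will then run on the CPU, so expect them to be considerably slower.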

Testing on my video getting an error

Hey, I am trying to run this program for court detection only. It works with your video_samples; however, when I try my own video, I get:

Traceback (most recent call last):
File "/home/isco/Desktop/ambient/opencv_court/predict_video.py", line 24, in
lines = court_detector.detect(frame)
File "/home/isco/Desktop/ambient/opencv_court/court_detector.py", line 71, in detect
return self.find_lines_location()
File "/home/isco/Desktop/ambient/opencv_court/court_detector.py", line 292, in find_lines_location
self.lines = cv2.perspectiveTransform(self.p, self.court_warp_matrix[-1]).reshape(-1)
cv2.error: OpenCV(4.9.0) /io/opencv/modules/core/src/matmul.dispatch.cpp:550: error: (-215:Assertion failed) scn + 1 == m.cols in function 'perspectiveTransform'

It seems it cannot calculate court_warp_matrix correctly; I get None for this result. My video is not dramatically different from yours, so I do not understand what the problem might be.
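That assertion usually fires because the homography search failed, so court_warp_matrix ends up empty/None and cv2.perspectiveTransform rejects it. A small defensive check you could try (a sketch only, reusing the attribute names visible in the traceback above):

import cv2
import numpy as np

warps = court_detector.court_warp_matrix
if not len(warps) or warps[-1] is None or np.asarray(warps[-1]).shape != (3, 3):
    print('No court homography was found - try a clip where the full court is visible and '
          'well lit in the first frame, or adjust the detector threshold parameters.')
else:
    lines = cv2.perspectiveTransform(court_detector.p, warps[-1]).reshape(-1)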

Error at step 8 of How to Run

Describe the bug
When trying to run the code in Google Colab, I get an error at step 8:
!pip install filterpy sktime

When I run this, I get the following error message:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
yellowbrick 1.3.post1 requires numpy<1.20,>=1.16.0, but you have numpy 1.21.5 which is incompatible.
datascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.
albumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.
Successfully installed deprecated-1.2.13 filterpy-1.4.5 llvmlite-0.38.0 numba-0.55.1 numpy-1.21.5 sktime-0.10.0 statsmodels-0.13.1
WARNING: The following packages were previously imported in this runtime:
[numpy]
You must restart the runtime in order to use newly installed versions.

Restarting the runtime does not help. When I then try to run predict_video.py, I get the following error:

2022-02-08 15:57:12.483066: W tensorflow/core/util/tensor_slice_reader.cc:96] Could not open WeightsTracknet/model.1: DATA_LOSS: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
OpenCV: FFMPEG: tag 0x44495658/'XVID' is not supported with codec id 12 and format 'mp4 / MP4 (MPEG-4 Part 14)'
OpenCV: FFMPEG: fallback to use tag 0x7634706d/'mp4v'

I do not know whether the second error is the cause of the first, but I would love to get this interesting algorithm working. I am looking forward to your reply.
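Two quick checks that may help narrow this down (both are guesses about likely causes, not confirmed fixes): the "not an sstable" warning typically means WeightsTracknet/model.1 is missing, truncated, or a pointer file rather than the real weights, and the newly installed numpy only takes effect after the runtime restart.

import os
import numpy

path = 'WeightsTracknet/model.1'
size = os.path.getsize(path) if os.path.exists(path) else None
print('weights file size (bytes):', size)                 # a file of only a few hundred bytes is a bad download
print('numpy version after restart:', numpy.__version__)  # should now report the newly installed 1.21.x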

Desktop (please complete the following information):

  • OS: Windows
  • Browser chrome

How did you decide on x,y and velocity of ball to predict bounces?

To predict bounce points, the time-series machine-learning library sktime was used. Specifically, a TimeSeriesForestClassifier was trained on 3 variables: the ball's x and y coordinates and its velocity V ((V2 - V1)/(t2 - t1)).

I want to try to improve the predictive accuracy but don't want to reinvent the wheel, so could you share some info on what you tried and how you decided on these three factors?
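For what it's worth, here is a minimal sketch of how the three inputs could be derived from the tracked ball positions, reading (V2 - V1)/(t2 - t1) as displacement over elapsed time between consecutive detections (coords and fps are placeholder names, not names from the repo):

import numpy as np
import pandas as pd

xy = np.asarray(coords, dtype=float)                    # coords: one (x, y) ball position per frame
dt = 1.0 / fps                                          # fps: the video frame rate
V = np.linalg.norm(np.diff(xy, axis=0), axis=1) / dt    # pixel speed between consecutive frames
features = pd.DataFrame({'x': xy[1:, 0], 'y': xy[1:, 1], 'V': V})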

Question

Can tennis-tracking mark the ball's bounce position in the video?

Thank you very much.
Mike
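For reference, once bounce frames have been predicted, marking them is just a matter of drawing at the ball coordinate for those frames. A rough sketch with assumed placeholder names (frames, xy, bounce_idx are not names from the repo):

import cv2

# bounce_idx: frame indices flagged as bounces; xy: (x, y) ball position per frame; frames: decoded video frames
for i in bounce_idx:
    x, y = xy[i]
    if x is not None and y is not None:
        cv2.circle(frames[i], (int(x), int(y)), 5, (0, 255, 255), -1)  # filled marker at the bounce point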

AssertionError: X must have unique column indices, but found Int64Index([0, 0, 0], dtype='int64')

Hi, I am getting this error when trying to detect bounces. Here is my code for bounce detection.

# Imports assumed by this snippet (following the repo's predict_video.py;
# the from_2d_array_to_nested import path differs between sktime versions).
# xy (ball coordinates) and V (velocities) are computed earlier in the script.
import numpy as np
import pandas as pd
from pickle import load
from sktime.datatypes._panel._convert import from_2d_array_to_nested

bounce = 1
if bounce == 1:
  # Predicting Bounces 
  test_df = pd.DataFrame({'x': [coord[0] for coord in xy[:-1]], 'y':[coord[1] for coord in xy[:-1]], 'V': V})

  # df.shift
  for i in range(20, 0, -1): 
    test_df[f'lagX_{i}'] = test_df['x'].shift(i, fill_value=0)
  for i in range(20, 0, -1): 
    test_df[f'lagY_{i}'] = test_df['y'].shift(i, fill_value=0)
  for i in range(20, 0, -1): 
    test_df[f'lagV_{i}'] = test_df['V'].shift(i, fill_value=0)

  test_df.drop(['x', 'y', 'V'], 1, inplace=True)

  Xs = test_df[['lagX_20', 'lagX_19', 'lagX_18', 'lagX_17', 'lagX_16',
        'lagX_15', 'lagX_14', 'lagX_13', 'lagX_12', 'lagX_11', 'lagX_10',
        'lagX_9', 'lagX_8', 'lagX_7', 'lagX_6', 'lagX_5', 'lagX_4', 'lagX_3',
        'lagX_2', 'lagX_1']]
  Xs = from_2d_array_to_nested(Xs.to_numpy())
  Xs.columns = [1]

  Ys = test_df[['lagY_20', 'lagY_19', 'lagY_18', 'lagY_17',
        'lagY_16', 'lagY_15', 'lagY_14', 'lagY_13', 'lagY_12', 'lagY_11',
        'lagY_10', 'lagY_9', 'lagY_8', 'lagY_7', 'lagY_6', 'lagY_5', 'lagY_4',
        'lagY_3', 'lagY_2', 'lagY_1']]
  Ys = from_2d_array_to_nested(Ys.to_numpy())
  Ys.columns = [2]

  Vs = test_df[['lagV_20', 'lagV_19', 'lagV_18',
        'lagV_17', 'lagV_16', 'lagV_15', 'lagV_14', 'lagV_13', 'lagV_12',
        'lagV_11', 'lagV_10', 'lagV_9', 'lagV_8', 'lagV_7', 'lagV_6', 'lagV_5',
        'lagV_4', 'lagV_3', 'lagV_2', 'lagV_1']]
  Vs = from_2d_array_to_nested(Vs.to_numpy())
  Vs.columns = [3]

  X = pd.concat([Xs, Ys, Vs], 1)

  # load the pre-trained classifier  
  clf = load(open('clf.pkl', 'rb'))

predcted = clf.predict(X)

The detailed error follows:

105   clf = load(open('clf.pkl', 'rb'))
    106 
--> 107   predcted = clf.predict(X)
    108   idx = list(np.where(predcted == 1)[0])
    109   idx = np.array(idx) - 10

5 frames
/usr/local/lib/python3.7/dist-packages/sktime/datatypes/_series/_check.py in check_pddataframe_series(obj, return_metadata, var_name)
     74     # check that columns are unique
     75     msg = f"{var_name} must have " f"unique column indices, but found {obj.columns}"
---> 76     assert obj.columns.is_unique, msg
     77 
     78     # check whether the time index is of valid type

AssertionError: X must have unique column indices, but found Int64Index([0, 0, 0], dtype='int64')
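One possible workaround, assuming the duplicate 0/0/0 names come from the nested-conversion step in the installed sktime version: give the concatenated frame explicit, unique column labels right before predicting, e.g.

X = pd.concat([Xs, Ys, Vs], axis=1)
X.columns = ['lagX', 'lagY', 'lagV']   # unique labels satisfy sktime's is_unique check
predcted = clf.predict(X)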

NHWC data type on CPU

When I run the process.py file on CPU, I get this error:
Node: 'model_1/max_pooling2d/MaxPool'
Default MaxPoolingOp only supports NHWC on device type CPU
[[{{node model_1/max_pooling2d/MaxPool}}]] [Op:__inference_predict_function_1861]

Google Colab error: Default MaxPoolingOp only supports NHWC on device type CPU [[{{node model_1/max_pooling2d/MaxPool}}]] [Op:__inference_predict_function_1782]

Describe the bug
Running the code on Google Colab because of a GPU TensorFlow issue on my laptop.

BIGGEST [1015. 626. 1083. 845.]
Finished!
Tracking the ball: 0.0
Traceback (most recent call last):
File "predict_video.py", line 155, in
pr = m.predict(np.array([X]))[0]
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py", line 54, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error:

Detected at node 'model_1/max_pooling2d/MaxPool' defined at (most recent call last):
File "predict_video.py", line 155, in
pr = m.predict(np.array([X]))[0]
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 2033, in predict
tmp_batch_outputs = self.predict_function(iterator)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1845, in predict_function
return step_function(self, iterator)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1834, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1823, in run_step
outputs = model.predict_step(data)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1791, in predict_step
return self(x, training=False)
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 490, in call
return super().call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in call
outputs = call_fn(inputs, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/functional.py", line 458, in call
return self._run_internal_graph(
File "/usr/local/lib/python3.8/dist-packages/keras/engine/functional.py", line 596, in _run_internal_graph
outputs = node.layer(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/engine/base_layer.py", line 1014, in call
outputs = call_fn(inputs, *args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 92, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/keras/layers/pooling/base_pooling2d.py", line 73, in call
outputs = self.pool_function(
Node: 'model_1/max_pooling2d/MaxPool'
Default MaxPoolingOp only supports NHWC on device type CPU
[[{{node model_1/max_pooling2d/MaxPool}}]] [Op:__inference_predict_function_1782]

Questions

Hey, I have three questions:

  1. Is there any way to use this on real-time video (webcam, external camera, etc.)?
  2. I also want to calculate the speed of the ball; how can I do this? (See the sketch below.)
  3. Can I run this project with yolov5 instead of yolov3? If so, how?

Edit: Also, when I try to run "pip install -r requirements.txt" in Anaconda, the process fails with errors after the "Installing build dependencies" step. How can I solve this problem?
Error1
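On question 2, here is a rough sketch of estimating ball speed from the tracked coordinates, assuming you have per-frame (x, y) detections, the video fps, and a pixel-to-metre scale taken from the court dimensions or homography (none of these names come from the repo):

import numpy as np

def ball_speed_kmh(p1, p2, fps, metres_per_pixel):
    """Speed between two consecutive detections, converted to km/h."""
    pixels = np.linalg.norm(np.subtract(p2, p1))        # displacement in pixels over one frame
    metres_per_second = pixels * metres_per_pixel * fps
    return metres_per_second * 3.6

# e.g. two detections one frame apart at 30 fps, with an assumed scale of ~0.01 m per pixel
print(ball_speed_kmh((640, 360), (660, 350), fps=30, metres_per_pixel=0.01))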
