
Pedestrian Crossing Intention Prediction

Notification

Predicting Pedestrian Crossing Intention with Feature Fusion and Spatio-Temporal Attention.

Our proposed model

Paper on arXiv: https://arxiv.org/pdf/2104.05485.pdf (accepted to T-IV)

This work improves on existing pedestrian crossing intention prediction methods and achieves new state-of-the-art performance.

Our implementation relied on the pedestrian action prediction benchmark: Kotseruba, Iuliia, Amir Rasouli, and John K. Tsotsos. "Benchmark for Evaluating Pedestrian Action Prediction." In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1258-1268, 2021.

Environment

python = 3.8
tensorflow-gpu = 2.2
numpy, opencv, PIL, matplotlib, etc.
CPU: i7-6700K, GPU: RTX 2070 Super

We recommend using conda to create your environment.

We provide the conda environment's spec-list.txt in this repo in case you want to recreate the exact environment we used.
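For example, to recreate the environment from the spec list (the environment name below is just a placeholder):

conda create --name pcip --file spec-list.txt
conda activate pcip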

Dataset Preparation

Download the JAAD annotations and put the JAAD folder in this project's root directory (as ./JAAD).

Download the JAAD dataset, and then put the JAAD_clips folder into ./JAAD (as ./JAAD/JAAD_clips).

Copy jaad_data.py from the JAAD repository into this project's root directory (as ./jaad_data.py).

To use the data, the video clips must first be converted into images. This can be done with the script ./JAAD/split_clips_to_frames.sh, following the JAAD dataset's instructions.

This operation creates a folder called images and saves the extracted frames, grouped by video id, under ./JAAD/images:

./JAAD/images/video_0001/
				00000.png
				00001.png
				...
./JAAD/images/video_0002/
				00000.png
				00001.png
				...		
...
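If you prefer not to use the shell script, the following minimal Python sketch (an illustration, not the repository's script; it assumes opencv-python is installed and the clips are in ./JAAD/JAAD_clips) performs the same frame extraction:

import os
import cv2

clips_dir = 'JAAD/JAAD_clips'
images_dir = 'JAAD/images'

for clip_name in sorted(os.listdir(clips_dir)):
    video_id = os.path.splitext(clip_name)[0]            # e.g. 'video_0001'
    out_dir = os.path.join(images_dir, video_id)
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(os.path.join(clips_dir, clip_name))
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Frames are numbered 00000.png, 00001.png, ... as shown above.
        cv2.imwrite(os.path.join(out_dir, '%05d.png' % idx), frame)
        idx += 1
    cap.release()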

Training

Note: our model extracts semantic masks with DeepLabV3. Before training, download the pretrained segmentation model deeplabv3 and put the checkpoint file in this project's root directory (as ./deeplabv3_mnv2_cityscapes_train_2018_02_05.tar.gz) so the model can generate its semantic input.
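For example, the checkpoint can be fetched from the TensorFlow DeepLab model zoo (the exact URL below is an assumption based on the checkpoint's file name):

wget http://download.tensorflow.org/models/deeplabv3_mnv2_cityscapes_train_2018_02_05.tar.gz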

Use the train_test.py script with a config file:

python train_test.py -c <config_file>

All config files are stored in ./config_files. The available model configs are listed in ./config_files/config_list.yaml, and the corresponding model architectures are illustrated in ./model_imgs.

For example, to train the MASK-PCPA model, run:

python train_test.py -c config_files/ours/MASK_PCPA_jaad_2d.yaml

The script will automatically save the trained model weights, the configuration file, and the evaluation results in the models/<dataset>/<model_name>/<current_date>/ folder.

See the comments in configs_default.yaml and action_predict.py for parameter descriptions.

Model-specific YAML files contain experiment options (exp_opts) that override options in configs_default.yaml.
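Conceptually, the override behaves like a dictionary merge. The sketch below illustrates the idea (an illustration only, not the repository's actual loading code; it assumes PyYAML and the config layout described above):

import yaml

with open('configs_default.yaml') as f:
    configs = yaml.safe_load(f)
with open('config_files/ours/MASK_PCPA_jaad_2d.yaml') as f:
    exp_opts = yaml.safe_load(f)['exp_opts']

# Any option present in exp_opts replaces the corresponding default.
for section, overrides in exp_opts.items():
    if isinstance(overrides, dict) and isinstance(configs.get(section), dict):
        configs[section].update(overrides)
    else:
        configs[section] = overrides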

Bash scripts are also provided to train all the models one by one:

  • JAAD dataset: run_all_on_jaad.sh
# === run on JAAD datasets ===

# benchmark comparison
python train_test.py -c config_files/baseline/PCPA_jaad.yaml  # PCPA
python train_test.py -c config_files/baseline/SingleRNN.yaml  # SingleRNN
python train_test.py -c config_files/baseline/SFRNN.yaml      # SF-GRU
python train_test.py -c config_files/ours/MASK_PCPA_jaad_2d.yaml  # ours

# ablation study
python train_test.py -c config_files/laterfusion/MASK_PCPA_jaad.yaml    # ours1
python train_test.py -c config_files/earlyfusion/MASK_PCPA_jaad.yaml  # ours2
python train_test.py -c config_files/hierfusion/MASK_PCPA_jaad.yaml  # ours3
python train_test.py -c config_files/baseline/PCPA_jaad_2d.yaml      # ours4
python train_test.py -c config_files/laterfusion/MASK_PCPA_jaad_2d.yaml  # ours5
python train_test.py -c config_files/earlyfusion/MASK_PCPA_jaad_2d.yaml  # ours6
python train_test.py -c config_files/hierfusion/MASK_PCPA_jaad_2d.yaml  # ours7
  • PIE dataset: run_all_on_pie.sh
# === run on PIE datasets ===

# benchmark comparison
python train_test.py -c config_files_pie/baseline/PCPA_jaad.yaml  # PCPA
python train_test.py -c config_files_pie/baseline/SingleRNN.yaml  # SingleRNN
python train_test.py -c config_files_pie/baseline/SFRNN.yaml      # SF-GRU
python train_test.py -c config_files_pie/ours/MASK_PCPA_jaad_2d.yaml   # ours

# ablation study
python train_test.py -c config_files_pie/laterfusion/MASK_PCPA_jaad.yaml    # ours1
python train_test.py -c config_files_pie/earlyfusion/MASK_PCPA_jaad.yaml  # ours2
python train_test.py -c config_files_pie/hierfusion/MASK_PCPA_jaad.yaml  # ours3
python train_test.py -c config_files_pie/baseline/PCPA_jaad_2d.yaml      # ours4
python train_test.py -c config_files_pie/laterfusion/MASK_PCPA_jaad_2d.yaml  # ours5
python train_test.py -c config_files_pie/earlyfusion/MASK_PCPA_jaad_2d.yaml  # ours6
python train_test.py -c config_files_pie/hierfusion/MASK_PCPA_jaad_2d.yaml  # ours7

In case the result folders of the ablation study are hard to match to the models in the code, we provide the following mapping list:

config_files list (model architectures can be seen in ./model_imgs):
1. baseline:
PCPA_jaad ---> original PCPA model (3DCNN), model: PCPA  # PCPA
PCPA_jaad_2d ---> PCPA (2DCNN + RNN), model: PCPA_2D  # ours4
2. earlyfusion:
MASK_PCPA_jaad ---> PCPA + MASK (3DCNN), model: MASK_PCPA_2  # ours2
MASK_PCPA_jaad_2d ---> PCPA + MASK (2DCNN + RNN), model: MASK_PCPA_2_2D  # ours6
3. hierfusion:
MASK_PCPA_jaad ---> PCPA + MASK (3DCNN), model: MASK_PCPA_3  # ours3
MASK_PCPA_jaad_2d ---> PCPA + MASK (2DCNN + RNN), model: MASK_PCPA_3_2D  # ours7
4. laterfusion:
MASK_PCPA_jaad ---> PCPA + MASK (3DCNN), model: MASK_PCPA  # ours1
MASK_PCPA_jaad_2d ---> PCPA + MASK (2DCNN + RNN), model: MASK_PCPA_2D  # ours5
5. ours:
MASK_PCPA_jaad_2d ---> PCPA + MASK (2DCNN + RNN), model: MASK_PCPA_4_2D  # ours

Test saved model

To re-run the test on a saved model, use:

python test_model.py <saved_files_path>

For example:

python test_model.py models/jaad/MASK_PCPA/xxxx/

The pre-trained models can be downloaded here for testing:

TODO Lists

  • Readme Completion
  • Pretrained Model
  • Support PIE Dataset

pedestrian_crossing_intention_prediction's People

Contributors

dongfang-steven-yang, osu-haolin


pedestrian_crossing_intention_prediction's Issues

TypeError: int() argument must be a string, a bytes-like object or a number, not 'tuple'

Hi! I ran "python train_test.py -c config_files/ours/MASK_PCPA_jaad_2d.yaml" but got this TypeError:

Traceback (most recent call last):
  File "train_test.py", line 210, in <module>
    run(config_file=config_file)
  File "train_test.py", line 154, in run
    model_opts=configs['model_opts'])
  File "/home/buaa/桌面/lh/Pedestrian_Crossing_Intention_Prediction/action_predict.py", line 1060, in train
    callbacks=callbacks)
  File "/home/buaa/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 728, in fit
    use_multiprocessing=use_multiprocessing)
  File "/home/buaa/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 224, in fit
    distribution_strategy=strategy)
  File "/home/buaa/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 547, in _process_training_inputs
    use_multiprocessing=use_multiprocessing)
  File "/home/buaa/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 606, in _process_inputs
    use_multiprocessing=use_multiprocessing)
  File "/home/buaa/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py", line 613, in __init__
    output_shapes=nested_shape)
  File "/home/buaa/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/data/ops/dataset_ops.py", line 540, in from_generator
    output_types, tensor_shape.as_shape, output_shapes)
  File "/home/buaa/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/data/util/nest.py", line 471, in map_structure_up_to
    results = [func(*tensors) for tensors in zip(*all_flattened_up_to)]
  File "/home/buaa/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/data/util/nest.py", line 471, in <listcomp>
    results = [func(*tensors) for tensors in zip(*all_flattened_up_to)]
  File "/home/buaa/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/framework/tensor_shape.py", line 1216, in as_shape
    return TensorShape(shape)
  File "/home/buaa/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/framework/tensor_shape.py", line 776, in __init__
    self._dims = [as_dimension(d) for d in dims_iter]
  File "/home/buaa/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/framework/tensor_shape.py", line 776, in <listcomp>
    self._dims = [as_dimension(d) for d in dims_iter]
  File "/home/buaa/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/framework/tensor_shape.py", line 718, in as_dimension
    return Dimension(value)
  File "/home/buaa/anaconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/framework/tensor_shape.py", line 193, in __init__
    self._value = int(value)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'tuple' 

What do you think caused the error?
Thank you!

Results are different compared with the benchmark paper

Hi,

Thank you very much for your nice work!

I am just wondering why the results of SingleRNN, SF-GRU, and PCPA are different from those reported in the benchmark paper "Benchmark for Evaluating Pedestrian Action Prediction"?

It seems that you are using the same dataset and configuration as the paper. Maybe you have some specific processing steps?

Many thanks!
Xingchen

A problem occurred while running the program

ValueError: Data is expected to be in format x, (x,), (x, y), or (x, y, sample_weight), found: (array([[[...]]]), array([[[...]]]), array([[[...]]]), array([[[...]]]), array([[[...]]]))
Hello, this problem appeared while the program was running. How can I solve it?

Visualization

If I want to visualize the prediction results, which script should I run, and what preparations do I need to make?

Confused about parameters and dimension change

Hi! I'm a bit confused about the difference between time_to_event and obs_length. More specifically, in the get_data_sequence function (action_predict.py), could you explain why img_seq changes from (194) to (2134, 16)? How does this process make use of time_to_event and obs_length?
I hope my questions are clear. Thank you so much for the explanation :)

GRU modules have dimension error

Hi!
I ran the MASK_PCPA_jaad_2d.yaml config file but got this dimension error:
ValueError: Input 0 of layer enc0_local_context_cnn is incompatible with the layer: expected ndim=3, found ndim=5. Full shape received: (None, 16, 224, 224, 3)
I don't think a tensor of this size can be passed into a GRU. The tensor from mask_cnn seems to better fit the GRU input size. Would you mind elaborating on how tensors flow from the CNN to the GRU in this case?
Thank you!

file error

Hello, how can I solve this problem? The JAAD dataset and annotations are set up as required.
2021-12-16 20:08:19.367831: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2021-12-16 20:08:19.369866: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-12-16 20:08:19.375950: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108] 0
2021-12-16 20:08:19.379667: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0: N
2021-12-16 20:08:19.381758: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce GTX 960M, pci bus id: 0000:02:00.0, compute capability: 5.0)
[--------------------] 0.00% 2021-12-16 20:08:22.955776: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2021-12-16 20:08:26.104307: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] Internal: Invoking GPU asm compilation is supported on Cuda non-Windows platforms only
Relying on driver to perform ptx compilation.
Modify $PATH to customize ptxas location.
This message will be only logged once.
2021-12-16 20:08:26.437737: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
Traceback (most recent call last):
File "train_test.py", line 209, in
run(config_file=config_file)
File "train_test.py", line 152, in run
saved_files_path = method_class.train(beh_seq_train, beh_seq_val, **configs['train_opts'],
File "F:\CODE\PCIP\action_predict.py", line 1035, in train
data_train = self.get_data('train', data_train, {**model_opts, 'batch_size': batch_size})
File "F:\CODE\PCIP\action_predict.py", line 5457, in get_data
features, feat_shape = self.get_context_data(model_opts_3d, data, data_type, d_type)
File "F:\CODE\PCIP\action_predict.py", line 818, in get_context_data
return self.load_images_crop_and_process(data['image'],
File "F:\CODE\PCIP\action_predict.py", line 458, in load_images_crop_and_process
with open(img_save_path, 'wb') as fid:
FileNotFoundError: [Errno 2] No such file or directory: 'data/features\jaad\local_context_cnn_vgg_raw_1.5\.\JAAD\images\video_0001\00491_0_1_3b.pkl'

Google Drive link is down

Hi, the link to download the pre-trained models from Google Drive is down. Could you please update it? Thanks.

Support for PIE dataset

Hi, Thanks for the great work, it's very helpful :) Can you please share an updated version that supports the PIE dataset?

zip with models on google drive broken

Thanks for uploading the models to Google Drive; unfortunately, the file seems to be damaged.


I tried to download it a couple of times, but the uploaded file seems to have an issue.

Cannot reproduce results in paper

Hi, I'm trying to reproduce the results reported in your paper. Kindly let me know how to train the best-performing model shown in Table 1.

It appears to me that the correct config for this would be MASK_PCPA_jaad_2d.yaml, is that correct?
python train_test.py -c config_files/ours/MASK_PCPA_jaad_2d.yaml

My results for this model are somewhat far off from what's reported in Table 1.
For jaad_beh:
results:
acc: 0.580542264752791
auc: 0.5232582837723024
f1: 0.691435275713727
precision: 0.6405797101449275
recall: 0.7510620220900595

For jaad_all:
results:
acc: 0.7874331550802139
auc: 0.8035675530169841
f1: 0.5767524401064773
precision: 0.4423774954627949
recall: 0.8283772302463891

About how to get the POSE data

Could you please tell me which human pose estimation model was used to generate this project's pose .pkl files? Is the BBox data passed into the model?

Global Pooling, when generating local context

Hello,

Thank you for your interesting work.

I have a question regarding the creation of the local context in the Mask_PCPA Model.

In line 5619 within the DataGenerator class, you perform a global pooling operation.


What is the purpose of this operation?

As I understand it, the shape of the local context is [No. Frames, Height, Width, RGB]. Through your global pooling operation, every pixel within a frame gets the same RGB values. How can the input be used meaningfully if all pixels have the same colour?

Many greetings,
Moritz

Pre-trained

Hi. Thank you for your amazing work. Could you please post a link to the pre-trained model? The link at the bottom of the README seems to be broken. Thank you for your help.

ValueError: bad marshal data (unknown type code)

Hi. Thank you for your work. I am trying to test the model using the pre-trained models you provided. I generated the frames and followed the instructions in the README file. However, when I try to test the model, I get this error:
ValueError: bad marshal data (unknown type code)
I'm using Google Colab Pro. Thank you very much for your help.

File "test.py", line 52, in
test_model(saved_files_path=saved_files_path)
File "test.py", line 45, in test_model
acc, auc, f1, precision, recall = method_class.test(beh_seq_test, saved_files_path)
File "/content/drive/My Drive/Pedestrian_Crossing_Intention_Prediction/action_predict.py", line 1099, in test
test_model = load_model(os.path.join(model_path, 'model.h5'))
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/saving/save.py", line 146, in load_model
return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 168, in load_model_from_hdf5
custom_objects=custom_objects)
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/saving/model_config.py", line 55, in model_from_config
return deserialize(config, custom_objects=custom_objects)
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/layers/serialization.py", line 106, in deserialize
printable_module_name='layer')
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 303, in deserialize_keras_object
list(custom_objects.items())))
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/engine/network.py", line 937, in from_config
config, custom_objects)
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/engine/network.py", line 1893, in reconstruct_from_config
process_layer(layer_data)
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/engine/network.py", line 1875, in process_layer
layer = deserialize_layer(layer_data, custom_objects=custom_objects)
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/layers/serialization.py", line 106, in deserialize
printable_module_name='layer')
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 303, in deserialize_keras_object
list(custom_objects.items())))
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/layers/core.py", line 947, in from_config
config, custom_objects, 'function', 'module', 'function_type')
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/layers/core.py", line 999, in _parse_function_from_config
config[func_attr_name], globs=globs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 400, in func_load
code = marshal.loads(raw_code)
ValueError: bad marshal data (unknown type code)
