vita-epfl / trajnetplusplusbaselines


[ITS'21] Human Trajectory Forecasting in Crowds: A Deep Learning Perspective

Home Page: https://ieeexplore.ieee.org/document/9408398

License: MIT License

Languages: Python 98.99%, Shell 1.01%
Topics: social-lstm, benchmark, social-gan, kalman-filter, orca, social-force-model, crowd-dynamics, human-trajectory-prediction, human-trajectory, datasets

trajnetplusplusbaselines's Introduction

TrajNet++: The Trajectory Forecasting Framework

PyTorch implementation of Human Trajectory Forecasting in Crowds: A Deep Learning Perspective

TrajNet++ is a large-scale, interaction-centric trajectory forecasting benchmark comprising explicit agent-agent scenarios. Our framework provides proper indexing of trajectories by defining a hierarchy of trajectory categories. In addition, we provide an extensive evaluation system so that the gathered methods can be compared fairly. Our evaluation goes beyond the standard distance-based metrics and introduces novel metrics that measure a model's capability to emulate pedestrian behavior in crowds. Finally, we provide code implementations of more than 15 popular human trajectory forecasting baselines.

We host the TrajNet++ Challenge on AICrowd, allowing researchers to objectively evaluate and benchmark trajectory forecasting models on interaction-centric data. We rely on the spirit of crowdsourcing, and the challenge has received more than 1,800 submissions. We encourage researchers to submit their sequences to TrajNet++ so that trajectory forecasting models keep improving at tackling increasingly challenging scenarios.

Data Setup

The detailed step-by-step procedure for setting up the TrajNet++ framework can be found here

Converting External Datasets

To convert external datasets into the TrajNet++ framework, refer to this guide
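For orientation before following that guide, here is a minimal sketch (not the official converter) of what the target format looks like. The assumption, to be verified against the guide, is that TrajNet++ stores trajectories as ndjson rows whose track entries use the keys "f" (frame), "p" (pedestrian id), "x" and "y"; scene rows, chunking, and trajectory categorization are handled by the official tooling and are omitted here.

```python
import json

def rows_to_ndjson(rows, out_path):
    """Write [frame, ped_id, x, y] rows as TrajNet++-style ndjson track lines.

    Sketch only: scene rows and the categorization step are produced by the
    official converter, not by this helper.
    """
    with open(out_path, 'w') as f:
        for frame, ped_id, x, y in rows:
            track = {'f': int(frame), 'p': int(ped_id),
                     'x': round(float(x), 2), 'y': round(float(y), 2)}
            f.write(json.dumps({'track': track}) + '\n')
```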

Training Models

LSTM

The training script and its help menu: python -m trajnetbaselines.lstm.trainer --help

Run Example

## Our Proposed D-LSTM
python -m trajnetbaselines.lstm.trainer --type directional --augment

## Social LSTM 
python -m trajnetbaselines.lstm.trainer --type social --augment --n 16 --embedding_arch two_layer --layer_dims 1024
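For intuition about the grid-based pooling behind the Social LSTM flags above, here is a toy occupancy-grid sketch. It illustrates the idea from the Social LSTM paper rather than the repository's implementation, and it assumes that --n sets the grid resolution around the primary pedestrian and that --cell_side (used further down this page) sets the cell size in meters; check --help to confirm.

```python
import numpy as np

def occupancy_grid(primary_xy, neighbour_xy, n=16, cell_side=0.6):
    """Count neighbours falling into an n x n grid centred on the primary pedestrian."""
    grid = np.zeros((n, n))
    half = n * cell_side / 2.0
    for nx, ny in neighbour_xy:
        dx, dy = nx - primary_xy[0], ny - primary_xy[1]
        if abs(dx) < half and abs(dy) < half:        # neighbour lies inside the grid
            col = int((dx + half) // cell_side)
            row = int((dy + half) // cell_side)
            grid[row, col] += 1.0
    return grid

# toy usage: one nearby neighbour is counted, a far-away one is ignored
print(occupancy_grid(np.array([0.0, 0.0]), np.array([[0.5, -0.2], [10.0, 0.0]])).sum())  # 1.0
```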

GAN

The training script and its help menu: python -m trajnetbaselines.sgan.trainer --help

Run Example

## Social GAN (L2 Loss + Adversarial Loss)
python -m trajnetbaselines.sgan.trainer --type directional --augment

## Social GAN (Variety Loss only)
python -m trajnetbaselines.sgan.trainer --type directional --augment --d_steps 0 --k 3
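As background on the flags above: with --d_steps 0 the adversarial part is dropped and only the variety (best-of-k) loss from Social GAN remains, which penalizes just the sampled future closest to the ground truth. A minimal sketch of that loss (illustrative only, not the trainer's internal code):

```python
import torch

def variety_loss(pred_samples, ground_truth):
    """Best-of-k L2 loss.

    pred_samples: (k, pred_len, 2) sampled future trajectories
    ground_truth: (pred_len, 2) ground-truth future trajectory
    """
    # per-sample L2 error averaged over the prediction horizon -> shape (k,)
    l2 = (pred_samples - ground_truth.unsqueeze(0)).norm(dim=-1).mean(dim=-1)
    return l2.min()   # only the closest of the k samples is penalised
```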

Evaluation

The evaluation script and its help menu: python -m evaluator.lstm.trajnet_evaluator --help

Run Example

## LSTM (saves model predictions. Useful for submission to TrajNet++ benchmark)
python -m evaluator.lstm.trajnet_evaluator --output OUTPUT_BLOCK/trajdata/lstm_directional_None.pkl --path <path_to_test_file>

## SGAN (saves model predictions. Useful for submission to TrajNet++ benchmark)
python -m evaluator.sgan.trajnet_evaluator --output OUTPUT_BLOCK/trajdata/sgan_directional_None.pkl --path <path_to_test_file>

More details regarding the TrajNet++ evaluator are provided here.

Evaluation on the data splits is based on the TrajNet++ trajectory categorization.

Results

Unimodal comparison of interaction-encoder designs on the interacting trajectories of the TrajNet++ real-world dataset. Errors are reported as ADE/FDE in meters and collisions as mean % (std. dev. %) across 5 independent runs. Our goal is to reduce collisions in model predictions without compromising the distance-based metrics.

Method        | ADE/FDE   | Collisions
------------- | --------- | ----------
LSTM          | 0.60/1.30 | 13.6 (0.2)
S-LSTM        | 0.53/1.14 | 6.7 (0.2)
S-Attn        | 0.56/1.21 | 9.0 (0.3)
S-GAN         | 0.64/1.40 | 6.9 (0.5)
D-LSTM (ours) | 0.56/1.22 | 5.4 (0.3)
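For reference, ADE and FDE in the table are the usual average and final displacement errors; a minimal sketch of how they can be computed for a single predicted trajectory:

```python
import numpy as np

def ade_fde(pred, gt):
    """pred, gt: (pred_len, 2) arrays of x/y positions in meters."""
    dist = np.linalg.norm(pred - gt, axis=-1)   # per-step Euclidean error
    return dist.mean(), dist[-1]                # ADE, FDE
```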

Interpreting Forecasting Models

Visualizations of the decision-making of social interaction modules using layer-wise relevance propagation (LRP). The darker a yellow circle, the higher the weight assigned by the primary pedestrian (blue) to the corresponding neighbour (yellow).

Code implementation for explaining trajectory forecasting models using LRP can be found here
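For readers unfamiliar with LRP: the method redistributes a layer's output relevance back to its inputs in proportion to each input's contribution. A minimal sketch of the epsilon rule for a single linear layer (a generic illustration, not the repository's LRP code):

```python
import torch

def lrp_linear(a, w, b, relevance_out, eps=1e-6):
    """Epsilon-rule LRP for a linear layer y = a @ w + b.

    a: (in_features,) input activations
    w: (in_features, out_features) weight matrix
    relevance_out: (out_features,) relevance of the layer outputs
    Returns the relevance redistributed to the inputs, shape (in_features,).
    """
    z = a @ w + b                                    # forward pre-activations
    s = relevance_out / (z + eps * torch.sign(z))    # stabilised ratio
    return a * (w @ s)                               # contribution-weighted relevance
```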

Benchmarking Models

We host the TrajNet++ Challenge on AICrowd, allowing researchers to objectively evaluate and benchmark trajectory forecasting models on interaction-centric data. We rely on the spirit of crowdsourcing and encourage researchers to submit their sequences to our benchmark so that trajectory forecasting models keep improving at tackling increasingly challenging scenarios.

Citation

If you find this code useful in your research, then please cite:

@article{Kothari2020HumanTF,
  author={Kothari, Parth and Kreiss, Sven and Alahi, Alexandre},
  journal={IEEE Transactions on Intelligent Transportation Systems}, 
  title={Human Trajectory Forecasting in Crowds: A Deep Learning Perspective}, 
  year={2021},
  volume={},
  number={},
  pages={1-15},
  doi={10.1109/TITS.2021.3069362}
 }

trajnetplusplusbaselines's People

Contributors

pedro-mgb, rodolphefarrando, svenkreiss, thedebugger811


trajnetplusplusbaselines's Issues

Question about dataset conversion and best hyper parameters for sgan

Hello!

I'm really grateful that you have shared your open-source prediction model.
I have a few questions about your code.

1. Dataset
Most prediction papers use the ETH and UCY datasets, i.e. ETH, HOTEL, UNIV, ZARA1, and ZARA2 (as in the Social GAN paper), so I want to use only the ETH and UCY datasets for training. I found several compressed datasets in the data folder of the trajnetplusplusdataset repository, listed below:

  • ewap_dataset_light.tgz

    seq_eth, seq_hotel

  • data_zara.rar

    crowds_zara01, crowds_zara02, crowds_zara03

  • data_university_students.rar

    students001, students003, uni_examples

Is it correct to use seq_eth and seq_hotel for the ETH and HOTEL datasets, crowds_zara01 for ZARA1, crowds_zara02 for ZARA2, and students001, students003, and uni_examples for UNIV? If not, please let me know which dataset files are the proper ones.

2. Data conversion for leave-one-out approach training
Most papers use a leave-one-out approach (as in the Social LSTM and Social GAN papers): training on four sets and testing on the remaining one. So I tried to train your Social LSTM model on seq_hotel (HOTEL), crowds_zara01 (ZARA1), crowds_zara02 (ZARA2), students001, students003, and uni_examples (UNIV), and then test the trained model on seq_eth (ETH).
After training the Social LSTM model this way and testing with the seq_eth (ETH) dataset, I got the test_pred folder.
However, when I try to visualize the result with visualize_predictions.py, I encounter the error below:

python -m evaluator.visualize_predictions DATA_BLOCK/trajdata/test_private/biwi_eth.ndjson DATA_BLOCK/trajdata/test_pred/lstm_social_ETH_modes1/biwi_eth.ndjson --n 10
Scene ID: 714
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/kblee/Study/trajnet++/trajnetplusplusbaselines/evaluator/visualize_predictions.py", line 100, in
main()
File "/home/kblee/Study/trajnet++/trajnetplusplusbaselines/evaluator/visualize_predictions.py", line 90, in main
full_predicted_paths = add_gt_observation_to_prediction(paths, predicted_paths)
File "/home/kblee/Study/trajnet++/trajnetplusplusbaselines/evaluator/visualize_predictions.py", line 18, in add_gt_observation_to_prediction
full_predicted_paths = [gt_observation[ped_id][:obs_length] + pred for ped_id, pred in enumerate(model_prediction)]
File "/home/kblee/Study/trajnet++/trajnetplusplusbaselines/evaluator/visualize_predictions.py", line 18, in
full_predicted_paths = [gt_observation[ped_id][:obs_length] + pred for ped_id, pred in enumerate(model_prediction)]
IndexError: list index out of range

The reason is that the numbers of tracks (pedestrians) in the ground-truth observation and in the model prediction differ for some scenes. Could you suggest a solution to this problem (a minimal guard sketch follows at the end of this point)? The commands I used for training and testing are as follows:

training: python -m trajnetbaselines.lstm.trainer --type social --augment --epochs 25 --step_size 10 --n 16 --cell_side 0.6 --embedding_arch two_layer --layer_dims 1024 --batch_size 8 --loss pred --output ETH
test (evaluate): python -m trajnetbaselines.lstm.trajnet_evaluator --path trajdata --output OUTPUT_BLOCK/trajdata/lstm_social_ETH.pkl

The test data is made from only the seq_eth dataset, like this: python -m trajnetdataset.convert --train_fraction 0.0 --val_fraction 0.0
(I changed some code related to the fractions, e.g. test_fraction = 1 - args.train_fraction - args.val_fraction.)
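A minimal guard sketch for the mismatch described above, assuming the parameter names from the traceback and obs_length = 9 (the usual TrajNet++ observation length); it drops extra predicted tracks instead of indexing out of range, but it does not fix the underlying track mismatch:

```python
def add_gt_observation_to_prediction(gt_observation, model_prediction, obs_length=9):
    """Hypothetical guarded variant: only pair predictions that have a ground-truth track."""
    if len(model_prediction) > len(gt_observation):
        print('Warning: {} predicted vs {} ground-truth tracks; extra predictions are dropped'
              .format(len(model_prediction), len(gt_observation)))
    return [gt_observation[ped_id][:obs_length] + pred
            for ped_id, pred in enumerate(model_prediction[:len(gt_observation)])]
```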

3. The difference between output_pre and output
What is the difference between output_pre and output during data conversion?

4. What is the --n parameter in visualize_predictions.py?

5. Best hyper parameters for sgan
I want to train the Social GAN model on the ETH and UCY datasets, just like in the Social GAN paper. Would you mind sharing the best hyperparameters that achieve the results stated in the paper? I tried the command below.
python -m trajnetbaselines.sgan.trainer --type hiddenstatemlp --augment --noise_dim 8 --k 20 --output ETH
Are there any parameters I need to add or change?

Thanks.

custom dataset pre-processing

I want to run TrajNet++ to get baseline results on a custom dataset. My dataset is in the format [frame, ped_ID, y, x].
I want to convert this dataset into the TrajNet++ format.
I was testing with the ETH dataset using the convert.py code, as explained in the blog. When I process the eth (obsmat.txt) file, I get the processed file in the output folder as expected.
But when I use the data given in the Trajnet_orginal folder, which has columns similar to my data, I get the error "No scene found".
I can't figure out the issue.
Your help is greatly appreciated.

Can't compute collision percentages for Kalman Filter baseline

Hello. Hope everyone that is reading this is doing well.

I was trying to run the trajnet evaluation code for the Kalman filter implementation, but I get "-1" for the Col-I metric.

From what I read in #15, this is because the number of predicted tracks for the neighbours is not equal to the number of ground-truth tracks. Upon closer inspection, I was obtaining additional elements in the list of tracks that corresponded to empty lists (no actual positions).

While I'm not sure why this happens, I think it might be related to this issue: the start and end frames of different scenes are not completely separate in data converted with the Trajnet++ dataset code (https://github.com/vita-epfl/trajnetplusplusdataset).

Can someone confirm that this is the case? I assume I'm not the only one to have come across this issue. I could write a script to perform such a separation and see whether that is the actual problem; if I don't find any existing code to do so, I suppose that's my best option.
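One possible workaround sketch, with hypothetical variable names (not the evaluator's actual code): drop the empty neighbour tracks on both sides before the num_gt_neigh == num_predicted_neigh comparison. This lets Col-I be computed, although it does not explain where the empty tracks come from.

```python
def drop_empty_tracks(neighbour_tracks):
    """Remove neighbour tracks that contain no positions at all."""
    return [track for track in neighbour_tracks if len(track) > 0]

# toy example: one real neighbour track and one empty artefact track
tracks = [[(0.0, 1.0), (0.1, 1.1)], []]
print(drop_empty_tracks(tracks))   # [[(0.0, 1.0), (0.1, 1.1)]]
```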

License

Any plans to include a license in the repo?

Generative loss stuck

Hi,

Regarding the Social GAN model and while playing with your code, I found something that I couldn't understand.

E.g., while running:

python -m trajnetbaselines.sgan.trainer --k 1

we are running a vanilla GAN in which the generator outputs a single sample (the most common GAN setting, without the L2 loss). In this setting, the GAN loss stays at 1.38 throughout training, so the vanilla GAN (with only the adversarial loss) is not capable of modeling the data.

My question is: to what extent are we actually taking advantage of the GAN framework? Under the conditions above, it seems we are only training an LSTM predictor.
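One observation that may explain the plateau (a general property of the binary cross-entropy GAN objective, not something specific to this code): 1.38 is essentially ln 4, the value the adversarial loss takes when the discriminator outputs 0.5 for every sample, i.e. when it cannot tell real from generated trajectories.

```python
import math

# BCE loss of a discriminator that outputs 0.5 on a real sample and 0.5 on a
# fake one: -ln(0.5) per term, ln(4) ≈ 1.386 in total.
print(-math.log(0.5) * 2)   # 1.3862943611198906
```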

Problem training lstm

Hi, while trying to train Social LSTM I encountered this warning:
UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)

This is odd because the older version of the repo works fine with the same dataset.

I also tried switching to PyTorch 1.0.0, but that doesn't work either because of Flatten:
AttributeError: module 'torch.nn' has no attribute 'Flatten'

Can you please tell me what's going wrong?
Thanks

Parameters and Data for the Vanilla Baseline

Can anyone tell me whether the results of the vanilla baseline (the one that submitted models are compared against on AICrowd) are simply those of the LSTM with default parameters? And what data is this model trained on? Just the training data supplied by the challenge?

Thanks!

No module named 'socialforce' ??

Hi, first of all, thank you for sharing this great work.

"python -m trajnetbaselines.lstm.trainer --type directional --augment"
I just ran this command but faced the error below:
No module named 'socialforce'

Is there something I should install or include?
Thank you,

Issue running social force

Hi,
I can't run socialforce_eval.py because some files (the pickle files in the goals folder) are missing from the repo.
What are they, and how can I run this code properly?

Thanks in advance.

social anchors

I am writing this issue to thank you for your contributions to TrajNet++, which provide a great benchmark for trajectory prediction; it has helped me quickly learn some of the core ideas.

In addition, "Interpretable Social Anchors for Human Trajectory Forecasting in Crowds" is great work, but I am confused about a few points and would like to study the related code. Could you please share it? Thank you very much!

Issue about plot_log.py

Dear Author,
When I use plot_log.py, only the resulting accuracy picture is blank; its name is xx.val.png.
(See the attached figure; image not reproduced here.)
What should I do to make the accuracy show up correctly?
Thank you for your reply.

Problem running Sgan model

Hello, I've tried to run the code and encountered an error regarding the layer_dims parameter. The help section says to pass it as an array ([--layer_dims [LAYER_DIMS [LAYER_DIMS ...]]]), but I still can't train the model.

I run the following command:
python -m trajnetbaselines.sgan.trainer --batch_size 1 --lr 1e-3 --obs_length 9 --pred_length 12 --type 'social' --norm_pool --layer_dims 10 10

and get this error:

Traceback (most recent call last):
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 533, in
main()
File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 529, in main
trainer.loop(train_scenes, val_scenes, train_goals, val_goals, args.output, epochs=args.epochs, start_epoch=start_epoch)
File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 73, in loop
self.train(train_scenes, train_goals, epoch)
File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 141, in train
loss, _ = self.train_batch(scene, scene_goal, step_type)
File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/trainer.py", line 210, in train_batch
rel_output_list, outputs, scores_real, scores_fake = self.model(observed, goals, prediction_truth, step_type=step_type)
File "/Users/sasa/Desktop/trajnetplusplusbaselines/venv/trajnet3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/sgan.py", line 77, in forward
rel_pred_scene, pred_scene = self.generator(observed, goals, prediction_truth, n_predict)
File "/Users/sasa/Desktop/trajnetplusplusbaselines/venv/trajnet3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/sgan.py", line 283, in forward
hidden_cell_state = self.adding_noise(hidden_cell_state)
File "/Users/sasa/Desktop/trajnetplusplusbaselines/trajnetbaselines/sgan/sgan.py", line 154, in adding_noise
noise = torch.zeros(self.noise_dim, device=hidden_cell_state.device)
AttributeError: 'tuple' object has no attribute 'device'

I would appreciate it if you could tell me where I went wrong or give an example command that trains the model.

Thanks in advance

FDE score of 1.14 with social LSTM

Hi!
I am trying to reproduce the FDE score of 1.14 with Social LSTM.

Did you train on the whole training dataset (including cff)?
How many epochs?
And with which parameters?

Thanks in advance
Best regards

Issue about fast_evaluator and trajnet_evaluator

Hello, I've been using TrajNet++ to evaluate trained models recently.
Whether I use fast_evaluator or trajnet_evaluator, my Col-I is always -1.
I read that part of the code, and the condition for Col-I to be computed is num_gt_neigh == num_predicted_neigh.
But I don't know how to modify the code so that Col-I is computed.
Thank you very much for answering my questions.

Data normalization? Minor errors and minor suggestions

Hi,

First of all congratulations on this fruitful work.

Then, I have a technical question. It seems that you don't normalize the data in any of the steps. Why, then, did you choose standard Gaussian noise? It should produce samples with high variance with respect to k.

After downloading and installing the social force simulator, I ran the trainer and it threw an error:
ModuleNotFoundError: No module named 'socialforce.fieldofview'

After changing to:
from socialforce.field_of_view import FieldOfView

Everything worked fine.

Hyper parameters

Hi, I want to train the Social LSTM model on the ETH and UCY datasets. Would you mind sharing the best hyperparameters that achieve the results stated in the paper?

Thanks in advance

Is there a general principle for sampling trajectories from the raw data?

Thanks for providing such a useful tool to the community!
I'm trying to generate trajectories from some annotated raw data, but I'm not clear about the principles for sampling trajectories from it. My key concerns are as follows.

  • Should trajectories be allowed to overlap with each other? Is there a conventional overlapping ratio, like 50%?
  • Should trajectories from the training set and the testing set be allowed to overlap with each other?

Any advice or reference would be appreciated.
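A common convention (not an official rule) is to cut each track into fixed-length windows of obs_length + pred_length frames (9 + 12 = 21 in TrajNet++) and let consecutive windows overlap via a sliding stride; a minimal sliding-window sketch under that assumption:

```python
import numpy as np

def sliding_windows(track, seq_len=21, stride=10):
    """Cut one pedestrian track into fixed-length sequences.

    track: (T, 2) array of positions; seq_len = obs_length + pred_length;
    stride < seq_len lets consecutive windows overlap (stride == seq_len gives none).
    """
    return [track[start:start + seq_len]
            for start in range(0, len(track) - seq_len + 1, stride)]

# example: a 60-frame track with roughly 50% overlap between consecutive windows
print(len(sliding_windows(np.zeros((60, 2)))))   # 4 windows
```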

RuntimeError: CUDA error: out of memory

Hi,
When I run trajnet_evaluator.py after training with CUDA, I get:
RuntimeError: CUDA error: out of memory

Is this a problem on my end, or can I only run this code on the CPU?

python -m evaluator.fast_evaluator --path crowds_zara02 --output lstm_directional_one_12_6.pkl

When I run the command "python -m evaluator.fast_evaluator --path crowds_zara02 --output lstm_directional_one_12_6.pkl", I get this error:
File "/home/zyb/desktop/ 20230512/trajnetplusplusbaselines-LRP (1)/evaluator/fast_evaluator.py", line 22, in process_scene
predictions = predictor(paths, scene_goal, n_predict=args.pred_length, obs_length=args.obs_length, modes=args.modes, scene_id=scene_id, args=args)
TypeError: __call__() got an unexpected keyword argument 'scene_id'
Also, can the .pkl files be visualized? Looking forward to your reply.
