Open Traffic Reporter is part of OTv2, the new Open Traffic platform under development. It will take the place of OTv1's Traffic Engine component.
Reporter takes in raw GPS probe data, matches it to OSMLR segments using Valhalla Meili, and sends segments and speeds to the centralized Open Traffic Datastore.
Build/run the python matcher service via docker-compose:

- move your tarball to `/some/path/to/data/tiles.tar`; the path is of your choosing, but the name of the tarball is currently required to be `tiles.tar`
- start the service: `DATAPATH=/some/path/to/data docker-compose up`
- the container exposes port 8002 for the reporter, and docker-compose maps that port locally
- the container exposes port 6379 for redis, and docker-compose maps that port locally
- example browser request from your local machine: [click me](http://localhost:8002/segment_match?json={"trace":[ {"lat":14.543087,"lon":121.021019,"time":1000}, {"lat":14.543620,"lon":121.021652,"time":1008}, {"lat":14.544957,"lon":121.023247,"time":1029}, {"lat":14.545470,"lon":121.023811,"time":1036}, {"lat":14.546580,"lon":121.025124,"time":1053}, {"lat":14.547284,"lon":121.025932,"time":1064}, {"lat":14.547817,"lon":121.026665,"time":1072}, {"lat":14.549700,"lon":121.028839,"time":1101}, {"lat":14.550350,"lon":121.029610,"time":1111}, {"lat":14.551256,"lon":121.030693,"time":1125}, {"lat":14.551785,"lon":121.031395,"time":1133}, {"lat":14.553422,"lon":121.033340,"time":1158}, {"lat":14.553819,"lon":121.033806,"time":1164}, {"lat":14.553976,"lon":121.033997,"time":1167}]})
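The same request can be issued programmatically. A minimal standard-library sketch that builds the URL the way the browser example does (the endpoint and `json` query parameter come from the example above; the trace is shortened here):

```python
import json
from urllib.parse import urlencode

# The whole segment_match request is passed as a JSON document
# in the `json` query parameter, exactly as in the browser example.
trace = {"trace": [
    {"lat": 14.543087, "lon": 121.021019, "time": 1000},
    {"lat": 14.543620, "lon": 121.021652, "time": 1008},
    {"lat": 14.553976, "lon": 121.033997, "time": 1167},
]}
url = "http://localhost:8002/segment_match?" + urlencode({"json": json.dumps(trace)})
print(url)

# To actually issue the request (the service must be running):
# from urllib.request import urlopen
# result = json.load(urlopen(url))
```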
The following environment variables are exposed to allow manipulation of the python matcher service:

- `MATCHER_BIND_ADDR`: the IP on which the process will bind inside the container. Defaults to `0.0.0.0`.
- `MATCHER_CONF_FILE`: the configuration file the process will reference. Defaults to `/etc/valhalla.json`, which is included in the build of the container.
- `MATCHER_LISTEN_PORT`: the port on which the process will listen. Defaults to `8002`.
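As a sketch of how a service can consume these variables (the defaults mirror the documented ones; this is an illustration, not the matcher's actual startup code):

```python
import os

# Read each documented variable, falling back to its documented default.
bind_addr = os.environ.get("MATCHER_BIND_ADDR", "0.0.0.0")
conf_file = os.environ.get("MATCHER_CONF_FILE", "/etc/valhalla.json")
listen_port = int(os.environ.get("MATCHER_LISTEN_PORT", "8002"))
print(f"binding {bind_addr}:{listen_port} with config {conf_file}")
```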
This repository is tested on CircleCI.

- pushes to master will result in a new container with the `latest` tag being published on Docker Hub
- tags of the form `v{number}` will result in a docker container with a matching tag being built from whatever commit is referenced by that tag: e.g. tagging `v1.0.0` on master will result in a container tagged `v1.0.0` being built off of that tag on master
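The publishing rules above can be summarized in a few lines; this is an illustration of the convention, not the repository's CI configuration (the `docker_tag` helper is hypothetical):

```python
import re

# Mirror the rules: version tags of the form v{number} publish that tag;
# otherwise pushes to master publish 'latest'; otherwise nothing is published.
def docker_tag(branch, git_tag=None):
    if git_tag and re.fullmatch(r"v\d+(\.\d+)*", git_tag):
        return git_tag
    if branch == "master":
        return "latest"
    return None  # nothing is published

print(docker_tag("master"))            # latest
print(docker_tag("master", "v1.0.0"))  # v1.0.0
```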
Example: build and push a container tagged `test`:

```shell
docker build --tag opentraffic/reporter:test --force-rm .
docker push opentraffic/reporter:test
```
Download sample probe data and place it in a `/data/traffic/manila` directory.
To import probe data and create the JSON request file to be used as input to the Reporter service, run the following from the reporter directory (passing the path to the data as an argument):

```shell
./py/csv_formatter.py /data/traffic/manila > reporter_requests.json
```
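The real transformation lives in `py/csv_formatter.py`; as a rough sketch of the idea only, here is a hypothetical formatter. The CSV column names (`uuid`, `lat`, `lon`, `time`) and the one-request-per-line output shape are assumptions for illustration, not the script's actual schema:

```python
import csv
import json
from collections import defaultdict

# Hypothetical sketch: group probe rows by vehicle id and emit one JSON
# request per vehicle, with trace points ordered by timestamp.
def format_probes(path):
    traces = defaultdict(list)
    with open(path) as f:
        for row in csv.DictReader(f):
            traces[row["uuid"]].append({
                "lat": float(row["lat"]),
                "lon": float(row["lon"]),
                "time": int(row["time"]),
            })
    for uuid, points in traces.items():
        points.sort(key=lambda p: p["time"])
        yield json.dumps({"uuid": uuid, "trace": points})
```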
NOTE: this step is taken care of by the docker container. The csv formatter script can be run from your local machine rather than inside the container, as can the command that curls the requests to the service.
To run the Reporter service and send the created JSON requests via POST, start the reporter service from the py directory:

```shell
PYTHONPATH=$PYTHONPATH:../../valhalla/valhalla/.libs REDIS_HOST=localhost DATASTORE_URL=http://localhost:8003/store? ./py/reporter_service.py ../../conf/manila.json localhost:8002
```
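A single line of `reporter_requests.json` can also be POSTed to the running service without curl. A standard-library sketch (the endpoint is the one started above; `build_request` and `post_line` are illustrative helpers, not part of the repository):

```python
import json
from urllib.request import Request, urlopen

# Build a POST request carrying one JSON line from reporter_requests.json.
def build_request(line, url="http://localhost:8002/segment_match?"):
    return Request(url, data=line.encode("utf-8"),
                   headers={"Content-Type": "application/json"})

# Send it and decode the response (the service must be running).
def post_line(line):
    with urlopen(build_request(line)) as resp:
        return json.loads(resp.read())
```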
Alternatively, build and run the service via docker-compose (with your `tiles.tar` in place under `DATAPATH`, as described above):

```shell
export DATAPATH=/data/valhalla/manila
sudo docker build -t opentraffic/reporter .
sudo -E /usr/bin/docker-compose up
```
Run the following from the py directory to curl the reporter requests through the reporter service via POST:

```shell
time cat py/reporter_requests.json | parallel --progress -j7 curl -s --data '{}' -o /dev/null http://localhost:8002/segment_match?
```

Or curl a specific number of requests through the reporter service via POST:

```shell
head -n 10 py/reporter_requests.json | parallel --progress -j7 curl -s --data '{}' -o /dev/null http://localhost:8002/segment_match?
```
Currently we only support a rudimentary form of authentication between the reporter and the datastore. The idea is that the reporter will be run on premises (i.e. by a fleet operator) and will then need to authenticate itself with the centralized datastore architecture. For now this is done via a `secret_key` query parameter in the reporter's request URL to the datastore. The datastore must be configured to do the authentication. The reporter gets the URL for the datastore from an environment variable, so adding authentication only requires changing this URL to include the `secret_key` query parameter.
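Since the datastore URL comes from an environment variable, attaching the key is just string manipulation on that URL. A sketch using the standard library (`DATASTORE_URL` is the variable used in the run command above; the key value is a placeholder):

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

# Append (or overwrite) the secret_key query parameter on the datastore URL.
def with_secret_key(datastore_url, secret_key):
    parts = urlparse(datastore_url)
    query = dict(parse_qsl(parts.query))
    query["secret_key"] = secret_key
    return urlunparse(parts._replace(query=urlencode(query)))

print(with_secret_key("http://localhost:8003/store?", "abc123"))
# -> http://localhost:8003/store?secret_key=abc123
```

The resulting string is what you would export as `DATASTORE_URL` when starting the reporter.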