
An open source tool to quantify the world

Home Page: https://opendata.cam

License: MIT License

JavaScript 91.82% Shell 4.53% CSS 0.99% Dockerfile 2.66%
yolo camera iot smart-city computer-vision dataviz jetson darknet jetson-nano jetson-tx2

opendatacam's Introduction

OpenDataCam – An open source tool to quantify the world

OpenDataCam is an open source tool that helps to quantify the world. With computer vision OpenDataCam understands and quantifies moving objects. The simple setup allows everybody to count moving objects from cameras and videos.

People use OpenDataCam for many different use cases. It is especially popular for traffic studies (modal-split, turn-count, etc.) but OpenDataCam detects 50+ common objects out of the box and can be used for many more things. And in case it does not detect what you are looking for, you can always train your own model.

OpenDataCam uses machine learning to detect objects in videos and camera feeds. It then follows the objects as they move across the scene. Define counters via the easy-to-use UI or API, and every time an object crosses a counter, OpenDataCam takes count.

Demo Videos

👉 UI Walkthrough (2 min, OpenDataCam 3.0)
👉 UI Walkthrough (4 min, OpenDataCam 2.0)
👉 IoT Happy Hour #13: OpenDataCam 3.0

Features

OpenDataCam comes packed with features; the highlights are:

  • Multiple object classes
  • Fine grained counter logic
  • Trajectory analysis
  • Real-time or pre-recorded video sources
  • Run on small devices in the field or data centers in the cloud
  • You own the data
  • Easy to use API

🎬 Get Started, quick setup

The quickest way to get started with OpenDataCam is to use the existing Docker Images.

Prerequisites

Installation

# Download install script
wget -N https://raw.githubusercontent.com/opendatacam/opendatacam/v3.0.2/docker/install-opendatacam.sh

# Give exec permission
chmod 777 install-opendatacam.sh

# Note: You will be asked for sudo password when installing OpenDataCam

# Install command for Jetson Nano
./install-opendatacam.sh --platform nano

# Install command for Jetson Xavier / Xavier NX
./install-opendatacam.sh --platform xavier

# Install command for a Laptop, Desktop or Server with NVIDIA GPU
./install-opendatacam.sh --platform desktop

This command downloads and starts a Docker container on the machine. Once it finishes, the container starts a webserver on port 8080 and runs a demo video.

Note: The docker container is started in auto-restart mode, so if you reboot your machine it will automatically start OpenDataCam on startup. To stop it, run docker-compose down in the same folder as the install script.

Use OpenDataCam

Open your browser at `http://[IP_OF_JETSON]:8080`. (If you are running with the Jetson connected to a screen, try http://localhost:8080.)

You should see a video of a busy intersection where you can immediately start counting.

Next Steps

Now you can…

  • Drag and drop a video file into the browser window to have OpenDataCam analyze it
  • Change the video input to run from a USB cam or other cameras
  • Use custom neural network weights

and much more. See Configuration for a full list of configuration options.

🔌 API Documentation

To solve use cases that aren't covered by the OpenDataCam base app, you may be able to build on top of our API instead of forking the project.

https://opendatacam.github.io/opendatacam/apidoc/
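For example, a client could poll counter data over HTTP. This is a minimal sketch only; the endpoint path and response shape are assumptions, so check the API docs above for the real routes:

const BASE_URL = 'http://192.168.0.42:8080'; // IP of your OpenDataCam device (example)

async function fetchCounterData() {
  // '/counter/data' is a hypothetical endpoint; see the API docs for the real one
  const res = await fetch(`${BASE_URL}/counter/data`);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

fetchCounterData().then((data) => console.log(data));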

🗃 Data export documentation

🛠 Development notes

See Development notes

πŸ’°οΈ Funded by the community

  • @rantgithub funded work to add Polygon counters and to improve the counting lines

πŸ“«οΈ Contact

Please ask any questions you have around OpenDataCam in the GitHub Discussions. Bugs, feature requests, and anything else regarding the development of OpenDataCam are tracked in GitHub Issues.

For business inquiries or professional support requests please contact Valentin Sawadski or visit OpenDataCam for Professionals.

💌 Acknowledgments

opendatacam's People

Contributors

alanb128, b-g, bettysteger, clair-hu, dependabot[bot], felixjets, ff6347, florianporada, gmetax, goodengineer, iraadit, kant, kartben, luiscosio, mildsunrise, munsterlander, paroque28, rcpeters, shams3049, tdurand, thapliyalshivam, vladimir-kananovich, vsaw, yauh


opendatacam's Issues

Changing time in export.csv

Everything is working wonderfully so far, so congratulations on that. I was wondering if there is something I could do so that the export.csv file displays local time, or if it must stay in UTC. Can you help me out?

UI design + code

@b-g @tdurand Hi, I just wanted to update you on the current status of the UI design + code. Everything is inside the playground in UI-test. I made a rough HTML structure of the main page and applied the styles + some animations.

screencast_ui

to do:

  • subpages (traffic stats + info)
  • detection marks/flags
  • game elements
  • traffic activity on main page

Tracking without active browser session

After I tested the app on the Jetson board, I noticed that tracking only happens while there is an active browser session on the client.
After closing the browser window the collected data is lost.

It would be cool to be able to trigger the process on the Jetson via the client and retrieve the running tracking process when accessing the web interface again.

I think it would be enough to store the tracking data in a temp JSON/CSV file and just expose it to the web interface as a download link.
@tdurand maybe you have an idea on how to handle the connection between the running process and the browser session? :)
something like:

if (tracking.isActive) {
    res.send(activeSessionToClient);
}

(notrealcode)
If this were implemented, we also would not have the issue that the app crashes (at least the video stream) when opening multiple web sessions to the Jetson board.
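Fleshing that pseudocode out a bit, here is a minimal sketch of the temp-file idea, assuming an Express server and a tracker object with isActive / getData() (both stand-ins, not the real code):

const express = require('express');
const fs = require('fs');

const app = express();
const TRACKING_FILE = '/tmp/tracking-data.json'; // hypothetical temp file

// Stand-in for the real tracker process state
const tracker = { isActive: true, getData: () => ({ counters: [], items: [] }) };

// Persist the tracker state periodically, independent of any browser session
setInterval(() => {
  if (tracker.isActive) {
    fs.writeFileSync(TRACKING_FILE, JSON.stringify(tracker.getData()));
  }
}, 10000);

// The web interface can re-attach at any time and download the collected data
app.get('/tracking/download', (req, res) => res.download(TRACKING_FILE));

app.listen(8080);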

darknet-net

  • try to stream a video from the named pipe tmp file via ffmpeg
  • yolo.weight with just traffic related categories
  • proof of concept of receiving websocket result on the backend
  • darknet-net on osx, apparently works out of the box with osx 10.13

Responsive UI behaviour

I was playing with responsiveness in the nice UI-test prototype put together by @mmmmmmmmmmmmmmmm. It's great to be able to try some buttons on top of the UI to get a better feel for the experience, even if the buttons and menu won't necessarily be in the same place in the end.

Problem

I think we should settle how we want the app to behave when the viewport doesn't correspond to the 16/9 video ratio (which is almost always the case), because if we start to try implementations that don't draw everything in a canvas (i.e. having some divs / SVGs floating around), this can change things, and we don't want to prototype everything twice.

I see those options:

  1. Maximize either width or height and crop the remaining parts not fitting the viewport, giving a full-width/height layout that can't be scrolled.

screen shot 2017-09-13 at 16 28 45
screen shot 2017-09-13 at 16 28 34

  2. The same thing, but with scrollbars to "explore" the hidden parts of the video (with the menu fixed on top).

tonrxw4gtz

  3. Squeeze the video into the viewport, not respecting the ratio (I didn't make an example of this, as I think we do not want it); portrait mode obviously looks awful.

screen shot 2017-09-13 at 16 20 27

  4. Respect the ratio, center the video horizontally or vertically, and display "black" borders -> this could display things really small; I think we do not want that either.

The best way to decide is to test: you can open the UI-test (https://github.com/moovel/lab-traffic-cam/tree/master/x-playground/UI-test), which crops the video, and the UI-test-scrollbars (https://github.com/moovel/lab-traffic-cam/tree/master/x-playground/UI-test-scrollbars), which doesn't crop and displays scrollbars. (Also, I didn't notice any performance drop with the scrollbar option.)

I encourage you to open it on your mobile as well (you need Chrome; on iOS it won't work for now).

Thoughts

First, I think portrait mode is pretty nice with 1) and 2); with the street coming vertically into the frame it feels nice. When we first talked we thought about asking people to go to landscape mode, but maybe we should embrace the fact that people mostly use their mobile in portrait mode and at least provide the landing mode in portrait.

Remark: for 1) I had to implement some smart cropping logic, always cropping the top of the image first as the bottom contains the interesting stuff with the street, and cropping the left side up to the vertical street in portrait. We could do the same for 2) and scroll to those parts by default.

Regarding scrollbars vs. no scrollbars, I like both. The scrollbar option feels especially nice in mobile portrait: you touch to move to the hidden part of the image, and I can imagine the same thing on a tablet in landscape. But we would need some UI indicating that the viewport can be moved, which could be distracting.

No scrollbars feels simpler and more "focused", but the problem would be if we crop a part that is essential for the mode; it would become unusable. We can work around that by requesting landscape when entering a special mode, but, for example, the landscape of an iPad is not 16/9 at all. In that case we could fall back to option 4.

We could also have a button to freeze / unfreeze the scrollbars that toggles overflow: hidden on the body.
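Something as small as this toggle could do it (a sketch):

// Freeze / unfreeze page scrolling by toggling overflow on the body
function toggleScrollFreeze() {
  const frozen = document.body.style.overflow === 'hidden';
  document.body.style.overflow = frozen ? '' : 'hidden';
}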

Video stream via Websocket

The Jetson setup is now ready, and it's possible to send the video stream via WebSocket from the Jetson board. So now I think we should build something like the video-streaming example, but with a WebSocket receiver for video + data, to test the whole pipeline. What do you think @tdurand @b-g ?

@tdurand The Jetson for you arrived today, so we will start setting it up :)

Improve Jetson Flashing documentation

  1. Maybe you could provide a small help section for everyone who is new to Ubuntu, something like first steps or prerequisites.

This could be

A host PC with Ubuntu is needed
Which version of Ubuntu is needed (16.04, I think) and how to install it (https://tutorials.ubuntu.com/tutorial/tutorial-install-ubuntu-desktop-1604#4)
How to connect the Jetson to the host PC

  2. I found this tutorial very helpful for getting started, and especially for flashing the Jetson, so maybe you can incorporate it?

https://www.youtube.com/watch?v=D7lkth34rgM

  3. Last but not least, I think it would help everyone who is not used to Ubuntu if you could put the notice about the automatic installation at the very top of the page, right after the flashing part. I realized that the code is not suitable for just copying and pasting into the terminal on the Jetson (like changing directories back to install new files, for example).

Android Chrome mobile compat

Sharing work in progress here, not really requesting any actions from you @b-g , @mmmmmmmmmmmmmmmm

As we are going mobile first, I've played with the prototype with the video timecode (not the streaming one yet) on Chrome mobile (Android), and it wasn't working. It turns out we needed some tweaks (e801429):

  • the video tag must be muted, otherwise we can't trigger playing of the video
  • we do not use autoplay, but trigger play on the loadstart event
  • I've switched the timeout loop to requestAnimationFrame, improving performance a lot, including on desktop: on my quite recent Android phone it runs at 20 FPS, on desktop at 60 FPS

In my opinion this is good enough perf; it should work at the same level for anything we draw on the canvas, and we shouldn't have much more client-side computation running at the same time.
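A sketch of these tweaks together (element ids are made up):

const video = document.getElementById('source-video');
video.muted = true; // required, otherwise Chrome mobile refuses programmatic play
video.addEventListener('loadstart', () => video.play()); // instead of autoplay

const canvas = document.getElementById('overlay');
const ctx = canvas.getContext('2d');

function draw() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  requestAnimationFrame(draw); // instead of a setTimeout loop
}
requestAnimationFrame(draw);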

Still to figure out:

WebGL implementation of masking

Goal of this: being able to "mask" on the contours of the car rather than on the bounding box given by YOLO.

State of the art

A researcher wrote a paper comparing all the background subtraction algorithms

And the good thing is that he open-sourced the work in a library:

https://github.com/andrewssobral/bgslibrary

He also applied BG subtraction to tracking and counting vehicles in this project: https://github.com/andrewssobral/simple_vehicle_counting

I couldn't resist running it on our videos, and to my surprise it is pretty good (though it only counts cars), but it's not realtime because it runs only on the CPU (and my CPU is not that good).

screen shot 2017-11-06 at 10 06 54

Regarding the bg-subtraction, I tried the project on our video too, and it's even faster: almost realtime on my CPU for a 1280x720 video (it doesn't work on the GPU).

It gives pretty amazing art:

screen shot 2017-11-06 at 10 13 07

screen shot 2017-11-06 at 10 12 39

And it doesn't work with an "average image", just by comparing the difference with the previous frame.

Pragmatic approach

I didn't investigate the C++ code of the bg-subtraction library much, but as it only runs on the CPU and implements pretty complex algorithms that are probably not designed to work on a GPU, I haven't spent much more time on it. The effort to adapt it to WebGL seems pretty high, but it is a nice information base.

I've found some nice "little projects" doing screenshot "diffs"; it's something pretty common for building quality control of UI implementations in software development.

The best one is from Mapbox (by the creator of Leaflet; this guy is everywhere 😂): https://github.com/mapbox/pixelmatch

The codebase is pretty simple and uses some research work on antialiasing to diff two images (not just subtracting the pixels).

I did try with our average image + frame of the video:

output

There are some threshold params, but it seems to give pretty good results.

But the algorithm is CPU-only; it is fast, but not fast enough to run in the browser.

GPU / Webgl fragment shader implementation

I'm currently experimenting with our own WebGL implementation of this. I've implemented a prototype that simply subtracts the RGB channels of the average frame from a single frame, and it gives these results:

Shader code (simplified)

vec4 outputColor = vec4(frameText.rgb - bgText.rgb, 1.0);

Result:

screen shot 2017-11-06 at 10 26 42

Pretty nice as well. So now on the todo list:

  • See the performance of the fragment shader in realtime with a video feed as input (I'm pretty sure it will be 60 FPS)
  • Then do a prototype of masking a single contour of a car
  • If we feel it can work, improve the algorithm using all the research above (anti-aliasing ...)

There is also this amazing project to integrate GLSL shaders into React easily: https://github.com/gre/gl-react
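For reference, here is a sketch of what the full (non-simplified) fragment shader could look like, written as a GLSL string in JavaScript so it can be fed to gl-react or raw WebGL. Uniform and varying names are assumptions:

const frag = `
precision highp float;
varying vec2 uv;
uniform sampler2D frameText; // current video frame
uniform sampler2D bgText;    // average background image

void main() {
  vec4 frameColor = texture2D(frameText, uv);
  vec4 bgColor = texture2D(bgText, uv);
  gl_FragColor = vec4(frameColor.rgb - bgColor.rgb, 1.0);
}
`;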

Prototype Roadmap

Up-to-date prototype URL: https://traffic-cam-yxqtypoqyv.now.sh/game

@mmmmmmmmmmmmmmmm tasks:

  • Share the current sketch file on Google Drive
  • Upload a good variety of possible footage (crops / inclinations; maybe consult @b-g for ideas) to the Vimeo account (say 10 different crops) and share the associated detections files on Google Drive (with the Vimeo id or something to identify what belongs to what). Maybe stick to 5 min of footage per video, and do all the detections at the same resolution (1920x1080, to have more pixels for better detection results). If we try "portrait" videos it will obviously be a different resolution, but in that case specify which.
  • Find some sound tracks that could be nice
  • Continue exploring game / mode ideas that we could integrate in the prototype. Maybe do some more prototypes of ideas.
  • Follow along with the implementation of the prototype and ask questions about what isn't clear 😉
  • Anything I didn't think about

@tdurand tasks

Work on the implementation of the prototype (react app)

  • Implement the Beat the Traffic game in the react app with the SVG masking implementation
  • Improve the Beat the Traffic game with some ideas from @mmmmmmmmmmmmmmmm (sound tracks, counting cars..) and @b-g (sprite animation, blur edges...); try to have different options for disappearing cars / game modes under different URLs to test
  • Include some of the design elements of @mmmmmmmmmmmmmmmm in the UI
  • Add a select to be able to switch between videos
  • Make sure it works correctly on mobile / desktop
  • Upload the prototype to a public URL to be able to share it with people

@b-g tasks

Provide assistance on demand πŸ˜‰ +

  • @tdurand will try to have a first prototype done by wednesday, do a first review to add / remove stuff to it.

BTW, I'll be off on monday but back on tuesday.

Introduce a "valid-classes" config option to restrict the tracker to some classes only

I noticed you guys recently put out a new update. I really liked the addition of ID tags and being able to download the data without stopping tracking, I think that will be very useful, but I've got some questions.

Firstly, why are you tracking chairs now? Since this isn't shown in the UI, I was very surprised when counterData.csv had tracked my office chair.
counterData.csv.txt

Secondly, it kept giving me the same file instead of a new one when I would download without stopping tracking. The UI correctly displayed the new totals but it kept giving me the data from the first set. To get an updated csv file I had to reinitialize the tracker.

Thirdly, I've noticed frequent double counting that I didn't notice in the last version. See lines 7 and 8 of the file below for example.

Fourthly, it also seems like the "Unique ID Tags" aren't very unique. See lines 3 and 22 of the file below. Plus there was only ever one chair and one person and neither ever left the sight of the camera.

counterData (10).csv.txt

Couldn't connect to webcam.

I was able to install and run the webserver. But after I set the line and click the "start count" button, the message "initializing neural network..." shows on the screen and gets stuck.
I get the following error in the terminal:
Couldn't connect to webcam. : No such file or directory darknet: ./src/utils.c:256: error: Assertion 0' failed.
video file: -address
Using a PC to test the system. OS: Ubuntu 16.04, GPU: GTX 1060, CUDA 9.
Darknet was already installed, so I did not install it again. Is the error happening because of that?

First iteration : Game Prototype

UPDATE: most up to date proto: https://traffic-cam-jwcjngkuli.now.sh/level/2

Hi !

I've put together a first level of the game https://traffic-cam-kupohgyewv.now.sh/ . It should work on mobile + desktop.

It features some work:

  • on the tracker, to be able to know when things disappear and to be able to ignore some areas
  • on the app I've prototyped some game concepts / animation we talked about

Beware:

  • You might get some FPS drops on mobile when clicking stuff; the animation work isn't well optimised
  • Masking is not improved yet, and you'll notice that it is problematic on this particular video, as things get hidden by each other; a quick win would be to make several things disappear with one click if there is too much overlap

I'm gonna:

  • implement another level to have some game progression, and put the thing in landscape on mobile in level 2
  • try to improve tracker + masking

@b-g @mmmmmmmmmmmmmmmm not asking for much feedback from you; I've shared this "early" to enable Markus to get a feel for the thing, it can help you with the design.

Be in touch,

Thibault

Making cars disappear

Quick update:
I played around with cv.accumulateWeighted to calculate an average image from a video file or webcam, aiming to create a car-free background image.

See x-playground/average-image/average-video.py

src frame from video
frame-from-video

output of cv.accumulateWeighted with weight 0.007
20170920-112630
Note the white car is still there, as the duration of the test video clip was just 50 sec and that car was sitting there the whole time (waiting for a green light)

masking cars in photoshop
screen shot 2017-09-20 at 12 17 13
Clipped out manually the bboxes of the cars

result = masked out cars layer + average background image
screen shot 2017-09-20 at 12 18 30
Not bad ... we beat the traffic and the cars seem to have disappeared :)

I had to slightly increase the lightness of the average background image to better match the foreground mask layer.

Decoding of timecode in canvas

A thought which crossed my mind: do we really have to decode the timecode via expensive canvas pixel magic all the time? Would it not be enough to do that exactly once at the very beginning (or at least just every n seconds)?

Because once the video or live stream runs ... you can get some kind of currentTime from the HTML video element. Once we have that, we can match it a single time with the encoded timecode in the pixels of the video. We then know that position ABC in the video matches yolo time XYZ; we basically know the "time offset" in absolute time. Kind of: yoloTime = video.currentTime - absoluteTimeOffset

One problem is that we might then have to search for the closest detected yolo frame ... which could also be done with a 1d kdTree.
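A minimal sketch of that idea (decodeTimecodeFromPixels is a hypothetical helper; and since the detections are sorted by time, a plain binary search does the same job as a 1d kdTree):

let absoluteTimeOffset = null;

// Do the expensive canvas pixel decoding exactly once
function calibrate(video) {
  const encodedYoloTime = decodeTimecodeFromPixels(video); // hypothetical
  absoluteTimeOffset = video.currentTime - encodedYoloTime;
}

// Afterwards, mapping video time to yolo time is free
function yoloTime(video) {
  return video.currentTime - absoluteTimeOffset;
}

// Binary search for the closest detected yolo frame (detections sorted by .time)
function closestDetection(detections, t) {
  let lo = 0;
  let hi = detections.length - 1;
  while (hi - lo > 1) {
    const mid = (lo + hi) >> 1;
    if (detections[mid].time < t) lo = mid; else hi = mid;
  }
  return t - detections[lo].time <= detections[hi].time - t
    ? detections[lo]
    : detections[hi];
}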

Error while installing node-yolo

I'm unable to compile the darknet fork; I get the following error.

make OPENCV=1

gcc -Iinclude/ -Isrc/ -DOPENCV `pkg-config --cflags opencv` -Wall -Wno-unknown-pragmas -Wfatal-errors -fPIC -Ofast -DOPENCV -c ./src/utils.c -o obj/utils.o
./src/utils.c:16:5: warning: implicit declaration of function 'clock_gettime' is invalid in C99 [-Wimplicit-function-declaration]
clock_gettime(CLOCK_REALTIME, &now);
^
./src/utils.c:16:19: fatal error: use of undeclared identifier 'CLOCK_REALTIME'
clock_gettime(CLOCK_REALTIME, &now);
^
1 warning and 1 error generated.
make: *** [obj/utils.o] Error 1

Using OSX 10.11.6 .

I'll investigate further later, I wanted to try the streaming

Splitting in two repos ?

I'm starting to work on the Jetson MVP, and I think it could be a good time to create another repo and clean things up, to have a clean:

  • Repo for the game app
  • Repo for the jetson standalone app

The only thing is that we would need to move some issues; there is a tool for that: https://github-issue-mover.appspot.com/

Or the other option is to wait and do it at the end.

What do you think @b-g ?

WiFi hotspot setup

There seems to be an issue with the WiFi hotspot.

It is possible to set up the hotspot, but the connection gets refused when trying to connect.
The issue is known in the NVIDIA forum.

For now OpenDataCam has to be connected to a WiFi network (as a client).

Week roadmap + stuff I'm waiting on

From @mmmmmmmmmmmmmmmm :

  • 📹 New videos: if you can, Thursday evening at the latest, to have a new iteration ready for Monday 👌
  • Design fixes: would also be great by Thursday evening, to integrate into the next iteration
  • Introduction animation with sound synced (but needs new sound from Patrick): not high prio, can be next week if it doesn't fit into your week or you didn't get the sound

From chris:

it would be amazing if I could have those things by Wednesday night:

  • Stars animation
  • 3D sprite

From @b-g :

  • ✅ A meeting scheduled on flights to rome (already delivered 👌)

Personal roadmap is :

  • Deliver the Jetson app MVP
  • Get the WebGL masking integrated into the game
  • Integrate missing things into the game: bananas + menu + stars animation + 3D sprites
  • Finish the new iteration on the game (new video + fix design + add animation)
  • Can also do some needed stuff for later: prepare the app to support several cities, city-localized assets ...

bonus: work a bit on flights to rome if I run out of work, but I would prefer not to use time on that before we meet.

Ready for testing

Some news on this: I've taken the MVP to a beta product state:

  • Front-end is implemented with all the styling of @mmmmmmmmmmmmmmmm + the editor interface to draw lines.

  • Server-side, I've improved the reliability of the process orchestration and implemented the logic to detect whether a tracked item is intersecting any counting line (some solid geometry skills required 😏; see the sketch below).

  • I've added some more instructions to the big cooking recipe for running the node.js app at startup + putting the Jetson in overclocking mode.
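A sketch of that line-crossing logic: the segment travelled by a tracked item between two frames is tested against the counting line with a standard 2D segment intersection (orientation tests; collinear edge cases ignored). This is an illustration, not necessarily the exact implementation:

// Cross product of vectors OA and OB; sign gives the turn direction
function cross(o, a, b) {
  return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// True if segment p1-p2 properly intersects segment q1-q2
function segmentsIntersect(p1, p2, q1, q2) {
  const d1 = cross(q1, q2, p1);
  const d2 = cross(q1, q2, p2);
  const d3 = cross(p1, p2, q1);
  const d4 = cross(p1, p2, q2);
  return d1 * d2 < 0 && d3 * d4 < 0;
}

// Usage: did this item cross the counting line between the last two frames?
// segmentsIntersect(item.previousPosition, item.currentPosition, line.start, line.end)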

All together, after installing everything, the user flow is now great:

  • Plug jetson to some power source
  • Start it
  • Connect to jetson wifi
  • Open 192.168.2.1:8080

And you see the interface (click to open; it's a YouTube video).


I guess the next step is to test this with actual cars. I've counted myself crossing my counting line and it kind of works 😉, though so far I've mostly tested with video footage of streets.

Video shooting

I'm creating an issue to gather the feedback / progress on this; it will be easier than the Google doc. @mmmmmmmmmmmmmmmm don't feel I'm pushing you to do it now, I know that this week you don't have the time 😉. cc @b-g

Shooting specs:

  • 30 FPS
  • HD: 1080p
  • Shoot on a tripod
  • Try to get more than just 2 min, to have some more "calm" moments without cars that help to produce the average image of the scene.

General Specifications (feel free to update):

  • Shoot cloudy weather to avoid shadows, or at mid-day.
  • Two cities: Stuttgart and Berlin
  • Every level is in the same city
  • Every level has ~3 alternative videos
  • Level 1 (1 min): clear direction, "vertical line", opens well in portrait.
  • Level 2 (2 min): more traffic than level 1, but not yet rush hour; more lanes than level 1 (landscape/portrait both possible).
  • Level 3 (2 min): peak, rush hour; more lanes than level 2 (landscape/portrait both possible).
    Speed up the video, making it really hard at the end (level 3).

Week after Week task:

  1. Shoot different things for each level (ignore the cloudy / mid-day requirement)
  2. Integrate them into the game prototype: send @tdurand the footage + detections
  3. Test and gather feedback on difficulty level, attractiveness of the scene ....

Repeat from 1.

First MVP

We have a first, very rough MVP; it feels like Christmas 🎄!

Documentation

I worked a lot on that end. Please review the readme: https://github.com/moovel/lab-traffic-cam/blob/master/README.md . Normally, if someone goes through all the steps, they should be able to run the MVP, but I may have forgotten some steps, so it would be great if someone tries to install it while I'm off, to get a first round of feedback.

Also, there are lots of steps that we need to work on automating. I'm not really a devops person, as you know @b-g, so maybe it would be great to reserve a slot for someone inside/outside moovel to help us on that end.

Technically, better than hundreds of words, I've put together this schema of the architecture:

technical architecture open traffic cam

Limitations and next steps

This is where you see the Christmas gift isn't that great yet 😁.

UI

Really "minimal" UI for now, you have 2 screens:

screen shot 2017-12-08 at 11 44 46

screen shot 2017-12-08 at 11 45 24

And then you have some debug info output on the console of the Jetson (you see 1 FPS now because I hadn't overclocked the Jetson in those tests):

screen shot 2017-12-08 at 11 45 55

Tracker accuracy

For now, as we haven't implemented the UI to define "counting areas", we only apply the following logic: "Count everything that has been matched for more than 3 frames." (But obviously, each time the ids get reassigned for the same object, we will count it twice...)
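A sketch of that rule (the data shape is an assumption):

const MIN_MATCHED_FRAMES = 3;

// trackedItems: [{ id, matchedFrames, ... }]
function countedItems(trackedItems) {
  return trackedItems.filter((item) => item.matchedFrames > MIN_MATCHED_FRAMES);
}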

Bugs

The UI doesn't take into account the "async" nature of the processes. The main bug: when you click "start counting", the UI immediately displays the counting screen, but as YOLO takes ~30s to start up, if you then click "stop counting" it will fail miserably and leave behind a child process, forcing you either to restart Ubuntu or to kill the process manually with ps -ef; kill -15 PROCESS_ID

Also, it's very possible that the process cannot start / stop cleanly for some other reason; it's still very rough.

Next steps

  • Implement the counting lines editor UI to be able to count stuff better
  • I think it may actually be helpful to display the bounding boxes of the yolo detections on the counting screen; we can't display the webcam stream, but we could draw the bounding boxes. Maybe some rethinking of the design is needed here.
  • Then refine with the actual colors and real style. @mmmmmmmmmmmmmmmm, if you have time in January, you could start an empty next.js project and work on implementing some react components with styling (like buttons, fonts, static screens) independently of the jetson / YOLO logic; that would really help us speed up in February.
  • Work on devops to have a seamless install process that just starts the node app when the jetson starts.
  • Make it way more robust at starting / killing processes...

Review Aug

General

  • Add transition (move x) between count and path view
  • Just have a single arrow (left or right) as there are just two views currently
  • Change label of rec button to "Start Tracking" or "Stop Tracking"
  • Style download data buttons
  • Show the download data buttons always (also while the tracking is running)
  • Style current "Count again" button like the rec button

Path view

  • Show canvas with background image full screen
  • Add button "export screenshot"

New:

  • Option to run from a video file instead of the webcam

websocket server: client can't reconnect after a certain time.

After running some long-duration tests I encountered a problem with the websocket connection.

Tue Apr 17 2018 11:45:12 GMT+0000 (UTC) Peer ::ffff:127.0.0.1 disconnected.

The client loses the connection to the server process. Reloading the browser results in the following error:

(node:13929) UnhandledPromiseRejectionWarning: Error: connect ECONNREFUSED 127.0.0.1:80
    at Object._errnoException (util.js:1022:11)
    at _exceptionWithHostPort (util.js:1044:20)
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1198:14)
(node:13929) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)
(node:13929) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

My first guess is that the browser tries to connect on port 80 instead of 8080 (or is the websocket process running on 80?).

cc @tdurand any ideas on that?
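If the port is indeed the issue, the fix on the client would be as small as connecting explicitly on the app's port instead of the WebSocket default of 80 (a sketch; hostname and port handling are assumptions):

const ws = new WebSocket(`ws://${window.location.hostname}:8080`);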

POC of Jetson standalone setup

A minimal POC to see the challenges:

cc @mmmmmmmmmmmmmmmm @b-g

TUTORIAL

Hardware needed:

  • Logitech webcam
  • Jetson (with Ubuntu 16.04)

Turn jetson into wifi access point:

  1. Enable SSID broadcast: the driver's op_mode parameter has to be set to 2; to do so, add the following line to /etc/modprobe.d/bcmdhd.conf:
options bcmdhd op_mode=2

details about that

  2. Configure the hotspot via the UI, following this guide: https://askubuntu.com/a/762885

  3. Then define the address range of the hotspot network, to be able to connect to it and know that 192.168.2.1 will be the Jetson and the node app for the client:

cd /etc/NetworkManager/system-connections
sudo vim YOUR-HOTSPOT-NAME

Add this line : address1=192.168.2.1/24,192.168.2.1

[ipv4]
dns-search=
method=shared
address1=192.168.2.1/24,192.168.2.1

Restart the network service

sudo service network-manager restart

Now when you connect to YOUR-HOTSPOT and open 192.168.2.1 in a browser, if a webserver is running there, it will be displayed 🎉

Details

Start the detections + Stream the webcam + Send the detection to the websocket server

NB: you need to run these things separately, each in its own tab, or send the process to the background

  1. Follow install steps

TIP: don't forget to download the yolo-voc.weights file

  2. Start your ffserver (config here):

ffserver -f ffserver.conf

  3. Start your websocket server that will receive the detections (example node app here)

  4. Start the darknet detection in the darknet folder:

./darknet detector demo cfg/voc.data cfg/yolo-voc.cfg yolo-voc.weights -c 1 -address "ws://192.168.1.3" -port 8080 -videoOut 1 -pipe 1

(change the ws address to the address of your websocket server; if you run it on the Jetson directly, ws://127.0.0.1 should work)

  5. Wait until the previous command outputs "Setting up video out", then start feeding the darknet output video to the ffserver with this command (it needs to run in the same folder as step 4):

ffmpeg -i "./output.avi" http://localhost:8090/feed1.ffm

  6. Open your browser at http://localhost:8090/test.webm ; you should see the video stream

(replace localhost with the address of the Jetson if you open it from another device)

Stream webcam only (without yolo)

  1. Start the ffmpeg server with roughly this config: https://www.area536.com/projects/streaming-video/

  2. ffmpeg -f video4linux2 -i /dev/video1 http://localhost:8090/feed1.ffm

  3. open vlc network stream

Miscellaneous commands

Run detections on a file

./darknet detector demo cfg/voc.data cfg/yolo-voc.cfg yolo-voc.weights -filename ../prototype_level_1_5x.mp4 -address "ws://192.168.1.2" -port 8080

./darknet detector demo cfg/voc.data cfg/yolo-voc.cfg yolo-voc.weights -filename ../prototype_level_1_5x.mp4 -address "ws://localhost" -port 8080

Mount jetson filesystem on mac os x for development

sshfs -o allow_other,defer_permissions nvidia@[JETSON_IP]:/home/nvidia/Desktop/lab-traffic-cam /Users/tdurand/Documents/ProjetFreelance/Moovel/remote-lab-traffic-cam/

Good accuracy: using coco.data (but only 2 FPS)

./darknet detector demo cfg/coco.data cfg/yolo.cfg yolo.weights -filename ../prototype1_720p_5x.mp4 -address "ws://192.168.1.2" -port 8080

sprite animation

@tdurand
Here is a very old example of mine of a sprite animation ... back in the good old days of 2011 :)

x-playground/puff-sprite-animation/drag-and-drop+puff.html
(drag a color rect onto the dropzone and see it puff away)

The entire animation is just 5 frames!!

puff-smoke

YOLO optimisation

@genekogan here is a summary of what we are doing / looking for.

Current perfs on jetson:

Yolo with VOC weights: 5 FPS
Yolo with COCO weights: 1-2 FPS

Min FPS needed:

Our tracker software that runs on the yolo output needs at least 5 FPS from yolo to be able to track things relatively well, so for now running COCO is not an option on the jetson.


Accuracy of yolo trained with VOC vs COCO dataset

We identified that the detections with the VOC weights are significantly worse than those with the COCO weights for some of our use cases:

VOC is pretty bad when the point of view is "from the top":

VOC:
screen shot 2017-11-02 at 12 16 41

COCO:
screen shot 2017-11-02 at 12 19 11

But it does a relatively good job when point of view is like this one:

VOC:
screen shot 2017-11-02 at 12 21 55

COCO:
screen shot 2017-11-02 at 12 21 19

I've also found some videos on YouTube showing that VOC is as good as COCO for the use case of being behind the cars: https://www.youtube.com/watch?v=uJy-RvxNGSA

Small perfs benchmark.

In order to compare the tracking (car counting) performance, I've run the thing on Level 1 and Level 2 and summarized the results in this table (the numbers represent cars counted):

|  | VOC (5 FPS), current best jetson realtime setup | COCO (5 FPS), not possible right now because it runs only at 2 FPS on the jetson | COCO (25 FPS) | Actual real-world cars passing |
| --- | --- | --- | --- | --- |
| Level 1 | 27 | 27 | 38 | 45 |
| Level 2 | 20 | 53 | 79 | 95 |

The interesting thing to see here is that, as predicted, the bad accuracy of the VOC detections on Level 2 leads to poor counting performance, BUT on the Level 1 footage the counting performance at 5 FPS is as good with VOC as with COCO detections ...

Goals:

In order to be more "robust", we think we should try to improve the FPS of yolo with the COCO dataset; if we can reach 5 FPS on the jetson, we would be able to work with that... But if you have better ideas we would be happy to change our goals 😉

ping @b-g

Screenshot of Frame + Counting-lines.

Hi @tdurand,

one quick question: is it possible to take a screenshot of the lines + frame when the counting starts, and display it as a background image during the counting, with opacity 0.3 or something? :)

Own specific mobility.weight file

FYI: engaged in a conversation with Gene Kogan (ML in art & design guru) ... asked him about stripping down the yolo.weights from 80 classes to the 7 mobility-related ones to gain performance.

it may be necessary to review the literature about this because such a procedure likely has pros and cons. in one sense it would be more accurate because these 7 classes don't have to compete with the 73 others. however, it is also the case that each individual class learns some generic features by analyzing all the others so getting rid of a lot of the classes could hurt accuracy a bit too. the other thing is it is true that it's more efficient, but i don't entirely know if the efficiency and speed gains would be very remarkable since most of the computation is done in all the layers before the detection layer anyway. although it might make sense to adapt the tiny-yolo architecture which is more efficient if you want it to run another device.

Darknet-net fails during 'make' on the Jetson TX2

The installation fails during the make build process of the darknet-net dependency.

Quick Fix:
In darknet-net/Makefile
Change:

ARCH= -gencode arch=compute_20,code=[sm_20,sm_21] \
      -gencode arch=compute_30,code=sm_30 \
      -gencode arch=compute_35,code=sm_35 \
      -gencode arch=compute_50,code=[sm_50,compute_50] \
      -gencode arch=compute_52,code=[sm_52,compute_52]

to:

ARCH= #-gencode arch=compute_20,code=[sm_20,sm_21] \
      -gencode arch=compute_30,code=sm_30 \
      -gencode arch=compute_35,code=sm_35 \
      -gencode arch=compute_50,code=[sm_50,compute_50] \
      -gencode arch=compute_52,code=[sm_52,compute_52]

Line placement bug

When viewing the tracker on different screen sizes, the lines appear in different places. It seems that this is purely visual and does not affect the counting itself. Just thought I'd report it in case you guys weren't aware.

From my phone (where I set it up):
opendatacammobile

From my desktop:
opendatacamdesktop

Second iteration : Game prototype + UI + Sounds

We have a new prototype 🔥: https://traffic-cam-ntnwxbalnw.now.sh . Tons of things done; I won't go into details, please try it!

screen shot 2017-11-13 at 12 46 47

Missing:

  • Popping carrots and bananas bonuses
  • Puff / Star animation
  • About and menu

Thoughts:

Great! It feels really good to me, way more crazy. I've also added a dynamic on the sizing of all the elements depending on the bbox size, and this made it feel really nice. The sound touch is super great as well; it's still very much a work in progress and we should work on transitions (we can do cross-fades and stuff...)

Also, the main win of this iteration for me is that putting a unicorn on the masking of the car legitimizes the fact that it is not perfect. If we don't succeed in getting a good WebGL implementation, this already looks pretty nice to me.

Random things to discuss:

  • The factory smoke level bar doesn't necessarily convey that it is something bad, as it fills with green at the beginning.
  • Somehow indicate that the user has 3 levels to finish the game? General progress?
  • I also thought about configuring two thresholds for the "smoke bar" depending on whether we are on a touch screen or on desktop, as it's way easier on a touch screen.

Design related feedback

  • Switch cities: for now one button, but maybe we need several (on the game-over and you-won screens)?
  • Switching languages is missing from the design?
  • The icon for sound-on is missing from the design; I've made one myself, it needs a redo
  • The loading indicator needs a design
  • Need to design the beginning / end of level screens

Jetson Standalone App Wireframe

Hi,

Today I drew the wireframes for the web interface of the jetson app (in the drive folder). The main issue that came to mind was which additional data we should collect for further uses of the tool. Maybe it would be interesting to get knowledge and opinions from people who would actually use this tool. I shared some thoughts around this and the kit in general. Feel free to add, edit and comment.

https://docs.google.com/document/d/1Du7v4TY1glQdpbcSGqtt72cPQIRRJDX8SmTMy1j80OE/edit

bildschirmfoto 2017-10-12 um 15 28 17

@tdurand How is it going with the tech roadmap?

Game Music/Sound Design

@mmmmmmmmmmmmmmmm @tdurand this is the thread to gather sound design related things.

Please check the gdrive project folder traffic-cam/09_music (shared) for the latest music/sounds. We should collect everything in the call today in the afternoon. Patric (sound designer) needs feedback for the next iteration.

Beat the traffic - Game

I built a fast prototype of how the traffic game could look.

By hovering over or clicking on the cars, their flag transforms into a balloon and the cars fly away. For each removed car you get points. If you get enough points within a specific time, you advance to the next level with more traffic intensity (rush hour, etc.).

balloons

-> x-playground/balloons-game

Automatic Installation Error

During the automatic installation an error occurs that causes parts of the installation to be skipped (cURL, git-core, nodejs, ffmpeg)... Those can be installed manually afterwards, which makes it work, but maybe there is a fix for that.

I am not sure, but I think the error occurs because selecting an answer in a dialogue (Configuring docker.io) is not possible, which might cause problems(?). The selection cannot be made; keyboard input leads to what is shown in the picture below.

screenshot from 2018-08-30 13-57-30


Tracked item canvas visualization

Hi @b-g ,

I spent half a day on a really quick and dirty POC of this. I think the most useful next step would be to discuss it on Skype and see where we go from there. The code is on the path-visualization branch (https://github.com/moovel/lab-opendatacam/tree/path-visualization), but I don't think it's really useful for you to review it.

The current POC look like this:
may-25-2018 19-53-59

You can try it here: https://open-traffic-cam-fqhztoewzf.now.sh/ , it simulates YOLO detections at 8 FPS.

Best,

Thibault

video not visible while counting/tracking

I have been experimenting with your traffic counter. Good job.

But:
Only a black background is shown when counting starts. Until then, the video is visible and "lines" can be placed. Once the counting starts, the video is not shown anymore.

Also, FPS is low (6 FPS) with a Logitech C922, and counting from above (viewpoint) is hardly precise. I know, multiple issues...

Evaluate upcoming Jetson Xavier for OpenDataCam

NVIDIA Jetson Xavier is the latest addition to the Jetson platform. It’s an AI computer for autonomous machines, delivering the performance of a GPU workstation in an embedded module under 30W. With multiple operating modes at 10W, 15W, and 30W, Jetson Xavier has greater than 10x the energy efficiency and more than 20x the performance of its predecessor, the Jetson TX2.

The platform developer kit will be available in August for $1,299

https://developer.nvidia.com/jetson-xavier-devkit
https://developer.nvidia.com/embedded/jetson-xavier-faq
