Gesture Recognition using Machine Learning. Documentation: https://relientm96.github.io/capstone2020/
Python 65.32%
HTML 2.40%
JavaScript 8.03%
Jupyter Notebook 24.25%
capstone2020's Introduction
Software Developer at SEEK.
Interested in Backend Systems, Internet of Things and Embedded Systems.
Master of Engineering - Electrical Graduate.
capstone2020's People
capstone2020's Issues
Creating a Gantt chart from the GitHub API
A simple block diagram with description to help us and Jonathan see the overview of our system.
Use this thread to put in what you think we should discuss on Friday's Meeting.
Creating a document listing the traits we need to measure to choose the right algorithm
Issues that Lucas brought up:
We may have upgraded our Nvidia drivers to a version that is incompatible with VMs
CUDA version we need to use is 10.0
Waiting On
Currently liaising with Lucas to re-instantiate the virtual machine.
Waiting for Lucas to recreate the VM
Research into what constitutes a meaningful gesture mapping. This issue should focus on Human-Computer Interaction (HCI).
We will go through our proposal again to re-formalize our project
Go through timeline again, given situation
Trying to capture input from webcam into google colab
Using Google Colab to run OpenPose on captured input
Currently trying with images, then will move on to video, and ultimately live streaming video.
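Since we plan to move from images to video to live streaming, a small dispatch helper can keep the rest of the pipeline source-agnostic. This is only a sketch: `classify_source` and `read_frames` are hypothetical names, and note that inside Colab `cv2.VideoCapture(0)` cannot see the local webcam (Colab reaches the browser's camera via a JavaScript snippet), so the live stage would need a stream URL or a different capture path.

```python
def classify_source(source):
    """Classify a capture source for our images -> video -> live progression.

    'images' : a list/tuple of image paths (current stage)
    'stream' : a camera index or an http/rtsp URL (final stage)
    'video'  : anything else, treated as a video file path
    """
    if isinstance(source, (list, tuple)):
        return "images"
    if isinstance(source, int):
        return "stream"
    s = str(source)
    if s.startswith(("http://", "https://", "rtsp://")):
        return "stream"
    return "video"

def read_frames(source):
    """Yield frames from any source kind; OpenCV is imported lazily so the
    staging logic itself has no cv2 dependency."""
    import cv2  # preinstalled in Google Colab
    kind = classify_source(source)
    if kind == "images":
        for path in source:
            frame = cv2.imread(path)
            if frame is not None:
                yield frame
    else:
        cap = cv2.VideoCapture(source)
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                yield frame
        finally:
            cap.release()
```

Swapping the source then becomes a one-line change when we progress from still images to a live stream.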
As discussed with Jonathan, we need to finalize the details of our project scope and goals before our next meeting with him.
Use this thread to discuss your points and thoughts or ask questions regarding this issue.
Setting up a LaTeX document on Overleaf
Translating the Assignment 01 structure from docx to LaTeX
Trying to reinstall OpenPose on my laptop, which has an Nvidia GeForce graphics card (2 GB memory)
Research into different strategies of mapping gesture to music. This issue should focus on papers published.
Use this thread to put in what we want to discuss for our meeting on:
20 March 2020
11am to 12 pm
Note that this should cover the next project phase, the Project Discovery phase
Research into how other people have mapped the gestures to musical parameters. This issue should focus on example applications.
I am currently trying to see how I can extract outputs from OpenPose into a program that can stream JSON data points to the gesture program
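When OpenPose is run with its `--write_json` flag, it writes one JSON file per frame containing a `people` array whose `pose_keypoints_2d` field is a flat `[x0, y0, c0, x1, y1, c1, ...]` list. A minimal parser sketch (the function name is ours, not OpenPose's) to reshape that into per-person `(x, y, confidence)` triples for the gesture program could look like:

```python
import json

def parse_openpose_frame(text):
    """Parse one OpenPose --write_json frame into per-person keypoints.

    OpenPose emits {"people": [{"pose_keypoints_2d": [x0, y0, c0, ...]}]};
    we reshape each flat list into (x, y, confidence) triples.
    """
    frame = json.loads(text)
    people = []
    for person in frame.get("people", []):
        flat = person.get("pose_keypoints_2d", [])
        people.append([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    return people
```

For live streaming we would call this on each new file (or on JSON received over a socket) rather than on a saved batch.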
Although we can use Google Docs to write our proposal, I am creating this issue and linking it to pull request #13, for either translating to md after writing the proposal, or working straight in md if you prefer.
Try using the GitHub API to create a Gantt chart, or another chart that we could include in our report.
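One way to sketch this: pull the issues from GitHub's public REST endpoint (`GET /repos/{owner}/{repo}/issues`, which also returns pull requests and is rate-limited without a token), then turn each issue's `created_at`/`closed_at` dates into Gantt rows. `fetch_issues` and `gantt_rows` are our own illustrative names:

```python
import datetime as dt
import json
import urllib.request

REPO = "relientm96/capstone2020"

def fetch_issues(repo=REPO):
    """Fetch up to 100 issues (open and closed) from the public GitHub API."""
    url = f"https://api.github.com/repos/{repo}/issues?state=all&per_page=100"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def gantt_rows(issues, today=None):
    """Turn issue dicts into (title, start, end) rows, sorted by start date.

    Open issues (closed_at is null) are drawn as running until `today`.
    """
    today = today or dt.date.today()
    rows = []
    for issue in issues:
        start = dt.date.fromisoformat(issue["created_at"][:10])
        end = (dt.date.fromisoformat(issue["closed_at"][:10])
               if issue.get("closed_at") else today)
        rows.append((issue["title"], start, end))
    return sorted(rows, key=lambda r: r[1])
```

The resulting rows could then be fed to any plotting library (e.g. matplotlib's `broken_barh`) for the actual chart in the report.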
We have to document our project scope/framework (hopefully drafted during the meeting on Friday). Use this thread to give feedback or input on how we want to structure our project proposal.
I was hoping to use the framework given by CIE to create a project brief, consisting of:
Problem statement: What we are trying to solve
Project Objectives/Summary: How can our project solve the stated problem
Deliverables: Things that we aim to come up with that can be shown to the public and supervisors
Timeline: When can we deliver the deliverables
Limit/scope of our project: What we can and cannot achieve in this project
Please note any other missing points we should add here, or anything that shouldn't be included.
For now, I see that we need to use an IP camera (possibly converting our webcams to IP cameras) and stream that to the VM server using the corresponding flag in OpenPose.
I am still unable to find a public IP camera to test with.
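Until a public IP camera turns up, converting one of our own webcams is probably easier. A small sketch, assuming a phone "IP webcam"-style app: many of these serve MJPEG at `http://<host>:8080/video`, but that port and path are assumptions to check against the actual app. Both helper names are ours:

```python
import urllib.request

def mjpeg_url(host, port=8080, path="video"):
    """Build a candidate MJPEG stream URL for a webcam-as-IP-camera app.

    The 8080/"video" defaults are an assumption based on common phone
    IP-webcam apps; adjust them to match the app actually used.
    """
    return f"http://{host}:{port}/{path.lstrip('/')}"

def reachable(url, timeout=3):
    """Cheap reachability probe before pointing OpenPose at the stream."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except OSError:
        return False
```

Probing the URL first gives a quicker failure than waiting for OpenPose on the VM to time out.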
Putting this as our main task for now (up to the 3rd March meeting): test running OpenPose on the Unimelb Research Cloud.
Options are:
GPUs
FPGAs
Cloud/Online Computing
@nivlekp will write the email to Jonathan
Need to finalize a proper method/system for getting work done in our group.
So far, we have the idea of using branches to apply your changes and creating pull requests so that we can all review before merging to master.
Still need to figure out a way of assigning tasks, etc.
Although we can use Google Docs to write our proposal, I am creating this issue and linking it to pull request #15, for either translating to md after writing the proposal, or working straight in md if you prefer.
Objectives for this issue:
We need to further look into algorithms that are suitable to work with in our music gesture recognition system.
Try to find a good one that can detect upper body parts in real time
Using a rubric to compare them
Try to put in your input through this thread
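Once we agree on the traits to measure (real-time performance, upper-body detection quality, etc.), the rubric comparison can be mechanical. A minimal sketch, with hypothetical trait names and weights purely for illustration:

```python
def rank_by_rubric(rubric, ratings):
    """Rank candidate algorithms by weighted rubric score, best first.

    rubric  : {trait: weight}, e.g. {"realtime": 3, "accuracy": 2}
    ratings : {algorithm: {trait: score}}; missing traits score 0.
    """
    totals = {
        algo: sum(weight * scores.get(trait, 0)
                  for trait, weight in rubric.items())
        for algo, scores in ratings.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

Keeping the weights in one dict makes it easy to re-rank when we decide, say, that real-time performance matters more than raw accuracy.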
A thread specifically for the task of creating our project timeline.
We can discuss how we want to go about it, e.g. using Gantt charts or other tools.