robotology / assistive-rehab
Assistive and Rehabilitative Robotics
Home Page: https://robotology.github.io/assistive-rehab/doc/mkdocs/site
License: BSD 3-Clause "New" or "Revised" License
We are required to shoot a video of the interaction shown in #6 to be presented at TechHub.
At the current stage (#3), only static joints are used to evaluate the human-likeness of the performed movement. Including dynamic joints in the evaluation can provide useful information (i.e. whether the movement is performed correctly) and improve the human-likeness evaluation.
The Dynamic Time Warping (DTW) technique is commonly used in action recognition systems for synchronizing two temporal sequences (typically a template action and the performed action).
Some references may be found here:
We want to investigate the use of DTW in our framework to provide feedback on the quality of the performed movement (#71, #72).
Open libraries:
https://github.com/lemire/lbimproved
https://github.com/nickgillian/grt/wiki
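As a first reference point before adopting one of the libraries above, the following is a minimal, illustrative sketch of the classical DTW recurrence (not code from this repository); it computes the accumulated alignment cost between a template sequence and a performed sequence of scalar samples.

```cpp
#include <vector>
#include <cmath>
#include <algorithm>
#include <limits>

// Classical O(N*M) DTW between two scalar time series.
// Returns the accumulated alignment cost between the template and the performed sequence.
double dtwDistance(const std::vector<double> &tmpl, const std::vector<double> &test)
{
    const size_t n = tmpl.size(), m = test.size();
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<std::vector<double>> D(n + 1, std::vector<double>(m + 1, inf));
    D[0][0] = 0.0;

    for (size_t i = 1; i <= n; i++) {
        for (size_t j = 1; j <= m; j++) {
            double cost = std::fabs(tmpl[i - 1] - test[j - 1]);
            // extend the cheapest of the three allowed warping steps
            D[i][j] = cost + std::min({D[i - 1][j], D[i][j - 1], D[i - 1][j - 1]});
        }
    }
    return D[n][m];
}
```

In the real pipeline each sample would be a vector of joint values rather than a scalar, and a windowing constraint (e.g. a Sakoe-Chiba band) would keep the cost affordable for real-time use.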
The interactionManager has to be updated according to the new feedback system.
Once we have a better framework capable of specifying the properties of an action (timing, relations between movements, ...), it would be worth using it to improve the linguistic description of the action itself as well as the verbal feedback.
Carry out extensive tests of the new device driver for grabbing info from the RealSense.
This is a somewhat complex task dealing with the possibility of having a finer granularity in the description of an action, which in turn could be used to improve the verbal feedback.
I can see a few alternatives:
mk1 gives a cranky sensation while moving the arm due to the shafts in the torso. We might want to exploit a combination of both.
Investigate the possibility to give robotology members free access to ZenHub tools.
We might want to export:
We want to implement the library for Dynamic Time Warping for dealing with time signals at different speeds.
Self-explanatory as well as a very time-consuming task.
High-level container for keeping track of this macro task.
There are situations where 2D skeleton points are not projected in the correct 3D position.
We want to evaluate why the problem occurs and whether the point clouds provided by the RealSense camera are consistent, for example when performing a reaching movement.
skeletonScaler to handle the specified tag.
motionAnalyzer to talk to skeletonScaler upon reception of load/start/stop RPC commands.
motionAnalyzer to output a feedback signal through a dedicated port to convey useful information about how the experiment is being carried out.
getQuality to account for the movement itself, not only for stationary links.
I've seen that namespaces are often used in the using directive within header files. This is an anti-pattern.
Please, clean up the code.
https://stackoverflow.com/questions/5849457/using-namespace-in-c-headers
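For reference, a minimal illustrative header (names like sendFeedback are made up for the example) showing the anti-pattern and the preferred alternative:

```cpp
// feedback.h  (illustrative header, not from the repository)
#ifndef FEEDBACK_H
#define FEEDBACK_H

#include <yarp/os/Bottle.h>

// Anti-pattern (do NOT do this in a header): it injects the whole
// namespace into every translation unit that includes this header.
//   using namespace yarp::os;

// Preferred: fully qualify names in headers...
void sendFeedback(const yarp::os::Bottle &feedback);

#endif

// ...and confine any using-directive or using-declaration to the .cpp file:
//   using yarp::os::Bottle;
```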
The 3D skeleton was observed to exhibit the following critical aspects:
We want to implement a filtering strategy to enforce an underlying structure of the skeletons and get rid of these effects.
A GUI would be required to allow (naive) users to explore offline reports.
The sequencer establishes the pipeline of the current experiment in terms of human movements to be analyzed.
For the time being, we will consider only serial sequencing.
The module himrepClassifier is not necessary for the recognition pipeline, hence it can be conveniently removed.
cc @vtikha
The visual pipeline can also be run without a physical robot (disembodiment), using only one camera or a set of cameras.
This is particularly useful for @dotslinker.
High-level container to keep track of this macro task.
The code infrastructure comprises:
The following outstanding points can be addressed without urgency:
Change the username on the r1-cuda-linux machine from icub to r1-user. Importantly, we need to make sure that the home path gets changed as well, since it contains both the sources and the installed binaries.
Rebuild caffe, openpose and human-sensing due to the changed username.
057 shows problems with its display. Under repair.
MATIO: see #66 (comment).
For enabling offline reporting of the experiments, it is required to export the data acquired in real time into formats that are general enough to ensure an appropriate level of post-processing.
We would like to go with https://github.com/tbeu/matio, which guarantees that the produced files are MATLAB/Octave compatible.
This is a record of the work carried out to design and implement the library needed to deal with skeletons within our framework.
r1-console-linux.
We agreed to use yarpscope to plot sensitive data for the online report.
Thus, we would need to explore the possibility of extending the GUI so that its appearance can be modified on the fly, allowing us to vary the axes' properties according to the ongoing experiment.
We observed two phenomena:
A possible solution to both problems is to find the human contour and dilate the disparity map around it. This operation should fill holes and expand the disparity map and, thus, the keypoints would be projected correctly.
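A possible sketch of this idea with OpenCV (hypothetical helper; it assumes the disparity map and a binary human mask are already available as cv::Mat):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Fill holes and expand the disparity map around the person,
// so that skeleton keypoints fall on valid disparity values.
cv::Mat dilateDisparityAroundHuman(const cv::Mat &disparity, const cv::Mat &humanMask)
{
    // enlarge the human region slightly to cover keypoints lying on the silhouette border
    cv::Mat mask;
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15));
    cv::dilate(humanMask, mask, kernel);

    // morphological closing fills small holes in the disparity
    cv::Mat filled;
    cv::morphologyEx(disparity, filled, cv::MORPH_CLOSE, kernel);

    // only touch pixels around the human contour, leave the rest untouched
    cv::Mat out = disparity.clone();
    filled.copyTo(out, mask);
    return out;
}
```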
Implement a module that is responsible for playing back the trajectory of a skeleton as recorded by means of yarpdatadumper.
The services exposed by RPC:
load <file>: to load a session containing the skeleton data.
start [p1] [p2] [p3] [p4]: to start the session.
p1: optional integer defining the number of times the trajectory needs to be played back.
p2: optional double specifying the time warp.
p3: optional double specifying the starting time within the recording.
p4: optional double specifying the ending time within the recording.
stop: to end the session.
set_tag name: to give the skeleton a name.
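As a usage sketch, the commands above could be issued from a small YARP RPC client like the one below; the port names and the file name are assumptions for illustration.

```cpp
#include <yarp/os/Network.h>
#include <yarp/os/RpcClient.h>
#include <yarp/os/Bottle.h>

int main()
{
    yarp::os::Network yarp;

    yarp::os::RpcClient rpc;
    rpc.open("/skeletonPlayer-client/rpc");
    // the player's RPC port name is an assumption
    yarp.connect("/skeletonPlayer-client/rpc", "/skeletonPlayer/rpc");

    yarp::os::Bottle cmd, reply;
    cmd.addString("load");
    cmd.addString("skeleton-session.log");   // hypothetical recording
    rpc.write(cmd, reply);

    cmd.clear();
    cmd.addString("start");
    cmd.addInt32(1);      // p1: play back once
    cmd.addFloat64(1.0);  // p2: no time warp
    rpc.write(cmd, reply);

    cmd.clear();
    cmd.addString("stop");
    rpc.write(cmd, reply);
    return 0;
}
```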
This component is in charge of analyzing in real-time the Range-Of-Motion (ROM) of single joints.
Display a talking mouth when the robot speaks.
Implemented via robotology/cer#89.
For certain simple exercises, it would be advantageous to let R1 perform the physical movements with its arms.
Make sure that the new, more powerful depth-sensing device is mounted on R1, and that the NVIDIA Jetson allows for real-time acquisitions that are satisfactory to us.
Implement the module responsible for the acquisition of the 3D occupancy of the skeleton, which will be stored in OPC.
We want to test the improved demo on mk-2.
The new demo improves the previous one (#6), as it includes:
Design and implement a VTK-based viewer capable of displaying multiple skeletons at the same time.
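A minimal VTK sketch of the rendering setup (illustrative only; a real viewer would add one set of keypoint/bone actors per skeleton and refresh them from the data arriving through YARP):

```cpp
#include <vtkSmartPointer.h>
#include <vtkSphereSource.h>
#include <vtkPolyDataMapper.h>
#include <vtkActor.h>
#include <vtkRenderer.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>

int main()
{
    auto renderer = vtkSmartPointer<vtkRenderer>::New();

    // one sphere per keypoint; a multi-skeleton viewer would create one set of actors per skeleton
    auto sphere = vtkSmartPointer<vtkSphereSource>::New();
    sphere->SetRadius(0.02);
    sphere->SetCenter(0.0, 0.0, 1.0);

    auto mapper = vtkSmartPointer<vtkPolyDataMapper>::New();
    mapper->SetInputConnection(sphere->GetOutputPort());

    auto actor = vtkSmartPointer<vtkActor>::New();
    actor->SetMapper(mapper);
    renderer->AddActor(actor);

    auto window = vtkSmartPointer<vtkRenderWindow>::New();
    window->AddRenderer(renderer);

    auto interactor = vtkSmartPointer<vtkRenderWindowInteractor>::New();
    interactor->SetRenderWindow(window);

    window->Render();
    interactor->Start();
    return 0;
}
```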
One possibility would be to programmatically create an IPython Notebook and populate it with the elaboration of the data stored via #14.
Relevant resources:
Using the skeleton information, implement a simple attention system capable of redirecting the gaze of the robot toward the salient parts of the scene.
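A hedged sketch of how the gaze could be commanded through the YARP gaze controller client (it assumes iKinGazeCtrl or an equivalent gaze server is running; port names and the fixation point are placeholders):

```cpp
#include <yarp/os/Network.h>
#include <yarp/os/Property.h>
#include <yarp/dev/PolyDriver.h>
#include <yarp/dev/GazeControl.h>
#include <yarp/sig/Vector.h>

int main()
{
    yarp::os::Network yarp;

    yarp::os::Property opt;
    opt.put("device", "gazecontrollerclient");
    opt.put("remote", "/iKinGazeCtrl");   // assumption: name of the running gaze server
    opt.put("local", "/attention/gaze");

    yarp::dev::PolyDriver driver(opt);
    yarp::dev::IGazeControl *igaze = nullptr;
    if (!driver.isValid() || !driver.view(igaze))
        return 1;

    // look at the most salient skeleton keypoint (e.g. the head), expressed in meters
    // in the robot root frame; the value below is just an example
    yarp::sig::Vector fixation(3);
    fixation[0] = -1.0; fixation[1] = 0.0; fixation[2] = 0.5;
    igaze->lookAtFixationPoint(fixation);
    igaze->waitMotionDone();
    return 0;
}
```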
DensePose maps human pixels of 2D RGB images to a 3D surface-based model of the body.
Resource: https://research.fb.com/facebook-open-sources-densepose/
It would be great to use an Avatar to display the movements required for the exercise.
Plus, the background could also be synthesized as a "living room".
It is still an open point whether the Avatar should be displayed in a dedicated viewer or overlapped with the patient's skeleton.
We have already demonstrated that it is fairly easy to replay experiments using the combination of yarpdatadumper (ref. application) and yarpdataplayer (ref. application).
We may want to discuss how this can become useful:
We need to properly attach a face to each skeleton. To this end, we will recognize faces resorting to our well-established pipeline, identifying corresponding bounding boxes using the skeleton information.
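A possible association rule, sketched as a hypothetical helper: assign to each skeleton the face bounding box that contains its head keypoint in the image plane.

```cpp
#include <opencv2/core.hpp>
#include <vector>

// Return the index of the face bounding box containing the skeleton's
// head keypoint (2D, image coordinates), or -1 if no box matches.
int associateFaceToSkeleton(const cv::Point2f &headKeypoint,
                            const std::vector<cv::Rect> &faceBoxes)
{
    const cv::Point head(cvRound(headKeypoint.x), cvRound(headKeypoint.y));
    for (size_t i = 0; i < faceBoxes.size(); i++) {
        if (faceBoxes[i].contains(head))
            return static_cast<int>(i);
    }
    return -1;
}
```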
This component is in charge of analyzing in real-time the End-Point-Kinematics (EPK).
By resorting to optimization, we might enforce the following structure on the skeleton:
motionAnalyzer is required to pass on to skeletonScaler the info needed to rotate the camera within the skeletonViewer according to the needs of the ongoing experiment.
Currently, the module motionAnalyzer uses the matio library to write a MAT 7.3 output file.
However, on several systems (the display laptop of mk-1 and the console of mk-2), matio could not be successfully compiled with support for MAT version 7.3.
The reason seems to be related to hdf5, but the same version is used on other systems where the pipeline works.
To be more robust across different systems, we want to switch to MAT 5 files. To do so, we need to change the Python report, which currently uses the h5py package to read the input file; h5py can only read MAT 7.3 files.
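A minimal sketch of writing a MAT 5 file with matio (variable name, file name and data are placeholders); on the reading side, the Python report could then load such files with scipy.io.loadmat instead of h5py.

```cpp
#include <matio.h>
#include <string>
#include <vector>

// Write a vector of joint angles to a MAT 5 (MATLAB/Octave compatible) file.
bool writeMat5(const std::string &filename, const std::vector<double> &rom)
{
    mat_t *mat = Mat_CreateVer(filename.c_str(), nullptr, MAT_FT_MAT5);
    if (!mat)
        return false;

    // store the data as a column vector named "rom" (placeholder name)
    size_t dims[2] = {rom.size(), 1};
    matvar_t *var = Mat_VarCreate("rom", MAT_C_DOUBLE, MAT_T_DOUBLE, 2, dims,
                                  const_cast<double*>(rom.data()), 0);
    if (!var) {
        Mat_Close(mat);
        return false;
    }

    int rc = Mat_VarWrite(mat, var, MAT_COMPRESSION_NONE);
    Mat_VarFree(var);
    Mat_Close(mat);
    return rc == 0;
}
```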