robotology / assistive-rehab

Assistive and Rehabilitative Robotics

Home Page: https://robotology.github.io/assistive-rehab/doc/mkdocs/site

License: BSD 3-Clause "New" or "Revised" License

CMake 4.16% C++ 32.83% Thrift 1.63% Jupyter Notebook 57.45% Shell 0.54% JavaScript 0.09% MATLAB 0.20% Python 1.76% Dockerfile 0.87% HTML 0.47% CSS 0.01%
assistive-robotics skeleton-tracking healthcare-application rehabilitative-robotics human-robot-interaction

assistive-rehab's People

Contributors: fbrand-new, mfussi66, pattacini, randaz81, ste93, vtikha

assistive-rehab's Issues

Investigate the use of Dynamic Time Warping

The Dynamic Time Warping (DTW) technique is commonly used in action recognition systems for synchronizing two temporal sequences (typically a template action and the performed action).
Some references may be found here:

  • [Reyes et al., 2011], Feature Weighting in Dynamic Time Warping for Gesture Recognition in Depth Data, International Conference on Computer Vision Workshops
  • [Sempena et al. 2011], Human Action Recognition Using Dynamic Time Warping, International Conference on Electrical Engineering and Informatics

We want to investigate the use of DTW in our framework to provide feedback on the quality of the performed movement (#71, #72).
Open libraries:

  • https://github.com/lemire/lbimproved
  • https://github.com/nickgillian/grt/wiki
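To make the technique concrete, here is a minimal pure-Python sketch of DTW over 1-D sequences. This is illustrative only, not the implementation from the linked libraries or from this repository; a real system would align multi-dimensional joint trajectories and likely use an optimized library.

```python
def dtw_distance(template, performed):
    """Accumulated DTW cost of aligning two 1-D sequences (toy sketch)."""
    n, m = len(template), len(performed)
    INF = float("inf")
    # cost[i][j] = best cost of aligning template[:i] with performed[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(template[i - 1] - performed[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# A time-warped copy of the template aligns with zero cost:
print(dtw_distance([1, 2, 3], [1, 1, 2, 2, 3, 3]))  # → 0.0
```

A low accumulated cost would indicate that the performed movement closely follows the template, which is the basis for the quality feedback discussed in #71 and #72.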

Improve verbal feedback while explaining the exercise

Once we have a better framework capable of specifying the properties of an action (timing, relations between movements, ...), it would be worth using it to improve both the linguistic description of the action itself and the verbal feedback.

Demo Y1M5

This is the high-level collector for the demo scheduled on Y1M5.

The demo will implement the following diagram of modules:

diagram

Find out a way to show the movement the human is required to replicate

I can see a few alternatives:

  • R1 moves the arm:
    • mk1 gives a jerky sensation while moving the arm due to the shafts in the torso.
    • the arm movements are quite limited.
  • We use the skeleton viewer to show the movement on the screen:
    • the physical interaction gets lost.

We might want to exploit a combination of both.

Define data to export for the offline report

We might want to export:

  • computed metric
  • thresholds for the metric (minimum and maximum value)
  • average and instantaneous speed at which the task is executed
  • type of motion and number of repetitions
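The fields above could be collected into a single export record per session; a hypothetical sketch follows (the field names and JSON layout are illustrative, not the schema actually produced by motionAnalyzer):

```python
import json

def make_report_record(metric, thr_min, thr_max, speeds, motion_type, reps):
    """Bundle the per-session data listed above into one exportable record.

    All field names are hypothetical placeholders for illustration.
    """
    return {
        "metric": metric,                               # computed metric values
        "thresholds": {"min": thr_min, "max": thr_max}, # metric bounds
        "speed": {
            "instantaneous": speeds,
            "average": sum(speeds) / len(speeds) if speeds else 0.0,
        },
        "motion_type": motion_type,                     # e.g. "abduction"
        "repetitions": reps,
    }

record = make_report_record([0.1, 0.4, 0.7], 0.0, 1.0,
                            [0.2, 0.3, 0.25], "abduction", 3)
print(json.dumps(record, indent=2))
```

Keeping the record self-describing (thresholds stored next to the metric) means the offline report can be regenerated without consulting the original experiment configuration.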

Test Demo Y1M5

A self-explanatory as well as very time-consuming task 😉

Investigate issues with the 3D skeleton

There are situations where 2D skeleton points are not projected to the correct 3D position.
We want to evaluate why the problem occurs and whether the point clouds provided by the RealSense camera are consistent, for example when performing a reaching movement.

Improvements to motion analysis

  • Update skeletonScaler to handle specified tag.
  • Add time duration of the exercise within the repertoire.
  • Make motionAnalyzer talk to skeletonScaler upon reception of load/start/stop RPC commands.
    • insert a tag within the repertoire linking to the name of the scaler's file.
  • Let motionAnalyzer output a feedback signal through a dedicated port to convey useful information about how the experiment is being carried out.
  • Modify offline report in order to deal with multiple files.
  • Fix getQuality to account for the movement itself, not only for stationary links.
  • Better select the camera view according to the experiment.
  • Discriminate abduction/flexion movements.
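On the last point, one simple way to discriminate abduction from flexion is to compare the lateral versus forward displacement of the elbow relative to the shoulder. The sketch below assumes a shoulder-centred frame with x pointing laterally and z pointing forward; both the coordinate convention and the function are illustrative assumptions, not the repository's method:

```python
def classify_arm_movement(elbow_traj):
    """Classify an arm movement from elbow positions relative to the shoulder.

    elbow_traj: list of (x, y, z) tuples; x = lateral axis, z = forward axis
    (assumed convention). Abduction moves the arm sideways (frontal plane),
    flexion moves it forward (sagittal plane).
    """
    lateral = max(p[0] for p in elbow_traj) - min(p[0] for p in elbow_traj)
    forward = max(p[2] for p in elbow_traj) - min(p[2] for p in elbow_traj)
    return "abduction" if lateral > forward else "flexion"

# Arm raised sideways: displacement is mostly lateral → abduction
print(classify_arm_movement([(0.0, -0.3, 0.0),
                             (0.15, -0.26, 0.01),
                             (0.3, 0.0, 0.02)]))  # → abduction
```

A production version would work on joint angles rather than raw displacements, but the dominant-plane idea is the same.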

Implement filtering strategy for making 3D skeleton robust

The 3D skeleton was observed to have the following issues:

  • a 3D point may get lost when the movement is parallel to the optical axis (for example when performing a reaching movement)
  • 2D skeleton points are not projected in the correct 3D position (#76)

We want to implement a filtering strategy to enforce an underlying structure of the skeletons and get rid of these effects.
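As one minimal ingredient of such a strategy, a temporal median filter on each keypoint coordinate suppresses isolated mis-projections like those in #76. This is a toy sketch, not the filtering actually adopted by the project, which would also need to handle lost points and enforce the skeleton structure:

```python
def median_filter(samples, window=3):
    """Sliding-window median over a 1-D coordinate trajectory (toy sketch)."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        w = sorted(samples[lo:hi])
        out.append(w[len(w) // 2])  # median of the (possibly truncated) window
    return out

# A single mis-projected depth value (5.0) is suppressed:
print(median_filter([1.0, 1.1, 5.0, 1.2, 1.3]))  # → [1.1, 1.1, 1.2, 1.3, 1.3]
```

Median filtering rejects single-sample outliers without the smoothing lag that a moving average would introduce, which matters when the feedback must track fast movements.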

Implement experiments sequencer

The sequencer establishes the pipeline of the current experiment in terms of human movements to be analyzed.

For the time being, we will consider only serial sequencing.

System maintenance

The following outstanding points can be addressed without urgency:

  • Change the username of the r1-cuda-linux machine from icub to r1-user. Importantly, we need to make sure that the home path gets changed as well, which contains both the sources and the installed binaries.
  • Recompile caffe, openpose and human-sensing due to changed username.
  • The laptop 057 shows problems with its display. Currently under repair.
  • Update the system to the use of YARP >= 3.0.0.
  • Update R1-mk1 configuration files.
  • Update the system to Ubuntu 18.04.1, hoping that this will sort out the problem with MATIO: see #66 (comment).

Export real-time data into convenient format

To enable offline reporting of the experiments, data acquired in real time must be exported into formats general enough to allow appropriate post-processing.

We would like to go with https://github.com/tbeu/matio, which produces files that are MATLAB/Octave compatible.

Final refinements for Y1M5 demo

  • Fix the offline report generation on r1-console-linux.
  • Investigate and fix the handling of two faces present at the same time in the scene.

Extend yarpscope to modify appearance on the fly

We agreed to use yarpscope to plot sensitive data for the online report.

Thus, we need to explore extending the GUI so that its appearance can be modified on the fly, varying the axes' properties according to the ongoing experiment.

Dilate disparity map around the human contour

We observed two phenomena:

  • a keypoint might fall outside the disparity map of the human contour and is therefore projected much further away than it should be;
  • the disparity map might have holes; again, if a keypoint falls on a hole, it will be associated with a wrong depth.

A possible solution to both problems is to find the human contour and dilate the disparity map around it. This operation should fill holes and expand the disparity map and, thus, the keypoints would be projected correctly.
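The real pipeline would likely apply a morphological dilation (e.g. OpenCV's cv2.dilate) to the disparity map; the pure-Python toy below just shows why dilation fills holes and expands the valid region (the grid and function are illustrative, not the repository's code):

```python
def dilate(mask):
    """One pass of binary 3x3 dilation over a 2-D grid (toy sketch)."""
    rows, cols = len(mask), len(mask[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # a pixel becomes 1 if itself or any 8-neighbour is 1
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and mask[rr][cc]:
                        out[r][c] = 1
    return out

# A one-pixel hole inside the "silhouette" gets filled:
mask = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
print(dilate(mask))  # → [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```

After dilation, a keypoint that previously landed on a hole or just outside the contour falls on valid disparity values, so its 3D projection uses a plausible depth.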

Implement skeletonPlayer

Implement a module that is responsible for playing back the trajectory of a skeleton as recorded by means of yarpdatadumper.

The services exposed by RPC:

  1. load <file>: to load a session containing the skeleton data.
  2. start [p1] [p2] [p3] [p4]: to start the session.
    • p1: optional integer defining the number of times the trajectory needs to be played back.
    • p2: optional double specifying the time warp.
    • p3: optional double specifying the starting time within the recording.
    • p4: optional double specifying the ending time within the recording.
  3. stop: to end the session.
  4. set_tag name: to give the skeleton a name.
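The playback timing implied by the start parameters can be sketched as follows. Everything here is a hypothetical illustration of the semantics (in particular, the convention that a warp factor > 1 slows playback down is an assumption, not a documented behaviour of skeletonPlayer):

```python
def playback_schedule(timestamps, reps=1, warp=1.0, t_start=None, t_end=None):
    """Emit the playback times for a recorded trajectory (illustrative sketch).

    reps    ~ p1: number of times the trajectory is played back
    warp    ~ p2: time-warp factor (assumed: >1 slows down, <1 speeds up)
    t_start ~ p3, t_end ~ p4: clip of the recording to replay
    """
    t0 = timestamps[0] if t_start is None else t_start
    t1 = timestamps[-1] if t_end is None else t_end
    segment = [t for t in timestamps if t0 <= t <= t1]
    out = []
    for r in range(reps):
        # each repetition starts where the previous (warped) one ended
        offset = r * (segment[-1] - segment[0]) * warp
        out.extend(offset + (t - segment[0]) * warp for t in segment)
    return out

# Two repetitions at half speed of a 1 s recording sampled at 0.5 s:
print(playback_schedule([0.0, 0.5, 1.0], reps=2, warp=2.0))
# → [0.0, 1.0, 2.0, 2.0, 3.0, 4.0]
```

In the actual module, each emitted time would trigger publishing the corresponding skeleton frame on a YARP port.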

Implement the ACQ module

Implement the module responsible for the acquisition of the 3D occupancy of the skeleton, which will be stored in OPC.

Test demo Y1Q3

We want to test the improved demo on mk-2.
The new demo improves on the previous one (#6), as it includes:

  • more detailed feedback
  • physical demonstration of the exercise
  • analysis of the end-point
  • voice recognition (?)

Implement skeleton viewer

Design and implement a VTK-based viewer capable of displaying multiple skeletons at the same time.

Implement simple attention system

Using the skeleton information, implement a simple attention system capable of redirecting the gaze of the robot toward the salient parts of the scene.

Implement Avatar

It would be great to use an Avatar to display the movements required for the exercise.
Plus, the background could be also synthesized as a "living room".

It is still an open point whether the Avatar should be displayed in a dedicated viewer or overlapped with the patient's skeleton.

Replay experiments

We have already demonstrated that it is fairly easy to replay experiments using the combination of yarpdatadumper (ref. application) and yarpdataplayer (ref. application).

We may want to discuss how this can become useful:

  • The physiotherapist may compute new metrics on the replayed experiments that did not run online.
  • We may be able to collect and maintain a logbook of all the patients' exercises.
  • It is fundamental to perform debugging ✨

Implement face recognition

We need to properly attach a face to each skeleton. To this end, we will recognize faces by resorting to our well-established pipeline, identifying the corresponding bounding boxes using the skeleton information.

Enforce an underlying structure of the skeletons

By resorting to optimization, we might enforce the following structure on the skeleton:

  • left and right upper/lower arms and upper/lower legs need to have the same length
  • the angles between some limbs have bounded ranges
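As a toy stand-in for the full optimization, the two constraints can be approximated by projecting each left/right length pair onto its mean and clamping angles to their bounds. Both functions are illustrative assumptions, not the repository's solver:

```python
def symmetrize_lengths(left_lengths, right_lengths):
    """Force left/right limb-length symmetry by projecting each pair onto its mean."""
    means = [(l + r) / 2.0 for l, r in zip(left_lengths, right_lengths)]
    return means, means

def clamp_angle(theta, lo, hi):
    """Clamp a limb angle (degrees) into its anatomically plausible range."""
    return max(lo, min(hi, theta))

# Upper-arm and lower-arm lengths (m) measured from a noisy 3D skeleton:
left, right = symmetrize_lengths([0.30, 0.26], [0.32, 0.24])
print(left, right)

# An elbow angle beyond full extension gets clamped:
print(clamp_angle(200.0, 0.0, 180.0))  # → 180.0
```

A proper formulation would instead solve a least-squares problem over all keypoints with these equalities and inequalities as constraints, so that the corrected skeleton stays as close as possible to the measurements.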

Update camera's view on the fly

motionAnalyzer is required to pass skeletonScaler the information needed to rotate the camera within skeletonViewer according to the needs of the ongoing experiment.

Change offline report

Currently, the motionAnalyzer module uses the matio library to write a MAT 7.3 output file.
However, on several systems (the display laptop of mk-1 and the console of mk-2), matio could not be compiled successfully with MAT 7.3 support.
The reason seems related to hdf5, but the same hdf5 version is used on other systems where the pipeline works.

To be more stable across systems, we want to switch to MAT 5 files. To do so, we need to change the Python report, which currently reads the input file with the h5py package, which is only compatible with MAT 7.3 files.
