
pr2_pbd's Introduction

PR2 Programming by Demonstration


This repository contains the work of Maya Cakmak and the Human-Centered Robotics Lab at the University of Washington. Please see those sites for publications to cite. We abbreviate Programming by Demonstration as PbD.

System Requirements

Currently the PbD system targets ROS Indigo (note the --rosdistro=indigo flag in the install instructions below).

Installing

Clone this repository and its dependencies, then build the workspace on both your desktop machine and on the robot:

cd ~/catkin_ws/src
git clone https://github.com/hcrlab/blinky.git
git clone https://github.com/jstnhuang/mongo_msg_db_msgs.git
git clone https://github.com/jstnhuang/mongo_msg_db.git
git clone https://github.com/jstnhuang/rapid.git
git clone https://github.com/PR2/pr2_pbd.git
cd ~/catkin_ws
rosdep install --from-paths src --ignore-src --rosdistro=indigo -y
catkin_make

Running

PR2

Commands on PR2 (c1)

robot claim
robot start
source ~/catkin_ws/devel/setup.bash
roslaunch pr2_pbd_interaction pbd_backend.launch

Desktop

setrobot <ROBOT_NAME>
roslaunch pr2_pbd_interaction pbd_frontend.launch  # rviz, rqt, speech

# Optionally open PR2 dashboard in another terminal window
setrobot <ROBOT_NAME>
rosrun rqt_pr2_dashboard rqt_pr2_dashboard # Optional

Plug in a microphone to your computer. Speak into the microphone to issue speech commands to the robot. The voice commands are not currently documented.

Running in simulation (untested)

roslaunch pr2_pbd_interaction pbd_simulation_stack.launch

Common issues

Saving poses / executing actions is very slow

If it takes a very long time to save a pose, it is likely because MoveIt is configured to automatically infer the planning scene from sensor data. This makes it very slow to compute IK solutions, which are used to color the gripper markers in RViz. To eliminate this behavior, run MoveIt with a dummy sensor:

<include file="$(find pr2_moveit_config)/launch/move_group.launch" machine="c2">
  <arg name="moveit_octomap_sensor_params_file" value="$(find my_package)/config/sensors_dummy.yaml"/>
</include>

Where sensors_dummy.yaml looks like this:

sensors:
    - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
      point_cloud_topic: /head_mount_kinect/depth_registered/pointsdummy
      max_range: 5.0
      point_subsample: 10
      padding_offset: 0.1
      padding_scale: 1.0
      filtered_cloud_topic: filtered_cloud

Running tests (not currently working)

Desktop

rostest pr2_pbd_interaction test_endtoend.test

PR2

roscd pr2_pbd_interaction
python test/test_endtoend_realrobot.py

Code coverage (not currently working)

After running the tests, you can view code coverage by opening ~/.ros/htmlcov/index.html with a web browser. Note that you can also view code coverage for normal execution by passing coverage:=true when launching pbd_backend.launch.

With an account set up at Coveralls, edit .coveralls.yml with your repo_token, and track coverage there by running coveralls --data_file ~/.ros/.coverage.

Contributing

Before creating a pull request, please do the following things:

  1. Lint your Python to pep8 standards by running pep8 file1 file2 ....
  2. Optionally format all your Python code with yapf and all your C++ code with clang-format. See the HCR Lab's auto code formatting guide.
  3. Run the tests on your desktop (see above).

(Untested) We provide a script that lints all Python files in common directories, runs the tests on the desktop, opens the code coverage report in Google Chrome, and sends the results to Coveralls (assuming a Coveralls account and .coveralls.yml are set up correctly):

$ roscd pr2_pbd_interaction; ./scripts/test_and_coverage.sh

Questions, feedback, improvements

Please use the Github issues page for this project.

pr2_pbd's People

Contributors

ahendrix, jstnhuang, mayacakmak, mbforbes, saineti, thedash, trainman419


pr2_pbd's Issues

Speech commands recognized but not working

Hey guys, I am probably just doing something wrong, but the robot is not doing anything. The commands are recognized through the microphone, but the robot just doesn't respond.

Partial: TEST-MICROPHONE
[INFO] [WallTime: 1413403489.399051] test-microphone
[INFO] [WallTime: 1413403489.399578] Received command:test-microphone
Partial: RELAX-RIGHT-ARM
[INFO] [WallTime: 1413403517.144450] relax-right-arm
[INFO] [WallTime: 1413403517.144824] Received command:relax-right-arm
[INFO] [WallTime: 1413403520.426516] relax-right-arm
[INFO] [WallTime: 1413403520.426937] Received command:relax-right-arm
Partial: OPEN-LEFT-HAND
[INFO] [WallTime: 1413403535.139681] open-left-hand
[INFO] [WallTime: 1413403535.140071] Received command:open-left-hand
Partial: OPEN-RIGHT-HAND
[INFO] [WallTime: 1413403538.072160] open-right-hand
[INFO] [WallTime: 1413403538.072599] Received command:open-right-hand
Partial: SAVE
Partial: ACTION
Partial: EXECUTE-ACTION
[INFO] [WallTime: 1413403680.654847] execute-action
[INFO] [WallTime: 1413403680.655228] Received command:execute-action

Am I supposed to do something else for this to work?

Add freeze/release head commands

With upcoming perception changes, the robot should be able to search for landmarks in places other than a tabletop.

We should figure out a good way to control the head for demonstrations. One possible way is to use social gaze to have the robot head follow the grippers, but have "freeze head" and "release head" commands.

World.py: object race condition

The following trace occurred during execution

Traceback (most recent call last):
  File "/home/djbutler/rosbuild_ws/pr2_pbd/pr2_pbd_interaction/nodes/interaction.py", line 26, in <module>
    interaction.update()
  File "/home/djbutler/rosbuild_ws/pr2_pbd/pr2_pbd_interaction/src/Interaction.py", line 514, in update
    states = self._get_arm_states()
  File "/home/djbutler/rosbuild_ws/pr2_pbd/pr2_pbd_interaction/src/Interaction.py", line 378, in _get_arm_states
    abs_ee_poses[arm_index])
  File "/home/djbutler/rosbuild_ws/pr2_pbd/pr2_pbd_interaction/src/World.py", line 616, in get_nearest_object
    dist = World.pose_distance(World.objects[i].object.pose,
IndexError: list index out of range

This appears to be caused by code in World.py where a list's length is checked, but during the loop that follows, the list turns out to be shorter than advertised. This smells like a race condition to me.
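
One possible fix, sketched below, is to have readers take a snapshot of the shared list under a lock (any code that mutates World.objects would need to hold the same lock). The lock and the snapshot are assumptions; only World.objects, World.pose_distance, and get_nearest_object come from the trace above.

import threading

# Sketch: writers and readers of World.objects hold the same lock, and readers
# iterate over a snapshot so the list cannot shrink mid-loop.
_objects_lock = threading.Lock()

def get_nearest_object(arm_pose):
    with _objects_lock:
        objects = list(World.objects)  # snapshot; safe to iterate afterwards
    nearest, nearest_dist = None, float('inf')
    for obj in objects:
        dist = World.pose_distance(obj.object.pose, arm_pose)
        if dist < nearest_dist:
            nearest, nearest_dist = obj, dist
    return nearest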

Only check for IK when needed

The system will check for IK whenever it creates a gripper mesh, coloring it appropriately. This means it checks for IK solutions at unnecessary times:

  • When saving a pose (because poses are demonstrated kinesthetically there must be an IK solution)
  • When loading a previously recorded action

It's only necessary to solve IK:

  • When a pose is edited in rviz
  • Prior to executing an action (because the torso height might change)
  • When executing and a pose is relative to a landmark

The unnecessary checks slow down execution quite a bit; it takes O(# actions) seconds to load an action.

Put all messages into their own package

E.g., pr2_pbd_msgs

This is probably good practice overall. A specific problem right now is that you can't write nosetests and do imports like from pr2_pbd_interaction.msg import ArmState, because it will be unable to find the msg module while importing pr2_pbd_interaction and it's not aware of the catkin devel directory for some reason. This can be remedied if you only have imports like from pr2_pbd_msgs.msg import ArmState. You can still write normal unittests, but then you can't trigger them automatically with catkin run_tests.
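
In code, the difference is only the import path (pr2_pbd_msgs is the proposed package name; it does not exist yet):

# Today: fails under nosetests because the msg module can't be found without
# the catkin devel space on the path.
from pr2_pbd_interaction.msg import ArmState

# After the split into a hypothetical pr2_pbd_msgs package:
from pr2_pbd_msgs.msg import ArmState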

Implement or remove face detection

If you set a gaze goal to FOLLOW_FACE (GazeGoal.msg), then in social_gaze.py there is an error as the initialization of this action client (self.faceClient) is commented out.

We need either:

  1. to fully implement this
  2. to remove it
  3. a comment as to why it is currently disabled (at the least)
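
As a stopgap for option 3, a defensive guard in social_gaze.py could at least document the situation instead of crashing. This is only a sketch; the goal variable name and the surrounding method are assumptions about the code, and only GazeGoal.FOLLOW_FACE and self.faceClient come from the issue above.

# Sketch: bail out cleanly while face detection is disabled.
if goal_type == GazeGoal.FOLLOW_FACE:
    if self.faceClient is None:
        # The face detection action client is commented out; see this issue.
        rospy.logwarn('FOLLOW_FACE requested, but face detection is not implemented.')
        return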

Adjusting poses in rviz is really slow

Whenever I want to adjust a pose in rviz (because no IK solution was found for it), it goes really slowly. I'll drag the interactive marker in some direction; the marker then snaps back, and the gripper marker slowly jumps a tiny bit (maybe 0.5 cm) at a time in the direction I moved it. Each tiny jump takes a few seconds, and dragging the marker just a few centimeters can take a minute.

Stop publishing TFs for each object

TF never removes frames from its internal list, even after they have stopped being published for a while. MoveIt! uses this list to try to transform every single frame into the planning frame at high frequency. When we stop publishing the frames for old objects, MoveIt! fails to transform those frames and spews error messages at a high rate, which wastes CPU and makes it hard to follow the logs.

Instead, after detecting the objects, we should store their transforms internally and compute all the transforms ourselves. In general, TF should never be used for any frame that is not permanent, especially when used with MoveIt!
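
A minimal sketch of that approach (the function names and the dict are assumptions, not the project's current API; tf.transformations is the standard ROS helper):

import numpy as np
import tf.transformations as tft

# Sketch: cache each detected object's transform instead of broadcasting a TF frame.
_object_transforms = {}  # object name -> 4x4 transform of the object in the base frame

def store_object(name, pose):
    """Record the object's pose (geometry_msgs/Pose) once, at detection time."""
    trans = tft.translation_matrix((pose.position.x, pose.position.y, pose.position.z))
    rot = tft.quaternion_matrix((pose.orientation.x, pose.orientation.y,
                                 pose.orientation.z, pose.orientation.w))
    _object_transforms[name] = np.dot(trans, rot)

def point_in_object_frame(name, point_in_base):
    """Express a base-frame point in the stored object's frame, without TF lookups."""
    object_from_base = np.linalg.inv(_object_transforms[name])
    return np.dot(object_from_base, np.append(point_in_base, 1.0))[:3]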

Poses should be independent of torso height

If you teach the robot to do a wave while at the lowest height, and then raise the torso, the execution might fail because some poses are not reachable. Poses not associated with any landmarks should be defined relative to torso_lift_link instead of base_link.
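
A sketch of what that could look like when saving a pose (tf.TransformListener is standard ROS; the surrounding names are assumptions):

import rospy
import tf
from geometry_msgs.msg import PoseStamped

listener = tf.TransformListener()

def to_torso_frame(ee_pose_in_base):
    """Re-express an end-effector pose in torso_lift_link so saved poses stay
    valid when the torso height changes."""
    ps = PoseStamped()
    ps.header.frame_id = 'base_link'
    ps.header.stamp = rospy.Time(0)  # use the latest available transform
    ps.pose = ee_pose_in_base
    listener.waitForTransform('torso_lift_link', 'base_link',
                              rospy.Time(0), rospy.Duration(1.0))
    return listener.transformPose('torso_lift_link', ps)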

Landmark registration algorithm is wrong

When there are two similar objects in the learned action, sometimes they end up being registered to the same object during execution. This can be avoided by implementing the registration algorithm as described in the RSS paper.
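
Until that algorithm is implemented, one way to at least enforce a one-to-one mapping is a globally optimal assignment instead of independent nearest-neighbor matches. This is only a sketch, not the algorithm from the RSS paper; it assumes a pairwise dissimilarity function and a SciPy version that provides linear_sum_assignment.

import numpy as np
from scipy.optimize import linear_sum_assignment

def register_landmarks(action_landmarks, scene_objects, dissimilarity):
    """Match each landmark of the learned action to at most one scene object.

    dissimilarity(landmark, obj) -> float is assumed; lower means a better match.
    """
    cost = np.array([[dissimilarity(a, b) for b in scene_objects]
                     for a in action_landmarks])
    rows, cols = linear_sum_assignment(cost)  # optimal 1-to-1 assignment
    return [(action_landmarks[r], scene_objects[c]) for r, c in zip(rows, cols)]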
