
e-MDB reference implementation for cognitive processes

This repository includes software packages for the cognitive processes that manipulate the knowledge elements in the long-term memory (LTM) of the software implementation of the e-MDB cognitive architecture developed under the PILLAR Robots project.

At the moment, there is just one process, which we have called the "main cognitive loop". This process reads perceptions, calculates an activation value for each knowledge nugget (node) depending on those perceptions and on the activation of the connected nodes (thus detecting the relevant contexts), and, finally, executes an action [1].
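The loop described above can be sketched in heavily simplified form. The node structure, activation function and helper names below are illustrative assumptions, not the actual e-MDB implementation:

```python
# Illustrative sketch of the "main cognitive loop" described above.
# The node structure and the toy activation function are hypothetical,
# not the real e-MDB code.

def activation(node, perception):
    """Toy activation: overlap between a node's context and the perception."""
    return len(node["context"] & perception) / max(len(node["context"]), 1)

def main_loop_step(ltm, perception):
    # 1. Compute an activation value for every knowledge nugget (node).
    for node in ltm:
        node["activation"] = activation(node, perception)
    # 2. Pick the policy attached to the most activated context.
    best = max(ltm, key=lambda n: n["activation"])
    # 3. Execute (here: just return) the selected policy.
    return best["policy"]

ltm = [
    {"context": {"object_far"}, "policy": "ask_nicely"},
    {"context": {"object_in_gripper"}, "policy": "put_object_in_box"},
]
print(main_loop_step(ltm, {"object_in_gripper"}))  # -> put_object_in_box
```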

More cognitive processes will come, such as those related to learning, which are necessary to generate and adapt world models, utility models, and policies.

Therefore, there is just one ROS 2 package right now, which includes the implementation of the "main cognitive loop".

For more information about the cognitive architecture design, you can visit the emdb_core repository or the PILLAR Robots official website.

[1] Duro, R. J., Becerra, J. A., Monroy, J., & Bellas, F. (2019). Perceptual generalization and context in a network memory inspired long-term memory for artificial cognition. International Journal of Neural Systems, 29(06), 1850053.


Dependencies

These are the dependencies required to use this repository of the e-MDB cognitive architecture software:

  • ROS 2 Humble
  • Numpy 1.24.3

Other versions could work, but the indicated ones have proven to be functional.

Installation

To install this package, it is necessary to clone this repository into a ROS 2 workspace and build it with colcon:

colcon build --symlink-install

This repository only constitutes the main loop, the reference cognitive process of the e-MDB cognitive architecture. To get full functionality, it is necessary to add to the ROS workspace, at a minimum, the emdb_core repository, which constitutes the base of the architecture, plus other packages that include the cognitive nodes, the experiment configuration and the interface that connects the architecture with a real or simulated environment. Therefore, to use the first version of the architecture implemented by GII, these repositories need to be cloned into the workspace:

These repositories include an example experiment with the discrete event simulator, in which the Policies, the Goal and the World Model are defined at the beginning; the objective is to create the corresponding PNodes and CNodes, which allow the Goal to be achieved effectively by the simulated robot.

The Goal, called ObjectInBoxStandalone, consists of correctly placing a cylinder into a box. To do so, the robot can use, in a World Model called GripperAndLowFriction, the following policies:

  • Grasp object: use one of the two grippers to grasp an object
  • Grasp object with two hands: use both arms to grasp an object between their ends
  • Change hands: move object from one gripper to the other
  • Sweep object: sweep an object to the central line of the table
  • Ask nicely: ask the experimenter (simulated in this case) to bring something within reach
  • Put object with robot: deposit an object close to the robot base
  • Put object in box: place an object in a receptacle
  • Throw: throw an object to a position

The reward obtained can be 0.2, 0.3 or 0.6 when an action improves the robot's situation with respect to the final goal. Finally, when the cylinder is placed into the box, the reward obtained is 1.0. Thus, at the end of the experiment, seven PNodes and CNodes should have been created, one per policy, except for Put object with robot, which doesn't lead to any reward.

Configuring an experiment

It is possible to configure the behavior of the main loop by editing the experiment configuration file stored in the emdb_experiments_gii repository (experiments/default_experiment.yaml), or a custom one:

Experiment:
    name: main_loop
    class_name: cognitive_processes.main_loop.MainLoop
    new_executor: False
    threads: 2
    parameters: 
        iterations: 6000
        trials: 50
        subgoals: False
Control:
    id: ltm_simulator
    control_topic: /main_loop/control
    control_msg: core_interfaces.msg.ControlMsg
    executed_policy_topic: /mdb/baxter/executed_policy
    executed_policy_msg: std_msgs.msg.String
LTM:
    Files:
        -
            id: goodness
            class: core.file.FileGoodness
            file: goodness.txt
        -
            id: pnodes_success
            class: core.file.FilePNodesSuccess
            file: pnodes_success.txt

As we can see, we can configure the number of iterations of the experiment, the number of trials the robot will make before the simulated world is reset, and whether or not subgoals exist. It is also possible to set the parameter new_executor to True, which indicates to the Commander node that it has to create a new, dedicated execution node for each cognitive node that is created, with the indicated number of threads (2 in this case), although this concerns the core of the architecture rather than the main loop.
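As a minimal sketch of how these parameters could be read, the snippet below loads a trimmed copy of the configuration above with PyYAML. The loading code is illustrative, not the architecture's own mechanism:

```python
# Sketch: reading experiment parameters from the YAML configuration.
# The embedded text is a trimmed copy of the example configuration;
# the loading code is illustrative, not the architecture's own.
import yaml  # PyYAML

config_text = """
Experiment:
    name: main_loop
    parameters:
        iterations: 6000
        trials: 50
        subgoals: False
"""

config = yaml.safe_load(config_text)
params = config["Experiment"]["parameters"]
print(params["iterations"], params["trials"], params["subgoals"])
```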

Additionally, there is the control part, which in the example acts as a middleware between the cognitive architecture and the discrete event simulator, with the main loop controlling the communications between both parts. In this case, the main loop publishes commands to the simulator, such as the world reset, the current iteration and the active world model. It also indicates to the simulator where the policy to execute will be published. This can be adapted to another simulator or to a real robot.

Finally, the main loop is also responsible for creating the output files, such as goodness.txt or pnodes_success.txt. In the file.py script of the emdb_core repository, the file creation can be modified or new files can be added. In the experiment configuration file we can decide which output files will be written.

Execution

To execute the example experiment, or any other launch file, it is essential to source the ROS workspace:

source install/setup.bash

Afterwards, the experiment can be launched:

ros2 launch core example_launch.py

Once executed, the logs can be seen in the terminal, making it possible to follow the behavior of the experiment in real time.

Results

Executing the example experiment will create two files by default: goodness.txt and pnodes_success.txt.

The first one records important information, such as the policy executed and the reward obtained per iteration. It is possible to observe the learning process by watching this file in real time with the following command:

tail -f goodness.txt
Iteration Goal World Reward Policy Sensorial changes C-nodes
1416 object_in_box_standalone GRIPPER_AND_LOW_FRICTION 0.3 sweep_object True 7
1417 object_in_box_standalone GRIPPER_AND_LOW_FRICTION 0.6 grasp_with_two_hands True 7
1418 object_in_box_standalone GRIPPER_AND_LOW_FRICTION 1.0 put_object_in_box True 7
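These columns can also be summarized with a short script, for instance to compute the average reward per policy. The whitespace-separated column layout is assumed from the sample output above:

```python
# Summarize rewards per policy from a goodness.txt-style log.
# The column layout is assumed from the sample output shown above.
from collections import defaultdict

sample = """\
1416 object_in_box_standalone GRIPPER_AND_LOW_FRICTION 0.3 sweep_object True 7
1417 object_in_box_standalone GRIPPER_AND_LOW_FRICTION 0.6 grasp_with_two_hands True 7
1418 object_in_box_standalone GRIPPER_AND_LOW_FRICTION 1.0 put_object_in_box True 7
"""

totals = defaultdict(list)
for line in sample.splitlines():
    iteration, goal, world, reward, policy, changed, cnodes = line.split()
    totals[policy].append(float(reward))

for policy, rewards in totals.items():
    print(policy, sum(rewards) / len(rewards))
```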

In the second file, it is possible to see the activation of the PNodes and whether each sample was a point (True) or an anti-point (False).

When the execution has finished, it is possible to obtain statistics about rewards and PNode activations per 100 iterations by using the scripts available in the scripts directory of the core package (emdb_core/core/scripts):

python3 $ROS_WORKSPACE/src/emdb_core/core/scripts/generate_grouped_statistics -n 100 -f goodness.txt > goodness_grouped_statistics.csv

python3 $ROS_WORKSPACE/src/emdb_core/core/scripts/generate_grouped_success_statistics -n 100 -f pnodes_success.txt > pnodes_grouped_statistics.csv

To use these scripts it is necessary to have the python-magic 0.4.27 dependency installed.

By plotting the data of these final files, it is possible to obtain a visual interpretation of the learning of the cognitive architecture.
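Such a plot can be sketched as below, here computing the average reward per 100-iteration block directly from goodness.txt-style lines (the column layout is assumed from the sample shown earlier, and matplotlib is required):

```python
# Sketch: plot average reward per 100-iteration block from a
# goodness.txt-style log. Column layout assumed from the sample
# shown earlier; matplotlib is required for the plot itself.
import matplotlib
matplotlib.use("Agg")  # headless backend, so no display is needed
import matplotlib.pyplot as plt

def grouped_averages(lines, group=100):
    """Average the reward column over blocks of `group` iterations."""
    buckets = {}
    for line in lines:
        fields = line.split()
        iteration, reward = int(fields[0]), float(fields[3])
        buckets.setdefault(iteration // group, []).append(reward)
    keys = sorted(buckets)
    return keys, [sum(buckets[k]) / len(buckets[k]) for k in keys]

lines = [
    "1416 object_in_box_standalone GRIPPER_AND_LOW_FRICTION 0.3 sweep_object True 7",
    "1418 object_in_box_standalone GRIPPER_AND_LOW_FRICTION 1.0 put_object_in_box True 7",
]
groups, averages = grouped_averages(lines)
plt.plot([g * 100 for g in groups], averages, marker="o")
plt.xlabel("iteration")
plt.ylabel("average reward")
plt.savefig("reward_curve.png")
```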
