Comments (9)

bgoertzel commented on May 27, 2024

PLN needs inference trails to avoid circular inferences... but I believe we
decided to maintain trails in the Atomspace rather than in separate
objects...

On Fri, Jun 26, 2015 at 2:41 PM, Amen Belayneh [email protected]
wrote:

What is the purpose of these messages? Is there any reason for keeping
these message types?
1.
https://github.com/opencog/atomspace/blob/master/opencog/persist/zmq/atomspace/ZMQMessages.proto#L43
The last commit referring to Trail is c9bacab. Trail seems to have been a
container for storing a series of atoms; it could have been used similarly
to the inference history in python-pln:
https://github.com/opencog/opencog/blob/bf4581f5a59065d0aba0052c8fbab0599990bb60/opencog/python/pln_old/chainers.py#L317
2.
https://github.com/opencog/atomspace/blob/master/opencog/persist/zmq/atomspace/ZMQMessages.proto#L20
The last commit referring to VersionHandle is 518f603.

This follows from @ceefour's suggestion at
https://groups.google.com/d/msg/opencog/KUrO5fx0bKg/uilwdioF9nYJ for the
Neo4j backing store.

@cosmoharrigan @linas @ngeiswei



Ben Goertzel, PhD
http://goertzel.org

"The reasonable man adapts himself to the world: the unreasonable one
persists in trying to adapt the world to himself. Therefore all progress
depends on the unreasonable man." -- George Bernard Shaw

from atomspace.

amebel commented on May 27, 2024

@williampma / @misgeatgit is there a particular schema of atoms for storing inference history? I am asking because it might come in handy when defining the trail message, so as to cherry-pick such atoms to Neo4j or another backing store for mining or whatever.

The same approach could be used for the data management of the dialogue content that was discussed yesterday, or for other significant patterns. I am not sure whether this is the right architecture, though.


williampma commented on May 27, 2024

FC uses some sort of C++ Inference object (I think). BC has this (https://github.com/opencog/atomspace/blob/master/opencog/rule-engine/backwardchainer/Target.cc#L64-67)

Currently it's just using SetLink as a placeholder. I haven't really considered this important, since the inference history is stored in a temporary private AtomSpace, which only exists as long as the BackwardChainer object exists. Maybe that will change in the future; I don't know.
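For readers unfamiliar with the idea, an inference trail of the kind discussed here can be illustrated with plain data structures — this is a hypothetical sketch (names like `InferenceStep` and `Trail` are made up for illustration), not the actual Atomspace or BackwardChainer code:

```python
from dataclasses import dataclass, field

@dataclass
class InferenceStep:
    rule: str            # name of the rule applied
    premises: tuple      # atoms (here, plain strings) the rule consumed
    conclusion: str      # atom the rule produced

@dataclass
class Trail:
    steps: list = field(default_factory=list)

    def record(self, rule, premises, conclusion):
        self.steps.append(InferenceStep(rule, tuple(premises), conclusion))

    def seen(self, conclusion):
        """True if this conclusion was already derived -- the check that
        lets a chainer avoid circular inference."""
        return any(s.conclusion == conclusion for s in self.steps)

# Record one deduction step, then check for a repeat before re-deriving.
trail = Trail()
trail.record("DeductionRule", ["A->B", "B->C"], "A->C")
```

The point of keeping such a trail in the AtomSpace itself (rather than in a C++ object) would be that it can then be persisted and mined like any other atoms.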


misgeatgit commented on May 27, 2024

As @williampma said, in the forward chainer, history is kept in a C++ object (https://github.com/opencog/atomspace/blob/master/opencog/rule-engine/forwardchainer/FCMemory.cc)


linas commented on May 27, 2024

I am not aware of any pre-existing design for storing inference history.

I would like to suggest that there are algorithms that avoid the need for
an inference history, and that also scale to very large networks: the most
famous is the Page-Brin (Google) PageRank algorithm. Here is how it would
work with PLN:

You start with a set of atoms, pick a PLN rule at random and some initial
atoms at random, and apply that rule. Then, instead of setting the TV to
100% of what PLN recommends, you set the TV to 50% of its old value plus
50% of the newly recommended value. Then you do this again, "forever".
After many iterations, the TV on the network of atoms should stabilize to a
steady state, where the old/existing values are about equal to the new
values, so there are few or no changes. The fact that previous nodes are
revisited doesn't matter.

It is useful to keep an extra floating-point value, or a count of the
number of times a node has been visited or used, so that it is picked less
often in later inference steps.

The above is a hand-waving sketch. There are some details that you would
have to figure out to make the above actually work.

-linas
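The hand-waving sketch above could be rendered roughly as follows in Python. Everything here is a toy stand-in (a dict of truth-value strengths, an averaging "rule"), purely illustrative and not actual PLN code:

```python
import random

# Hypothetical toy network: atom name -> truth-value strength in [0, 1].
atoms = {"A": 0.9, "B": 0.2, "C": 0.5}
visits = {name: 0 for name in atoms}  # how often each atom has been used

def toy_rule(tv_a, tv_b):
    """Stand-in for a PLN rule: recommend a strength from two premises."""
    return (tv_a + tv_b) / 2  # a crude revision-like combination, stays in [0, 1]

def relax_step(blend=0.5):
    # Pick atoms with probability inversely related to their visit count,
    # so frequently used atoms are chosen less often in later steps.
    names = list(atoms)
    weights = [1.0 / (1 + visits[n]) for n in names]
    a, b, target = random.choices(names, weights=weights, k=3)
    recommended = toy_rule(atoms[a], atoms[b])
    # Blend: keep 50% of the old value, take 50% of the recommended value.
    atoms[target] = (1 - blend) * atoms[target] + blend * recommended
    for n in (a, b, target):
        visits[n] += 1

random.seed(0)
for _ in range(1000):
    relax_step()
# After many iterations the strengths settle toward a steady state.
```

The blending step is what makes the iteration a relaxation rather than a one-shot update: repeated visits to the same atom just nudge its value, so no history of prior applications needs to be kept.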


bgoertzel commented on May 27, 2024

Yeah ... In 2008 Cassio and I published a paper on how to avoid keeping
inference history in PLN

There are tradeoffs involved ...


ceefour commented on May 27, 2024

By the way, as a side note, the current ZMQMessages.proto does not follow protobuf's recommended naming conventions, i.e. lower_snake_case for field names. The protobuf compiler will then generate names in the idiomatic convention for each target language.
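To illustrate the convention (with a hypothetical message, not one taken from ZMQMessages.proto):

```proto
syntax = "proto3";

message AtomRequest {
  // Recommended: lower_snake_case in the .proto file. The compiler emits
  // atomName/getAtomName() for Java, atom_name for Python, and so on.
  string atom_name = 1;
  int32 atom_type = 2;
  // Discouraged in .proto files: camelCase, e.g. "string atomName = 1;"
}
```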


williampma commented on May 27, 2024

"You start with a set of atoms, pick a PLN rule at random ... so that it is
picked less often in later inference steps."

That's kind of like how the current BackwardChainer works (except for the 50% TV thing), although the algorithm you described is mostly about forward chaining. The inference history is currently kind of useless, though I imagine it could be used to avoid "pick a PLN rule at random", i.e. the tree could be used to make the choice non-random. There's also a "visit/select" count on each target already. Again, it's not used yet, but I implemented it so that it might be used to avoid selecting the same target atom repeatedly (again avoiding pure randomness).
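The visit-count-biased target selection described here could look roughly like this in Python. The names (`pick_target`, plain-string targets) are hypothetical, for illustration only; this is not the actual BackwardChainer code:

```python
import random

def pick_target(targets, visit_counts, rng=random):
    """Pick a target atom, preferring those selected less often so far.

    targets: list of hashable target identifiers.
    visit_counts: dict mapping target -> number of prior selections.
    """
    # Weight each target by the inverse of (1 + its selection count),
    # so repeatedly chosen targets become progressively less likely.
    weights = [1.0 / (1 + visit_counts.get(t, 0)) for t in targets]
    choice = rng.choices(targets, weights=weights, k=1)[0]
    visit_counts[choice] = visit_counts.get(choice, 0) + 1
    return choice

# Usage: over many picks, selections spread across the targets instead of
# repeatedly hammering the same atom.
counts = {}
rng = random.Random(0)
picks = [pick_target(["A", "B", "C"], counts, rng) for _ in range(300)]
```

Because a chosen target's weight shrinks after every selection, the scheme self-balances without needing a full inference history.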


linas commented on May 27, 2024

closing; this appears to have been cleaned up a while ago.

