lab-v2 / pyreason
An explainable inference software supporting annotated, real valued, graph based and temporal logic
Home Page: https://neurosymbolic.asu.edu/pyreason/
License: Other
The following rule is grounded incorrectly in the current implementation of PyReason. Here, the variable of one of the unary predicates in the body is not used in any binary predicate in the body, but is used directly in the head of the rule:
isConnectedTo(A, Y) <-1 isConnectedTo(Y, B), Vnukovo_International_Airport(A), Amsterdam_Airport_Schiphol(B)
New feature request (may be out of scope)
Prolog offers the ability to ask which atoms satisfy a query. For instance, friends(mary,X) will give all friends of Mary. In PyReason, please provide similar capabilities.
It would be extraordinary if PyReason could also provide reasons for failure along the logical chain. If Mary cannot be friends with John because she doesn't have a cat, this could have far-ranging applications.
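As a sketch of what such a query interface might look like (all names here are illustrative, not part of the PyReason API), a small pattern-matching helper over a flat list of derived facts:

```python
# Hypothetical sketch of a Prolog-style query over reasoned facts.
# `facts` stands in for a reasoner's output as (predicate, args) pairs.
def query(facts, predicate, *pattern):
    """Yield bindings for variables (strings starting with '?')."""
    for pred, args in facts:
        if pred != predicate or len(args) != len(pattern):
            continue
        binding = {}
        for p, a in zip(pattern, args):
            if p.startswith('?'):
                binding[p] = a       # bind variable to constant
            elif p != a:
                break                # constant mismatch: no match
        else:
            yield binding

facts = [('friends', ('mary', 'john')),
         ('friends', ('mary', 'justin')),
         ('friends', ('john', 'justin'))]

# friends(mary, X)
print([b['?x'] for b in query(facts, 'friends', 'mary', '?x')])
# → ['john', 'justin']
```

Explaining *failures* would additionally need the rule trace, which PyReason already records when atom_trace is enabled.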
I ran the example as shown in the read me file, but the output that I got is:
Filtering data...
TIMESTEP - 0
component popular
0 Mary [1.0,1.0]
TIMESTEP - 1
component popular
0 Mary [1.0,1.0]
TIMESTEP - 2
component popular
0 Mary [1.0,1.0]
The command that I ran was the following:
python3 -m pyreason.scripts.diffuse --graph_path pyreason/examples/example_graph/friends.graphml --timesteps 2 --rules_yaml_path docs/hello-world/rules.yaml --facts_yaml_path docs/hello-world/facts.yaml --labels_yaml_path docs/hello-world/labels.yaml --ipl docs/hello-world/ipl.yaml --filter_label popular
We don't support cyclic rules (there can be unexpected behavior), such as:
head(x) <- pred1(a,x), pred2(a,b), pred3(b, x)
In a rule like this, when the third clause is encountered, it goes back through the variable dependency to refine the other variables, but stops at a. Technically this constitutes a cyclic dependency graph.
The refinement process will look like:
b updated and x updated -> refine a
But technically, since a is dependent on x, we should have:
b updated and x updated -> refine a -> refine x -> refine b -> refine a -> ...
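The cyclic dependency described above can be detected up front so such rules are rejected or at least flagged. A minimal sketch (clause and variable names are illustrative; duplicate clauses over the same variable pair are not counted as cycles here):

```python
# Sketch: build an undirected variable co-occurrence graph from the
# rule body and detect a cycle, which signals the endless refinement
# chain described above.
def has_cyclic_dependency(clauses):
    """clauses: list of tuples of variable names, e.g. ('a', 'x')."""
    adj = {}
    for clause in clauses:
        vs = list(dict.fromkeys(clause))  # dedupe, keep order
        for i in range(len(vs)):
            for j in range(i + 1, len(vs)):
                adj.setdefault(vs[i], set()).add(vs[j])
                adj.setdefault(vs[j], set()).add(vs[i])
    seen = set()
    for start in adj:
        if start in seen:
            continue
        # DFS remembering the node we arrived from; reaching an
        # already-seen node by a different edge means a cycle.
        stack = [(start, None)]
        while stack:
            node, parent = stack.pop()
            if node in seen:
                return True
            seen.add(node)
            stack.extend((n, node) for n in adj[node] if n != parent)
    return False

# head(x) <- pred1(a,x), pred2(a,b), pred3(b,x): a-x-b-a is a cycle
print(has_cyclic_dependency([('a', 'x'), ('a', 'b'), ('b', 'x')]))  # True
print(has_cyclic_dependency([('a', 'x'), ('a', 'b')]))              # False
```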
Delta t in a rule should not be pre-defined; instead it should be a function that depends on the grounding of the body.
Add functionality for a new tag, allow_duplicates: if False, multiple variables in the body cannot be grounded to the same constant simultaneously. Currently it always behaves as allow_duplicates=True.
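A sketch of the requested semantics, assuming a grounding is a variable-to-constant mapping (the function name is hypothetical, not PyReason API):

```python
# Sketch of allow_duplicates=False: drop any grounding that binds two
# distinct variables to the same constant.
def keep_grounding(grounding, allow_duplicates=True):
    if allow_duplicates:
        return True
    values = list(grounding.values())
    return len(values) == len(set(values))  # all constants distinct

print(keep_grounding({'x': 'John', 'y': 'Mary'}, allow_duplicates=False))  # True
print(keep_grounding({'x': 'John', 'y': 'John'}, allow_duplicates=False))  # False
```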
While using incremental annotation functions, it's possible for the lower bound to become greater than the upper bound (e.g. [1.2, 1]):
@numba.njit
def f(annotations, weights):
    print(annotations)
    return annotations[0][0].lower + 0.1, 1
Settings:
pr.settings.canonical = True
pr.settings.inconsistency_check = False
pr.settings.update_mode = 'override'
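One possible guard, sketched below, is to clamp whatever the annotation function returns so both bounds stay in [0, 1] and the lower bound never exceeds the upper; the policy on conflict (cap, swap, or raise) is a design choice:

```python
# Sketch of a bounds guard for annotation-function output.
def clamp_bounds(lower, upper):
    lower = min(max(lower, 0.0), 1.0)  # keep in [0, 1]
    upper = min(max(upper, 0.0), 1.0)
    if lower > upper:
        lower = upper                  # or raise/swap, per desired policy
    return lower, upper

print(clamp_bounds(1.2, 1.0))  # (1.0, 1.0)
print(clamp_bounds(0.3, 0.9))  # (0.3, 0.9)
```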
This is not an issue, but the way we perform rule grounding is convoluted and can be simplified, especially the use of subsets in interpretation.py. This will involve changing the way parallelization takes place as well.
Do we have an estimate on when this might be picked up, or guidance on how we can contribute?
Originally posted by @f-bdolan in #49 (reply in thread)
Incremental annotation functions associated with a rule that has delta_t=0 do not work as expected. They are expected to converge in one timestep; instead the annotation increments every timestep. Solve #22 before this.
import pyreason as pr
import numba
import networkx as nx
@numba.njit
def f(annotations, weights):
    print(annotations)
    return annotations[0][0].lower + 0.1, 1
# Create a Directed graph
g = nx.DiGraph()
# Add the nodes
g.add_nodes_from(['John', 'Mary', 'Justin'])
g.add_nodes_from(['Dog', 'Cat'])
# Add the edges and their attributes. When an attribute = x which is <= 1, the annotation
# associated with it will be [x,1]. NOTE: These attributes are immutable
# Friend edges
g.add_edge('Justin', 'Mary', Friends=1)
g.add_edge('John', 'Mary', Friends=1)
g.add_edge('John', 'Justin', Friends=1)
# Pet edges
g.add_edge('Mary', 'Cat', owns=1)
g.add_edge('Justin', 'Cat', owns=1)
g.add_edge('Justin', 'Dog', owns=1)
g.add_edge('John', 'Dog', owns=1)
# Modify pyreason settings to make verbose and to save the rule trace to a file
pr.settings.verbose = True # Print info to screen
pr.settings.atom_trace = True # This allows us to view all the atoms that have made a certain rule fire
pr.settings.canonical = True
pr.settings.inconsistency_check = False
pr.settings.update_mode = 'override'
pr.add_annotation_function(f)
# Load all rules and the graph into pyreason
# Someone is "popular" if they have a friend who is popular and they both own the same pet
pr.load_graph(g)
# pr.add_rule(pr.Rule('popular(x) : f <-1 popular(y), Friends(x,y), owns(y,z), owns(x,z)', 'popular_rule'))
pr.add_rule(pr.Rule('person(x) : f <-0 person(x):[0,1]', 'popular_rule'))
pr.add_fact(pr.Fact('popular-fact', 'Mary', 'popular', [1, 1], 0, 2))
pr.add_fact(pr.Fact('popular-fact', 'John', 'person', [0.1, 1], 0, 0))
# Run the program for six timesteps to see the diffusion take place
interpretation = pr.reason(timesteps=6)
# Display the changes in the interpretation for each timestep
dataframes = pr.filter_and_sort_nodes(interpretation, ['person'])
for t, df in enumerate(dataframes):
    print(f'TIMESTEP - {t}')
    print(df)
    print()
Running pr.reason() outputs:
Fixed Point iterations: 0
and does not infer until convergence as desired.
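The expected behavior, stopping only once no bound changes between fixed-point iterations, can be sketched in isolation (the update function is a toy stand-in, not PyReason's internals):

```python
# Sketch of bound-convergence checking: keep applying the update
# until no bound changes between iterations, rather than running a
# fixed number of steps.
def reason_to_convergence(bounds, update, max_iters=100):
    for it in range(max_iters):
        new_bounds = update(bounds)
        if new_bounds == bounds:   # fixed point: nothing changed
            return new_bounds, it
        bounds = new_bounds
    return bounds, max_iters

def toy_update(b):
    # Raise each lower bound by 0.1, capped at its upper bound.
    return {k: (min(l + 0.1, u), u) for k, (l, u) in b.items()}

final, iters = reason_to_convergence({'John': (0.1, 1.0)}, toy_update)
print(final)  # {'John': (1.0, 1.0)}
```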
Running within a Jupyter notebook, the cell below produces the following error. Note the .graphml file is correct. Also note that the error says timesteps=1, but that's a cut/paste issue.
# Load all the files into pyreason
pr.load_graphml(graph_path)
pr.add_rule(pr.Rule('popular(x) <-1 popular(y), Friends(x,y), owns(y,z), owns(x,z)', 'popular_rule'))
pr.add_fact(pr.Fact('popular-fact', 'Mary', 'popular', [1, 1], 0, 2))
# Run the program for two timesteps to see the diffusion take place
interpretation = pr.reason(timesteps=2)
# Display the changes in the interpretation for each timestep
dataframes = pr.filter_and_sort_nodes(interpretation, ['popular'])
for t, df in enumerate(dataframes):
    print(f'TIMESTEP - {t}')
    print(df)
    print()
Leading to
{
"name": "KeyError",
"message": "",
"stack": "---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[18], line 10
7 pr.add_fact(pr.Fact('popular-fact', 'Mary', 'popular', [1, 1], 0, 2))
9 # Run the program for two timesteps to see the diffusion take place
---> 10 interpretation = pr.reason(timesteps=1)
12 # Display the changes in the interpretation for each timestep
13 dataframes = pr.filter_and_sort_nodes(interpretation, ['popular'])
File /usr/local/lib/python3.11/site-packages/pyreason/pyreason.py:542, in reason(timesteps, convergence_threshold, convergence_bound_threshold, again, node_facts, edge_facts)
540 print(f\"\
Program used {mem_usage-start_mem} MB of memory\")
541 else:
--> 542 interp = _reason(timesteps, convergence_threshold, convergence_bound_threshold)
543 else:
544 if settings.memory_profile:
File /usr/local/lib/python3.11/site-packages/pyreason/pyreason.py:620, in _reason(timesteps, convergence_threshold, convergence_bound_threshold)
617 __program.specific_edge_labels = __specific_edge_labels
619 # Run Program and get final interpretation
--> 620 interpretation = __program.reason(timesteps, convergence_threshold, convergence_bound_threshold, settings.verbose)
622 return interpretation
File /usr/local/lib/python3.11/site-packages/pyreason/scripts/program/program.py:41, in Program.reason(self, tmax, convergence_threshold, convergence_bound_threshold, verbose)
39 else:
40 \tself.interp = Interpretation(self._graph, self._ipl, self._annotation_functions, self._reverse_graph, self._atom_trace, self._save_graph_attributes_to_rule_trace, self._canonical, self._inconsistency_check, self._store_interpretation_changes, self._update_mode)
---> 41 self.interp.start_fp(self._tmax, self._facts_node, self._facts_edge, self._rules, verbose, convergence_threshold, convergence_bound_threshold)
43 return self.interp
File /usr/local/lib/python3.11/site-packages/pyreason/scripts/interpretation/interpretation.py:171, in Interpretation.start_fp(self, tmax, facts_node, facts_edge, rules, verbose, convergence_threshold, convergence_bound_threshold, again)
169 self._convergence_mode, self._convergence_delta = self._init_convergence(convergence_bound_threshold, convergence_threshold)
170 max_facts_time = self._init_facts(facts_node, facts_edge, self.facts_to_be_applied_node, self.facts_to_be_applied_edge, self.facts_to_be_applied_node_trace, self.facts_to_be_applied_edge_trace, self.atom_trace)
--> 171 self._start_fp(rules, max_facts_time, verbose, again)
File /usr/local/lib/python3.11/site-packages/pyreason/scripts/interpretation/interpretation.py:196, in Interpretation._start_fp(self, rules, max_facts_time, verbose, again)
195 def _start_fp(self, rules, max_facts_time, verbose, again):
--> 196 \tfp_cnt, t = self.reason(self.interpretations_node, self.interpretations_edge, self.tmax, self.prev_reasoning_data, rules, self.nodes, self.edges, self.neighbors, self.reverse_neighbors, self.rules_to_be_applied_node, self.rules_to_be_applied_edge, self.edges_to_be_added_node_rule, self.edges_to_be_added_edge_rule, self.rules_to_be_applied_node_trace, self.rules_to_be_applied_edge_trace, self.facts_to_be_applied_node, self.facts_to_be_applied_edge, self.facts_to_be_applied_node_trace, self.facts_to_be_applied_edge_trace, self.ipl, self.rule_trace_node, self.rule_trace_edge, self.rule_trace_node_atoms, self.rule_trace_edge_atoms, self.reverse_graph, self.atom_trace, self.save_graph_attributes_to_rule_trace, self.canonical, self.inconsistency_check, self.store_interpretation_changes, self.update_mode, max_facts_time, self.annotation_functions, self._convergence_mode, self._convergence_delta, verbose, again)
197 \tself.time = t - 1
198 \t# If we need to reason again, store the next timestep to start from
File /usr/local/lib/python3.11/site-packages/numba/typed/dictobject.py:778, in impl()
776 ix, val = _dict_lookup(d, castedkey, hash(castedkey))
777 if ix == DKIX.EMPTY:
--> 778 raise KeyError()
779 elif ix < DKIX.EMPTY:
780 raise AssertionError(\"internal dict error during lookup\")
KeyError: "
}
Change how scalar values are currently stored/handled in pyreason, including creating rule templates in FOL that contain function dereferences within predicates.
Feature Request.
While using PyReason as a Python SDK, it would be useful to construct a NetworkX graph programmatically as an object, then pass that into the PyReason engine. Allow both pr.load_graph(path_to_graph) and pr.load_graph(G), where G = nx.Graph(), to avoid having to write the XML to disk.
I installed PyReason with pip on my system as root and when I try to import the module I am getting the following error:
PermissionError: [Errno 13] Permission denied: '/usr/local/lib/python3.10/site-packages/pyreason/.cache_status.yaml'
I believe this is due to the Numba cache path being set based on the package path in:
Line 6 in 29ebb0b
I think users should be allowed to override NUMBA_CACHE_DIR, so perhaps in __init__.py the environment variable could be set to the package path only if it is not already set.
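The suggested change amounts to a setdefault on the environment variable; a sketch (the default path is illustrative):

```python
import os

# Sketch of the suggested fix: default NUMBA_CACHE_DIR to a
# package-controlled location only when the user hasn't already set
# it, so a user-provided value always wins.
default_cache = os.path.join('/tmp', 'pyreason_numba_cache')
os.environ.setdefault('NUMBA_CACHE_DIR', default_cache)
print('NUMBA_CACHE_DIR' in os.environ)  # True
```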
pred(x) <- pred1(y,x), pred2(y,z) is different from pred(x,y) <- pred1(y,x), pred2(y,z). This should be reflected in the YAMLs as well.
- Remove any blank lines in interpretation.py
- Model class for pyreason instead of global variables. Include functions such as model.summary() after reasoning.
- diffuse.py and command line support
- interpretations as a dict instead of a class object
- prev_l and prev_u in the interval class
- __source in edge rule node clauses
Currently PyReason loops through all nodes and edges and checks for neighbor groundings. This is inflexible with rules such as pred(x) <- pred1(y,x), pred2(y,z), where y will be replaced by the neighbors of x, when in reality this should be any other node (except in the case of a node clause, where the groundings are the neighbors).
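A sketch of the more general grounding, where every variable ranges over all nodes and edge clauses act purely as filters (all names are illustrative, not PyReason internals):

```python
from itertools import product

# Sketch of generalized grounding: each variable ranges over all
# nodes, and edge clauses filter candidate assignments, instead of
# restricting a variable to the neighbors of another up front.
def ground(variables, nodes, edge_clauses, edges):
    """edge_clauses: list of (var_a, var_b) pairs that must be edges."""
    for combo in product(nodes, repeat=len(variables)):
        binding = dict(zip(variables, combo))
        if all((binding[a], binding[b]) in edges for a, b in edge_clauses):
            yield binding

nodes = ['n1', 'n2', 'n3']
edges = {('n2', 'n1')}
# pred(x) <- pred1(y, x), pred2(y, z): (y, x) must be an edge, z is free.
out = list(ground(['x', 'y', 'z'], nodes, [('y', 'x')], edges))
print(len(out))  # 3: y=n2, x=n1, z ranges over all three nodes
```

The full cross-product is exponential in the number of variables, which is presumably why the current implementation restricts groundings to neighbors; a practical version would need join-style pruning.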
Add section on: Recommended starting values for parallel processes (cores) used and memory required for new users.
Feature Request
While using the Python SDK, it would be useful to construct the fact.yaml, rules.yaml, etc. programmatically. If these are translated into classes, perhaps expose those classes and allow pr.load_rules(R), where R = pr.RulesClass().
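A sketch of what such a class might look like (Rules, add, and to_dicts are hypothetical names, not part of the PyReason API):

```python
# Sketch of the requested API: build rules in code rather than in
# rules.yaml.
class Rules:
    def __init__(self):
        self._rules = []

    def add(self, rule_text, name):
        self._rules.append({'name': name, 'rule': rule_text})
        return self  # allow chaining

    def to_dicts(self):
        """Shape that a loader like pr.load_rules(R) could consume."""
        return list(self._rules)

R = Rules()
R.add('popular(x) <-1 popular(y), Friends(x,y)', 'popular_rule')
print(R.to_dicts())
# [{'name': 'popular_rule', 'rule': 'popular(x) <-1 popular(y), Friends(x,y)'}]
```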
With the newest versions of all the dependencies, on a Mac running Sonoma 14.3, I installed Anaconda and created a new Python environment with Python 3.12.1. After running
pip install pyreason
I tried importing pyreason and received this result:
>>> import pyreason
Imported PyReason for the first time. Initializing ... this will take a minute
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "~/pyreason-main/pyreason/__init__.py", line 24, in <module>
reason(timesteps=2)
File "~/pyreason-main/pyreason/pyreason.py", line 542, in reason
interp = _reason(timesteps, convergence_threshold, convergence_bound_threshold)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/pyreason-main/pyreason/pyreason.py", line 620, in _reason
interpretation = __program.reason(timesteps, convergence_threshold, convergence_bound_threshold, settings.verbose)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/pyreason-main/pyreason/scripts/program/program.py", line 41, in reason
self.interp.start_fp(self._tmax, self._facts_node, self._facts_edge, self._rules, verbose, convergence_threshold, convergence_bound_threshold)
File "~/pyreason-main/pyreason/scripts/interpretation/interpretation.py", line 171, in start_fp
self._start_fp(rules, max_facts_time, verbose, again)
File "~/pyreason-main/pyreason/scripts/interpretation/interpretation.py", line 196, in _start_fp
fp_cnt, t = self.reason(self.interpretations_node, self.interpretations_edge, self.tmax, self.prev_reasoning_data, rules, self.nodes, self.edges, self.neighbors, self.reverse_neighbors, self.rules_to_be_applied_node, self.rules_to_be_applied_edge, self.edges_to_be_added_node_rule, self.edges_to_be_added_edge_rule, self.rules_to_be_applied_node_trace, self.rules_to_be_applied_edge_trace, self.facts_to_be_applied_node, self.facts_to_be_applied_edge, self.facts_to_be_applied_node_trace, self.facts_to_be_applied_edge_trace, self.ipl, self.rule_trace_node, self.rule_trace_edge, self.rule_trace_node_atoms, self.rule_trace_edge_atoms, self.reverse_graph, self.atom_trace, self.save_graph_attributes_to_rule_trace, self.canonical, self.inconsistency_check, self.store_interpretation_changes, self.update_mode, max_facts_time, self.annotation_functions, self._convergence_mode, self._convergence_delta, verbose, again)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/anaconda3/envs/pyreason/lib/python3.12/site-packages/numba/core/dispatcher.py", line 468, in _compile_for_args
error_rewrite(e, 'typing')
File "~/anaconda3/envs/pyreason/lib/python3.12/site-packages/numba/core/dispatcher.py", line 409, in error_rewrite
raise e.with_traceback(None)
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: Handle with contexts)
Failed in nopython mode pipeline (step: nopython frontend)
Cannot unify Tuple(uint16, unicode_type, Label, numba.IntervalType(('l', float64), ('u', float64), ('s', bool), ('prev_l', float64), ('prev_u', float64)), bool, bool) and int64 for 'i.28', defined at ~/pyreason-main/pyreason/scripts/interpretation/interpretation.py (464)
File "pyreason/scripts/interpretation/interpretation.py", line 464:
def reason(interpretations_node, interpretations_edge, tmax, prev_reasoning_data, rules, nodes, edges, neighbors, reverse_neighbors, rules_to_be_applied_node, rules_to_be_applied_edge, edges_to_be_added_node_rule, edges_to_be_added_edge_rule, rules_to_be_applied_node_trace, rules_to_be_applied_edge_trace, facts_to_be_applied_node, facts_to_be_applied_edge, facts_to_be_applied_node_trace, facts_to_be_applied_edge_trace, ipl, rule_trace_node, rule_trace_edge, rule_trace_node_atoms, rule_trace_edge_atoms, reverse_graph, atom_trace, save_graph_attributes_to_rule_trace, canonical, inconsistency_check, store_interpretation_changes, update_mode, max_facts_time, annotation_functions, convergence_mode, convergence_delta, verbose, again):
<source elided>
# Remove from rules to be applied and edges to be applied lists after coming out from loop
rules_to_be_applied_node[:] = numba.typed.List([rules_to_be_applied_node[i] for i in range(len(rules_to_be_applied_node)) if i not in rules_to_remove_idx])
^
During: typing of assignment at ~/pyreason-main/pyreason/scripts/interpretation/interpretation.py (464)
File "pyreason/scripts/interpretation/interpretation.py", line 464:
def reason(interpretations_node, interpretations_edge, tmax, prev_reasoning_data, rules, nodes, edges, neighbors, reverse_neighbors, rules_to_be_applied_node, rules_to_be_applied_edge, edges_to_be_added_node_rule, edges_to_be_added_edge_rule, rules_to_be_applied_node_trace, rules_to_be_applied_edge_trace, facts_to_be_applied_node, facts_to_be_applied_edge, facts_to_be_applied_node_trace, facts_to_be_applied_edge_trace, ipl, rule_trace_node, rule_trace_edge, rule_trace_node_atoms, rule_trace_edge_atoms, reverse_graph, atom_trace, save_graph_attributes_to_rule_trace, canonical, inconsistency_check, store_interpretation_changes, update_mode, max_facts_time, annotation_functions, convergence_mode, convergence_delta, verbose, again):
<source elided>
# Remove from rules to be applied and edges to be applied lists after coming out from loop
rules_to_be_applied_node[:] = numba.typed.List([rules_to_be_applied_node[i] for i in range(len(rules_to_be_applied_node)) if i not in rules_to_remove_idx])
^
It's hard to tell what's going wrong here, because the source is elided. I presume a newer version of Numba, PyReason, or Python has broken some implicit dependency, but if anybody has any suggestions I'd really like to be able to import this project. I've tried downgrading to PyReason 2.0.0 and it gives the same result, as well as Python 3.11.
Update documentation and build doxygen docs
Is order meant to be significant in the rule definitions?
I expected these two rules to be equivalent, but was surprised to find pyreason had different behavior for each rule:
working_rule = 'has_blue(x) <-1 is_member_of(y,x), blue(y)'
broken_rule = 'has_blue(x) <-1 blue(y), is_member_of(y,x)'
Here are two examples to explain what I am seeing. The only difference is that Example 1 uses working_rule while Example 2 uses broken_rule.
# bug1.py
import networkx as nx
import pyreason as pr
pr.settings.verbose = True # Print info to screen
pr.settings.atom_trace = True # This allows us to view all the atoms that have made a certain rule fire
g = nx.DiGraph()
g.add_nodes_from(['Group1'])
g.add_nodes_from(['User1'])
g.add_edge('User1', 'Group1', is_member_of=1)
pr.load_graph(g)
working_rule = 'has_blue(x) <-1 is_member_of(y,x), blue(y)'
broken_rule = 'has_blue(x) <-1 blue(y), is_member_of(y,x)'
rule = working_rule
pr.add_rule(rule, 'ui_rule') # works
pr.add_fact(pr.Fact(name='fact1', component='User1', attribute='blue', bound=[1, 1], start_time=0, end_time=2))
interpretation = pr.reason(timesteps=2)
dataframes = pr.filter_and_sort_nodes(interpretation, ['has_blue'])
for t, df in enumerate(dataframes):
    print(f'TIMESTEP - {t}')
    print(df)
    print()
Output:
$ python bug1.py
Timestep: 0
Timestep: 1
Timestep: 2
Converged at time: 2
Fixed Point iterations: 3
TIMESTEP - 0
Empty DataFrame
Columns: [component, has_blue]
Index: []
TIMESTEP - 1
component has_blue
0 Group1 [1.0, 1.0]
TIMESTEP - 2
component has_blue
0 Group1 [1.0, 1.0]
This output is what I expected, since Group1 has a member with blue.
# bug2.py
import networkx as nx
import pyreason as pr
pr.settings.verbose = True # Print info to screen
pr.settings.atom_trace = True # This allows us to view all the atoms that have made a certain rule fire
g = nx.DiGraph()
g.add_nodes_from(['Group1'])
g.add_nodes_from(['User1'])
g.add_edge('User1', 'Group1', is_member_of=1)
pr.load_graph(g)
working_rule = 'has_blue(x) <-1 is_member_of(y,x), blue(y)'
broken_rule = 'has_blue(x) <-1 blue(y), is_member_of(y,x)'
rule = broken_rule
pr.add_rule(rule, 'ui_rule') # broken
pr.add_fact(pr.Fact(name='fact1', component='User1', attribute='blue', bound=[1, 1], start_time=0, end_time=2))
interpretation = pr.reason(timesteps=2)
dataframes = pr.filter_and_sort_nodes(interpretation, ['has_blue'])
for t, df in enumerate(dataframes):
    print(f'TIMESTEP - {t}')
    print(df)
    print()
$ python bug2.py
Timestep: 0
Timestep: 1
Timestep: 2
Converged at time: 2
Fixed Point iterations: 3
TIMESTEP - 0
Empty DataFrame
Columns: [component, has_blue]
Index: []
TIMESTEP - 1
Empty DataFrame
Columns: [component, has_blue]
Index: []
TIMESTEP - 2
Empty DataFrame
Columns: [component, has_blue]
Index: []
This output is not what I expected, since Group1 has a blue member. has_blue(Group1) should be true at TIMESTEP 2.
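Until the order sensitivity is fixed, one workaround is to reorder body clauses so each clause shares a variable with those already placed, starting from a clause that mentions a head variable. A sketch (the (predicate, vars) clause representation is illustrative):

```python
# Sketch of a clause-ordering workaround for the behavior above:
# greedily place clauses that share a variable with the bound set,
# seeded with the head variables.
def reorder_clauses(head_vars, clauses):
    remaining = list(clauses)
    ordered, bound = [], set(head_vars)
    while remaining:
        for clause in remaining:
            if bound & set(clause[1]):   # shares a bound variable
                break
        else:
            clause = remaining[0]        # disconnected clause: take any
        remaining.remove(clause)
        ordered.append(clause)
        bound |= set(clause[1])
    return ordered

broken = [('blue', ('y',)), ('is_member_of', ('y', 'x'))]
print(reorder_clauses(['x'], broken))
# [('is_member_of', ('y', 'x')), ('blue', ('y',))]
```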
Add section on: Which factors determine parallel processes (cores) used and memory required.
This is not really an issue, but more an inconsistency. There are possible scenarios where a rule can have multiple groundings (with infer_edges=True), OR the rule could be grounded multiple times differently. Example files and traces are attached below.
multiple-groundings.zip
Hello Everyone,
I was trying to follow the tutorial and was not able to run the scripts successfully.
Generating the graphml file:
import networkx as nx
# Create a Directed graph
g = nx.DiGraph()
# Add the nodes
g.add_nodes_from(['John', 'Mary', 'Justin'])
g.add_nodes_from(['Dog', 'Cat'])
# Add the edges and their attributes. When an attribute = x which is <= 1, the annotation
# associated with it will be [x,1]. NOTE: These attributes are immutable
# Friend edges
g.add_edge('Justin', 'Mary', Friends=1)
g.add_edge('John', 'Mary', Friends=1)
g.add_edge('John', 'Justin', Friends=1)
# Pet edges
g.add_edge('Mary', 'Cat', owns=1)
g.add_edge('Justin', 'Cat', owns=1)
g.add_edge('Justin', 'Dog', owns=1)
g.add_edge('John', 'Dog', owns=1)
# Save the graph to GraphML format to input into PyReason
nx.write_graphml_lxml(g, 'friends_graph.graphml', named_key_ids=True)
Loading it:
import pyreason as pr
graph_path = 'friends_graph.graphml'
pr.load_graph(graph_path)
Error I get:
AttributeError: 'str' object has no attribute 'shape'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File ~/pyreason_test.py:12 in <module>
pr.load_graph(graph_path)
File ~/pyreason.py:410 in load_graph
__graph = __graphml_parser.load_graph(graph)
File ~/graphml_parser.py:24 in load_graph
self.graph = nx.DiGraph(graph)
File ~/digraph.py:319 in __init__
convert.to_networkx_graph(incoming_graph_data, create_using=self)
File ~/convert.py:159 in to_networkx_graph
raise nx.NetworkXError(
NetworkXError: Input is not a correct scipy sparse matrix type.
Can you please help me understand how to solve this issue?
Thank you!