spinnakermanchester / pacman
Partition and Configuration Manager for SpiNNaker
License: Apache License 2.0
Essentially reported by @SimonDavidson by way of @lplana: routing table minimisation with PACMAN is apparently very slow. This appears to be a result of driving the minimiser to produce the smallest table possible rather than a table which will fit (see https://github.com/SpiNNakerManchester/PACMAN/blob/master/pacman/operations/router_compressors/mundys_router_compressor/routing_table_condenser.py#L37). I'd recommend replacing target_length = None with target_length = 1023, or reading the number of free entries from the data reported by the machine.
(Note for the future, a C executable for on-chip minimisation exists but we need to decide what the interface should look like.)
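A minimal sketch of the suggested change. The machine-derived variant uses an accessor I believe spinn_machine's Router exposes, but treat that name as an assumption to be checked:

    # Rather than asking the compressor for the smallest possible table:
    target_length = None   # current behaviour: minimise exhaustively

    # ...ask only for a table that fits the router's 1024-entry TCAM:
    target_length = 1023

    # ...or, better, read the real number of free entries from the machine
    # (assumed accessor; check the spinn_machine Router API):
    # target_length = chip.router.n_available_multicast_entries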
We have the module pacman.model.constraints.router_constraints, which only defines a single abstract class, AbstractRouterConstraint, that nothing checks for or subclasses as a concrete class, or even as another abstract class. It's use-less, and so is useless.
Can we delete the module? Defining a constraint using it will just result in that constraint being ignored; our routers don't roll that way.
It would be nice to be able to connect a device back to itself if it supports both input and output. I believe that the tools will currently fail to add a route (or rather add a route to a virtual chip).
Partitioners are application-vertex and application-edge based, so they only consider the size of the slice.
So why not calculate once for each size and cache the result?
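A hedged sketch of the caching idea; the names here are illustrative, not the real partitioner API:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def resources_for_size(n_atoms):
        # Stand-in for the real (expensive) per-slice resource estimate,
        # which by the argument above depends only on the slice size.
        return {"dtcm": 100 * n_atoms, "sdram": 4000 + 200 * n_atoms}

    # Ten slices of 100 atoms each: the estimate runs once; the other
    # nine lookups are cache hits.
    for _ in range(10):
        resources_for_size(100)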
The ZonedRoutingInfoAllocator depends on graph_mapper.get_machine_vertex_index.
This will be a problem if there are more machine vertices than those that need keys;
it needs to be re-evaluated.
To replicate:
Hack /SpiNNFrontEndCommon/spinn_front_end_common/interface/interface_functions/front_end_common_interface_functions.xml and, in the <input_definitions> entry with <param_name>machine_time_step</param_name>, change the <param_type> to a bogus value.
Result:
Inputs required per function:
DatabaseInterface: ['['UniqueTimeStep']', '['A', 'l', 'l', ' ', 'o', 'f', ' ', '[', '[', "'", 'C', 'r', 'e', 'a', 't', 'e', 'A', 't', 'o', 'm', 'T', 'o', 'E', 'v', 'e', 'n', 't', 'I', 'd', 'M', 'a', 'p', 'p', 'i', 'n', 'g', "'", ']', ',', ' ', '[', "'", 'M', 'e', 'm', 'o', 'r', 'y', 'A', 'p', 'p', 'l', 'i', 'c', 'a', 't', 'i', 'o', 'n', 'G', 'r', 'a', 'p', 'h', "'", ']', ',', ' ', '[', "'", 'M', 'e', 'm', 'o', 'r', 'y', 'G', 'r', 'a', 'p', 'h', 'M', 'a', 'p', 'p', 'e', 'r', "'", ']', ']']' (optional)]
From #182
SpiNNMachine (machine.py) has MAX_CORES_IN_MACHINE;
no point duplicating it.
Time to fix the other converters.
Lines 126 and 129:
should this not be edge.traffic_weight?
Or do we just NUKE the whole class, as it's not being tested and clearly not being used?
Test: https://github.com/SpiNNakerManchester/sPyNNaker8/blob/master/p8_integration_tests/test_param_vertex_retrieval_from_board_after_1_run/test_synfire_20n_20pc_delays_delay_extensions_all_recording.py
test_with_constarint
../scripts/synfire_run.py:277: in do_run
p.run(run_times[-1])
../../spynnaker7/pyNN/__init__.py:161: in run
globals_variables.get_simulator().run(run_time)
../../../sPyNNaker/spynnaker/pyNN/abstract_spinnaker_common.py:315: in run
AbstractSpinnakerBase._run(self, run_time)
../../../SpiNNFrontEndCommon/spinn_front_end_common/interface/abstract_spinnaker_base.py:904: in _run
self._do_mapping(run_time, n_machine_time_steps, total_run_time)
../../../SpiNNFrontEndCommon/spinn_front_end_common/interface/abstract_spinnaker_base.py:1650: in _do_mapping
optional_algorithms)
../../../SpiNNFrontEndCommon/spinn_front_end_common/interface/abstract_spinnaker_base.py:1157: in _run_algorithms
executor.execute_mapping()
../../../PACMAN/pacman/executor/pacman_algorithm_executor.py:597: in execute_mapping
self._execute_mapping()
../../../PACMAN/pacman/executor/pacman_algorithm_executor.py:613: in _execute_mapping
results = algorithm.call(self._internal_type_mapping)
../../../PACMAN/pacman/executor/algorithm_classes/abstract_python_algorithm.py:47: in call
results = self.call_python(method_inputs)
../../../PACMAN/pacman/executor/algorithm_classes/python_class_algorithm.py:59: in call_python
return method(**inputs)
../../../PACMAN/pacman/operations/fixed_route_router/fixed_route_router.py:105: in call
destination_class, machine)
../../../PACMAN/pacman/operations/fixed_route_router/fixed_route_router.py:143: in _do_dynamic_routing
p=destination_processor, vertex=vertex_dest))
self = (1, 1, 4)(0, 0, 4)
placement = Placement(vertex=MachineVertex(label=None, constraints=set([]), x=0, y=0, p=4)
    def add_placement(self, placement):
        """ Add a placement

        :param placement: The placement to add
        :type placement:\
            :py:class:`pacman.model.placements.placement.Placement`
        :raise PacmanAlreadyPlacedError:\
            If there is any vertex with more than one placement.
        :raise PacmanProcessorAlreadyOccupiedError:\
            If two placements are made to the same processor.
        """
        placement_id = (placement.x, placement.y, placement.p)
        if placement_id in self._placements:
>           raise PacmanProcessorAlreadyOccupiedError(placement_id)
E           PacmanProcessorAlreadyOccupiedError: (0, 0, 4)
But the constraint is only x=0, y=0
The iterator of the routing infos uses a map which does not exist anymore, thus will go boom boom. Needs fixing
This line:
appears wrong (and possibly in other splitters too), since STR_MESSAGE requires two arguments and only one is given. I'm not sure whether STR_MESSAGE is wrong or whether something else should be being passed in here instead.
#256 (and its associated PRs downstream) has a few places where an import loop needed to be broken. The technique I used was to identify a class that was only ever actually needed at runtime (usually easy) and to move the lookup of that class so it actually occurs when it is first needed. For example, in application_edge.py:
_MachineEdge = None

def _machine_edge_class():
    global _MachineEdge  # pylint: disable=global-statement
    if _MachineEdge is None:
        from pacman.model.graphs.machine import MachineEdge
        _MachineEdge = MachineEdge
    return _MachineEdge
If you see that pattern (used because application edges can manufacture machine edges), that's what is happening. This ugly pattern was only put in because otherwise things don't work.
We ought to look at whether there's a better way. There may be…
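One candidate "better way", sketched as an assumption rather than a tested fix (the method name and signature here are illustrative): defer the import to the point of use. Python caches modules in sys.modules after the first import, so the per-call cost is a dict lookup, and no module-level global is needed:

    class ApplicationEdge(object):
        def create_machine_edge(self, pre_vertex, post_vertex, label):
            # Local import breaks the cycle; cached after the first call.
            from pacman.model.graphs.machine import MachineEdge
            return MachineEdge(pre_vertex, post_vertex, label=label)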
If you run a script that injects items and then run another script, the injected items stay behind.
A test fix is to add to globals_variables.unset_simulator():
    injection_decorator._instances = list()
Or should this go into AbstractSpinnakerBase.__init__?
Or, lo and behold, actually fix the hacky injector?????
/PACMAN/pacman/operations/algorithm_reports/routing_compression_checker_report.py
needs a Python 3 fix, as you can't index keys(): it now returns a view rather than a list.
When a placement constraint is added to place a vertex on a specific core, it doesn't work.
Found during testing in all_algorithms that the workflow manager can end up doing routing before the edges for the LPG are put into the graph.
I propose we add a new token that the router requires, e.g. InsertedEdges;
then the add-edges algorithms for the LPG, extra monitors, the partitioner, and any others can each produce their part of it, ensuring that routing is only done once the machine graph is fully built.
It would help developers of external devices if the tools would allow the addition of an extra route (key, mask, links, cpus) to a specific router for debug purposes.
The separation between graphs shouldn't be necessary. Partitioning should be able to work on a "Graph", and then only partition application vertices. An ApplicationEdge would be any edge which starts or ends on an ApplicationVertex, but the other vertex of the edge can be either another ApplicationVertex or a MachineVertex. During partitioning, an ApplicationVertex is turned into one or more MachineVertices and an ApplicationEdge is turned into one or more MachineEdges, as is currently done. The only difference is that if an ApplicationEdge starts on a MachineVertex, all MachineEdges of that ApplicationEdge start on that same MachineVertex, and similarly if the ApplicationEdge ends on a MachineVertex. Partitioning should also simply copy any MachineEdge or MachineVertex from the application graph to the machine graph.
Graph mapping is still a useful concept, since there will still be a graph before partitioning and a graph after partitioning. The GraphMapper should then only contain mappings between ApplicationVertices and MachineVertices, and between ApplicationEdges and MachineEdges, which would mean it would be empty if there are no ApplicationVertices in the original graph.
I think line 13 of setup.py needs to be either removed or changed to the current location of "interfaces".
Eric
P.S.
Line 52 in the same setup.py file is looking for SpiNNMachine version 2016.001.
(https://github.com/SpiNNakerManchester/PACMAN/blob/master/setup.py#L52)
However, the latest version in GitHub of SpiNNMachine is 2015.004.01.
(https://github.com/SpiNNakerManchester/SpiNNMachine/blob/master/setup.py#L8)
But, I'm sure SpiNNMachine will soon be changed to 2016.001 to fix this issue.
Only shows up when mode = Debug in config script.
Traceback (most recent call last):
File "/localhome/mbbssag3/spinnaker/git/sPyNNaker8/p8_integration_tests/test_various/test_mwh_population_synfire.py", line 98, in test_run_light
(v, gsyn, spikes) = do_run(nNeurons, neurons_per_core)
File "/localhome/mbbssag3/spinnaker/git/sPyNNaker8/p8_integration_tests/test_various/test_mwh_population_synfire.py", line 64, in do_run
p.run(1500)
File "/localhome/mbbssag3/spinnaker/git/sPyNNaker8/spynnaker8/init.py", line 618, in run
return __pynn["run"](simtime, callbacks=callbacks)
File "/localhome/mbbssag3/Downloads/PyNN/PyNN-0.9.1/PyNN9/pyNN/common/control.py", line 111, in run
return run_until(simulator.state.t + simtime, callbacks)
File "/localhome/mbbssag3/Downloads/PyNN/PyNN-0.9.1/PyNN9/pyNN/common/control.py", line 93, in run_until
simulator.state.run_until(time_point)
File "/localhome/mbbssag3/spinnaker/git/sPyNNaker8/spynnaker8/spinnaker.py", line 123, in run_until
self._run_wait(tstop - self.t)
File "/localhome/mbbssag3/spinnaker/git/sPyNNaker8/spynnaker8/spinnaker.py", line 166, in _run_wait
super(SpiNNaker, self).run(duration_ms)
File "/localhome/mbbssag3/spinnaker/git/sPyNNaker/spynnaker/pyNN/abstract_spinnaker_common.py", line 317, in run
super(AbstractSpiNNakerCommon, self).run(run_time)
File "/localhome/mbbssag3/spinnaker/git/SpiNNFrontEndCommon/spinn_front_end_common/interface/abstract_spinnaker_base.py", line 835, in run
self._run(run_time)
File "/localhome/mbbssag3/spinnaker/git/SpiNNFrontEndCommon/spinn_front_end_common/interface/abstract_spinnaker_base.py", line 1015, in _run
self._do_load(application_graph_changed)
File "/localhome/mbbssag3/spinnaker/git/SpiNNFrontEndCommon/spinn_front_end_common/interface/abstract_spinnaker_base.py", line 1873, in _do_load
optional_algorithms)
File "/localhome/mbbssag3/spinnaker/git/SpiNNFrontEndCommon/spinn_front_end_common/interface/abstract_spinnaker_base.py", line 1226, in _run_algorithms
reraise(*exc_info)
File "/usr/lib/python3/dist-packages/six.py", line 686, in reraise
raise value
File "/localhome/mbbssag3/spinnaker/git/SpiNNFrontEndCommon/spinn_front_end_common/interface/abstract_spinnaker_base.py", line 1211, in _run_algorithms
executor.execute_mapping()
File "/localhome/mbbssag3/spinnaker/git/PACMAN/pacman/executor/pacman_algorithm_executor.py", line 626, in execute_mapping
self._execute_mapping()
File "/localhome/mbbssag3/spinnaker/git/PACMAN/pacman/executor/pacman_algorithm_executor.py", line 642, in _execute_mapping
results = algorithm.call(self._internal_type_mapping)
File "/localhome/mbbssag3/spinnaker/git/PACMAN/pacman/executor/algorithm_classes/abstract_python_algorithm.py", line 46, in call
results = self.call_python(method_inputs)
File "/localhome/mbbssag3/spinnaker/git/PACMAN/pacman/executor/algorithm_classes/python_function_algorithm.py", line 45, in call_python
return function(**inputs)
File "/localhome/mbbssag3/spinnaker/git/PACMAN/pacman/operations/algorithm_reports/routing_compression_checker_report.py", line 151, in generate_routing_compression_checker_report
compare_route(f, o_route, compressed_dict)
File "/localhome/mbbssag3/spinnaker/git/PACMAN/pacman/operations/algorithm_reports/routing_compression_checker_report.py", line 111, in compare_route
compare_route(f, o_route, compressed_dict, o_code=o_code, start=i+1)
File "/localhome/mbbssag3/spinnaker/git/PACMAN/pacman/operations/algorithm_reports/routing_compression_checker_report.py", line 111, in compare_route
compare_route(f, o_route, compressed_dict, o_code=o_code, start=i+1)
File "/localhome/mbbssag3/spinnaker/git/PACMAN/pacman/operations/algorithm_reports/routing_compression_checker_report.py", line 111, in compare_route
compare_route(f, o_route, compressed_dict, o_code=o_code, start=i+1)
File "/localhome/mbbssag3/spinnaker/git/PACMAN/pacman/operations/algorithm_reports/routing_compression_checker_report.py", line 111, in compare_route
compare_route(f, o_route, compressed_dict, o_code=o_code, start=i+1)
File "/localhome/mbbssag3/spinnaker/git/PACMAN/pacman/operations/algorithm_reports/routing_compression_checker_report.py", line 111, in compare_route
compare_route(f, o_route, compressed_dict, o_code=o_code, start=i+1)
File "/localhome/mbbssag3/spinnaker/git/PACMAN/pacman/operations/algorithm_reports/routing_compression_checker_report.py", line 97, in compare_route
"a different defaultable value.".format(c_route, o_route))
pacman.exceptions.PacmanRoutingException: Compressed route 15168:4294967232:False:{}:{0} covers original route 15168:4294967232:True:{}:{0} but has a different defaultable value.
May be related to:
#190
Using a very big (40,001 core) but simple network:
Time 0:16:01.598417 taken by PartitionAndPlacePartitioner
Script:
import spynnaker8 as sim
from spynnaker8.utilities import neo_convertor

n_neurons = 10000
n_populations = 200
weights = 5
delays = 17.0
simtime = 50000

sim.setup(timestep=1.0, min_delay=1.0, max_delay=144.0)
sim.set_number_of_neurons_per_core(sim.IF_curr_exp, 100)

spikeArray = {'spike_times': [[0]]}
stimulus = sim.Population(1, sim.SpikeSourceArray, spikeArray,
                          label='stimulus')

chain_pops = [
    sim.Population(n_neurons, sim.IF_curr_exp, {}, label='chain_{}'.format(i))
    for i in range(n_populations)
]
for pop in chain_pops:
    pop.record("all")

connector = sim.OneToOneConnector()
for i in range(n_populations):
    sim.Projection(chain_pops[i], chain_pops[(i + 1) % n_populations],
                   connector,
                   synapse_type=sim.StaticSynapse(weight=weights,
                                                  delay=delays))
sim.Projection(stimulus, chain_pops[0], sim.AllToAllConnector(),
               synapse_type=sim.StaticSynapse(weight=5.0))

sim.run(simtime)
Set to None or 0 by default when routing tables are generated, so it's not even useful data.
The following demonstrates a failure of the FixedRouteRouter. Although this uses a virtual machine, the machine was a real machine that was read quite a long time after it had been booted. It could still be an "odd" machine, but this needs to be investigated.
from spinn_machine.virtual_machine import VirtualMachine
from pacman.model.placements.placements import Placements
from pacman.model.placements.placement import Placement
from pacman.operations.fixed_route_router.fixed_route_router \
    import FixedRouteRouter
import logging

logging.basicConfig(level=logging.INFO)


class DestinationVertex(object):
    pass


down_chips = {(9, 4), (12, 4), (12, 7), (13, 8)}
down_cores = {(0, 0, 1), (0, 1, 1), (0, 2, 1), (0, 3, 1), (1, 0, 1), (1, 1, 1), (1, 1, 17), (1, 2, 1), (1, 3, 1), (1, 3, 17), (1, 4, 1), (2, 0, 1), (2, 1, 1), (2, 2, 1), (2, 2, 17), (2, 3, 1), (2, 4, 1), (2, 5, 1), (3, 0, 1), (3, 1, 1), (3, 1, 17), (3, 2, 1), (3, 3, 1), (3, 3, 16), (3, 3, 17), (3, 4, 1), (3, 5, 1), (3, 5, 17), (3, 6, 1), (4, 0, 1), (4, 1, 1), (4, 2, 1), (4, 3, 1), (4, 4, 1), (4, 5, 1), (4, 6, 1), (4, 7, 1), (4, 7, 17), (4, 8, 1), (4, 9, 1), (4, 10, 1), (4, 11, 1), (5, 1, 1), (5, 2, 1), (5, 2, 17), (5, 3, 1), (5, 4, 1), (5, 4, 17), (5, 5, 1), (5, 6, 1), (5, 6, 17), (5, 7, 1), (5, 8, 1), (5, 9, 1), (5, 9, 17), (5, 10, 1), (5, 11, 1), (5, 11, 17), (5, 12, 1), (6, 2, 1), (6, 3, 1), (6, 4, 1), (6, 5, 1), (6, 6, 1), (6, 7, 1), (6, 8, 1), (6, 9, 1), (6, 10, 1), (6, 11, 1), (6, 12, 1), (6, 13, 1), (7, 3, 1), (7, 4, 1), (7, 5, 1), (7, 6, 1), (7, 7, 1), (7, 8, 1), (7, 9, 1), (7, 9, 17), (7, 10, 1), (7, 11, 1), (7, 11, 17), (7, 12, 1), (7, 13, 1), (7, 13, 17), (7, 14, 1), (8, 4, 1), (8, 5, 1), (8, 6, 1), (8, 7, 1), (8, 7, 16), (8, 7, 17), (8, 8, 1), (8, 9, 1), (8, 10, 1), (8, 11, 1), (8, 12, 1), (8, 13, 1), (8, 14, 1), (8, 15, 1), (9, 5, 1), (9, 5, 17), (9, 6, 1), (9, 7, 1), (9, 7, 16), (9, 7, 17), (9, 8, 1), (9, 9, 1), (9, 10, 1), (9, 10, 17), (9, 11, 1), (9, 12, 1), (9, 12, 17), (9, 13, 1), (9, 14, 1), (9, 14, 17), (9, 15, 1), (10, 4, 1), (10, 4, 17), (10, 5, 1), (10, 6, 1), (10, 6, 16), (10, 6, 17), (10, 7, 1), (10, 8, 1), (10, 9, 1), (10, 9, 16), (10, 9, 17), (10, 10, 1), (10, 11, 1), (10, 12, 1), (10, 13, 1), (10, 14, 1), (10, 15, 1), (11, 4, 1), (11, 5, 1), (11, 5, 17), (11, 6, 1), (11, 6, 17), (11, 7, 1), (11, 7, 17), (11, 8, 1), (11, 9, 1), (11, 9, 17), (11, 10, 1), (11, 11, 1), (11, 12, 1), (11, 13, 1), (11, 14, 1), (11, 15, 1), (12, 0, 1), (12, 1, 1), (12, 2, 1), (12, 3, 1), (12, 5, 1), (12, 6, 1), (12, 6, 17), (12, 8, 1), (12, 8, 17), (12, 9, 1), (12, 9, 16), (12, 9, 17), (12, 10, 1), (12, 11, 1), (12, 11, 17), (13, 0, 1), (13, 1, 1), (13, 1, 16), (13, 1, 17), (13, 2, 1), (13, 3, 1), (13, 3, 17), (13, 4, 1), (13, 4, 17), (13, 5, 1), (13, 5, 17), (13, 6, 1), (13, 6, 17), (13, 7, 1), (13, 9, 1), (13, 9, 17), (13, 10, 1), (13, 10, 17), (13, 11, 1), (14, 0, 1), (14, 1, 1), (14, 2, 1), (14, 3, 1), (14, 4, 1), (14, 4, 17), (14, 5, 1), (14, 6, 1), (14, 6, 17), (14, 7, 1), (14, 7, 17), (14, 8, 1), (14, 8, 17), (14, 9, 1), (14, 10, 1), (14, 11, 1), (14, 11, 16), (14, 11, 17), (15, 0, 1), (15, 1, 1), (15, 1, 17), (15, 2, 1), (15, 3, 1), (15, 3, 17), (15, 4, 1), (15, 5, 1), (15, 5, 17), (15, 6, 1), (15, 7, 1), (15, 8, 1), (15, 9, 1), (15, 10, 1), (15, 11, 1), (16, 0, 1), (16, 1, 1), (16, 2, 1), (16, 2, 17), (16, 3, 1), (16, 4, 1), (16, 5, 1), (16, 6, 1), (16, 7, 1), (16, 8, 1), (16, 9, 1), (16, 10, 1), (16, 11, 1), (17, 1, 1), (17, 2, 1), (17, 2, 17), (17, 3, 1), (17, 4, 1), (17, 4, 17), (17, 5, 1), (17, 6, 1), (17, 6, 17), (17, 7, 1), (17, 8, 1), (17, 9, 1), (17, 9, 17), (17, 10, 1), (17, 11, 1), (17, 11, 17), (17, 12, 1), (18, 2, 1), (18, 3, 1), (18, 4, 1), (18, 5, 1), (18, 6, 1), (18, 7, 1), (18, 8, 1), (18, 9, 1), (18, 10, 1), (18, 11, 1), (18, 12, 1), (18, 13, 1), (19, 3, 1), (19, 4, 1), (19, 5, 1), (19, 6, 1), (19, 7, 1), (19, 7, 17), (19, 8, 1), (19, 9, 1), (19, 10, 1), (19, 11, 1), (19, 11, 17), (19, 12, 1), (19, 13, 1), (19, 13, 17), (19, 14, 1), (20, 4, 1), (20, 5, 1), (20, 6, 1), (20, 7, 1), (20, 8, 1), (20, 9, 1), (20, 10, 1), (20, 11, 1), (20, 12, 1), (20, 13, 1), (20, 14, 1), (20, 15, 1), (21, 4, 1), (21, 5, 1), (21, 5, 17), (21, 6, 1), (21, 7, 1), (21, 7, 17), (21, 8, 
1), (21, 9, 1), (21, 10, 1), (21, 10, 17), (21, 11, 1), (21, 12, 1), (21, 12, 17), (21, 13, 1), (21, 14, 1), (21, 14, 17), (21, 15, 1), (22, 4, 1), (22, 5, 1), (22, 6, 1), (22, 7, 1), (22, 7, 17), (22, 8, 1), (22, 9, 1), (22, 10, 1), (22, 11, 1), (22, 12, 1), (22, 13, 1), (22, 14, 1), (22, 15, 1), (23, 4, 1), (23, 5, 1), (23, 5, 17), (23, 6, 1), (23, 7, 1), (23, 7, 17), (23, 8, 1), (23, 9, 1), (23, 9, 17), (23, 10, 1), (23, 11, 1), (23, 12, 1), (23, 13, 1), (23, 14, 1), (23, 15, 1), (24, 4, 1), (24, 5, 1), (24, 6, 1), (24, 7, 1), (24, 8, 1), (24, 9, 1), (24, 10, 1), (24, 11, 1), (25, 5, 1), (25, 6, 1), (25, 6, 17), (25, 7, 1), (25, 8, 1), (25, 8, 17), (25, 9, 1), (25, 10, 1), (25, 10, 17), (25, 11, 1), (25, 11, 17), (26, 6, 1), (26, 7, 1), (26, 8, 1), (26, 9, 1), (26, 10, 1), (26, 11, 1), (27, 7, 1), (27, 8, 1), (27, 9, 1), (27, 10, 1), (27, 11, 1)}
down_links = {(1, 2, 1), (1, 3, 0), (2, 2, 2), (2, 4, 5), (3, 3, 3), (3, 4, 4), (8, 4, 0), (9, 5, 5), (10, 4, 3), (10, 5, 4), (11, 4, 0), (11, 6, 1), (11, 7, 0), (12, 3, 2), (12, 5, 5), (12, 6, 2), (12, 8, 0), (12, 8, 5), (13, 4, 3), (13, 5, 4), (13, 7, 2), (13, 7, 3), (13, 9, 5), (14, 8, 3), (14, 9, 4)}
machine = VirtualMachine(
    width=28, height=16, down_chips=down_chips, down_cores=down_cores,
    down_links=down_links)
placements = Placements()
for chip in machine.ethernet_connected_chips:
    placements.add_placement(
        Placement(DestinationVertex(), chip.x, chip.y, 4))
router = FixedRouteRouter()
router(machine, placements, 5, DestinationVertex)
Add support for a dict as a constructor argument for data structures where an equivalent exists in rig, add a property to get the dict back, and support lazy operation of those structures, such that the dict is not converted until a property (other than the dict itself) is requested.
The current test for routing compression is a bit exact, in that it tests that the result is a specific compression of the table. Instead, it should go through each entry in the original table and check that following the keys of that entry still results in the same route in the compressed table (noting that the compressed table might be specifically ordered, and so only the first match should be checked).
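A hedged sketch of the proposed check. The entry shape (key/mask/route) is assumed here via a namedtuple, and the key expansion is brute force, so it is only sane for masks with few free bits:

    from collections import namedtuple

    Entry = namedtuple("Entry", "key mask route")

    def keys_covered_by(entry):
        # Enumerate every key the entry matches (brute force over unmasked bits).
        free_bits = [b for b in range(32) if not (entry.mask >> b) & 1]
        for i in range(1 << len(free_bits)):
            key = entry.key
            for j, b in enumerate(free_bits):
                if (i >> j) & 1:
                    key |= 1 << b
            yield key

    def route_for_key(table, key):
        # First match wins, mirroring what the router hardware does.
        for entry in table:
            if (key & entry.mask) == entry.key:
                return entry.route
        return None

    def check_compression(original, compressed):
        # Every key matched by an original entry must take the same route
        # through the (possibly reordered) compressed table.
        for entry in original:
            for key in keys_covered_by(entry):
                assert route_for_key(compressed, key) == entry.route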
I have seen a circumstance under which the OneToOnePlacer might be trying a bit too hard. The issue seemed to come up with a 512 neuron retina population connected to a single population of 256 neurons, where the connected population then has 16 atoms per core as a restriction. This attempted to force all of the 16 cores onto 0, 0. This has two issues:
This sounds to me like the OneToOnePlacer expecting a certain exception to occur when a grouped placement fails, but instead receiving a different one.
On SpiNNaker, some of the keys are reserved for system use. These should be pre-allocated to avoid them being assigned to a vertex.
Note that this will rarely be an issue for smaller networks as the keys reserved are at the top end of the key space, but the allocations tend to be made at the bottom end upwards. This should be fixed eventually though to avoid issues in the future.
Should we use collections.abc.Iterator as the type to link to in our documentation? I'm talking about when doing things like this (a very stupid, non-real, but legal and possible situation):
def foo(x):
    """
    :rtype: iterable(int)
    """
    for y in range(x):
        yield x ^ y
That would change the documented type to ~collections.abc.Iterator(int), which ought to link correctly (as Iterator(int)) when Sphinx processes it. (NB: might need to be Iterable instead of Iterator; I tend to mix those up.)
Note that we can't link directly to the Python concept of iterator when doing type annotation processing; that's not a type in the Python docs, it's a concept.
So the only place this is used is in machine_algorithm_utilities.
It could be wrapped into add_chip, and thus keep a uniform interface and allow the machine to build the list of virtual chips internally via a lookup on the added chip's virtual bool value.
Hi guys,
At the moment there's a block of code in PACMAN which presumably eventually gets to a bunch of keys, masks and outgoing routes and joins as many equivalent sets of these as possible. We need to do something similar in nengo_spinnaker to work out which filter an incoming value needs to be sent to. With the ongoing refactor, would it be possible to ensure that some version of this function can be easily called externally? It would probably accept a list of (key, mask, X) triples and return a reduced list of (key, mask, X) triples. This would save us duplicating some pretty core functionality and should mean we can stick the tests in one central location!
Cheers!
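For concreteness, a hedged sketch of the requested interface: a standalone reduction over (key, mask, X) triples that merges pairs sharing X and differing in exactly one masked-in key bit (the classic minimisation step; the real PACMAN code may work differently):

    def merge_triples(triples):
        triples = list(triples)
        changed = True
        while changed:                      # repeat to a fixed point
            changed = False
            for i in range(len(triples)):
                for j in range(i + 1, len(triples)):
                    k1, m1, x1 = triples[i]
                    k2, m2, x2 = triples[j]
                    diff = k1 ^ k2
                    if (m1 == m2 and x1 == x2 and diff
                            and diff & (diff - 1) == 0 and m1 & diff):
                        # Same X and mask, keys differ in one covered bit:
                        # clear that bit from key and mask to cover both.
                        triples[i] = (k1 & ~diff, m1 & ~diff, x1)
                        del triples[j]
                        changed = True
                        break
                if changed:
                    break
        return triples

    # e.g. 0b1010/0b1111 and 0b1011/0b1111 with the same X collapse
    # to a single 0b1010/0b1110 triple:
    print(merge_triples([(0b1010, 0b1111, "E"), (0b1011, 0b1111, "E")]))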
__find_algorithm_in_list is getting fake_inputs that are required by some algorithms and optional in others.
Clearly an algorithm that requires them should not run.
The issue is that _deduce_inputs_required_to_run does consider them, so it does not report them.
Found with the following (broken and silly) script:
from pacman.model.graphs.common import Slice
from spinn_front_end_common.utilities import globals_variables
from spinn_front_end_common.utility_models import \
    ReverseIPTagMulticastSourceMachineVertex
import spynnaker8 as sim

sim.setup(timestep=1.0)
simulator = globals_variables.get_simulator()
input = ReverseIPTagMulticastSourceMachineVertex(
    'input:0:6', Slice(lo_atom=0, hi_atom=6))
simulator.add_machine_vertex(input)
sim.run(20)
The ReverseIPTag specifies a port number - it could be useful to not require this and have one dynamically allocated if not specified.
Unused and in both edge and partition
Since all files live in version control, the __author__ mechanism is redundant thanks to git blame, and open to code rot, as future authors will inevitably fail to update the tag. In addition, the variable has no standard meaning defined by Python, so it will not be predictably picked up by tools.
I propose that instances of this variable be removed.
These methods were used in the rig version, but the data was never actually used.
Are they needed elsewhere?
AbstractOutgoingEdgePartition.traffic_weight()
All the integration tests go boom because of something going wrong with operations done using numpy. The failure also shows up during unit testing.
pacman/operations/routing_info_allocator_algorithms/malloc_based_routing_allocator/malloc_based_routing_info_allocator.py:277: in _allocate_keys_and_masks
for (base_key, n_keys) in self._get_key_ranges(key, mask):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
key = 0L, mask = 4294967264
    @staticmethod
    def _get_key_ranges(key, mask):
        """ Get a generator of base_key, n_keys pairs that represent ranges
            allowed by the mask

        :param key: The base key
        :param mask: The mask
        """
        unwrapped_mask = utility_calls.expand_to_bit_array(mask)
        first_zeros = list()
        remaining_zeros = list()
        pos = len(unwrapped_mask) - 1

        # Keep the indices of the first set of zeros
        while pos >= 0 and unwrapped_mask[pos] == 0:
            first_zeros.append(pos)
            pos -= 1

        # Find all the remaining zeros
        while pos >= 0:
            if unwrapped_mask[pos] == 0:
                remaining_zeros.append(pos)
            pos -= 1

        # Loop over 2^len(remaining_zeros) to produce the base key,
        # with n_keys being 2^len(first_zeros)
        n_sets = 2 ** len(remaining_zeros)
        n_keys = 2 ** len(first_zeros)
        unwrapped_key = utility_calls.expand_to_bit_array(key)
        for value in xrange(n_sets):
            generated_key = numpy.copy(unwrapped_key)
            unwrapped_value = utility_calls.expand_to_bit_array(value)[
                -len(remaining_zeros):]
>           generated_key[remaining_zeros] = unwrapped_value
E           ValueError: shape mismatch: value array of shape (32,) could not be broadcast to indexing result of shape (0,)
pacman/operations/routing_info_allocator_algorithms/malloc_based_routing_allocator/malloc_based_routing_info_allocator.py:222: ValueError
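My reading of the failure, as a minimal reproduction: when the mask's only zeros are the trailing run (e.g. mask 0xFFFFFFE0), remaining_zeros ends up empty, and the [-0:] slice silently returns the whole 32-element array instead of an empty one:

    import numpy

    remaining_zeros = []                     # mask 0xFFFFFFE0, key 0
    bits = numpy.zeros(32, dtype=numpy.uint8)
    tail = bits[-len(remaining_zeros):]      # [-0:] == [0:] -> all 32 elements!
    generated_key = numpy.copy(bits)
    generated_key[remaining_zeros] = tail    # (32,) into shape (0,) -> ValueError

Running that raises exactly the reported shape-mismatch ValueError.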
There are many settings which are fixed across the whole system and which do not change over time, except during sim.setup and sim.reset.
Instead of passing these up and down all over the place, have a singleton object, shared with everything, that holds them.
It would have an interface much like a read-only dictionary, with a few clearly marked set methods that should only be called from the config handler and unit/integration tests.
Values to put in here include:
TimeScaleFactor
MachineTimeStep, or the user-defined ones if we allow multiple
All the report and application data folders
It may well hold ALL values set by the cfg files, to keep these in one well-defined place.
Undecided if it would hold settings decided by the simulator, such as the runtime to plan or save data for.
It should NOT hold options created by one algorithm to be passed to the next.
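A hedged sketch of the proposed object; every name here is illustrative rather than an existing PACMAN/sPyNNaker API:

    class _SimulatorSettings(object):
        """Shared read-only-ish settings; the set method is for the
        config handler and tests only, per the proposal above."""
        _instance = None

        def __new__(cls):
            if cls._instance is None:
                cls._instance = super(_SimulatorSettings, cls).__new__(cls)
                cls._instance._values = {}
            return cls._instance

        def __getitem__(self, name):
            return self._values[name]

        def set_up(self, **values):
            # Only to be called from sim.setup / sim.reset handling.
            self._values = dict(values)

    # Written once at setup, read anywhere:
    _SimulatorSettings().set_up(TimeScaleFactor=10, MachineTimeStep=1000)
    assert _SimulatorSettings()["MachineTimeStep"] == 1000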
The placer sorts vertices by constraints, then groups them and places by group.
In rare, weird cases this can cause an error:
Script:
import spynnaker8 as sim
from pacman.model.constraints.placer_constraints import ChipAndCoreConstraint

sim.setup(timestep=1.0, n_boards_required=1)
machine = sim.get_machine()
input1 = sim.Population(
    1, sim.SpikeSourceArray(spike_times=[0]), label="input1")
input2 = sim.Population(
    1, sim.SpikeSourceArray(spike_times=[0]), label="input2")
pop_1 = sim.Population(5, sim.IF_curr_exp(), label="pop_1")
pop_2 = sim.Population(5, sim.IF_curr_exp(), label="pop_2")
sim.Projection(input1, pop_1, sim.AllToAllConnector(),
               synapse_type=sim.StaticSynapse(weight=5, delay=18))
sim.Projection(input2, pop_2, sim.AllToAllConnector(),
               synapse_type=sim.StaticSynapse(weight=5, delay=18))
input1.set_constraint(ChipAndCoreConstraint(0, 0, 1))
input2.set_constraint(ChipAndCoreConstraint(0, 0, 3))
pop_1.set_constraint(ChipAndCoreConstraint(1, 1, 1))
pop_2.set_constraint(ChipAndCoreConstraint(0, 0, 2))
sim.run(500)
sim.end()
Error:
No resources available to allocate the given resources within the given constraints:
The issue is that when we place input1 on 0,0,1, we place its delay vertex on 0,0,2 (the core pop_2 is constrained to), and the error message is confusing.
PACMAN/pacman/executor/algorithm_metadata_xml_reader.py
goes boom boom if there are comments in the XML.
It uses dicts, and an iterator of keys, instead of ordered dicts.
Needs fixing.
It would be awesome if we could generate textual reports as the simulation is being run to clarify exactly what is being run.
An example taken from what Keras does:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (10, 1, 300) 235500
_________________________________________________________________
dense_2 (Dense) (10, 1, 100) 30100
_________________________________________________________________
flatten_1 (Flatten) (10, 100) 0
_________________________________________________________________
dense_3 (Dense) (10, 10) 1010
=================================================================
Total params: 266,610
Trainable params: 266,610
Non-trainable params: 0
_________________________________________________________________
This is done by calling a method such as model.summary(). It shows each layer (population) in the network, with the number of parameters (synapses) and output shape (which contains information about the number of neurons as well).
I'm requesting something like this for SpiNNaker because otherwise I write my own for every single project. For example:
================================================================================
Creating projection aa_goc from granule to golgi with a weight of 0.020000 uS and a delay of 2.0 ms
Creating projection aa_pc from granule to purkinje with a weight of 0.075000 uS and a delay of 2.0 ms
Creating projection bc_pc from basket to purkinje with a weight of -0.009000 uS and a delay of 4.0 ms
Creating projection gj_bc from basket to basket with a weight of -0.002500 uS and a delay of 4.0 ms
Creating projection gj_goc from golgi to golgi with a weight of -0.008000 uS and a delay of 1.0 ms
Creating projection gj_sc from stellate to stellate with a weight of -0.002000 uS and a delay of 1.0 ms
Creating projection glom_dcn from glomerulus to dcn with a weight of 0.000006 uS and a delay of 4.0 ms
Creating projection glom_goc from glomerulus to golgi with a weight of 0.002000 uS and a delay of 4.0 ms
Creating projection glom_grc from glomerulus to granule with a weight of 0.009000 uS and a delay of 4.0 ms
================================================================================
Number of neurons in each population
--------------------------------------------------------------------------------
golgi -> 219 neurons
glomerulus -> 7073 neurons
granule -> 88158 neurons
purkinje -> 69 neurons
basket -> 603 neurons
stellate -> 603 neurons
dcn -> 12 neurons
TOTAL -> 96737 neurons
================================================================================
================================================================================
Number of synapses per projection:
--------------------------------------------------------------------------------
goc_grc -> 206092 synapses [inh]
pc_dcn -> 314 synapses [inh]
aa_pc -> 17256 synapses [exc]
pf_goc -> 350399 synapses [exc]
glom_dcn -> 1763 synapses [exc]
gj_sc -> 2411 synapses [inh]
glom_grc -> 352474 synapses [exc]
bc_pc -> 1379 synapses [inh]
TOTAL -> xxxx synapses
================================================================================
Number of incoming connections per population:
--------------------------------------------------------------------------------
golgi -> 451168 incoming synapses
glomerulus -> 0 incoming synapses
granule -> 558566 incoming synapses
basket -> 606900 incoming synapses
purkinje -> 1977916 incoming synapses
dcn -> 2077 incoming synapses
stellate -> 617588 incoming synapses
================================================================================
Normalised number of incoming connections per population:
--------------------------------------------------------------------------------
golgi -> 2060.13 incoming synapses
glomerulus -> 0.00 incoming synapses
granule -> 6.34 incoming synapses
basket -> 1006.47 incoming synapses
purkinje -> 28665.45 incoming synapses
dcn -> 173.08 incoming synapses
stellate -> 1024.19 incoming synapses
================================================================================
Generally, important statistics are: number of neurons, number of realised synapses (sPyNNaker will happily accept synapse source/target ids that are larger than the number of neurons), fan-in per population, maximum fan-in per neuron.
Important, but generally overlooked: there's a mismatch between e.g. weight values defined on the host and what they become on SpiNNaker (16-bit precision, weight scaling, etc.). I find it enormously useful to compare the values I think I put in with what they actually are on chip.
================================================================================
Average weight per projection
--------------------------------------------------------------------------------
goc_grc -> 0.00500011 uS c.f. -0.00500000 uS (100.00%)
bc_pc -> 0.00900269 uS c.f. -0.00900000 uS ( 99.97%)
aa_pc -> 0.07499695 uS c.f. 0.07500000 uS (100.00%)
pf_goc -> 0.00048828 uS c.f. 0.00040000 uS ( 81.92%)
pf_pc -> 0.00001526 uS c.f. 0.00002000 uS ( 76.29%)
================================================================================
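A hedged sketch of the kind of helper being requested, using only public PyNN accessors (Population.label, Population.size, Projection.get); this is illustrative, not an existing sPyNNaker API:

    def summary(populations, projections):
        print("=" * 80)
        print("Number of neurons in each population")
        print("-" * 80)
        total = 0
        for pop in populations:
            print("{:<12} -> {:>8} neurons".format(pop.label, pop.size))
            total += pop.size
        print("{:<12} -> {:>8} neurons".format("TOTAL", total))
        print("=" * 80)
        print("Number of realised synapses per projection:")
        print("-" * 80)
        for proj in projections:
            # get(..., format="list") returns one row per realised synapse
            n_synapses = len(proj.get(["weight"], format="list"))
            print("{:<12} -> {:>8} synapses".format(proj.label, n_synapses))
        print("=" * 80)

Called after the projections are built (and, for on-chip weights, after a run), this would cover the neuron and synapse counts above; the weight-comparison table would need the same loop reading weights back post-run.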
Due to some PACMAN objects using dicts, default dicts and sets instead of ordered sets, ordered dicts and ordered default dicts, it is possible to run the same script many times and get different allocations.
The refactor should be simple enough to do.
Even using the exact same physical board (Spin03) and the exact same script and config file, the partitioner sometimes swaps the order of placement.
Sure, I know that this should not matter, but had we not intended they be constant for debugging and testing?
Example:
sim.setup(timestep=1.0)
sim.set_number_of_neurons_per_core(sim.IF_curr_exp, 100)
input = sim.Population(1, sim.SpikeSourceArray(spike_times=[0]),
                       label="input")
pop_1 = sim.Population(200, sim.IF_curr_exp(), label="pop_1")
sim.Projection(input, pop_1, sim.AllToAllConnector(),
               synapse_type=sim.StaticSynapse(weight=5, delay=18))
sim.run(500)
Option A:
Processor 3: Vertex: 'input', pop size: 1
Slice on this core: 0:0 (1 atoms)
Model: SpikeSourceArrayVertex
Processor 4: Vertex: 'input_delayed', pop size: 1
Slice on this core: 0:0 (1 atoms)
Model: DelayExtensionVertex
Option B:
Processor 3: Vertex: 'input_delayed', pop size: 1
Slice on this core: 0:0 (1 atoms)
Model: DelayExtensionVertex
Processor 4: Vertex: 'input', pop size: 1
Slice on this core: 0:0 (1 atoms)
Model: SpikeSourceArrayVertex
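For illustration, the underlying nondeterminism is easy to reproduce outside the tools. With hash randomisation (the default in Python 3; Python 2 sets were similarly unordered), iteration order over a plain set of labels differs between processes, so any placement loop driven by one can differ too:

    # Run this twice as separate processes (or with different
    # PYTHONHASHSEED values): the printed order of the labels can change.
    vertices = {"input", "input_delayed", "pop_1"}
    for vertex in vertices:
        print(vertex)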
Might be useful to explicitly set up equality tests for this class so it's clear and easy to create lists of unique edges?
We should change SdramUsageReportPperChip
to SdramUsageReportPerChip
because it is a name that is visible to users (in timing logging). Requires a trivial matching change in FEC.
As we are still getting compression problems despite having only about 20 to 30 unique targets, work could be done to see if target-based key allocation is possible and helps.
The idea is to sacrifice the continuous/near-continuous key range of the machine vertices in an application vertex, and instead have the target be the main decider:
Allocate each machine vertex separately.
Reserve the same number of bits for the neurons on every vertex, typically 8.
(Sorry @alan-stokes, the Retina CAN NOT be supported this way!)
Then group the machine vertices by target and allocate keys so that vertices with the same target share the same power-of-two block, and no other vertices are in that power-of-two block.
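A hedged sketch of that grouping; all names are illustrative and the real allocator would work on actual vertex and routing-table objects:

    NEURON_BITS = 8  # per the proposal: same neuron bits on every vertex

    def allocate_by_target(vertices_by_target):
        # Give each group of machine vertices that shares a target its own
        # power-of-two block of key space, aligned so a single key/mask
        # pair covers exactly that group (plus padding) and nothing else.
        allocation = {}
        next_slot = 0
        for target, vertices in vertices_by_target.items():
            block = 1
            while block < len(vertices):
                block <<= 1
            next_slot = -(-next_slot // block) * block  # align up to block
            for i, vertex in enumerate(vertices):
                allocation[vertex] = (next_slot + i) << NEURON_BITS
            next_slot += block
        return allocation

    # e.g. three vertices sharing a target land in one aligned block of 4:
    print(allocate_by_target({"target_0_0": ["v1", "v2", "v3"]}))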
Validator complains because these fields end up being int64. I'm computing population size stuff with Numpy and using their 'int' type. The validation is fixed by parsing these two fields as native Python int.
PACMAN/pacman/utilities/json_utils.py
Lines 191 to 192 in 387fe3f
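A minimal illustration of the fix; the field names here are illustrative, since the issue points at the json_utils.py lines without quoting them:

    import json
    import numpy

    lo_atom = numpy.int64(0)    # what numpy-based size calculations produce
    hi_atom = numpy.int64(99)

    # json.dumps(numpy.int64(...)) raises TypeError, and schema validators
    # reject int64 where they expect int; cast at the boundary instead:
    print(json.dumps({"lo_atom": int(lo_atom), "hi_atom": int(hi_atom)}))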
Since our packages are all Python modules and Python modules have the strong convention of being all lower-case I suggest renaming repositories to match.
Needs fixing for Daniel, as well as for other systems using the workflow system for their own algorithms.
The current executor does not allow for a number of algorithms unknown to algorithm A to be executed before it (other than through the explicit ordering of the algorithms provided to the executor, which has been deemed unacceptable for this purpose). An example of this is in data loading: there may be any number of algorithms involved in the loading of data, with more being added depending on the front-end application, but the binary loader should not (and cannot) know about all these algorithms.
To avoid a need to rewrite the binary loader for each front-end, a "Smart" or "Composite" token should be supported. This token would ensure that all required algorithms that produce the token as output must be run before any algorithm that takes the token as input.
This might operate in one of the following ways: