
stanfordaha / garnet

Next generation CGRA generator

License: BSD 3-Clause "New" or "Revised" License

Python 53.23% Verilog 7.09% Makefile 0.71% SystemVerilog 6.25% Shell 5.76% Tcl 24.22% Awk 0.27% Perl 0.24% C 2.11% Batchfile 0.01% Dockerfile 0.02% Forth 0.01% Bluespec 0.09%

garnet's People

Contributors

akhileshvb, alexcarsello, ankita0805, bobcheng15, cdonovick, ctorng, gednyengs, hofstee, jack-melchert, jaiadi1, jake-ke, jjthomas, joyliu37, kalhankoul96, kavyasreedhar, kongty, kuree, leonardt, makaimann, mbstrange2, mcoduoza, norabarlow, pohantw, priyanka-raina, rdaly525, rsetaluri, standanley, steveri, weiya711, yuchen-mei


garnet's Issues

Get to original CGRA Parity

  • Wrap PE (Lenny)

  • Extract "core" part of PE (Raj/Alex) and Memory (Raj) so that we can instance config registers in python, and then pass their values to the cores

  • Generate IO's (wrapped verilog -- Alex)

  • Add reset, read_data and global stall signals

  • Generate PnR resources (Raj)

    • Generate graph of muxes, registers, and FUs
    • Construct bit stream generator which will take a placement + routing and generate the bits
  • CB
    • Wrapped genesis
    • Functional model
    • Python class based generator
    • Genesis wrapper test
    • Python generator test (Alex)
  • SB
    • Wrapped genesis
    • Functional model
    • Python class based generator
    • Genesis wrapper test
    • Python generator test (Alex)
  • PE Core
    • Wrapped genesis
    • Functional model (in the process of migrating previous pe models, Lenny)
    • Python class based generator (wrapped genesis wrapper, need to rip out config still)
    • Genesis wrapper test (in the process of migrating previous pe tests, Lenny)
    • Python generator test (Raj)
  • Memory Core
    • Wrapped genesis
    • Functional model (done for SRAM mode, need LB and FIFO mode, Lenny)
    • Python class based generator (wrapped genesis wrapper, need to rip out config still)
    • Genesis wrapper test (done for SRAM mode, need LB and FIFO mode, Lenny)
    • Python generator test (Raj)
  • Tile
    • Wrapped genesis
    • Functional model (Alex)
    • Python class based generator
    • Genesis wrapper test
    • Python generator test (Alex)
  • Column
    • Wrapped genesis
    • Functional model (Raj)
    • Python class based generator
    • Genesis wrapper test
    • Python generator test (Raj)
  • Interconnect
    • Wrapped genesis
    • Functional model (Raj)
    • Python class based generator
    • Genesis wrapper test
    • Python generator test (Raj)
  • Global Controller
    • Wrapped genesis
    • Functional model
    • Python class based generator
    • Genesis wrapper test
    • Python generator test (Alex)
  • Top
    • Wrapped genesis
    • Functional model (Raj, Alex)
    • Python class based generator
    • Genesis wrapper test
    • Python generator test (Raj, Alex)
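The "Generate PnR resources" item above calls for a graph of muxes, registers, and FUs plus a bitstream generator over placements and routes. A minimal, purely illustrative sketch of such a routing-resource graph (all class and node names here are hypothetical, not garnet's actual API):

```python
# Hypothetical routing-resource graph for PnR: nodes are muxes, registers,
# and FUs; edges are the wires between them. Names are illustrative only.
from collections import defaultdict


class RoutingGraph:
    def __init__(self):
        self.edges = defaultdict(set)  # node name -> set of downstream node names

    def add_wire(self, src, dst):
        self.edges[src].add(dst)

    def fanout(self, node):
        return sorted(self.edges[node])


graph = RoutingGraph()
graph.add_wire("sb_north_track0", "cb_data0_mux")
graph.add_wire("cb_data0_mux", "pe_fu_in0")
```

A bitstream generator would then walk a chosen placement + routing over this graph and emit the corresponding mux-select bits.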

Documentation tooling and format

We should decide on a tool to generate HTML documentation (ideally something that integrates with github pages, so we can serve the documentation from this repo, or with readthedocs, which is popular for many python packages).

Options I am familiar with:

  • markdown files with pandoc
  • markdown files with mkdocs
  • rst/markdown files with sphinx

irun.log doesn't get generated in all scenarios

When running regression_verilog_sim.py, irun.log is not generated in all cases. For example, when different reset values are experimented with in the verilog file, the irun.log file is not always generated. Looking at common/run_verilog_sim.py, it seems that the log is deleted if the test passes and kept when the test fails. It would be good to keep a copy of the log file in all scenarios.
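One way to keep the log for every run, sketched as a helper that archives irun.log under a unique name before the existing pass/fail cleanup deletes it (paths and function names are illustrative, not the script's actual code):

```python
# Sketch: copy irun.log to an archive directory with a unique timestamped
# name, leaving the original in place for the existing cleanup logic.
import shutil
import time
from pathlib import Path


def archive_irun_log(log_path="irun.log", archive_dir="irun_logs"):
    log = Path(log_path)
    if not log.exists():
        return None
    dest_dir = Path(archive_dir)
    dest_dir.mkdir(exist_ok=True)
    dest = dest_dir / f"irun_{int(time.time() * 1000)}.log"
    shutil.copy(log, dest)  # copy, not move, so pass/fail handling is untouched
    return dest
```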

pe

We should merge our PE DSL into the system.

Help to design the low-level, LLVM-like universal HDL language

The FPGA world suffers a lot from fragmentation: some tools produce Verilog, some VHDL, and some only subsets of them. Creating a low-level, LLVM-like alternative would help everyone: HDL implementations would only need to generate this low-level HDL, and routers/synthesizers would accept it. Look at LLVM or WebAssembly to see how many languages and targets both support now. With more open source tools for FPGAs, this is more feasible than ever. Most people suggest adapting FIRRTL for this. Please check the discussion and provide feedback if you have any. There is a good paper on FIRRTL's design and its reusability across different tools and frameworks.

See f4pga/ideas#19

Simplify regression tests

Regression tests should be very simple and just contain logic for setting up test vectors. Everything else should be abstracted away to common.

Also, can we use the same framework for testing a functional model? The way I see it, the flow is:

  1. create a functional model
  2. test the functional model itself against hand crafted inputs
  3. write some RTL-level implementations of circuit (genesis, magma, etc).
  4. Test RTL:
    (a) test each RTL against hand crafted inputs
    (b) test each RTL against each other
    (c) test each RTL against functional model

We basically need 2 fundamental frameworks:

  1. An easy way to specify test vectors (inputs and outputs). The peek/poke interface is nice for this.
  2. An easy way to specify just inputs (with outputs being derived from a functional model). This can be solved with the constraint/distribution-based input generation we've talked about. This is a very common problem in SW testing, so good solutions probably already exist.

I think our testing infra should basically present a way to do these 2 things abstractly and orthogonally from a "backend" (e.g. a specific simulator). And then we can plug and play simulators as backends.
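The two frameworks could look like this abstractly, with Backend standing in for any pluggable simulator (all names are illustrative, not this repo's actual infrastructure):

```python
# Sketch of the two fundamental test frameworks, backend-agnostic.
def run_vectors(backend, vectors):
    """Framework 1: explicit (inputs, expected_outputs) pairs, peek/poke style."""
    for inputs, expected in vectors:
        outputs = backend.evaluate(inputs)
        assert outputs == expected, f"{inputs}: got {outputs}, expected {expected}"


def run_against_model(backend, model, input_gen, n=100):
    """Framework 2: inputs only; expected outputs come from the functional model."""
    for _ in range(n):
        inputs = input_gen()
        assert backend.evaluate(inputs) == model.evaluate(inputs)
```

Plugging in a different simulator then just means supplying a different `backend` object with an `evaluate` method (or equivalent lowering).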

Virtual tapeout with garnet generated chips

Steps leading to a complete physical design generator

  • Creating pipe cleaners
    • Existing PE Tile
    • Existing Mem Tile
  • Setting up libraries and running pipe cleaners for both synthesis and place and route
    • FreePDK 45nm
    • FreePDK 16nm
    • TSMC 28nm (for March tapeout)
    • GF 14nm (for October tapeout)
  • Pass to move configuration registers into tile elements
  • Pass to create through connections for global signals
  • Pass to create a wrapper around CGRA to include IO pads
  • Pass to generate SRAMs using memory compiler and replace functional models with generated RAMs
  • Synthesis and static timing analysis
    • PE tile
    • Memory tile
    • Top level
  • Switching activity file (if you want to do power aware synthesis)
  • Extract various pitches from the technology files that are needed for place and route, create unified format for this information for different technologies
  • Pin placement
  • Pad ring creation
  • Floor planning the hard macros
    • SRAMs inside memory tiles
    • Analog blocks
    • Tiles themselves
  • Create power grid
  • Place standard cells
  • Scan insertion, clock tree synthesis, routing
  • Iterate until timing closure and no DRC violations
  • Generate GDS
  • Spice netlist generation. Naming uniquification.
  • DRC, LVS with Calibre
  • Open design in Calibre DRV/Virtuoso to fix DRC errors
  • Timing checks with Primetime
  • ERC
  • Parasitics extraction and annotation, followed by functional simulation of the post-place-and-route netlist
  • Power estimation on place and routed netlist
  • Antenna checks
  • Special stuff for clock and power domains in the above steps (@ankita0805 shall give inputs)

Support wrapping multiple (different) generate calls to the same genesis module

Say we have a generator defined as a wrapper around genesis called define_foo_wrapper().

Then:

def define_bar(...):
    class _Bar(m.Circuit):
        IO = [...]

        @classmethod
        def definition(io):
            ...
            foo_inst0 = define_foo_wrapper(x)()
            foo_inst1 = define_foo_wrapper(y)()
            ...

will not produce correct verilog. Basically, only one foo module will be imported. I think the underlying issue is that the circuits returned by define_foo_wrapper(x) and define_foo_wrapper(y) have the same name.

One solution is to change the name of the circuit returned from DefineFromVerilog() (or pass a name into that function).

Maybe a good time to look into linking properly @leonardt @rdaly525
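A plain-Python sketch of the proposed fix, deriving a unique circuit name from the generator parameters so two calls with different parameters cannot collide (define_foo_wrapper and the naming scheme here are stand-ins, not magma's actual API):

```python
# Sketch: memoize wrapped circuits by a parameter-derived name, so each
# distinct parameterization gets a distinct (and cached) circuit class.
_defined = {}


def define_foo_wrapper(param):
    name = f"foo_param_{param}"  # unique per parameter set
    if name not in _defined:
        # stand-in for DefineFromVerilog(...) called with an explicit name
        _defined[name] = type(name, (), {"param": param})
    return _defined[name]
```

With this scheme, define_foo_wrapper(x) and define_foo_wrapper(y) produce differently named modules, and repeated calls with the same parameters return the same circuit rather than redefining it.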

tests

The file test_cb/test_regressions.py should be generalized to take any module (e.g. cb) and generate the tests. The boilerplate should be moved into the test infrastructure.

We should strive to build a clean test infrastructure so that it is easy for developers to run common tests on their modules.

Cannot run tapeout branch and partial solutions

I'm trying to figure out how to run garnet. It seems like tapeout branch, rather than master branch, has the top module that I can use to generate PnR info.

Here is what I tried:
$ pytest .

ImportError while importing test module '/home/keyi/garnet/test_top/test_top_magma.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test_top/test_top_magma.py:2: in <module>
    from top.top_magma import CGRA
top/top_magma.py:3: in <module>
    import generator.generator as generator
generator/generator.py:2: in <module>
    from ordered_set import OrderedSet
E   ModuleNotFoundError: No module named 'ordered_set'

It seems like ordered-set was not installed (not in the requirements.txt). So I did pip install ordered-set and it seems to pass this check.

Now try pytest . again, which gives me

ImportError while importing test module '/home/keyi/garnet/test_common/test_config_register.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
test_common/test_config_register.py:4: in <module>
    from magma.bit_vector import BitVector
E   ModuleNotFoundError: No module named 'magma.bit_vector'

Well after digging around it seems like just a naming error. So I did the following changes:

$ git diff test_common/test_config_register.py
diff --git a/test_common/test_config_register.py b/test_common/test_config_register.py
index 8e79443..b9bb64e 100644
--- a/test_common/test_config_register.py
+++ b/test_common/test_config_register.py
@@ -1,7 +1,7 @@
 import os
 import filecmp
 import magma as m
-from magma.bit_vector import BitVector
+from bit_vector import BitVector
 from magma.simulator.coreir_simulator import CoreIRSimulator
 from common.config_register import define_config_register
 from common.util import check_files_equal

That seems to let pytest run. However, somehow test_simple_pe/build/pe.json got modified during the process, which failed the test.

parse error - unexpected end of input; expected '}'


ERROR: No top in file :test_simple_pe/build/pe.json
$ git diff test_simple_pe/build/pe.json
diff --git a/test_simple_pe/build/pe.json b/test_simple_pe/build/pe.json
index 1c0c7db..fad51ba 100644
--- a/test_simple_pe/build/pe.json
+++ b/test_simple_pe/build/pe.json
@@ -144,4 +144,3 @@
     }
   }
 }
-}
\ No newline at end of file

I don't know garnet well enough to figure out which test modifies that file. I will set up an inode watcher to monitor filesystem changes and see which part goes wrong.

Reduce duplication of generator params

The generator params are repeated in many places: the magma generator fn, the functional model generator fn, and multiple times in the wrapper file. Let's try to consolidate these without changing the interfaces.
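One way to consolidate, sketched with a dataclass as the single source of truth that the magma generator, functional model generator, and wrapper could all consume (field names are examples taken from the CB, not garnet's actual parameter list):

```python
# Sketch: one frozen dataclass holds the generator parameters; every
# generator variant unpacks the same definition instead of repeating it.
from dataclasses import dataclass, asdict


@dataclass(frozen=True)
class CBParams:
    width: int = 16
    num_tracks: int = 10
    has_constant: bool = True
    default_value: int = 7


def define_cb(params: CBParams):
    # stand-in for a generator fn; each variant consumes the same params
    return asdict(params)
```

Because the dataclass is frozen and hashable, it can also double as a memoization key for generated circuits.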

Using IP

JTAG currently uses Synopsys IP for the TAP state machine. I can't put this IP in the repo, so any test that uses JTAG will fail on Travis (it will basically fail when it runs on anything other than kiwi). Previously, our workaround for this was that our Travis tests didn't use JTAG or the global controller at all. This obviously was not a good solution, but I'm not sure what is the best way around this. Any ideas?

Figure out how to run system verilog tests

We need some way to run system verilog tests. Neither verilator nor iverilog has enough support for our use cases. I think the long-term solution will be to run against a machine with a license or use a cloud license.

How to get PnR resource out

As someone who has never touched garnet before, the entire design seems like a mystery to me. I understand all the high-level concepts, but I'm very clueless when it comes to writing code to get PnR resources out of garnet. Here are the things I tried:

Board layout:

for x, column in enumerate(self.interconnect.columns):  
    for y, tile in enumerate(column.tiles):
        if isinstance(tile.core, PECore):                               
            clb_type = 'p'                                    
        elif isinstance(tile.core, MemCore):                            
            clb_type = 'm'
        else: 
            raise Exception("Unknown tile type " + tile.core.name()) 
        board_layout[y + clb_margin][x + clb_margin] = clb_type

So far so good, although I needed to hack a little to make the result on par with the old chip layout, which has a PE margin and IO tiles.

Now comes the questions:

  1. How am I going to produce the bitstream?
  2. Should I use the remapped layout, then subtract the margin to reverse the mapping while producing the bitstream?
  3. Where to set the bitstream?
  4. How am I going to specify which IO pad to use in the io.json file sent to TBG? By making some assumptions or parsing verilog file?

Placement resources are not that bad since they're very straightforward, although some hacking and assumptions are made. The routing is what got me.

Here is a list of things I tried.

for port1, port2 in pe_tile.wires:                                      
     print(port1.qualified_name(), "-", port2.qualified_name()) 

And that gives me:

north - north
west - west
south - south
east - east
layer16 - I
layer16 - I
layer16 - I
layer16 - I
layer16 - I
layer16 - I
layer1 - I
layer1 - I
layer1 - I
layer1 - I
layer1 - I
layer1 - I
O - data0
O - data1
O - data2
O - bit0
O - bit1
O - bit2
... (more) ...

The good thing is I at least see the familiar data0 stuff, but the bad thing is I have no idea what O is. I'm assuming it's an output? But if that's the case, why is it connected to the output? Here is a list of questions given the output:

  1. What's I and what's O?
  2. Is using wires a good way to extract routing information?
  3. If not, what's the best way to do it without parsing verilog file?

Since I'm tasked with integrating with garnet, I need to dig further. So I did some other tests:

for cb in pe_tile.cbs:                                                  
     print("width:", cb.width, end=" ")                                  
     for conn1, conn2 in cb.wires:   
          print(conn1.qualified_name() + "-" + conn2.qualified_name(), end="  ")
     print() 

And this is what I got:

width: 16 O-I  O-read_config_data  I-I  O-S  O-O  config_addr-config_addr  config_data-config_data  config_en-write
width: 16 O-I  O-read_config_data  I-I  O-S  O-O  config_addr-config_addr  config_data-config_data  config_en-write
width: 16 O-I  O-read_config_data  I-I  O-S  O-O  config_addr-config_addr  config_data-config_data  config_en-write
width: 1 O-I  O-read_config_data  I-I  O-S  O-O  config_addr-config_addr  config_data-config_data  config_en-write
width: 1 O-I  O-read_config_data  I-I  O-S  O-O  config_addr-config_addr  config_data-config_data  config_en-write
width: 1 O-I  O-read_config_data  I-I  O-S  O-O  config_addr-config_addr  config_data-config_data  config_en-write 

I understand the width, but what's S?

At this point I'm super confused. I would really appreciate it if any of you can answer my questions. I think it would help me to see some example code that produces the cgra_info file. It doesn't have to be exactly the same, just some reference code I can use to save time messing with the garnet source code.

Improve test/coverage discovery

In order for coverage to work properly, it seems we have to explicitly list each folder of interest, see https://github.com/rsetaluri/magma_cgra/blob/b508babe71d5b2f3b741525abf0d0fecb8f7296a/.travis.yml#L33-L49

It's quite verbose and manual (new directories have to be added). We should consider ways to improve this situation.

Idea from Raj:

Any way we can change the paradigm so people who create new directories don't have to modify this file? Perhaps by default all dir's get covered, and we manually exclude ones like build, experimental, etc.
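Raj's idea could be sketched as a small discovery helper: cover every top-level directory by default and exclude a known blocklist, so new directories need no .travis.yml change (directory names below are examples):

```python
# Sketch: auto-discover coverage targets instead of hand-listing them.
# Everything is covered unless it is in the exclusion set or hidden.
EXCLUDED = {"build", "experimental", "__pycache__"}


def coverage_dirs(entries):
    """Given top-level directory names, return the ones to cover."""
    return sorted(d for d in entries if d not in EXCLUDED and not d.startswith("."))
```

The CI script could feed this os.listdir(".") output and pass the result to the coverage tool, so the exclusion list becomes the only thing anyone ever edits.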

Add a guide to testing infrastructure

Cover basic pytest patterns (file/function test discovery) and fault poke/expect tester. Refer to pytest and fault documentation for further reading.

Read in all files when wrapping Genesis

Imagine we have modules top.vp and foo.vp, and top.vp generates and instances foo. Currently, we only call DefineFromVerilog on the top module, but of course we also need the source for foo, especially if we, say, overwrite it later with a new set of parameters. In short, the issue is reading in all generated files from a Genesis pass, not just the top-level module.

Unfortunately we don't know the actual tree structure of instancing, but we do know the root of the tree. We can just add all generated verilog files to top. This shouldn't change the top-level interface, and will ensure we have all the source we need.
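"Add all generated verilog files to top" could be as simple as globbing the Genesis output directory and handing the whole list to the verilog importer. genesis_verif is the output directory the irun commands elsewhere in these issues suggest, but treat the path as an assumption:

```python
# Sketch: collect every verilog file a Genesis pass emitted, so the importer
# sees foo's source too, not just the top module's.
from pathlib import Path


def collect_genesis_sources(output_dir="genesis_verif"):
    return sorted(str(p) for p in Path(output_dir).glob("*.v"))
```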

Problems Parsing SystemVerilog

Should add support for the following SystemVerilog features:

  • Signals of type Logic

  • SystemVerilog interfaces in port declaration

  • Pre/post-assignment operators (i.e. i++)

  • "Enhanced for loop declaration"/Declaring loop variable inside for loop declaration.

Add hardware coverage metric

Right now, our coverage metrics are only for the software lines that run. We also want to integrate a flow which reports how much of the hardware is covered. This involves 2 things:

  1. Integrating with open source/industry tools. From Priyanka:

I found out what coverage analysis tools Nvidia uses.
For HLS C++, they use CTC++.
For Verilog, they use VCS's internal tool called urg. It can report line, toggle, condition, branch, fsm coverage. VCS user guide points to another document called Coverage Technology User Guide that we can look at to find out more.

Such tools report coverage metrics given a set of tests and a set of verilog files.

  2. Figuring out a metric for generator coverage. Given a generated verilog file, we can use the tools above to figure out its coverage. But how do we also report coverage of the space of available generator params?

Repo build issues

From @kong0329:

Hi Raj,
I have questions about magma_cgra.

  1. When I run python cb/cb_wrapper_main.py test_cb/cb.vp command, an error occurs.
  File "cb/cb_wrapper_main.py", line 6, in <module>
    from cb.cb_wrapper import define_cb_wrapper
ModuleNotFoundError: No module named 'cb.cb_wrapper'; 'cb' is not a package

I suspected a clash between the package name (cb) and the module name (cb.py), so I removed cb.py and tried again.
Now I get this error.

  File "cb/cb_wrapper_main.py", line 6, in <module>
    from cb.cb_wrapper import define_cb_wrapper
ModuleNotFoundError: No module named 'cb'

Could you let me know how to resolve it?

  2. This is to make sure what I understood about magma_cgra is correct.
    <Let's say we design new module X>
    a. Implement X.vp (Genesis generator file)
    b. Implement wrapper. X_wrapper.py and X_wrapper_main.py
    c. Implement magma file. X.py
    d. Implement functional file. X_functional.py
    e. Implement testbench. X_tb.v
    f1. test_wrapper - test if wrapper works.
    f2. test_functional - test if functional model is correct
    f3. test_regression - test if generator magma and genesis behave same. test both behave as functional model.

Is this correct?
In the future, can we just skip a, b, f1 if we only stick to magma, not genesis?

  3. Lastly, the point of magma is to raise the abstraction level to python (magma), right?
    If so, how is the testbench file X_tb.v generated automatically according to the parameters?
    If we still have to implement the testbench file in verilog, it would be inefficient.

Thanks for building a great system, Raj.

Proposal for testing paradigm

I've been thinking about the whole model/driver/monitor, transaction-level modeling stuff. The high-level idea is that a model has high-level functions, a driver lowers these functions to circuit inputs, and a monitor lifts circuit outputs to high-level outputs. An important thing to note is that a model should be 1-to-1 with a functional specification, whereas there can be many circuits that adhere to that spec (i.e. there exists a driver-monitor pair that makes the circuit and spec compatible). For example, we could simply rename the ports of a circuit without really changing the meaning of the circuit.

It seems like when designing a module there should be one "model" and then a driver-monitor pair for each circuit which should adhere to the model, and the circuit designer is responsible for writing the driver and monitor. In particular, the driver and monitor should have the same set of functions, with the driver issuing "pokes" based on function inputs, and the monitor issuing "expects" based on model outputs.

Here is a prototype of this for a simple counter circuit. I think the general work flow is nice, but there's a lot of repeated stuff (primarily the class interfaces/function names -- though the bodies of each function are disjoint). Also, it is not easy to do things that don't fit into the simple driver-monitor framework. For example, you may want to do edge-case checks, contra-positive checks, etc. That doesn't fit easily here.

from bit_vector import BitVector
import fault
import magma
import mantle


class CounterModel:
    def __init__(self, width):
        self.width = width

        self.O = 0

    def step(self):
        self.O = (self.O + 1) % (1 << self.width)  # wrap around like the hardware counter


class CounterDriver:
    def __init__(self, width, tester):
        self.width = width
        self.tester = tester
        self.circuit = self.tester.circuit

    def step(self):
        self.tester.step(2)


class CounterMonitor:
    def __init__(self, width, tester, model):
        self.width = width
        self.tester = tester
        self.circuit = self.tester.circuit
        self.model = model

    def step(self):
        self.tester.expect(self.circuit.O, self.model.O)


class CounterTester:
    def __init__(self, model, driver, monitor):
        assert monitor.tester is driver.tester
        assert monitor.model is model

        self.model = model
        self.driver = driver
        self.monitor = monitor

    def step(self):
        self.model.step()
        self.driver.step()
        self.monitor.step()


if __name__ == "__main__":
    width = 4

    model = CounterModel(width)
    circuit = mantle.DefineCounter(width)
    tester = fault.Tester(circuit, circuit.CLK)
    driver = CounterDriver(width, tester)
    monitor = CounterMonitor(width, tester, model)

    full_tester = CounterTester(model, driver, monitor)
    for i in range(10):
        full_tester.step()
    tester.compile_and_run(target="verilator", directory="/tmp/", magma_output="coreir-verilog")

Simple CGRA issues

  • Add config enable logic, ANDed with the R/W signal
  • Switch to using MuxWithDefault rather than Mux for config_read_data logic in simple_cb/sb
  • Figure out the "cannot collect test class 'Tester' because it has a __init__ constructor" warning in pytest
  • Add IO pad frame to top

ncsim error running verilog test on kiwi

@alexcarsello have you seen something like this before? Any insight?

test_cb/test_cb_regression_verilog_sim.py:40: AssertionError
--------------------------------------------------------- Captured stdout call ---------------------------------------------------------
constant_bit_count = 16
default val bits = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
reset val bits   = [1, 0, 0, 1]
reset bit vec as int = 121
# of tracks = 16
Numargs=1
Running Runningvpasses
Numargs=1
In Run Generators
Done running generators
Numargs=1
Numargs=1
Numargs=1
Numargs=1
Numargs=1
Numargs=1
Numargs=2
Numargs=1
Numargs=1
Running vpasses

Modified?: Yes
Running genesis cmd 'Genesis2.pl -parse -generate -top cb -input cb/genesis/cb.vp -parameter cb.width='16' -parameter cb.num_tracks='10'
 -parameter cb.feedthrough_outputs='1111101111' -parameter cb.has_constant='1' -parameter cb.default_value='7''
Running irun cmd: irun -sv -top top -timescale 1ns/1ps -l irun.log -access +rwc -notimingchecks -input common/irun/cmd.tcl test_cb/conne
ct_box_width_width_16_num_tracks_10_has_constant1_default_value7_feedthrough_outputs_1111101111_tb.v genesis_verif/cb.v connect_box_widt
h_width_16_num_tracks_10_has_constant1_default_value7_feedthrough_outputs_1111101111.v
irun(64): 15.20-s022: (c) Copyright 1995-2017 Cadence Design Systems, Inc.
file: test_cb/connect_box_width_width_16_num_tracks_10_has_constant1_default_value7_feedthrough_outputs_1111101111_tb.v
        module worklib.top:v
                errors: 0, warnings: 0
file: genesis_verif/cb.v
        module worklib.cb:v
                errors: 0, warnings: 0
file: connect_box_width_width_16_num_tracks_10_has_constant1_default_value7_feedthrough_outputs_1111101111.v
        module worklib.corebit_and:v
                errors: 0, warnings: 0
        module worklib.coreir_const:v
                errors: 0, warnings: 0
        module worklib.coreir_mux:v
                errors: 0, warnings: 0
        module worklib.muxn_U12:v
                errors: 0, warnings: 0
        module worklib.coreir_reg_arst:v
                errors: 0, warnings: 0
        module worklib.coreir_slice:v
                errors: 0, warnings: 0
        module worklib.muxn_U1:v
                errors: 0, warnings: 0
        module worklib.coreir_eq:v
                errors: 0, warnings: 0
        module worklib.muxn_U10:v
                errors: 0, warnings: 0
        module worklib.muxn_U7:v
                errors: 0, warnings: 0
        module worklib.muxn_U0:v
                errors: 0, warnings: 0
        module worklib.Mux16x16:v
                errors: 0, warnings: 0
        module worklib.Mux2x32:v
                errors: 0, warnings: 0
        module worklib.Register__has_ce_True__has_reset_False__has_async_reset__True__type_Bits__n_32:v
                errors: 0, warnings: 0
        module worklib.connect_box_width_width_16_num_tracks_10_has_constant1_default_value7_feedthrough_outputs_1111101111:v
                errors: 0, warnings: 0
ncelab: *E,MTOMDU: More than one unit matches 'top':
        module worklib.top:sv (VST)
        module worklib.top:v (VST).
irun: *E,ELBERR: Error during elaboration (status 1), exiting.

The issue seems to be that it's finding more than one top: sv and v (a verilog and a system verilog one?)

Module naming conventions

We should resolve this ASAP as moving around directories/renaming files is annoying and will get more annoying.

From #14 (comment), Pat proposes:

I propose we use uppercase for acronyms. So connect_box would become CB (not Cb). Similarly, FPGA should not be spelled Fpga.

Basically, I think this is the standard Python naming conventions for modules, which is CamelCase.

BitVector error in mem_functional model

test_sram_basic is failing on kiwi with the following output:

self.memory = {BitVector(i, address_width): BitVector(0, data_width) for i in range(self.data_depth)}
E   TypeError: unhashable type: 'BitVector'

memory_core/memory_core.py:22: TypeError
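A workaround sketch: key the functional model's memory by plain ints and convert at the boundary, so no BitVector ever needs to be hashable (this is a stand-alone illustration, not memory_core.py's actual code):

```python
# Sketch of an SRAM functional model keyed by int addresses. Callers can pass
# BitVector-like values; int() at the boundary avoids hashing them.
class SramModel:
    def __init__(self, address_width, data_width):
        self.data_width = data_width
        self.memory = {addr: 0 for addr in range(1 << address_width)}

    def write(self, addr, value):
        # mask to the data width, as the hardware would
        self.memory[int(addr)] = int(value) & ((1 << self.data_width) - 1)

    def read(self, addr):
        return self.memory[int(addr)]
```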

Verilog TB module name 'top' hardcoded in run_verilog_sim.py

It seems that the TB module name has to be top, since it is hardcoded in run_verilog_sim.py. It would be good to give the user the option to specify the TB module name. That will be easier when multiple sub-blocks are merged at the top level and we still want to run tests at the sub-block level; the sub-block tests can then have module names like sb_tb, cb_tb, etc. instead of all having to be top.
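A sketch of the requested option: thread the TB module name through when building the irun command, defaulting to top for backward compatibility (the flags mirror the irun invocation quoted elsewhere in these issues, but this is illustrative, not run_verilog_sim.py's actual code):

```python
# Sketch: build the irun command with a configurable top/TB module name
# instead of hardcoding "top".
def build_irun_cmd(sources, top_module="top"):
    return (["irun", "-sv", "-top", top_module, "-timescale", "1ns/1ps",
             "-l", "irun.log", "-access", "+rwc", "-notimingchecks"]
            + list(sources))
```

Callers running a sub-block test would then pass, e.g., top_module="sb_tb".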

Migrate to pycodestyle

This was first proposed in #140; see it for more discussion. Summary: the pep8 checker is out of date; pycodestyle is the (renamed) maintained variant.

I'm okay with this migration, the plan is to do it in a separate PR that only contains style related changes. Please speak up now if you have any opposition.

Note: revert the style-related commit from Keyi's PR, 011dd44.

Get sram stub semantics right

Question about bypassing write values when ren & wen == 1: the google doc spec says there is bypass logic, but the sram stub disagrees. We need to resolve this before completing the functional model.

Module prototype

We should make each module as clean as possible.

For example, in cb

How about

cb.py -functional model
cb_magma - magma version
cb_genesis2 - genesis2 / verilog wrapper version

Maybe we can merge the main program for the genesis2 version into cb_genesis2?

And, shouldn't cb.vp be in this directory?

Shared INCA_libs between separate testbenches causing pytest to fail

When we run pytest, we are running multiple ncsim simulations on separate modules (global controller, switch box, connection box, etc.). However, since we're running all these simulations from the same location, only one INCA_libs directory is created. This is causing tests to error out before they begin because ncsim is detecting multiple modules named top in INCA_libs.

Here's the error:
ncelab: *E,MTOMDU: More than one unit matches 'top': module worklib.top:v (VST) module worklib.top:sv (VST). irun: *E,ELBERR: Error during elaboration (status 1), exiting.

One way to deal with this would be to always delete INCA_libs before running irun.
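The proposed fix could be as small as this helper, run before each irun invocation so stale compiled units from a previous testbench cannot collide with the current one (the directory name comes from the issue; the function is illustrative):

```python
# Sketch: wipe the shared INCA_libs directory before running irun.
import shutil
from pathlib import Path


def clean_inca_libs(path="INCA_libs"):
    libs = Path(path)
    if libs.is_dir():
        shutil.rmtree(libs)
```

An alternative would be to run each simulation in its own working directory, which also keeps the generated logs separate.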

ConfigurableModel and ResettableModel

These should match the testers ConfigurationTester and ResetTester. To do this, they must implement a reset method. I think a ConfigurableModel can provide a reset method that resets the configuration, so that a model that subclasses from it can reuse that logic, e.g.

class MyModel(ConfigurableModel):
    def reset(self):
        super().reset()  # reset the configuration

ResettableModel can just define reset as an abstractmethod.
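The proposed hierarchy could be sketched like this, assuming the class names from the issue; the config representation is an illustrative stand-in:

```python
# Sketch: ResettableModel declares reset() abstractly; ConfigurableModel
# provides a default reset that clears the configuration, which subclasses
# extend via super().reset().
from abc import ABC, abstractmethod


class ResettableModel(ABC):
    @abstractmethod
    def reset(self):
        ...


class ConfigurableModel(ResettableModel):
    def __init__(self):
        self.config = {}

    def reset(self):
        self.config.clear()  # reset the configuration


class MyModel(ConfigurableModel):
    def __init__(self):
        super().__init__()
        self.state = 0

    def reset(self):
        super().reset()  # reset the configuration
        self.state = 0   # then reset model-specific state
```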
