
amaranth-soc's Introduction

System on Chip toolkit for Amaranth HDL

TODO

License

Amaranth is released under the very permissive two-clause BSD license. Under the terms of this license, you are authorized to use Amaranth for closed-source proprietary designs.

See LICENSE.txt file for full copyright and license info.

amaranth-soc's People

Contributors

alanvgreen, antonblanchard, emilazy, fatsie, galibert, jfng, rroohhh, tpwrules, wanda-phi, whitequark


amaranth-soc's Issues

`MemoryMap` doesn't uphold promise to iterate "in ascending order of [...] address"

Many methods on MemoryMap claim to iterate objects "in ascending order of their address" but this appears to not be the case in the current code. The code as it stands appears to iterate in order of object addition, as this is how Python dicts natively iterate.

The original documentation was written five years ago, so it's unclear whether something changed or this was never implemented. Iterating in address order is a useful property and would help the determinism of downstream code, such as the SoC bus decoders, so the class should be modified to do this properly.
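
A possible downstream workaround until MemoryMap itself guarantees the order is to sort explicitly by start address. A minimal sketch, assuming the current API where all_resources() yields ResourceInfo objects carrying start/end/name attributes (older versions returned tuples and would need a different key):

from amaranth_soc.memory import MemoryMap

def resources_by_address(memory_map):
    # Order resources explicitly by start address instead of relying on the
    # insertion order of the underlying dict.
    return sorted(memory_map.all_resources(), key=lambda info: info.start)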

Resources on dense memory windows must be sized in multiples of "parent" data_width?

Apparently, resources in a dense memory window cannot be smaller than
the data width of the parent memory map. This can be demonstrated in
test_memory.py by making the resource in the dense window smaller
than the ratio of the window (2 < 4):

# from this, which works
self.win3.add_resource(self.res6, name="name6", size=16)
# to this, which raises AssertionError in MemoryMap._translate(), via MemoryMap.all_resources() later on
self.win3.add_resource(self.res6, name="name6", size=2)

Here is another test case I created:

import pytest

from amaranth_soc.memory import MemoryMap

def test_dense():
    regs = ("reg0", "reg1", "reg2", "reg3")
    window = MemoryMap(addr_width=2, data_width=8)
    for reg in regs:
        window.add_resource(reg, name=reg, size=1)
    memory_map = MemoryMap(addr_width=1, data_width=16)
    (start, end, ratio) = memory_map.add_window(window, sparse=False)
    assert start == 0
    assert end == 2
    assert ratio == 2
    assert memory_map.decode_address(0) == regs[0] # unexpected! (would expect regs[0], regs[1])
    assert memory_map.decode_address(1) == regs[0] # completely unexpected!
    with pytest.raises(AssertionError): # unexpected!
        list(memory_map.all_resources())
    for reg in regs:
        with pytest.raises(AssertionError): # unexpected!
            res = memory_map.find_resource(reg)

I would expect that it is possible to address multiple units with a
single address in this case. Am I making a logical mistake in this assumption?

If not, is this just a problem with the implementation
(e.g. ResourceInfo not being able to represent "fractional"
addresses)?

In either case, I think this should be caught in add_window() already (or at least reported with an explanatory exception message).
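
For illustration, a sketch of the kind of up-front validation add_window() could perform for sparse=False windows. This is not the current amaranth-soc implementation, and it assumes resources() yields (resource, name, (start, end)) tuples as in the csr.Builder repro elsewhere in these issues; the exact shape varies between versions:

def check_dense_window(parent_data_width, window):
    # Reject resources that cannot be mapped densely because their size is not a
    # multiple of the parent/window data width ratio.
    ratio = parent_data_width // window.data_width
    for resource, name, (start, end) in window.resources():
        size = end - start
        if size % ratio:
            raise ValueError(
                f"Resource {name!r} has size {size}, which is not a multiple of the "
                f"dense window ratio {ratio} and cannot be mapped densely")

With the test case above, check_dense_window(16, window) would reject the size-1 resources instead of deferring the failure to MemoryMap._translate().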

Control/status registers

Issue by whitequark
Tuesday Sep 10, 2019 at 09:50 GMT
Originally opened as m-labs/nmigen-soc#1


Let's collect requirements for CSRs here.

Overview of oMigen design:

  • Two kinds of registers: CSRStatus (R/O) and CSRStorage (R/W);
  • Located on the dedicated CSR bus;
  • Does not support atomic reads or writes;
  • Includes ad-hoc support for generating C and Rust code, and dumping to CSV;
  • Does not have any support for generating documentation;
  • AutoCSR collects CSRs from submodules via Python introspection;
  • CSRConstant allows specifying limited supplementary data in generated code.

According to @sbourdeauducq, the primary semantic problem with oMigen CSR design is that it lacks atomicity, and fixing that would involve ditching the CSR bus and instead using Wishbone directly.

According to @mithro and @xobs, a significant problem with oMigen CSR design is that it does not allow generating documentation.

According to @whitequark, a problem with oMigen CSR design is that the code generation is very tightly coupled to MiSoC internals, and is quite hard to maintain.

Peripheral API design: exposing bus interfaces

Peripherals are currently a missing building block in nmigen-soc.

They would wrap cores and expose them through a CSR interface (and also interrupts, but handling those could be the subject of a separate issue).
For example, an AsyncSerialPeripheral wrapper in nmigen-soc would provide access to an AsyncSerial core in nmigen-stdio. Baudrate, RX/TX data, strobes etc. would be accessed through CSRs.

Integration would be straightforward for peripherals that provide nothing more than CSRs:

  • CSRs are gathered behind a csr.Multiplexer, whose bus interface is exposed by the peripheral
  • all peripheral interfaces are gathered behind a single csr.Decoder
  • the csr.Decoder bus interface is bridged to the SoC interconnect
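
A minimal sketch of that CSR-only flow, using the nmigen-soc primitives that appear elsewhere in these issues (csr.Element, csr.Multiplexer, csr.Decoder) plus WishboneCSRBridge; constructor signatures have shifted between nmigen-soc and today's amaranth-soc, so the details here are illustrative rather than exact:

from nmigen_soc import csr
from nmigen_soc.csr.wishbone import WishboneCSRBridge

class ExamplePeripheral:
    def __init__(self):
        # register fields of the wrapped core, exposed as CSR elements
        self.ctrl   = csr.Element(8, "rw")
        self.status = csr.Element(8, "r")
        # per-peripheral multiplexer; its bus is what the peripheral exposes
        self._mux = csr.Multiplexer(addr_width=4, data_width=8)
        self._mux.add(self.ctrl)
        self._mux.add(self.status)
        self.csr_bus = self._mux.bus

periph_a = ExamplePeripheral()
periph_b = ExamplePeripheral()

# all peripheral CSR buses gathered behind a single csr.Decoder...
csr_decoder = csr.Decoder(addr_width=8, data_width=8)
csr_decoder.add(periph_a.csr_bus)
csr_decoder.add(periph_b.csr_bus)

# ...whose bus is bridged to the SoC interconnect; bridge.wb_bus is what gets
# connected to the Wishbone decoder/arbiter. The multiplexers, decoder and bridge
# must of course also be added as submodules of the design.
bridge = WishboneCSRBridge(csr_decoder.bus)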

But what about peripherals that also provide a memory interface (e.g. DRAM controllers, flash controllers, etc.)?
I see two possible approaches:

Approach A: exposing two separate bus interfaces for CSRs and memories

CSRs would be handled the same way as described above, but the peripheral would also provide a separate bus interface to access its memories (e.g. WB4). I think LiteX follows a similar approach.

This has the consequence of locating the CSRs and memories of a given peripheral in separate regions of the SoC address space.

pros:

  • lower resource consumption; all the CSRs of the SoC are still pooled behind a single csr.Decoder, and the WB4 interface of a peripheral is directly connected to its logic.

cons:

  • transactions may be reordered if e.g. the WB4 interface sits behind a FIFO, but not the CSR interface.

Approach B: exposing a single bus interface for both CSRs and memories

Instead of two separate interfaces, a memory-capable peripheral would expose a single bus interface like WB4 or AXI4. This has the consequence of locating all the resources of a peripheral in the same address space region.

  • peripherals would have a local wishbone.Decoder, whose bus interface would be exposed
  • memory interfaces would be added to the decoder
  • CSRs would be grouped into banks, each bank would be bridged to the same decoder
    (e.g. csr.Multiplexer -> WishboneCSRBridge -> wishbone.Decoder)

pros:

  • peripherals with a single standard bus interface are easier to integrate when instantiated alone
    (counterargument: users may prefer just using the bare nmigen-stdio cores instead, if available)
  • the address space layout of a peripheral would be flexible to the point where one could mimic the peripherals of another SoC. This could facilitate porting/reusing drivers.

cons:

  • some layouts may consume significantly more resources, e.g. if many CSR banks are requested.
    (although I assume that the general case consists of a single CSR bank)
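
For concreteness, a rough sketch of the Approach B wiring (a single exposed bus per peripheral). Treat it as illustration only: constructor signatures and the memory-map plumbing differ between nmigen-soc/amaranth-soc versions, and the widths, names and placeholder resource below are made up for the example:

from nmigen_soc import csr, wishbone
from nmigen_soc.csr.wishbone import WishboneCSRBridge
from nmigen_soc.memory import MemoryMap

class MemoryCapablePeripheral:
    def __init__(self):
        # local Wishbone decoder; its bus is the single interface exposed to the SoC
        self._decoder = wishbone.Decoder(addr_width=28, data_width=32, granularity=8)

        # memory interface of the core (e.g. the DRAM/flash data port)
        self.mem_bus = wishbone.Interface(addr_width=22, data_width=32,
                                          granularity=8, name="mem")
        mem_map = MemoryMap(addr_width=24, data_width=8, name="mem")
        mem_map.add_resource("mem", name="mem", size=1 << 24)   # placeholder resource
        self.mem_bus.memory_map = mem_map
        self._decoder.add(self.mem_bus)

        # a single CSR bank, bridged onto the same local decoder
        self._csr_mux = csr.Multiplexer(addr_width=4, data_width=8)
        self._bridge  = WishboneCSRBridge(self._csr_mux.bus, data_width=32)
        self._decoder.add(self._bridge.wb_bus)

        # single exposed bus, covering both the memory and the CSR bank
        self.bus = self._decoder.bus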

Any thoughts on this?
cc @whitequark @awygle @enjoy-digital and others

[pre-RFC] Ideas for an I2C peripheral interface on Wishbone

I feel the need for a way to configure the i2c-controlled devices I have on my fpga board from a wishbone bus driven by a cpu core. That means a single master and no real surprises on the bus, everything documented (hopefully) and no collisions. So the question has been what the wishbone-level interface would be to keep the complexity as low as possible in both the cpu code and the module implementation. There's also an idea of "you pay for what you need and no more".

The ideas I currently have:

  • prescaling, or baud rate in general, should be separated from the module. The module would take an enable line at (probably) 4x the i2c clock rate which would select the wishbone bus clock edges. That enable would be generated by an otherwise independent divider module that can take multiple forms (fixed divider, multi-frequency, fully-programmable) and can be used for anything else that needs that kind of thing (timers, uart, etc).
  • the module should provide serialization/deserialization (because cpus suck at that) and start/stop management and also clock stretching, ack recognition and other fun.

A possible starting point on the interface:

  • two or three 8-bit ports (e.g. two addresses)
  • writing to a port sends a byte over the wire, after a start signal if the bus was idle
  • writing to a port while a byte is currently being sent blocks (no wishbone ack) until said byte is done
  • reading from a port blocks while a byte is read from the wire, and returns the result
  • the first port starts or continues a transmission. The second port adds a stop and bus idle after the transmission is done (e.g. it's used for the last message byte). The (optional?) third port does a repeated start (Sr)

Advantages of the interface:

  • writing a smbus-ish register is "*port1 = address; *port1 = register; *port2 = data;". Reading is "*port1 = address; *port1 = register; return *port2;". So the code is simple.
  • the module code isn't particularly complicated either

Disadvantages:

  • No handling of NAK/no device at that address. A bus error seems heavy. An interrupt is annoying. A status byte, maybe, telling that things are not OK?
  • Not a good interface for interacting with a drq-driven dma module. The DMA module would end up locking the bus until the transfer is done unless a crossbar is available, and that's expensive
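
For concreteness, the smbus-ish sequences from the advantages above, spelled out in Python over hypothetical bus_write()/bus_read() MMIO helpers (every name here is invented for illustration; none of this is an existing API):

PORT1, PORT2 = 0x0, 0x1                   # illustrative port addresses

def bus_write(port, value):
    ...                                   # placeholder for the CPU's MMIO write

def bus_read(port):
    ...                                   # placeholder for the CPU's MMIO read

def smbus_write(i2c_addr, reg, data):
    bus_write(PORT1, i2c_addr)            # start (bus was idle) + address byte
    bus_write(PORT1, reg)                 # register index, transmission continues
    bus_write(PORT2, data)                # last byte: PORT2 adds stop + bus idle

def smbus_read(i2c_addr, reg):
    bus_write(PORT1, i2c_addr)            # start + address byte
    bus_write(PORT1, reg)                 # register index
    return bus_read(PORT2)                # blocking read of a byte; PORT2 ends the transfer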

Any ideas to complement/replace that one?

more fine-grained memorymap

Hey,
if I understand the intended use of MemoryMap correctly, it should hold all the information needed to write software for the SoC one is generating.
This already works quite well for me if I only have one "logical register" at each address. However, if I want registers which are smaller than 8 bits, and want them packed (e.g. when emulating the memory map of an existing peripheral to reuse driver code), I wasn't able to come up with a solution that expresses that cleanly using the current MemoryMap class.

`MemoryMap.add_resource()` should support integers in resource names

Currently, csr.Builder works around this by casting array indices to strings when calling MemoryMap.add_resource().

Repro:

from amaranth import *
from amaranth_soc import csr

class FooRegister(csr.Register, access="r"):
    a: csr.Field(csr.action.R, unsigned(8))

regs = csr.Builder(addr_width=1, data_width=8)

for n in range(2):
    with regs.Index(n):
        regs.add("foo", FooRegister())

for reg, reg_name, reg_range in regs.as_memory_map().resources():
    print(reg_name)

Current output:

('0', 'foo')
('1', 'foo')

Expected output:

(0, 'foo')
(1, 'foo')

Should wishbone.Decoder generate bus errors for unmapped addresses?

Currently, the Decoder does not catch illegal addresses, meaning that the initiator hangs waiting for an ack (unless a timeout is implemented).

When constructed with features = {"err"}, the Decoder only propagates errors from the subordinate buses. Would it be appropriate to also signal an invalid address?
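
On the "unless a timeout is implemented" part: a minimal Amaranth sketch of an initiator-side watchdog (not part of amaranth-soc; the class name and default limit are made up). It only raises a timeout flag; whether to abort the cycle, drive err, or raise an interrupt is left to the integrator:

from amaranth import Elaboratable, Module, Signal

class WishboneWatchdog(Elaboratable):
    def __init__(self, bus, limit=64):
        self.bus     = bus        # assumed: a wishbone.Interface with cyc/stb/ack
        self.limit   = limit
        self.timeout = Signal()   # asserted after `limit` unanswered cycles

    def elaborate(self, platform):
        m = Module()
        timer = Signal(range(self.limit + 1))
        with m.If(self.bus.cyc & self.bus.stb & ~self.bus.ack):
            with m.If(timer != self.limit):
                m.d.sync += timer.eq(timer + 1)
        with m.Else():
            m.d.sync += timer.eq(0)
        m.d.comb += self.timeout.eq(timer == self.limit)
        return m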

[Request] A proposal for CSRs

Issue by HarryHo90sHK
Friday Sep 27, 2019 at 07:10 GMT
Originally opened as m-labs/nmigen-soc#2


Following the general ideas discussed in #1, I have made a new abstraction for CSR objects in my fork (HarryMakes/nmigen-soc@ba5f354 fixed). An example script is given below, which will print all the fields and their properties in a CSR named "test". You might also test the slicing functionality using the format mycsr[beginbit:endbit], but please note that the upper boundary is exclusive.

from nmigen import *

from nmigen_soc.csr import *

if __name__ == "__main__":
    mycsr = CSRGeneric("test", size=64, access=ACCESS_R_W, desc="A R/W test register")
    mycsr.f += CSRField("enable", desc="Enable signal", enums=['OFF', 'ON'])
    mycsr.f += CSRField("is_writing", size=2, access=ACCESS_R, desc="Status signal of writing or not",
                        enums=[
                              ("YES", 1),
                              ("NO", 0),
                              ("UNDEFINED", 2)
                        ])
    mycsr.f += CSRField("is_reading", size=2, access=ACCESS_R, desc="Status signal of reading or not")
    mycsr.f.is_reading.e += [
        ("UNDEFINED", 2),
        ("YES", 1),
        ("NO", 0)
    ]
    mycsr.f += CSRField("is_busy", size=2, access=ACCESS_R_WONCE, desc="Busy signal",
                        enums=[
                              ("YES", 1),
                              ("NO", 0),
                              ("UNKNOWN", -1)
                        ])
    mycsr.f += [
        CSRField("misc_a", size=32),
        CSRField("misc_b"),
        CSRField("misc_c")
    ]
    mycsr.f.misc_a.e += [
        ("HOT", 100000000),
        ("COLD", -100000000),
        ("NEUTRAL", 0)
    ]
    #mycsr.f += CSRField("impossible", size=30, startbit=6)

    print("{} (size={}) is {} : {}".format(
            mycsr.name, 
            mycsr.size,
            mycsr.access,
            mycsr.desc))
    for x in mycsr._fields:
        print("    {} [{},{}] (size={}) is {}{}".format(
            mycsr._fields[x].name, 
            mycsr._fields[x].startbit,
            mycsr._fields[x].endbit, 
            mycsr._fields[x].size,
            mycsr._fields[x].access,
            (" : "+mycsr._fields[x].desc if mycsr._fields[x].desc is not None else "")))

Signature.create in amaranth-soc interfaces is incompatible with Signature.create from latest Amaranth

After the changes introduced in amaranth-lang/amaranth@422ba9e, the definition of Signature.create in e.g. the wishbone module is incompatible: it lacks the src_loc_at keyword argument that is required by this call in Amaranth, so it throws an error:

  File "(...)/venv/lib/python3.10/site-packages/amaranth_soc/wishbone/bus.py", line 467, in __init__
    super().__init__()
  File "(...)/venv/lib/python3.10/site-packages/amaranth/lib/wiring.py", line 873, in __init__
    self.__dict__.update(self.signature.members.create(path=()))
  File "(...)/venv/lib/python3.10/site-packages/amaranth/lib/wiring.py", line 242, in create
    attrs[name] = create_dimensions(member.dimensions, path=(*path, name),
  File "(...)/venv/lib/python3.10/site-packages/amaranth/lib/wiring.py", line 237, in create_dimensions
    return create_value(path, src_loc_at=1 + src_loc_at)
  File "(...)/venv/lib/python3.10/site-packages/amaranth/lib/wiring.py", line 233, in create_value
    return member.signature.create(path=path, src_loc_at=1 + src_loc_at)
  TypeError: Signature.create() got an unexpected keyword argument 'src_loc_at'
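
The failure mode itself is plain Python: an override whose keyword list lags behind the base class breaks as soon as a caller passes the newer keyword. A self-contained illustration (independent of Amaranth; class names invented):

class Base:
    def create(self, *, path=None, src_loc_at=0):
        return ("base", path, src_loc_at)

class StaleOverride(Base):
    def create(self, *, path=None):             # missing src_loc_at, like the old override
        return ("stale", path)

class FixedOverride(Base):
    def create(self, *, path=None, src_loc_at=0):
        return ("fixed", path, src_loc_at)       # accepts (and can forward) the keyword

FixedOverride().create(path=(), src_loc_at=1)    # works
# StaleOverride().create(path=(), src_loc_at=1) raises the same TypeError as above.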

SyntaxError when wishbone.Decoder data width is equal to granularity

Issue by jfng
Thursday Jan 23, 2020 at 11:44 GMT
Originally opened as m-labs/nmigen-soc#4


Repro:

from nmigen import *
from nmigen.back import rtlil
from nmigen_soc import wishbone


class Top(Elaboratable):
    def elaborate(self, platform):
        m = Module()

        m.submodules.dec = dec = wishbone.Decoder(addr_width=5, data_width=8, granularity=8)
        bus = wishbone.Interface(addr_width=4, data_width=8, granularity=8)
        dec.add(bus)

        return m


if __name__ == "__main__":
    print(rtlil.convert(Top()))

Output:

Traceback (most recent call last):
  File "repro.py", line 18, in <module>
    print(rtlil.convert(Top()))
  File "/home/jf/src/nmigen/nmigen/back/rtlil.py", line 1007, in convert
    fragment = ir.Fragment.get(elaboratable, platform).prepare(**kwargs)
  File "/home/jf/src/nmigen/nmigen/hdl/ir.py", line 67, in get
    obj = obj.elaborate(platform)
  File "/home/jf/src/nmigen/nmigen/hdl/dsl.py", line 484, in elaborate
    fragment.add_subfragment(Fragment.get(self._named_submodules[name], platform), name)
  File "/home/jf/src/nmigen/nmigen/hdl/ir.py", line 67, in get
    obj = obj.elaborate(platform)
  File "/home/jf/src/nmigen-soc/nmigen_soc/wishbone/bus.py", line 247, in elaborate
    with m.Case(sub_pat[:-log2_int(self.bus.data_width // self.bus.granularity)]):
  File "/usr/lib/python3.7/contextlib.py", line 112, in __enter__
    return next(self.gen)
  File "/home/jf/src/nmigen/nmigen/hdl/dsl.py", line 283, in Case
    .format(pattern, len(switch_data["test"])))
nmigen.hdl.dsl.SyntaxError: Case pattern '' must have the same width as switch value (which is 5)
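
The root cause appears to be a Python slicing corner case: when data_width equals granularity, the ratio is 1, log2_int(1) is 0, and sub_pat[:-0] evaluates to an empty string rather than the whole pattern, producing the empty Case pattern in the error. A small demonstration:

sub_pat = "11010"
print(sub_pat[:-2])   # '110'  -- dropping granularity bits works for ratio > 1
print(sub_pat[:-0])   # ''     -- but -0 is 0, so the slice is empty, not the full pattern

# A fix needs to special-case a ratio of 1 (or slice with an explicit end index)
# so that the full pattern is used when there are no granularity bits to drop.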

Communicating constants from the peripheral to the BSP generator

(discussed during the 06/07 IRC meeting - log).

Examples of constants:

  • clock frequency (if there is only a single one for the core)
  • divisors
  • FIFO sizes
  • functional unit counts, for peripherals/cores with configurable sizes

Constants could be associated either with individual CSRs (e.g. a counter time limit) or with an entire peripheral (e.g. clock frequency).

Different languages represent constants in different ways:

  • in C, large integers must have a combination of u and l suffixes appended. You can often get away with #define'ing a constant.
  • in Rust, you need to know both size and signedness in order to decide whether to use a u8, an i16, etc.
  • floating point numbers need special treatment in languages without first class support for them
  • strings may have different encodings e.g. UTF-8, UTF-16, etc.

Name bikeshedding: ConstantDict or ConstantMap

  • not ConfigDict since its scope should be limited to constants only
  • ConstantMap seems preferred

Nesting:

  • provide a way for complex peripherals (e.g. a DRAM controller) to organize constants in a hierarchy
  • hierarchies should preferably be implemented by an external class, instead of being the responsibility of the ConstantMap class itself.

Approach for the next iteration (first attempt was in #19):

  1. Limit the types that can be put into the map. Start with int and bool, and define a clear interpretation for them.
  2. No ConstantMap nesting for now. Once we have a working BSP generator, we should have a better idea of how nesting would work/be used.
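
A minimal sketch of what step 1 could look like, limited to int and bool as proposed; the class and method names are illustrative, not a committed API:

class ConstantMap:
    def __init__(self, **constants):
        for name, value in constants.items():
            # bool is a subclass of int, so this accepts both proposed types
            if not isinstance(value, int):
                raise TypeError(f"Constant {name!r} must be an int or bool, not {value!r}")
        self._constants = dict(constants)

    def __getitem__(self, name):
        return self._constants[name]

    def items(self):
        return self._constants.items()

# e.g. a timer peripheral could expose:
# constant_map = ConstantMap(CTR_WIDTH=32, SYS_CLK_FREQ=100_000_000, HAS_IRQ=True)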

[pre?-RFC] Generic constant-frequency enable generator

In multiple places I end up needing an enable signal that generates ones at a given frequency in relation to the domain clock. This is a proposal for a generic generator for such a thing. Please someone find a nice name for it, EnableFrequencyGenerator is too weird.

I think the best method is to go for a Bresenham variant. The algorithm is simple. Let's call the domain frequency fd and the target frequency ft (where ft < fd). Then:

  1. Divide both by p = gcd(ft, fd). ftr=ft/p and fdr=fd/p
  2. Find n such that 1<<n >= fdr
  3. Compute delta = (1<<n) + ftr - fdr (note that this is the result of ftr-fdr in two's complement)
  4. Create a counter of n bits with carry (e.g. n+1 bits in practice if there's no easy way to get the carry out other than adding 0). The initial status is carry=1, counter = 0

The circuit is then "if carry at the previous clock, add delta to the counter and output one, else add ftr and output 0".

Some special cases can be simplified. If fdr is a power of two then delta and ftr are equal, and a mux is dropped. If fd is a multiple of ft, we end up with a simple divider. I don't know which is more efficient LUT-wise: muxing on the adder input and using the carry, or clearing to zero and doing an equality comparison on the counter value.

There probably should be two versions of the class. FixedEFG takes fd and ft and generates a fixed-frequency generator. ProgrammableEFG takes a number of bits for the counter and provides a wishbone endpoint with a couple of registers to write ftr and delta. The gcd aspect can be ignored, it is only there to reduce the number of bits of the counter.

The ProgrammableEFG could have only one register for fd and do the subtraction by itself, but it's a little sad to have a wide adder used just once for that instead of relying on the computational capabilities of whatever CPU core is around. It should probably reset the counter on a write to the second register. Writing 0/0 stops the enable generation since no carry happens anymore.
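
To sanity-check the scheme, here is a behavioral Python model of the algorithm exactly as described above (software model only, not HDL; the function name is mine):

from math import gcd

def efg_ones(fd, ft, cycles):
    # Number of enable pulses produced over `cycles` clocks at domain frequency fd
    # for target frequency ft (ft < fd), following steps 1-4 above.
    p = gcd(ft, fd)
    ftr, fdr = ft // p, fd // p
    n = (fdr - 1).bit_length()            # smallest n with (1 << n) >= fdr
    delta = (1 << n) + ftr - fdr          # ftr - fdr in n-bit two's complement
    mask = (1 << n) - 1
    counter, carry, ones = 0, 1, 0        # initial state: carry=1, counter=0
    for _ in range(cycles):
        if carry:                         # carry at the previous clock:
            total = counter + delta       #   add delta and output a one
            ones += 1
        else:
            total = counter + ftr         # otherwise add ftr and output zero
        carry = total >> n                # carry out of the n-bit counter
        counter = total & mask
    return ones

assert efg_ones(fd=100, ft=25, cycles=100) == 25
assert efg_ones(fd=8, ft=3, cycles=8) == 3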

Monotonic clock

It would be nice to have a peripheral that enables a CPU to measure time intervals and busy-wait to delay. This peripheral would use the system clock as a reference, and would have two registers:

  • Divider, which divides the system clock by an integer value where 0 is divide by 1, 1 is divide by 2, and so on. This register is R/W by the CPU.
  • Counter, which monotonically increments by 1 each time the divider counter overflows to 0. This register is R/O by the CPU.

The sizes of both registers/counters will be configurable as usual. The timer counter cannot be reset: this is a monotonic counter that only becomes lower than its previous value on natural (binary) overflow.

Proposed name: amaranth_soc.clock.MonotonicClock? Later we may add amaranth_soc.clock.RealtimeClock perhaps.
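
A minimal Amaranth sketch of the counter core described above, with the CSR/bus plumbing left out; the class name and default widths are illustrative, not a settled amaranth-soc API:

from amaranth import Elaboratable, Module, Signal

class MonotonicTimerCore(Elaboratable):
    def __init__(self, divider_width=16, counter_width=32):
        # Divider (R/W): 0 means divide by 1, 1 means divide by 2, and so on.
        self.divider = Signal(divider_width)
        # Counter (R/O): increments when the divider phase wraps, and only ever
        # becomes smaller on natural binary overflow.
        self.counter = Signal(counter_width)

    def elaborate(self, platform):
        m = Module()
        phase = Signal.like(self.divider)
        with m.If(phase == self.divider):
            m.d.sync += [
                phase.eq(0),
                self.counter.eq(self.counter + 1),
            ]
        with m.Else():
            m.d.sync += phase.eq(phase + 1)
        return m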

Wishbone-attached SRAM

It would be nice to have a Wishbone-attached SRAM peripheral, to enable CPU cores to have scratchpad RAM.

Since there is an ambiguity about what to do for non-power-of-2 sizes of such a RAM, I propose that they be prohibited. There is some usefulness, but it's unclear what to do on out-of-bounds accesses (wrap? set err? if you set err, when do you clear it?), so it's probably best to punt on this.

Constructing Wishbone interface with a single memory window is overly complex / uses redundant parameters

For the simple but very common case of a memory range mapping a device that does not need internal decoding (RAM, ROM, that kind of thing), the boilerplate is a little annoying. Specifically, it is:

    self.bus = wishbone.Interface(addr_width = self.ram_width-2, data_width = 32, granularity = 8, name="ram")
    map = memory.MemoryMap(addr_width = self.ram_width, data_width=8, name="ram")
    map.add_resource(self.ram, name="ram", size=self.ram_size)
    self.bus.memory_map = map

I suspect the redundancy makes things error-prone, especially with the data_width/granularity interplay between the Interface and the MemoryMap. It could probably be done in one helper function call; I'm not sure what it should look like, though.
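
A sketch of what such a helper could look like, built out of the exact calls in the snippet above (the helper itself and its name are hypothetical, not amaranth-soc API; size is assumed to be a power of two):

from amaranth_soc import wishbone
from amaranth_soc.memory import MemoryMap

def simple_wishbone_memory(resource, *, size, data_width=32, granularity=8, name="ram"):
    addr_width = (size - 1).bit_length()                    # address bits, in granularity units
    ratio_bits = (data_width // granularity).bit_length() - 1
    bus = wishbone.Interface(addr_width=addr_width - ratio_bits,
                             data_width=data_width, granularity=granularity, name=name)
    memory_map = MemoryMap(addr_width=addr_width, data_width=granularity, name=name)
    memory_map.add_resource(resource, name=name, size=size)
    bus.memory_map = memory_map
    return bus

# the snippet above then collapses to something like:
#     self.bus = simple_wishbone_memory(self.ram, size=self.ram_size)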

MemoryMap should allow adding windows with a larger granularity than the parent

The data_width attribute of a MemoryMap reflects not the actual data width, but the granularity of the associated bus. A 32-bit bus with byte lanes hence has a memory map with data_width = 8.

MemoryMap.add_window() performs the following check:

        if window.data_width > self.data_width:
            raise ValueError(f"Window has data width {window.data_width}, and cannot be added to a "
                             f"memory map with data width {self.data_width}")

This restriction makes it impossible to add a window without byte lanes (e.g. with granularity 32) to a parent memory map with granularity 8, even if both of the associated buses have an actual data width of 32.

I would like to have a 32-bit wide CSR bus connected to a 32-bit CPU with byte lanes. While the byte lanes are trivial to shim, MemoryMap doesn't support this arrangement, so instead of adding the target memory map as a window, I have to work around it by copying and transforming each individual resource entry to a new memory map.

Example implementation of a simple granularity converter that silently ignores writes smaller than the target granularity: https://paste.jvnv.net/view/g35PB

Clarify documentation for alignment parameters to mention that it is log2

Repro:

from nmigen.back import rtlil
from nmigen_soc import csr

csr_mux = csr.Multiplexer(addr_width=16, data_width=8, alignment=8)
csr_mux.add(csr.Element(1, "r"))

print(rtlil.convert(csr_mux))

Output:

<snip>
  File "/home/jf/src/nmigen/nmigen/hdl/ast.py", line 546, in <lambda>
    op_shapes = list(map(lambda x: x.shape(), self.operands))
  File "/home/jf/src/nmigen/nmigen/hdl/ast.py", line 546, in shape
    op_shapes = list(map(lambda x: x.shape(), self.operands))
  File "/home/jf/src/nmigen/nmigen/hdl/ast.py", line 546, in <lambda>
    op_shapes = list(map(lambda x: x.shape(), self.operands))
  File "/home/jf/src/nmigen/nmigen/hdl/ast.py", line 546, in shape
    op_shapes = list(map(lambda x: x.shape(), self.operands))
  File "/home/jf/src/nmigen/nmigen/hdl/ast.py", line 546, in <lambda>
    op_shapes = list(map(lambda x: x.shape(), self.operands))
  File "/home/jf/src/nmigen/nmigen/hdl/ast.py", line 546, in shape
    op_shapes = list(map(lambda x: x.shape(), self.operands))
  File "/home/jf/src/nmigen/nmigen/hdl/ast.py", line 546, in <lambda>
    op_shapes = list(map(lambda x: x.shape(), self.operands))
  File "/home/jf/src/nmigen/nmigen/hdl/ast.py", line 546, in shape
    op_shapes = list(map(lambda x: x.shape(), self.operands))
  File "/home/jf/src/nmigen/nmigen/hdl/ast.py", line 546, in <lambda>
    op_shapes = list(map(lambda x: x.shape(), self.operands))
  File "/home/jf/src/nmigen/nmigen/hdl/ast.py", line 643, in shape
    return Shape(self.stop - self.start)
  File "<string>", line 1, in __new__
RecursionError: maximum recursion depth exceeded while calling a Python object
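
What the clarified documentation should spell out: alignment is a log2 exponent, so alignment=8 in the repro requests 2**8 = 256-unit alignment rather than 8. An 8-unit alignment would instead be written as follows (the RecursionError itself is arguably a separate bug):

from nmigen_soc import csr

csr_mux = csr.Multiplexer(addr_width=16, data_width=8, alignment=3)   # 2**3 == 8
csr_mux.add(csr.Element(1, "r"))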

Handling resource name collisions inside a MemoryMap

Dumping CSR names from a MemoryMap by iterating over .all_resources() is subject to name collisions.
These can happen in two scenarios:

  1. registers in separate windows share the same name
class X:
    def __init__(self):
        foo = csr.Element(1, "r")
        mux = csr.Multiplexer(addr_width=1, data_width=8)
        mux.add(foo)
        self.bus = mux.bus

a = X()
b = X()
decoder = csr.Decoder(addr_width=2, data_width=8)
decoder.add(a.bus)
decoder.add(b.bus)

for elem, elem_range in decoder.bus.memory_map.all_resources():
    print(elem.name, elem_range)

# output:
# foo (0, 1)
# foo (1, 2)

In this case, we could add a name attribute to MemoryMap. Each window would then have a namespace for its resources. Resolving the full resource name would be the responsibility of the BSP generator, by walking through the memory hierarchy.
The result could look like this:

class X:
    def __init__(self, *, name=None, src_loc_at=0):
        foo = csr.Element(1, "r")
        mux = csr.Multiplexer(addr_width=1, data_width=8, name=name, src_loc_at=1 + src_loc_at)
        mux.add(foo)
        self.bus = mux.bus
a = X()
b = X()
  2. registers in the same window share the same name
class Y:
    def __init__(self):
        self.foo = csr.Element(1, "r")

c = Y()
d = Y()
mux = csr.Multiplexer(addr_width=1, data_width=8)
mux.add(c.foo)
mux.add(d.foo)

In this case, I don't see an easy way to disambiguate the two names. The BSP generator would have to detect this and throw an error.

Wishbone access from initiator bus with data_width smaller than that of the subordinate bus

I am working on my Retro_uC. In it, I combine a 32-bit M68K with two 8-bit CPUs (a MOS6502 and a Z80). I would like all three of them to access the same memory map, and I would like the M68K to be able to fetch a 32-bit word in one bus cycle.
AFAICS, currently neither the Wishbone Arbiter nor the Decoder allows the data_width of the initiator bus to be smaller than the data_width of the subordinate bus(es), even if the granularity is the same.

I see different solutions to this problem:

  • extend Arbiter to support initiator buses with data_width smaller than the subordinate bus data_width. This would solve my request directly.
  • extend Decoder to support subordinate buses with a data_width bigger than that of the initiator bus. For my case this would be more indirect: I would then have an Arbiter on the two 8-bit CPU buses, then a Decoder from the resulting 8-bit bus to a 32-bit data_width, and then an Arbiter combining that output with the 32-bit CPU bus.
    I still propose this solution, as the feature may be more about decoding than arbitration.
  • do both; for my use case I would only use the Arbiter, so the Decoder implementation would not be tested by user code.
  • do it in a separate bridge class.

This issue can be assigned to me after it is clear what the preferred implementation is.
