vhdl / compliance-tests

Tests to evaluate the support of VHDL 2008 and VHDL 2019 features

Home Page: https://vhdl.github.io/Compliance-Tests/

License: Apache License 2.0

Languages: VHDL 62.07%, C 32.19%, Python 5.73%
Topics: cosimulation, fpga, simulation, vhdl, vunit

compliance-tests's People

Contributors: bpadalino, eine, larsasplund, nickg, oliverbm67, paebbels, pflake, tmeissner, umarcor

compliance-tests's Issues

VHDL-2008: Generic packages on entity should test all cases

The VHDL-2008 test tb_generic_packages_on_entity exposed that most simulators do not handle all of the cases for passing a generic package to an entity. The construct is defined as:

interface_package_declaration ::= package identifier is new uninstantiated_package_name interface_package_generic_map_aspect

interface_package_generic_map_aspect ::= generic_map_aspect | generic map (<>) | generic map ( default )

So the test really should be 3 tests (sketched after this list):

  • Test with a generic_map_aspect (i.e. a normal generic map)
  • Test with generic map (<>) (i.e. any mapping)
  • Test with default (i.e. only the default package mapping)
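
A minimal sketch of the three variants, assuming a hypothetical uninstantiated package gen_pkg (none of these names come from the existing test):

package gen_pkg is
  generic (width : positive := 8);  -- default needed for the (default) form
end package;

entity e_explicit is
  generic (package p is new work.gen_pkg generic map (width => 4));
end entity;

entity e_box is
  generic (package p is new work.gen_pkg generic map (<>));
end entity;

entity e_default is
  generic (package p is new work.gen_pkg generic map (default));
end entity;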

Generic in tb_071a.vhd is not mapped

The current version of nvc does not compile the file tb_071a.vhd.
The generic g is not mapped. The attached trivial patch fixes the issue.

Compiling into vhdl_2019: vhdl_2019/tb_071a.vhd failed
=== Command used: ===
/usr/local/bin/nvc --work=vhdl_2019:/Users/Peter/Downloads/Compliance-Tests/vunit_out/nvc/libraries/vhdl_2019 --std=2019 --map=vunit_lib:/Users/Peter/Downloads/Compliance-Tests/vunit_out/nvc/libraries/vunit_lib --map=vhdl_2008:/Users/Peter/Downloads/Compliance-Tests/vunit_out/nvc/libraries/vhdl_2008 --map=vhdl_2019:/Users/Peter/Downloads/Compliance-Tests/vunit_out/nvc/libraries/vhdl_2019 -a /Users/Peter/Downloads/Compliance-Tests/vhdl_2019/tb_071a.vhd

=== Command output: ===
** Error: missing actual for generic G without a default expression
> /Users/Peter/Downloads/Compliance-Tests/vhdl_2019/tb_071a.vhd:35
|
35 | U_e071a : entity work.e071a
| ^
patch.txt
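
For illustration only (the generic's type and value here are hypothetical; the real fix is in the attached patch), the change amounts to supplying an actual for g at the instantiation:

U_e071a : entity work.e071a
  generic map (g => 8);  -- hypothetical actual; see patch.txt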

Proposed Labels

  • Tools
    • Tools: VUnit
  • Simulator
    • Simulator: Active-HDL
    • Simulator: Riviera-PRO
    • Simulator: ModelSim
    • Simulator: QuestaSim
    • Simulator: GHDL
    • Simulator: xSim
  • Synthesizer
    • Synthesizer: Vivado
    • Synthesizer: Quartus
    • Synthesizer: Lattice
    • Synthesizer: Synplify-Pro
  • VHDL Version
    • VHDL-1987
    • VHDL-1993
    • VHDL-2002
    • VHDL-2008
    • VHDL-2019
  • CI Platform
    • CI: Travis-CI
    • CI: GitHub Actions

VHDL-2019: Garbage Collection test exhausts all memory to fail

The garbage collection test (tb_030) can only fail by exhausting all available memory. For some simulators, the heap size can be set and limited; for others, like GHDL, this cannot be done from the command line. It is recommended to limit the available heap via ulimit under Linux for this test. On systems with a large amount of memory, the test may pass without garbage collection actually being implemented.

I am unsure how to go about doing this within the simulation framework.
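
For context, a minimal sketch of the kind of allocation loop such a test relies on (illustrative only, not the actual tb_030): with VHDL-2019 garbage collection, the object orphaned on each reassignment can be reclaimed; without it, memory grows until exhaustion.

entity tb_gc_sketch is
end entity;

architecture sim of tb_gc_sketch is
  type nat_ptr is access natural;
begin
  process
    variable p : nat_ptr;
  begin
    for i in 1 to integer'high loop
      p := new natural'(i);  -- previous object becomes unreachable; a GC may reclaim it
    end loop;
    wait;
  end process;
end architecture;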

LCS2019: automatic issue generation, MWE update and testing tools

Branch LCS2019 contains a helper script that allows projects to automatically create one GitHub issue for each Language Change Specification (LCS) in the recently published VHDL 2019. The list of LCS is defined in a YAML file, and each entry can optionally include an MWE (minimal working example).

Currently, very few LCS contain an MWE. Hence, help is wanted to write examples and/or to copy existing ones from http://www.eda-twiki.org/cgi-bin/view.cgi/P1076/VHDL2017.

Furthermore, issue-runner allows MWEs defined in GitHub issues to be automatically tested in CI workflows. It is suggested to make MWEs compatible with it. See https://eine.github.io/issue-runner/#usage.

Ref: December 31, 2019 6:18 PM

What is next?

The problem as I see it is that if you ask vendors, such as Ray Salemi (FPGA Solutions Manager for Mentor/Siemens' Questa), the question "Any word on Mentor supporting the new features in VHDL-2019?", the response we get is: "we tend to add features as customers start using them. Do you have any that stand out?"

We need a means to document what "stands out" to the entire user community. I think what you have here is a start for doing this.

My vision is: we need a means for the user community to demonstrate their support for features, and to give users confidence that they are not the only one expressing interest in a feature.

We need a set of test cases that test feature capability and that can be added to by users who find bugs that were not illuminated by the initial tests. That sounds like where this group is headed anyway.

Next, we need a way for people to express interest in a given test case, demonstrating user support, but with only one vote per user/GitHub account.

We need scripts for running the tests on different tools, and individual users need to be able to run those tests on their local platform.

We need a means for the user to indicate whether a test has passed or failed on their platform. When a test passes, we need a mechanism so a user can indicate this, and we can at a minimum add it to our internal tracking.

When a test case fails, we need a mechanism to indicate the failure, to internally track which tool the test failed for, and a means for an individual user to produce a vendor product support request under their user name, with the information for the report collected automatically. In addition to submitting the issue to tech support, it would also be nice to post it to the vendor's discussion boards to help generate additional support for the feature, or to add to an existing discussion.

We need some level of tabulated reporting regarding the interest level and support of a particular feature.

My only concern is what a vendor considers to be benchmarking. I always thought it was performance. If it does not include language feature support, then it would be nice to have each tool listed in a matrix, indicating the number of passing and failing tests for a particular feature.

Even if we cannot report against a particular vendor, we can tabulate:

  • user support of a feature,
  • number of tools for which a particular test case passes,
  • number of tools for which a particular test case fails,
  • total number of times this test case has been run and passed,
  • total number of times this test case has been run and failed,
  • and total number of bug reports submitted.

In addition to supporting the code, it would also be nice to have an extended listing of why this feature is important to you and/or your project team.

Even without listing which vendor does or does not support a feature, one thing we can clearly demonstrate, to both users and the vendors, is the user support for the implementation of a feature.

Given this objective, I think we need a sexier name than "compliance tests", something like "VHDL User Alliance Language Support Tests".

GitHub Step Summary

Some months ago, GitHub introduced https://github.blog/2022-05-09-supercharging-github-actions-with-job-summaries/. Essentially, $GITHUB_STEP_SUMMARY points to a "file" where you can write Markdown content, which is then shown in the workflow web view, below the graph. See, for instance: https://github.com/hdl/containers/actions/runs/4080049545.

In this repo, we currently need to open the workflow, expand the step and scroll to the bottom in order to see the results. That's because we exit green unconditionally, so the exit status of CI is not meaningful by itself. It would be nice if we printed something in the summary.

@LarsAsplund what would be the best approach to get the summary from VUnit? Shall we do it in run.py, or generate an XUnit report and then process it in a separate script?

Testing Stops at First Compilation Error

As of d0ddad0, testing appears to stop at the first compilation error. This probably isn't a good idea for the VHDL-2019 tests, since a lot of simulators will fail those tests.

Can the compilation failure be noted and skipped instead of a hard stop?

Co-Simulation Tests

I am unsure what the status of the cosim directory is, and I don't think those tests are run with VUnit via run.py.

Do we want to bring these in? I believe nvc should be usable here as well, since it looks like ghdl is the only target that might currently be supported?

Clearer Usage Instructions

The README on the front page is very sparse on how to use the repo. Getting to the actual usage instructions requires 2 clicks: one on the documentation link, then another on the Usage link.

Can we include in the README a quick reference of the pip command and the run.py command, along with maybe a note about the VUnit dependency, so someone can clone the repo and immediately understand what to do?

What kinds of compliance tests to collect?

What kinds of compliance tests do we want to collect?

  1. Syntax checks
  2. Feature checks
  3. Type / operation checks
  4. Simulation results (timing / delta cycles?)
  5. Synthesis results

(The latter might be complicated to collect/write and also to check, so those should be delayed for now. I am just asking about the repository's scope and goals.)

1 - Syntax Checks

GHDL contains lots of checks, but they are mostly unstructured, because they are filed under the issue numbers that reported a parser problem.

Example 1 - including whitespace checking:

entity e1 is end entity;

entity e2 is
end entity;

entity e3 is /*comment*/ end entity;

Example 2 - more feature oriented:

entity e1 is end;
entity e2 is end entity;
entity e3 is end e3;
entity e4 is end entity e4;

One could say that we don't need checks like those in Example 1, because they are tokenizer problems, not parser problems.

2 - Feature Checks

Some features are based on syntax. Moreover, GHDL shows that a feature might be implemented, but not properly detected in all syntax variants.

3 - Type / Operation Checks

While a feature might work for one type, it might not work for other types: see std_logic vs. std_ulogic, due to vendor-optimized implementations in e.g. native C libraries; or std_logic vs. bit; or, finally, predefined types vs. user-defined types.
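
As an illustration (hypothetical, not an existing test), the same VHDL-2008 feature, here the unary or reduction operator, exercised across predefined types:

library ieee;
use ieee.std_logic_1164.all;

entity reduction_types is
end entity;

architecture test of reduction_types is
  signal slv : std_logic_vector(3 downto 0) := "0010";
  signal suv : std_ulogic_vector(3 downto 0) := "0010";
  signal bv  : bit_vector(3 downto 0) := "0010";
begin
  process
  begin
    assert (or slv) = '1';  -- reduction on std_logic_vector
    assert (or suv) = '1';  -- reduction on std_ulogic_vector
    assert (or bv)  = '1';  -- reduction on bit_vector
    wait;
  end process;
end architecture;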

4 - Simulation Results

Do simulations have the same timing behavior? We might need to check on a delta-cycle basis, right?
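
As an illustration (hypothetical, not an existing test), b follows a exactly one delta cycle later, so simulators must agree on which value is visible at each delta:

entity tb_delta is
end entity;

architecture sim of tb_delta is
  signal a, b : bit := '0';
begin
  b <= a;  -- concurrent assignment: b lags a by one delta cycle

  process
  begin
    a <= '1';
    wait for 0 ns;  -- one delta later: a has updated, b has not yet
    assert a = '1' and b = '0';
    wait for 0 ns;  -- another delta later: b has updated too
    assert b = '1';
    wait;
  end process;
end architecture;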

5 - Synthesis Results

This could help create some pressure toward fulfilling IEEE Std 1076.6-2004 (withdrawn).

Compliance tests dependent on vunit

This test set might be useful if it were not dependent on the VUnit testing system. I think it is a grave mistake to make it dependent on a particular testing system. Tests should be standalone as much as possible and include ONLY those statements needed to prove the target feature, not test-environment overhead, which VUnit is full of.

I am deeply saddened by the direction this has gone. So sad.

Using Python 3 in VUnit

Is it possible to get rid of the old Python 2 style and use more up-to-date Python 3, with e.g. pathlib instead of the nasty join("foo", "bar")?
Do VUnit's methods support pathlib?

vhdl_2008.add_source_files(join(root, "vhdl_2008", "*.vhd"))

Python 3 pathlib:

from pathlib import Path

root = Path("/tests")
sources = root / "src"
