
What is next? (compliance-tests, 11 comments, open)

vhdl commented on July 18, 2024
What is next?

from compliance-tests.

Comments (11)

LarsAsplund commented on July 18, 2024

I think it would be nice if we could separate features from the tests. Features are the things people vote for, and tests are only used to verify vendor support. A feature would typically have more than one test case.

Issues have voting capabilities. They are not ideal for feature voting, because issues are something that you can complete and close. I know there have been discussions on adding voting capabilities to other parts of Gitlab, but I don't think that has been done yet. I'm thinking the wiki would be the best place.

Running the test cases locally with your own simulator, or with many different simulators/versions in a CI, and tracking the status is what VUnit does. The problem is how/where we run the commercial tools. Github's CI has the ability to let people run CI jobs on their local computers. I'm not sure if Gitlab has a solution like that, but it would be a way to distribute the CI tasks to those having the required licenses while still keeping an automated solution.

Creating a bug report is just a matter of pointing to the failing CI run. Everything needed to recreate the bug is there.

I'm OK with changing the name as suggested.

JimLewis commented on July 18, 2024

@LarsAsplund
Agree with separating features from tests - it is what I had in mind too.

Reporting test errors to vendors is only a side issue. My main goal is to give individual users a means to express and tabulate interest in a feature, and then report it to the vendor. Tabulating it ourselves allows us to quantify interest and promote the feature to the community; reporting it to the vendors gives them a reason to believe our numbers, if they are actually keeping any of the reports.

Currently, from a user perspective, a vendor receives a feature request, denies that it is actually a VHDL feature, and then deletes it.

JimLewis commented on July 18, 2024

Is tabulating requests from multiple people WRT the same issue something we can automate?

Nic30 commented on July 18, 2024

@JimLewis I was hoping that https://github.com/VHDL/Compliance-Tests would have an interface like https://github.com/SymbiFlow/sv-tests

JimLewis commented on July 18, 2024

@Nic30 That is OK; however, it misses tabulating the number of users who have noted that the feature does not work. This is important to do.

Vendors claim to be "market driven". However, they have people who are paid to transition the market to SystemVerilog - this is where they make more money.

They claim that their market is happy with VHDL-2008 and has not asked for anything in the new standard. How do you prove this is a bogus claim? How do you help your customers trust you when you say it is a bogus claim?

In one presentation, a vendor claimed that OSVVM was not a methodology. They claimed there are more SystemVerilog engineers available, even in Europe. Considering that in the European FPGA market 30% use VHDL + OSVVM and only 20% use SystemVerilog + UVM, that is a fairly egregious claim.

If we have numbers, we can refute their claims. Without numbers, we lose customers to their continuous FUD.

Nic30 commented on July 18, 2024

@JimLewis I sent the sv-tests link to show you the test reports and its GUI, which seems nice to me. The second thing that seems like a good idea to me is a test for each code construct, based on the formal syntax from the language standard. This is good because it tests the tool completely, and passing the tests can be seen as a kind of reward.

This covers the points you asked for:

  • number of tools for which a particular test case passes,
  • number of tools for which a particular test case fails,

This is not related to the VHDL/SV war or to any vendor interest or claim. (You may see me as a vendor, but I am just a PhD student.)

JimLewis commented on July 18, 2024

@Nic30 For me it is not a V vs. SV type of thing.

How does the community (users and vendors) know whether a language addition is really relevant or not? Simple: provide them with a tally of how many people tested the feature and submitted a bug report for it. If people are not submitting bug reports, then they are not that interested in it.

OTOH, this sort of web-based bug report submission and counting is not a strength of my skill set, so I am hoping to find someone else who is willing to implement it. In trade, of course, I am contributing where I am stronger: the VHDL language and VHDL verification libraries.

I can also make sure that the VHDL language committee produces use models for all new language features.

JimLewis commented on July 18, 2024

@Nic30
WRT you being a vendor: personally, I am grateful for anything an open-source contributor is willing to make.

OTOH, from a commercial vendor, I expect support for standards. Some are good. Others are playing a passive-aggressive game of tool support, sometimes making things up, sometimes outright lying.

eine commented on July 18, 2024

Given this objective, I think we need a more sexy name than
compliance tests - something like VHDL User Alliance Language Support Tests

I'd propose something less verbose: VHDL Language Support Tests; the repo name would be Language-Support-Tests, shortened to LST. This VHDL group is already a user alliance, so that info is already conveyed by the owner of the repo (the org).

Is tabulating requests from multiple people WRT the same issue something we can automate?

Yes, as long as we use reactions to issues as the measuring mechanism. We can decide to take into account all reactions or only some kinds, and to count them for all the comments in each issue or for the first comment only. I believe issues can be reacted to even if closed. Hence, we can use the open/closed state to track whether we have implemented tests/examples for that feature in this repo, and the reactions to count the demand.

However, if we want to track the demand for each feature and vendor, that might be harder to achieve. On the one hand, we would need a separate issue (or a separate comment in the same issue) for each vendor. Similar to VHDL/Interfaces#27. On the other hand, we might not be allowed to do it.
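
The reaction-counting step could be automated with a small script. A minimal sketch, assuming the reaction objects have the shape returned by GitHub's REST API (`GET /repos/{owner}/{repo}/issues/{n}/reactions`, each with a `content` field such as `+1` or `heart`); the `sample` data below is made up, not a real API call:

```python
from collections import Counter

def tally_reactions(reactions):
    """Count reaction kinds on an issue.

    `reactions` is a list of reaction objects in the shape returned by
    GitHub's REST API; each object carries a "content" field such as
    "+1", "-1", "heart", ...
    """
    return Counter(r["content"] for r in reactions)

# Made-up example data standing in for a real API response:
sample = [{"content": "+1"}, {"content": "+1"}, {"content": "heart"}]
votes = tally_reactions(sample)
print(votes["+1"])  # thumbs-up count, i.e. the measured demand for the feature
```

Restricting the tally to the first comment only, or to certain reaction kinds, is then just a filter over the same data.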

Also I can also make sure that the VHDL language committee produces use models for all new language features.

This is currently the problem with this repo. The body and mwe fields of most VHDL-2019 LCSs are empty: https://github.com/VHDL/Compliance-Tests/blob/LCS2019/issues/LCS2019.yml. I don't think we have the capacity to do that for VHDL-2008. However, I believe that a similar file should be one artifact/outcome of the next revision of the standard.

bpadalino commented on July 18, 2024

This is a very old issue, but I'd like to resurrect the conversation around it, given that there are some outstanding pull requests which try to alleviate some of the deficiencies listed previously, specifically:

Adding the VHDL-2019 tests should provide the mwe. I am unsure what the body is supposed to be for that field. Moreover, I don't find it unreasonable for the current VHDL-2008 tests to have a similar file which has a body and mwe to describe what the test is doing.

I like a table similar to sv-tests, and I understand there may be license issues with posting something like that for commercial simulators. But could the overall test count be posted without issue for the commercial ones, instead of broken out? For example, if we greyed out the individual test results but just said a tool received a score of X/Y, would you be comfortable with that?
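
The aggregate-score idea could be sketched as follows; the `results` mapping is hypothetical and does not reflect the repo's actual report format:

```python
def aggregate_score(results):
    """Collapse per-test pass/fail results for one tool into an X/Y score,
    without revealing which individual tests passed or failed."""
    passed = sum(1 for ok in results.values() if ok)
    return f"{passed}/{len(results)}"

# Hypothetical per-test results for one commercial simulator
# (test names are invented for illustration):
results = {"test_cond_expr": True, "test_interfaces": False, "test_garbage_collect": True}
print(aggregate_score(results))  # prints "2/3"
```

Publishing only this string per tool would show overall standing while keeping the individual pass/fail cells greyed out.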

I am willing to do more work on making this better and trying to drive better support.

So, to reiterate @JimLewis: after those pull requests are merged in, what is next in 2023?

umarcor commented on July 18, 2024

@bpadalino merged #19 and #21 and updated #22. I'm unsure about #20, since we might want to discuss what to do in such cases (tools implementing features differently). With regard to #13, I didn't read the latest updates. I'll go through them now.

Adding the VHDL-2019 tests should provide the mwe. I am unsure what the body is supposed to be for that field.

The body is a multi-line string expected to be written in markdown. It is to be used as the description when a page is generated in the docs or an issue is created in some repo. So, for each LCS:

  • A unique key/id.
  • A title in plain text.
  • A body/description in markdown.
  • A code-block or a list of files in VHDL.
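
Putting those four items together, one entry in such a YAML file might look like the sketch below; the key, title, body and mwe contents are invented for illustration, not taken from an actual LCS:

```yaml
# Hypothetical entry; field names follow the list above
lcs_2016_xyz:             # unique key/id (invented for this sketch)
  title: Example feature  # plain-text title
  body: |                 # markdown description
    Short description of what the feature adds and why it matters,
    rendered as the issue body or as a doc page.
  mwe: |                  # minimal working example in VHDL
    entity e is end entity;
    architecture a of e is
    begin
      -- exercise the new feature here
    end architecture;
```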

Moreover, I don't find it unreasonable for the current VHDL-2008 tests to have a similar file which has a body and mwe to describe what the test is doing.

Fair enough. Let's wait until we merge #13. Then, we can move issues/LCS2019.yml to ./LCS2019.yml or vhdl_2019/LCS.yml; and create a similar file for 2008.

I like the table similar to sv-tests and I understand there may be license issues with posting something like that for commercial simulators, but could the overall test count be posted without issue for the commercial ones - instead of broken out? For example, if we grey'd out the test results individually, but just said it received a score of X/Y - would you be comfortable with that?

There are several strategies we could use to work around the issue. For instance, we could have a table with columns G, N, Q, M, R and A (and C, S or X in the future). Then, we would add a large warning saying: "the results in this table are computed from result lists provided by users; we don't run any tests on non-free tools and we don't check the source of the lists provided by users".
That would put the responsibility on the users who provide the lists: i.e. we share it among many, so that lawyers potentially have a harder time tracking the origin. An alternative strategy is the one used for documenting the bitstream of Xilinx devices: have a large enough corporation sponsor us, so that they can provide the lawyers in case any not-as-large EDA company wants to sue us.

IMO we should not waste any time on that. We should not play any game based on hiding data/facts for ethically dubious marketing strategies. We are in no position to confront power. Our working and knowledge-sharing model has been gaining ground over the last three decades, particularly in the last one, and very particularly in the last 5 years. Momentum is with us. We'd better put effort into improving GHDL and/or NVC and/or any other tool whose developers do not ignore their user base.

Also, it's 2023; we have 1-2 years to do the next revision of the standard. There is still much work to be done to make the LRM and the IEEE libraries open-source friendly. The libraries are open source, but not as friendly as they should be; and the LRM is not open source yet.
