Comments (16)
@huningxin @anssiko I converted the WebNN add and mul tests from @huningxin's PR #1 to the Web Platform Tests testharness.js framework. My converted tests are at this link, and the report of these tests is here, PTAL.
Following the Writing Tests guidelines of Web Platform Tests, I made three small modifications to convert @huningxin's mocha tests:
- Created a test HTML page that imports both the testharness.js and testharnessreport.js scripts.
- Used the testharness.js promise_test interface instead of the mocha it interface.
- Used testharness.js assertions instead of the chai assertions in the mocha tests.
What are your opinions? Thanks.
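As a sketch, a converted test might take the following shape. The stand-in definitions exist only so the snippet runs outside a browser (in a real WPT page, testharness.js supplies `promise_test` and the `assert_*` functions), and the op and tensor values are made up for illustration.

```javascript
// Stand-ins so this sketch runs outside a WPT page; in real tests,
// testharness.js provides promise_test and assert_array_equals.
function assert_array_equals(actual, expected) {
  if (actual.length !== expected.length ||
      actual.some((v, i) => v !== expected[i])) {
    throw new Error(`expected [${expected}], got [${actual}]`);
  }
}
async function promise_test(fn, name) {
  await fn();
  console.log(`PASS ${name}`);
}

// mocha/chai style:
//   it('add two tensors', () => expect(result).to.deep.equal([5, 7, 9]));
// testharness.js style:
promise_test(async () => {
  const a = [1, 2, 3];
  const b = [4, 5, 6];
  const result = a.map((v, i) => v + b[i]); // placeholder for the WebNN add op
  assert_array_equals(result, [5, 7, 9]);
}, 'add two tensors');
```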
from webnn-polyfill.
Thanks @BruceDai! Given that the conversion from mocha to the w-p-t test harness seems to be a reasonable effort, I'd recommend we focus on w-p-t tests that are reusable in future standardization phases (in particular, at the latest by the CR transition).
@wchao1115 might be interested in taking a look at your converted tests, since he is familiar with conformance testing of ML operations.
I will implement a conversion tool for the WPT tests, thanks.
Update.
We've drafted some tests for a later WPT contribution, including:
- IDL tests: implemented idlharness.https.any.js, which generates tests for the Web IDL fragments using the JavaScript tests (testharness.js) infrastructure. There are now 351 tests, of which 216 pass on the latest Chrome stable (93.0.4577.82).
- JavaScript tests (testharness.js): implemented by converting the 328 op tests and 60 model tests of the webnn-polyfill tests. There are now 388 passing tests on the latest Chrome stable (93.0.4577.82).

The above tests use the webnn.idl from w3c/webref, which is automatically extracted from https://www.w3.org/TR/webnn/, and the latest unminified webnn-polyfill.js, generated by building this webnn-polyfill project.
I'm not sure whether the WebML community (Working Group) would be interested in such tests, or whether I'm doing the right thing.
Here's a preview at https://brucedai.github.io/wpt/webnn/index.html, PTAL, thanks.
@anssiko @wchao1115 @huningxin Would you please give me some advice and guidance? Thanks.
Note: it may take some time to load the tests, since the unminified webnn-polyfill.js script and the weight files of the model tests are large.
Thanks much @BruceDai. The preview looks nice. As @anssiko mentioned, WPT tests are requested for the standardization phases (where we are now), so your work is quite helpful. I am interested in the status and the path forward.
I think this is a good topic for WebML WG meeting and probably for TPAC.
@anssiko @wchao1115 and @dontcallmedom , WDYT?
Thanks @BruceDai for your efforts in converting the tests into w-p-t!
@dontcallmedom has expertise in w-p-t and can probably address your questions.
We've added the conformance testing of WebNN API to the TPAC agenda and will discuss this area more broadly at that meeting.
@BruceDai the ultimate goal should be to submit the converted tests as a pull request to https://github.com/web-platform-tests/wpt in a newly created /webnn directory; presumably at that point, the polyfill itself would import them from there rather than have the tests duplicated here.
Let me know what specific guidance you might want to move forward in that direction.
Thanks @dontcallmedom @anssiko and @huningxin. And I'm sorry for the late reply because of the holiday.
> presumably at that point, the polyfill itself would import them from there rather than have the tests duplicated here.
@dontcallmedom Yes, I quite agree with you.
I will first submit a PR of IDL tests for the WebNN API using webnn-polyfill.js to WPT.
Thanks @BruceDai
Aside from the choice of tool, the test methodology is also important. The fundamental thing about floating-point math is that there is an inherent computational error built into the process, so the result can vary from operation to operation, or even from machine to machine. Typically for ML, we test the compute results by comparing them to certain baseline values, and if the result is "close enough" to the baseline, the comparison is considered passed. This is how the epsilon value (the so-called tolerance value) of a function like assert_approx_equals could be used.
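The "close enough" comparison described above might be sketched as follows, in the spirit of testharness.js's assert_approx_equals applied element-wise; the function names here are illustrative, not from the actual tests.

```javascript
// Epsilon-based "close enough" comparison of a single value.
function approxEqual(actual, expected, epsilon) {
  return Math.abs(actual - expected) <= epsilon;
}

// Apply it element-wise to an op's output against baseline values.
function allApproxEqual(actual, expected, epsilon) {
  return actual.length === expected.length &&
         actual.every((v, i) => approxEqual(v, expected[i], epsilon));
}
```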
An important topic then becomes: what tolerance values should be used in the comparisons? Given that the computational error is cumulative, there is no single answer. For example, you could normally use a smaller tolerance value for an element-wise mul, but would need a bigger value for an element-wise sqrt simply because its additional complexity accumulates more error, and an even bigger value for a highly complex operation such as convolution or gemm. So the answer is: it depends. An empirical estimate of a tolerance value can also be based on real experimentation on real hardware.
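The "it depends" point can be made concrete as a per-operation tolerance table. The numbers below are placeholders to show the shape of such a table, not measured or standardized values.

```javascript
// Illustrative per-op ULP tolerances; the numbers are placeholders only.
const ulpTolerance = new Map([
  ['mul', 1],      // simple element-wise op: little accumulated error
  ['sqrt', 2],     // element-wise, but more complex to compute
  ['gemm', 256],   // reduces over many terms, so errors accumulate
  ['conv2d', 256],
]);

function toleranceFor(op) {
  if (!ulpTolerance.has(op)) {
    throw new Error(`no ULP tolerance defined for op "${op}"`);
  }
  return ulpTolerance.get(op);
}
```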
For DirectML, we chose the double-precision results from a standard CPU as our baseline values, the so-called ideal results. We then define tolerance values from the ideal results for each operation we test. The comparison itself, however, is not done in absolute values, but rather in terms of ULPs.
The ULP, or unit of least precision, is an important concept in floating-point math: it is the distance between two consecutive floating-point values. When you define the tolerance in units of ULP, you no longer compare one floating-point value to another; rather, you measure the distance between the two representations and check whether it exceeds the maximum allowable distance.
Here is a sample implementation of such a comparison method. The same algorithm can also be extended to FP16 and even FP64.
```cpp
#include <cstdint>
#include <cstdlib>  // std::abs(long long)
#include <new>      // std::launder

template <typename T>
int64_t GetBitwise(T value) {
    int64_t bitwiseValue = (value < T(0)) ? ~int64_t(0) : 0;  // Extend sign.
    // Write the value's bit pattern into the low bytes.
    *std::launder(reinterpret_cast<T*>(&bitwiseValue)) = value;
    return bitwiseValue;
}

bool CompareUlp(float a, float b, uint64_t ulp) {
    // True when the distance between a and b exceeds the allowed ULPs.
    return static_cast<uint64_t>(std::abs(GetBitwise(a) - GetBitwise(b))) > ulp;
}
```
A key benefit of ULP-based tolerances is that they are hardware agnostic. Some compute hardware emulates floating-point results with fixed-point instructions; the ULP comparison still works because it remains relative to the native representation. There is also an opportunity to standardize a set of ULP tolerances across different operations.
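Since the WPT tests themselves are written in JavaScript, a JavaScript counterpart of this comparison may be useful. The sketch below assumes float32 values and uses typed arrays to reinterpret the bits; it follows the standard monotonic bit remapping rather than mirroring the C++ sign-extension trick exactly, so the function names and mapping are illustrative.

```javascript
// ULP distance between two float32 values, via typed-array bit punning.
function ulpDistance(a, b) {
  const f32 = new Float32Array(1);
  const i32 = new Int32Array(f32.buffer);
  const toOrdered = (value) => {
    f32[0] = value;
    const bits = i32[0];
    // Remap so that consecutive floats map to consecutive integers,
    // monotonically across negative and positive values.
    return bits < 0 ? -2147483648 - bits : bits;
  };
  return Math.abs(toOrdered(a) - toOrdered(b));
}

// Mirrors the C++ CompareUlp: true when the distance exceeds the tolerance.
function compareUlp(a, b, ulp) {
  return ulpDistance(a, b) > ulp;
}
```

With the values from the sample later in this thread, `ulpDistance(0.14768, 0.14775)` is 4698 and `ulpDistance(0.14888, 0.14775)` is 75833, matching the C++ results.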
w3c/machine-learning-workshop#80
Thanks @wchao1115.
We're following the ULP-based comparison for the tests. About your comparison method:

```cpp
bool CompareUlp(float a, float b, uint64_t ulp) { return static_cast<uint64_t>(abs(GetBitwise(a) - GetBitwise(b))) > ulp; }
```

I have two questions:
- Is there a typo in the relational operator of the return expression; should > be < here?
- Does the argument ulp actually mean the product of a factor and the ULP of the given precision?

@wchao1115 PTAL, thanks.
`>` is correct. The function returns true if the distance between `a` and `b` is greater than the specified `ulp` value. The `ulp` value just means the acceptable distance between two floating-point values, in units of ULP.
@wchao1115 Thank you for your explanation.
Hi @wchao1115,
I have the following sample. You can see that I set `ulp` to 65536 (that is, the epsilon is 0.0009765625); then the return value of `CompareUlp(a, b, ulp)` is false and that of `CompareUlp(a1, b, ulp)` is true.
So are these statements right? Please correct me if I am wrong, thanks.
- Since the distance between `a` and `b` is smaller than the specified ulp value, `a` is approximately equal to `b`.
- Since the distance between `a1` and `b` is greater than the specified ulp value, `a1` isn't approximately equal to `b`.
```cpp
const float a = 0.14768f;
const float a1 = 0.14888f;
const float b = 0.14775f;               // baseline
const float top = b + 0.0009765625;
const float bottom = b - 0.0009765625;
const uint64_t ulp = 65536;             // acceptable distance

std::cout << abs(a - b) << std::endl;      // 7.00057e-05
std::cout << abs(a1 - b) << std::endl;     // 0.00113
std::cout << GetBitwise(b) << std::endl;   // 1041714119
std::cout << GetBitwise(a) << std::endl;   // 1041709421
std::cout << GetBitwise(a1) << std::endl;  // 1041789952
std::cout << static_cast<uint64_t>(abs(GetBitwise(a) - GetBitwise(b))) << std::endl;      // 4698
std::cout << static_cast<uint64_t>(abs(GetBitwise(a1) - GetBitwise(b))) << std::endl;     // 75833
std::cout << static_cast<uint64_t>(abs(GetBitwise(top) - GetBitwise(b))) << std::endl;    // 65536
std::cout << static_cast<uint64_t>(abs(GetBitwise(bottom) - GetBitwise(b))) << std::endl; // 65536
std::cout << CompareUlp(a, b, ulp) << std::endl;   // false
std::cout << CompareUlp(a1, b, ulp) << std::endl;  // true
```
Your statements are correct. I must add that in practice the `ulp` value is going to be much smaller than what you used here, and that it also varies depending on the computational complexity of the operations whose results are being compared, i.e. the `ulp` value of an element-wise `mul` is going to be narrower than that of an element-wise `sqrt`.
Thanks @wchao1115
Since different baseline values have different acceptable distances, do you have any suggestions for this, such as a unified formula for computing the acceptable distance from the baseline? Thanks.