The artifact appears in the official 3.8.0 as well. I've modified the provided interpolation tutorial to match my camera setup:
camera.from = Vec3fa(0.0f, 5.0f, -5.0f);
camera.to = Vec3fa(0.0f, 0.0f, 0.0f);
and tested the ISAs with and without RTC_SCENE_FLAG_ROBUST.
The silhouette patterns vary slightly. I've come to realize that this kind of pattern only appears at even image resolutions, never at odd ones. If the camera rays were cast through the center of each pixel, I would expect the opposite. However, I can't find any evidence in the code that a half-pixel offset to the center is performed.
This is the result of Embree 3.8.0 on x64 with "--isa avx2"
I don't think there's anything wrong with Embree. However, it might be worth verifying whether a half-pixel offset should be added throughout the tutorials to ensure there's no bias towards a pixel corner. Visually, this is best checked by rendering at very low resolutions.
On the other hand, holes can nicely be detected when raycasting exactly along the axes. So both odd and even configurations might be worth testing in general.
from embree-aarch64.
The neon-fix branch seems to have solved the issue: https://github.com/lighttransport/embree-aarch64/tree/neon-fix
It looks like the reason was some incorrect SIMD operations plus a missing Newton-Raphson refinement for some div, sqrt, rsqrt, etc. operations.
The BUILD_IOS code path improves/fixes some NEON SIMD, so the neon-fix branch backports the BUILD_IOS code path (iOS specific) to the aarch64 (Linux, Android) build by removing the iOS clang dependency, primarily by explicitly adding vreinterpret conversion-free casting.
Hi Syoyo,
I can confirm the quality regression that you have noticed.
On Android arm64, many features became broken between July 2019 (left) and January 2020 (right):
I've attached all image results for manual inspection:
comparison_github.zip
I also agree with your reasoning about the cause in the NEON SIMD code. I recall that I was able to fix these issues in the past by introducing two refinement iterations for Newton-Raphson.
@maikschulze Thanks for the comparison!
Please use the neon-fix branch. I have successfully suppressed some noise in that branch with a correct Newton-Raphson iteration. If things go well, I will merge neon-fix into master soon.
Also, it would be nice if you could contribute regression tests (i.e. build test scenes and run them in a batch manner) to reproduce the comparison images.
Thanks for pointing me to the neon-fix branch, @syoyo .
I've run my tests with this branch and compared it to the state of master from July 2019:
comparison_github_2.zip
Identical: DynamicScene, GridGeometry, InstancedGeometry, LazyGeometry, PointGeometry, PointPrecision, UserGeometry
Different: CurveGeometry, DisplacementGeometry, HairGeometry, Interpolation, IntersectionFilter, MotionBlur, SubdivisionGeometry, TriangleGeometry
I've taken a look at the images and their differences in BeyondCompare. I consider all but one perceptually identical. I'm not able to state which one is "better".
Only one result shows a structural difference to me: Interpolation_003.png
Here, it seems the shadow test is slightly less precise in neon-fix (right) vs the older state of master (left):
In terms of tests, I would gladly help out, or possibly contribute my setup after major cleanups. We should settle on the architecture of the tests first. I can briefly describe what I did:
I've created a second, private repository "embreetest" which contains a lot of the original tutorial code, refactored to create well-controlled image series and take some additional measurements. In addition, I'm using a different math library (GLM) to minimize the influence of Embree code changes on my test cases. This works very well, because Embree's API is so stable.
The downside is the "duplication" of the tutorial code. I tried to wrap the existing tutorial codebase, but eventually gave up, and I consider it more important to have the tests in a different repository. I avoided changing the original tutorial code to prevent code conflicts and minimize regression risk. Having the tests in a separate repo allows newly added tests to be run retrospectively on older builds, which has already come in handy for me when tracking down numerical imprecisions. Naturally, this test repository depends on Embree. The other dependencies (such as GLM, lodepng) could be included.
The tests are compiled into a dynamic library with a simple C++ interface for a result monitor. This could be changed to a C interface based on callbacks for easier composition.
On top of that sits a command-line executable that implements a result monitor and writes text log files and PNG files. So far, I've manually inspected the results. I'm unfamiliar with automated setups on GitHub, but automation would probably be best. This approach would also allow the test suite to be integrated into GUI apps built by third parties.
What's your take on this? What else should be considered?
I've briefly compared the image results with x64 as well and realized "my structural artifact" also appears when rendering on x64 SSE2. There are small differences between neon-fix @ arm64 and the x64 SSE2 state in many images, similar to the above results comparing the two states on arm64. Nothing stands out.
Left: neon-fix on arm64
Right: master July 2019 on x64 SSE2
I would therefore argue that neon-fix is in a good state and you've fixed the imprecisions. Thank you!
I've briefly compared the image results with x64 as well and realize "my structural artifact" also appears when rendering on x64 SSE2.
Does this artifact appear in the original Embree build (v3.8.0)? If so, it would be better to report this issue to the original Embree git repo.
On top of that sits a command-line executable that implements a result monitor and writes text log files and png files. So far, I've manually inspected the results. I'm unfamiliar with automatic setups on Github. However automation would probably be best. This approach would also allow the integration of the test suite into GUI apps done by thirds.
What's your take on this? What else should be considered?
Usually we use the following approach for testing our own renderer:
- Run scenes or test renderings in a unit tester (e.g. google test, acutest) or a python script.
- Run an image comparator (e.g. pdiff (perceptual diff), idiff from OpenImageIO, or our own custom image comparator) on the rendered images.
- Report the results as HTML.
- Something like http://www.smallvcm.com/reports/60sec/
It would be better to manage test code and scenes in a different git repo, so we can set up an embree-aarch64-test (or similar) git repo. We may also be able to contribute an HTML reporter.
I can set up CI automation for running tests using GitHub Actions or Travis. GitHub Actions supports the aarch64 platform (through qemu emulation, which is slow to execute, though).
Fixed in this commit:
4433618
But we still see some inconsistency in the verify watertight test #23 and the displacement test. Will investigate further and open another issue.