
Comments (8)

erichlof commented on July 30, 2024

@knightcrawler25

Hi again, I'm happy to report that the initial test of the new stochastic light sampling technique works great with the BVH! BVH Point_Light_Source Demo

The light source is very small and bright and the rest of the scene is dark. If a normal path tracer was used, the noise would be very slow to converge. But as you can see, with the new approach, the image resolves almost instantly. And the best part is that the cost is the same as that of a traditional path tracer - 4 bounces max through the BVH structure to get the refractive glass surfaces looking correct.

I will ramp up the triangle count and try some different light source types, like spot lights and quad lights, but from what I can see so far, things are looking good!


erichlof commented on July 30, 2024

@knightcrawler25
Hi Asif,
Thanks so much for the kind words! And great work on your own path tracing project! The example images are beautiful - production render quality.

As for the data supporting the switch to stochastic light sampling over the more traditional direct light sampling (shadow-ray technique), I'm sorry but I don't have any numbers or performance analysis at the moment. I have always been a visual learner when it comes to computer graphics, and I have based many decisions on how something looks, or how smooth something feels, rather than on inspecting the data. It has led me on a good path, but sometimes my lack of total understanding of the math and algorithms behind Monte Carlo, the rendering equation (integral calculus), BRDFs, PDFs, linear algebra, etc., comes back to bite me.

What I can say for sure is that direct lighting (in the traditional sense) works great for low-geometry scenes, like the Cornell Box, or scenes with a few random math shapes like spheres, cones, etc. The problems arose (for me anyway) when trying that with 4,000+ triangles. Let's say you have a tight path tracing loop of 4 bounces (the minimum to get refractive glass looking correct). On the first ray cast, half of the rays either hit the background or hit some kind of reflective surface. The other half hit some kind of diffuse surface, like the walls of a room. The first half are done with their work, ready to go on to the second bounce in the loop, but the latter half (diffuse rays) need to sample the point light source. So all of those rays must search through the big BVH again just to get the shadow rays and diffuse lighting looking correct, while the easy rays have to wait for them to finish. Thus you get GPU divergence, and we're not even out of the 1st iteration of a 4-iteration loop. Things can get worse on each subsequent bounce.
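To make that divergence concrete, here is a rough GLSL sketch of the traditional direct-lighting bounce loop (not the project's actual code; SceneIntersect() is the project's routine, but sampleLightPoint(), isShadowed(), skyColor(), and the hit* variables are hypothetical stand-ins):

```glsl
// Sketch: traditional path tracing with explicit light sampling.
// Every diffuse hit pays for a SECOND full BVH traversal (the shadow ray)
// while sky/specular rays sit idle - the divergence described above.
vec3 accumCol = vec3(0.0);
vec3 mask = vec3(1.0);

for (int bounce = 0; bounce < 4; bounce++)
{
    float t = SceneIntersect(rayOrigin, rayDirection);   // closest-hit BVH traversal #1
    if (t == INFINITY) { accumCol += mask * skyColor(rayDirection); break; }

    if (hitType == DIFFUSE)
    {
        vec3 lightPoint = sampleLightPoint(seed);        // random point on the light
        vec3 toLight = normalize(lightPoint - hitPoint);
        if (!isShadowed(hitPoint, toLight))              // shadow ray = BVH traversal #2
            accumCol += mask * hitColor * max(0.0, dot(hitNormal, toLight));
    }
    // ...set up rayOrigin / rayDirection for the next bounce...
}
```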

The brilliant insight that Peter Shirley had is: do the ray cast for the first bounce as normal, and treat the rays that hit nothing or that are easily reflected the same as before. For the diffuse ones, though, just bias their diffuse reflection (in the hemisphere oriented around the surface normal) towards the light. So no extra shadow-ray sample is needed, and the easy rays don't have to wait. You just mark the ray direction for the next loop iteration as either going to a random spot on the light source, or bouncing in the normal fashion (with the cosWeightedDirectionInHemisphere function). I handled this Monte Carlo-style by randomly selecting whether the ray would go towards the light or go in its usual cos-weighted random direction. If you just have the rays go to the light every time, you end up with a flat-color comic book look (which is cool but probably not what you wanted, ha), and if you have them all go in the cos-weighted random direction, you'll have a traditional unbiased path tracer (which is also cool, but it will take forever to get rid of the noise, ha). So my approach was to randomize those 2 possibilities on each diffuse bounce: if ( rand(seed) < 0.5 ) { do 1st } else { do 2nd }
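As a minimal sketch, the diffuse branch looks something like this. cosWeightedDirectionInHemisphere() and rand() are the project's own functions; randPointOnLight() and the hit*/mask variables are hypothetical stand-ins:

```glsl
// Sketch: stochastic light sampling - no shadow ray, no second BVH traversal.
if (hitType == DIFFUSE)
{
    if (rand(seed) < 0.5)
        // half the time, aim the bounce directly at a random spot on the light
        rayDirection = normalize(randPointOnLight(seed) - hitPoint);
    else
        // otherwise bounce exactly as a traditional unbiased path tracer would
        rayDirection = cosWeightedDirectionInHemisphere(hitNormal, seed);

    rayOrigin = hitPoint + hitNormal * 0.01; // nudge off the surface
    mask *= hitColor;
    // visibility comes "for free": the next iteration's single SceneIntersect()
    // either reaches the light or finds the occluder
}
```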

You probably noticed on my initial BVH demos that I skirted around this issue by using a huge dome sky light above the entire scene. This allowed me to use a 4-bounce traditional path tracer with no direct light sampling, and it still converged almost instantly! All that went down the drain when I tried rendering a small bedroom scene (table, sofa, lamp, chairs, 4 walls, 1 lightbulb). The traditional path tracer without direct light sampling still ran at around 60 fps, but the noise was terrible - it took way too long to get a decent image. Trying to sample the light, and thus searching the BVH again on each bounce, resulted in good convergence, but the frame rate tanked at 10-15 fps. More hefty scenes crashed my browser.

I have yet to try this new stochastic technique with the BVH, but I will be sure to post findings here along with examples. I'm grateful to Peter Shirley for sharing his insights in his books. I have just ordered Realistic Ray Tracing (2nd edition) and look forward to studying it. In fact, Kevin Beason's SmallPT, which was the inspiration for my project, was based on algorithms presented in that book, the golden nugget being the ray-jitter tent filter inside the main() function that makes the progressive renders converge to a photo-quality result. A lot of useful nuggets from that old book have been put to use (even in OTOY's Brigade path tracer), so I'm looking forward to going to the source! ;-)
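For reference, that tent filter is only a couple of lines - here it is transcribed into GLSL from SmallPT's main(), with the project's rand() standing in for SmallPT's erand48():

```glsl
// Tent-filter pixel jitter from SmallPT: warp two uniform randoms into a
// triangular distribution, so samples cluster near the pixel center but
// overlap with neighboring pixels, smoothing the converged result.
float r1 = 2.0 * rand(seed);
float dx = (r1 < 1.0) ? sqrt(r1) - 1.0 : 1.0 - sqrt(2.0 - r1);  // in [-1, 1], peaked at 0
float r2 = 2.0 * rand(seed);
float dy = (r2 < 1.0) ? sqrt(r2) - 1.0 : 1.0 - sqrt(2.0 - r2);
// add (dx, dy) to the pixel coordinate before constructing the camera ray
```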

Happy new year!
-Erich


knightcrawler25 commented on July 30, 2024

Wow that is a very detailed reply. :D I have a couple of questions.

"The problems arose (for me anyway) when trying that with 4,000+ triangles."
When you were using the direct lighting approach, were you reusing SceneIntersect() and finding the closest hit, or did you use a specialized function that finds any hit and basically breaks out of the loop if you hit any triangle?

"The light source is very small and bright and the rest of the scene is dark. If a normal path tracer was used, the noise would be very slow to converge. "
When you say normal path tracer, you mean without explicit light sampling, right?

I went through the samples and noticed it's quite fast and converges instantly. I'll take some time out after work and try this out today :) Thanks again!


knightcrawler25 commented on July 30, 2024

@erichlof I tried this approach and it does indeed bump up the performance quite a bit (up to 53 fps -> 78 fps for a particular scene), though at a slight cost in convergence (or maybe I haven't implemented it properly).

Screens:
https://user-images.githubusercontent.com/11459803/50653655-418e0d00-0fb0-11e9-9809-9e071ec24085.png

https://user-images.githubusercontent.com/11459803/50653656-4488fd80-0fb0-11e9-81f7-2477c713efea.png

Another optimization that could help is rasterizing the first hit and tracing rays from the g-buffer. I saw quite a bit of improvement initially, but the catch is that you need two copies of the triangle data: one for the path tracer shader and one for the rasterization pipeline. It could be helpful for larger scenes where only one or two diffuse bounces are needed, whereas other ray types can be traced further (probably bad for thread divergence, though). Another improvement is using the BVH itself as LOD when you don't need to trace through highly detailed geometry after the first bounce (you've probably seen this already, though: http://raytracey.blogspot.com/2018/07/accelerating-path-tracing-by-using-bvh.html)


erichlof commented on July 30, 2024

@knightcrawler25
Yes, I wanted to be as detailed as time would allow, because you have your own path tracing project and you already know the ins and outs of the algorithms.

As to the term "normal path tracer", yes, I meant a traditional unidirectional unbiased path tracer. In other words, the rays go from the camera out into the scene, bounce off of surfaces, and you hope that they bounce into the light source by the time the loop needs to be done. If they don't find a light, their time and GPU computation are wasted, and the ray returns black, vec3(0,0,0) - hence the noise if you have a small, difficult-to-find light source in the scene. Explicit light sampling (or shadow ray casting) mitigates this problem, but then you run into the performance issues previously discussed, because you need to stall the other non-diffuse rays and search through the BVH geometry for a direct light sample.

Speaking of searching through the BVH, that's an interesting idea you mentioned about an intersection routine that is only concerned with a boolean answer - whether or not the shadow ray hit any geometry. I haven't tried this with my BVH yet, but I might give it a shot. I have done something similar in the past, even without a BVH, on older demos like Billiard Table and the Rendering Equation, where even trying to take direct lighting samples (shadow rays) of those scenes crashed my browser. So I made an additional special shadow-ray intersection routine that returned just a boolean result: false if no hit, true if hit. This did the trick and allowed the demo to compile without crashing. If you go back through the code change history of those older demos, you will see the extra shadow routine; I think it's called "isLightVisible()" or something. So I might give it a shot with the BVH too, although even 2 iterations through the bounce loop with a BVH scene (let alone the 4 that refractive surfaces need) were still crashing with direct light sampling. But it's worth a shot.
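For anyone reading along, the idea is roughly this - a hedged sketch, not the project's exact routine (the shape list and intersect helpers here are hypothetical):

```glsl
// Sketch: a boolean "any-hit" shadow query. Unlike SceneIntersect(), which
// must keep going to find the CLOSEST hit, this can return the moment ANY
// occluder is found between the surface point and the light.
bool isLightVisible(vec3 origin, vec3 toLight, float distToLight)
{
    float t;
    t = SphereIntersect(spheres[0], origin, toLight);
    if (t > 0.0 && t < distToLight) return false;   // occluded - early out

    t = BoxIntersect(boxes[0], origin, toLight);
    if (t > 0.0 && t < distToLight) return false;

    // ...test the remaining shapes (or BVH leaves) the same way...
    return true;  // nothing blocks the light
}
```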


erichlof commented on July 30, 2024

This is a second post, don't miss the one above ^ :-)
@knightcrawler25

Oh wow, I just saw your second post with the images - that's awesome! I'm glad it worked. Yes, there is a slight cost in convergence. Direct light sampling converges the fastest, but at a cost in performance, with GPU memory issues and possible crashes (at least in my WebGL case). Thanks for posting the results - I couldn't do a comparison like that, because the first direct lighting with BVH demo would not even compile for me, ha ha.

So after seeing this, I would conclude that if you can afford it, direct light sampling produces the quickest convergence. If you can still get 30 to 60 fps, depending on your requirements, then it's the best choice. However, if your memory requirements go up too much, or the frame rate drops below 20 fps, then it may be time to switch to the stochastic light sampling technique outlined by Peter Shirley. In my case, I don't have a choice where the browser and WebGL are concerned - I was forced to find another way. It was looking hopeless for WebGL for a while there, until I read that one sentence in his Ray Tracing in One Weekend book, which gave me a way out. ;-)

As to using the BVH as LOD, as outlined in RayTracey's (Sam Lapere's) blog: yes, I have seen that, and it was actually a source of inspiration for this demo: BVH Visualizer, which does the same thing his demo was doing, but you can slide the LOD up and down in real time. If you replace the jet fighter model with the Stanford bunny, it should look very similar to his demo. This technique might be very useful if you want to use the geometry as a rough lookup into whether or not a surface can see a light source. You could maybe use LOD 4 or something like that if the object was far enough away. The intersection routine would need a function argument with the desired LOD, which would be constantly changing if the camera is moving. I don't know whether this will cause any divergence - it will need more investigation.
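A hedged sketch of what that LOD argument might look like (all names here are hypothetical, and a real WebGL version would fetch nodes from a texture rather than index an array):

```glsl
// Sketch: BVH traversal with an LOD cutoff. Once a node at maxDepth is
// reached, its bounding box stands in for the triangles beneath it.
float SceneIntersectLOD(vec3 origin, vec3 dir, int maxDepth)
{
    int nodeStack[32]; int depthStack[32];
    int ptr = 0;
    nodeStack[ptr] = 0; depthStack[ptr] = 0; ptr++;   // push the root
    float tMin = INFINITY;

    while (ptr > 0)
    {
        ptr--;
        int n = nodeStack[ptr];
        int depth = depthStack[ptr];

        float tBox = BoundingBoxIntersect(nodes[n].minCorner, nodes[n].maxCorner, origin, dir);
        if (tBox == INFINITY || tBox >= tMin) continue;   // missed, or a closer hit exists

        if (nodes[n].isLeaf)
            tMin = min(tMin, TrianglesIntersect(nodes[n], origin, dir));
        else if (depth >= maxDepth)
            tMin = tBox;   // LOD cutoff: the box itself is the "geometry"
        else
        {
            nodeStack[ptr] = nodes[n].leftChild;  depthStack[ptr] = depth + 1; ptr++;
            nodeStack[ptr] = nodes[n].rightChild; depthStack[ptr] = depth + 1; ptr++;
        }
    }
    return tMin;
}
```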

Yes I have heard of rasterization on the first ray cast, then tracing for reflections and GI on the subsequent bounces. I think this is called the 'hybrid' approach. I believe this is what NVidia is currently doing with their new RTX graphics cards. They rasterize the triangles as normal, and then use that data to know what pixels need to be processed (or ray traced) and what type of reflection they need. I may be wrong but I think they only have specular reflection working at the moment, so stuff like puddles on the street, mirrors, a still lake, car bodies, etc. I don't think they have an option for turning true global illumination on as of yet.

I must admit, even if that hybrid approach is an option for WebGL (which I'm not sure it is), I don't know how to implement it. I haven't programmed anything that uses a G-buffer. I would have to go back and study the basics of modern GPUs in order to leverage capabilities like that. Is that something you have already tried and had success with on your own project?


knightcrawler25 commented on July 30, 2024

@erichlof I did try the hybrid approach initially, when I was using a naive BVH builder, and saw a large performance improvement for primary rays, but I had some issues with the depth buffer which I didn't know how to fix (I had only just started learning at the time), so I didn't pursue that path. I still have the code, so I'll try to revisit it this weekend and see if it helps in any way. I believe it should be doable with three.js as well. In the first step, you would render the objects as usual but output only their positions, normals, and material data to separate textures (using multiple render targets). Then use those textures as input for the path tracer: shade those points and launch rays from those positions depending on the material type. Here is an example of the first step that would be useful when implementing this: http://edankwan.com/tests/threejs-multi-render-target/webgl_mrt.html
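A minimal sketch of that first step's fragment shader (WebGL2-style GLSL; the varying names are hypothetical) might look like this:

```glsl
#version 300 es
precision highp float;

// Sketch: the rasterization pass writes the path tracer's "first hit" data
// to three render targets (the g-buffer) instead of shading anything.
in vec3 vWorldPosition;
in vec3 vWorldNormal;
in float vMaterialId;

layout(location = 0) out vec4 gPosition;
layout(location = 1) out vec4 gNormal;
layout(location = 2) out vec4 gMaterial;

void main()
{
    gPosition = vec4(vWorldPosition, 1.0);
    gNormal   = vec4(normalize(vWorldNormal), 0.0);
    gMaterial = vec4(vMaterialId, 0.0, 0.0, 1.0);
    // the path tracing pass then samples these textures and starts its
    // bounce loop at gPosition instead of casting a primary camera ray
}
```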


erichlof commented on July 30, 2024

@knightcrawler25
Thanks for the info and reference! I'll check it out! :)

