Comments (106)

n2k3 commented on July 30, 2024

That is why I suggest that your project read/parse the entire THREE scene and render it as is.
At least for the first render; then the user should be able to update the scene and re-send the changes to your shader...

@MEBoo that will be the functionality of the glTF viewer I'm working on, which will be merged into this repository when it's done. Currently it loads multiple glTF models into a Three.js scene, then you can call another function that will read the scene and prepare all models for path tracing. In the near future I'll make it so that users can drag & drop new model(s) into the viewer to replace the old ones and call the same function for the new models.
Once that feature is done, you could use just the prepare function for your own scene (and not use any of the glTF loading stuff, if your models use a different file format).

erichlof commented on July 30, 2024

@MEBoo @EtagiBI @n2k3

Success on multiple fronts! Not only is it compiling correctly and loading every time (without crashes), but I also got the frame rate up to about 30 fps, which is still real time, and I even applied a hammered metal material to the steel teapot on the left! Now it looks almost exactly like Eric Veach's original rendering from his bi-directional path tracing thesis. :)

[Image: multiple teapot objects]

MEBoo, I achieved compiling every time by still loading the 3 teapots separately (so it could be any type of model, any number of models, models different from each other, a different triangle count for each, whatever you want), but then, before uploading to the GPU, I merged the triangle data into one texture that still fits comfortably in a 2048x2048 three.js DataTexture. That way the shader doesn't have to read from 3 different geometry data textures (which was causing the crashing and the slower frame rate), but instead reads from one larger 'uber' scene data texture.
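
For anyone curious, here is a minimal sketch of that merging step (the variable names are mine, not the repo's): concatenate every model's packed triangle floats into one Float32Array and wrap it in a single three.js DataTexture.

  // merge all models' triangle data into one 2048x2048 RGBA float texture
  const mergedData = new Float32Array(2048 * 2048 * 4); // 4 floats (RGBA) per texel
  let offset = 0;
  for (const model of models) { // 'models' = hypothetical array of { triangleData: Float32Array }
    mergedData.set(model.triangleData, offset);
    offset += model.triangleData.length;
  }
  const sceneDataTexture = new THREE.DataTexture(mergedData, 2048, 2048, THREE.RGBAFormat, THREE.FloatType);
  sceneDataTexture.needsUpdate = true; // upload the merged 'uber' texture to the GPU once
  pathTracingUniforms.tTriangleTexture = { value: sceneDataTexture }; // hypothetical uniform name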

I guess it's fitting that this post is the 100th post of this epic thread! I think we can safely close out this topic. ;-D

erichlof commented on July 30, 2024

@meblabs
Yes, it is pretty complex, unfortunately. The easiest part was getting the basic path tracer to work. There are many tutorials out there, such as the ones on ScratchAPixel and ShaderToy, and of course Kevin Beason's smallpt in C++, which is where I started. In a couple hundred lines of code you can have a decent-looking path tracer. When you are rendering simple shapes like spheres, it's great, but as soon as you start trying to render large triangular models, everything slows to a halt or crashes. Then you realize an acceleration structure is needed to speed up rendering and load all those triangles. I have yet to find a full tutorial on streaming acceleration structure data (like a BVH - bounding volume hierarchy) to the GPU. There aren't even decent tutorials on how to create the data structure in the first place, let alone how to send it to a fragment shader as a texture. I was lucky enough to find a C++ CPU BVH example here on GitHub, and I ported it successfully (this is what is used in my current model rendering demos). But there is almost zero information on how to do that on the GPU, and in WebGL (1.0 or 2.0). Then there's the problem we were discussing, of how to store all that material data for each triangle made in a 3D modelling program. Again, there are no tutorials out there on the internet on storing that as a GPU texture.

I'm still looking at the AntiMatter source - I might post some of it here - I have gone in and renamed the minified variables and functions so instead of reading 'float m=m0(g, h);' it reads 'float result = calculateFresnel(vec3 intersectionPoint, int material_ID);' I'm still looking into it. :)

MEBoo commented on July 30, 2024

@erichlof thanks, and good work... let me know if I have to do something in the future ;)

erichlof commented on July 30, 2024

Hi @meblabs and all,
As promised, here is the antimatter glsl shader source in its original minified form: antimatter.min.glsl. It was all smashed together on one long line of code, so I did a quick document reformat, adding spaces and line returns for some readability.

And after going through his code line by line, here is my decompressed version: antimatter.glsl. I reverse-engineered it and added clear, meaningful variable and function names, for maximum readability.

There are a lot of similarities between his path tracing shader and mine here on my repo. I added a comment to the function
StackElement_data getStackElement_byIndex(const in float ix)
indicating this is how he did the workaround for the WebGL 1.0 limitation of no dynamic array indexing.
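
A simplified, hypothetical reconstruction of what that workaround looks like (the real function returns a struct and has many more branches): since WebGL 1.0 only accepts constant array indices, every possible index gets its own 'if' branch.

  const getStackElementGLSL = /* pseudo-GLSL sketch of the WebGL 1.0 workaround */ `
    float getStackElement_byIndex(const in float ix) {
      if (ix == 0.0) return stackLevels[0];
      if (ix == 1.0) return stackLevels[1];
      if (ix == 2.0) return stackLevels[2];
      if (ix == 3.0) return stackLevels[3];
      // ...one branch per stack slot, up to the fixed maximum stack depth
      return stackLevels[0];
    }
  `;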

I am in the process of decompressing the antimatter.js index file, which does all the WebGL initialization stuff and builds the BVH into a data texture for GPU consumption. This part is even more complicated than the shader's intersectBVH function. Creating the acceleration structure is always more difficult, because you have to take all of the thousands (or millions) of triangles, vertex data, and material data, sort them, and compress them onto a texture in a meaningful way. But hopefully, by studying his source, I can make some headway. ;-)

erichlof commented on July 30, 2024

Hi @meblabs and all,

Yay, initial success! (well, for the most part anyway). Wouldn't you know it, the very day I get the WebGL 1.0 BVH shader code working is the day that three.js releases version r97, which has initial support for WebGL 2.0! Haha

So now I'm at a fork in the road. The WebGL 1.0 version works great for models with fewer than 800 triangles, but as soon as I tried heftier models just a little over that, say 1000 triangles, it either will not compile, or I can see the rendered model for a brief moment (in all its glory) before the frame rate drops to 0, the browser tab crashes, and the WebGL context is lost, so you have to close and reopen the tab, which is annoying.

I have a couple of ideas about why this is happening. One is all those 'if' statements inside the workaround functions for WebGL 1.0's lack of dynamic array indexing - GPUs do not like too many branching 'if' statements; if you throw too many at them, the shader just crashes, or won't even compile. The only glimmer of hope is that maybe I overlooked something, because antimatter.js was able to load thousands of triangles with WebGL 1.0, and I am using the same workaround functions that I manually unpacked from the minified file.
The other culprit might be the quality of the BVH build. As you can see from my total overhaul of the BVH builder .js file, it is pretty complex, with multiple levels of recursion (definitely the most complex 300 lines of code I have ever written/ported). I printed out the tree for low-poly models, and it seemed to be doing a good job of recursively splitting the triangles at each juncture, all the way down until it reaches 1 triangle, which it designates as a 'leaf'. But as it scales up to thousands of triangles, who knows if it is still doing its job right - it's hard to print out 20,000 lines of data and make sense of it.

The other fork I could take is just to abandon the WebGL 1.0 workaround and go with WebGL 2.0, getting rid of all those 'if' statements in the workarounds. This sounds simple - just request the WebGL 2.0 renderer from the three.js setup code - but it is more involved. Since WebGL 2.0 uses OpenGL ES 3.0 (I know, confusing, right?), all of my established path tracing shader code in this repo will not compile right away. I have to manually go in and change stuff like 'attribute' to 'in', and 'varying' to 'in' and 'out', among other things. Here's a nice list of TODOs to get WebGL 2.0 working: WebGL 2.0 from 1.0
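
To give a flavor of those edits (this is not the full checklist - see the link above):

  const webgl1Declarations = `
    attribute vec3 position;   // GLSL ES 1.00 (WebGL 1.0) vertex input
    varying vec2 vUv;          // passed from vertex to fragment shader
  `;
  const webgl2Declarations = `#version 300 es
    in vec3 position;          // 'attribute' becomes 'in'
    out vec2 vUv;              // 'varying' becomes 'out' here, and 'in' on the fragment side
  `;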

So once I can get WebGL 2.0 working, I can see if dynamic array indexing under WebGL 2.0 helps my crashing issue. As always I'll keep you all updated with my progress. Sorry it has been quiet around this repo the last couple of months, I have been wrestling with BVHs and WebGL 1.0 limitations and the like. :)

erichlof commented on July 30, 2024

@meblabs
I'm happy to report I successfully integrated WebGL 2.0 into my older test path tracing demo! So that means I can definitely go the WebGL 2.0 way now. I'll be making some updates to all the demos on this repo very soon. I need to manually go in and reflect the changes on each shader and the THREE.WebGLRenderer() setup code inside the init() functions for each demo. But it should work.

Yeah, I don't know yet why the WebGL 1.0 version crashes after 1000 triangles. You mentioned the recursion call amount, but actually no GPU shaders allow recursion of any kind, hence the stackLevel[x] approach: you push and pop all the various branches as you descend the BVH tree, which is why I needed dynamic array indexing. The CPU side, on the other hand, loves recursion, so that's why I went the recursive way in the Builder.js file - that uses JavaScript and is strictly CPU-side. I wish GPUs allowed recursion; that would make things much simpler! ;) I'll keep investigating, but in the meantime I'll try the WebGL 2.0 way, which has more promise.
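
The push/pop idea, sketched in pseudo-GLSL (names are illustrative, not the repo's actual code):

  const bvhTraversalSketch = `
    float stackLevels[28];               // a manual stack takes the place of recursion
    int stackPtr = 0;
    stackLevels[stackPtr++] = 0.0;       // push the root node id
    while (stackPtr > 0) {
      float nodeId = stackLevels[--stackPtr]; // pop - this is the dynamic array indexing
      // ray misses this node's bounding box -> continue;
      // leaf node -> intersect its triangle(s);
      // inner node -> push both children and keep descending
    }
  `;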

erichlof commented on July 30, 2024

Hi @meblabs
Sorry for the late reply, I have been busy migrating the whole repo to WebGL 2.0. Also, I just changed the GPU random number generator from a common GPU hack found all over the internet - fract(sin(largeNumberSeed)) - to a more traditional integer hash using bit shifting, provided by iq on ShaderToy (WebGL 2.0 allows bit manipulation inside shaders). The result is faster, cleaner convergence in all demos.
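
For reference, the two styles look something like this (my paraphrase - the exact constants and hash in the repo may differ):

  const oldRandGLSL = `
    // the classic fract(sin()) hack - cheap, but patterns emerge as the seed grows
    float rand(vec2 seed) { return fract(sin(dot(seed, vec2(12.9898, 78.233))) * 43758.5453); }
  `;
  const newRandGLSL = `
    // an integer hash in the same spirit (PCG-style); needs WebGL 2.0 for uint and bit ops
    uint seed;
    float rand() {
      seed = seed * 747796405u + 2891336453u;
      uint word = ((seed >> ((seed >> 28u) + 4u)) ^ seed) * 277803737u;
      return float((word >> 22u) ^ word) / 4294967295.0;
    }
  `;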

This bit manipulation capability also points towards Morton Codes and Z-order curves for real-time building of dynamic BVH's that are created every animation frame (2 or 3 milliseconds for each build of the entire scene!) which would allow animation of triangle geometry while being rendered.

If you have a question regarding materials and shading, maybe just open a new issue so we can focus on that with the discussion. That way all viewers can benefit from your questions and my responses (hopefully, ha).

erichlof commented on July 30, 2024

Hi @EtagiBI , @meblabs , and everyone,

Just wanted to let you know that I recently implemented a much better .obj model BVH builder. This Iterative version replaces the Recursive version that was prone to crashing due to infinite loops. You can check out the demo: BVH WebGL 2.0.

Next on the TODO list is what this issue started out as: handling multiple OBJ files in the same scene. I am actively working on this, now that the BVH builder for single .obj model files is much more robust.
I'll keep you all posted with any progress I can make!
-Erich

erichlof commented on July 30, 2024

@meblabs
Yay! About multiple OBJ files: I'm trying to decide how to tackle this. I may be able to use the THREE.OBJLoader multiple times to load each model, and just assign each object its own GPU texture with its own BVH. The alternative is to merge all the scene models' triangles into one huge BVH texture, but at the outset that seems a little too rigid to me (what if we want to individually move the models around the scene?, etc...)

First, baby steps - tonight I just figured out how to merge the multiple parts (children / subMeshes) specified in a single .obj file. If you look at some .obj files like male02.obj, there are separate 'objects' designated with the letter 'o' (or 'g' in this case) inside the file that define the different subMeshes/children of the model. For instance, male02.obj has the body suit frame as the main parent object, then the head, hair, feet, and hands as child objects. Now this is the type of file that 'does' need to be merged into one bigger BVH, because it makes sense to create the BVH from all the triangles of the various parts; they all belong to the same model and are typically located right next to each other.

I'll be posting a demo of that soon! :-)

erichlof commented on July 30, 2024

@meblabs
Sorry you already knew about some of the topics I was writing about - I want others to be able to publicly read our discussion here and know what we're talking about. About when to intercept the data stream: yes, that is a good suggestion, to capture it after the three.js mesh has been created, thus providing a model-format-agnostic path tracer.

Luckily, the smart folks behind three.js have provided various loaders for the dozens of possible model formats in the wild, and they convert them all into a three.js 'Mesh' with geometry and materials. I can directly access the geometry; that's how I got things to work in the first place. But with materials it's a little less clear to me at the moment: I need to study how three.js encodes those on a per-triangle basis, if at all, or, the next best thing, on a per-child-object basis (so all the triangles for that child part of the model share the same material properties) - I could make that work with that info too.
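
In code terms, that interception point could look something like this sketch (the packing step is the hard part discussed above):

  loadedModel.traverse(function (child) { // 'loadedModel' = whatever any three.js loader produced
    if (child.isMesh) {
      const positions = child.geometry.attributes.position.array; // raw triangle vertex data
      const material = child.material; // one material per child/subMesh
      // pack positions (and eventually material properties) into the path tracing data texture
    }
  });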

erichlof commented on July 30, 2024

Hi @MEBoo
Thanks for your suggestion about the three.PhysicalMaterial - yes, that one would be the best fit, although the three.StandardMaterial could work as well, because it has enough PBR parameters to make do. The issue stems from the fact that (as you know now) three.js converts all OBJ+MTL models into meshes with three.PhongMaterial as the default, and PhongMaterial is not enough info for my path tracer to do the model justice. It is not three.js's fault: historically, the OBJ+MTL model system was defined with the Blinn-Phong lighting model in mind, which is why three.js chose that material. I do hope Ben Houston's suggestion of extending the MTL format to include PBR will gain wide support someday. But until that day, in your opinion, should I suggest GLTF as the default path tracing model format, rather than the older OBJ+MTL? For those who only have OBJ+MTL, there are plenty of free converters online, like the aforementioned Clara.io

erichlof commented on July 30, 2024

@MEBoo and all,

Ok thanks for responding. So it should be up to the user to make sure that the materials are in PBR form before they are loaded into the path tracer. With Blender, Clara.io, etc. there are plenty of free software solutions to help with that. In my opinion, the format which offers the least resistance is GLTF. It is an open format, and all modern game engines and modelling software have importers, converters, and exporters for GLTF. I believe that three.js will turn the GLTF vertex encoding into a three.Mesh with enough PBR info to use in my path tracer. Do you agree?

I'm just trying to get an idea of what the user's workflow would be: I'm thinking they first acquire a model with PBR textures, or PBR parameters set, then use a free converter to convert it to GLTF form. Then they use three.GLTFLoader (inside my html file) to load the entire scene with its models and lights. This then is converted by three.js to a three.Scene with various three.Mesh's. Finally, I extract the PBR info, textures, vertex data, lighting info, etc. and then place everything in the GPU BVH. Does this sound about right? Am I missing any steps?
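
That pipeline, condensed into a sketch (prepareSceneForPathTracing is a hypothetical name standing in for the extraction and GPU BVH steps):

  new THREE.GLTFLoader().load('scene.gltf', function (gltf) {
    gltf.scene.traverse(function (child) {
      if (child.isMesh) {
        const m = child.material; // glTF typically yields a MeshStandardMaterial
        // m.color, m.metalness, m.roughness, m.map, etc. are the PBR info to extract
      }
    });
    prepareSceneForPathTracing(gltf.scene); // build the data textures + GPU BVH
  });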

Thanks for your input. To others reading this, feel free to chime in. I'm just trying to get a feel for how everything should flow in and out of the path tracer.

erichlof commented on July 30, 2024

@MEBoo
Thanks for hashing through all of this with me. On the issue of scene loading: yes, GLTF supports most of the features I need to read in when importing objects. I might just leave the OBJ+MTL path alone and let it sit in this repo as an example. It was giving me issues because sometimes the loader couldn't find a material .src for some reason, and the web page would just be black - I want examples to work every time. Also, the aforementioned lack of PBR creates a lot of problems and guesswork when I'm trying to import it into the path tracer.

I think I'll stick with GLTF for now, unless something better comes along. As you said, if I can just parse the three.Scene, it shouldn't matter what loader is used. However, in terms of ease of use and a frictionless workflow, I wanted my project to provide a clear way to get objects into the path tracing scene, hence the GLTF loading example. I could have made everybody convert their scene themselves into a Three.Scene in JSON format (which would be simple to parse), but not only is this seemingly cumbersome, the size of a scene described in JSON is not web-friendly like GLTF is (GLTF packs down so you can better transmit it over the internet). So I'm providing a sort of middle step, where I load the user's .gltf scene using the GLTFLoader, and then, after three.js turns it into a Three.Scene() (the format I ultimately need), I can parse the scene, extracting the bits of info that I need.

Finally, on the materials live-edit issue you mentioned: I think it is possible. I don't have any examples currently, but if you were to somehow change the material of a sphere from diffuse to glass inside the path tracing shader, it would update instantly at 60 fps. The only bottleneck when trying to do this with a model made out of triangles is that all the material data lives in a three.DataTexture that has been loaded onto the GPU. It is possible to change the model's materials by altering the triangle_array[] fields; however, it will not update visually until the changed triangle_array[] texture has been re-loaded onto the GPU. But that shouldn't take more than a second or two, I would imagine. I have yet to try this, but I might give it a try in the near future just to make sure it works. If a lot of editing is desired, I could even look into somehow incorporating my path tracer into three.js's fully-functioning editor
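
The re-load step itself would presumably be tiny (names hypothetical; the point is that three.js only re-uploads a DataTexture when flagged):

  triangle_array[materialOffsetForMesh + 0] = GLASS_MATERIAL_ID; // edit the packed material field
  triangleDataTexture.needsUpdate = true; // flag it so three.js re-uploads the texture to the GPU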

erichlof commented on July 30, 2024

@MEBoo

Ok thank you for clarifying. I believe I understand the best way forward. I will continue to support and encourage the use of the GLTF loader for getting objects imported, then I just traverse the three.Scene and extract the info. Your editing scenario has sparked my interest: I might throw together a quick non-triangle model demo (i.e. a simple sphere scene) and include a basic html button that once pressed, switches back and forth between materials that are applied to the sphere, in real time. If it goes smoothly, I could apply that to the models loading scene, by using the drop-down controls gui that three.js uses on most of its demos, in order to provide more buttons/sliders for PBR parameters.
Thanks again for your input :)

erichlof commented on July 30, 2024

@MEBoo
Sorry for the delayed response - I've been hard at work implementing the material switching feature we were discussing a couple of posts back. I'm happy to report: it works great! Even better than expected:
Switching Materials Demo
When a different material is chosen, the path tracer instantly traces the new material. The switch happens at 60 fps! I like this style of demo so much that I tossed out the old static materials demos (1-4) and replaced them with this snappier new one! The GUI works perfectly on mobile too ;)

Regarding spotlight handling, yes the point light inside of the cone (or cylinder) should work in theory. I will have to test it out - Ha ha, another demo project I'm already thinking about now! Thanks for the suggestion, I'll keep you posted!

erichlof commented on July 30, 2024

@MEBoo
Sounds good to me! Yes, the spotlights will be made out of reflective metal. If you want, you can open up a new thread about supporting various three.js light types. And yet another thread for supporting various materials/textures by parsing the scene. Yes we kind of took over this thread from the OP (sorry about that @EtagiBI ). I still intend on working to get multiple OBJ (or multiple GLTF models rather) supported. It will require more BVH work, which is the most complicated part of the code-base and most sensitive to change. I'll post to this thread if I have any breakthroughs on multiple objects.

MEBoo commented on July 30, 2024

@erichlof yes, by "three.scene parsing" I also mean a BVH of BVHs, since without that it is impossible to render a scene with multiple meshes...
I'll open the other threads once I can test the scene parsing, so I can try something more complicated with maps and lights myself ;)

See you soon!

erichlof commented on July 30, 2024

@MEBoo
Hello, Merry Christmas! Yes I have been making updates and changes here and there across the codebase - nothing earth-shattering, but just incremental improvements to the various pieces in order to make the whole project more unified and, in some cases, bug fixing and making things work properly (i.e. mobile swiping camera rotation).
I had started down the path of adding different light types to scenes containing BVH's (like I mentioned in an earlier post on this thread), but I had to take a detour, because I quickly found out that a small light source (like a point light or a light bulb) was taking way too long to converge, with lots of dark noise in the images. This is because I was trying to use old-fashioned path tracing and let the rays try to find the light source by chance. The frame rate was still great, 30-60 fps, but the image quality was unacceptable. If you noticed, my current BVH demos have a huge sky-light dome/sphere that makes the model renderings converge very quickly, so I had been skirting around this issue while getting my feet wet with BVHs.

Now that I couldn't put off the problem any longer, I decided I must investigate further exactly what is going on in the BVH (hence the new BVH visualizer demo). When I was satisfied with that, I turned to the lighting algorithm. At first I tried what I already had working in my Cornell Box demo, Geometry Showcase, etc., which is direct light sampling on every diffuse bounce. With a BVH introduced into the mix, this sped up the convergence but tanked the frame rate, and sometimes crashed the WebGL rendering context, because the BVH was no longer being traversed just 4 times per frame like in my current BVH demos - it could be traversed 8+ times because of the additional direct light sample through all the scene geometry.

Luckily, I was reading Peter Shirley's new Ray Tracing in One Weekend series (a great read) just for fun and possible inspiration, when I came upon a single sentence in which he says you can either belong to one camp and sample the light directly on every bounce, a.k.a. send shadow rays (as everyone, including me, is currently doing), or go with the minority camp (of which he is a member) and statistically just aim more rays at the small light source, down-weighting the contributions according to probability theory. On a whim I tried this approach in the current demos without the BVHs (so I could measure the success rate), and it works great so far! It converges as fast, or almost as fast, depending on lighting complexity; the best news is that the frame rate stays at a solid 60 fps (I don't have to send extra shadow rays on each diffuse bounce), and even on mobile, where the frame rate would have been 10 fps, it goes up to 20-25 fps because of the reduced thread divergence - the rays proceed in a more lock-step fashion.
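
In estimator terms (my restatement of that idea, not code from the repo): if diffuse bounce directions are drawn toward the light with probability p, and cosine-weighted otherwise, each sample just gets divided by its combined pdf to stay unbiased:

  pdf(w) = p * pdf_light(w) + (1 - p) * pdf_cosine(w)
  estimate = brdf(w) * cos(theta) * L(w) / pdf(w)

So the rays deliberately aimed at the small light are down-weighted in exact proportion to how much more often they are sent, and no extra shadow rays are needed.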

So that is what I've been working on lately. It's kind of a twisty path I'm on right now, because there are multiple ways of approaching the problem. Sorry it's been quiet lately on the BVH side of things, but I feel I need to explore this avenue of lighting so that when I introduce different light types, multiple objects with their own BVH's, and a BVH for the BVHs, I can be confident that it won't crash the browser's WebGL context, and that it will produce a nice image quickly. Soon I will post the lighting changes to the entire codebase - they work really well. I'll keep you posted! ;-) Happy Holidays!

erichlof commented on July 30, 2024

@MEBoo
Yes it should converge to the same quality result. I made the repo-wide changes last night. Hopefully it is a seamless transition and you might not even realize that anything is different under the hood, which is a good thing! The only thing that is different is that it won't crash when I add the BVH with different light types, ha! ;-)

erichlof commented on July 30, 2024

@MEBoo and all,
The following is a copy of my response to question #22

I'm happy to report that the initial test of the new stochastic light sampling technique works great with the BVH! BVH Point_Light_Source Demo
The light source is very small and bright, and the rest of the scene is dark. If a normal path tracer were used, the noise would be very slow to converge. But as you can see, with the new approach the image resolves almost instantly. And the best part is that the cost is the same as that of a traditional path tracer - 4 bounces max through the BVH structure to get the refractive glass surfaces looking correct.
I will ramp up the triangle count and try some different light source types, like spot lights and quad lights, but from what I can see so far, things are looking good!

Just wanted to share the promising results on this thread as well!
-Erich

erichlof commented on July 30, 2024

@MEBoo
I got rid of the bright fireflies! BVH_Point_Light_Source Demo

erichlof commented on July 30, 2024

@MEBoo
Yes, the next step is different light types. The spot lights will be made by placing a variable-size bright spherical bulb (usually white, but it can be changed to other spot colors) inside a variable-size open metal cylinder (usually black metal, but it can be changed if desired). The rays that go toward this light source will only pick up light if they find the bulb inside the opening of the cylinder, or hit the inside of the reflective cylinder and then find the bulb on the next bounce. It should work, in theory. :-D
Rectangle area lights are already working - check out the Billiard table demo and the Lighting Equation Demo (from Kajiya's famous 1986 paper). Dome lighting and large spherical lights already work too - see the current BVH demos (as you know), and check out the Geometry Showcase demo for multiple large sphere lights.

Speaking of that demo, to answer your question about handling multiple lights, and multiple different types of lights in the same scene: at first I will do what I did in the Geometry demo (and the Lighting Equation demo with its 3 different quad lights) and treat them Monte Carlo style, with randomization. Basically it's like a roulette wheel: on every animation frame, you spin the wheel, which has as many slots as there are light sources. Where the ball lands decides which light source is 'the winner' for that series of 4 bounces. On the next animation frame you do it again for the upcoming loop of 4 bounces. Eventually every light source will be the winner. Now, this has worked with 3 or 4 light sources; however, I have yet to try it with unlimited light sources (like welding particles), or with vastly different types all mixed together - say a dome, sphere lights, a quad area light, and a spot light in an interior modern home scene, all at the same time.

My prediction is that it will work, but the amount of noise, and the time needed to converge that noise away, will go up a little with each addition to the lighting scheme. This is because fewer and fewer rays get devoted to each light source in the big list of light sources. You can't get to all of them each frame, otherwise the framerate would suffer too much.
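
The roulette wheel, sketched in pseudo-GLSL (N_LIGHTS and rand() are stand-ins):

  const lightRouletteSketch = `
    int winner = int(rand() * float(N_LIGHTS)); // spin the wheel once per series of bounces
    Light light = lights[winner];               // dynamic indexing, hence WebGL 2.0
    lightContribution *= float(N_LIGHTS);       // up-weight: each light is only picked 1/N of the time
  `;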

erichlof commented on July 30, 2024

@MEBoo
I just got Spot Lights working! Here's a demo: Spot Light Source Demo
As you can see, the casing of the spotlight is cylindrical, and depending on which side you're looking at, it is black on the outside (or could be any color, like gray or white, whatever you want) and reflective metal on the inside. The bulb is made out of a sphere, which is white in the demo, but you could easily change it to make colored spot lights.

The new stochastic sampling works great in these high-contrast lighting scenes! Normally, with darkness in the background and a very bright spot on the scene subject, the noise would have been unbearable. But with this new technique, the renderer is able to search through the BVH inside a WebGL browser shader, running at 60 fps, and converge almost instantly!

erichlof commented on July 30, 2024

Hi @MEBoo
I improved the sampling of the hemisphere above the diffuse surface that needs to be lit. This results in smoother lighting with less noise. More importantly, I found a way to better sample the light sources by adding a diffuseColorBleed variable to most of the demos. This lets the end user dial up and down the amount of gathered diffuse color bouncing versus the amount of direct lighting and shadows. It ranges from 0.0 to 0.5, and at 0.5 you get exactly what I had in the past for all the demos: basic path tracing with full color bleeding. But if you dial it down, convergence goes way faster, at the cost of a slight loss of color bleeding (which in practice isn't even that noticeable).

Also, I updated most of the demos because I am trying to unify the path tracing algorithm so that a similar plan of action or algo can work for the various demos and their individual lighting needs. The only demos this doesn't apply to are the outdoor environments and the 2 bi-directional scenes from Eric Veach's paper (these need different strategies).

Click on the Geometry Showcase and Quadric Geometry demos and you should notice they run faster, converge more quickly, and provide a smoother experience!

erichlof commented on July 30, 2024

@MEBoo
I'm about to start working on multiple models (finally - which was the title of this epic thread in the first place, LOL!), and a BVH for the BVH's after that. On the side, I have been revisiting the bi-directional scenes to see if there are any improvements to be made. This is because with the BVH's inside a room or house, for example, most of the light sources are hidden inside cove lighting, recessed lighting panels, underneath lamp shades, etc. Even though I recently got the spot lights and point lights working fast with the BVH, those demos are just a tad idealistic as far as real-world architecture and lighting plans go.

Those demos have exposed point and spot lights, and the older demos have huge spheres hanging in the air or big quad area lights (like the museum demos). Things work well in those idealistic lighting conditions, but once you try to render an apartment or bathroom with recessed lighting, the noise returns big time. The only solution to this indoor problem that I can see at the moment is bi-directional path tracing, so I am revisiting some of that old code to see if I missed any optimizations.

erichlof commented on July 30, 2024

@MEBoo
Thanks! I've been busy working on moveable BVHs. I've had a breakthrough - I'm putting together a small demo to show the new functionality. Will post it soon!

erichlof commented on July 30, 2024

@MEBoo
Yeah, that's exactly what I want to have path traced inside the engine! It is in fact GPU-based: the bones and animation data are stored in a GPU data texture (kind of like my BVH data texture), and then the joints' rotations and offsets are read in the vertex shader and applied to the vertices of the mesh. I'm not sure how to trace all of that though; there could be as many as 200 bone matrices, each of which is 4x4 floats. That's a lot of inverses to do! I'm not sure if it'll work out in the end, but it's worth investigating.
In the meantime, I'm about to refactor all the demos and get rid of the duplicate code from .html file to .html file. This change will make the demo collection less error prone and easier to maintain. :-)

erichlof commented on July 30, 2024

@MEBoo Hi!
I have been working on loading HDR equi-rectangular images and using them as the sky backgrounds. It's going well so far; the new glTF model viewer (#27) will benefit most from these backgrounds. Sorry it's been a little quiet lately - I had to read up on HDR images: how they work, how three.js handles them, etc. But user n2k3 and I should have a working demo soon!
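
The loading side will probably look roughly like this (three.js ships an RGBE loader for .hdr files; the uniform name here is made up):

  new THREE.RGBELoader().load('sky.hdr', function (hdrTexture) {
    hdrTexture.mapping = THREE.EquirectangularReflectionMapping;
    pathTracingUniforms.tHDRTexture = { value: hdrTexture }; // sampled as the sky dome in the shader
  });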

I am going to try adding multiple model files to the Difficult Lighting demo - the one with the slightly cracked-open door and the 3 objects on the coffee table. In the original, those are supposed to be 3 Utah teapots with 1000 triangles each and a different material on each one. The current demo has ellipsoids, but I always wanted to have the 3 classic models in there, and now I think I have the means to add them. That will be step 1 toward getting multiple BVH objects in. Step 2 will be a BVH for the BVH's!
:)

erichlof commented on July 30, 2024

Hi @MEBoo and @EtagiBI
Well, good news and not-so-good news. The good news is that I successfully loaded multiple OBJs (now GLTFs in the new refactor), which was the original title of this now-epic thread! Here's a little preview:

[Image: multiple OBJ models preview]

It is finally starting to look like the original classic scene by Eric Veach in his seminal paper! I realize now that I could have done some trickery with offsetting the casting rays (the way instancing is done in ray tracing), since all of the objects have the same shape, and I just might do that for the final demo - but this version is doing it the hard way as a proof of concept: it loads a teapot of 4,000 triangles, makes its BVH, and uploads it to the GPU for path tracing; then loads another teapot of 4,000 triangles, makes its BVH, uploads it to the GPU; then loads yet another teapot of 4,000 triangles, makes its BVH, uploads it to the GPU. So in the end we have 12,000+ triangles spread between 3 models, each with its own BVH and materials, as you can see in the image.

Now for the not-so-good news: if you look at the top-left corner framerate, it has gone down by half. This demo used to run on my admittedly humble laptop at 50 fps; now it is at 25 fps. Still real time and interactive, and amazing that all this is happening in a freakin' browser, but nonetheless not as fast as I was hoping for. It is safe to say that adding more objects would eventually grind the shader to a halt.
Speaking of shaders, the other bit of not-so-good news is that on first start-up compilation, it crashes my WebGL context and results in a black image. I have to reload the webpage, and then it usually compiles the second time (not sure why that is). This is not only annoying for me, having to keep doing it every time I change something in the code and debug, but also for the end user - I don't want to crash everybody's webpage and then ask them to reload a second time just to get it to work so they can see the cool demo.

So I will continue exploring ways of, first of all, getting it to compile the first time, every time, and then increasing the framerate (which is less crucial, but would be nice). As always, I'll keep you guys updated. I just wanted to share the initial success (tinged with a little failure, lol) and finally make progress on this epic thread! Sorry it has taken this long to get to this point, but the other avenues I have gone down have helped get this multiple-OBJs feature started and hopefully improved! :-)

erichlof commented on July 30, 2024

@MEBoo
Ok, thanks for clarifying. Yes, at the moment the loadModel and init functions just take the created three.scene and go through its children, saving the materials and geometry. So it is doing almost exactly what you suggest. All my examples use the glTF loader, but as long as the user has a three.scene, that should be sufficient. In the future, just for example purposes, I might make a small demo that loads a pure three.scene in JSON format (whatever the output of the three.js editor that mrdoob created and maintains is). That would show that the type of loader does not matter; you just need a three.scene in the end. :-)
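
That future demo could be tiny, since THREE.ObjectLoader already reads the editor's JSON format (a sketch; prepareSceneForPathTracing stands in for the extraction + BVH steps):

  new THREE.ObjectLoader().load('scene.json', function (scene) {
    prepareSceneForPathTracing(scene); // same entry point as with the glTF examples
  });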

erichlof commented on July 30, 2024

Hello @EtagiBI

Thank you for posting your question here - I'm sure there are others out there who are wondering the same about the ability to load multiple OBJ files, and possibly MTL files as well.

The short answer is, unfortunately, not at the moment. I currently have a demo that loads a single OBJ file. So my path tracer has the raw ability to load a list of vertices/faces in the OBJ file format and path trace the mesh relatively quickly. However, that Crane model in the demo only has 36 triangles. Even that amount lowers the frame rate on my laptop from 60 to 40 fps. Things get worse if you, for instance, go into the HTML source code, comment out the Crane model, and choose another listed model with a higher poly count. So if you and your friend are trying to load rooms full of hundreds or even thousands of triangles, the frame rate will probably slow to a crawl, like 1 or 2 fps, or the WebGL context will be lost altogether. Users would not be able to smoothly look around your room with their mouse, and would probably get frustrated.

Now for the long answer, if you're interested ;-)

When I started this project, I was naive in thinking that if I could make geometric mathematical shapes render at 60 fps, then I could eventually just throw a list of triangles at it and it would be the same. But sadly, checking rays against every triangle in a detailed mesh really adds up. And that's just for the initial raycast - you have to multiply that cost by 4 if you want to bounce those same rays around the room to get global illumination. So I looked into acceleration structures like grids, KD-trees, and the BVH - the latter seemed to be the preferred choice for ray tracing and path tracing, so I ported a BVH builder from C++ over to JavaScript. This helps a lot (see my BVH attempt), but the problem is that WebGL 1.0 does not support dynamic indexing of arrays inside a loop, such as correctBranch[x]. In WebGL 1.0, that 'x' must be a constant like '2' or a predefined value. That means I can't prune large parts of the tree, which is the whole point of having an acceleration structure in the first place. When WebGL 2.0 is supported inside three.js (hopefully soon), I can revisit the BVH with dynamic indexing allowed, and even look into the LBVH, which uses Morton codes and bit manipulations (allowed only in WebGL 2.0) and can rebuild the entire structure on the GPU for a scene of moving, dynamic triangles - every frame.

I would love to be able to list a bunch of meshes like you and your friend want to do in the room scenes, and have it all just work. But getting it to run smoothly is the most difficult aspect of ray/path tracing I'm finding out. Traversing the BVH is not too hard to understand, and it can be done in a tight loop inside a GPU shader in 30 lines or so. What is difficult to grasp and implement on the GPU, is the BVH builder, which must quickly load thousands of triangles into texture memory, create bounding boxes and indices for all of those triangles, and then order them in some efficient fashion to be examined and pruned by the tracer. Unfortunately, GPU BVH builders and source code (even in CUDA, let alone WebGL) are poorly documented and explained. It's no coincidence that a lot of cutting-edge research has been done in this area, and if good results are found, they are assimilated into existing renderers that might be behind patents and not available for public viewing of the GPU source code. However, maybe this will change with more people out there like me trying this stuff out on their own, and posting it online for all to learn from.

It's frustrating, because I know interactively rendering large amounts of triangles is possible, even inside seemingly lowly WebGL. For instance, I found this cool piece of software, which is similar to my project but has much more capability in terms of rendering large meshes: Antimatter. Scroll down and hit the 'Launch Prototype' button; then you can choose from a list of large meshes. If they can do it in the browser with a BVH, it has to be possible. I would have said you guys should go with that for your project, but it looks like it is going to be for purchase only, and not open source - plus I don't know how deep into needing three.js your project already is.

I wish I could be of more help to you both, I hope you can find a different approach or a workaround in the meantime. If you would like to ask anything else or need clarification or advice, please don't hesitate.

-Erich

EtagiBI commented on July 30, 2024

Wow, thanks for a thorough reply!

I forgot to mention that we want to render our scenes server side, not in real-time. The whole plan looked like this:

  1. We create a scene in our "home-made" three.js 3D planner
  2. We press some "Render" button that throws our ticket in a rendering queue on server
  3. We patiently wait...
  4. We receive a rendered photorealistic image!

Is it possible? I clearly understand that real-time rendering of multiple textured models is really resource-heavy.

erichlof commented on July 30, 2024

Hello @EtagiBI
Ah, so offline rendering while you wait is OK. Hmm, I hadn't imagined this kind of use case yet for a browser path tracer. I actually need to get some sleep; it's late here where I am. But let me think about it for a little bit, and if I can't do a demo myself with the tools I have in this repo, I may be able to point you in the right direction: a JavaScript path tracer that can handle any amount of mesh data, but maybe without fancy or crafty GPU shaders to make it go real time, since you don't mind it being an offline rendering.
I'll get back to you soon, thanks!

erichlof commented on July 30, 2024

Hi again @EtagiBI

After thinking about it for a while, unfortunately I don't think the tools I have developed here can help you at the moment. What you need is a robust loader, a simple editor, and an offline beauty renderer. What I have here currently is a non-robust, simple-shape renderer that goes as fast as possible, so you can move and look around in real time and still get correct global illumination effects. I started the project with the latter in mind and geared every line of code towards that purpose. I would essentially have to start over to do what you have in mind right, instead of trying to hack something robust (loading .obj files, textures, etc.) into a blazing-fast GPU path tracer that was made for different purposes.

But I really like your idea, which, if I understand correctly, is to use three.js and the browser as a visualizing tool: load .obj files into a simple editor where you can click and move furniture around a virtual room using the default three.js WebGL renderer at 60 fps. Then, when you are happy with how the room looks, you hit 'Render' and it dumps all of that data (a big list of triangles, vertex normals, texture uv's) into the path tracer, which ray casts against those triangles, finds the closest intersection, looks up the normal and uv data for that triangle, and colors that pixel. When all pixels have been calculated, it saves the final render as a .png or something. That part would take several minutes without acceleration structures, but you mentioned that you don't mind waiting offline for it to finish the calculations. I might go off and attempt a side project of my own that loads a whole scene of .obj files and then renders offline, because this sounds like a neat project idea - but mine would be minus the editor part; that is a whole other tool set that would take weeks to develop (see the three.js editor).

It just occurred to me that someone has already done what you are wanting: Ben Houston and his Clara.io project. Here's the link: Clara.io. Ben is a nice fellow who has contributed a lot to the three.js code base. He wants his browser-based editor/renderer/modelling software to be able to compete with 3ds Max, Maya, Blender, etc. I hope he and his team succeed, because having the ability to be online and collaborating in real time while using a sophisticated 3D modelling package is a great idea. In a nutshell, his software loads meshes of any file type (.obj, .fbx, etc.), uses the three.js renderer in real time to drag, reshape, and resize the scene and meshes; then you hit fast preview and the sophisticated V-Ray rendering farm jumps into action. If you like the preview, you hit full render, wait, then hit save image, and you're done. It's free to try out; you might give it a go - if only to get some ideas for your project.

Sorry I couldn't be of more help with my tools in this repo, but hopefully you have an idea of what needs to happen in your software, and hopefully I at least pointed you in the right direction. If you like, you can post any future findings or breakthroughs, or links to your project here so we can all benefit.

-Erich

Edit: just found a perfect resource that does what you are wanting and is open source: XRay
Hope this helps!

EtagiBI commented on July 30, 2024

Hello @erichlof!

I'm sincerely impressed by the amount of useful information you've given. Thank you very much!

I'll try to keep this topic up to date.

erichlof commented on July 30, 2024

Hi @EtagiBI ,

Glad to be of help; sorry my existing project couldn't just work out of the box for what you wanted. However, you have inspired me to start planning ahead and working towards being able to call THREE.OBJLoader on various meshes and have their data saved to a texture to be consumed by the GPU and path traced. Earlier today I figured out how to let THREE.OBJLoader do all the .obj file parsing work (a task I kind of understand, but not completely), and then 'hijack' the newly created THREE.Mesh, open it up, and read and save its geometry data to a texture. Next is to try to do the same with the materials data from a .mtl file. I'll post a short demo here soon!

Thanks :)

erichlof commented on July 30, 2024

Hi @EtagiBI
Just wanted to give you an update on my progress with this issue - I didn't want you to think I had forgotten about you and this feature request! :) I have figured out how to use three.js's THREE.OBJLoader (I was previously using a less robust custom one that I had ported from C++), and I am now able to hook into the resulting mesh and save its entire triangle data to a THREE.DataTexture, which can be saved and quickly loaded onto the GPU for use with the path tracing engine. Demo and its Source Code

I understand the .obj and .mtl file formats now (they were well designed, which is why they have lasted so long, I guess!), but I'm trying to decide how to retrieve the material data and then make physical ray tracing materials out of it. As you can see from the demo, I just assigned a matte bright-purple color to all the triangles of the crane origami .obj model. But suppose there were an accompanying .mtl file that said the crane's neck should be transparent blue, the wings shiny silver, and the body matte gray or something. For the path tracing engine to use that data, each material needs its physical reflectance properties, as well as a color and shininess, or a color and transparency. The matte one would be the easiest: I could just assign it Lambertian diffuse in my engine. The shiny one would need a metalness flag saying whether it is a metallic specular object or just a shiny piece of plastic. The transparent one would need an Index of Refraction (IoR) saying how much rays should bend when they enter a transparent surface like glass, clear-coat plastic, or water.
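
A first guess at that mapping, in sketch form (the property names are three.js Phong material fields; the thresholds are arbitrary placeholders):

  function toPathTracerMaterial(m) { // m = THREE.MeshPhongMaterial produced from OBJ+MTL
    if (m.transparent && m.opacity < 1.0)
      return { type: 'REFRACTIVE', color: m.color, ior: 1.5 }; // assume a glass-like IoR
    if (m.shininess > 100.0)
      return { type: 'SPECULAR', color: m.specular, metal: false }; // MTL can't say 'metal' - guess, or ask the user
    return { type: 'DIFFUSE', color: m.color }; // Lambertian matte default
  }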

So I am kind of going back and forth on how I should implement the materials-loading part of all this. I'll give you more updates as I work towards a solution. If I can eventually get the BVH working with WebGL 2.0 support in three.js, then all objects will load and render in real time!

EtagiBI commented on July 30, 2024

Excellent news!
As far as I understand, all materials have these specific characteristics (light reflection, light absorption, etc.) within the MTL file. We've managed to add dynamic lighting to our editor, so models with different material characteristics now look different.

Since three.js has shader support, we're trying to implement native shadows for models.

FishOrBear commented on July 30, 2024

@EtagiBI Hi, how is your project going?

I also want to do a similar project. Any suggestions?

I'm familiar with a lot of open source renderers, and I'm ready to try Blender Cycles now.

EtagiBI commented on July 30, 2024

@FishOrBear Yes, we decided on Blender Cycles as well. Render quality is great, but it took a while to adjust parameters.

FishOrBear commented on July 30, 2024

@EtagiBI
Did you start using it?

I understand that some of the Cycles documentation is missing.

I found that the Mitsuba documentation is sound. I am now considering which rendering engine to use.

MEBoo commented on July 30, 2024

Hi @erichlof, any news on this?

I'm very interested in your project... but for doing online, client-side background rendering (not real-time) of a Three.js scene created online by loading OBJs and applying custom materials dynamically (no MTLs involved).

It would be nice to have more OBJ and three.js support before February 2019...

Thank you

erichlof commented on July 30, 2024

Hi @meblabs
I'm sorry, but I'm kind of at a standstill with OBJ rendering until the folks at three.js implement WebGL 2.0 support. In order to render larger scenes or multiple OBJ files, I need to physically load the triangle data onto the GPU, and the only way to do that currently is through a data texture. I have successfully done it with one small OBJ file - see the simple OBJ demo - but as soon as I try loading multiple files, or 1 large OBJ file (the Utah teapot with 1000 triangles, for instance), the renderer slows to a crawl and crashes, or just fails to compile.

Therefore I need some kind of acceleration structure for searching through all those triangles, and the only way to do that is to have random access into large arrays, which is not supported by WebGL 1.0 (still the only version three.js supports). WebGL 2.0 brings with it the possibility of looking up data randomly inside an array from the GPU fragment shader, which I absolutely must have in order to continue with that part of the project.

If wait-time for rendering is not a concern, you could go with a CPU renderer, which is not accelerated of course, but definitely has the capability to sort through huge amounts of OBJ triangle data pretty quickly. It would just render a static image though, which is not the focus and direction of my project.

The only other alternative I can think of is to create something with three.js, use their converter to convert the entire scene into a file readable by a production renderer - Octane for Cinema4D, Cycles for Blender, V-Ray for Clara.io, etc. - and hit the render button inside their software. That should be the best of both worlds: accelerated somewhat by their proprietary acceleration structures, and able to handle huge amounts of scene data through streaming.

Sorry I can't be of more help. Best of luck to you with your project. Please let us know if you find a temporary solution with other software, as I have had some similar questions and requests about rendering larger amounts of data.

mrboggieman commented on July 30, 2024

MEBoo commented on July 30, 2024

Thanks @erichlof ... so let's wait for the three.js upgrade to WebGL 2.0.

Exporting data to another offline renderer is not an option, since I need to render client-side and online.

But thank you for your explanation.

When you say that the scope of your project is not static rendering, please consider that your work is the best I've seen here regarding path tracing etc.; the tech you've built can be used in a real context. So I would consider also offering a client-side hi-res, high-quality (but slow) render, in addition to the super fast real-time rendering, which is awesome but more for science.

So I hope you'll consider my suggestions when the three.js upgrade becomes ready...

See you in the coming months ;)

MEBoo commented on July 30, 2024

@mrboggieman thank you! But I've already seen that project... the problem is that it isn't open source... I can't find a repository.

And the work done by @erichlof seems better regarding the final render quality after many samples... and AntiMatter is not Three.js based :(

erichlof commented on July 30, 2024

@meblabs
Thank you for the kind words! I will definitely keep the quality static-render scenario in mind once I get the BVH going with WebGL 2.0 support from three.js, etc. Speaking of AntiMatter: yes, unfortunately that renderer is not three.js based, nor is it open source from what I can tell. I have looked extensively through the author's GitHub repos and his website, but there is nowhere to view his source code for that particular project. However, one glimmer of hope: I was able to right-click and 'view page source' while running the AntiMatter browser app, and I was able to piece together what looked to be a working BVH using WebGL 1.0. Mind you, it is in minified/compact form, so all the functions are one or two letters long, lol. So it can be done; I just have to piece it together. I've already made some headway with his GPU fragment shader source - it looks like he implemented some crafty workarounds to sidestep the random array indexing problem. It's ugly and wasteful, but it works! ;-)

I'll let everyone know if I can gain any ground with WebGL 1.0 while I'm waiting for WebGL 2.0 support.
Thanks again,
-Erich

MEBoo commented on July 30, 2024

@erichlof wow, that's great!
I've also checked his code and saw that it is minified... but I don't have many skills in that area :(

If I were you, I'd wait for WebGL 2.0 - but with WebGL 2.0 it would be awesome to also have a working PBR shader, simply configurable (like the one included in Three.js), that supports at least a simple texturing method... Let me know if that is possible or a dream!

I can wait some months :)

Thank you again!

erichlof commented on July 30, 2024

@meblabs
Hi, yes it is possible to incorporate PBR materials - they would need to be set in the triangle data list and loaded onto the GPU as a data texture. Currently I only have positional data for each triangle vertex packed into the texture, something like
dataTexture[0,0].r = 270.01; dataTexture[0,0].g = -65.86; dataTexture[0,0].b = 483.2;
And that defines where the triangle vertex is located in model space: XYZ = .r, .g, .b

To get PBR data on top of that, we would need to pack the data into a larger structure inside the texture, such as:
dataTexture[0,0].r = 270.1; dataTexture[0,0].g = -65.86; dataTexture[0,0].b = 483.2; dataTexture[0,0].a =43.9;
dataTexture[1,0].r = 124.7; dataTexture[1,0].g = 54.8; dataTexture[1,0].b = -843.1; dataTexture[1,0].a =675.0;
.rgb is first vertex XYZ, .a .rg is second vertex XYZ, .b and .a is third vertex XY
dataTexture[2,0].r = 453.8; dataTexture[2,0].g = 1.0; dataTexture[2,0].b = 0.0; dataTexture[2,0].a = 0.0;
(the .r channel here is a continuation of the previous vertex list - the third vertex's Z - because 9 floats over 8 slots leaves a remainder of 1 (annoying, right?); .g .b .a define the usual rgb diffuse color, so a red triangle.)

dataTexture[3,0].r = 1.0; dataTexture[3,0].g = 1.0; dataTexture[3,0].b = 1.0; dataTexture[3,0].a = 0.5;
still on the same triangle, now it's the PBR stuff like specularColor.r, .g, .b, and a 'roughness' value

dataTexture[4,0].r = 0.0; dataTexture[4,0].g = 1.0; dataTexture[4,0].b = 0.5; dataTexture[4,0].a = 0.25;
still the same triangle, but finally we finish up with PBR stuff like 'metalness', Index of Refraction, clearCoat IoR, clearCoat roughness.

So it can be done, but as you can see there needs to be a lot more texture data slots (around 20) for each triangle. The final texture size would need to be the number of triangles in the model, times 5 rgba slots per triangle. So roughly calculating, if we had a 10,000 triangle model, that's 50,000 .rgba texture slots (200,000 total floating-point data entries packed in). Which seems like a lot, but most cell phones even can handle a 4096x4096 .rgba texture, so that's 16,777,216 available texture elements, each with their own .rgba channels (4 values for each texture element), so 67,108,864 possible triangle data entries packed into 1 texture.

edit: argh, I forgot that uv texture coordinates need to be specified for each of the 3 vertices, so that's 6 more numbers to pack in - maybe pad it to 8 numbers (2 .rgba texture elements) to be memory-lookup efficient, and use the 2 remaining slots for a Texture ID (which texture to use), extra texture info, etc.

So I kind of know how I would approach it, but getting materials to change on the fly inside some kind of material editor would take a lot more work and know-how. That's why they have teams of 10, 20 or more working on OTOY Octane, Blender Cycles, etc. I don't know if I could do all that alone, it isn't really part of the scope of this project. But it definitely is possible and has already been implemented with more sophisticated renderers. Hope that helped! :)
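To make the packing concrete, here is a minimal JavaScript sketch (not this repo's actual code) of the 5-texels-per-triangle layout described above, leaving out the uv coordinates from the edit for brevity; the material field names are assumptions:

import * as THREE from 'three';

const texelsPerTriangle = 5;  // 5 RGBA texels = 20 floats per triangle
const size = 2048;            // 2048x2048 RGBA float texture
const data = new Float32Array(size * size * 4);

// hypothetical packer following the layout above (positions, then material)
function packTriangle(i, v0, v1, v2, mat) {
  let o = i * texelsPerTriangle * 4;
  data[o++] = v0.x; data[o++] = v0.y; data[o++] = v0.z; data[o++] = v1.x; // texel 0
  data[o++] = v1.y; data[o++] = v1.z; data[o++] = v2.x; data[o++] = v2.y; // texel 1
  data[o++] = v2.z;                                                       // texel 2: spilled vertex Z
  data[o++] = mat.color.r; data[o++] = mat.color.g; data[o++] = mat.color.b;
  data[o++] = mat.specular.r; data[o++] = mat.specular.g;                 // texel 3
  data[o++] = mat.specular.b; data[o++] = mat.roughness;
  data[o++] = mat.metalness; data[o++] = mat.ior;                         // texel 4
  data[o++] = mat.clearCoatIoR; data[o++] = mat.clearCoatRoughness;
}

const triangleDataTexture = new THREE.DataTexture(data, size, size, THREE.RGBAFormat, THREE.FloatType);
triangleDataTexture.needsUpdate = true;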

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@erichlof wow, that seems a little complicated...
I haven't studied this yet... so the problem is how to send data to the GPU so a fragment shader can process it? Doesn't WebGL 2.0 offer some fancy new way?

But when your base tech is ready, I can work like a monkey and you could tell me what to write :D (and I may pay other monkeys to help me)

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@erichlof ahaha, great work!
Only 500 lines of code to render reality - awesome!
Do you think the workaround will have the same performance as WebGL 2.0 will?

Regarding the BVH, you should already have your implementation done, right? Or is yours not for triangles, but for primitives only?

from three.js-pathtracing-renderer.

erichlof avatar erichlof commented on July 30, 2024

@meblabs
Yes it's amazing what a couple hundred lines of code can produce. This style of rendering was proposed all the way back in the late 1970's and early 1980's (ray tracing and then path tracing, respectively) but computers were way too slow to even think about trying it. Hence the branch in a different direction of rasterization of triangles to the screen, which was doable, but many complicated hacks had to be introduced to get close to what you could do with a small ray tracing program (even to this day, i.e. reflection probes, cascading shadow mapping, etc..). Then along came 3d acceleration cards which helped with throwing triangles at the screen fast, and the rest is history. However, with Nvidia's recent introduction of RTX ray tracing card technology (now that computers and GPU's are fast enough in 2018), they are trying to bring back ray tracing first (you can view a couple of their presentation demos on YouTube), and then eventually real time path tracing once they get their deep-learning A.I. assisted denoisers fast enough - I have no idea how all that A.I. denoising stuff works. But yes, at its core, ray tracing is an elegant, short, simple way of rendering photo-realistic images - a window into reality!

Regarding the BVH, I have a basic triangle BVH builder that works. Typically you can survive without a BVH if you are just intersecting mathematical shapes like spheres, cylinders, cones, boxes, etc.; that's why you see everyone's beginning ray tracers rendering only those types of shapes. When you get into loading models though, many triangles must be searched and intersected (because in the graphics world, most 3d models are represented as triangles), and that's when you need to introduce a BVH. I ported a C++ BVH that I found on GitHub (I gave credit to the author in the comments) to JavaScript. It produces a list of bounding boxes and a list of raw triangles (vertex position data) in 2 different textures to be loaded on the GPU. The roadblock I was hitting was not being able to maintain a working bounding box stack on the GPU (a WebGL 1.0 limitation) so that I could search through the stack with dynamic array indexing and intersect it.
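To make that roadblock concrete, here is a rough JavaScript sketch of the stack-based traversal in question (on the GPU this loop lives in the fragment shader, where WebGL 1.0 forbids indexing an array with a runtime variable; rayIntersectsBox, intersectLeafTriangles and closerOf are hypothetical helpers):

function intersectBVH(ray, nodes, triangles) {
  const stack = new Int32Array(32); // fixed-size stack of node indices
  let stackPtr = 0;
  stack[stackPtr++] = 0;            // push the root node
  let bestHit = null;

  while (stackPtr > 0) {
    const node = nodes[stack[--stackPtr]]; // dynamic index: the WebGL 1.0 limitation
    if (!rayIntersectsBox(ray, node.aabb)) continue;
    if (node.isLeaf) {
      bestHit = closerOf(bestHit, intersectLeafTriangles(ray, node, triangles));
    } else {
      stack[stackPtr++] = node.leftChild;  // push both children and keep going
      stack[stackPtr++] = node.rightChild;
    }
  }
  return bestHit;
}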

I don't know if there will be any performance difference between employing the Antimatter webgl 1.0 workaround, or just using array indexing in webgl 2.0. That remains to be seen. I'll keep you updated - I'm about to try his hack and stick it into my code and just see if it works at all. :)

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@erichlof yes, I know the history of 3d graphics. I found you just after the nVidia keynote... searching "webgl raytracing"... I thought that if nVidia starts to deliver dedicated RT hardware, some mad boy might have a solution to implement it in three.js :D ... and let's see when WebGL will support the new hardware ;)
But looking at your code, it's still awesome that four or five hundred lines of code can display reality. Wow!

Ok, I'll wait for good updates.. thank you!

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

Hey you did it man!! 🎉🎉🎉
Awesome!

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

ahaha... I read about the crashes... maybe too many calls/recursions with 1000+ triangles?
I would consider going directly for the WebGL 2.0 route... to simplify/optimize your BVH code.

Great work!

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

Wow, great work!!
Yes, sorry, I haven't studied it yet :(

Leave the old WebGL 1.0 demo alone and go for the new promise ;)

Thank you!
PS: can we communicate some other way than through these issue threads? I only want to ask you something about materials and shading... could we use Skype/Discord?

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

Hi @meblabs
Sorry for the late reply, I have been busy migrating the whole repo to WebGL 2.0. Also I just changed the GPU random number generator from a common GPU hack - fract(sin(largeNumberSeed)) found all over the internet, to a more traditional integer hash using bit shifting, provided by iq on ShaderToy. WebGL 2.0 allows bit manipulations inside shaders. The result is faster, cleaner convergence on all demos.

This bit manipulation capability also points towards Morton Codes and Z-order curves for real-time building of dynamic BVHs that are created every animation frame (2 or 3 milliseconds for each build of the entire scene!), which would allow animation of triangle geometry while being rendered.
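For context, a Morton code interleaves the bits of quantized x/y/z coordinates so that nearby points in space receive nearby codes, which lets triangles be sorted into a BVH very quickly. A textbook 30-bit version in JavaScript (a general sketch, not this repo's code):

function expandBits(v) {
  // spread the lower 10 bits of v so two zero bits sit between each bit
  v = (v * 0x00010001) & 0xFF0000FF;
  v = (v * 0x00000101) & 0x0F00F00F;
  v = (v * 0x00000011) & 0xC30C30C3;
  v = (v * 0x00000005) & 0x49249249;
  return v;
}

// x, y, z are expected in [0, 1); returns a 30-bit Morton code
function morton3D(x, y, z) {
  const xx = expandBits(Math.min(Math.floor(x * 1024), 1023));
  const yy = expandBits(Math.min(Math.floor(y * 1024), 1023));
  const zz = expandBits(Math.min(Math.floor(z * 1024), 1023));
  return xx * 4 + yy * 2 + zz;
}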

If you have a question regarding materials and shading, maybe just open a new issue so we can focus on that with the discussion. That way all viewers can benefit from your questions and my responses (hopefully, ha).

Hi Erich.
I'm sorry, but this month I had too much work... now I'm ready to test your progress!
My question may be obvious or stupid - that's why I didn't open a new issue! But after more code testing, maybe I'll be ready to ask it in a new issue.

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

Hi @EtagiBI , @meblabs , and everyone,

Just wanted to let you know that I recently implemented a much better .obj model BVH builder. This Iterative version replaces the Recursive version that was prone to crashing due to infinite loops. You can check out the demo: BVH WebGL 2.0.

Next on the TODO list is what this issue started out as: handling multiple OBJ files in the same scene. I am actively working on this, now that the BVH builder for single .obj model files is much more robust.
I'll keep you all posted with any progress I can make!
-Erich

Holy shit, fast real-time OBJ rendering! You did it!
Do you think that handling many OBJs will be a problem performance-wise?
Everything seems so smooth now!

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@meblabs
Yay! About multiple OBJ files, I'm trying to decide how to tackle this. I may be able to use the THREE.OBJLoader multiple times to load each model, and just assign each object to its own GPU texture with its own BVH. The alternative is to merge all the scene models' triangles into one huge BVH texture, but that seems on the outset a little too rigid to me (what if we want to individually move the models around the scene?, etc...)

First I'm currently taking baby steps - tonight I just figured out how to merge multiple parts (children / subMeshes) specified in the same single .obj file. If you look at some of the .obj files like male02.obj, there are separate 'objects' designated with the letter 'o' (or 'g' in this case) inside the file that define the different subMeshes/children of the model. For instance, the male02.obj has the body suit frame as the main parent object, then the head, hair, feet, and hands as children objects. Now this is the type of file that 'does' need to be merged into one bigger BVH because it makes sense to create the BVH from all the triangles of the various parts; they all belong to the same model and are typically located right next to each other.

I'll be posting a demo of that soon! :-)

Yes, I think that's right... 'g' means 'group' and an OBJ can have many groups inside... each one should be merged into one BVH.
Stupid question: can every sub-mesh have its own material assigned?

For multiple OBJs I think the way to go is multiple BVHs, so it's possible to delete/add an OBJ at runtime without refreshing the others.

from three.js-pathtracing-renderer.

erichlof avatar erichlof commented on July 30, 2024

@meblabs
Yes every sub-mesh can have its material assigned (Well, it currently can when doing normal three.js WebGL rendering). I don't have them on this repo at the moment, but most of the OBJ files that I use to load the model geometry come with accompanying MTL files. These files are human-readable and define the material properties for the main parent mesh and each of its children (parts). If the model needs textures or maps of any kind, they are specified in the MTL file as well.

It is pretty straightforward to handle these files and assign material properties to the geometry when doing 'normal' or 'typical' 3d rendering - we've been doing it for decades. What is not so clear is how to intercept all this data, when to intercept it (at loading time? or later, when the three.js mesh has already been created? etc.), and how to stick it all inside a packed 2048x2048 triangle data-texture so that the GPU can consume it efficiently while ray tracing. A ray can strike any triangle of a 30,000-triangle mesh, and it needs to immediately know what that triangle's properties are. I am currently trying to tackle this problem and I don't have a simple solution yet.

And regarding different OBJ models, yes each needs to have its own BVH, because like you mentioned, you might want to add or delete meshes at any time. I've heard that some renderers like Octane have a BVH for the BVHs! Meaning, all the scene's mesh general bounding boxes are put in a big over-arching hierarchical structure that covers the entire world. Then, when a ray hits one of those general big bounding boxes for a mesh, the individual BVH for that mesh is fetched from the GPU, and then the personal tree for that mesh is descended, however deep it might go.

p.s. I've also got a working GLTF model viewer inside the path tracer - whoo hoo! Well, I have the geometry at least. Materials problem is same as stated above. But it will be nice to support OBJ/MTL as well as GLTF (the latter is much more web-friendly and recommended by the authors of three.js)

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@erichlof yes, I know about OBJ + MTL; that's why I asked whether your path tracer can render every type of OBJ or not...
About intercepting data (object material properties): I suggest "reading" the mesh after it has been loaded, since then it would be possible to assign/change a material property at runtime...
If you manage to pass any "geometry with triangles" with its own material data to the GPU, instead of "OBJ-optimized data", we can render every type of scene... a format-independent path tracer...
Is this so hard now that the BVH is complete?

BVH for BVHs... I vote for this, that's the way :)

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

Yes, in three.js there is the GROUP (or Object3D) that contains MESHes... and the Material is applied to the mesh, not to the group... so when you speak about "children", you mean "meshes" inside a group.
A Mesh cannot contain another mesh... so I think that your material problem is "only" about the mesh's material itself... then it's like the "BVH for BVHs" problem -> a group can contain several meshes, each one with its own material, but the difficulty is only in the mesh's material.

I don't know exactly how the path tracer works, but I think you should traverse the three.js scene and pass everything... building the BVH for BVHs, etc.

PS: I changed my nick due to a reorganization ;)

from three.js-pathtracing-renderer.

EtagiBI avatar EtagiBI commented on July 30, 2024

Wow, I'm impressed by your progress. Congratulations!
Although we now use a non-three.js solution for rendering, it will be very interesting to follow your project.

from three.js-pathtracing-renderer.

erichlof avatar erichlof commented on July 30, 2024

@MEBoo
I went back and studied how three.js turns the model format data ( .obj, .mtl, .gltf, .fbx, etc. ) into a THREE.Mesh with geometry and materials. So after three.js loads and converts the data into the THREE.Mesh, I intercept the mesh/scene and traverse its children (which are meshes themselves, as you mentioned). I successfully got the 'male02' OBJ+MTL model working, which was tricky because if you inspect the male02.obj file, you'll see that it has 14 meshes, all with their own geometry and materials; the male02.mtl file shows 5 materials, some of which use different or the same textures, but there are only 3 textures in the materials folder! Maybe I should have chosen a simpler model example (ha! too late now), but it's proof that the path tracer can load a multiple child-mesh .obj that re-uses an arbitrary number of diffuse-albedo color texture maps.

GLTF formats are a little more straightforward, there's just one .gltf file that contains the info for the meshes and texture paths. The triangle vertex data is neatly packed in a .bin file that accompanies the .gltf file. I haven't tried FBX models yet, but I will soon - they should work too (in theory).

The good news is that the path tracer is essentially format-agnostic: it doesn't care how you get the models imported or what format they are. It just intercepts the scene/meshes once three.js creates them. Hats off to the authors of all those different types of loaders - it makes my life much easier!

from three.js-pathtracing-renderer.

erichlof avatar erichlof commented on July 30, 2024

@EtagiBI and @MEBoo and all,

Thanks @EtagiBI ! Yes, luckily I have been making some good progress in this area of the path tracer. I must give credit to the authors of the various loaders, and of the three.js Mesh creation algos from those loaders' data. This all wouldn't be possible without their great initial work.

I'm happy to report another progress milestone: the ability to load models of various formats and now use their diffuse texture maps for materials. Here is an OBJ+MTL model loading demo. p.s. you might have to refresh/reload the page a couple of times; for some reason it gave me an image.src 'could not find' error on the first try - I will investigate to see if I can prevent that from happening.

And to prove that the path tracer is format-agnostic, here is a GLTF model loading demo

Note that the only difference between the source code of both demos is that I call the constructor of the desired loader, then call the '.load' method for that loader, with the file path to the desired model of that type.

Although everything appears to be working, there remains much work to do: If you look carefully at the models, they all use the 'clearcoat' material, this is because MTL files were invented long ago before PBR materials were a thing, and there's no way of me knowing what the user wants for each material of each part of a model - is it diffuse, is it metal, is it emissive, is it glass, what's the IoR, what's the roughness?, etc. Ben Houston, creator of Clara.io, and frequent contributor to three.js, has proposed an industry-wide addendum to the MTL format to help with this very problem: proposal . Although this would be perfect, I haven't seen wide adoption of this extension by major modelling software vendors.

With the GLTF format, it is more up-to-date, with parameters for most, if not all, of the PBR material options. Now that I have the diffuse maps working, it would be nice to also use and render the normal, emissive, metalness, roughness maps, etc.

After I have a solution to the above materials problems, I will begin work on creating a BVH for the BVHs in order to start loading multiple models, each with their own meshes, geometries, and materials. As always, I will keep everyone updated! :-)

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@erichlof great job in a short time!!
I just read your code, and I suggest what I said before.
You are now reading the output of a loader (OBJ or GLTF) that may or may not support certain material properties.
But if I use three.js, I use it to build a scene: loading models in various formats, positioning them, adding lights, tuning their materials, changing them. So for the best three.js integration, you should parse the entire scene with its three.js meshes and materials.
You could start by supporting MeshPhysicalMaterial (which adds "coating" to MeshStandardMaterial) with all/some of its properties, so no problem if OBJ/GLTF doesn't support "metalness" etc.
I know I'm talking about code architecture while you are concentrating on algorithms... but I think that a developer using your renderer should just do:

  1. creating a THREE scene
  2. erichPathTracer(scene,renderer)

So, as you said, before developing the BVH for BVHs, you could solve the materials problem by supporting/interpreting the actual three.js material properties (color, map, metalness, roughness, metalnessMap, roughnessMap, reflectivity, clearCoat, clearCoatRoughness, etc.) and reading them from the meshes (1 for the moment) instanced in a scene.

The properties are supported here:
https://threejs.org/docs/#api/en/materials/MeshPhysicalMaterial

Hey, I hope you understand - these are only suggestions for your already huge and impressive work.
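As a rough sketch of the integration suggested here (erichPathTracer is a hypothetical entry point, and the property names follow the MeshPhysicalMaterial docs linked above):

import * as THREE from 'three';

const scene = new THREE.Scene();
// ...load models with any loader, add lights, tune materials...

// gather every mesh and the PBR properties the path tracer would need,
// independent of whatever file format the mesh originally came from
const pathTracedMeshes = [];
scene.traverse(function (obj) {
  if (obj.isMesh && obj.material) {
    const m = obj.material;
    pathTracedMeshes.push({
      geometry: obj.geometry,
      color: m.color, map: m.map,
      metalness: m.metalness, roughness: m.roughness,
      metalnessMap: m.metalnessMap, roughnessMap: m.roughnessMap,
      reflectivity: m.reflectivity,
      clearCoat: m.clearCoat, clearCoatRoughness: m.clearCoatRoughness
    });
  }
});

// erichPathTracer(scene, renderer); // the one-call API proposed above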

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@MEBoo
I went back and studied how three.js turns the model format data ( .obj, .mtl, .gltf, .fbx, etc. ) into a THREE.Mesh with geometry and materials. [...] The good news is that the path tracer is essentially format-agnostic: it doesn't care how you get the models imported or what format they are. It just intercepts the scene/meshes once three.js creates them.

hey, sorry - I only read this first reply after replying to the other! Now I understand your second reply better :D

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@erichlof in my opinion that is only an OBJ+MTL problem, and it should not be your problem; you should support the new standard three.js PBR materials (Standard or Physical or both) and nothing more...
If you can parse the three.js scene (mesh geometries, materials, lights), then anyone is free to use whatever loader they want, and they can convert the mesh materials to PBR ones. Since you are developing a PBR renderer, nobody expects it to render a non-PBR material ;)

As for Standard vs Physical: the latter offers some better tuning for non-metal materials, since clear-coat and reflectivity apply more to non-metal surfaces like plastics or glossy woods... you could check the physical shader implementation to see how these props are applied.

Thanks for your attention :)

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@erichlof
it's very simple to give users a material converter like this:
scene.traverse( function ( obj ) { if ( obj.isMesh ) { /* copy color, then map specular to roughness/metalness */ } } );
so I think that is not a problem now.
For GLTF, I think you can suggest that users adopt it because it uses PBR materials; but if you can parse the scene, then your path tracer is totally format-independent.

For the last question, that sounds very right!
Only one thing more: if you can do what you said, then consider the case where the scene changes at runtime. For example, I may change colors or a material property, or generate a geometry (a wall or whatever) and assign it a material -> that's why I would consider both of the three.js PBR materials (even if GLTF doesn't export the full three.js physical material). Consider a web car configurator where a user can change materials; these are defined by the developer, not exported from a software package.
Regarding this, I think it would only be an add-on to your tracing equation, so just a little work to understand what the other properties stand for.

That's all - this will become the best practical, quality path tracer available.

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@erichlof yes, in the case of a scene import everyone should use GLTF, not the THREE JSON-encoded format...
But when the GLTF is loaded using the default loader provided by three.js, you already have all the objects instanced... no JSON string needs to be parsed, and no action is required from the user to instance the scene. Actually, only a double parse happens, because you first load the GLTF into three.js and then traverse the three.js scene... if you provide a middle-step way (like now), you'll offer a slightly more performant way to load a GLTF scene directly into your path tracer.
But what I mean is: there is no need to parse a JSON scene format (the one usually used by three.js), only to traverse the scene for meshes and lights.
So someone who wants to load a glTF can simply follow the standard path provided by three.js:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_loader_gltf.html

loader.load( 'models/gltf/DamagedHelmet/glTF/DamagedHelmet.gltf', function ( gltf ) { scene.add( gltf.scene ); } );

As you can see, only one line of code is required for the user to set up a glTF scene... then you can simply traverse the three.js scene.

Regarding materials, mine was only an example of a possible use case; it's not important... it was only to raise the possibility for the user to instance a new PBR material at runtime, or just before starting the path tracer during scene setup. I was talking about supporting both PBR materials even if glTF doesn't instance the "physical" one.

Thank you for listening ;)

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

Wow, that would be awesome!
If you allow updates to mesh materials at runtime, I think you'll have to optimize performance by transferring only the updated info to the GPU. I don't know if I'm wrong, because I don't know how and when you transfer data to the shader - I have to study :D
Consider also updating a mesh's position (or scale or...)... sorry, too much imagination :D

PS:
an ultimate consideration about OBJs -> the OBJ format was created to hold a single mesh with its material, not an entire scene with cameras and lights defined. So if someone wants to import an entire SCENE, they should use a scene format like COLLADA or glTF. Even if an OBJ can contain a group of meshes, that is not a scene. That said, if someone imports OBJ+MTL, then before adding it to the three.js scene they should convert the material, nothing more. The same goes for importing a COLLADA scene, etc.

PPS:
regarding lights: I don't know if you already import the three.js scene lights... but if you consider doing this, then you should read the light's "power" property, expressed in lumens, not its "intensity" property -> because when using PBR materials in three.js you should also set renderer.physicallyCorrectLights = true, and because intensity means nothing for a real application like yours.
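For example, a small sketch assuming the three.js API of that era (the physicallyCorrectLights flag was renamed in later releases):

import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
renderer.physicallyCorrectLights = true; // lights now work in physical units

const scene = new THREE.Scene();
const bulb = new THREE.PointLight(0xffffff);
bulb.power = 800; // lumens - roughly a 60 W incandescent bulb
scene.add(bulb);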

Good work!

from three.js-pathtracing-renderer.

erichlof avatar erichlof commented on July 30, 2024

@MEBoo
Yes, I'm not sure either if you can upload just the relevant data, like a single value, to a texture that is already residing on the GPU. I don't know if three.js has any examples of how to handle this. I will have to study too, lol.

Regarding OBJ considerations, thank you for the info on that subject - that makes sense to me now. I'm sort of new to all this three.js importing stuff; in the past I would just use the three.js primitives like three.Box or three.Sphere and build the scene up myself out of different pieces. So your considerations help me. :)

Regarding lights, at the moment, I'm not reading in light information. The demos for model loading currently have a huge light-blue/light-green sphere light that is wrapped around the entire scene, like a sky dome. This helps with converging noise quicker. But I do intend to support point lights (light bulb) and directional lights (the sun) and just parse the three.Scene for light objects. The only light type I might have trouble with is spotlights. At the moment I don't have any demos with spot lights, other than the "classic scene" for "bi-directional path tracing" with the gray room and the egg shaped glass sitting on the little table. This has a spot light shining towards the left, but I had to bi-directionally path trace it, which is expensive in terms of geometry. But we'll see, nothing is impossible right? :-D

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@erichlof glad to be of a little help! :)
Yes, nothing is impossible ;)

Regarding "converging noise quicker", from what you said, I think that a closed room/scene is better than an opened one, so this should be pointed out to developers once lights are supported.

Regarding spotlights: what about creating a point light inside a cone geometry with inside a super reflective material? like real spotlights? ...otherwise you have to do some math to simulate rays coming from a spotlight..

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

hey, impressive!!! It's cool to watch indirect light changes in real time!!
Awesome!!

Ahaha, ok! Maybe we lose 1 bounce with this method? Maybe it would be better to simulate the spotlight firing photons like the cone does... but let's see!
Remember to coat the cone's internal surface with a reflective mirror :D

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

Hey, here we are at 60 posts... when you complete the three.js scene parsing, I think this thread is done... I'll open other issues for different topics, like material maps, lights, etc. ;)

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

Hi Erich!
I always checked for updates at the bottom of your project homepage, in the news section, but only today I saw that you were posting updates at the top :D

Nice to see the planet demo and that you are still working on the project!
Merry Christmas ;)

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@erichlof hey, so many problems!!
Let me understand: using BVH + sky-light dome there are no performance/quality problems (as we saw in your demo).
But when you put a small light in that scene with BVHs, you got slow convergence (and bad quality).
So you started checking the BVH algorithm and then the core light rendering algorithm.

I didn't expect any issues, since your Cornell Box demo already uses direct light sampling... so I expected that BVH + direct light wouldn't be a problem :(

But hey... after all, it seems you've built a better path tracer now! I hope this new approach is usable and produces better performance/quality for every use case... from the old Cornell Box to the new polygon scenes.

Thanks for your amazing job!
Happy Holidays!

from three.js-pathtracing-renderer.

erichlof avatar erichlof commented on July 30, 2024

@MEBoo
Yes you got it! You understand perfectly! :)

Direct light sampling works most of the time and that's why everyone, including me, usually defaults to that approach. It guarantees that on every bounce you will get a light sample contribution, no matter how small the light source. But it comes at a cost - you must search through all the geometry (BVHs included) again to get that light sample and see if the surface in question is being lit, or is blocked by an occluding geometry object, thus leaving it in shadow. That's why it is sometimes called the 'shadow ray' technique.

For all light types it produces really fast results, if you can afford double the geometry searches. I found out that I couldn't afford it when trying to add the heavy-duty models with BVHs in that geometry search, at least for WebGL shaders in the browser. Hence the slightly different approach: stochastic ray direction choices, weighted in favor of the light source. If a ray chooses to sample the light, it does so on the next loop iteration, acting like a shadow ray; if it doesn't, it bounces randomly and collects the usual diffuse color bleeding from other objects in the scene, as a traditional path tracer does. It doesn't matter which it chooses, but you must account for the weighting if it does decide to spring towards the small light source.

This random picking allows all the rays to do roughly the same amount of work on each frame, thus keeping divergence low and frame rate high. I believe it should work for BVH models. I'll post a demo soon with a single point light and a triangle model. Stay tuned!
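A toy JavaScript sketch of that stochastic choice (the 50/50 probability and the vector helpers are assumptions for illustration, not the repo's exact shader code):

// pick the next ray direction: with probability p aim at the light (the next
// loop iteration then acts as the shadow ray), otherwise bounce diffusely;
// dividing by the pick probability keeps the estimate unbiased
function chooseBounceDirection(hitPoint, normal, light) {
  const p = 0.5;
  if (Math.random() < p) {
    const target = randomPointOnLight(light);     // hypothetical helper
    return { dir: directionTo(hitPoint, target),  // hypothetical helper
             weight: 1.0 / p };
  }
  return { dir: randomCosineDirection(normal),    // hypothetical helper
           weight: 1.0 / (1.0 - p) };
}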

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@erichlof understood!
I hope the new path tracer maintains the same quality... regarding noise etc.

I'm tuned ;)

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@erichlof so... Happy New Year!!
A new big milestone achieved!!
Now let's see what happens with more lights / triangles!

So, will this new tracer method be the default algorithm for every use case? Is it good for the sky dome case?

A little issue: I see some white points in the "BVH point light demo" that never appeared with the old algorithm...

from three.js-pathtracing-renderer.

erichlof avatar erichlof commented on July 30, 2024

Hi @MEBoo
Yes, this new algorithm should work for all general cases. Now with the sky dome, you could just delete the random 'if' statement that makes the rays go towards the light source, because the rays will find the dome just fine without assistance. Essentially you'd end up with a traditional path tracer like I currently have on all the BVH demos.

The only tricky case would be something like the bidirectional demos - the room with a small table and glass objects on it, where the light sources are hidden inside a casing or behind a nearly shut door. That would require the bidirectional algo, because no amount of rays will be able to find the light sources; they are hidden. But we can cross that bridge later. :-)

Those bright spots are called fireflies and they are a bane to rendering programmers everywhere, ha ha. I will see if I can mitigate the bright spots somehow.

Happy new year!

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@erichlof You know... you are the man!!!
And now? Will you start experimenting with light types? Hand-crafted spotlights? Rect area lights?

A question... using multiple lights in a scene, will the algorithm converge more slowly, or is it exactly the same?

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

AWESOME!! No other words... I think the most complex objectives of this project are almost done!

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

Hi @erichlof ... I see many updates.. what happened?

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

wow! even better! what's next?

from three.js-pathtracing-renderer.

EtagiBI avatar EtagiBI commented on July 30, 2024

@erichlof, I'm amazed by your progress!
By the way, have you already switched to GLTF/GLB? These newer formats are well optimised, so it's possible to render more complex models without any speed loss.

from three.js-pathtracing-renderer.

erichlof avatar erichlof commented on July 30, 2024

@EtagiBI
Thank you! Yes, I believe that GLTF will be the best way forward. With all the GLTF models I have downloaded from places like Sketchfab, everything just works. I haven't heard about GLB yet - is that binary? If so, I believe some of the models, like the Damaged Helmet (here in the models folder of this repo), do have a binary section that describes the vertices and indices. This speeds everything up, from downloading to loading into memory. So if that is the case, then yes, I will be using that format going forward.

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@erichlof nice new demo!!

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@erichlof Hey, I just saw the movable BVHs with models!! I don't know how you did it, but it's very fast!
I see that the scene is accumulating samples where there is no movement... correct?
The result even with only 1 or a few samples is awesome!

So: a real spotlight, low light, a hi-poly model with texture + other maps on a BVH, updated and moved at runtime!!

from three.js-pathtracing-renderer.

erichlof avatar erichlof commented on July 30, 2024

@MEBoo
Yes, I was very excited to see it actually working for the first time! The secret is that I treat the BVH like I do the boxes in the Cornell box scene. If you notice in that old demo, the mirror box and the short diffuse box are slightly turned. How I achieved this is by taking advice from old ray tracing pros: instead of transforming the box and trying to trace an arbitrarily rotated object (which is hard and expensive), you just transform the ray by the opposite (or inverse matrix, to be exact), which puts the turned box essentially back to facing straight-on, then trace the 'non-rotated' object, which is easy to do. Then you rotate the normals and hit data back into world space with the desired rotation matrix. So on a whim, I tried this with the entire BVH, 7000+ boxes: transformed the ray by the inverse of the rotated root node of the BVH, and then traced as normal!
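Here is a sketch of that trick using current three.js method names (mesh, rayOrigin, rayDirection and localNormal are assumed to come from the surrounding tracer):

import * as THREE from 'three';

// transform the ray into the model's local space instead of rotating the model
const inverseMatrix = new THREE.Matrix4().copy(mesh.matrixWorld).invert();
const localOrigin = rayOrigin.clone().applyMatrix4(inverseMatrix);
const localDirection = rayDirection.clone().transformDirection(inverseMatrix);

// ...trace the untransformed BVH with (localOrigin, localDirection)...

// then bring the hit normal back into world space
const normalMatrix = new THREE.Matrix3().getNormalMatrix(mesh.matrixWorld);
const worldNormal = localNormal.clone().applyMatrix3(normalMatrix).normalize();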

Now, I've been struggling with figuring out how to do this with a mesh skeleton, bones and animation, which actually move the mesh vertices in real-time on the GPU vertex shader. I thought I could go down the bone hierarchy and transform the ray by the inverse of each bone, but it turns out to be a little more complicated than that, because of weighting and skinning deformations and such. But I will post my findings if I get something working with simple animations.

About the accumulation of samples, it's actually just 1 sample over and over again, but I let the background scenery (that which is not actively moving) 'bleed' a little more from the previous frame. So there is a little more motion blur effect on the ground and walls, but it is not distracting because those things are static. 1 old sample bleeds more into the new sample, so it's like having 2 samples for a split second I guess. On the dynamic objects, or when the camera is moving, I manually turn down the 'bleeding' from the previous frame, in order to minimize distracting motion blur that would occur if I did nothing about it. It is a delicate balance between smooth motion blur which covers up distracting noise, and moving objects which you want to be more crisp and clear without too much distracting motion blur. :)

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@erichlof nice... understood!

mmm, I don't know why you are working on IK/animations... it's a big world apart!
But you could check what has already been done in three.js:
https://threejs.org/examples/?q=skin#webgl_animation_skinning_blending

I don't know if it is GPU-based...

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@erichlof Hi!!
Any feature updates other than the codebase refactor?

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

Nice news and milestone 🥇 !!!
Finally we can close this "issue" thread 😁

3 questions:

  1. is the code format-independent? I mean... as you did before, are we able to send any three.js Mesh (pre-loaded from GLTF / OBJ / MyUltimateOptimizedFormat) to the GPU? So are you parsing the three.js scene and dynamically building BVHs?

  2. About the compiler bug: I can't understand how a shader could compile one time and not another! I've seen something like this happen before, but I simply can't understand how :)

  3. About the performance loss: are these frame drops due to multiple BVHs or to too many polygons? I mean, if you use a single BVH with 3 models inside and 4000*3 polygons, do you get the same frame drop?

After all, I think this is the tech of the future... but you can't dream of a real-time production application for now. Still, we can already have a "background" client-side photo-realistic rendering engine 😉

from three.js-pathtracing-renderer.

erichlof avatar erichlof commented on July 30, 2024

@MEBoo
Hi! About the 3 questions,

1: Well, yes and no. Somewhere along the way, I think it was a couple of months ago, I decided to support .gltf and .glb (gltf in binary format for faster transmission) and remove the examples of the .OBJ files and other formats. The reason is twofold: first, .OBJ is heavier and less compressed than .glb. And second, .OBJ is an old format, so even though I can extract the three.js data from the three.js created mesh when it loads, three.js does not know how to insert PBR materials into that old format, and there's no way for authors to define those types of materials in the old format when they create them in 3dsMax, Maya or Blender. GLTF, on the other hand, natively supports textures of all types, like metalness maps, and physical materials like glass with IoR specified by the author, which I in turn absolutely need to load into my path tracer. I know this decision might leave out some models that we have lying around, but the good news is that free websites like Clara.io are able to convert any file type into GLTF for faster web transmission and native PBR support. In fact, you can load [insert your favorite format here] into Clara, then ADD free pbr materials that are ray-tracing friendly, then save the whole thing, hit 'Export All' gltf 2.0, and you're done. That's exactly what I did for 90% of the demo models on this repo; they were originally in another format. This decision makes my life a little easier by reducing the corner cases and code size of handling the three.js Mesh after it has been loaded by an arbitrary unknown-in-advance format. This way I can either intercept the gltf data myself (it is in a human-readable format, the material stuff anyway) or wait further down the pipeline and get everything from three.js's correctly created Mesh with ray-tracing friendly materials and specifications (which is what I'm currently doing). Of course you could try this whole process with three.js's FBXLoader, for example, with some minor modifications to my demo code, but then again, I want to only think about 1 format that is built for the web, works with three.js, supports animations, and has modern detailed material specifications.

2: I ran into the 1st-time-fail, 2nd-time-pass compilation problem back when I created the CSG museum demos a while ago. That's why there are 4 separate demos. Initially I had all 14 CSG models in the same museum room, but it wouldn't compile at all. Then I reduced it by half to 6 or 7; then it compiled on the 2nd time only. Then I split it further into 4 demos with 3 or 4 objects each, and it compiles every time. I think it has to do with the number of 'if' statements you have in the GPU code. CPUs love 'if' statements; GPUs - not so much! If you have too many branches, it crashes. It must not like all the 'if ray hit bounding box' checks on all the models - some parts of the screen have to traverse the BVHs, and some parts of the screen get lucky and hit an easily-reflected surface or wall, which also partly explains the framerate drop - GPU thread divergence.

3: Which ties into the performance drop - yes, I think it is because of different models, GPU divergence and branch statements. I don't believe the triangle count has much to do with it. Take a look at the BVH_Visualizer - it handles 100,000 triangles at 60 fps on my humble machine (of course, if you fly into the dragon model the frame rate goes down, but for the most part it doesn't even break a sweat). So there are a couple of things to try in the near future: a BVH for the BVHs (but in this simple case of 3 similar teapot objects, I'm not sure if that will help any), and, like you mentioned, combining all 3 teapots into a super-teapot type shape and placing a BVH around the 12,000 triangles. That might work better. Also, in my last post I mentioned 'trickery' - you can actually do a modulus on the ray origin and treat it as multiple rays, and therefore it would return hits for 3 objects (like copies for free), even though you only load 1 teapot into the scene. This is a little more advanced and just a tad deceitful (ha), but something I want to try eventually - for example, a forest with thousands of trees seen from a helicopter view.
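A rough sketch of that modulus trick (pure illustration: a real tracer has to step the ray cell by cell, but this conveys the idea of one teapot BVH answering hits for a whole grid of copies):

// wrap the ray origin into a repeating 100-unit cell so the single
// teapot BVH is re-used for every cell of an endless grid
const cell = 100.0;
function wrap(x) { return ((x % cell) + cell) % cell - cell * 0.5; }
const localOrigin = { x: wrap(rayOrigin.x), y: rayOrigin.y, z: wrap(rayOrigin.z) };
// ...trace the one teapot BVH with localOrigin and the unchanged ray direction...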

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@erichlof

  1. yes, yes, I know the history - it's actually written in this thread! I only asked if you did the scene parser, since someone could edit material properties at runtime, and the way to do this is to load the imported mesh into three.js and then apply a material. Or, just another example, someone could aggregate objects/meshes to build a scene... so the best thing would be a real scene parser that parses meshes (geometries and PBR materials), completely abstracting the objects' source/format.

  2. understood :/ What I don't understand, since I haven't checked your code, is: are the objects coded into the GPU shader? Aren't they sent to the shader from JS? So why was the code so "object"-dependent in the museum demo? Does the BVH have that many "if"s here?

Wow.. hope you will find the way.
Thanks for the info and the work!

from three.js-pathtracing-renderer.

erichlof avatar erichlof commented on July 30, 2024

@MEBoo

  1. Ahh, ok, you meant a pure scene parser. Yes, I suppose we could do that, but the only issue is that as the size and number of the models grow, it gets more error-prone for the end user to manually assign materials to selected groups of triangles inside the model. Take for instance the Damaged Helmet model in my 'Animated BVH Model' demo: let's say that an author modeled that helmet in Maya with no material specified (white), then saved it to .obj or whatever format - doesn't matter - then yes, the user could load all 15,000 triangles and three.js would create a mesh object. Currently I am parsing the scene by extracting the child.geometry.material property from three.js, which it in turn got from the file. So, inside the GPU path tracer, it would see vec3(1,1,1) rgb white diffuse and render it as such. But if the user wants the face mask to be glass, then they want an IoR of 1.5, then they want metal on the outside with a roughness of 0.1 - it would be difficult to intercept the loading/creation of the three.js mesh object and manually put those in there. In fact, I don't even think you can select a group of triangles out of 15,000, say, and assign metal to triangles 2,287 to 3,013. That would be super error-prone, which pushes the problem further back, meaning the author would have to name the materials in Maya and assign the physical properties such as color, reflection amount, IoR in a helpful visual way inside of Maya. Then it would make sense to just dump all of that into an exported gltf file that we can read verbatim without any chance of errors or guesswork. That's how I see the problem, but maybe I'm missing something that you would want in future versions. Please let me know :)

  2. Yes, if you look at the shader code for the CSG demos, there are tons of 'if' statements because of the multiple possibilities a ray could intersect a hollow, solid, additive, subtractive, or intersection overlapping pair of shapes. So as the number of objects grew in the demo room, the 'if' statements started to pile up. Now in the case of BVHs, yes, there are potentially many 'if' statements for each set of bounding boxes and their triangles. But thanks to binary trees it ends up being an O(log n) search, where n is the number of triangles. That works perfectly for 1 model, no matter how big, because you essentially halve the problem at each step. But if you have 2 BVHs, the right part of the screen's rays might have to go down one BVH tunnel and work on pruning it, while the left part of the screen's rays have to prune a completely different BVH - again, thread divergence rears its ugly head. And GPUs sometimes have to execute both paths of 'if' statements and then mask out the result that wasn't needed in the end - super wasteful, but that's how things work, I guess, in GPU land. One more thing: I've read that branches involving possible texture lookups are generally bad practice. However, the only way to get the ray tracing data onto the GPU is through data textures, and that means searching through (or not, for some parts of the screen) those big 2048x2048 textures for bounding box data. Also, if a ray locates an intersected triangle, it needs to go to another big texture and look up that triangle's vertex data, like UV, material, and color, etc. What I might try first is deferring the texture lookup until the last possible moment inside the bounces rendering loop and see if that helps mitigate the crashing/performance problems.

from three.js-pathtracing-renderer.

erichlof avatar erichlof commented on July 30, 2024

@MEBoo
Oh I think I know what you mean now - you wanted to be able to change a material on the fly? Like, for instance, with the helmet model, changing one of its children.geometry.material to glass instead of metal? Or blue instead of red? If I'm understanding correctly, then yes, it is possible, but the only requirement is that the user must have default materials, and the materials must be assigned to the exact triangles of the original mesh. So using the helmet model again for example, in Maya they would have to say "child0: face mask, child1: metal top of helmet, child2: hoses and connections on bottom of helmet." Then three.js would correctly assign a child.geometry.material to each of those children parts of the model (even though they might be default and all the same and boring at load time), and then the end user could say "now that the model has loaded, I want child1.geometry.material to be a smoother, more mirror-like metal, and child0.geometry.material.IoR = 1.4", or something to that effect. Am I understanding your functionality request correctly? If so, the only problem is that I preload the triangle data (colors, materials, uv, index of refraction, reflectivity, etc.) as a 2048x2048 data texture. If the user changes something on that triangle list, I would need a way of efficiently looking up the triangles in question and then updating the floating point numbers corresponding to their options. It's not impossible, I just haven't tried intercepting it like that while the path tracer is reading that same texture every frame.
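Reusing the names from the data-texture packing sketch earlier in this thread, a runtime material edit might look something like this (a sketch; triIndex and newColor are hypothetical, and whether a partial GPU upload is possible is exactly the open question):

// overwrite just this triangle's diffuse rgb in the CPU-side array;
// the offset 9 skips the 9 position floats that precede the color
const o = triIndex * texelsPerTriangle * 4 + 9;
data[o] = newColor.r; data[o + 1] = newColor.g; data[o + 2] = newColor.b;
triangleDataTexture.needsUpdate = true; // re-uploads the whole texture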

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@erichlof yes, I already do this (material change) in my little project... in three.js, materials / meshes / groups have names! So I can parse the scene and do whatever I want.
But not only this use case: people use three.js to compose a scene - ok, pre-loading meshes from various sources, but then composing everything together.
That is the why I suggest for your project to read/parse the entire THREE scene, and render it as is.
At least for the first render, then the user should be able to update the scene, and re-send changes to your shader...

from three.js-pathtracing-renderer.

MEBoo avatar MEBoo commented on July 30, 2024

@n2k3 @erichlof
You are awesome people 😁

from three.js-pathtracing-renderer.
