Path Tracing (Homework 6)
In this homework, I started by running my ray tracer on some scenes from previous homeworks which once took too long or were rendered so incorrectly that I never got a full render.
Previous Homework Scenes
Here is one of the outputs:
One other result was as follows:
Part 2: Parsing Sponza
So, I solved this bug and then decided to start with the Sponza scene. I had not implemented the homework 6 topics yet, and the Sponza scene came with .bin files. This means that if I implemented the required features and then tried them on Sponza, I would be unlikely to know whether buggy results were caused by my implementations of the rendering algorithms or by my handling of the binary files.
If you remember from the previous homeworks, I once created a viewing debugger which showed one vertex for each triangle in the scene. I removed that code, because after I - like most of the people taking this course - started to store meshes separately rather than copying their triangles into one global triangle container, that style of debugging became useless. For this scene, I wanted to bring it back, but not with triangles: I wanted to be able to see, if possible, all the vertices in the scene separately from the triangles. Since the given formats all provide this data, I now keep the scene vertex data instead of destroying it as before (note that my triangle object holds vertex position data itself), and then show the positions (assuming they are visible).
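Roughly, the idea looks like this (a minimal sketch with illustrative names, not my exact classes): keep the parsed vertex positions around and splat each one onto the rendered image, assuming a pinhole camera with an orthonormal basis and ignoring occlusion.

```cpp
#include <vector>

// Illustrative sketch of the vertex debugger: keep the parsed vertex
// positions and splat each one onto the image, so parsing or transformation
// bugs become visible immediately. Occlusion is ignored, matching the
// "assuming they are visible" simplification above.
struct Vec3 { float x, y, z; };

struct DebugCamera {
    Vec3 position, gaze, up, right;  // assumed to be an orthonormal basis
    float near_dist, l, r, b, t;     // near plane distance and extents
    int width, height;

    // Projects a world-space point onto the image plane; returns false if the
    // point is behind the camera or falls outside the image rectangle.
    bool project(const Vec3& p, int& px, int& py) const {
        Vec3 d{p.x - position.x, p.y - position.y, p.z - position.z};
        float depth = d.x * gaze.x + d.y * gaze.y + d.z * gaze.z;
        if (depth <= 0.0f) return false;
        float u = (d.x * right.x + d.y * right.y + d.z * right.z) * near_dist / depth;
        float v = (d.x * up.x + d.y * up.y + d.z * up.z) * near_dist / depth;
        px = static_cast<int>((u - l) / (r - l) * width);
        py = static_cast<int>((t - v) / (t - b) * height);
        return px >= 0 && px < width && py >= 0 && py < height;
    }
};

// Marks every kept vertex as a white pixel on top of an RGB image buffer
// (rgb is assumed to hold width * height * 3 bytes).
void debugVertices(const std::vector<Vec3>& vertices, const DebugCamera& cam,
                   std::vector<unsigned char>& rgb) {
    for (const Vec3& vtx : vertices) {
        int px, py;
        if (cam.project(vtx, px, py))
            for (int c = 0; c < 3; ++c)
                rgb[3 * (py * cam.width + px) + c] = 255;
    }
}
```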
I tried this debugger on some old scenes to make sure it worked before trying it on Sponza.
GitHub Copilot
GitHub's Copilot extension (free for university students) uses LLMs to help people write code faster. It usually auto-completes repetitive parts of source code, and Visual Studio Code has the Copilot extension. From my experience using it so far, I believe the extension works like this: when you create a workspace, it gets activated. Then you start to make edits to the workspace, such as editing source files. It feeds these edits to the LLM you choose, so the model knows, in a sense, what you have been doing in the source files recently. It probably also has an inner context window that considers the code segment your cursor is currently on.
In the old days (1-2 years ago, maybe) that was not the case, but nowadays most of the CEng people at METU have an idea about how these LLMs work. The LLM usually auto-completes some part of the code with a measure of confidence, and Copilot probably has access to the real-time confidence values of the models. If the confidence is above a threshold while the user is editing code, the extension suggests an auto-completion. The suggested part can be as small as a single character after a statement, or as large as multiple levels of for loops with complex operations, corresponding to more than 100 lines of code.
It does not explain what the suggested auto-completion does, so it is up to the user to infer it. Accepting a large block of suggested code without proofreading it is risky, since the model does not know exactly what you are doing in the project: You did not tell it explicitly.
For Sponza, I copied the explanation about the .bin files and pasted it as comments, as if I were writing it to help myself write the parsing code. Then I started to write the code as if I was going to parse the files myself. After a few characters, the Copilot extension suggested an auto-completion and I accepted it. After correcting small mistakes, I tried the scene with the debugger.
The output hinted that the vertex positions were being parsed correctly.
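For illustration, here is roughly what such a parser boils down to, under the hypothetical assumption that the file is nothing but tightly packed 32-bit floats (three per vertex) in native byte order; the real Sponza layout is described in the scene file and may differ.

```cpp
#include <cstdio>
#include <vector>

struct Vec3f { float x, y, z; };

// Hypothetical .bin reader, assuming the simplest possible layout: tightly
// packed 32-bit floats, three per vertex, native byte order.
std::vector<Vec3f> readVertexBin(const char* path) {
    std::vector<Vec3f> vertices;
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return vertices;

    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);          // total file size in bytes
    std::fseek(f, 0, SEEK_SET);

    vertices.resize(static_cast<size_t>(size) / sizeof(Vec3f));
    size_t count = std::fread(vertices.data(), sizeof(Vec3f), vertices.size(), f);
    vertices.resize(count);             // in case the read came up short
    std::fclose(f);
    return vertices;
}
```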
The Veach Ajar Scene: Initial Bugs
I tried the Veach Ajar scene before implementing the path tracing algorithms.
Since path tracing was not implemented, how were some pixels in the inner room white? Another problem in the scene was about the teapots, but the previous bug was more important, so I did not pay attention to it for now.
After reading some text-based visual debug data, I decided to play with the scene to see what the real problem was. This is the result when I shrank the area light's size to 2 (it was 3):
Then I played with the ray epsilons to see if the problem was about them. I set the epsilon to 0 to see if the scene was leaking light because of that.
It seemed like the problem was only weakly related to that, if at all. Then I inverted the correction rays to see if the problem was about the normals:
The results were confusing, but I guessed that the problem was not about that either. I played with the correction more, but the problem did not seem to be there.
Visual Debugging
Then I decided to integrate a functionality into the ray tracer for easier visual debugging, to solve the problem in Veach Ajar: I was going to make the paths of shadow rays visible.
This is the first version:
The first version tested only the first point light. Then I made it show the shadow rays to every point light:
Then I colored the shadow rays which cannot see the light red:
Then I made it keep track of the mirror rays, so that we can see how the light traveled through the scene:
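In spirit, the debugger is just a recorder of colored line segments that later get drawn over the render; a rough sketch of that idea (the member names and the mirror-ray color are illustrative, not my exact code):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Every ray traced for the clicked pixel pushes a colored line segment, and a
// separate pass draws the collected segments on top of the rendered image.
struct DebugSegment {
    Vec3 from, to;
    Vec3 color;
};

struct RayDebugger {
    bool enabled = false;               // only turned on for the clicked pixel
    std::vector<DebugSegment> segments;

    void addShadowRay(const Vec3& hitPoint, const Vec3& lightPos, bool occluded) {
        if (!enabled) return;
        Vec3 color = occluded ? Vec3{1.0f, 0.0f, 0.0f}   // red: cannot see the light
                              : Vec3{0.0f, 1.0f, 0.0f};  // green: light is visible
        segments.push_back({hitPoint, lightPos, color});
    }

    void addMirrorRay(const Vec3& from, const Vec3& to) {
        if (!enabled) return;
        segments.push_back({from, to, Vec3{0.0f, 0.0f, 1.0f}}); // e.g. blue
    }
};
```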
And then used the resulting debugger to understand the problem in the Veach Ajar scene:
Area Light Mistake
The green lines show the shadow rays when I click on the middle of the window. The middle corresponds to the back wall of the box. The shadow rays are sent to a wider area than I thought they needed to be. The lighting on the ceiling does not explicitly hint at that.
Then I checked my area light code and compared it with the definition given in the older homeworks, only to see that I was calculating the size twice as big! The bug in Veach Ajar was that simple. I corrected that part, and this was the correct result (left: my render; right: the desired one).
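For reference, a sketch of square area-light sampling, assuming the light is given by a center point, an edge length, and two orthonormal edge directions (illustrative names); my bug amounted to effectively doubling the edge length sampled here.

```cpp
#include <random>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

// Square area light defined by a center, an edge length, and two orthonormal
// edge directions spanning its plane.
struct AreaLight {
    Vec3 center, u, v;
    float size;

    // Uniform point on the square: offsets lie in [-size/2, size/2].
    Vec3 samplePoint(std::mt19937& rng) const {
        std::uniform_real_distribution<float> uni(0.0f, 1.0f);
        float su = (uni(rng) - 0.5f) * size;
        float sv = (uni(rng) - 0.5f) * size;
        return center + u * su + v * sv;
    }
};
```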
Then I rendered the Veach Ajar scene to get the desired result.
Some Tests
I started by picking some scenes and rendering them before implementing anything new, to make sure there were no other problems. Here are some scenes:
Texture Mapping Issue
The one on the left is the killeroo_torrancesparrow scene during rendering (no HDR post-processing). The right one is the desired solution. The problem with texture mapping here is that I did the mapping without considering transformations: the textures were mapped as if the vertex positions were in local space, while the given positions had been transformed to world space. I attempted to correct this and got this result:
That looked like it solved my problem, but now the texture was being mapped stretched 4 times. I checked the code and saw that the TexCoords contained values greater than 1, which I had assumed would not happen, so I was clamping the values if they were bigger than 1 or smaller than 0. I had to use fmod to get the desired result:
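A minimal sketch of that change (the function names are only illustrative): wrapping out-of-range texture coordinates with fmod so the texture repeats, instead of clamping them to the border.

```cpp
#include <cmath>

// Wrap texture coordinates outside [0, 1] so the texture tiles.
inline float wrapTexCoord(float c) {
    float w = std::fmod(c, 1.0f);
    if (w < 0.0f) w += 1.0f;   // fmod keeps the sign, so shift negatives back
    return w;
}

// What I had before: clamping pushes every out-of-range coordinate onto the
// texture border, which is what produced the stretched look.
inline float clampTexCoord(float c) {
    return c < 0.0f ? 0.0f : (c > 1.0f ? 1.0f : c);
}
```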
The problem with the HDR post-processing still exists, but I had no time to solve it. Also, the ground textures were OK but the other textures were not correct, because I had not been taking the vertex offset in the texture coordinates into consideration. I do not remember if I fixed that problem, but I finally wanted to switch to the current homework's tasks.
Homework 6
Then I attempted to implement uniform sampling to generate this result:
I was not really sure why the sides of the box were dark. Probably the HDR tonemapping was not working well with this initial light. I was not even weighting the results by the cosine of the diffuse angle, so the calculated colors were probably mostly 255.
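For reference, a minimal sketch of what uniform hemisphere sampling looks like in a local frame where the normal is +z (illustrative; the direction still has to be rotated into world space, and the cos(theta) factor has to be applied in the estimator):

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };
static const float PI = 3.14159265358979f;

// Uniform hemisphere sampling: constant pdf of 1 / (2 * pi) over the
// hemisphere around the +z axis.
Vec3 sampleHemisphereUniform(std::mt19937& rng, float& pdf) {
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float xi1 = uni(rng), xi2 = uni(rng);

    float cosTheta = xi1;  // uniform in solid angle over the hemisphere
    float sinTheta = std::sqrt(std::fmax(0.0f, 1.0f - cosTheta * cosTheta));
    float phi = 2.0f * PI * xi2;

    pdf = 1.0f / (2.0f * PI);
    return {sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta};
}
```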
I tried this with the Jaroslav Path Glass scene:
Then I made a couple of changes to how I was handling the lights, because for next event estimation I would need to sample lights similarly to ray tracing, and the code I had written was not suitable for that. In summary, I created a light class and made the lights inherit from it.
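The rough shape of that refactor, with an illustrative interface (my actual one carries more parameters):

```cpp
struct Vec3 { float x, y, z; };

// Common Light base class so that next event estimation can ask any light for
// a point to send a shadow ray to, plus the radiance arriving from it.
class Light {
public:
    virtual ~Light() = default;
    virtual Vec3 samplePoint(float& pdf) const = 0;
    virtual Vec3 radianceTowards(const Vec3& shadedPoint,
                                 const Vec3& lightPoint) const = 0;
};

class PointLight : public Light {
public:
    Vec3 position, intensity;

    Vec3 samplePoint(float& pdf) const override { pdf = 1.0f; return position; }

    Vec3 radianceTowards(const Vec3& p, const Vec3&) const override {
        float dx = position.x - p.x, dy = position.y - p.y, dz = position.z - p.z;
        float d2 = dx * dx + dy * dy + dz * dz;   // inverse-square falloff
        return {intensity.x / d2, intensity.y / d2, intensity.z / d2};
    }
};

// Area lights, environment lights and object lights would override the same
// two methods, so the path tracer can treat every Light* uniformly.
```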
My result was off in its overall colors, and I - as always - suspected that my diffuse path tracer was either sampling incorrectly or calculating the light contributions incorrectly. Later, I would see that I was both right and wrong about that.
I made edits to the code where I thought my result had problems: I added the correct multipliers to the incoming environment light contributions (cos(theta)), I divided by the distance from the light source for correct attenuation... However, the real problem was not any of those, even though these edits helped me correct some problems in the path tracing code: I was not sampling uniformly before, and now I thought I was. Later, I would see that the corrected sampling method was not uniform sampling, but importance sampling :D
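To make the distinction concrete, here is a sketch of cosine-weighted hemisphere sampling, which is, as far as I can tell, what my "corrected" code was effectively doing; its pdf is cos(theta)/pi rather than 1/(2*pi), so treating it as uniform sampling biases the estimate.

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };
static const float PI = 3.14159265358979f;

// Cosine-weighted hemisphere sampling around the +z axis: the direction
// density is cos(theta) / pi, so the estimator must divide by that pdf, not
// by the uniform one.
Vec3 sampleHemisphereCosine(std::mt19937& rng, float& pdf) {
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float xi1 = uni(rng), xi2 = uni(rng);

    float cosTheta = std::sqrt(xi1);        // concentrates samples near the normal
    float sinTheta = std::sqrt(std::fmax(0.0f, 1.0f - xi1));
    float phi = 2.0f * PI * xi2;

    pdf = cosTheta / PI;                    // the correct pdf for this strategy
    return {sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta};
}
```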
This is an example scene showing what happens if the rays hitting object lights only get the maximum contribution when the angle of intersection is perpendicular.
After a break and some more thinking about what could be the reason for the problem, I thought maybe the problem was not in the path tracer but in the tone mapper.
I have been developing this program on Kubuntu, an Ubuntu flavor with KDE. Kubuntu comes with additional useful programs; one of them is an .exr viewer. After bringing the path tracing source code into a state which I thought was correct, I made the program output .exr images. Then I viewed the images using Gwenview (or Okular):
Apart from the fact that the image is darker, the overall image seems correct. Since we cannot pass tone mapping parameters to these viewers, I believe the images being darker overall can be tolerated. The image tones seem similar to the desired ones. I will check my tone mapping algorithm later to see what is wrong (it might be that I am still using uniform random sampling on the pixels).
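For completeness, dumping the raw radiance buffer to .exr is only a few lines with a single-header library; the sketch below assumes the tinyexr library is available in the include path (any EXR writer would do, the point is just to bypass my own tone mapping).

```cpp
#define TINYEXR_IMPLEMENTATION
#include "tinyexr.h"
#include <vector>

// Writes the raw, non-tone-mapped radiance buffer (width * height * 3 floats)
// to an .exr file so external viewers such as Gwenview can display it.
bool writeExr(const std::vector<float>& rgb, int width, int height,
              const char* path) {
    const char* err = nullptr;
    int ret = SaveEXR(rgb.data(), width, height, /*components=*/3,
                      /*save_as_fp16=*/0, path, &err);
    if (ret != TINYEXR_SUCCESS) {
        if (err) FreeEXRErrorMessage(err);  // tinyexr allocates the message
        return false;
    }
    return true;
}
```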
Another scene (tone mapped by me): Cornell Box Jaroslav Sphere Light, the 4th option (just raw path tracing): [left: mine, right: desired]
I skipped the next event estimation part: I implemented it, but the lights still had problems, so I do not know whether it works or not. So far, I think my path tracer is close to correct.
After that, I decided to implement splitting, which was easy to add. This is the comparison of splitting versus not splitting in my implementation. Unfortunately, I was not able to measure the time for each, but as far as I remember, splitting took a little longer. Left is without splitting.
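Conceptually, splitting just means spawning several indirect rays at a bounce and averaging their estimates; a small sketch of that idea, with the indirect sampling and the rest of the path evaluation passed in as callables (illustrative names, not my exact code):

```cpp
#include <functional>
#include <random>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, direction; };

// Instead of continuing the path with one indirect ray, spawn `splitCount`
// rays at this bounce and average their estimates. `sampleIndirect` produces
// one bounced ray and `trace` evaluates the rest of the path for it.
Vec3 indirectWithSplitting(int splitCount, std::mt19937& rng,
                           const std::function<Ray(std::mt19937&)>& sampleIndirect,
                           const std::function<Vec3(const Ray&)>& trace) {
    Vec3 sum{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < splitCount; ++i) {
        Vec3 li = trace(sampleIndirect(rng));
        sum.x += li.x; sum.y += li.y; sum.z += li.z;
    }
    float inv = 1.0f / static_cast<float>(splitCount);
    return {sum.x * inv, sum.y * inv, sum.z * inv};
}
```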
At the time of submission, my path tracer was not working with glass objects: they looked all black. However, after the submission I checked my code to see what was wrong. The only problem was an old label (and a goto statement) placed after the path tracing computations. When I moved the statement to just before the path tracing, the problem was solved. This is the result:
Conclusion
In this homework, I managed to develop a simple path tracer. However, I could not implement Russian roulette, next event estimation, or importance sampling (I was doing importance sampling without knowing it, but the samples were not being divided by the correct probability value, since I thought I was sampling the hemisphere uniformly). I was unable to render the two bigger scenes, Sponza and Veach Ajar. I could not implement any BRDFs. At one point, I had to choose what to implement: BRDFs or path tracing. I thought path tracing was more important.
I made a simple debugger that can be useful for debugging path tracing implementations, which I explained above. For example, here is an empirical and visual sign that I am not sampling the rays uniformly: