Ray Tracing Homework 5
Hello. In this post, I will summarize my work on Homework 5.
Part 1: Clattering Along on Homework 5
Bump Mapping
I started this homework by editing the code just enough to start visualizing my bump mapping approach. I could not finish bump mapping; I only started it by producing a false visualization of objects distorted according to the image, namely the ones with a bump-mapped texture.
Tone Mapping v1
Following the same philosophy as before, I first wrote a simple, biased tone mapping technique before implementing the bare-minimum correct one. I also partially implemented the spherical directional light: it acts only on the rays that escape into the void (the background). The basic tone mapping technique was to divide every channel of the image by the average per-pixel luminance, i.e. the mean over all (x, y) of f(x, y) = (r_xy + g_xy + b_xy) / 3.
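In code, the v1 operator is roughly this minimal sketch (`Color` and the function name are illustrative, not my actual class layout):

```cpp
#include <vector>

struct Color { float r, g, b; };

void toneMapV1(std::vector<Color>& image) {
    double sum = 0.0;
    for (const Color& c : image)
        sum += (c.r + c.g + c.b) / 3.0;      // f(x, y) for this pixel
    const double avg = sum / image.size();    // average over all pixels
    for (Color& c : image) {                  // divide every channel by it
        c.r /= avg;
        c.g /= avg;
        c.b /= avg;
    }
}
```

This was the resulting initial version of the aforementioned scene: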
Profiling and Acceleration Structure Issues
After that, I switched to the homework's Car Scene. I had not implemented texture mapping on triangles yet, so the virtual function was returning (0,0) as the image coordinates. Still, the program should have produced some output, yet even one horizontal line was taking too long to render. The homework implied that some scenes may require our acceleration structures to be optimized enough to render in a reasonable time. Even though I started this homework 10 days before the last (late) due date, I had other homework and some graduation project work due by the same date. That is why I was in a rush and could not even wait to see the unoptimized outputs.
.ply Files
I gave up after two sleepless hours with no success in even parsing the Car Scene, especially after realizing that the new "parser" was now giving segmentation faults on the old scenes where it used to work. I decided to beat my "readme phobia" and read the manual of one of the recommended libraries that parse .ply files: Happly. After learning what to do, I wrote the code accordingly and was able to parse the different formats.
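The core of it is only a few lines. Roughly what my parser does now (the file path is whatever the scene file references):

```cpp
#include <array>
#include <string>
#include <vector>
#include "happly.h"

void loadPly(const std::string& path) {
    happly::PLYData plyIn(path);
    // One xyz triple per vertex:
    std::vector<std::array<double, 3>> vPos = plyIn.getVertexPositions();
    // Vertex indices of each face, regardless of ascii/binary format:
    std::vector<std::vector<size_t>> fInd = plyIn.getFaceIndices<size_t>();
}
```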
Performance Issues
Only after being able to parse the Car Scene correctly (without segmentation faults) did I realize that my program was not efficient enough to render the scene. I remembered that I had not yet applied a top-level acceleration structure: the program was building an inefficient acceleration structure over each mesh, and that was all. This had worked in the past scenes only because their mesh count was small. This scene has around 300 meshes, which meant every ray was tested against all 300 of them. I decided to build a top-level acceleration structure by copying and modifying the code that builds the mesh-level one. This made the scene render fast enough that I could see some lines being rendered. However, the whole scene was still infeasible for me to wait for, and the rendered lines were all black. Since I had not yet implemented the real tone mapping, I decided the scene was not worth waiting for: the result might be all black simply for lack of a suitable tone mapping operator.
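The idea of the top-level structure, in sketch form (the types are stand-ins for my own; only the traversal logic matters here):

```cpp
struct Ray; struct Hit;                        // assumed from the rest of the tracer

struct Aabb {                                   // axis-aligned bounding box
    bool intersects(const Ray& r) const;        // slab test, assumed implemented
};

struct Mesh {
    struct Bvh { bool intersect(const Ray&, Hit&) const; } bvh;  // per-mesh BVH
};

// Leaves of the top-level BVH are whole meshes: a ray first culls meshes by
// their world-space bounds, then descends into the per-mesh BVH of survivors.
struct TopLevelNode {
    Aabb bounds;                                // union of the subtree's mesh bounds
    TopLevelNode* left  = nullptr;
    TopLevelNode* right = nullptr;
    Mesh* mesh = nullptr;                       // non-null only at leaves
};

bool intersectTop(const TopLevelNode* n, const Ray& ray, Hit& hit) {
    if (!n || !n->bounds.intersects(ray)) return false;
    if (n->mesh)                                // leaf: defer to the mesh-level BVH
        return n->mesh->bvh.intersect(ray, hit);
    bool l = intersectTop(n->left, ray, hit);   // hit keeps the nearest-so-far
    bool r = intersectTop(n->right, ray, hit);
    return l || r;
}
```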
Temporarily giving up on a complex scene did not mean doing nothing about it. I suspected that the algorithm's performance had regressed, so I tried scenes from past homeworks to measure their render times. My suspicion was correct: even the bunny scene was taking a minute to render. I knew the inefficiency would not be impossible to find and fix if I analyzed the source code I had been developing for two months; however, that would take a whole week, and I would not be able to implement the HW5 algorithms. That is why I decided to try profiling my program. Profiling is probably done by halting the program at regular intervals (around 30 ms) and noting which function the current program counter is in. This also disables some compiler optimizations that would otherwise discard the data the profiler needs.
When I enabled the profiling flags, the program was visibly slower. I inspected the collected profiling data and found that the function the program spent most of its time in was vector3::operator+. This was unexpected, since I thought the inefficiency was somewhere else. The operator only makes a function call into Intel's library, so I did some research to find the likely reason. It turned out I was separating the function's implementation from its declaration; since link-time optimizations were turned off, the compiler could not inline the function properly.
I turn link-time optimization off during development, because it takes long and I do not like waiting through compilation on every change. And it is known that even with link-time optimization enabled, inlining may only be partially optimal. So I moved the definition into the header file and marked the function as inline.
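In sketch form (my real vector3 has more members; in-class definitions are implicitly inline):

```cpp
// Before: declared in vector3.h, defined in vector3.cpp, so without LTO every
// addition was a real function call. After: defined in the header, where the
// compiler can inline it at every call site.
struct vector3 {
    float x, y, z;

    vector3 operator+(const vector3& o) const {
        return { x + o.x, y + o.y, z + o.z };
    }
};
```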
The second bottleneck was ray-triangle intersections. On the web, it is stated that ray-box intersections usually dominate ray-triangle intersections. In my case it was the opposite, which is a sign that the acceleration phase can be improved. I will probably not be able to work on that in this homework, but I added fixing it to my list for the next homeworks.
BVH construction was taking too much time, so I was cutting it off partway down the tree. I talked to a friend about this and we exchanged notes: he builds the tree bottom-up, whereas I divide the triangles top-down. I will try a bottom-up approach the first chance I have, to see the difference. I think the bottom-up approach will be safer, because it guarantees that the leaves line up with the triangles. I am stopping this discussion here, since the theory of acceleration structures is not relevant to this homework's topic.
Tone Mapping v2
The colors were weird, and I thought it was because I had not implemented the correct tone mapping. I would only find the real reason later: I had experimented with a gamma "operation" (I cannot say correction here, because I was just trying to see the effect of raising the color channels to a power) and had forgotten that code in the raytracer.cpp file. I will be showing *color-inverted* images here (I usually put the images in chronological order), because I only saw the leftover code days after this problem occurred.
It was not long before I noticed that the middle horizontal line in the image was mistakenly mapped to the poles of the sphere. So I fixed that:
Then I had to change my tone mapping algorithm, because the scene came out too dark when some parts of the texture were visible. Some of the lights were too bright, and my basic tone mapping algorithm could not display the image correctly. Unfortunately I forgot to take a screenshot of the problem, but I can explain it relative to the next image.
Let a channel value be v. The program now outputs 255v/(1+v) instead of v:
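In code, a minimal sketch of this mapping:

```cpp
// Per channel: v -> 255 * v / (1 + v). Bright values compress toward 255
// instead of clipping, so dimmer details like the windows stay visible.
inline unsigned char toneMapV2(float v) {
    return static_cast<unsigned char>(255.0f * v / (1.0f + v));
}
```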
In the image you can see the windows under the lights. With my tone mapping v1, only the lights (seen in red at the north pole) were visible. Now the windows are also visible.
Also, the texture was mapped upside down onto the sphere; I corrected that too.
Texture Mapping
Then I also changed the sphere-ray intersection algorithm. Until now, only the centers of the spheres were being transformed, so the shape was always spherical. Now the rays are transformed during the intersection calculations.
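Roughly, the new intersection goes like this (a sketch: Sphere, Ray, Hit, and the matrix/vector helper names are stand-ins, not my exact code):

```cpp
// Transform the ray into the sphere's local space with the inverse transform,
// intersect the untransformed sphere there, then map the hit back.
bool intersectTransformed(const Sphere& s, const Ray& worldRay, Hit& hit) {
    Ray local;
    local.origin    = s.inverse.transformPoint(worldRay.origin);
    local.direction = s.inverse.transformVector(worldRay.direction); // keep unnormalized
    if (!s.intersectLocal(local, hit))
        return false;
    hit.point  = s.transform.transformPoint(hit.point);              // back to world
    hit.normal = normalize(s.inverseTranspose.transformVector(hit.normal));
    return true;
}
```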
I also had not implemented texture mapping on triangles; the function returning u-v coordinates always returned (0,0). So the Head Env. Light image was all yellow:
Since the scene was taking too much time to render, I switched to a small scene from HW4: Plane Bilinear. My initial texture mapping implementation computed any point's u-v coordinates as a weighted average of the corners', with the weights being the distances from the point to the corners.
Several problems here:
- The texture is not linearly aligned with the edges. The reason is that my algorithm is just an approximation of true barycentric coordinates.
- The edges are mapped differently in adjacent triangles. The reason is the same: on an edge, the *weight* of the opposite vertex must be 0, but here it was non-zero.
- The color is too dark. The reason is that I was dividing the bilinear color sum by 4. The sum is already a weighted average of 4 pixels, so the extra division dims the image.
Then I fixed a bug in the texture indices so that the same vertex always maps to the same texture index:
And this is when I got the barycentric coordinates right (left), next to the reference image (right):
The reason my image looks noisier (but less aliased) is that I still handle the default case with 1-sample-per-pixel random sampling. This produces noisy images; however, the noise partly cancels the aliasing.
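For reference, the barycentric computation I ended up with is, in essence, the standard one (a sketch using the tracer's vector3 and an assumed dot helper):

```cpp
// p, a, b, c are points of one triangle; alpha = 1 - beta - gamma
// is the weight of vertex a.
void barycentric(const vector3& a, const vector3& b, const vector3& c,
                 const vector3& p, float& beta, float& gamma) {
    vector3 v0 = b - a, v1 = c - a, v2 = p - a;
    float d00 = dot(v0, v0), d01 = dot(v0, v1), d11 = dot(v1, v1);
    float d20 = dot(v2, v0), d21 = dot(v2, v1);
    float denom = d00 * d11 - d01 * d01;
    beta  = (d11 * d20 - d01 * d21) / denom;   // weight of vertex b
    gamma = (d00 * d21 - d01 * d20) / denom;   // weight of vertex c
}
```

The point's uv is then alpha * uvA + beta * uvB + gamma * uvC, which makes the edge cases come out right: on an edge, the opposite vertex's weight is exactly 0.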
Directional Light
The next step was outputting totally desaturated images, with each channel set to the tone-mapped luminance:
And this output is from after I fixed the green-image issue, which was caused by a forgotten experimental pre-gamma operation in my code:
I was not using any lambda in the calculations (probably why some of my output images have different colors), and that is probably why these white portions of the image were generated. I handled this edge case with a simple if statement:
As you can see, the image has a contrast problem. It seems to be caused by some saturation issue. At the time I was not aware of the cause (I did not have time to debug it), but I now see that it is because I did not apply the de-gamma transform. I forgot to integrate it, and it is too late to fix at the time of writing this post.
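For completeness, what the missing step would look like (a sketch using the simple power approximation of sRGB):

```cpp
#include <cmath>

inline float deGamma(float srgb) {    // srgb channel in [0, 1]
    return std::pow(srgb, 2.2f);      // to linear light, before shading math
}

inline float toGamma(float linear) {  // applied once, when writing the output
    return std::pow(linear, 1.0f / 2.2f);
}
```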
Part 2: Some Fixes and Patches
Old Scenes
First, I had not tried the program on the old scenes for a while. I gave Cornellbox a chance, and the result was...:
It was the same problem I had faced in the beginning: the shadow rays were intersecting objects beyond the light sources, producing undesired shadows (depending on the angle). The fix is simple; however, to help render similar scenes, I decided to improve the debugging logic first. Whenever you click on a pixel, the trace of the ray sent through that pixel is printed. If the position is in shadow, the output tells which object (by id) is shadowed by which object (by id). I did not know it at the time, but that addition would prove useful right away:
So, I solved the cornellbox scene.
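The fix itself, in sketch form (scene, Ray, Hit, and the vector helpers are assumed names): ignore shadow-ray hits whose t lies beyond the light.

```cpp
bool inShadow(const vector3& point, const vector3& lightPos) {
    vector3 dir    = normalize(lightPos - point);
    float   tLight = length(lightPos - point);    // distance to the light
    Ray shadowRay{ point + 1e-4f * dir, dir };    // epsilon avoids self-shadowing
    Hit hit;
    // A hit with t >= tLight is *beyond* the light source: not a shadow.
    return scene.intersect(shadowRay, hit) && hit.t < tLight - 1e-4f;
}
```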
Working Example of Texture Mapping
It was upside down, so I fixed the orientation:
Then I tried to correct the horizontal orientation of the scene by hand. After adding some offset variables, the result was closer:
Spot Lights
1. Basic Spot Lights: the intensity suddenly drops to 0 at the coverage angle (left) and at the fall-off angle (right)
2. Linear Dimming
3. Correct Spot Light (linear dimming raised to the power of 4)
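A sketch of the final attenuation, as I understand the convention (the function and parameter names are mine):

```cpp
#include <cmath>

// cosTheta is the cosine of the angle between the spot's direction and the
// vector from the light to the shaded point; angles are given in degrees.
float spotFactor(float cosTheta, float coverageDeg, float falloffDeg) {
    const float kDegToRad = 3.14159265f / 180.0f;
    float cosCov  = std::cos(coverageDeg * 0.5f * kDegToRad);
    float cosFall = std::cos(falloffDeg  * 0.5f * kDegToRad);
    if (cosTheta <  cosCov)  return 0.0f;   // outside the coverage cone
    if (cosTheta >= cosFall) return 1.0f;   // inside the fall-off angle
    float s = (cosTheta - cosCov) / (cosFall - cosCov);
    return s * s * s * s;                   // linear dimming to the power of 4
}
```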
Spherical Directional Light Shading
I did not use random rejection sampling in this case. Instead, I fed the random variables through the correct functions for their distributions. Since we want to determine a 3D direction, 2 random variables are enough (representing 2 degrees of freedom). These variables are 2 angles: one is the angle of the (imaginary) to-be-sampled direction's projection onto the surface's tangent plane; the other is the angle between the sampled direction and the surface normal. This second random variable must be passed through a suitable function to give the desired distribution.
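A sketch of the idea, with uniform hemisphere sampling as the "suitable function" (u, v, n are an assumed orthonormal basis at the surface, n the normal):

```cpp
#include <cmath>

// xi1, xi2 are uniform random numbers in [0, 1).
vector3 sampleHemisphere(const vector3& u, const vector3& v, const vector3& n,
                         float xi1, float xi2) {
    float phi   = 2.0f * 3.14159265f * xi1;  // azimuth in the tangent plane
    float theta = std::acos(xi2);            // angle from the normal, warped so
                                             // directions are uniform over the
                                             // hemisphere
    return std::sin(theta) * (std::cos(phi) * u + std::sin(phi) * v)
         + std::cos(theta) * n;
}
```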
This is the Head Env. Light scene with the sample count reduced to 1. The black holes are probably caused by sampled rays hitting nearby triangles, resulting in shadow.
Increasing the sample count (to 32) reduces that noise.
And here is the first image of scene_pisa generated by my ray tracer:
Conclusion
While I was able to implement every given specification in this homework (probably with some small errors), it is far from finished. My ray tracer still lacks a correct bump mapping and normal mapping implementation. Also, the image textures are mapped correctly for the spherical directional lights, but this causes the textures of the spheres to be inverted. I could not fix this issue in time to show the correct results.

