My Ray Tracing Adventure (Fall 2024)



Over the summer I was making a game (and its engine). The game was two-dimensional, so for some calculations inside the game I created a Vector class for 2D vectors. For this homework, I converted the Vector class to a 3D one.

I defined the other objects in terms of the Vector class; each vertex is a Vector.

I defined an Object class, which is (for now) an abstract base class for the two real scene objects: triangle and sphere. There are many ways to define a 3D triangle, I realized when it came time to write its class definition. My implementation ended up as follows: a triangle has three Vectors. One is the global position of one of its vertices; the other two are the relative positions of its remaining vertices (maybe you could call them edges?).

The Sphere class was straightforward: a Vector for the global center position and a float for the radius.

I decided to define a Ray class and implement the Object intersections with it as its methods. I defined a ray by two Vectors: a starting point and another point on the ray. I could just as well have made the second Vector a direction vector; however, I wanted to give this style of definition a try.
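A minimal sketch of the data layout described above (class and member names here are illustrative, not necessarily the ones in my code):

```cpp
#include <cmath>

// Illustrative sketch of the classes described above.
struct Vector {
    double x, y, z;
    Vector operator+(const Vector& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vector operator-(const Vector& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vector operator*(double s) const { return {x * s, y * s, z * s}; }
    double dot(const Vector& o) const { return x * o.x + y * o.y + z * o.z; }
    double length() const { return std::sqrt(dot(*this)); }
};

// One vertex in global coordinates plus two edge vectors to the other vertices.
struct Triangle { Vector vertex, edge1, edge2; };

// Global center position and a radius.
struct Sphere { Vector center; double radius; };

// A ray defined by two points instead of point + direction.
struct Ray {
    Vector start, through;  // "through" is a second point on the ray
    Vector direction() const { return through - start; }  // derived, not normalized
};
```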

My first meaningful output, after struggling with various bugs, is as follows:

This version of the ray tracer only sent rays out from the camera and, whenever a ray intersected any object, set that pixel to white. Let me go through how I visualized this (I think it is also relevant):

I only needed a visualization because the program was by no means complete. I had used XLib this summer to visualize things, so I decided to use it for rendering too. So, in real time, as the program computes the colors of pixels by sending rays, it shows them in the window.

At first I thought I had made an error, because the sphere looked elliptical. Later, I found out that this is a result of the screen not being spherical (called the FOV effect).

Next, I edited the program so that a pixel gets brighter the closer its intersection point is to the point light. It looked like below.

The next step for me was improving the algorithm:

For every pixel: send a ray.
For every ray: check if it intersects with any object.
  • If so, send a ray towards the light source and check whether it intersects with any object.
    • If so, set the color to the ambient light.
    • Else, set the color to white (to be edited).
  • If not, set the pixel to the background color.
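In code, one iteration of this loop might look roughly like the following sketch. The intersection tests themselves are abstracted away behind the two parameters, since the point here is only the branching logic (the grayscale color values are made up):

```cpp
#include <functional>

// Illustrative colors (grayscale for brevity).
const double BACKGROUND = 0.0, AMBIENT = 0.2, WHITE = 1.0;

// Sketch of the per-pixel decision above. `primaryHit` stands for "the camera
// ray hit some object" and `shadowRayBlocked` for "the ray towards the light
// hit some object"; both stand in for the real intersection tests.
double shadePixel(bool primaryHit,
                  const std::function<bool()>& shadowRayBlocked) {
    if (!primaryHit)
        return BACKGROUND;   // primary ray missed everything
    if (shadowRayBlocked())
        return AMBIENT;      // something sits between the point and the light
    return WHITE;            // lit (to be refined later)
}
```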
The result was as below:

I was not offsetting the shadow rays to prevent false re-intersections, so the image was noisy.

Later on, I edited the code to be able to move the point light at runtime and tested different setups. I realized that if I did not offset the shadow rays, the triangles could even be bright when the light source was behind them:

Then I tried other scenes. The next scene was the Cornell box. The left image is the desired one.



My result is the right one. At that time, for easier and faster debugging, I had limited the screen resolution to 255x255. I was not using the colors of the objects; I was setting pixel colors according to the distance between the intersection point and the point light. However, none of that is enough to explain what is happening here. Later, I found out that the shadow rays were intersecting objects beyond the light source. Because of that, the algorithm was deciding that the point was in shadow (there was no ambient light yet). I then created another class named Linesegment and defined the intersection of objects with a line segment. This solved the problem, since the line segment's endpoint is at the point light.
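A sketch of the segment idea, reconstructed for a sphere: parameterize points as start + t·(end − start) and accept only intersections with t strictly between 0 and 1, so hits beyond the light (t > 1) and self-hits at the surface (t ≈ 0) are both ignored. The epsilon value and helper names are illustrative:

```cpp
#include <cmath>

struct V { double x, y, z; };
static V sub(V a, V b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(V a, V b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Does the segment from `a` to `b` (e.g. hit point -> point light) cross the
// sphere? Solve |a + t(b - a) - center|^2 = r^2 and accept only t in (eps, 1).
bool segmentHitsSphere(V a, V b, V center, double radius, double eps = 1e-4) {
    V d = sub(b, a), oc = sub(a, center);
    double A = dot(d, d);
    double B = 2.0 * dot(oc, d);
    double C = dot(oc, oc) - radius * radius;
    double disc = B * B - 4.0 * A * C;
    if (disc < 0.0) return false;           // the full line misses the sphere
    double s = std::sqrt(disc);
    double t1 = (-B - s) / (2.0 * A), t2 = (-B + s) / (2.0 * A);
    // Only hits strictly between the surface offset and the light count.
    return (t1 > eps && t1 < 1.0) || (t2 > eps && t2 < 1.0);
}
```

A sphere past the segment's endpoint (i.e. behind the light) no longer casts a false shadow.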

Later, I corrected the normals for the spheres and tried another scene.


This scene shows that I was able to detect shadows. So far, I still had not used the given colors.

The next step was adding the cos(theta) term to the diffuse calculations and utilizing the ambient lights. The results were satisfying:
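The cos(theta) term described here is the standard Lambertian one; as a sketch with illustrative scalar quantities (a real shader works per color channel):

```cpp
#include <algorithm>

// Sketch of the diffuse step: the surface receives light in proportion to
// cos(theta) between the normal and the light direction, clamped at zero so
// light from behind the surface contributes nothing; ambient is added on top.
double diffuseShade(double ambient, double lightIntensity, double cosTheta) {
    return ambient + lightIntensity * std::max(0.0, cosTheta);
}
```

The clamp is exactly what prevents the "bright triangle with the light behind it" artifact from earlier.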

I still had not solved the problem with this scene (the solution I explained above was found after this experiment):

The right one is the newer one.

Then I added the diffuse color properties of the materials:

It was here that I realized the problem with the box was about the shadow rays. (I had no idea before that.)

Then, specular:







Note that the scenes I computed at that time were very dark in comparison to the desired images (on the right). I could not find the reason, so I focused on finishing the algorithm instead.

Now I tried to add a mirror object, using the "recursive Cornell box" scene. My first visualizable attempt was like this:

I was using a max recursion depth of 1, so only the next mirror ray was calculated. The result was not satisfying, and I realized that this object had started to act like a weird transparent object instead of a mirror. After analyzing and thinking about where I could possibly have made an error, I realized a couple of things:

  • First, I was not sending the mirror rays with an offset correction. This caused some (most) of the rays to think that they were inside the sphere; the stripe-like pattern on the mirror object was because of that.
  • Second, I was (kind of intentionally) not calculating the intersections correctly. When the ray exits the sphere, it thought the sphere was behind it (it kind of is, but...), thus ignoring that it intersected the sphere at all.
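The two fixes together, as a sketch: reflect the incoming direction about the surface normal, then nudge the new ray's origin slightly along the normal so it cannot immediately re-hit the surface it just left. The epsilon value and names are illustrative:

```cpp
struct V { double x, y, z; };
static V add(V a, V b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static V sub(V a, V b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V mul(V a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(V a, V b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirror direction: r = d - 2 (d . n) n, with n a unit normal.
V reflect(V d, V n) { return sub(d, mul(n, 2.0 * dot(d, n))); }

// Offset the reflected ray's origin along the normal so it does not
// falsely re-intersect its own surface (the "correction" mentioned above).
V correctedOrigin(V hitPoint, V n, double eps = 1e-4) {
    return add(hitPoint, mul(n, eps));
}
```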

I then corrected my rays and achieved a more satisfying result (left: current; right: desired):


Then I realized that the mirror objects also have diffuse colors. Looking at the original image made me check the scene's XML file. I then added the diffuse color for the mirror objects too (below, left: no diffuse; right: diffuse added). To achieve this, I restructured the coloring algorithm in my code.



Then I incorporated the other light sources. Until then, I had only been considering one light source; now I summed the contributions of all of them. The results were as below:

Then I remembered an old problem: the images were too dark. I checked the slides and the code for bugs in my implementation and found none; then I re-read a part of a presentation slide, and it echoed in my mind. I realized that the diffuse and specular colors of objects were supposed to be divided by the square root of the total distance the light travelled until it hit the camera; instead, they were being divided by the distance between the intersection point and the light source. I incorporated this into the program and the results were as follows:

I then tried to add the final material: dielectric. The first result was like this:


I was not expecting the result to be perfect, because I had not yet written the full code, including Fresnel reflection, total internal reflection and the absorption ratio. However, this result was still too far from the desired image. I knew that the black stripes were all about the missing ray offsets, but at that time I could not explain the great difference in the reflections. After a day of not being able to solve the problem, I decided to utilize the graphical interface: I could click anywhere on the screen and send rays at runtime, just by clicking! That was so powerful. I implemented it and clicked to see what happens to the initial ray. I realized that the rays that were not victims of the missing offset were acting as if they had been sent from outside the sphere: I was still not taking into account that the rays originated from inside the sphere.

I solved the problem so that rays also refract from the inside of the sphere:


Then I realized that I was not handling total internal reflection:
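For reference, here is the standard vector form of Snell's law with the total-internal-reflection check; this is the textbook formula, not necessarily the exact code I used:

```cpp
#include <cmath>

struct V { double x, y, z; };
static V add(V a, V b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static V mul(V a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(V a, V b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Snell's law in vector form. `d` and `n` are unit vectors, with `n` pointing
// toward the side the ray comes from; eta = n1 / n2. Returns false on total
// internal reflection, in which case `out` is untouched and the ray should
// be reflected instead of refracted.
bool refract(V d, V n, double eta, V& out) {
    double cosi = -dot(d, n);
    double k = 1.0 - eta * eta * (1.0 - cosi * cosi);
    if (k < 0.0) return false;  // total internal reflection
    out = add(mul(d, eta), mul(n, eta * cosi - std::sqrt(k)));
    return true;
}
```

A head-on ray passes through unchanged, while a steep ray going from a dense medium to a thin one (eta > 1) fails the check and reflects internally.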


Then I added offsets to the refracted rays so that the noise was gone:

The refractions were weird, so I debugged this with the mouse and realized that I was miscalculating the internally reflected rays: I was giving them the same offsets as if they had been refracted, and also miscalculating their directions. I then attempted to solve this problem without much hope, and this happened!:

The images are not the same, but you can see that they are very similar! I was finally able to solve the problem... Now, if you take a closer look, you will notice that the dielectric object also reflects some of the light. I checked the slides and the internet for the correct formulas to add this to the program. However, before that, let us recall a weird, unmentioned problem with this scene:
The desired image is on the right. The reflections were far too bright! I wanted to resolve this. I found out that when triangles become mirrors, I was miscalculating their reflected angles, and this was causing the blooming effect. I corrected the problem and the results were better:

Then I tried the science tree scene and got good results:

Then I switched to the real resolutions:

After that, I tried the glass science tree scene:

It looked good; however, the glass material was too different from the desired image (on the bottom). The reason was that I had not calculated the Fresnel reflections and total internal reflections. I then added the related formulas and logic, and the results were like this:
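One common way to get the reflection/refraction split is Schlick's approximation to the Fresnel equations; I cannot say this is the exact formula I used, but it is the standard sketch:

```cpp
#include <cmath>

// Schlick's approximation to the Fresnel reflectance: the fraction of light
// that reflects rather than refracts, given the cosine of the incidence angle
// and the two refractive indices. The remaining (1 - R) fraction refracts.
double schlick(double cosTheta, double n1, double n2) {
    double r0 = (n1 - n2) / (n1 + n2);
    r0 *= r0;
    return r0 + (1.0 - r0) * std::pow(1.0 - cosTheta, 5.0);
}
```

Grazing rays (small cosTheta) reflect far more than head-on ones, which is what makes glass edges look mirror-like.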


The image on the right is the desired image. I realized I had accidentally swapped the ratios of reflection and refraction and corrected the code. The results were finally closer to the ideal:

However, the glass science tree was still off from the original science tree. I could not solve this problem and tried another scene:

The one on the right is the correct one. After this image, I realized that I was adding up the full lighting for every light source. This means I was multiplying the ambient light and the background light! This must not be done! So I created a "fake" ray to precalculate the background and ambient lighting once, and then, for each light source, sent real rays to calculate the mirror, dielectric, diffuse and specular lighting. Later, I would realize that this was a bad idea. However, the results were good for the berserker scene:
 
The one on the right is the expected image; however, I guess we are not supposed to do smooth shading, so the image is still different, but near ideal.
Above is the tower scene, finally correct after my fix; the one on the right is from before the fix.

The Chinese dragon example was going to take around 6 hours; I did not have the time, so I terminated it.

The other_dragon.xml was going to take even longer...

The lobster was impossible.

I guess I did not write a good ray tracer; these timings are really not normal. The program is not (yet) memory-bound either, so I must have written a really suboptimal program. Also, because of my lack of time, I could not really verify whether I was loading the .ply files correctly. The ton_Roosendaal scene was really weird, and I doubt the other .ply files are loading correctly.


Conclusion

After all errors were corrected, I had only half a day left to finalize my code to output PNG values, get rid of the XLib-specific parts, and write this blog. I thought I had managed to achieve the final result. Then, while writing this blog, I realized that while correcting the last problem I had made a big mistake: I was not considering ambient and background lights during the mirror and dielectric object calculations. This results in darker lighting on mirror and dielectric objects, as seen here (I did not have time to find this out before submitting):
My submitted algorithm is as follows:
  • For each pixel
    • Send a "fake" ray to calculate the ambient and background lighting
    • Then for each point light, send a ray to calculate the mirror, dielectric, diffuse and specular lighting, using the same function but with the background and ambient lighting set to 0
Since this approach dismisses ambient light during the dielectric and mirror calculations, it produces these dark dielectric objects. The ideal algorithm would be:
  • For each pixel
    • Send a ray to calculate mirror, dielectric, background and ambient lighting
    • Then for each point light
      • Send a ray to the point light and add its contribution to the diffuse and specular lighting
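The difference between the submitted and ideal algorithms can be sketched with toy scalar contributions (the values are made up): ambient and background must be added exactly once per pixel, while only the per-light terms are summed over the point lights.

```cpp
#include <vector>

// Toy sketch of the ideal loop: ambient/background once per pixel,
// per-light diffuse+specular contributions summed over the point lights.
double shadeIdeal(double ambientPlusBackground,
                  const std::vector<double>& perLightContribution) {
    double color = ambientPlusBackground;   // added exactly once
    for (double c : perLightContribution)
        color += c;                         // one shadow ray per light
    return color;
}

// The bug described above effectively re-added the ambient and background
// term inside the loop, once per light:
double shadeBuggy(double ambientPlusBackground,
                  const std::vector<double>& perLightContribution) {
    double color = 0.0;
    for (double c : perLightContribution)
        color += ambientPlusBackground + c; // ambient multiplied by #lights
    return color;
}
```

With two lights, the buggy version over-brightens by one extra copy of the ambient+background term.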
I did not have time to implement this one for this homework. However, I came up with a middle-ground solution, and the "science tree glass" scene below is its result. My partially correct algorithm is as follows:
  • For each pixel
    • Send a "fake" ray to calculate the ambient, background, dielectric and diffuse lighting
    • Then for each point light, send a ray to calculate the mirror, dielectric, diffuse and specular lighting, using the same function but with the background and ambient lighting set to 0
Note that in the last algorithm the dielectric and diffuse lighting are calculated twice. However, the second call only really contributes the diffuse lighting, so the overall result may still be correct, or close to it.

Okay, here are some of the final results after I partly fixed this issue. Thanks for reading!
I really do not know why this output looks like that... I think my .ply reading is a little problematic...






