Programming a Raytracer
It's a great advantage to understand how 3D software works "under the hood", especially if you have rendering problems or are trying to come up with a new look or effect. But the jargon used in this branch of mathematics makes it hard to get started, so I thought I'd try to explain a raytracer in non-technical terms. I hope you find it useful.
In the early nineties, I did some renders using commercial 3D software on the Amiga. The images I saw from Pixar and Siggraph looked so much better. How did they do it? After reading about vectors in high school, it dawned on me that the math in my textbooks could be used to create 3D images. I realized that if I programmed a renderer myself, I could add the features that the renderers available to me were missing!
The core of a raytracer
The core of a raytracer is beautifully elegant and simple; you need only basic vector math and programming skills to build one.
Raytracing, as the name suggests, is a way of calculating an image by tracing (shooting off) rays into a virtual scene. A ray is essentially a vector: a two- or three-dimensional line with a direction.
You can think of the image or screen as a grid of points (the pixels). You put that screen grid in front of the scene you have set up (just a sphere in the example illustration), and you place the camera on the other side of the scene. Mathematically, the camera is only a point, a position coordinate.
You shoot a ray (or a vector) from the position of the camera to the first pixel on your screen. You then check if the continuation of that vector hits the sphere. If it hits, great! That pixel should then be the color of the sphere. As you see above, the green pixel hits the sphere, while the red one misses it.
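The hit test described above comes down to solving a quadratic equation: substitute the ray into the sphere equation and check whether a real solution exists. Here's a minimal sketch in Python (the function and variable names are my own, not from the original Amiga program):

```python
import math

def hit_sphere(ray_origin, ray_dir, center, radius):
    """Return the distance along the ray to the nearest hit, or None on a miss."""
    # Vector from the sphere center to the ray origin
    oc = [o - c for o, c in zip(ray_origin, center)]
    # Coefficients of the quadratic a*t^2 + b*t + c = 0
    a = sum(d * d for d in ray_dir)
    b = 2.0 * sum(o * d for o, d in zip(oc, ray_dir))
    c = sum(o * o for o in oc) - radius * radius
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        return None  # no real solution: the ray misses the sphere
    t = (-b - math.sqrt(discriminant)) / (2 * a)
    return t if t > 0 else None

# A ray shot straight down the z-axis toward a sphere centered at z = 5
print(hit_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # hits at distance 4.0
print(hit_sphere((0, 0, 0), (0, 1, 0), (0, 0, 5), 1.0))  # None: misses
```

If the discriminant is negative, the ray never touches the sphere and the pixel gets the background color; otherwise the smaller positive root is the nearest hit point.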
But you rarely want a solid color; you want it to look like the sphere is lit by something. You need a light in the scene, which is also just a coordinate position. Trace a ray from the point where you hit the sphere to the light position, and with some more math you can figure out the light intensity at that surface point. See the illustration below.
To calculate the light you need something called the normal, which tells you which way is up on the surface. On a sphere this is the direction from the center of the sphere out through the surface point. The angle between the normal and the light direction determines how much light that point receives. As you see, the smaller angle (yellow) will be brighter than the larger angle (red). It's simple!
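In code, "smaller angle means brighter" is just a dot product: the dot product of two normalized vectors equals the cosine of the angle between them. A minimal Python sketch of this shading step (names are illustrative, not from the original source):

```python
import math

def normalize(v):
    """Scale a vector to length 1."""
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def diffuse_intensity(hit_point, sphere_center, light_pos):
    """Brightness falls off with the angle between the surface
    normal and the direction toward the light (Lambert shading)."""
    # Normal: points from the sphere center out through the hit point
    normal = normalize(tuple(h - c for h, c in zip(hit_point, sphere_center)))
    # Direction from the hit point toward the light
    to_light = normalize(tuple(l - h for l, h in zip(light_pos, hit_point)))
    # Dot product = cosine of the angle; clamp so back-facing points stay dark
    return max(0.0, sum(n * l for n, l in zip(normal, to_light)))

# Light directly above the top of a unit sphere: zero angle, full brightness
print(diffuse_intensity((0, 1, 0), (0, 0, 0), (0, 5, 0)))  # 1.0
```

Multiply the sphere's color by this intensity and you get the shaded look instead of a flat disc.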
And by using the angle between the camera ray and the normal, you can also shoot new rays in the reflection and refraction directions. Using simple formulas, you can start collecting more information about the surface: speculars, shadows and all the other information needed for a realistic image.
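The reflection direction mentioned above has a standard closed-form formula: mirror the incoming direction about the normal. A small sketch, assuming both vectors are already normalized:

```python
def reflect(incoming, normal):
    """Mirror the incoming ray direction about the surface normal:
    R = I - 2 * (I dot N) * N. Both vectors should be normalized."""
    d = sum(i * n for i, n in zip(incoming, normal))
    return tuple(i - 2 * d * n for i, n in zip(incoming, normal))

# A ray heading down at 45 degrees bounces off a floor whose normal points up
print(reflect((1, -1, 0), (0, 1, 0)))  # (1, 1, 0)
```

Shoot a new ray from the hit point in this direction, see what it hits, and mix that color into the pixel: that's a mirror reflection. Refraction works the same way, just with a slightly longer formula (Snell's law) for the new direction.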
The raytracer I made
After having prototyped the formulas on paper, I sat down one Monday afternoon in February 1993 in front of the Amiga. I first wrote a program with just text output, simply to check that my math formulas were returning the same values as they did on paper. The following Friday I managed to write a small program that rendered a shaded red sphere on a blue gradient background. I was stunned and immensely proud: this was magic!
Here's a screenshot of the source code for my simple raytracer. If you skip all the green comments and empty lines, there are just 39 lines of actual code in there. That's how compact the core of a simple raytracer can be!
Later that year Øyvind Bakksjø, a friend of mine, and I wrote a second raytracer in Pascal. I researched the math and he did the programming. All the images on this page are from this second, more advanced raytracer. Although it was limited in what it could render, we had a couple of features I had not seen in any renderers at that time, such as Fresnel reflections, super-bright speculars and render-time boolean operations (which I still miss to this day).