I've written several raytracers and rasterizers that are smaller than 200 lines of C++, though quite likely they're worse pedagogically than Jacco's (slashdotted, but available at https://web.archive.org/web/20220615174927/https://jacco.omp...) tutorial, and they also don't illustrate useful optimizations. Hopefully, what mine lack in cluefulness and performance they make up in breadth, diversity, and brevity: they are written in C, C++, Python, JS, Lua, and Clojure, with output to JPEG files, PPM files, X11, the Linux framebuffer, ASCII art, Unicode Braille art, and the browser <canvas>.
· http://canonical.org/~kragen/sw/aspmisc/my-very-first-raytra... 184 lines of C, including vector arithmetic, input parsing, and PPM output. I'm not sure what you mean by "excluding the headers" — this one doesn't have any headers of its own (why would a 200-line program have headers of its own? Are you on a Commodore 64 such that the compilation time for 200 lines of code is so high that you need separate compilation?) but it #includes math.h, stdio.h, stdlib.h, and string.h, which total almost 1800 lines of code on my machine and presumably 15× that by the time you count their transitive includes.
· http://canonical.org/~kragen/sw/dev3/circle.clj 39 lines of Clojure, including the model, which is a single sphere; it uses java.awt.image for JPEG output. About half of the code is implementing basic vector math by hand. A minified version is under 1K: http://canonical.org/~kragen/sw/dev3/raytracer1k.clj
· https://gitlab.com/kragen/bubbleos/blob/master/yeso/sdf.lua 51 lines of Lua for an SDF raymarcher including animation, the model itself, and live graphical output. SDFs are cool because it's often easier to write an SDF for some shape than to write code to evaluate the intersection of an arbitrary ray with it. This one runs either in X-Windows, on the Linux framebuffer, or in an unfinished windowing system I wrote called Wercam.
I feel like basic raytracing is a little simpler than basic rasterizing, but I don't think the difference is hugely dramatic:
· http://canonical.org/~kragen/sw/torus is a basic rasterizer in 261 lines of JS, which is larger than the three raytracers I mentioned above, but about 60% of that is 3-D modeling rather than rendering, and another 5% or so is DOM manipulation. On the other hand, one of the great things about raytracing is that if you want to raytrace a sphere or torus or metaballs or whatever, you don't need to reduce them to a huge pile of triangles; you can just write code to evaluate their surface normals and intersect a ray with them, and you're done.
· http://canonical.org/~kragen/sw/netbook-misc-devel/rotcube.p... The smallest I've been able to get a basic rasterizer down to, 15 lines of Python, just rotating a point cloud, without polygons. You might argue that rotating a point cloud is stupid because it doesn't look very 3-D, but Andy Sloane's donut.c does okay by just having a lot of points and applying Lambertian shading to the points in the point cloud: https://www.a1k0n.net/2011/07/20/donut-math.html. If your point cloud is generated by intersecting a field of rays with some object, its density variation will approximate the Lambertian brightness of the object as illuminated by that ray field.
· http://canonical.org/~kragen/sw/dev3/rotcube.cpp in C++ rotating an ASCII-art pointcloud is 41 lines; and
· http://canonical.org/~kragen/sw/dev3/braillecube.py with wireframes in Braille Unicode art it's 24 lines of Python, but that's sort of cheating because it imports a Braille Unicode art library I wrote that's another 64 lines of Python. Recording at https://asciinema.org/a/390271.
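The point-cloud approach stays small because rendering reduces to rotate, project, plot. Here's a minimal sketch of that flavor of rasterizer in Python — my own toy, not the linked rotcube.py, and all the names are made up:

```python
import math

def rotate_y(p, theta):
    # Rotate a point around the y axis.
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x + s * z, y, -s * x + c * z)

def project(p, width=80, height=24, scale=8, camera_z=4):
    # Perspective-project a camera-space point onto a character grid.
    x, y, z = p
    z += camera_z                      # push the model in front of the camera
    return (int(width / 2 + scale * x / z),
            int(height / 2 - scale * y / z))

def render(points, theta, width=80, height=24):
    # Plot each projected point into an ASCII framebuffer.
    grid = [[' '] * width for _ in range(height)]
    for p in points:
        col, row = project(rotate_y(p, theta), width, height)
        if 0 <= row < height and 0 <= col < width:
            grid[row][col] = '*'
    return '\n'.join(''.join(row) for row in grid)

# A cube's corners as a (tiny) point cloud:
cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
print(render(cube, 0.5))
```

With more points per edge (or a whole solid of points, donut.c-style) the perspective foreshortening makes it read as 3-D.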
So I think that the core of either a (polygon!) rasterizer or a raytracer, without optimizations, is only about 20 lines of code if your ecosystem provides you with the stuff around the edges: graphical display (or image file output), model input, linear algebra, color arithmetic. If you have to implement one or more of those four things yourself, it's likely to be as big as the core rasterizer or raytracer code.
For a polygon rasterizer, it's something like:
    tpoints = [camera_transform @ point for point in points]
    framebuffer.fill(background)
    painter = lambda poly: min(tpoints[i].z for i in poly.v)
    for poly in sorted(polys, key=painter, reverse=True):  # far-to-near
        normal = tpoints[poly.normal]
        if normal.z > 0:  # backface removal, technically an optimization
            continue
        p2d = [(p.x / p.z, p.y / p.z) for p in [tpoints[i] for i in poly.v]]
        lambert = normal.dot(light_direction)
        color = min(white, max(black, lambert * light_color + ambient))
        framebuffer.fill_poly(p2d, color)
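That sketch leans on framebuffer.fill_poly, which your ecosystem may or may not provide. If it doesn't, a minimal even-odd scanline fill is roughly this (my own sketch, not from any of the programs linked above; here `framebuffer` is anything indexable by (col, row)):

```python
def fill_poly(framebuffer, p2d, color):
    # Scanline fill: for each row, find where the polygon's edges cross
    # that scanline, then fill between successive crossings (even-odd rule).
    ys = [y for _, y in p2d]
    for row in range(int(min(ys)), int(max(ys)) + 1):
        xs = []
        for (x0, y0), (x1, y1) in zip(p2d, p2d[1:] + p2d[:1]):
            if (y0 <= row) != (y1 <= row):   # edge crosses this scanline
                xs.append(x0 + (row - y0) * (x1 - x0) / (y1 - y0))
        xs.sort()
        for left, right in zip(xs[::2], xs[1::2]):
            for col in range(int(left), int(right) + 1):
                framebuffer[col, row] = color
```

The `(y0 <= row) != (y1 <= row)` test guarantees y1 != y0 at each crossing, so the interpolation never divides by zero; it's maybe ten more lines to interpolate z and colors along each span if you want a z-buffer instead of the painter's algorithm.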
While a Whitted-style raytracer is more like this:
    for yy in range(framebuffer.height):
        for xx in range(framebuffer.width):
            ray = vec3(xx, yy, 1).normalize()  # (really you'd center: xx - width/2, etc.)
            hits = [(o, o.intersect(ray)) for o in objects]
            hits = [(o, p) for o, p in hits if p is not None]
            if hits:
                o, p = min(hits, key=lambda t: t[1].z)  # nearest
                framebuffer[xx, yy] = o.shade(p)
            else:
                framebuffer[xx, yy] = background
But this presumes you've previously transformed the objects into camera space, it leaves .intersect and .shade to be defined (potentially separately for each object), and it doesn't do the neat recursive ray-tracing thing that gives you those awesome reflections. For a sphere, intersection is about 7 lines of code evaluating the quadratic formula (which you can cut to 3 if you have a quadratic-equation solver in your library), and basic Lambertian shading is about the same as in the rasterizer; your surface normal is (p - sphere.center).normalize().
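Concretely, the sphere case comes out to something like this — a sketch with the vector math spelled out by hand, not code from any of the linked programs:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for t: a quadratic
    # a*t^2 + b*t + c = 0, where a = 1 because direction is a unit vector.
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2       # nearer of the two roots
    if t < 0:
        return None                      # sphere is behind the ray origin
    return tuple(o + t * d for o, d in zip(origin, direction))

# Ray from the origin down +z hits a unit sphere at (0, 0, 5) at z = 4:
hit = intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1)
```

From the hit point the Lambertian shading follows as in the rasterizer, with the normal computed as described.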
The core of my Lua SDF raymarcher I linked above is simpler than that. Here I'm using the iteration count as part of the shading function to fake ambient occlusion, which is pretty bogus because it depends on where the camera is in a totally non-physically-based way, but it looks pretty 3-D.
    local function torus(p, c, r1, r2)
      return length2(length2(p[1]-c[1], p[3]-c[3]) - r1, p[2]-c[2]) - r2
    end

    local function render_pixel(x, y, palette)
      local p, n = {x,y,1}      -- near clipping plane: z=1
      local q = normalize(p)    -- ray direction
      for i = 0, 255 do
        n = i
        local r = scene_signed_distance_function(p)
        p = add(p, mul(r, q))
        if p[3] > 10 then return palette(0) end  -- far clipping plane
        if r < 0.02 then break end
      end
      return palette(max(0, min(255, 48 - n - math.floor(p[1]*-16+p[2]*32))))
    end
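The Lua leans on a few helpers (length2, normalize, add, mul) that the full file defines. For anyone who'd rather poke at it in Python, the same sphere-tracing loop looks roughly like this — my translation, with made-up names, returning the hit point and step count rather than a palette index:

```python
import math

def length2(a, b):
    # 2-D Euclidean length, same as the Lua helper of the same name.
    return math.sqrt(a * a + b * b)

def torus_sdf(p, c, r1, r2):
    # Signed distance from p to a torus centered at c: major radius r1,
    # minor (tube) radius r2, ring lying in the y = c[1] plane.
    return length2(length2(p[0] - c[0], p[2] - c[2]) - r1, p[1] - c[1]) - r2

def march(x, y, sdf, max_steps=256, far=10, eps=0.02):
    # Shoot a ray through (x, y) on the z=1 near plane and sphere-trace:
    # step forward by the signed distance until we hit or escape.
    p = [x, y, 1.0]
    norm = math.sqrt(x * x + y * y + 1)
    q = [x / norm, y / norm, 1 / norm]        # unit ray direction
    for n in range(max_steps):
        r = sdf(p)
        p = [pi + r * qi for pi, qi in zip(p, q)]
        if p[2] > far:
            return None, n                    # escaped past the far plane
        if r < eps:
            return p, n                       # close enough: call it a hit
    return None, max_steps

scene = lambda p: torus_sdf(p, (0, 0, 5), 2, 0.5)
hit, steps = march(0.0, 0.0, scene)   # this ray pierces the tube's near side
```

A ray straight down +z through the origin passes through the ring's circle at (0, 0, 3), so it hits the tube's surface half a unit sooner, around z = 2.5.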
> It certainly gets hugely dramatic once you include shadows & reflections.
It's remarkable just how 'hacky' high-quality rasterised graphics is.
For shadows, render a scene from the PoV of every light source, create shadow maps, and then transform those shadow maps to camera space.
For reflections, use stencil buffers; for global illumination, use radiosity maps; for ambient occlusion, either bake it in or take a big runtime penalty and render them real-time...
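The heart of the shadow-map hack is just a depth comparison in light space. A sketch with made-up names (a real implementation does the transform with matrices on the GPU, with far more careful bias handling):

```python
def in_shadow(world_point, light_transform, shadow_map, bias=1e-3):
    # Transform the point into the light's view, look up the nearest
    # occluder depth the light saw along that direction, and compare.
    x, y, depth = light_transform(world_point)       # light-space coords
    col, row = int(x), int(y)
    if not (0 <= row < len(shadow_map) and 0 <= col < len(shadow_map[0])):
        return False                                 # outside the light's view
    return depth > shadow_map[row][col] + bias       # something nearer blocks us
```

The bias term is exactly the kind of hack in question: without it, surfaces shadow themselves ("shadow acne") from depth quantization.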
Ray-tracing (and derivatives, like (bidirectional) path tracing, light transport, etc.) should really be called simulation; the physics and mathematics behind it are straightforward yet extremely accurate. Even the simplest Whitted ray-tracers can produce fairly photorealistic renderings of simplified geometry, and extending ray-tracers to include very complex effects (subsurface scattering, PBR, even general-relativistic ray-tracing) is comparatively straightforward.
The only problem is the absolute battering that ray-tracing does on traditional hardware.
Yes, that's true, there are a lot of things that are easier with ray-tracing; I'd add refractions, laser-sparkle interference patterns, and volumetric ray-marching to the list.
On the other hand, if you want to draw hidden lines (as for a mechanical drawing), draw lines at edges between facets (wireframishly) or to outline surface curvature, or add null halos around foreground elements, I think those are easier to do with a polygon or NURBS rasterizer.
This is gorgeous! You packed in not just a texture but even a scene description! And reflections! However, I have a couple of quibbles:
1. It is not C.
2. It is not 9 lines.
To elaborate on the first point, it's C++, using the functional cast syntax int(C*N), C++ includes, and a non-constant static initializer, none of which are legal in C.
To elaborate on the second point, it's not "9 lines" of C++ in the sense that My Very First Raytracer is "184 lines of C"; it's only 9 lines in the sense that it has two lines of #includes, and then you've chosen to insert six newlines into it at essentially arbitrary locations! Conventionally formatted, it's 45 lines of C++, which seems in keeping with my estimate that the basic raytracing algorithm is about 20 lines of code if you have linear algebra, while having to implement linear algebra adds another bit of code that's slightly larger than 20 lines.
I'm not sure how to define logical lines of code for K, but it seems relevant that one of those 7 lines defines two nested functions.
Here's the reformatted version of your Tinytrace, which I look forward to studying in more detail:
    #include <cmath>   // sqrt
    #include <cstdio>  // fopen

    int i, p = 0, h[] = { 3 << 16, 8 << 24, 0, 41944064, 8 };
    FILE *f = fopen ("o", "wb");

    int main() {
      fwrite (h, 2, 9, f);
      for (; p < 9 << 17;) {
        float x = 0, T = .2, Y = p % 1024 / 430. - 1, R = p++ / 327680. - 1,
              E = 3, C = 1 / sqrt (Y * Y + R * R + 1), I, t, N, A;
        Y *= C;
        I = 1 | -(Y < 0);
        R *= C;
      a:
        t = x - I;
        A = T - I;
        N = Y * t + R * A - C * E;
        A = N * N - t * t - A * A - E * E + 1;
        N += sqrt (A);
        if (N < 0) {
          E += N * C;
          x -= N * Y;
          T -= N * R;
          t = x - I;
          A = T - I;
          N = 2 * (Y * t + R * A - C * E);
          I = -I;
          Y -= N * t;
          R -= N * A;
          C += N * E;
          goto a;
        }
        fputc (i, f);
        if (R < 0) {
          N = (3 - T) / R;
          i = Y * N;
          R *= .4 - (((i + int(C*N)) & 1) + .6);
        }
        i = R > 1 ? 255 : R * 255;
      }
    }  // "TINY TRACE" edition - JB'22 (but reformatted)
For anyone else who wants to run it, you will probably want to rename the output file "o" to "o.tga", because it's a TARGA-style uncompressed image.
Sorry, I wrote that 10 years ago. I guess Python 3 requires you to say "for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)", while Python 2 parsed those as tuples even without the parens. The parens make the code clearer anyway.
I know what BVHs are, even though I've never implemented them. I'm so clueless that I didn't know what BLAS and TLAS are, but Jacco explains them in part 6 of his series: https://web.archive.org/web/20220605013040/https://jacco.omp....
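For reference, the kernel of BVH traversal — at either the BLAS or TLAS level — is just a ray-vs-axis-aligned-box "slab" test, something like this sketch (inv_dir is the precomputed per-component reciprocal of the ray direction, the standard trick to keep division out of the inner loop; it assumes no exactly-zero direction components):

```python
def hit_aabb(origin, inv_dir, box_min, box_max):
    # Slab test: intersect the ray with each axis-aligned pair of planes
    # and check that the three [tmin, tmax] intervals overlap.
    tmin, tmax = 0.0, float('inf')
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0 = (lo - o) * inv
        t1 = (hi - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0                 # handle negative ray directions
        tmin, tmax = max(tmin, t0), min(tmax, t1)
    return tmin <= tmax                     # intervals overlap: box is hit
```

Everything else in a BVH — node layout, surface-area heuristic, traversal order — is about calling this test as few times as possible.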