A Software Rasterizer in Rust

Two things I wanted to learn recently: the Rust language, and low-level graphics programming techniques. I also wanted to gain some freedom from the Unity engine. I had previously gone through the excellent book Ray Tracing in One Weekend, and wanted more! Building a software renderer seemed like a good way to go.

[Animated GIF: spinning cube with depth buffer (spinning_cube_12_depthbuffer.gif)]

Check out the code on GitHub

Project goals:

  • Learn to implement basic versions of modern graphics staples behind rasterization pipelines.
  • No existing libraries may be used for graphics. I write all the code that produces a buffer of [R,G,B] pixels, and that gets sent to the screen.
  • Derive proofs of all theorems used
  • I may be guided by derivations done by others, but I want to have gone through, and grokked, all of the algebra.
    • I’ve long felt that I lacked proper mathematical background, so this was a big one.
    • As a test: a week after learning a new trick I would sit down and re-derive it on my own, on paper, from first principles

Along the way I’ve used several key resources to piece things together, and from those places in turn there are lots of juicy papers to find.

I chose SDL2 as a means to show a CPU-side pixel buffer on the screen, and to enable user input later. With that included in the project, I was ready to get going.

Bresenham’s line drawing algorithm, and other related differential techniques, are very fun to study, and perhaps the quickest way to get something moving on screen. I spent a lot of time getting this first render going, as I first needed to study and code the linear algebra routines (including vectors, matrices, homogeneous coordinates, and projection).

Drawing cube with the Bresenham algorithm
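The core of the line rasterizer can be sketched like this; a minimal, self-contained version of Bresenham’s algorithm (names are mine, not the project’s exact code), which walks a line using only integer error accumulation:

```rust
// Collect the pixel coordinates visited by a line from (x0, y0) to (x1, y1),
// using Bresenham's integer-only error term (no floats, no division).
fn bresenham_line(mut x0: i32, mut y0: i32, x1: i32, y1: i32) -> Vec<(i32, i32)> {
    let dx = (x1 - x0).abs();
    let dy = -(y1 - y0).abs();
    let sx = if x0 < x1 { 1 } else { -1 };
    let sy = if y0 < y1 { 1 } else { -1 };
    let mut err = dx + dy; // accumulated error, all integer math
    let mut pixels = Vec::new();
    loop {
        pixels.push((x0, y0));
        if x0 == x1 && y0 == y1 {
            break;
        }
        let e2 = 2 * err;
        if e2 >= dy {
            err += dy;
            x0 += sx; // step along x
        }
        if e2 <= dx {
            err += dx;
            y0 += sy; // step along y
        }
    }
    pixels
}

fn main() {
    println!("{:?}", bresenham_line(0, 0, 5, 3));
}
```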

Seeing the first rotating cube on screen was a very happy moment!

Next I learned about the use of barycentric coordinates, determinants, and signed area to perform pixel-in-triangle testing. Note that at this point I wasn’t handling the top-left rule correctly yet, causing the seams between the faces.

Drawing solid triangles
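The pixel-in-triangle test boils down to three signed-area “edge function” evaluations, each a 2×2 determinant. A minimal sketch (function names are mine, and like this stage of the project it deliberately ignores the top-left rule, so shared edges get claimed by both faces):

```rust
type P2 = (f32, f32);

// Twice the signed area of triangle (a, b, p): the 2x2 determinant of the
// edge vector a->b and the vector a->p. Positive when p lies to the left
// of a->b, given counter-clockwise winding.
fn edge(a: P2, b: P2, p: P2) -> f32 {
    (b.0 - a.0) * (p.1 - a.1) - (b.1 - a.1) * (p.0 - a.0)
}

// A point is inside a counter-clockwise triangle when all three edge
// functions agree in sign.
fn inside(a: P2, b: P2, c: P2, p: P2) -> bool {
    edge(a, b, p) >= 0.0 && edge(b, c, p) >= 0.0 && edge(c, a, p) >= 0.0
}

fn main() {
    let (a, b, c) = ((0.0, 0.0), (4.0, 0.0), (0.0, 4.0));
    println!("{}", inside(a, b, c, (1.0, 1.0)));
}
```

Dividing each edge value by the triangle’s total signed area is exactly what yields the barycentric weights used for interpolation later on.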

Adding in some n-dot-l lighting makes things come alive.

Adding in super-simple lighting
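The n-dot-l term itself is tiny: the clamped dot product of the unit surface normal and the unit direction to the light. A sketch, using plain arrays for vectors (the real project has proper vector types):

```rust
// Lambertian diffuse intensity: clamped dot product of the unit surface
// normal and the unit direction from the surface toward the light.
fn n_dot_l(normal: [f32; 3], to_light: [f32; 3]) -> f32 {
    let d = normal[0] * to_light[0] + normal[1] * to_light[1] + normal[2] * to_light[2];
    d.max(0.0) // back-facing surfaces receive zero light, not negative light
}

fn main() {
    println!("{}", n_dot_l([0.0, 0.0, 1.0], [0.6, 0.0, 0.8]));
}
```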

Next up: naive interpolation of per-vertex UV coordinates (called affine texture mapping), which brings that sweet PlayStation 1 look. The tricks built on barycentric coordinates which I learned here (such as efficiently interpolating UVs over the pixels of a face) feel very important; since then they’ve kept popping up everywhere (like the Bezier triangle).

Interpolating vertex attributes (color) in screen space
Not quite there yet
Affine texture mapping! Note the distinct warping of the texture through the diagonal
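Given the barycentric weights of a pixel, affine texture mapping is just a weighted sum of the vertex UVs, with no perspective correction, hence the warping. A sketch (names and layout are mine):

```rust
// Affine (perspective-incorrect) interpolation of per-vertex UVs:
// a plain weighted sum using the pixel's barycentric weights w0..w2.
fn affine_uv(w: [f32; 3], uv: [[f32; 2]; 3]) -> [f32; 2] {
    [
        w[0] * uv[0][0] + w[1] * uv[1][0] + w[2] * uv[2][0],
        w[0] * uv[0][1] + w[1] * uv[1][1] + w[2] * uv[2][1],
    ]
}

fn main() {
    // A pixel halfway along the edge between vertex 0 and vertex 1.
    println!("{:?}", affine_uv([0.5, 0.5, 0.0], [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]));
}
```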

Perspective-correct UV interpolation sorts out texture mapping.

No more cute texture wobbles… Oh well.
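The standard fix, and roughly the idea I implemented: attributes divided by w (and 1/w itself) do interpolate linearly in screen space, so interpolate u/w, v/w, and 1/w, then divide per pixel to recover the true UV. A sketch (names and parameter layout are mine):

```rust
// Perspective-correct UV interpolation: interpolate uv/w and 1/w linearly
// (which is valid in screen space), then divide to recover the true uv.
// `inv_w` holds 1/w for each of the three vertices.
fn perspective_uv(w: [f32; 3], uv: [[f32; 2]; 3], inv_w: [f32; 3]) -> [f32; 2] {
    let one_over_w = w[0] * inv_w[0] + w[1] * inv_w[1] + w[2] * inv_w[2];
    let u = w[0] * uv[0][0] * inv_w[0] + w[1] * uv[1][0] * inv_w[1] + w[2] * uv[2][0] * inv_w[2];
    let v = w[0] * uv[0][1] * inv_w[0] + w[1] * uv[1][1] * inv_w[1] + w[2] * uv[2][1] * inv_w[2];
    [u / one_over_w, v / one_over_w]
}

fn main() {
    // With unequal vertex depths, the result differs from the affine sum.
    println!("{:?}", perspective_uv([0.5, 0.5, 0.0], [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]], [1.0, 3.0, 1.0]));
}
```

When all three vertices share the same w, the division cancels and this reduces exactly to the affine version.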

From here, adding a depth buffer turns out to be a very small change.
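The change amounts to one comparison per pixel against a buffer of depths; a sketch (names are mine, not the project’s exact code):

```rust
// A per-pixel depth test: a fragment is kept only if it is closer than
// whatever the buffer already holds at that pixel.
struct DepthBuffer {
    w: usize,
    data: Vec<f32>,
}

impl DepthBuffer {
    fn new(w: usize, h: usize) -> Self {
        // Start "infinitely far away" so the first fragment always wins.
        Self { w, data: vec![f32::INFINITY; w * h] }
    }

    // Returns true (and records the new depth) when the fragment passes.
    fn test_and_set(&mut self, x: usize, y: usize, z: f32) -> bool {
        let i = y * self.w + x;
        if z < self.data[i] {
            self.data[i] = z;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut db = DepthBuffer::new(4, 4);
    println!("{}", db.test_and_set(1, 1, 0.5));
}
```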


And that’s where I left it for now! Some things I tried, but didn’t finish yet:

  • Using fixed point arithmetic for the vertex-to-fragment stage to simplify and robustify some per-pixel tests.
  • Bezier curve differential rasterization (I. really. love. Bezier. curves.)
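To illustrate the fixed-point idea (a sketch of the general technique, not the project’s unfinished code): storing snapped coordinates as integers with a few fractional bits, e.g. a 28.4 format, turns the per-pixel edge tests into exact integer arithmetic with no floating-point rounding to worry about.

```rust
// Hypothetical 28.4 fixed-point format: 4 fractional bits per coordinate.
const FP_SHIFT: i32 = 4;

fn to_fixed(v: f32) -> i32 {
    (v * (1 << FP_SHIFT) as f32).round() as i32
}

// The same edge function as in the triangle test, but exact: widening to
// i64 means the determinant can never overflow or round.
fn edge_fixed(ax: i32, ay: i32, bx: i32, by: i32, px: i32, py: i32) -> i64 {
    (bx - ax) as i64 * (py - ay) as i64 - (by - ay) as i64 * (px - ax) as i64
}

fn main() {
    println!("{}", edge_fixed(0, 0, to_fixed(4.0), 0, to_fixed(1.0), to_fixed(1.0)));
}
```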

Overall, I learned way more about graphics than I thought I would. Rasterizers turn out to be more complex in their basic machinery than a first-principles-only toy tracer. The trade-off, of course, is that rasterizers can be fast. Along the way a heck of a lot of things about the way Unity and other real-time rendering engines work became clear and logical, instead of feeling arbitrary. I also solidified my understanding of linear algebra, increased my ability to produce basic mathematical proofs, and found a lot of fun in exploring equations on paper. Totally worth it.

Am I done? No! I always have too many personal projects going on at any one time, but after some time spent on the others, here are some things I want to do next:

  • Document theorems and proofs using SymPy
  • Model importing
  • Flexible shading pipeline
  • Increase performance
    • Tiled rendering
      • Rust compiler could auto-vectorize n*n groups of pixels
      • Meshes could be culled per tile
    • Fixed point limited-precision everywhere possible
      • Want to see how far I can push the integer units on the recent Ryzen CPUs
    • Multithreading
    • Space-filling-curve indexing (Morton order or Hilbert) to increase cache-coherence of texture reads/writes
  • Hook it up to some simulation and gameplay logic
  • Ship a little game with it
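For reference, the Morton (Z-order) index mentioned above interleaves the bits of x and y, so spatially nearby pixels land close together in memory. A standard bit-twiddling sketch of the 2D encode:

```rust
// Spread the low 16 bits of x so there is a zero bit between each pair:
// 0b1011 -> 0b01_00_01_01. The classic "part1by1" masks.
fn part1by1(mut x: u32) -> u32 {
    x &= 0x0000_ffff;
    x = (x | (x << 8)) & 0x00ff_00ff;
    x = (x | (x << 4)) & 0x0f0f_0f0f;
    x = (x | (x << 2)) & 0x3333_3333;
    x = (x | (x << 1)) & 0x5555_5555;
    x
}

// Morton (Z-order) index: interleave the bits of x (even positions)
// and y (odd positions).
fn morton2(x: u32, y: u32) -> u32 {
    part1by1(x) | (part1by1(y) << 1)
}

fn main() {
    println!("{}", morton2(3, 5));
}
```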

In addition, I’m excited to try and identify some rendering techniques that suit the CPU more than the GPU. After all, the GPU is massively parallel, but there might be some fun sequential routines that do well on the trusty old central processing unit.

Next post: the beauty of de Casteljau-Bezier curves!
