Further steps toward more pixels: teaching the world to tell what color a ray will see, and creating the transforms needed to position the world in front of the camera. Then the camera casts a ray into the world for each pixel of the resulting image.
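A minimal sketch of that per-pixel casting, assuming a pinhole camera sitting at the origin and looking down \(-z\); the `ray_for_pixel` name and its field-of-view parameter are my own, not necessarily what the project uses:

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Vec3:
    x: float
    y: float
    z: float

    def scaled(self, s: float) -> "Vec3":
        return Vec3(self.x * s, self.y * s, self.z * s)

@dataclass(frozen=True)
class Ray:
    origin: Vec3      # camera position
    direction: Vec3   # unit vector through the pixel

def ray_for_pixel(px: int, py: int, width: int, height: int,
                  fov: float = math.pi / 3) -> Ray:
    """Map pixel (px, py) to a ray through its center on a view plane at z = -1."""
    half_view = math.tan(fov / 2)
    aspect = width / height
    # Pixel center in [-1, 1] device coordinates, scaled by the view size.
    x = (2 * (px + 0.5) / width - 1) * half_view * aspect
    y = (1 - 2 * (py + 0.5) / height) * half_view
    d = Vec3(x, y, -1.0)
    norm = math.sqrt(d.x ** 2 + d.y ** 2 + d.z ** 2)
    return Ray(Vec3(0.0, 0.0, 0.0), d.scaled(1.0 / norm))
```

Rendering is then just a double loop over `height` and `width`, asking the world what color each ray sees.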
Lots of refactoring today, because old assumptions were smashed and some annoying duplication needed cleaning up. No new pixels yet, but maybe tomorrow? These were the previous two days' pixel results, with the black background made transparent while converting them from PPM to PNG.
More than just hit detection: we've moved into the realm of reflection, lighting, and shading. This is mainly about projecting vectors onto other vectors, using the dot product \(\overrightarrow{v_1} \cdot \overrightarrow{v_2}\) to measure the length of the projection (which it does exactly when \(\overrightarrow{v_2}\) is a unit vector). For that we need to find the normal vectors of surfaces, and a light source.
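Concretely, for a sphere the normal at a surface point \(\vec{p}\) is the vector from the center \(\vec{c}\) through that point, normalized, and the diffuse light at \(\vec{p}\) comes from projecting the direction to the light onto it (a standard Lambert term; the symbols here are mine):

\[
\vec{n} = \frac{\vec{p} - \vec{c}}{\lVert \vec{p} - \vec{c} \rVert},
\qquad
\text{diffuse} = \max\left(0,\ \vec{n} \cdot \vec{l}\right)
\]

where \(\vec{l}\) is the unit vector from \(\vec{p}\) toward the light source; the \(\max\) clamps surfaces facing away from the light to black.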
Today I Learned that I shouldn’t have assumed the sphere would ever need to change from being a unit sphere, so intersecting gets even simpler:
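A sketch of that simplified intersection, assuming a unit sphere fixed at the origin; the names are illustrative, not necessarily the project's:

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Vec3:
    x: float
    y: float
    z: float

def dot(a: Vec3, b: Vec3) -> float:
    return a.x * b.x + a.y * b.y + a.z * b.z

def intersect_unit_sphere(origin: Vec3, direction: Vec3) -> list[float]:
    """t values where origin + t * direction hits the unit sphere at the origin.

    With center (0, 0, 0) and radius 1 the quadratic's coefficients collapse:
    no center subtraction, and the constant term is just o.o - 1.
    """
    a = dot(direction, direction)
    b = 2.0 * dot(origin, direction)
    c = dot(origin, origin) - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return []  # ray misses the sphere entirely
    root = math.sqrt(disc)
    return [(-b - root) / (2.0 * a), (-b + root) / (2.0 * a)]
```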
More math-heavy stuff: calculating the positions along a ray where it intersects a sphere. It turns out this is as simple as solving a quadratic equation with some special coefficients.
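For reference, substituting the ray \(\vec{p}(t) = \vec{o} + t\,\vec{d}\) into the sphere equation \(\lVert \vec{p} - \vec{c} \rVert^2 = r^2\) yields those coefficients (a standard derivation; the symbol names are mine):

\[
(\vec{d} \cdot \vec{d})\,t^2
+ 2\,\vec{d} \cdot (\vec{o} - \vec{c})\,t
+ (\vec{o} - \vec{c}) \cdot (\vec{o} - \vec{c}) - r^2 = 0
\]

The discriminant \(b^2 - 4ac\) tells you whether the ray misses (negative), grazes (zero), or passes through (positive) the sphere, and the two roots are the distances along the ray to the intersection points.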