Ever wonder how files on a computer get turned into drawings on your monitor?
Objects in 3D games (eg, barrels) are made of shapes, pictures, and notes/instructions for how to draw the object.
When we say "shapes," we just mean data about where the points are and how they connect. They are usually triangles, but can be quads, points, or lines as well. By "pictures," we just mean ordinary image files like a .png or .jpg. Those notes and instructions are called materials and shaders, and this article is about those.
For example, you can think of a very simple digital 3D barrel as a set of triangle positions, an image file with 'wood' and 'metal' sections, and some of those notes and instructions.
The "notes" just say things like "should cast shadows," or "draw in front of other things no matter what" (eg, for health bars), or "hide."
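To make the barrel example concrete, here is a minimal sketch of how that data might be laid out in memory. Every name here is illustrative, not taken from any real engine:

```python
# A toy 3D object: shapes (vertices + triangles), a picture, and notes.
# All field names are made up for illustration.

vertices = [          # 3D positions of the triangle corners
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.5, 1.0, 0.0),
]
triangles = [(0, 1, 2)]   # each triangle is three indices into `vertices`

material = {
    "texture": "barrel_wood_metal.png",  # the "picture"
    "casts_shadows": True,               # the "notes"
    "always_on_top": False,              # eg, True for health bars
    "visible": True,
}
```

Real engines pack this far more tightly, but the three ingredients are the same.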
So, given these shapes, pictures, and notes, how do you end up with an image on your screen? It's the instructions, with the help of some rendering code and the graphics drivers (eg, turning shapes into pixels is called "rasterization," and it's usually handled through OpenGL, DirectX, or Vulkan).
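Rasterization itself is worth a quick sketch. One common approach, which GPUs also use in hardware, is the "edge function" sign test: a pixel is inside a triangle if it sits on the same side of all three edges. This toy version is purely for intuition:

```python
# Toy rasterizer: find which cells of a small grid fall inside one
# triangle, using edge-function sign tests. Real rasterization is
# done in hardware; this just shows the idea.

def edge(ax, ay, bx, by, px, py):
    # Positive when (px, py) is to the left of the edge a -> b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height):
    (ax, ay), (bx, by), (cx, cy) = tri
    pixels = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel's center
            inside = (edge(ax, ay, bx, by, px, py) >= 0 and
                      edge(bx, by, cx, cy, px, py) >= 0 and
                      edge(cx, cy, ax, ay, px, py) >= 0)
            if inside:
                pixels.append((x, y))
    return pixels

# A counter-clockwise triangle covering the lower-left half of an 8x8 grid.
covered = rasterize([(0, 0), (8, 0), (0, 8)], 8, 8)
```

Run over millions of pixels and thousands of triangles per frame, this is the step that turns geometry into something a fragment shader can colour in.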
The instructions do a lot of work. Technically, they are categorized into instructions that change the size, position, and rotation of shapes; instructions that add points between other points; instructions that figure out what colour a pixel should be and how to blend semi-transparent objects; and a few less frequently used categories.
The vertex shader is the one that changes the size, position, and rotation of shapes. You might use it to make it seem like leaves are blowing in the wind, make waves roll, or stretch things that are moving quickly.
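The "leaves in the wind" trick is simple enough to sketch in a few lines. A vertex shader would run this math on every vertex, every frame, on the GPU; the function and parameter names here are made up:

```python
import math

# Sketch of a "wind sway" vertex shader: nudge each vertex sideways
# by a sine wave that grows with height, so the top of a plant sways
# while its base stays planted.

def sway_vertex(x, y, z, time, strength=0.1):
    offset = math.sin(time + y) * strength * y  # taller -> bigger sway
    return (x + offset, y, z)

base = sway_vertex(0.0, 0.0, 0.0, time=1.0)  # y = 0, so it never moves
tip  = sway_vertex(0.0, 2.0, 0.0, time=1.0)  # drifts back and forth
```

Note that the shape data itself never changes; the shader just bends it on the way to the screen, which is why thousands of plants can sway without touching memory.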
The tessellation shader adds points between other points. It's basically a performance trick that helps you draw things faster: you can send a small amount of data to the GPU and have it expanded into elaborate curves, like the letters of a fancy font. Font rendering is interesting and I hope to write about it later!
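Here's the "small data in, many points out" idea in miniature. TrueType fonts describe letter outlines with quadratic Bézier curves, so three control points can expand into as many points along the curve as you like. This is a CPU-side sketch of what tessellation hardware does:

```python
# Expand 3 control points into many points along a quadratic Bezier
# curve (the curve type used in TrueType font outlines).

def quadratic_bezier(p0, p1, p2, steps):
    points = []
    for i in range(steps + 1):
        t = i / steps
        # standard quadratic Bezier formula
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        points.append((x, y))
    return points

# Three control points in, seventeen curve points out.
curve = quadratic_bezier((0, 0), (1, 2), (2, 0), steps=16)
```

The win is bandwidth: you upload the three control points once and let the GPU decide how finely to subdivide, more when the curve is close to the camera, fewer when it's far away.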
The fragment shader is the one that figures out what colour something should be. It figures out what part of the image to 'paste' onto what part of the shape, and how to blend or shade objects. If you want to apply a Gaussian blur to the screen (sometimes used for antialiasing), you would do it here.
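That Gaussian blur is a nice concrete example of fragment-shader thinking: the colour of each output pixel is computed independently as a weighted average of its neighbours. A tiny illustrative version on a grayscale grid:

```python
# Fragment-shader-style 3x3 Gaussian blur on a tiny grayscale image.
# Each output pixel is a weighted average of its neighbourhood.

KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]  # weights sum to 16

def blur(image):
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    # clamp samples at the borders to the edge pixel
                    sy = min(max(y + dy, 0), h - 1)
                    sx = min(max(x + dx, 0), w - 1)
                    total += image[sy][sx] * KERNEL[dy + 1][dx + 1]
            out[y][x] = total / 16
    return out

# A single bright pixel spreads into its neighbours.
sharp = [[0, 0, 0],
         [0, 16, 0],
         [0, 0, 0]]
soft = blur(sharp)
```

On a GPU this inner loop runs for every pixel in parallel, which is why full-screen effects like this are cheap enough to do every frame.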
Back in university, I wrote my shaders by hand as text files, but these days you often build them in graph-based editors. Unreal has its Material Editor, and Unity's version is called Shader Graph. They are amazingly easy to use, especially compared to digging through OpenGL documentation. You would be pleasantly surprised how accessible things are now!
This isn't enough information to write a shader, but it's weird how few tutorials actually explain what is happening under the hood. This is just the quick orientation I wish I'd had. Let me know if you found this interesting so I know what sorts of articles to focus on in the future!