
I have never dealt with shaders, so pardon me if it's a very basic question. In a single frame from a game, are shaders essentially all that are being used to draw it?

Or do we have basic shapes like triangles, squares, circles, etc and the shaders go on top of it, drawing shadows, smoothing edges, etc?

From the example, it seems like you can create a shader to draw any object in a scene, and then I imagine you compose other shaders to get shadows and lighting and all of that. In the very limited experience I had with drawing, I drew shapes but never through shaders. I always thought they didn't draw the objects themselves.



(Note the following is a simplified description of the classic forward rendering process; the so-called deferred rendering technique is a bit different.)

A GPU turns an abstract vector shape like a triangle, defined by three vertices and data such as a normal associated with each vertex, into a stream of fragments, one (or more if multisampling) for each pixel in the output buffer that’s covered by the shape. This part is all done in hardware.

A fragment is a pixel coordinate plus user-supplied data that’s either constant, called uniform, or the aforementioned vertex data interpolated across the triangle face, called varying. This interpolation business is again done in hardware and not programmable.

The fragment shader takes a fragment as input and based on the data computes a color, which is (after a couple more stages) output on the screen (or offscreen buffer) as the color of the respective pixel. This could be anything from a constant color to complex lighting calculations. In GPU rendering, this is all massively parallel, with countless fragments being processed simultaneously at any moment. Shaders are pure, stateless functions: the only data they can access is the input, and the only effect they can have is to return a color (and a few other things like a depth value).

So in a nutshell, the GPU hardware is responsible for computing which pixels should be filled to draw each triangle, but the fragment shader’s responsibility is to determine the color value of each of those pixels.
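To make that concrete, a minimal fragment shader in GLSL might look like the sketch below. The names vNormal, uLightDir and uBaseColor are made up for illustration, not from any particular engine; the point is just that the shader reads interpolated varying data plus uniforms and returns a single color.

    // Hypothetical GLSL fragment shader: per-pixel diffuse lighting.
    // vNormal is a varying interpolated from the vertex data;
    // uLightDir and uBaseColor are uniforms supplied by the application.
    #version 330 core

    in vec3 vNormal;          // interpolated across the triangle face
    uniform vec3 uLightDir;   // constant for the whole draw call
    uniform vec3 uBaseColor;
    out vec4 fragColor;

    void main() {
        float diffuse = max(dot(normalize(vNormal), normalize(uLightDir)), 0.0);
        fragColor = vec4(uBaseColor * diffuse, 1.0); // the only output: a color
    }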


Shaders do all the drawing, but they do so in different stages. I won't explain the entire graphics pipeline[1], but a lot (some 90%+) of what people casually think of as "shaders" for doing lighting effects are the fragment/pixel shader stage of the renderer.

There are other stages (vertex, tessellation) that draw those basic shapes before the fragment shader draws "on top" of the scene.

(There is also a lot more to what I described for fragment shaders, e.g. deferred rendering[2]. But that's an equally large topic to get into.)

1: https://vulkan-tutorial.com/Drawing_a_triangle/Graphics_pipe...

2: https://learnopengl.com/Advanced-Lighting/Deferred-Shading


Yes, the color of every pixel is ultimately determined by a shader program, but as you might expect it's more complicated than that.

There is what is referred to as a graphics pipeline consisting of a mix of fixed-function hardware stages and programmable stages. At a high level, it does the following:

1) The GPU accepts a set of 3D triangles from the CPU.

2) A 'vertex' shader program transforms (flattens) the 3D triangle vertices into 2D triangle vertices with pixel coordinates.

3) The GPU rasterizes the 2D triangles to determine exactly which pixels the triangles cover.

4) A 'pixel' shader program is run for each covered pixel to determine the color of the pixel.

5) The resulting pixel color is stored in a frame buffer (which may involve blending it with the existing color).

This 'pipeline' is then repeated many times (with different triangle meshes and shaders) until the whole frame is drawn.
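For illustration, step 2 might look roughly like the GLSL sketch below (the attribute and matrix names are placeholders I made up, not a specific engine's API):

    // Hypothetical GLSL vertex shader for step 2: project 3D vertices
    // into clip space; the GPU then rasterizes the resulting 2D triangles.
    #version 330 core

    layout(location = 0) in vec3 aPosition;   // model-space vertex from the CPU
    uniform mat4 uModelViewProjection;        // combined transform matrix

    void main() {
        gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
    }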

Hope that helps!


Yes, you got the right idea. AFAIK every type of code running on the GPU is called a shader (e.g. special data operations are even called "compute shaders", although they are a different beast). All the operations you mentioned (colors, shadows, shading, image-effects, general image-processing) are achieved through parallelized computing combining lots of data arrays (vertices and their properties, source textures, pre-computed functions, target textures, buffers, etc).

For example, to get light and shadows, your shader needs access to some (probably global) variable describing the position and direction of, e.g., a spotlight. Very often composite lighting is achieved by combining multiple shader passes (a base pass for global illumination, and one for each light, for example), each literally adding more light (an additive pass).

Now, in order to avoid adding light for pixels where the light source is blocked (i.e. shadow), the most common technique is shadow mapping, which uses what's essentially a Z-buffer (just a floating point texture) rendered from the light's point of view. You want to know, for each light in the scene, where its light reaches, so (before all lighting is applied) you set up a single shader pass that renders all solid geometry in the scene using the light's position and direction as the camera transform, with a special shader whose only purpose is writing each object's distance into that texture. Then, every time you want to know whether a point in space is reached by your light, you sample this texture (after doing some geometry) and compare the point's distance to the saved value in that direction. Yes, it can be very buggy and precision errors abound, and every engine worth its salt already does this for you, but it lets you get in there and modify the process.
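A rough GLSL sketch of that depth comparison inside a fragment shader could look like this (the uniform and varying names are invented for the example; real engines add filtering, cascades, and smarter bias handling on top):

    // Sketch of the shadow test described above (GLSL snippet, not a full shader).
    // vLightSpacePos is the fragment's position transformed by the light's
    // view/projection matrices; uShadowMap is the depth texture rendered
    // earlier from the light's point of view.
    uniform sampler2D uShadowMap;
    in vec4 vLightSpacePos;

    float shadowFactor() {
        vec3 p = vLightSpacePos.xyz / vLightSpacePos.w; // perspective divide
        p = p * 0.5 + 0.5;                              // to [0,1] texture space
        float closestDepth = texture(uShadowMap, p.xy).r;
        float bias = 0.005;                             // fights precision errors
        return (p.z - bias > closestDepth) ? 0.0 : 1.0; // 0 = in shadow
    }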

Everything else is a variation on this theme. Deferred rendering is rendering data instead of colors into an intermediate texture which is later processed to get the colors. Blur effects are 2D convolutions of the render texture (e.g. by a Gaussian kernel). Tessellation shaders are about generating new geometry in a dedicated pipeline stage. Even drawing text is achieved through font atlasing and small rectangles.
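As a toy example of the 2D convolution idea, a horizontal blur pass over a render texture might be sketched like this (texture and uniform names are assumptions for the example; a real blur would do a second vertical pass):

    // Toy horizontal blur pass (GLSL): convolve the render texture with
    // a small Gaussian-like kernel along x.
    #version 330 core

    uniform sampler2D uSceneTex;   // the previously rendered frame
    uniform vec2 uTexelSize;       // 1.0 / texture resolution
    in vec2 vUV;
    out vec4 fragColor;

    void main() {
        float kernel[5] = float[](0.06, 0.24, 0.40, 0.24, 0.06);
        vec3 sum = vec3(0.0);
        for (int i = -2; i <= 2; ++i) {
            vec2 offset = vec2(float(i) * uTexelSize.x, 0.0);
            sum += texture(uSceneTex, vUV + offset).rgb * kernel[i + 2];
        }
        fragColor = vec4(sum, 1.0);
    }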


You can draw any object with fragment (aka pixel) shaders, because certain math techniques (SDFs, trigonometry, etc.) can be used to draw shapes regardless of the technology.
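For example, a shadertoy-style fragment shader can draw a circle purely from math via a signed distance function. A minimal sketch (uResolution is an assumed uniform, not part of any standard):

    // Sketch: drawing a circle with a signed distance function (GLSL).
    #version 330 core
    uniform vec2 uResolution;   // viewport size in pixels
    out vec4 fragColor;

    float sdCircle(vec2 p, float radius) {
        return length(p) - radius;   // negative inside, positive outside
    }

    void main() {
        vec2 uv = (gl_FragCoord.xy * 2.0 - uResolution) / uResolution.y;
        float d = sdCircle(uv, 0.5);
        // smoothstep anti-aliases the edge instead of a hard cutoff
        vec3 color = mix(vec3(1.0), vec3(0.1, 0.4, 0.8), smoothstep(0.0, 0.01, -d));
        fragColor = vec4(color, 1.0);
    }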

So some talented artists are pushing the bounds and wrestling with performance trade-offs in the fragment shader.

Fragment shaders are more commonly used for making full-screen filter effects (color correction, etc.).

Shaders are also used to make textures and materials on basic objects. Material artists often generate textures with shader math.

Many visual effects are made by using shaders in creative ways.

Shaders are run on the GPU in a parallel, wave-like fashion. Many, many threads run the same code across different data in one wave.

In some cases shaders are much faster than CPU branching code. Shaders also have easier access to some rendering data.

So they are a good space for creative special effects.

Any object in a game with a high level of surface detail is a common target for shifting that detail onto a shader.

Ocean surfaces, tessellating meshes, etc.

There's many other uses, because GPUs are powerful and flexible.


> Or do we have basic shapes like triangles

Generally this - the SDF mechanism is very clever, but that's not what game engines tend to do; their geometry comes from triangle-based tools used by artists.


SDF is clever, but not as much as triangles. :) SDF is unfortunately slow because of the ray-marching loop.



