History of PC Graphics Hardware

A Programmer's View

Table of Contents

Voodoo Magic
Dynamite Combiners
Vertices and Registers
Programming at Last
Dependency
Modern Unification

For those of you who had the good fortune of not being graphics programmers during the formative years of consumer graphics hardware, what follows is a brief history. Hopefully, it will give you some perspective on what has changed in the last 15 years or so, as well as an idea of how grateful you should be that you never had to suffer through the early days.

Voodoo Magic

In the years 1995 and 1996, a number of graphics cards were released. Graphics processing via specialized hardware on PC platforms was nothing new. What was new about these cards was their ability to do 3D rasterization.

The most popular of these was the Voodoo Graphics card from 3Dfx Interactive. It was fast, powerful for its day, and provided high-quality rendering (again, for its day).

The functionality of this card was quite bare-bones from a modern perspective. Obviously there was no concept of shaders of any kind. Indeed, it did not even have vertex transformation; the Voodoo Graphics pipeline began with clip-space values. This required the CPU to do vertex transformations. This hardware was effectively just a triangle rasterizer.
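To make that division of labor concrete, here is a minimal sketch in C of the kind of work the CPU had to do before the card ever saw a triangle: multiplying each model-space position by a combined model-view-projection matrix to produce the clip-space value the rasterizer expected. The types and function names are invented for illustration; they do not correspond to any actual API of the era.

    /* Illustrative only: with no vertex transform in hardware, the CPU had
     * to produce clip-space positions itself, typically by multiplying each
     * model-space position by a combined model-view-projection matrix. */
    typedef struct { float m[4][4]; } Mat4;  /* row-major 4x4 matrix */
    typedef struct { float x, y, z, w; } Vec4;

    static Vec4 to_clip_space(const Mat4 *mvp, Vec4 p)
    {
        Vec4 r;
        r.x = mvp->m[0][0]*p.x + mvp->m[0][1]*p.y + mvp->m[0][2]*p.z + mvp->m[0][3]*p.w;
        r.y = mvp->m[1][0]*p.x + mvp->m[1][1]*p.y + mvp->m[1][2]*p.z + mvp->m[1][3]*p.w;
        r.z = mvp->m[2][0]*p.x + mvp->m[2][1]*p.y + mvp->m[2][2]*p.z + mvp->m[2][3]*p.w;
        r.w = mvp->m[3][0]*p.x + mvp->m[3][1]*p.y + mvp->m[3][2]*p.z + mvp->m[3][3]*p.w;
        return r;
    }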

That being said, it was quite good for its day. As inputs to its rasterization pipeline, each vertex provided a 4-dimensional clip-space position (though the actual space was not necessarily the same as OpenGL's clip space), a single RGBA color, and a single three-dimensional texture coordinate. The hardware did not support 3D textures; the extra coordinate component was there in case the user wanted to do projective texturing.
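As a rough picture of just how little per-vertex data this amounts to, here is a hypothetical C struct collecting those inputs. The field names and layout are invented for illustration and do not correspond to the actual Glide API.

    typedef struct VoodooVertex
    {
        float x, y, z, w;  /* 4D clip-space position, already transformed on the CPU */
        float r, g, b, a;  /* single RGBA color */
        float s, t, q;     /* 3D texture coordinate; q allows projective texturing */
    } VoodooVertex;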

The texture coordinate was used to map into a single texture. Interpolation of both the texture coordinate and the color was perspective-correct; in those days, that was a significant selling point. The venerable PlayStation 1 could not do perspective-correct interpolation, which is why its textures appeared to warp and swim across surfaces.
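For those unfamiliar with the distinction, here is a minimal sketch in C of perspective-correct interpolation of a single attribute (say, one texture coordinate component) between two vertices. The function is illustrative, not hardware documentation.

    /* Interpolate attr/w and 1/w linearly in screen space, then divide.
     * Naively lerping `attr` alone is affine interpolation, which produces
     * the texture warping seen on hardware without perspective correction.
     * `t` is the screen-space interpolation parameter, in [0, 1]. */
    static float interp_perspective(float attr0, float w0,
                                    float attr1, float w1, float t)
    {
        float attr_over_w = (attr0 / w0) * (1.0f - t) + (attr1 / w1) * t;
        float one_over_w  = (1.0f  / w0) * (1.0f - t) + (1.0f  / w1) * t;
        return attr_over_w / one_over_w;
    }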

The value fetched from the texture could be combined with the interpolated color using one of three math functions: addition, multiplication, or linear interpolation based on the texture's alpha value. The alpha of the output was controlled with a separate math function, thus allowing the user to generate the alpha with different math than the RGB portion of the output color. This was the sum total of its fragment processing.
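A hedged sketch of that entire fragment pipeline, in C: one texture value, one interpolated color, and one combine mode each for RGB and alpha. The enum and function names are invented for illustration, and the direction of the alpha-based lerp is an assumption.

    typedef enum { COMBINE_ADD, COMBINE_MULTIPLY, COMBINE_LERP_TEX_ALPHA } CombineMode;

    /* Applied per channel. RGB and alpha each get their own mode, which is
     * how the alpha output could use different math than the RGB output. */
    static float combine(float tex, float texAlpha, float color, CombineMode mode)
    {
        switch (mode)
        {
        case COMBINE_ADD:            return tex + color;
        case COMBINE_MULTIPLY:       return tex * color;
        case COMBINE_LERP_TEX_ALPHA: return texAlpha * tex + (1.0f - texAlpha) * color;
        }
        return color;
    }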

It had framebuffer blending support. Its framebuffer could even support a destination alpha value, though you had to give up having a depth buffer to get it. Probably not a good tradeoff. Outside of that issue, its blending support was superior even to OpenGL 1.1: it could use different source and destination factors for the alpha component than for the RGB components, whereas the old GL 1.1 forced RGB and alpha to be blended with the same factors.
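In later OpenGL terms, the difference looks like this. GL 1.1 offered only glBlendFunc, with one pair of factors shared by RGB and alpha; separate factors did not reach core OpenGL until version 1.4, as glBlendFuncSeparate (originally EXT_blend_func_separate). The specific factor choices below are just an example; the code assumes a current GL context and headers exposing version 1.4 or later.

    #include <GL/gl.h>

    void set_blend_gl11(void)
    {
        /* GL 1.1: one pair of factors shared by RGB and alpha. */
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }

    void set_blend_separate(void)
    {
        /* GL 1.4 and later: different factors for alpha than for RGB,
         * as the Voodoo hardware could do. */
        glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,  /* RGB factors   */
                            GL_ONE, GL_ZERO);                      /* alpha factors */
    }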

The blending was even performed with full 24-bit color precision and then downsampled to the 16-bit precision of the output upon writing.

From a modern perspective, spoiled with our full programmability, this all looks incredibly primitive. And, to some degree, it is. But compared to the pure CPU solutions to 3D rendering of the day, the Voodoo Graphics card was a monster.

It's interesting to note that the simplicity of the fragment processing stage owes as much to the lack of inputs as anything else. When the only values you have to work with are the color from a texture lookup and the per-vertex interpolated color, there really is not all that much you could do with them. Indeed, as we will see in the next phases of hardware, increases in the complexity of the fragment processor was a reaction to increasing the number of inputs to the fragment processor. When you have more data to work with, you need more complex operations to make that data useful.