"Hey, what's a Geometry Shader?"
The traditional model is: vertices come in at the front of the pipeline and get transformed, then get re-grouped to produce triangles. The rasterizer then converts each triangle into fragments, and the individual fragments are processed by the Pixel Shader.
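As a toy sketch of that flow, here are the first two stages modeled in plain Python. This is not GPU code, and the function names (`transform`, `assemble_triangles`) are made up for illustration; it just shows vertices being transformed and then re-grouped into triangle primitives.

```python
# Toy model (illustration only) of the traditional pipeline stages:
# transform vertices, then re-group them into triangles. Each assembled
# triangle would next go to the rasterizer, which converts it into
# fragments for the Pixel Shader.

def transform(vertices, offset):
    """Vertex-shader stage: transform each vertex (here, a simple translate)."""
    ox, oy = offset
    return [(x + ox, y + oy) for (x, y) in vertices]

def assemble_triangles(vertices, indices):
    """Primitive assembly: re-group transformed vertices into triangles."""
    return [(vertices[a], vertices[b], vertices[c])
            for a, b, c in zip(indices[0::3], indices[1::3], indices[2::3])]

verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
indices = [0, 1, 2, 2, 1, 3]  # two triangles sharing an edge
tris = assemble_triangles(transform(verts, (10.0, 0.0)), indices)
# tris now holds two triangle primitives ready for rasterization
```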
The Geometry Shader sits in between those two pieces. It has access to the vertices of one of those triangle primitives, right after it's been assembled, and can operate on all the vertices at once. It can do a couple of different things. It can amplify the number of triangles, so it can take those vertices and generate a new set of triangles. Or it can generate a new set of points, or a new set of lines, and send those to the rasterizer for generation of pixel fragments. We can do things like take a point, generate a set of triangles around that point, and expand it into a sprite. Or you could decompose a triangle into a set of smaller triangles, which you can think of as tessellation. Or you could extrude the edges of the triangle and turn it into a volume, such as a tetrahedron.
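The point-to-sprite case can be made concrete. Real geometry shaders are written in HLSL or GLSL and run per-primitive on the GPU; as a minimal sketch, here is the same expansion modeled in plain Python, with the function name and layout my own invention:

```python
# Sketch (not real shader code) of the point-to-sprite expansion a
# geometry shader performs: one input point is amplified into a quad
# made of two triangles.

def expand_point_to_sprite(center, half_size):
    """Turn one point into an axis-aligned quad (two triangles).

    center: (x, y) position of the point
    half_size: half the sprite's width/height
    Returns a list of two triangles, each a tuple of three (x, y) vertices.
    """
    cx, cy = center
    s = half_size
    # Four corners of the quad around the point
    bl = (cx - s, cy - s)  # bottom-left
    br = (cx + s, cy - s)  # bottom-right
    tl = (cx - s, cy + s)  # top-left
    tr = (cx + s, cy + s)  # top-right
    # Emit two triangles covering the quad, much like a triangle strip
    return [(bl, br, tl), (tl, br, tr)]

tris = expand_point_to_sprite((2.0, 3.0), 0.5)
# One input point has been amplified into two triangles (six vertices)
```

In a real shader the quad would typically be oriented to face the camera, but the amplification idea is the same: more primitives come out than went in.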
One of the problems we've often seen is getting enough data from the CPU into the graphics processor. By generating data internally in the graphics processor, we've sort of eliminated that transfer bottleneck.