Ah, that's the first answer that has made sense to me. But my question is, if I want to move to shaders, what sort of stuff do I need to get out of my current program.
For example, right now I load meshes from a file, compute normals for each vertex, do the camera transforms, and render my meshes using VBOs. My lighting looks terrible because I just use normal averages at each vertex. When I tried the sample shaders from the wiki, my camera and all of that seemed fine, but my lighting disappeared and everything rendered a single solid color. I'm assuming that's because the shader didn't use the normals that I computed. Would that be true?
It's not just that the shader didn't consider your normals; it's that it doesn't implement any lighting calculation at all. The shader example you gave does one thing only: it assigns the value of vertColor to the output fragment. So the result you see is exactly what is expected. If you want to see lighting, you'll have to compute the color value yourself.
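As a rough sketch of what that computation looks like, here is a minimal per-vertex diffuse (Lambert) lighting pass in GLSL 3.30. The attribute and uniform names (a_position, a_normal, u_mvp, u_normalMatrix, u_lightDir) are my own placeholders, not names from your wiki sample, so you'd wire them up to match your program:

```glsl
// ---- vertex shader ----
#version 330 core

in vec3 a_position;
in vec3 a_normal;

uniform mat4 u_mvp;          // model-view-projection matrix
uniform mat3 u_normalMatrix; // inverse-transpose of the model-view matrix
uniform vec3 u_lightDir;     // direction toward the light, in eye space

out vec3 vertColor;

void main()
{
    gl_Position = u_mvp * vec4(a_position, 1.0);

    // Transform the normal into eye space and renormalize.
    vec3 n = normalize(u_normalMatrix * a_normal);

    // Lambert term: cosine of the angle between normal and light,
    // clamped so surfaces facing away from the light go black, not negative.
    float diffuse = max(dot(n, normalize(u_lightDir)), 0.0);

    // Small constant ambient plus the diffuse term, white material.
    vertColor = vec3(0.1) + diffuse * vec3(0.9);
}

// ---- fragment shader ----
#version 330 core

in vec3 vertColor;
out vec4 fragColor;

void main()
{
    fragColor = vec4(vertColor, 1.0);
}
```

Note the fragment shader is still just passing the color through, like your sample did; the difference is that the vertex shader now bakes lighting into vertColor using your normals.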
If you take a look at this link, it explains the traditional OpenGL fixed-function lighting model. The ambient, diffuse, specular, and other values are combined on the GPU to calculate the final fragment color, so to get your own lighting you'll have to do that math yourself in your shaders. A quick Google for "glsl lighting tutorial" turns up several results. Keep in mind that they will likely be written against different versions of GLSL.
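Whichever tutorial you follow, the core diffuse term is the same clamped N·L dot product. If you want to sanity-check the numbers your shader should produce, here is that arithmetic in plain Python (the function names are mine, just for illustration):

```python
import math

def normalize(v):
    """Return v scaled to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert_diffuse(normal, light_dir):
    """Clamped N.L term -- what the shader's max(dot(n, l), 0.0) computes."""
    n = normalize(normal)
    l = normalize(light_dir)
    d = sum(a * b for a, b in zip(n, l))
    return max(d, 0.0)

# Surface facing the light head-on: full brightness.
print(lambert_diffuse((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))                 # 1.0
# Light 60 degrees off the normal: cos(60 deg) = 0.5.
print(lambert_diffuse((0.0, 0.0, 1.0), (0.0, math.sqrt(3.0), 1.0)))      # ~0.5
# Light behind the surface: clamped to 0, not negative.
print(lambert_diffuse((0.0, 0.0, 1.0), (0.0, 0.0, -1.0)))                # 0.0
```

If a test like the middle one doesn't match what your shader renders, the usual suspects are unnormalized normals or a light direction in the wrong coordinate space.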