Ah, the OpenGL blue book that I never got around to buying.
As opposed to the red book (reference guide)[1] and the orange book (shading language reference)[2] of which I got copies of the respective last versions before they merged into the newer "red-ish book" that combines both[3].
Just in case you ever wondered where that one Lego picture in the Windows 3D maze screen saver came from (the cover of the red book).
[2] https://www.amazon.de/OpenGL-Shading-Language-Randi-Rost/dp/...
[3] https://www.amazon.de/OpenGL-Programming-Guide-Official-Lear...
It's a lot easier to find OpenGL ES 2.0 material for iOS (or any OS, really) than it used to be a year or so ago.
For something written from a pure iOS perspective, it's hard to beat Jeff LaMarche's chapters from his unpublished book, which start here. You linked to his OpenGL ES 1.1 tutorials, which are also great, but he didn't place his newer 2.0 material on that list.
iPhone 3D Programming by Philip Rideout is a great book that covers both OpenGL ES 1.1 and 2.0. It does not assume that you know OpenGL ES, and he does explain a good bit of the math and other fundamentals required to understand what he's talking about. He gets into some pretty advanced techniques towards the end. However, all of his code is in C++, rather than Objective-C, so that may be a little disconcerting for someone used to Cocoa development. Still, the core C API for OpenGL ES is the same, so it's easy to see what's going on.
If you're looking for particular effects, the OpenGL Shading Language book is still one of the primary resources you can refer to. While written for desktop OpenGL, most of the shading language and shaders presented there translate directly across to OpenGL ES 2.0, with only a little modification required.
The books ShaderX6, ShaderX7, GPU Pro, and GPU Pro 2 also have sections devoted to OpenGL ES 2.0, which provide some rendering and tuning hints that you won't find elsewhere. Those are more advanced (and expensive) books, though.
If you're just getting started with OpenGL ES 2.0, it might not be a bad idea to start with GLKit (available only on iOS 5.0 and later), which simplifies some of the normal setup chores around your render buffers and simple shader-based effects. Apple's WWDC 2011 videos have some good material on this, but their 2009 and 2010 videos (if you can find them; some are available in Apple's archive) provide a lot more introductory material around OpenGL ES 2.0.
Finally, as Andy mentions, I taught a class on the subject as part of my course on iTunes U, which you can download for free here. The course notes for that class can be found here or downloaded as a VoodooPad file here. I should warn you that I get fairly technical quite quickly in the OpenGL ES 2.0 session, so you may want to watch the 1.1 session from the previous semester here. I also talk a little about what I've done with OpenGL ES 2.0 in this article about my open source application (whose source code can be grabbed from here, if you'd like to play with a functional OpenGL ES 2.0 iOS application).
Depending on what you are trying to achieve and your current level of knowledge, you can take different approaches.
If you are trying to learn OpenGL 2.0 while also learning GLSL, I suggest getting the Red book and the Orange book as a set, as they go hand in hand.
If you want a less comprehensive guide that will just get you started, check out the OpenGL bible.
If I misunderstood your question and you already know OpenGL and want to study GLSL in particular, here's a good Phong shading example that shows the basics.
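Since the linked page isn't reproduced here, a minimal per-pixel Phong pair, written in old-style GLSL 1.20 with the fixed-function built-ins of the OpenGL 2.0 era, might look roughly like this (a sketch, not a drop-in copy of the linked example):

    // Vertex shader: pass eye-space position and normal to the fragment stage.
    varying vec3 normal;
    varying vec3 position;

    void main()
    {
        normal   = normalize(gl_NormalMatrix * gl_Normal);
        position = vec3(gl_ModelViewMatrix * gl_Vertex);
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }

    // Fragment shader: evaluate the Phong terms per pixel for light 0.
    varying vec3 normal;
    varying vec3 position;

    void main()
    {
        vec3 N = normalize(normal);
        vec3 L = normalize(gl_LightSource[0].position.xyz - position);
        vec3 V = normalize(-position);          // the camera sits at the origin in eye space
        vec3 R = reflect(-L, N);

        vec4 ambient  = gl_FrontMaterial.ambient  * gl_LightSource[0].ambient;
        vec4 diffuse  = gl_FrontMaterial.diffuse  * gl_LightSource[0].diffuse
                      * max(dot(N, L), 0.0);
        vec4 specular = gl_FrontMaterial.specular * gl_LightSource[0].specular
                      * pow(max(dot(R, V), 0.0), gl_FrontMaterial.shininess);

        gl_FragColor = ambient + diffuse + specular;
    }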
Compiling a shader source is really simple.

First you need to allocate a shader slot for your source, just like you allocate a texture, using glCreateShader. After that you need to load your source code somehow; since this is really platform dependent, it is up to you. After obtaining the source, you set it using glShaderSource, and then you compile it with glCompileShader.

Next, link the shaders to each other: first allocate a program using glCreateProgram, attach the shaders to the program using glAttachShader, and link them using glLinkProgram.

Then, just like a texture, you bind the program to the current rendering stage using glUseProgram. To unbind it, use an ID of 0 or another program ID. For cleanup, delete the shaders with glDeleteShader and the program with glDeleteProgram once you no longer need them.
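Since the original snippets aren't reproduced here, a minimal sketch of the whole sequence in C might look like this (vertexSource and fragmentSource are assumed to hold your GLSL text; error handling is kept brief):

    #include <stdio.h>
    #include <GLES2/gl2.h>   /* OpenGL ES 2.0; use <OpenGLES/ES2/gl.h> on iOS */

    /* Compile one shader stage from a source string. */
    static GLuint compileShader(GLenum type, const char *source)
    {
        GLuint shader = glCreateShader(type);       /* allocate the shader slot */
        glShaderSource(shader, 1, &source, NULL);   /* hand over the GLSL text  */
        glCompileShader(shader);                    /* compile it               */

        GLint status = 0;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
        if (status != GL_TRUE) {
            char log[512];
            glGetShaderInfoLog(shader, sizeof(log), NULL, log);
            fprintf(stderr, "Shader compile failed: %s\n", log);
        }
        return shader;
    }

    /* Link a vertex and a fragment shader into a program. */
    static GLuint linkProgram(const char *vertexSource, const char *fragmentSource)
    {
        GLuint vs = compileShader(GL_VERTEX_SHADER, vertexSource);
        GLuint fs = compileShader(GL_FRAGMENT_SHADER, fragmentSource);

        GLuint program = glCreateProgram();         /* allocate the program     */
        glAttachShader(program, vs);
        glAttachShader(program, fs);
        glLinkProgram(program);                     /* link the two stages      */

        glDeleteShader(vs);                         /* the program keeps them   */
        glDeleteShader(fs);
        return program;
    }

    /* Typical use while rendering:
     *   glUseProgram(program);                                   -- bind
     *   GLint loc = glGetUniformLocation(program, "intensity");  -- hypothetical uniform name
     *   glUniform1f(loc, 0.5f);                                  -- set a parameter
     *   glUseProgram(0);                                         -- unbind
     *   glDeleteProgram(program);                                -- cleanup when done
     */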
And that's mostly everything to it; you can also use the glUniform family of functions, together with glGetUniformLocation, to set shader parameters.

Your question is a little broad, as you could literally write an entire book on how to create custom shaders, but it's a commonly asked one, so I can at least point people in the right direction.
Filters in GPUImage are written in the OpenGL Shading Language (GLSL). There are slight differences between the OpenGL targets (Mac, desktop Linux) and OpenGL ES ones (iOS, embedded Linux) in that shaders on the latter use precision qualifiers that are missing on the former. Beyond that, the syntax is the same.
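For example, a trivial pass-through fragment shader that compiles on both targets can guard the qualifiers behind the GL_ES preprocessor define (variable names here follow GPUImage's conventions; the framework may organize this differently):

    // On OpenGL ES a default float precision must be declared for fragment shaders;
    // desktop GLSL 1.20 has no precision qualifiers, hence the guard.
    #ifdef GL_ES
    precision mediump float;
    #endif

    varying vec2 textureCoordinate;
    uniform sampler2D inputImageTexture;

    void main()
    {
        gl_FragColor = texture2D(inputImageTexture, textureCoordinate);
    }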
Shader programs are composed of a matched pair of a vertex shader and a fragment shader. A vertex shader operates over each vertex and usually handles geometric manipulations. A fragment shader operates over each fragment (pixel, generally) and calculates the color to be output to the screen at that fragment.
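As a concrete example, a sepia-tone fragment shader along the lines of the one the next few paragraphs walk through might look like this (a sketch: the variable names follow GPUImage's conventions and the weights are the commonly used sepia matrix, so the framework's actual shader may differ in detail):

    varying highp vec2 textureCoordinate;   // passed in from the stock vertex shader
    uniform sampler2D inputImageTexture;    // the image from the previous pipeline step

    void main()
    {
        // Read the input pixel; coordinates and color components are all in the 0.0-1.0 range.
        lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);

        // Apply the sepia color matrix explicitly, component by component.
        lowp vec4 outputColor;
        outputColor.r = (textureColor.r * 0.393) + (textureColor.g * 0.769) + (textureColor.b * 0.189);
        outputColor.g = (textureColor.r * 0.349) + (textureColor.g * 0.686) + (textureColor.b * 0.168);
        outputColor.b = (textureColor.r * 0.272) + (textureColor.g * 0.534) + (textureColor.b * 0.131);
        outputColor.a = textureColor.a;

        gl_FragColor = outputColor;
    }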
GPUImage deals with image processing, so most of the time you'll be working only with fragment shaders and relying on one of the stock vertex shaders. The above is an example of a fragment shader that takes in each pixel of an input texture (the image from the previous step in the processing pipeline), manipulates its color values, and writes out the result to the gl_FragColor builtin.

The first line in the main() function uses the texture2D() function to read the pixel color in the inputImageTexture at a given coordinate (passed in from the vertex shader in the first stage). These coordinates are normalized to the 0.0-1.0 range, and therefore are independent of the input image size.

The values are loaded into a vector type (vec4) that contains multiple components within a single type. In this case, the color components for red, green, blue, and alpha are stored in this four-component vector and can be accessed via .r, .g, .b, and .a. The color component values are also normalized to the 0.0-1.0 range, in case you're used to working with 0-255 values.
In the particular case of a sepia tone filter, I'm attempting to apply a well-known color matrix for a sepia tone effect to convert the incoming color to an outgoing one. That requires matrix multiplication, which I do explicitly in the code above. In the actual framework, this is done as matrix math using builtin types within the shader.
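A sketch of that matrix-based form, with a hypothetical colorMatrix uniform holding the sepia coefficients:

    varying highp vec2 textureCoordinate;
    uniform sampler2D inputImageTexture;
    uniform lowp mat4 colorMatrix;          // hypothetical uniform; loaded from the host code

    void main()
    {
        lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
        // Whether you write colorMatrix * color or color * colorMatrix depends on
        // how the coefficients are packed (GLSL matrices are column-major).
        gl_FragColor = colorMatrix * textureColor;
    }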
There are many, many ways to manipulate colors to achieve certain effects. The GPUImage framework is full of them, based largely on things like color conversion standards published by Adobe or other organizations. For a given effect, you should identify if one of these existing implementations will do what you want before setting out to write your own.
If you do want to write your own, first figure out the math required to translate incoming colors into whatever output you want. Once you have that, writing the shader code is easy.
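For instance, once you know the math for a plain luminance (grayscale) conversion, the shader is only a few lines; the Rec. 709 weights below are just one common choice:

    varying highp vec2 textureCoordinate;
    uniform sampler2D inputImageTexture;

    void main()
    {
        lowp vec4 color = texture2D(inputImageTexture, textureCoordinate);
        // A weighted sum of the channels gives the gray value.
        lowp float luminance = dot(color.rgb, vec3(0.2126, 0.7152, 0.0722));
        gl_FragColor = vec4(vec3(luminance), color.a);
    }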
The lookup filter in GPUImage takes another approach, and that's to start with a lookup image that you manipulate in Photoshop under the filter conditions you wish to mimic. You then take that filtered lookup image and attach it to the lookup filter. It translates between the incoming colors and their equivalents in the lookup to provide arbitrary color manipulation.
Once you have your shader, you can create a new filter around that in a few different ways. I should say that my new GPUImage 2 framework greatly simplifies that process, if you're willing to forgo some backwards compatibility.
The 5th edition of the OpenGL SuperBible was recently released. This edition reflects OpenGL 3.3, which was released at the same time as OpenGL 4.0; the book covers only the core profile and assumes no prior OpenGL knowledge.
That's what I got from the book's description anyway. I have the 4th edition and it's an excellent resource for OpenGL 2.0, so I assume the new edition along with the latest OpenGL Shading Language book would be just what you're looking for.
Durian Software has an ongoing series of tutorials covering modern OpenGL. They are aimed at OpenGL 2.0 but avoid any functionality that is deprecated in later versions.