
Porting legacy OpenGL

Congratulations, OpenGL! You are (almost) 20 years old. That is a long time to have been granting intrepid souls all over the world access to a standardized 3D graphics pipeline model and set of programming interfaces. The only problem is that, entropy being what it is, some of the code (applications, toolkits, libraries) written to those interfaces and against that pipeline model is also (almost) that old. The pipeline model has evolved to reflect not only advances in 3D hardware architectures, but also the accumulated wisdom of software rendering techniques. A separate fork of the standard (OpenGL ES) has sprung up to address the needs and constraints of embedded systems. The old fixed-function pipeline has largely gone the way of the dodo, replaced by a very powerful, fully programmable multi-stage graphics pipeline.

Overview of porting topics

These are common topics for "modernization" of OpenGL functionality. They largely apply whether targeting OpenGL ES 2.0 or the current core profile of OpenGL.

  • No more immediate mode. Use vertex and index arrays instead; preferably from buffer objects.
  • Reduced primitive selection.
  • No more fixed-function vertex processing (matrix stacks, transformation API, lighting, material properties, etc.). Need matrix stack and transform library as well as vertex shaders.
  • No more fixed-function fragment processing (fog, texenv, related texture and fog enables, etc.). Need fragment shaders.
  • No bitmaps or polygon stipples.
  • No more glCopyPixels.
  • Much more, but those are the key pieces.

Tackling the API conversion topic by topic this way prevents breakage and regression of existing OpenGL functionality, independent of any OpenGL ES-specific work. And, last but not least, there is another area of consideration: the GLX to EGL conversion for config, surface and context management.

Immediate mode

In the early days of OpenGL, the input to the pipeline took the form of commands and attributes. For example, you could tell the pipeline that you wanted it to draw triangles, after which it would expect vertex coordinates in multiples of three, followed by the command terminator. It looked something like this:

    glBegin(GL_TRIANGLES);
        glVertex3f(1.0, 0.0, 0.0);
        glVertex3f(1.0, 1.0, 0.0);
        glVertex3f(0.0, 0.0, 0.0);
    glEnd();

This was called immediate mode, and even though vertex arrays were added as an extension for version 1.1, it was largely how the pipeline was fed for the first decade of the existence of OpenGL. The best you could do with immediate mode was to put groups of commonly issued commands into display lists, which is basically a record/playback mechanism for OpenGL commands. This was of limited use, however, as there are a number of useful commands that cannot be put into display lists. There is still a lot of code out there that is written this way; of course, if you are reading this you probably knew that.

What to use instead? How about...

Vertex Arrays and Index Arrays

Immediate mode was great for getting something up quickly, and for drawing very simple objects. For large data sets, however, it is not very efficient, and the code gets ugly fast. For efficient processing of larger data sets, the concept of vertex arrays was introduced. The idea is to set up all of your vertex (and normal, color and texture coordinate) data in a single memory buffer and simply tell OpenGL how to walk that data in order to yield primitive descriptions. This description is given when passing the buffer pointer to OpenGL (e.g. there are 136 3-dimensional floating-point vertex coordinates in this buffer). The draw call then simply tells OpenGL what kind of primitive to draw, what offset into the buffer to start at, and how many vertex coordinates to use. It looks something like:

    GLfloat my_data[] = {
        1.0, 0.0, 0.0,
        1.0, 1.0, 0.0,
        0.0, 0.0, 0.0
    };
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, my_data);
    glDrawArrays(GL_TRIANGLES, 0, 3);

This is an overly trivial example, but still illustrates the point. The data can be separated out from the code, which makes for cleaner code and fewer API calls to perform the drawing, and the implementation can be more efficient at traversing the data.

Depending upon your needs, you may also want to look at using index arrays. Index arrays add an extra level of indirection to the vertex array allowing you to reference a given vertex more than once and in different orders (e.g. face-vertex meshes). Not only can this be an efficiency boost for your application's data management, but many (most) GPUs actually optimize for this case by caching vertex processing, so your vertex may only need to be shaded once even if it is used many times. Index arrays are referenced using the glDrawElements call.

    GLfloat my_vertex_data[] = {
        1.0, 0.0, 0.0,
        1.0, 1.0, 0.0,
        0.0, 0.0, 0.0
    };
    GLuint my_index_data[] = {
        0, 1, 2
    };
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, my_vertex_data);
    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, my_index_data);

Again, an overly trivial example, but in general, this is how all of your draw code should be set up.

NOTE: The quad, quad strip and polygon primitives are no longer available as valid choices for draw commands in OpenGL and have not been part of OpenGL ES. If you have them in your code, switch to something else, like triangle strips or fans.

Vertex Buffer Objects

Essentially, vertex buffer objects (VBOs) give you a server-side container for all of your vertex and related attribute data (indices, normals, texture coordinates, etc.). Access to your data can be even more efficient than with basic vertex arrays because the data can be memory local to the GPU at the time your draw call is made. The difference from the vertex array example above is that the data is placed into the buffer object ahead of time, so the sequence might look something like:

    // The following is only done once at initial setup time...
    //
    // Create the buffer object
    GLuint bufferObject;
    glGenBuffers(1, &bufferObject);
    // Setup the vertex data by binding the buffer object,
    // allocating its data store, and filling it in with our vertex data.
    GLfloat my_data[] = {
        1.0, 0.0, 0.0,
        1.0, 1.0, 0.0,
        0.0, 0.0, 0.0
    };
    glBindBuffer(GL_ARRAY_BUFFER, bufferObject);
    glBufferData(GL_ARRAY_BUFFER, sizeof(my_data), my_data, GL_STATIC_DRAW);
    // Unbind the buffer object to preserve the state.
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    // Now, any time you want to reference the data in a draw call...
    glBindBuffer(GL_ARRAY_BUFFER, bufferObject);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glDrawArrays(GL_TRIANGLES, 0, 3);

Note the difference in the call to glVertexAttribPointer from the original vertex array example. The final pointer argument is no longer a pointer. Because there is a bound buffer object, the pointer argument is treated as a byte offset into the buffer data; in this case, start at the first vertex coordinate when processing the draw call.
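Index data can (and generally should) live in a buffer object as well, bound to the GL_ELEMENT_ARRAY_BUFFER target. The following sketch assumes a current context and the bufferObject from the example above; it uses GL_UNSIGNED_SHORT because OpenGL ES 2.0 only guarantees byte and short indices without an extension:

```c
// One-time setup: create and fill an index buffer object.
GLuint indexBuffer;
GLushort my_index_data[] = { 0, 1, 2 };

glGenBuffers(1, &indexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(my_index_data),
             my_index_data, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

// Draw time: with an element array buffer bound, the final argument
// to glDrawElements is a byte offset into that buffer, just as the
// pointer argument to glVertexAttribPointer became an offset.
glBindBuffer(GL_ARRAY_BUFFER, bufferObject);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, 0);
```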

Vertex Array Objects

Conceptually, a vertex array object (VAO) is simply a meta-object that gives the programmer (and the implementation) an easy handle for all of the vertex data, attributes, etc. that go along with an object or objects to be rendered. Rather than keeping track of all of the enables, disables, binds and unbinds of several different buffer objects and attributes, the VAO allows all of this meta-information to be associated with a single object that can then simply be bound and unbound. Sounds great, right? So what's the catch? VAOs are only available in relatively recent versions of the OpenGL core profile and actually become mandatory in the most recent versions of the core profile. This means some slightly subtle handling in code bases that provide common OpenGL and OpenGL ES support.
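A sketch of the pattern, assuming a context where VAOs are available and the bufferObject from the VBO example above (on OpenGL ES 2.0 the equivalent entry points come from the GL_OES_vertex_array_object extension, with an OES suffix):

```c
// One-time setup: record the buffer binding and attribute layout
// in a vertex array object.
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, bufferObject);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);
glBindVertexArray(0);

// Draw time: a single bind replaces the whole setup sequence.
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);
glBindVertexArray(0);
```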


Fixed-function Processing

When OpenGL was developed, the available 3D hardware was not the general-purpose compute engine it is today. It was typically several special-purpose chips strapped onto a board with some memory. One would handle geometry, which included vertex transformation, clipping and lighting. Another might handle primitive rasterization and texturing. And still another might handle the format conversions needed to make the trip from digital to analog suitable for display. None of those engines was fully programmable, even within the set of tasks it was assigned. So, if you wanted lighting, for example, there were specific lighting functions built into the engine, and the original OpenGL API allowed you to choose that particular function. This is what is meant by the term fixed-function.

In today's world, the GPU has become so general purpose that it is even taking over some of the tasks previously reserved for very high-performance CPUs. The OpenGL (and OpenGL ES) APIs have evolved to reflect those changes. Mechanisms like matrix stacks and basic transformations on those matrices are no longer available from the API, primarily because they were largely performed in software on the CPU anyway. Even for functionality that was nearly always performed on the graphics hardware, like lighting or fog computations, there is no longer direct support from the API.

The "new" way of doing most of this is through shaders: programs written in the C-like OpenGL Shading Language (GLSL or GLSL ES). Each pipeline stage gets its own shader. For most recent versions of OpenGL, and for OpenGL ES 2.0, there are basically two: the vertex shader and the fragment shader. There are few restrictions on what can be done in each, but the division of labor should be pretty self-explanatory. Anything that needs to be computed for each vertex happens in the vertex shader (e.g. transformation of vertex coordinates by a matrix), and anything that needs to be computed for each fragment happens in the fragment shader (e.g. texture sampling or lighting). For OpenGL 4.1, there are additional shader stages covering tessellation (before vertex processing) and geometry (after vertex processing). As those are not available in OpenGL ES, they won't be covered here (yet).
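To make the division of labor concrete, here is a minimal GLSL ES shader pair replacing the fixed-function transform and a constant color. The names (a_position, u_mvp, u_color) are illustrative conventions, not a standard API:

```glsl
// Vertex shader: transform each vertex by an application-supplied
// modelview-projection matrix (what the matrix stack used to do).
attribute vec4 a_position;
uniform mat4 u_mvp;
void main()
{
    gl_Position = u_mvp * a_position;
}

// Fragment shader: write a constant color for each fragment
// (what glColor* effectively did in the fixed-function pipeline).
precision mediump float;
uniform vec4 u_color;
void main()
{
    gl_FragColor = u_color;
}
```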

What to do about the fixed-function vertex processing in your legacy code? For matrices and transformations, the libmatrix project provides a simple implementation of vectors, matrices and matrix stacks, along with most of the useful maths appropriate to those objects (vector products, matrix transformations). It also has a reasonable GLSL program object abstraction, along with a function to load your shader source from a file (handy for development, so you don't have to recompile your whole application in order to test new shaders). The glcompbench project is a good example of the usage of these objects to provide programmable vertex and fragment processing.

Texture Environment

There are a number of texture modes and environment parameters that have either gone away in recent core profile OpenGL or were never available in OpenGL ES; some of this is related to the conversion from fixed-function to programmable. This list is not exhaustive and will grow over time as we encounter each feature and develop a solution to use as an example.

  • 1D Textures (no example here, you just have to use a 2D texture with height == 1).
  • GL_CLAMP_TO_BORDER wrap mode.

Bitmaps and Polygon Stipples

In OpenGL, bitmaps and polygon stipples are basically rectangular masks of 1-bit color index data, where a value of 0 yields transparency (or, rather, leaves the buffer contents unchanged), and a value of 1 yields the current raster color (or possibly a texture sample). These are no longer available in OpenGL and have never been part of OpenGL ES. If you have code that deals with them, a simple substitution is to use a 1-byte alpha texture and handle any special sampling logic in the shader.

How to choose an appropriate EGLConfig on an X11 Window System

The simple answer might be to just change "glX" to "egl" in your ChooseConfig, CreateContext and Make[Context]Current calls and be done with it. Not so fast: it's even possible that your current GLX calls are not quite right. Don't take it personally; X11 is tricksy and almost no one gets it right. Mostly, modern implementations have simplified how various resources are supported, so the "default" thing is valid most of the time (gone are the days of SGI workstations with multiple simultaneously displayable pixel formats on multiple screens, with underlays and overlays). Also gone are the days when anyone cared about conformance branding. The other important consideration is whether or not you want the results of your rendering to be locally displayable. If so, you will need to include EGL_WINDOW_BIT in the EGL_SURFACE_TYPE config attribute when you query. So, to get what you want the vast majority of the time, you can do something like this hypothetical function that finds a valid config:

    bool
    gotValidEGLConfig(EGLConfig& config)
    {
        using std::cerr;
        using std::endl;
        // First, get your connection to the display and initialize EGL.  This
        // uses $DISPLAY from the environment, but it could just as easily use
        // a string pulled from an option argument.
        Display* xdpy = XOpenDisplay(0);
        if (!xdpy)
        {
            // Something really wrong here...
            cerr << "Failed to connect to X11 server" << endl;
            return false;
        }
        // Could also use EGL_DEFAULT_DISPLAY and get the "right" result most
        // of the time.
        EGLDisplay dpy = eglGetDisplay(static_cast<EGLNativeDisplayType>(xdpy));
        if (dpy == EGL_NO_DISPLAY)
        {
            cerr << "No EGL display available." << endl;
            return false;
        }
        EGLint major, minor;
        if (!eglInitialize(dpy, &major, &minor))
        {
            EGLint errcode = eglGetError();
            cerr << "eglInitialize failed, reason: " << errcode << endl;
            return false;
        }

        // Here you need to know what kind of config you really need for the
        // rendering you plan to do.  EGLConfigs are sorted in a very particular
        // way, but it can often seem nonintuitive.  See the description "Sorting
        // of EGLConfigs" in section 3.4.1 of the EGL 1.4 specification for more
        // details.  The following attribute list will yield a list of configs
        // that are renderable by OpenGL ES 2.0, can be passed to
        // eglCreateWindowSurface, are RGBA (as opposed to luminance) in pixel
        // format and have a depth buffer.
        const EGLint config_attribs[] =
        {
          EGL_SURFACE_TYPE,         EGL_WINDOW_BIT,
          EGL_RENDERABLE_TYPE,      EGL_OPENGL_ES2_BIT,
          EGL_RED_SIZE,             1,
          EGL_GREEN_SIZE,           1,
          EGL_BLUE_SIZE,            1,
          EGL_ALPHA_SIZE,           1,
          EGL_DEPTH_SIZE,           1,
          EGL_CONFIG_CAVEAT,        EGL_NONE,
          EGL_NONE,
        };
        EGLint numConfigs(0);
        if (!eglChooseConfig(dpy, config_attribs, 0, 0, &numConfigs))
        {
            cerr << "Failed to determine number of relevant EGL configs." << endl;
            return false;
        }
        if (numConfigs == 0)
        {
            cerr << "No EGL configs matched the requested attributes." << endl;
            return false;
        }
        EGLConfig* configs = new EGLConfig[numConfigs];
        if (!eglChooseConfig(dpy, config_attribs, configs, numConfigs, &numConfigs))
        {
            cerr << "Failed to get relevant EGL configs." << endl;
            delete [] configs;
            return false;
        }

        // Here it's worth reviewing how the configs are sorted by EGL.
        // The deepest color formats come first, which may or may not
        // be what you want, so there may still be some additional manual
        // sorting to do...
        config = configs[0];

        delete [] configs;

        // At this point, you can get the native visual ID config attribute and use
        // it to query the XVisualInfo struct for the visual you'll use to create
        // your colormap and native window in X11 before calling
        // eglCreateWindowSurface.
        return true;
    }

Unless the requirements set by the config attributes you passed into eglChooseConfig are an inherent mismatch with your display configuration (e.g., there are no displayable configs with an alpha channel, or there are only slow configs), you should now have something displayable. This is not the only way to accomplish this, and there may be many more displayable configs available to you than just the first one in the list, but this is a fine starting point.

WorkingGroups/Middleware/Graphics/Docs/GLPortingGuide (last modified 2011-09-26 22:30:27)