libGL

What is libGL?

OpenGL-based programs must link with the libGL library. libGL implements the GLX interface as well as the main OpenGL API entrypoints. When using indirect rendering, libGL creates GLX protocol messages and sends them to the X server via a socket. When using direct rendering, libGL loads the appropriate 3D DRI driver, then dispatches OpenGL library calls directly to that driver.
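
In both cases the public GL entrypoints are thin trampolines: each jumps through a table of function pointers filled in with either GLX protocol encoders (indirect) or driver functions (direct). Here is a minimal sketch of that idea, with hypothetical names, heavily simplified from libGL's real dispatch code:

    /* Hypothetical sketch: one dispatch table serves both rendering
     * paths.  In indirect mode the slots point at GLX protocol
     * encoders; in direct mode they point at the DRI driver's
     * entrypoints.  Names are illustrative, not Mesa's real glapi. */
    typedef unsigned int GLbitfield;

    struct gl_dispatch {
        void (*Clear)(GLbitfield mask);
        /* ... one slot per OpenGL entrypoint ... */
    };

    static struct gl_dispatch *dispatch;  /* per-thread in the real libGL */

    /* A public GL entrypoint is just a trampoline through the table. */
    void glClear(GLbitfield mask)
    {
        dispatch->Clear(mask);
    }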

libGL also has the ability to support heterogeneous, multi-head configurations. That means one could have two or more graphics cards (of different types) in one system and libGL would allow an application program to use all of them simultaneously.

Where does libGL reside?

The libGL source code resides at xc/lib/GL/GL/.

libGL (3D) Driver

A DRI-aware 3D driver, currently based on Mesa.

Where does the 3D Driver reside?

Normally libGL loads 3D DRI drivers from the /usr/X11R6/lib/modules/dri directory, but the search path can be overridden by setting the LIBGL_DRIVERS_PATH environment variable.

The DRI-aware 3D driver source resides in xc/lib/GL/mesa/src/drv.
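
The search logic amounts to roughly the following sketch (assumed code; the real logic lives in xc/lib/GL/dri/dri_glx.c, the helper name is hypothetical, and driver binaries are named like radeon_dri.so):

    #include <stdio.h>
    #include <stdlib.h>

    #define DEFAULT_DRIVER_DIR "/usr/X11R6/lib/modules/dri"

    /* Build the filename of the driver binary to load. */
    static void driver_filename(const char *drivername, char *buf, size_t len)
    {
        const char *path = getenv("LIBGL_DRIVERS_PATH");  /* user override */
        if (path == NULL)
            path = DEFAULT_DRIVER_DIR;                    /* compiled-in default */
        snprintf(buf, len, "%s/%s_dri.so", path, drivername);
    }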

The DRI driver initialization process

  • The whole process begins when an application calls glXCreateContext (xc/lib/GL/glx/glxcmds.c). glXCreateContext is just a stub that calls CreateContext. The real work begins when CreateContext calls __glXInitialize (xc/lib/GL/glx/glxext.c).
  • The driver-specific initialization process starts with __driCreateScreen. Once the driver is loaded (via dlopen), dlsym is used to get a pointer to this function; a sketch of this loading step appears after this list. The function pointer for each driver is stored in the createScreen array in the __DRIdisplay structure. This initialization is done in driCreateDisplay (xc/lib/GL/dri/dri_glx.c), which is called by __glXInitialize. Note that __driCreateScreen really is the bootstrap of a DRI driver: it is the only function in a DRI driver that libGL directly knows about. All the other DRI functions are accessed via the __DRIdisplayRec, __DRIscreenRec, __DRIcontextRec and __DRIdrawableRec structs defined in xc/lib/GL/glx/glxclient.h. Those structures are pretty well documented in that file.
  • After performing the __glXInitialize step, CreateContext calls the createContext function for the requested screen. Here the driver creates two data structures. The first, GLcontext, contains all of the device-independent state, device-dependent constants (i.e., texture size limits, light limits, etc.), and device-dependent function tables. The driver also allocates a structure that contains all of the device-dependent state. The GLcontext structure links to the device-dependent structure via the DriverCtx pointer, and the device-dependent structure has a pointer back to the GLcontext structure (see the context sketch after this list). The device-dependent structure is where the driver stores context-specific hardware state (register settings, etc.) for when context (in the OpenGL / X sense) switches occur. This structure is analogous to the buffers where the OS stores CPU state when a program context switch occurs. The texture images really are stored within Mesa's data structures. Mesa supports about a dozen texture formats, which happen to satisfy what all the DRI drivers need. So the texture format/packing is dependent on the hardware, but Mesa understands all the common formats. See Mesa/src/texformat.h. Gareth and Brian spent a lot of time on that.
  • createScreen (i.e., the driver-specific initialization function) is called for each screen from AllocAndFetchScreenConfigs (xc/lib/GL/glx/glxext.c), which is also called from __glXInitialize.
  • For all of the existing drivers, the __driCreateScreen function is just a wrapper that calls __driUtilCreateScreen (xc/lib/GL/dri/dri_util.c) with a pointer to the driver's API function table (of type __DriverAPIRec). This creates a __DRIscreenPrivate structure for the display and fills it in (mostly) with the supplied parameters (i.e., screen number, display information, etc.). It also opens and initializes the connection to the DRM: opening the DRM device, mapping the frame buffer (note: the DRM documentation says that the function used for this is called drmAddMap, but it is actually called drmMap), and mapping the SAREA (see the DRM sketch after this list). The final step is to call the driver's initialization function, from the InitDriver field of the __DriverAPIRec (the DriverAPI field of the __DRIscreenPrivate).
  • The InitDriver function does (at least in the Radeon and i810 drivers) two broad things. It first verifies the versions of the services (XFree86, DDX, and DRM) that it will use. The driver then creates an internal representation of the screen and stores a pointer to it in the private field of the __DRIscreenPrivate structure. The driver-private data may include things such as mappings of MMIO registers, mappings of display and texture memory, information about the layout of video memory, chipset-revision-specific data (feature availability for the specific chip revision, etc.), and other similar data. This is the handle that identifies the specific graphics card to the driver (in case there is more than one card in the system that uses the same driver).
  • After performing the __glXInitialize step, CreateContext calls the createContext function for the requested screen. This is where it gets pretty complicated. I have only looked at the Radeon driver. radeonCreateContext (xc/lib/GL/mesa/src/drv/radeon/radeon_context.c) allocates a GLcontext structure (actually struct __GLcontextRec from extras/Mesa/src/mtypes.h) and fills in function tables for virtually every OpenGL call. The __GLcontextRec (i.e., GLcontext in Mesa) doesn't have any buffers of hardware-specific data (except texture image data, if you want to be picky). All Radeon-specific, per-context data hangs off of struct radeon_context. All the DRI drivers define a hardware-specific context structure like this (such as struct radeon_context, typedef'd to radeonContextRec, or struct mga_context_t, typedef'd to mgaContext). radeonContextRec has a pointer back to the Mesa __GLcontextRec, and Mesa's __GLcontextRec->DriverCtx pointer points back to the radeonContextRec (see the context sketch after this list). If we were writing all this in C++ (don't laugh) we'd treat Mesa's __GLcontextRec as a base class and create driver-specific derived classes from it. Inheritance like this is actually pretty common in the DRI code, even though it's sometimes hard to spot. The driver-private structure is where context-specific hardware state (textures, register settings, etc.) is kept for when context (in the OpenGL / X sense) switches occur; it is analogous to the buffers where the OS stores CPU state when a program context switch occurs. Note that we don't do any fancy hardware context switching in our drivers: when we make-current a new context, we basically update all the hardware state with that new context's values.
  • When each of the function tables is initialized (see radeonInitSpanFuncs for an example), an internal Mesa function is called. This function (e.g., _swrast_GetDeviceDriverReference) both allocates the buffer and fills in the function pointers with the software fallbacks. If a driver were to just call these allocation functions and not replace any of the function pointers, it would be the same as the software renderer (see the fallback sketch after this list).
  • The next part seems to start when the createDrawable function in the __DRIscreenRec is called, but I don't see where this happens. createDrawable should be called via glXMakeCurrent, since that's the first time we're given an X drawable handle. Somewhere during glXMakeCurrent we use a DRI hash lookup to translate the X drawable handle into a pointer to a __DRIdrawable. If we get a NULL pointer, that means we've never seen that handle before and now have to allocate the __DRIdrawable and initialize it (and put it in the hash table). -- IanRomanick and BrianPaul
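
A sketch of the driver-loading step described above, with error handling trimmed and the helper name hypothetical:

    /* dlopen the driver and dlsym its one well-known entrypoint.
     * Everything else is reached through the __DRI* structures. */
    #include <dlfcn.h>
    #include <stdio.h>

    typedef void *(*CreateScreenFunc)();   /* simplified signature */

    static CreateScreenFunc load_driver(const char *filename)
    {
        void *handle = dlopen(filename, RTLD_NOW | RTLD_GLOBAL);
        if (handle == NULL) {
            fprintf(stderr, "libGL error: %s\n", dlerror());
            return NULL;
        }
        return (CreateScreenFunc) dlsym(handle, "__driCreateScreen");
    }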
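
The DRM half of __driUtilCreateScreen looks roughly like this. drmOpen, drmMap and drmClose are real libdrm calls from xf86drm.h (type names follow current libdrm); the wrapper and its parameters are hypothetical:

    #include <xf86drm.h>

    /* Open the DRM device and map the SAREA into this process.
     * The frame buffer is mapped the same way with its own handle. */
    int setup_drm(const char *busid, drm_handle_t sarea_handle,
                  drmSize sarea_size, drmAddress *sarea)
    {
        int fd = drmOpen(NULL, busid);
        if (fd < 0)
            return -1;

        /* Note: the client side uses drmMap here, not drmAddMap. */
        if (drmMap(fd, sarea_handle, sarea_size, sarea) != 0) {
            drmClose(fd);
            return -1;
        }
        return fd;
    }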
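
The context "inheritance" described in the items above boils down to two pointers. A sketch with minimal stand-ins for the real Mesa and Radeon types (see extras/Mesa/src/mtypes.h and radeon_context.h for the real ones; the linking helper is hypothetical):

    /* Stand-in for Mesa's __GLcontextRec. */
    struct __GLcontextRec {
        void *DriverCtx;          /* points at the driver's private context */
        /* ... device-independent GL state, function tables ... */
    };
    typedef struct __GLcontextRec GLcontext;

    /* Driver-private "derived class" (cf. radeon_context.h). */
    typedef struct radeon_context {
        GLcontext *glCtx;         /* back-pointer to the Mesa "base class" */
        /* ... registers, texture heaps, other hardware state ... */
    } radeonContextRec, *radeonContextPtr;

    /* During createContext the two structures are linked both ways. */
    static void link_contexts(GLcontext *ctx, radeonContextPtr rmesa)
    {
        rmesa->glCtx = ctx;
        ctx->DriverCtx = rmesa;
    }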
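
And the fallback pattern from the item about function tables, modeled loosely on radeonInitSpanFuncs (types and signatures simplified; only the shape of the code is meant to be accurate):

    typedef struct __GLcontextRec GLcontext;

    struct swrast_device_driver {
        void (*WriteRGBASpan)(void);   /* real hooks take span data */
        void (*ReadRGBASpan)(void);
        /* ... many more hooks ... */
    };

    /* Provided by Mesa's software rasterizer: allocates the table and
     * fills every slot with a software fallback. */
    extern struct swrast_device_driver *
    _swrast_GetDeviceDriverReference(GLcontext *ctx);

    extern void radeonWriteRGBASpan(void);
    extern void radeonReadRGBASpan(void);

    void radeonInitSpanFuncs(GLcontext *ctx)
    {
        struct swrast_device_driver *swdd =
            _swrast_GetDeviceDriverReference(ctx);

        /* Override only what the hardware accelerates; replacing
         * nothing would leave the pure software renderer. */
        swdd->WriteRGBASpan = radeonWriteRGBASpan;
        swdd->ReadRGBASpan  = radeonReadRGBASpan;
    }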

Of what use is the Mesa code in the xc tree?

Mesa is used to build some server-side modules/libraries specifically for the benefit of the DRI. libGL is the client-side aspect of Mesa and works closely with those server-side components.

The GLU and GLUT libraries are entirely client-side, and so they are distributed separately.

Is there any documentation about the XMesa* calls?

There is no documentation for those functions. However, one can point out a few things.

First, despite the prolific use of the word "Mesa" in the client (and server) side DRI code, the DRI is not dependent on Mesa. It's a common misconception that the DRI was designed just for Mesa. It's just that the drivers that we at Precision Insight have done so far have Mesa at their core. Other groups are working on non-Mesa-based DRI drivers.

In the client-side code, you could mentally replace the string "XMesa" with "Driver" or some other generic term. All the code below xc/lib/GL/mesa/ could be replaced by alternate code. libGL would still work. libGL has no knowledge whatsoever of Mesa. It's the drivers which it loads that have the Mesa code.

On the server side there's more of the same. The XMesa code used for indirect/software rendering was originally borrowed from stand-alone Mesa and its pseudo GLX implementation. There are some crufty side-effects from that.

That's not really true: there's also the __driRegisterExtensions function, which libGL uses to implement glXGetProcAddress. But that's another long story.

How do X modules and X applications communicate?

X modules are loaded like kernel modules, with symbol resolution at load time, and can thus call each other's functions. For kernel modules, the communication between applications and modules is done via the /dev/* files.

X applications call X library functions, which create protocol packets and send them to the server via a socket; the server then processes them. That's all well documented in the standard X documentation.

There are three ways 3D clients can communicate with the server or with each other:

  1. Via X protocol requests (the DRI defines its own extension requests).
  2. Via the SAREA (the shared memory segment).
  3. Via the kernel driver.
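
In practice a client combines (2) and (3): it takes the hardware lock through the kernel driver before touching shared state in the SAREA or the hardware itself. drmGetLock and drmUnlock are real libdrm calls; the wrapper below is an illustrative sketch only:

    #include <xf86drm.h>

    /* Serialize access to the hardware (and the SAREA describing its
     * shared state) via the kernel driver's lock. */
    void with_hardware_lock(int drm_fd, drm_context_t ctx,
                            void (*emit_commands)(void))
    {
        drmGetLock(drm_fd, ctx, 0);   /* blocks until the lock is ours */
        emit_commands();              /* safe to touch hardware / SAREA */
        drmUnlock(drm_fd, ctx);
    }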