Handling hardware that can't handle mipmaps

Most older graphics chips do not support mipmaps, or support them only partially. Some newer chips, such as the R100 family, do not support mipmaps on cubic or volume textures. This document presents several ways to deal with such hardware, describing each technique along with its pros and cons.

The following table summarizes the chips known to have mipmapping limitations.

Chip name          Supported by Mesa?   Rasterization method   1D/2D mipmaps   3D mipmaps   Cubic mipmaps
ATI Rage IIc       No                   Scanline               No              N/A          N/A
ATI Rage Pro       Yes                  Polygon setup          No              N/A          N/A
ATI Radeon R100    Yes                  TNL                    Yes             No           No
ATI Radeon R200    Yes                  TNL                    Yes             No           Yes
S3 Virge           No [1]               Scanline               No              N/A          N/A
3Dlabs 500TX       No                   Polygon setup [2]      No              N/A          N/A
Matrox G100        No                   Scanline?              No?             N/A          N/A

[1] A driver exists, but it is not maintained and is known to have significant problems.

[2] Polygon setup is performed by a Glint Delta chip, and rasterization is performed by the 500TX. Some (rare) cards may omit the Delta, and it may be possible to bypass the Delta and program the 500TX directly.

Clamp to base LOD

The easiest technique is to simply use the base LOD setting for the texture. When selecting which mipmap level to use, all of the usual variables (e.g., LOD bias, explicit base-level setting, etc.) are considered. That single level is then used until either the current texture or the LOD settings change.

It should be noted that this technique will produce very incorrect results if the successive mipmaps are not scaled, filtered versions of the base map. Other than programs that test a driver's texture filtering quality, I don't know of any applications that supply mipmaps that way, but they may exist.
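
Below is a minimal C sketch of how a driver might pick that single level; the function and parameter names are assumptions for illustration, not actual Mesa code. It folds only the static LOD state into one level and clamps it to the levels the texture actually has.

    /* Sketch only: choose one mipmap level from the static texture state
     * (base level, LOD bias, min/max LOD) and reuse it until that state
     * changes.  No per-fragment rho is computed. */
    static int
    choose_single_level(int base_level, int max_level,
                        float lod_bias, float min_lod, float max_lod)
    {
       float lambda = lod_bias;

       if (lambda < min_lod)
          lambda = min_lod;
       if (lambda > max_lod)
          lambda = max_lod;

       /* Round to a level and clamp to the levels that exist. */
       int level = base_level + (int)(lambda + 0.5f);
       if (level < base_level)
          level = base_level;
       if (level > max_level)
          level = max_level;

       return level;
    }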

Pros:

  • Very easy to implement
  • Fast

Cons:

  • Non-conformant
  • Low visual quality

Fallback to software

Another option is to fall back to software rasterization whenever a mipmapped texture filter is selected.
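
The check itself is small. The sketch below assumes a Mesa-like texture object and illustrative function names (not the exact Mesa fallback API); it shows how a driver could detect a mipmapped minification filter and flag the fallback.

    #include <GL/gl.h>

    /* Sketch only: does this minification filter require mipmaps? */
    static GLboolean
    filter_needs_mipmaps(GLenum min_filter)
    {
       switch (min_filter) {
       case GL_NEAREST_MIPMAP_NEAREST:
       case GL_LINEAR_MIPMAP_NEAREST:
       case GL_NEAREST_MIPMAP_LINEAR:
       case GL_LINEAR_MIPMAP_LINEAR:
          return GL_TRUE;
       default:
          return GL_FALSE;
       }
    }

    /* In the driver's state-validation path, a check along these lines
     * (names are illustrative) would route rendering to the software
     * rasterizer:
     *
     *    if (unit_enabled && filter_needs_mipmaps(texObj->MinFilter))
     *       fallback_to_swrast = GL_TRUE;
     */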

Pros:

  • Very easy to implement
  • Conformant
  • High visual quality

Cons:

  • SLOW

Calculate per-polygon LOD

After the vertices of a polygon have been transformed, clipped, lit, and projected to screen space, the resulting data can be used to approximate the rho calculation defined by the OpenGL spec (see pages 171 and 172 of the OpenGL 2.0 spec). For each edge in the polygon, the following calculation is performed:

    rho = max( sqrt( sqr(du/dx) + sqr(dv/dx) + sqr(dw/dx) ),
               sqrt( sqr(du/dy) + sqr(dv/dy) + sqr(dw/dy) ) )

Here dx and dy are the changes in screen-space coordinates across the edge, and du, dv, and dw are the changes in texel addresses across the edge. In this case, u = s * width, v = t * height, and w = r * depth. For 1D textures, height and depth are zero; for 2D textures, depth is zero.
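
A C sketch of this per-edge calculation is shown below for the 2D case (the dw/depth term is dropped). It takes the largest rho over a triangle's three edges and converts it to a level with lambda = log2(rho); the function names, the max-over-edges combination, and the rounding are assumptions for illustration, not Mesa code.

    #include <math.h>

    /* rho for a single edge, 2D case: u = s * width, v = t * height. */
    static float
    edge_rho(float x0, float y0, float u0, float v0,
             float x1, float y1, float u1, float v1)
    {
       const float dx = x1 - x0, dy = y1 - y0;
       const float du = u1 - u0, dv = v1 - v0;

       /* Skip the degenerate direction when the edge is axis-aligned. */
       const float rx = (dx != 0.0f)
          ? sqrtf((du / dx) * (du / dx) + (dv / dx) * (dv / dx)) : 0.0f;
       const float ry = (dy != 0.0f)
          ? sqrtf((du / dy) * (du / dy) + (dv / dy) * (dv / dy)) : 0.0f;

       return (rx > ry) ? rx : ry;
    }

    /* One mipmap level for the whole triangle: take the largest rho over
     * its three edges and apply lambda = log2(rho). */
    static int
    triangle_level(const float x[3], const float y[3],
                   const float u[3], const float v[3],
                   int base_level, int max_level)
    {
       float rho = 1.0f;
       int i, level;

       for (i = 0; i < 3; i++) {
          const int j = (i + 1) % 3;
          const float r = edge_rho(x[i], y[i], u[i], v[i],
                                   x[j], y[j], u[j], v[j]);
          if (r > rho)
             rho = r;
       }

       level = base_level + (int)(log2f(rho) + 0.5f);
       return (level > max_level) ? max_level : level;
    }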

On hardware that is scanline based (e.g., S3 Virge), this calculation could be done per scanline during rasterization. However, other hardware limitations may make this impractical.

As an interesting side effect, this algorithm could be used to implement per-polygon anisotropic filtering. The technique is described in the SIGGRAPH '98 course Advanced Graphics Programming Techniques Using OpenGL.

Marc Olano et al. have a paper on a similar technique called "Vertex-based Anisotropic Texturing". Slides are also available from the 2001 SIGGRAPH/Eurographics Workshop on Graphics Hardware presentation. This algorithm may not be of much use here, but the paper is an interesting read either way.

Pros:

  • Improved visual quality
  • Allows "fake" anisotropic filtering

Cons:

  • Non-conformant
  • Difficult to implement in Mesa
  • Sizable performance hit

Calculate per-batch LOD

This technique is just like calculating the per-polygon LOD, but the same mipmap level is used for all polygons in a batch (e.g., within a single glBegin / glEnd pair).
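
A sketch of how the per-polygon calculation above could be amortized over a batch is shown below: it simply tracks the largest rho seen between glBegin and glEnd and programs one level for the whole batch. The helper names and the max-over-triangles policy are assumptions for illustration.

    #include <math.h>

    /* Sketch only: accumulate the largest rho seen in the current batch
     * (e.g., between glBegin and glEnd), then pick a single level. */
    static float batch_rho;

    static void
    batch_start(void)
    {
       batch_rho = 1.0f;
    }

    static void
    batch_note_triangle(float rho)   /* rho from the per-polygon calculation */
    {
       if (rho > batch_rho)
          batch_rho = rho;
    }

    static int
    batch_level(int base_level, int max_level)
    {
       const int level = base_level + (int)(log2f(batch_rho) + 0.5f);
       return (level > max_level) ? max_level : level;
    }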

Pros:

  • Improved visual quality
  • Modest performance hit on non-TNL cards

Cons:

  • Non-conformant
  • Difficult to implement in Mesa

CategoryHardware