Introduction

Mesa / DRI was not accepted as a Summer of Code organization; however, we still accept projects through X.Org. This page is the current list of possible project ideas. As always with GSoC, student-suggested project ideas are welcome. Please join us for discussion in #dri-devel on Freenode.

Project List

Below is the list of potential projects. If there is a project not on the list that you would like to mentor, please add it. If there is a project not on the list that you would like to work on, please contact us in #dri-devel on Freenode or on the Mesa developers' mailing list (https://lists.sourceforge.net/lists/listinfo/mesa3d-dev).

Peephole Optimizer for Real-Time Code Generation

More and more parts of the rendering pipeline are being dynamically generated. For example, the Cell driver generates code at run time to perform vertex fetch, depth testing, alpha blending, etc., and this approach will only become more common. Writing optimized C code to generate this assembly code is difficult. Two areas are particularly troublesome: preventing redundant calculations and redundant data loads, and properly scheduling instructions for the target processor.

Much of this complexity could be eliminated by a peephole optimizer. This optimizer would analyze the generated code, for example the code emitted to perform alpha blending, and then reorder instructions, change register usage, eliminate redundant calculations, etc. Peephole optimization is a well-known area of compiler development; stand-alone optimizers for use cases like this, however, are less common.
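As a toy illustration of the kind of rewrite such a pass performs, the sketch below scans an instruction list and deletes self-moves and redundant reloads. The instruction representation here is invented for the example; it is not rtasm's actual encoding, which emits raw machine code that a real optimizer would have to pattern-match on instead.

```c
#include <stddef.h>

/* Hypothetical toy IR for demonstrating a peephole pass. */
enum op { OP_LOAD, OP_MOV, OP_ADD };

struct inst {
    enum op op;
    int dst, src; /* register numbers; for OP_LOAD, src is a memory slot */
};

/* Two classic peephole rewrites: delete self-moves (mov rN, rN) and a
 * LOAD that reloads the slot just loaded into the same register.
 * Compacts the array in place and returns the new instruction count. */
static size_t peephole(struct inst *code, size_t n)
{
    size_t out = 0;
    for (size_t i = 0; i < n; i++) {
        struct inst cur = code[i];
        if (cur.op == OP_MOV && cur.dst == cur.src)
            continue; /* mov rN, rN is a no-op */
        if (cur.op == OP_LOAD && out > 0) {
            struct inst prev = code[out - 1];
            if (prev.op == OP_LOAD && prev.dst == cur.dst &&
                prev.src == cur.src)
                continue; /* redundant reload of the same slot */
        }
        code[out++] = cur;
    }
    return out;
}
```

A production pass would work over a window of real machine instructions and also track register liveness before deleting anything.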

The goal of this project is to develop a peephole optimizer for the rtasm (real-time assembly) infrastructure in the gallium-0.1 branch of Mesa. The optimizer should optimize code generated for one of the common Mesa target architectures (i.e., PowerPC, Cell SPE, x86-32 SSE, or x86-64). Alternately, an optimizer could be written for one of the GPU architectures (r300, i965, etc.). The optimizer should be considered a proof-of-concept implementation. While specific to one architecture, it should provide a model for creating additional optimizers; eventually an optimizer will be needed for each target processor and GPU.

Mentor: IanRomanick

Add a Nouveau gallium backend

Nouveau currently implements 3D through gallium3D, which allows a 3D driver core to be implemented in very few lines of C (about 3000 for a basic driver). However, this support exists, in limited form, only for nv40 cards; preliminary support exists for nv10 and nv30.

The goal of this project is to get a new 3D backend going. Obviously, this is a very ambitious goal, but intermediate goals can be set (for example: a working single triangle, working glxgears, working textures...). Furthermore, NVIDIA cards share a fair amount of hardware from one generation to the next, so code could (and should) be leveraged from existing drivers (for example, the nv20 vertex unit is identical to nv30's, for which code already exists). The exact card to work on is to be determined by what hardware the student owns.

Mentor: StephaneMarchesin

Gallium3D frontend for video decoding

Currently, real-time video decoding of high-definition streams can only be achieved through hardware acceleration. Gallium3D is a new framework allowing hardware-accelerated OpenGL in free software drivers; however, it is not limited to OpenGL: new frontends can expose additional APIs. The purpose of this project is to add a card-agnostic way of supporting hardware video decoding through the gallium3D framework, as a new frontend.

Steps for this include choosing relevant APIs (XvMC and VA-API are among the possibilities) and implementing them in terms of the intermediate gallium3D format. Particular care should be taken with the entropy decoding stage (the first step of most video decoding pipelines), since it does not map well to a GPU (except where dedicated hardware is available) and will probably have to be done as optimized C or assembly. Care should also be taken to keep the decoding components modular, since video codecs often use similar pipelines with slight changes.
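To see why entropy decoding stays on the CPU: it is built on serial, bit-granular reads of the kind shown in the minimal MSB-first bit reader below. This is only an illustrative sketch; the struct and function names are invented and are not any codec's or gallium3D's actual API.

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal MSB-first bit reader, the primitive that Huffman/CAVLC-style
 * entropy decoders are built on. Bounds checking omitted for brevity. */
struct bitreader {
    const uint8_t *buf;
    size_t pos; /* current bit position from the start of buf */
};

/* Read n bits (n <= 24) MSB-first, returned right-aligned. Each
 * iteration depends on the position left by the previous one, which is
 * exactly the serial dependency that makes this a poor fit for a GPU. */
static uint32_t read_bits(struct bitreader *br, unsigned n)
{
    uint32_t v = 0;
    for (unsigned i = 0; i < n; i++) {
        size_t byte = br->pos >> 3;
        unsigned bit = 7 - (br->pos & 7);
        v = (v << 1) | ((br->buf[byte] >> bit) & 1);
        br->pos++;
    }
    return v;
}
```

A real decoder layers variable-length code tables on top of this, so the length of each read depends on the bits just consumed; that data dependence is what dedicated decode hardware short-circuits.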

Implement render operations as a gallium3D frontend

Currently, drivers that want to implement render acceleration need to implement it separately in the DDX. This code is generally duplicated from the 3D driver, since texture operations are the most common method for doing blends and transforms in hardware. The purpose of this project is to add a card-agnostic way of supporting hardware render operations through the gallium3D framework. Gallium3D is a new framework allowing hardware-accelerated OpenGL in free software drivers; however, it is not limited to OpenGL: when working on the frontend side, one can support additional APIs that map to the card's pipeline.

Implementation would include support for 3D blits and fills with transformed coordinates, and blends implementing the Porter/Duff compositing operators.
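For reference, the Porter/Duff "over" operator that such blends implement can be sketched in software on premultiplied 8-bit RGBA as below. The function names and rounding scheme are illustrative, not part of any gallium interface; on hardware this maps to a blend state of roughly src + (1 - src_alpha) * dst.

```c
#include <stdint.h>

/* Multiply two 8-bit channels as fractions of 255, with rounding. */
static uint8_t mul8(uint8_t a, uint8_t b)
{
    uint32_t t = (uint32_t)a * b + 128;
    return (uint8_t)((t + (t >> 8)) >> 8);
}

/* Porter/Duff "over" on premultiplied ARGB packed as 0xAARRGGBB:
 * result = src + (1 - src.alpha) * dst, applied per channel. */
static uint32_t pd_over(uint32_t src, uint32_t dst)
{
    uint8_t inv = 255 - (uint8_t)(src >> 24);
    uint32_t out = 0;
    for (int shift = 0; shift < 32; shift += 8) {
        uint8_t s = (src >> shift) & 0xff;
        uint8_t d = (dst >> shift) & 0xff;
        out |= (uint32_t)(s + mul8(inv, d)) << shift;
    }
    return out;
}
```

An opaque source replaces the destination, and a fully transparent source leaves it unchanged; the other Porter/Duff operators differ only in which per-channel factors are applied to src and dst.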

Improve support for AMD/ATI cards

While there are already drivers for most AMD/ATI hardware there are several features that are not yet taken advantage of. Some possible ideas include: support for new extensions, support for private back/depth buffers, better use of GART space, etc.

Mentor: AlexDeucher