COOL Version 1.22 / Wayland Support

I submitted version 1.22 of COOL to the corresponding web stores for OS X and Linux. This version fixes a bug with high-resolution controllers whose analog sticks report 16-bit (short) instead of 8-bit values.
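The practical difference shows up when normalizing axis values: using the 8-bit range on a 16-bit stick clips almost all of the travel. A minimal sketch (the helper name is hypothetical, not COOL's actual code):

```c
/* A 16-bit analog-stick axis spans [-32768, 32767]; an 8-bit one
 * only [-128, 127]. Normalizing with the wrong range either clips
 * or compresses the stick travel. */
static float axis_to_float_16(short raw)
{
    float v = raw / 32767.0f;
    return v < -1.0f ? -1.0f : v;   /* clamp -32768 to exactly -1 */
}
```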

Wayland Support

An experimental Wayland version is now also available:

Steps to compile a Mesa application for Wayland


The following libraries are necessary:

  • libwayland-client
  • libwayland-server
  • libwayland-egl
  • libwayland-cursor
  • libxkbcommon
  • libdl

The available wayland-egl library only supports a subset of the outdated Khronos EGL 1.4 specification. The missing functions can be found in the original Mesa library libEGL.

In a GNU makefile, the libraries can be added in the following way:
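A sketch of the corresponding linker flags (target and object names are placeholders; the library list is the one above):

```makefile
# Link the Wayland client stack; -ldl for dynamic loading of EGL entry points
LDLIBS += -lwayland-client -lwayland-server -lwayland-egl \
          -lwayland-cursor -lxkbcommon -ldl

myapp: main.o
	$(CC) -o $@ $^ $(LDLIBS)
```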


To compile the latest GLFW version from GitHub with Wayland support, it was necessary to add several patches from the Wayland patchwork.

For better debugging I added several lines of code to the library.

Context Sharing

To use OpenGL in a multi-threaded application, the EGL specification defines shared contexts. A shared context was created like this:
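The stripped listing boiled down to passing the first context as the share_context argument of eglCreateContext. A sketch (the display/config setup is omitted and the function name is hypothetical):

```c
#include <EGL/egl.h>

/* dpy and cfg come from the usual eglGetDisplay/eglChooseConfig setup. */
EGLContext create_shared_pair(EGLDisplay dpy, EGLConfig cfg, EGLContext *worker)
{
    static const EGLint attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 3, EGL_NONE };
    /* The third argument names the context to share objects with
     * (EGL_NO_CONTEXT means no sharing). */
    EGLContext main_ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, attribs);
    *worker = eglCreateContext(dpy, cfg, main_ctx, attribs);
    return main_ctx;
}
```

The worker context is then made current on its own thread via eglMakeCurrent().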

Interestingly, everything but texture buffers could be shared between contexts. Texture buffers have been core since version 3.1 of OpenGL. It remains to be explained why a shared context can create, but not bind, a buffer against a GL_TEXTURE_BUFFER target.

Because of these limitations the Wayland build remains experimental.

Real-Time pseudo 3D projections from webcam recordings

In this article I describe a technique for real-time pseudo-3D projections from monoscopic webcam recordings.

Current 3D cameras for the consumer market primarily consist of two low-resolution CCD sensors and lenses in fixed positions. Devices with additional depth sensors are often restricted to certain hardware and are too bulky for easy mobility. The device sensors of stereoscopic cameras share a single bus, and the raw data is interleaved before transmission. With limited bandwidth and only the parallax effect available for spatial calculations, these devices tend to be unsuitable for many applications.
Fast object movements and the high computational cost of 3D calculations often prevent real-time operation. If the purpose of the recording is only illustrative, an approximation of the depth data should be sufficient, and monoscopic devices should suffice for this technique. Below I describe a method to “fake” the missing depth information under the assumption that the captured data resembles spheres.
With the information from an edge-detection pass, tiles are transposed and mapped onto a sphere surface. The sphere z-coordinate augments the fragment-plane position. Expensive and unreliable screen-space algorithms, like determining light sources and shadow lengths, are thereby avoided.

Example: Anaglyph

Color filters are used to create different images for the left and right eye. This method doesn’t require any advanced projection; it is sufficient to transpose and colorize the original image:
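A red/cyan anaglyph then reduces to channel selection: the red channel comes from the left-eye copy, green and blue from the right. A per-pixel sketch (the type and function names are hypothetical):

```c
typedef struct { unsigned char r, g, b; } rgb_t;

/* Classic red/cyan anaglyph: red from the (shifted) left copy,
 * cyan (green + blue) from the right copy. */
static rgb_t anaglyph(rgb_t left, rgb_t right)
{
    rgb_t out = { left.r, right.g, right.b };
    return out;
}
```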


To enhance this effect, it is necessary to move one image copy in front of the virtual focus (the center of the small sphere) and another behind it. Without the original image the result can be viewed here:

It is noteworthy that this method doesn’t create any real 3D data and the processing is solely done by the brain.

Example: Normal Mapping

As a next step, the edge information can be used to simulate a normal map. In this example a simple Sobel operator is applied to identify the edge direction:


The sphere z-coordinate is used to calculate the slope of the normal:

Example: Displacement Mapping

Finally, the data can be applied to a displacement map. For this technique it is crucial that the main direct light source is the monitor screen; otherwise the 3D head projection gets bumpy:


It also becomes obvious that stereoscopic information is missing: fragments cannot be located above other fragments, and angles above 90 degrees don’t exist. Unless you want to set up a 360-degree camera grid, this limitation is unavoidable.

Bonus Example: Depth Field Projection

Inspired by a WebGL demo from Stephane Cuillerdier, I created a depth-field projection. The scene portal (instead of a deferred step as in the original) is segmented into a physically based shaded part (left side) and an unaltered depth-data part:


The ray-marching algorithm has a very high GPU cost but is otherwise simple to program. As usual, the light path is reversed (photon mapping) and followed from the eye position until it collides with the surface from the camera image.

The uniform clientCamDispAmp can be used to manually fix the surface position; entryPos and exitPos are the ray/portal intersections. The height data calculated from the image is averaged: 0.5 corresponds to height zero, and each value is also compared against its neighbors.
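The loop behind that description can be sketched as a fixed-step march from entryPos to exitPos, stopping where the ray drops below the image-derived height (entryPos/exitPos and the 0.5 zero level come from the text above; everything else, including the flat example height, is hypothetical):

```c
typedef struct { float x, y, z; } vec3;

/* March from entry to exit in n steps; height() returns the
 * image-derived surface height, where 0.5 means height zero. */
static int ray_march(vec3 entry, vec3 exit, int n,
                     float (*height)(float, float), vec3 *hit)
{
    for (int i = 0; i <= n; ++i) {
        float t = (float)i / n;
        vec3 p = { entry.x + t * (exit.x - entry.x),
                   entry.y + t * (exit.y - entry.y),
                   entry.z + t * (exit.z - entry.z) };
        if (p.z <= height(p.x, p.y) - 0.5f) { *hit = p; return 1; }
    }
    return 0; /* ray left the portal without a surface hit */
}

/* Example height field: a flat surface at height zero (value 0.5). */
static float flat_height(float x, float y) { (void)x; (void)y; return 0.5f; }
```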

Further improvements

A better scene prediction could also improve the quality of the approximated 3D space. For example, reliable object/head detection would be desirable. This is typically done by down-sampling the original image, followed by a pattern-recognition step:
If a sufficient number of model patterns are matched on the image above, a head could be positively identified.

Simplified Environment Creation for 3D Scenes


To reduce the cost of environment generation, I decided to discard the elaborate physically based (PBR) texturing process (which requires from 4 up to 6 textures per material) and replace it with a more synthetic approach where additional material information is calculated in real-time. So texturing can be done with a single colormap again:
To avoid monotony, textures can be augmented by texture blending. This is typically realized with an additional UV map and a blend map:

Alternatively, it is possible to use tessellated geometries or calculated curvatures (e.g. for puddles, rust or mold) to generate pivot points. At these points, fractal noise can be set up to generate blending between multiple textures. Here is an example of my tessellating asteroid shader:

Dynamic information for environment objects can also be mapped from vertex colors or texture buffers (I don’t use this feature in my next project).

Example: Normal Mapping

The primary bump information is calculated from a mixture of saturation and curvature (tech-demo graphics):
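The saturation part follows the usual HSV-style definition, chroma over maximum; a sketch:

```c
/* HSV-style saturation: (max - min) / max; 0 for pure gray or black. */
static float saturation(float r, float g, float b)
{
    float mx = r > g ? (r > b ? r : b) : (g > b ? g : b);
    float mn = r < g ? (r < b ? r : b) : (g < b ? g : b);
    return mx > 0.0f ? (mx - mn) / mx : 0.0f;
}
```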

The distance and the viewing angle are used to create the first simple normal map:

Additional sinusoids are mapped to enhance certain aspects of the texture, in this example mid-size coarseness:

As with color toning, complementary filters are added to create a realistic appearance:

Update (11-4): Added light tweaks:

Linux OpenGL per frame micro benchmarks


For my upcoming game release I ran some benchmarks on varying hardware. For this post, two GPUs from different vendors were chosen; both support OpenGL 4.4+ on Linux with their latest drivers. All examples are rendered with my “yet another graphics library” (yagl). The time recording was done with atomic counters.

Example 1: single-object rendering with PBR and post-processing stages:


Benchmarks with enabled vsync

The measured data is displayed in seconds:

One hundred frames segment

The blue line marks the 60FPS real-time margin:

Benchmarks without vsync


One hundred frames segment


Example 2: in-game, with clock, pick, shadow, physics, and occlusion stages:


Benchmarks with enabled vsync


One hundred frames segment


Benchmarks without vsync


One hundred frames segment


Fast direct and deferred atmosphere rendering with GLSL 4.1+


Volumetric rendering is typically slow: sending ray-casts through voxel fields, up-sampling the results and merging them with the final image is costly. I present two methods with reasonable computation times and believable results.

Direct Rendering


The direct rendered atmosphere is based on billboard sprites. This means that many small 2D images are drawn to fake the effect of smoke particles. I’m using several different textures to create these images.

(Compressed) Irradiance maps

Irradiance maps are used to receive the emissions from indirect (non-scene-graph) light sources. They store the Lambertian part of a physically based calculation.


Noise maps are used to simulate randomness. However, the values (the shades of gray 🙂) aren’t completely independent and can be used to simulate chaotic phenomena like clouds or terrain.


To calculate the likelihood of entangled events, the Gauss curve can be used.

Texture Creation

Usually, all textures are created in real-time by my library. For example, the irradiance map is part of a light probe and created with:

Light-probes are part of the scene graph and are bound to the nearest objects.

Also, all images are rendered in HDR color range (therefore the images presented in this post are gamma corrected).
I added a small plotter library for pattern generation. The Gauss curve was created like this:

It is also possible to use programming languages like Julia or GNU Octave for texture generation.

The Vertex Shader

As already mentioned the atmosphere particles are rendered as camera-view aligned planes. I learned a nice trick from Datenwolf to get rid of the annoying normal|eye-direction alignment:

The mesh rotation and scale are extracted from the projection matrix and reversed.
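In essence, the trick overwrites the rotation part of the matrix with a uniform scale before the corner offsets are applied, so the rotation cancels and the quad stays camera-aligned. A CPU-side sketch of the idea on a column-major 4×4 matrix (names hypothetical):

```c
/* Replace the upper-left 3x3 (rotation/scale) of a column-major 4x4
 * matrix with a uniform scale; the translation column survives, so a
 * quad transformed by it keeps its position but always faces the view. */
static void make_billboard(float mv[16], float scale)
{
    for (int c = 0; c < 3; ++c)
        for (int r = 0; r < 3; ++r)
            mv[c * 4 + r] = (c == r) ? scale : 0.0f;
}
```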

Alternatively, if you have screen-space access from your engine you can directly use these coordinates:


The Normal

The plane normal can immediately be derived from the texture coordinates:

The Fragment Shader

First, I limit the surface to a sphere:

With the surface normal and camera position it is already possible to render a virtual sphere (the volume where the atmosphere simulation starts):


To calculate the likelihood of a light-scatter event, the Gauss value is used:

Finally, the color is created with a Phong ambient-diffuse calculation:
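That boils down to the standard Lambert term added to a constant ambient contribution; a sketch (scalar intensity only, names hypothetical):

```c
/* Phong ambient + diffuse for one light; n and l must be normalized.
 * ka and kd are the ambient and diffuse material coefficients. */
static float ambient_diffuse(const float n[3], const float l[3],
                             float ka, float kd)
{
    float ndotl = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];
    return ka + kd * (ndotl > 0.0f ? ndotl : 0.0f);
}
```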

Deferred Rendering

The advantage of deferred over direct rendering is the possibility of using screen-space ray-casting, and with it depth fields and ray/object intersections. This allows the atmosphere to interact directly with all other visible scene elements; the depth and distances of objects can be taken into account.

Example of a deferred rendered atmosphere

In this example the clouds are partly occluded by the sphere. Light scattering is visible at the sphere borders:

The Vertex Shader

The vertex shader just covers the screen plane; the texture coordinates are normalized:

If necessary, the inverse projection (GLSL: inverseMatrix) can be used to calculate the world-space positions of the screen.

The Fragment Shader

Instead of aligned planes, the fragment shader now calculates the density of a depth field. Spheres are described by their center position and dimensions.

If you know the PBRT Raytracer you will immediately recognize my implementation of a sphere:

Screen-Space refractions


If a ray hits an atmo-sphere, its properties can be used to calculate a refraction:

Finally, this result can be mixed with a volume noise function:

Comparison: Quadric spherical harmonic expansion irradiance maps vs. analytic tangent irradiance environment maps for anisotropic surfaces

For anisotropic materials, Mehta, Ramamoorthi et al. developed an alternative rendering method for the otherwise identical compact irradiance map representation.

Example: Left compact VS. Right tangent irradiance


This technique is already noticeable on materials with little glossiness:

The GLSL code for both calculations can directly be derived from the formulas presented in the corresponding papers.

The Vertex Tangent Vector

If your model representation already contains vertex tangents (and your implementation supports enough vertex attribute pointers), you can adjust them to your normal/bump/height map just like the normal vector:

There are several variants of tangent-space normal maps where additional information is stored in the blue and alpha channels.

If the tangent is missing

If you don’t have access to correctly precalculated tangent vectors, the situation is a little bit tricky. In contrast to normals, tangent vectors have a so-called handedness. This handedness results from the alignment of the tangent space to the UV texture space:

If the handedness flips (usually at texture seams), ugly “tangent space distortions” become visible (of course, only if your illumination technique depends on it).

In my efforts to create universally usable GPU applications, I’m using the following code in the vertex shader:

where the rotation can be written as:

YAGL V2.1.7 / D.V0.5

Changes D V0.5:

Updated Controls

  • Maneuverability is influenced by atmospheric pressure.
  • Aerodynamic drag is now simulated.
  • The atmosphere simulation now includes turbulence.
  • The hull temperature is now calculated.
  • The ship’s center of gravity has moved to the back for amplified flight instability.

The ship’s minimum speed remains above Earth’s escape velocity.

Changes YAGL 2.1.7

Improved shading:

Ray marching

For advanced image effects a local view-space ray-march is now used.

View frustum optimized shadows


Update: 2013/10/08: Framerates

The actual frame rates with v-sync disabled:
1080p fullscreen: averaging 180 FPS
720p windowed resizable: averaging 260 FPS