Current 3D cameras for the consumer market primarily consist of two low-resolution CCD sensors and lenses at fixed positions. Devices with additional depth sensors are often restricted to certain hardware and are too bulky for easy mobility. The sensors of stereoscopic cameras share a single bus, and the raw data is interleaved before transmission. With limited bandwidth and only the parallax effect available for spatial calculations, these devices are unsuitable for many applications.
Fast object movements and the high computational cost of 3D calculations often prevent real-time operation. If the purpose of the recording is purely illustrative, an approximation of the depth data should be sufficient, and monoscopic devices should suffice for such a technique. Below I describe a method to “fake” the missing depth information by assuming that the captured objects resemble spheres.
Using the information from an edge-detection pass, tiles are transposed and mapped onto a sphere surface. The sphere z-coordinate augments the fragment-plane position. Expensive and unreliable screen-space algorithms, such as determining light-source positions and shadow lengths, are thereby avoided.
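As a minimal sketch of that idea (the function and parameter names are my own and not taken from the actual shader), the z-coordinate of a fragment inside a tile can be derived from the sphere equation:

// Sketch: lift a 2D tile position onto a sphere surface.
// uv is the fragment position relative to the tile center, remapped to [-1, 1].
float sphereZ(vec2 uv, float radius)
{
    // Clamp the distance from the tile center to the sphere radius.
    float d = min(length(uv), radius);
    // Sphere equation x^2 + y^2 + z^2 = r^2  =>  z = sqrt(r^2 - d^2).
    return sqrt(radius * radius - d * d);
}

The resulting z-value is then added to the fragment-plane position of the tile.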
Example: Anaglyph
Color filters are used to present different images to the left and right eye. This method does not require any advanced projection technique; it is sufficient to transpose and colorize the original image:
To enhance the effect, one image copy is moved in front of the virtual focus (the center of the small sphere) and another behind it. Without the original image, the result can be viewed here:
It is noteworthy that this method does not create any real 3D data; the spatial impression is produced solely by the brain.
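As a rough illustration, a compositing fragment shader for this effect could look like the sketch below; the uniform names (uImage, uShift) and the red/cyan channel split are my assumptions, not the original code:

#version 330 core

// Hypothetical anaglyph compositing pass (names are assumptions).
uniform sampler2D uImage;  // original camera image
uniform vec2      uShift;  // horizontal offset of the two copies
in  vec2 vUV;
out vec4 fragColor;

void main()
{
    // One copy in front of and one behind the virtual focus:
    // sample the image shifted in opposite directions.
    vec3 nearCopy = texture(uImage, vUV - uShift).rgb;
    vec3 farCopy  = texture(uImage, vUV + uShift).rgb;
    // Colorize: red channel from one copy, green and blue from the other.
    fragColor = vec4(nearCopy.r, farCopy.g, farCopy.b, 1.0);
}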
Example: Normal Mapping
As a next step, the edge information can be used to simulate a normal map. In this example a simple Sobel operator is applied to identify the edge direction:
The sphere z-coordinate is used to calculate the slope of the normal:
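The two steps could be combined roughly as in the following sketch; the texture, uniform and helper names as well as the luminance-based Sobel are assumptions on my part:

// Hypothetical edge/normal sketch (names are assumptions).
uniform sampler2D uImage;
uniform vec2      uTexel;   // 1.0 / image resolution

// Luminance of a single sample.
float lum(vec2 uv) { return dot(texture(uImage, uv).rgb, vec3(0.299, 0.587, 0.114)); }

vec3 fakeNormal(vec2 uv)
{
    // 3x3 Sobel operator: horizontal and vertical luminance gradients.
    float gx = lum(uv + uTexel * vec2( 1.0, -1.0)) + 2.0 * lum(uv + uTexel * vec2( 1.0, 0.0)) + lum(uv + uTexel * vec2( 1.0, 1.0))
             - lum(uv + uTexel * vec2(-1.0, -1.0)) - 2.0 * lum(uv + uTexel * vec2(-1.0, 0.0)) - lum(uv + uTexel * vec2(-1.0, 1.0));
    float gy = lum(uv + uTexel * vec2(-1.0,  1.0)) + 2.0 * lum(uv + uTexel * vec2(0.0,  1.0)) + lum(uv + uTexel * vec2( 1.0,  1.0))
             - lum(uv + uTexel * vec2(-1.0, -1.0)) - 2.0 * lum(uv + uTexel * vec2(0.0, -1.0)) - lum(uv + uTexel * vec2( 1.0, -1.0));

    // Treat the gradient as the x/y slope and take the remaining length
    // as the sphere z-coordinate, so flat areas face the viewer.
    float z = sqrt(max(1.0 - gx * gx - gy * gy, 0.0));
    return normalize(vec3(-gx, -gy, z));
}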
Example: Displacement Mapping
Finally, the data can be applied as a displacement map. For this technique it is crucial that the main direct light source is the monitor screen; otherwise the projected 3D head gets bumpy:
Now it also becomes obvious that stereoscopic information is missing: fragments cannot be located above other fragments, and angles above 90 degrees do not exist. Unless you want to set up a 360-degree camera grid, this limitation is unavoidable.
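For illustration, the displacement step could be sketched as a vertex shader like the one below; the attribute and uniform names, as well as treating 0.5 as the zero level, are my assumptions:

#version 330 core

// Hypothetical vertex shader for the displacement step (names are assumptions).
uniform sampler2D uHeight;     // height map derived from the camera image
uniform float     uAmplitude;  // displacement strength
uniform mat4      uMVP;

in vec3 aPosition;
in vec3 aNormal;
in vec2 aUV;

void main()
{
    // Push the vertex along its normal by the image-derived height
    // (0.5 is treated as the zero level, an assumption on my part).
    float h = textureLod(uHeight, aUV, 0.0).r - 0.5;
    vec3 displaced = aPosition + aNormal * h * uAmplitude;
    gl_Position = uMVP * vec4(displaced, 1.0);
}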
Bonus Example: Depth Field Projection
Inspired by a WebGL demo from Stephane Cuillerdier, I created a depth field projection. The scene portal (instead of a deferred step, as in the original) is split into a physically based shaded part (left side) and an unaltered depth-data part:
The ray-marching algorithm has a very high GPU cost but is otherwise simple to program. As usual, the light path is reversed (photon mapped) and followed from the eye position until it collides with the surface reconstructed from the camera image.
#define STEPS 50

// March from the ray's entry point to its exit point through the portal volume.
vec3 stepVec = (exitPos - entryPos) / float(STEPS);
float h = 0.0;
for(int i = 0; i < STEPS; i++)
{
    vec3 p = entryPos + stepVec * float(i);

    // Perpendicular distance of the sample point to the back plane.
    float pDist = dot(back[0] - p, vsNormal);

    // Project the sample point onto that plane and look up the image height there.
    vec3 surfPos = p + vsNormal * pDist;
    h = pow(surfaceHeight(surfPos), clientCamDispAmp.x) * clientCamDispAmp.y;

    // Stop as soon as the displaced surface is reached.
    if(h >= pDist)
        break;
}
The uniform clientCamDispAmp can be used to manually adjust the surface position; entryPos and exitPos are the intersection points of the ray with the portal. The height data calculated from the image is averaged, so that a value of 0.5 corresponds to height zero, and each value is also compared against its neighbors.