How to use grain in stereoscopic productions

The use of grain is probably one of the most overlooked topics in stereoscopic productions.
And yet – it can most certainly enhance the perception of a stereoscopic movie. So I thought I would take the time to write about my experiences with grain management and give you some examples to see and judge for yourself.
So, why did I even come to the point of caring about grain?
If you have read any of the articles on this blog, you might have noticed that I made a stereoscopic movie with the Panasonic AG 3DA1 twin-lens camera to gain some experience with stereoscopic post-production techniques. Since renting the camera was pretty expensive for a student, I wasn't able to get an extra SDI recording device, which is why I had to rely on the built-in AVCHD recorder. I actually thought it would not make a huge difference, but after running the clips through colour correction the compression artifacts were pretty obvious. So I decided to go back to the start, denoise the plates as well as I could, and then add the noise back on top.

Below you can see the denoised plate on the left and the original image on the right.
To view the images in this article at full size, please click on them.

Adding grain – or rather a noise pattern – actually isn't that big of a deal. But how would you do that in a stereoscopic movie? Here are my experiences, along with a lot of pictures – so grab your anaglyph glasses and compare for yourself!

For the sake of making the effect more obvious, I have exaggerated the noise pattern massively.
The effect would otherwise be impossible to see with an anaglyph viewing method.

In the first image below you can see what a normal image looks like without any grain treatment. The noise patterns differ between the two views and thus convey no depth information. Because there are no correlating patterns, our brain can't fuse them, which produces eyestrain and an uncomfortable feeling that is all the more obvious in this exaggerated version.

In the next two pictures you can see a planar noise pattern. The noise pattern acts like a window that can sit either on the screen plane, allowing elements to break through (left image), or in front of the closest object, to minimize false depth interpretations (right image).
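The planar approach boils down to giving both eyes the same noise plate and shifting one of them horizontally; the size and sign of the shift decide where the plane sits in depth. Here is a minimal sketch of the idea in Python, using plain lists of pixel values instead of a real image format (the function names are my own, not from any compositing package):

```python
import random

def make_noise(width, height, seed=1):
    """One monochrome noise plate, shared by both eyes so the patterns correlate."""
    rng = random.Random(seed)
    return [[rng.random() for _ in range(width)] for _ in range(height)]

def shift_right(img, px):
    """Shift every row horizontally by px pixels (wrapping, for simplicity)."""
    w = len(img[0])
    return [[row[(i - px) % w] for i in range(w)] for row in img]

def planar_stereo_noise(width, height, parallax_px):
    """Build a correlated left/right noise pair sitting on one depth plane.

    parallax_px = 0 puts the noise plane on the screen plane; a negative
    value gives negative parallax, floating the plane in front of the
    screen (e.g. in front of the closest object in the scene).
    """
    noise = make_noise(width, height)
    return noise, shift_right(noise, parallax_px)
```

Because the two plates are identical apart from one uniform horizontal offset, the brain can fuse them, and the whole noise layer snaps to a single depth.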

The most elegant method, I find – and also the most labor-intensive one – is to map the noise pattern to the proper depth positions of the objects in the scene. This can be done with a disparity map. A disparity map stores the disparity of corresponding pixels in a single 32-bit float image. This image can then be used to shift the pixels of the noise pattern accordingly to generate a depth perception. The effect is comparable to the images in the "Magic Eye" books that were popular in the early nineties. Below you can see a picture of a disparity map as well as the stereoscopic noise pattern and the combined picture.
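In principle, the depth-mapped variant is just a per-pixel version of the same horizontal shift: each noise pixel is displaced by the disparity sampled at its position. A rough Python sketch of the idea follows – a naive forward warp with a crude hole fill, nothing like a production-quality implementation:

```python
def warp_noise_by_disparity(noise, disparity):
    """Build the second eye's noise plate by shifting each pixel horizontally
    by the disparity measured at that pixel.

    `noise` is the left-eye noise plate; `disparity` holds the per-pixel
    horizontal offsets (in pixels) read from the 32-bit float disparity map.
    Pixels left uncovered by the forward warp keep the original noise value,
    which is a crude hole fill; real tools splat and filter instead.
    """
    h, w = len(noise), len(noise[0])
    out = [row[:] for row in noise]              # fallback: unshifted noise
    for y in range(h):
        for x in range(w):
            nx = x + int(round(disparity[y][x]))
            if 0 <= nx < w:
                out[y][nx] = noise[y][x]
    return out
```

The left eye keeps the original plate; pairing it with the warped plate makes the noise follow the depth of the underlying objects, just like the dot patterns in a Magic Eye image.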

The disadvantage – besides the huge amount of time it takes to generate such a stereoscopic noise pattern – is the limited accuracy of disparity maps. Parts of the image that don't contain enough information for the algorithm to find corresponding points will result in false depth interpretations.

Another important question is: do we even need grain in stereoscopic movies? Digital projection methods in movie theatres are already taking away the once-so-appreciated film look; 48 fps and 4K projection now seem to be the way to go. But there are still people who would rather shoot on film and convert to stereo 3D later on – like Men in Black III.

I am excited to see where stereoscopic cinema is going!

Panoramic CG Images

When I put up this website about two weeks ago, I was thinking about taking a picture with a wide aspect ratio to use as a cover picture. Now… since I am doing VFX work, I decided to shoot a panoramic HDR image of my hometown Duesseldorf in autumn, enhance it with a few lines of a ground plane, and have you – the audience – think about what is actually possible with an HDRI lighting setup. Also… I thought it looked kinda fancy.

Stitching an HDRI actually isn't that big of a deal. But have you ever tried to render a 180- or 360-degree image from your 3D application? I never had… and it took me quite a while to figure out how to achieve that look. First of all… there are some camera shaders out there. But they are for older versions of Maya, you have to compile them yourself, and people on the forums are complaining that they cannot get them working. Besides… I am on a Mac, and I guess there are not many people to ask about that.

But after playing around with some lens distortion (OK… it was late), I remembered some tutorials by Frank Rueter, available on www.thefoundry.co.uk, about spherical transformations. So what I ended up doing was basically taking three "90° field of view" cameras, each rotated 90° from the next. In Nuke I then used a spherical transform node and converted the sides of the panoramic cube into a 180° picture. Simples! If any of you know a better solution or have found different ways to render panoramic CG images, please feel free to leave a comment.
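At its core, the trick is just a change of coordinates: every pixel of the lat-long output corresponds to a direction on a sphere, and that direction falls onto exactly one of the three 90° camera faces. Here is a little Python sketch of that per-pixel lookup – the face names and axis conventions are my own, and this is only the math behind a spherical transform, not an actual Nuke script:

```python
import math

def latlong_to_face(lon_deg, lat_deg):
    """For a direction inside a 180-degree lat-long panorama, find which of
    three 90-degree-FOV cameras ("left", "front", "right") sees it, plus the
    normalised image-plane coordinates (-1..1 at the frame edges).
    """
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    # direction on the unit sphere; z points along the front camera's axis
    x = math.sin(lon) * math.cos(lat)
    y = math.sin(lat)
    z = math.cos(lon) * math.cos(lat)
    if z >= abs(x):                   # within +/-45 degrees of the front axis
        return "front", x / z, y / z
    if x > 0:                         # right-hand camera, looking along +x
        return "right", -z / x, y / x
    return "left", z / -x, y / -x     # left-hand camera, looking along -x
```

Evaluating this for every output pixel and sampling the chosen camera's render at the returned coordinates is essentially what the spherical transform does for you in one node.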

Let's talk about stereo conversion

One and a half years ago I finished my studies in audio and video engineering with a thesis about stereoscopic matchmoving and stereoscopic post-production techniques. While the thesis was basically just about the technical aspects and challenges of building a robust stereo pipeline between matchmoving, compositing and 3D, it took me almost another year to finish the VFX shots for the movie. Now that I am about to release the darn thing, I took some time to evaluate one little shot that I had always taken for granted… but… after a few months of not working on it, I started to hate the little rascal.

The movie was shot on a Panasonic AG 3DA1, on which it is not possible to change the interocular distance. So I decided to avoid extremely wide or close shots. But in one shot at the end of the movie the camera had to approach the main hero very closely, so my two views ended up with a massive parallax shift. I tried to bring them together as closely as possible, and on a small monitor it somehow works. But let's be honest… that wasn't a proper solution. So I eventually decided to pull a conversion. Conversions are actually a very tedious process: you have to roto like hell to get a desirable amount of control over your scene.

But… with this scene I was really lucky and got the work done in only a couple of hours. Lucky, because I had a stereo camera. Even though I wasn't able to use my second view directly, I could still use it as a witness camera for the matchmoving process, getting a more accurate result thanks to the second camera position and the resulting parallax.

Another decision that saved me a couple of hours was not to roto the whole shot, but only one frame, and project it onto static geometry. Even though there was some subtle motion, I decided to go the easy way at first and check whether I could get away with that approach. The geometry was done in Maya and is really simple, with some overlapping faces to avoid holes during the camera movement. And boy… am I looking forward to working with Nuke 7 and its new modeling tools. Honestly… working with a fixed camera in Maya is hell, because you are unable to zoom into the viewport to make proper adjustments. Being able to shift one or two vertices right in the Nuke viewport would save so much time.

So… just some little tweaks and some paint fixes, and it's done. Voilà: nice stereo!

Feel free to compare for yourself!

The finished shot will be released during the next couple of weeks as part of the stereoscopic short movie "Industrie Dschungel".

Cheers, Stefan

Native S3D footage:

Converted Scene:

First entry of the geekfest blog

Hello everyone,

let's face it: visual effects work is full of challenges, and every now and then
artists face issues that drive them crazy, keep them up all night,
and make them do funny dances at 5 am after finally solving a problem.
This part of my blog is about telling geek stuff, showing work-in-progress
projects, and releasing plug-ins, gizmos, or whatever else I feel should be contributed or shouted out to the world. In any case… feel free to leave comments.