Many of the comments I’ve received in response to these Production Blog posts have included some very positive remarks about the behind-the-scenes details and processes I’ve attempted to convey. With that in mind, I wanted to share the latest computer rendering of the “Narrator” (aka Mr. Neil Gaiman) character from The Price, and break down just how it was put together.
As an artistic tool, computers are capable of creating truly astounding imagery; but as with all digital processes, you have to tell your box exactly what to do… and I do mean exactly. It is this aspect of CGI more than any other that causes those of us who attempt it to pull out the largest tufts of hair.
Exasperation. Frustration. Endless complication. In short, there are just a whole lot of “tions” to deal with!
So: the first step is to begin with an incredible digital model (please see Videoblog #04 for a peek at the superb work of Ryan Peterson). Looks fantastic, but now, how do you get the hair to look like, well, hair? And what about the skin?
For help with this highly specialized and technical artistry, I was lucky enough to find the talented (and generous) Michael Hoopes. Currently working hard on the video game Star Wars: The Old Republic for Bioware in the great state of Texas, Michael makes time to develop and test different shaders and to create render passes that he sends on to me to composite (or put together digitally). So, now you’re asking: just what-the-heck are shaders and render passes? Let’s take a look…
We’ll start with the skin. On the left is a rendered image called a “Diffuse Pass” (pass as in, the computer is going to have to make several “passes” or layers to generate all of the information needed to make the finished image). It’s pretty good, but you can see right away what I mean about getting the skin to look right. As it is, it appears a lot like a rubber mask; there is no depth, no translucency, none of the layered colors you can see when you look closely in the mirror at yourself. Since we all do this every day, each of us is an expert in detecting “fakeness” in these Computer Generated Images. We might not know exactly how to define it; we just know it looks wrong.
Some extremely intelligent people figured out that we needed to teach the computer how our skin reacts to light. Instead of just being reflected outright, light actually travels past the surface, bounces around inside, and then meanders out, causing the flesh to appear translucent. After giving themselves some well-deserved congratulations, they decided to call this effect Sub Surface Scattering.
To simulate this, Ryan painted not only the surface colors of Neil’s skin, but the colors of multiple layers beneath as well! Michael then developed shaders in the powerful 3D program Maya using a rendering system called Mental Ray. Shaders are a way of defining the properties of a particular material so the computer knows what to do with it. Michael used Ryan’s painted layers to create the different levels or depths of skin — you can see the “Sub Surface Front” and “Mid” layers above. I combined these on top of the base/diffuse layer (using Adobe After Effects to composite all of these different images into one) and adjusted the balance until I achieved the level of translucency I wanted.
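For the curious, the balancing act described above can be sketched in code. This is just a toy illustration in Python (using NumPy) of the general idea of adding weighted subsurface layers on top of a diffuse base; the tiny “images,” layer names, and blend weights here are made-up placeholders, not the actual shaders or After Effects settings used on the production.

```python
import numpy as np

def composite_sss(diffuse, sss_front, sss_mid,
                  front_weight=0.4, mid_weight=0.3):
    """Add weighted subsurface passes on top of the diffuse base,
    then clamp to the displayable 0..1 range. The weights are the
    'balance' knobs you adjust until the translucency looks right."""
    out = diffuse + front_weight * sss_front + mid_weight * sss_mid
    return np.clip(out, 0.0, 1.0)

# Tiny 2x2 RGB "images" standing in for the rendered passes.
diffuse = np.full((2, 2, 3), 0.5)
sss_front = np.full((2, 2, 3), 0.6)
sss_mid = np.full((2, 2, 3), 0.4)

result = composite_sss(diffuse, sss_front, sss_mid)
```

Cranking `front_weight` up pushes the skin toward waxy; pulling both weights down takes you back toward the rubber-mask look of the bare diffuse.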
Next, we need something to make the eyes look wet and glossy and the skin to have a little shine. Looking at the Reflection pass on the left, you can see all of those highlights against the black, which are again blended with the Diffuse pass in the center image. You have to balance/adjust each area to achieve the look you want (for example, too much on the skin will make it look oily — ick). On the right is what is called the Ambient Occlusion pass, which generates the dark areas on a surface that are created when light is being blocked by the structure and features of that surface. Different from just cast shadows, this image really helps to define the shape of things and makes the lighting look much more realistic.
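Compositing programs handle these blends with layer modes; a common recipe (and only a rough guess at what happens here, not the production’s actual node graph) is to “screen” the bright reflection highlights over the diffuse and then multiply by the ambient occlusion to darken the crevices. A minimal NumPy sketch, with placeholder values:

```python
import numpy as np

def screen(base, layer):
    """Screen blend: brightens the base wherever the layer has highlights,
    without ever pushing values past 1.0."""
    return 1.0 - (1.0 - base) * (1.0 - layer)

def apply_passes(diffuse, reflection, occlusion, refl_amount=1.0):
    """Screen the reflection highlights over the diffuse, then multiply
    by the ambient-occlusion pass. Dial refl_amount down if the skin
    starts looking oily."""
    lit = screen(diffuse, refl_amount * reflection)
    return lit * occlusion

diffuse = np.full((2, 2, 3), 0.5)
reflection = np.zeros((2, 2, 3))
reflection[0, 0] = 0.8            # one bright highlight, e.g. an eye
occlusion = np.full((2, 2, 3), 0.9)  # mostly unoccluded surface

shaded = apply_passes(diffuse, reflection, occlusion)
```

Because the occlusion pass multiplies the whole image, it darkens shapes everywhere light is blocked, which is exactly why it reads so differently from a hard cast shadow.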
We’re not finished yet; the Backscatter pass simulates the way light bleeds through the edge of a translucent object, like, say, a nose or a cheek. Looking at the mixed image above, you can see how the Backscatter helps define those areas and makes them seem more “fleshy.” The Depth map is a way of telling the computer how far away different areas of the model are from the camera. The way this pass is set up, the darkest objects are closest. In the final composite, I can use this information to affect which parts of the face are in focus, and create a shallow depth of field effect.
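The depth-of-field trick above can also be sketched: blur each pixel more the farther its depth value sits from the plane of focus. This toy Python version uses a crude per-pixel box blur on made-up data; a real compositor uses far more sophisticated lens blurs, and the depth convention (dark = close, as in the pass described above) is just mapped to a generic “distance from focus” here.

```python
import numpy as np

def depth_blur(image, depth, focus_depth, blur_strength=3):
    """Blur each pixel with a box whose radius grows with how far that
    pixel's depth value is from the in-focus depth."""
    h, w = depth.shape
    out = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            # radius grows with distance from the plane of focus
            r = int(round(blur_strength * abs(depth[y, x] - focus_depth)))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean(axis=(0, 1))
    return out

img = np.zeros((4, 4, 3))
img[1, 1] = 1.0                    # one bright pixel
in_focus = np.zeros((4, 4))        # everything at the focus plane
far_away = np.ones((4, 4))         # everything far from focus

sharp = depth_blur(img, in_focus, focus_depth=0.0)  # stays crisp
soft = depth_blur(img, far_away, focus_depth=0.0)   # smears out
```

The nice part of doing this in the composite rather than in the render is that you can slide the focus plane around after the fact without re-rendering anything.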
Finally, that famous mane of his. The Diffuse again provides the base image, while the Specular gives me control over the highlights and glossiness, and the Ambient Occlusion defines the shadows and the shapes.
Once all of these passes are ready, I add a background and then dig deep into my bag of compositing tricks… and you wind up with the image below (which you are welcome to download if you are so inclined).
Again, it is my sincere hope that this post will provide you with some insight into, and appreciation for, the work involved in creating the images I have been dreaming about for so very long; above all, I want to give you a tantalizing taste of things to come!!!