Tracking, and a catch-up.

Video

Well, looks like someone fell off the blogging wagon… Bit silly of me, as I’ve done an awful lot over the past couple of months and might well have remembered more of the techniques if I’d blogged about them at the time. However… what’s done is done. I’ll just have to make up for it now.

First of all, I’ve been learning about tracking. I created this reel for an application to a Matchmover/Tracker freelance job at a company that I really want to work for, but unfortunately the job was taken down a day before the reel was ready. Sod’s Law in action, yet not a completely wasted effort, as I now have a reasonable grasp of tracking techniques where previously I knew nothing.

01 – Camera Tracking

In this one I finally finished the “shoe” project from the first semester of the Master’s. It was in equal parts nice and mortifying opening the file for the first time in eight months: nice, because I realised I’ve learned so much since then; mortifying because I’ve learned so much since then and by my higher standards the file was a mess.

– The Displacement map didn’t work first time around, so I chucked it before the final render. I now know how Displacement maps work (32-bit depth only; really neat UVs; Vector Displacement maps where possible, because they are SO much more stable), so I set myself the challenge of getting it to work this time, which went fairly easily compared to last time (there’s a rough hook-up sketch at the end of this list).

– I cheated with the reflection on the table the first time by just flipping the shoe – it worked, but I wanted to do it properly this time as that particular method won’t hold when dealing with animated characters (unless you Cache Geometry, then flip it, but still… messy).

I was stuck on this one for days, even though I knew the answer would turn out to be really simple when I finally found it. And it was: going into the mip_matteshadow shader and ticking a box called “Reflection.”

I blame the naming conventions. It wouldn’t have been as hard if mip_matteshadow were renamed mip_DOES_EVERYTHING-NOT_JUST_SHADOWS. Maybe it’s just me…

– I also used Linear Workflow and sub-surface scattering when creating the lighting and shaders, which I didn’t last time.
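For future reference, the displacement hook-up mentioned above boils down to a tiny node network. Here is a generic maya.cmds sketch of it, not the actual shoe file; the map path and node names are placeholders:

import maya.cmds as cmds

# A 32-bit EXR driving a displacementShader, which plugs into the shading
# group (not the material). Path and node names are placeholders.
disp_file = cmds.shadingNode('file', asTexture=True, name='shoeDisp_file')
cmds.setAttr(disp_file + '.fileTextureName', 'sourceimages/shoe_disp_32bit.exr', type='string')
cmds.setAttr(disp_file + '.alphaIsLuminance', 1)

disp_shader = cmds.shadingNode('displacementShader', asShader=True, name='shoeDisp_shader')
cmds.connectAttr(disp_file + '.outAlpha', disp_shader + '.displacement')

# The shading group the shoe's material already lives in (placeholder name)
sg = 'shoe_matSG'
cmds.connectAttr(disp_shader + '.displacement', sg + '.displacementShader')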

Compared to that lot, actually tracking the scene in NukeX seemed relatively easy. I roto’d out the middle of the table as the markers seemed to be sliding around a lot there, then steadily deleted markers to get the Average Tracking Error below 1 (0.581, in this case), while making sure the total number of trackers stayed above the minimum 100 required for a stable track.

02 – Object Tracking

This was a nightmare! It was also my first time tracking with Autodesk Matchmover, which I’m told is used quite often in industry. There’s a lack of decent tutorials on Object Tracking, so I had to more-or-less wing it. The main difficulty was the busy background, along with the fact that the markers would often disappear during the sword’s rotation. I had eight tracks in all, a large portion of which were done manually because Matchmover couldn’t pick the markers out from the background. The sword asset came from the Digital Tutors Asset Library and was particularly useful because it glows, which meant the lighting accuracy mattered less. I did find and use a similar HDRI from HDR Labs, though.

03 – Planar Tracking

Compared to the other two this was very simple, and once I’d watched the tutorial on how it was done it only took about ten minutes. The tracking and replacement were done completely within NukeX.

So there you go – my tracking reel. I’ll be updating with more posts over the next few days, including Non-Organic Modelling, Multi-Tile Workflows, and my first ever VFX job…!

Bye for now,
S.

Mental Ray Proxy Depth Pass: A Workaround

Image


Working with proxies (and depth-of-field) as much as I have been recently, I’ve found it more and more frustrating that they do not work with the Mental Ray Pass system. Luckily for me, yesterday I was told of a very simple workaround for this:

1. Open the original model of the proxy. Apply a white Surface Shader to the model, and export this as a new proxy (don’t overwrite the old one!).

2. In the file containing the proxy geometry, “Save As…” a new file ([filename]_depth works for me). Under the Attributes of the proxy shapes, change the “Render Proxy” to the new “white surface” version (obviously this is much easier if the proxies have been Instanced, as then you only have to change it once for each Assembly file).

3. Under the Render Settings, change “Render Using” to “Maya Software.”

4. Under Maya Software > Render Options > Post Processing, click the checkered box next to “Environment Fog.” Then change “Render Using” back to “Mental Ray.”

5. In the Hypershade, find envFogMaterial (under the Materials tab). In its Attributes, change the Color to black and uncheck “Color Based Transparency.”

6. Use the Distance Tool to measure the distance from the camera to the furthest object from it. Type this number into the “Saturation Distance” box.

7. Hit Render… et voilà! A depth pass.
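And because I will absolutely forget the button order, here’s the gist of steps 1, 5 and 6 as a maya.cmds sketch (steps 2–4 are quicker in the UI). The envFogMaterial attribute names are written from memory, so treat them as assumptions and check the Attribute Editor first:

import maya.cmds as cmds

# Step 1: a plain white Surface Shader for the "depth" version of the proxy model
white = cmds.shadingNode('surfaceShader', asShader=True, name='proxyDepth_white')
cmds.setAttr(white + '.outColor', 1, 1, 1, type='double3')
white_sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name='proxyDepth_whiteSG')
cmds.connectAttr(white + '.outColor', white_sg + '.surfaceShader')
# ...assign this to the model, then export it as the new proxy.

# Steps 5 and 6: black fog colour, no colour-based transparency, and the
# saturation distance set to the camera-to-furthest-object measurement.
# Attribute names below are assumptions.
cmds.setAttr('envFogMaterial.color', 0, 0, 0, type='double3')
cmds.setAttr('envFogMaterial.colorBasedTransparency', 0)
cmds.setAttr('envFogMaterial.saturationDistance', 2000)  # replace with the Distance Tool value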

Massive thanks to the genius that is Pat Imrie for the tip.

EDIT: After having a couple of hiccups with this, there are a few more things I’d like to add (so I remember…)

– If the scene is built at a large scale and the fog isn’t working, change the Max “Fog Clipping Plane” value in the Attribute Editor to the same value as the “Saturation Distance.”

– If you’re using layers, MAKE SURE THE FOG NODE IS ON THAT LAYER!!
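A quick Script Editor check for that last point. The layer and fog node names here are just examples, not the real ones from my scene:

import maya.cmds as cmds

# Make sure the fog node is a member of the render layer used for the depth pass
cmds.editRenderLayerMembers('depth_layer', 'envFogLight1', noRecurse=True)
print(cmds.editRenderLayerMembers('depth_layer', query=True))  # list members to double-check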

Exterior Comping, Gnomon Underwater Scenes and Ed Whetstone

I have been putting together the lighting for the exterior scenes, with a little help from Gnomon Underwater Environments.

gnomonunderwater

Including:

  • Setting up “fake caustics” by attaching an image sequence to a directional light.
  • Creating ambient light by setting up a “dome” of directional lights with no shadowing, which gives a kind of hazy, desaturated effect (rough script sketch after this list).
  • [tutorial missing from DVD… no idea why] Setting up moving spotlight fog driven by the same image sequence I used for the caustics. I figured it out based on the provided final Quicktime; the most difficult part was working out how to attach a custom curve to control the falloff of the fog.
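Here’s the rough idea of the directional-light dome as a script, in case I ever need to rebuild it. The ring count, angles and intensity are my own guesses rather than the values from the DVD:

import maya.cmds as cmds

def build_light_dome(rings=3, lights_per_ring=8, intensity=0.08):
    """Aim low-intensity, non-shadowing directional lights inward from a dome."""
    grp = cmds.group(empty=True, name='ambientDome_grp')
    for r in range(rings):
        elevation = -20.0 - r * (70.0 / rings)        # tilt each ring further downward
        for i in range(lights_per_ring):
            azimuth = i * (360.0 / lights_per_ring)   # spread lights evenly around the dome
            shape = cmds.directionalLight(intensity=intensity)
            cmds.setAttr(shape + '.useRayTraceShadows', 0)   # no shadowing = soft, hazy fill
            cmds.setAttr(shape + '.useDepthMapShadows', 0)
            transform = cmds.listRelatives(shape, parent=True)[0]
            cmds.setAttr(transform + '.rotate', elevation, azimuth, 0)
            cmds.parent(transform, grp)
    return grp

build_light_dome()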

I have also been watching Guerrilla Compositing Tactics in Maya and Nuke with Ed Whetstone, as well as his other tutorial, Creating Light Rigs in Maya, a truly fantastic initiation into 3-point CG lighting. From Guerrilla Compositing Tactics I learned how the depth pass can be used for all kinds of things: I already kind of knew about creating depth-based fog and haze, but the true revelation was the idea of targeted colour correction based on the depth pass.

ed

ed2

In my own render, I have decreased the colour temperature based on distance from the camera.
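In Nuke terms that’s basically a Grade node masked by the depth channel, so the correction only bites with distance and the far parts of the frame drift towards blue. A rough sketch in Nuke’s Python is below; the knob names (maskChannelInput in particular) are from memory and the Read path is made up, so double-check against your own script:

import nuke

# Beauty render with a depth channel baked into the EXR (path is a placeholder)
beauty = nuke.nodes.Read(file='renders/exterior_beauty.####.exr')

# Grade that nudges the image towards blue; masked by depth so only the
# distant parts of the frame pick up the full shift
cool = nuke.nodes.Grade(inputs=[beauty])
cool['multiply'].setValue([0.85, 0.95, 1.15, 1.0])   # pull red down, push blue up
cool['maskChannelInput'].setValue('depth.Z')         # assumed knob name for the input-channel mask
# Depending on how depth is stored (Z vs 1/Z) it may need normalising or
# inverting first, e.g. with an Expression node, before it works as a mask.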

In the end I found that simply emulating the underwater environment, while convincing in its realism, was a bit too hazy and would impair the clarity of the storytelling.  Therefore I used a combination of underwater and 3-point lighting techniques.  The only thing missing is particles, which I will do in Nuke now that the Bokeh effect has been improved.

Test001

Speaking of which, I recently bought NukeX 7, which has a fantastic upgrade to the ZBlur node: ZDefocus. Instead of adjusting the image plane, it uses a “focal point” that I can drag, and it automatically recalculates. It also vastly improves the quality of the blur: with ZBlur, the only options were “Gaussian” or “Disk” Filter – now, with ZDefocus I can create bokeh effects that are pentagonal or hexagonal or even heart-shaped.

In other news, Angus has approved the animation so far, so I shall be working on those renders today.

Fantastic.

Full steam ahead!

The Blue Planet

Suffered a creative block yesterday so I decided to take John’s advice and go take some underwater film footage. Well, sort of. Underwater video cameras and diving holidays are somewhat outside my budget so I settled on the next best thing: The Blue Planet.

I watched two episodes: The Deep and Coral Seas. Got some good screen grabs, which have got me inspired to do more texturing and lighting based on the footage.

CoralSeas002 CoralSeas005 CoralSeas007 CoralSeas008 CoralSeas010 CoralSeas011 CoralSeas012 CoralSeas013 TheDeep003 TheDeep010 TheDeep016 TheDeep020 TheDeep025

New Comp Test

Image


The latest comp test of the nuclear interior. I’m really happy with this. Tried out a few new things today including Lens Blur and God-Rays (could there BE a cooler name for a node?). Digital Tutors’ Photorealistic Camera Lens Effects in NUKE course has been fantastic.

Also fantastic is the mia_bokeh node, which I used on top of some nParticles. It’s really versatile as you can add your own texture to it – taking inspiration from the Jellyfish Pictures reel, I’ve connected a pentagon-shaped alpha to it. Although Nuke has its own particle system, the quality of the Bokeh effect within Maya/Mental Ray was way too good to pass up.
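For my own notes, hooking the bokeh lens shader up by script looks roughly like this. I’m writing the node type and attribute names (mia_lens_bokeh, miLensShader, use_bokeh, bokeh) from memory, so treat every one of them as an assumption and check the Attribute Editor if anything errors:

import maya.cmds as cmds

# Sketch: attach a mental ray bokeh lens shader to the render camera and feed
# it a pentagon-shaped alpha map. All names below are assumptions/placeholders.
cam_shape = 'renderCamShape1'                               # placeholder camera shape
bokeh = cmds.createNode('mia_lens_bokeh', name='pentagonBokeh')
cmds.connectAttr(bokeh + '.message', cam_shape + '.miLensShader', force=True)

penta = cmds.shadingNode('file', asTexture=True, name='pentagonAlpha_file')
cmds.setAttr(penta + '.fileTextureName', 'sourceimages/pentagon_alpha.tga', type='string')

cmds.setAttr(bokeh + '.use_bokeh', 1)                       # assumed attribute name
cmds.connectAttr(penta + '.outColor', bokeh + '.bokeh')     # assumed attribute name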

I hope to be able to post a moving version up tomorrow.

Proxies in Action

Image


So I finally got the proxies working. This scene is around one billion polygons. Not sure how long it took, but I know it was under four hours, because I went to bed and then woke up at 4 a.m. with a delirious urge to check the render (I think everyone’s had that at some point… or maybe it’s just me).

It looks extremely weird right now, but hopefully I can turn it into something nice-looking by Monday’s assessment.

A Question of Scale, Part Three

So I set up a lighting test with my Nucleosome Proxies all nicely arranged, positioned a light right at the back of the scene with the intention of creating nice bright highlights with a sharp falloff into dark shadows, and hit Render…

Immediately I noticed something really weird.  The histones were all bright red, even in the places where they should have been in complete darkness.  It was like the light was going straight through them.  In fact I had noticed a similar effect on anything I had applied a Subsurface Scattering shader to.

I decided that it was time to stop hoping that the default Scatter settings would get me through, and actually do my research.

As always, Digital Tutors knew how to provide, and I found a fantastic tutorial, mental ray Workflows in Maya: Subsurface Scattering.  It took the fear out of working with SSS.

I found out why the fast_skin shader was acting in that way, and of course it was down to – surprise, surprise – scale.

The fast_skin shader is set up to work as human skin, to be put onto a real-world-sized human being, not a tiny cell organelle. The Back Scatter (the attribute used to control areas of complete translucency, i.e. the webbing between fingers) has a Radius setting to control how far the light can penetrate. This is the default setting of 25:

highbackscatterradius

And this is what it looked like after I scaled it back to a more moderate 0.5:

lowbackscatterradius

So now the light only filters through on the edges, rather than the whole object.
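And a note-to-self snippet for next time: I can never remember the exact long name Maya gives that attribute, so this just pattern-matches anything that looks like a back-scatter radius on the shader (the node name is a placeholder):

import maya.cmds as cmds

skin = 'misss_fast_skin_maya1'   # placeholder: the fast_skin node on the histones
for attr in cmds.listAttr(skin) or []:
    name = attr.lower()
    if 'back' in name and 'radius' in name:
        print('%s = %s' % (attr, cmds.getAttr(skin + '.' + attr)))
        cmds.setAttr(skin + '.' + attr, 0.5)   # the default of 25 is far too deep at organelle scale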