Wednesday
Feb 02, 2011

Filming Josef & Aimée

Director Ben Shirinian went into pre-production with a clear cinematic vision for his enchanting short film Josef & Aimée.  Ben, together with 1st Assistant Director Chris Byrne, prepared an ambitious six-day shooting schedule which took place during the last week of November 2010.  The almost entirely green-screen shoot was filmed on the RedMX camera at Commercial Studios in Toronto.  Numerous camera setups called for crane, dolly, steadicam, turntable, and camera-car rigs.  There was also a custom-built motion control rig which was used for a time-lapse sequence.  Despite the busy schedule, this proved to be one of the most enjoyable experiences I've ever had on set.  The atmosphere was very collaborative and the entire crew delivered some amazing work.  I was blown away by the beautiful images captured at 4K by the RedMX and Cooke Panchro lenses.

Jeremy Benning and Ben Shirinian with the RedMX

To realize the elaborate choreography, the film went through extensive previs at The Junction VFX.  Every shot was blocked out for layout and timing, and each camera move was carefully planned.  This became the basis for the shoot and served as an invaluable guide on set.  It also informed where practical set pieces could be built and where set extensions would be required.  Practical foreground set pieces were built whenever an actor had to interact with their environment.  The art department did an awesome job, often working through the night to prepare the set for the next day.  The hectic schedule was completed under the guidance of Chris Byrne, who ran the shoot masterfully.  Chris is very visual effects savvy and was always accommodating on set.

As you might imagine, we had tracking markers everywhere.  Even so, I don't envy the task of match moving nearly every shot in the film.  Accurate camera tracks will be essential when it comes time to integrate the film's many matte paintings, digital environments, set extensions, and CG characters.

Although this was my first time working beside cinematographer Jeremy Benning, Josef & Aimée was actually my second Benning project.  The first was as online editor for the stop motion short The Stone of Folly, which went on to win the Prix du Jury at the 2002 Cannes Film Festival.  I had a great time working with Jeremy and his crew.  I always enjoy any chance I get to hang out with the camera department, and this was no exception.

The vast majority of the visual effects will be handled by Junction VFX, including digital environments, matte paintings and set extensions.  Meanwhile I'll be looking after the film's digital character Parpar.  Parpar appears throughout the film in various forms, and is currently in development at Spin VFX.  There's a hint as to who Parpar is in his name.  The offline is being assembled at School Editing and I am eagerly awaiting the first cut.

Ben has captured a film with enormous potential: an engaging story, charming characters, and stunning performances by Kai Stothers (Josef) and Nolwenn Boutle (Aimée).  I have high hopes for Josef & Aimée and I'm keen to get shots into production.

Saturday
Aug 21, 2010

Josef & Aimée

I'm happy to announce that I will be the Visual Effects Supervisor for Josef & Aimée, an enchanting short film co-written by director Ben Shirinian and producer Leslie Gottlieb.  Josef & Aimée is a magical love story about two Jewish children orphaned in the south of France during the Holocaust.  The film will be executed using a hybrid of miniature sets, 3D animation and live action.  It will be shot on the RED camera by Director of Photography Jeremy Benning.

In February, I was contacted by Leslie, who was looking to assemble a visual effects team to handle post production on the film.  At first I was skeptical, but after meeting with Ben and Leslie I soon realized that they had something special, a film with an engaging story and a unique visual style.  It wasn't until recently that our schedules aligned and I was able to bring the project to Spin VFX, where our primary focus will be the creation of a principal digital character named "Parpar."  The remainder of the visual effects will be completed by The Junction VFX in Toronto.

Although I can't say much more right now, I will post updates here over the course of the project.  In the meantime, visit www.benshirinian.com for a sample of work from director Ben Shirinian.

Sunday
Jun 06, 2010

Close-Up on The Day of the Triffids: London Aftermath

I get to see a lot of scripts, breakdowns, and previs around the studio.  Every now and then I'll come across a shot that grabs me from the first moment I see it.  I'm a sucker for slow dramatic camera moves, heroic signature poses, or epic environments.  This is one of those special shots.

From the 2009 BAFTA award-winning BBC mini-series The Day of the Triffids, this shot is also a great example of blending live action practical set pieces with visual effects enhancements to create a level of realism which would have been difficult to achieve with visual effects alone.  We were called upon to create the aftermath of a downed commercial airliner in the streets of London.  The plane tears a path through the city, leaving a trail of destruction in its wake.  We had to extend the two buildings on the right (one of which was destroyed in the crash), and create the crumbling and burning buildings, street, and London skyline beyond.

Camera Projections in Fusion

Originally our plan was to build a full CG model of the airplane, which was to be featured in several shots, but after seeing the footage we decided that a simple multi-plane approach would be more flexible and cost effective.  I aligned several image planes based on the point cloud data I received from our matchmoving department.  I used three different camera projections for the set extension on the right, two for the buildings and one for the tree.  The airplane was made up of two more projections, one for the tail section and one for the fuselage.  And finally, the background building, street and city extension consisted of a multi-layered matte painting.

Fusion Flow

All of the camera projections were done using Fusion's 3D environment.  Once the geometry was aligned with the footage and all of the camera projections were set up, I could quickly swap out revisions from matte painter Juan Garcia without having to re-render layers in 3D.  This also allowed me to precomp many layers of fire and smoke elements onto still background images before re-projecting them onto the geometry.
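For anyone curious about the math underneath a camera projection, here's a toy pinhole sketch in NumPy.  This is illustrative only (it is not Fusion's API or scripting interface): a still "projected" from the shoot camera maps each 3D point on the aligned geometry back to a pixel in that still, which is why geometry lined up to the matchmove point cloud picks up the painting consistently.

```python
import numpy as np

# Minimal pinhole-projection sketch (illustrative math only, not Fusion's
# API).  Projecting a still from the shoot camera onto aligned geometry
# amounts to mapping each 3D point back to a pixel in that still.

def project(points, focal, width, height):
    """Project camera-space 3D points (+Z forward) to pixel coordinates."""
    pts = np.asarray(points, dtype=float)
    u = focal * pts[:, 0] / pts[:, 2] + width / 2.0
    v = focal * pts[:, 1] / pts[:, 2] + height / 2.0
    return np.stack([u, v], axis=1)

# Points on the optical axis land at the image centre regardless of depth,
# so any plane placed along the projector's line of sight receives the
# same pixels of the painting.
uv = project([[0.0, 0.0, 10.0], [0.0, 0.0, 50.0]],
             focal=1000.0, width=2048, height=1556)
```

As long as the projector camera matches the shoot camera, re-projecting a revised painting is just a texture swap, with no 3D re-render required.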

I had great practical reference for the fire and interactive lighting from the footage.  Using filmed fire and smoke effects elements, I layered in pockets of fire and bounce light around the destroyed buildings.  I also added fire inside and around the downed aircraft and on the piles of debris lining the city streets, creating pools of light which revealed just enough detail to bring the environment to life.  I even used Fusion's procedural noise tool to create additional smoke layers for the composite.  Finally, layers of falling ash and paper helped to blend everything together.
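The idea behind procedural noise smoke can be sketched in a few lines of NumPy.  This is octave-summed value noise, a common approach; it is a hedged illustration, not the algorithm inside Fusion's actual noise tool.

```python
import numpy as np

# Octave-summed ("fractal") value noise: sum coarse-to-fine layers of
# random values, halving the amplitude at each finer octave.  The result
# makes a plausible base for billowy smoke textures.

def upsample(grid, size):
    """Bilinearly resample a small square grid to size x size."""
    src = np.linspace(0.0, grid.shape[0] - 1, size)
    i = src.astype(int)
    f = src - i
    i1 = np.minimum(i + 1, grid.shape[0] - 1)
    rows = grid[i] * (1 - f)[:, None] + grid[i1] * f[:, None]
    return rows[:, i] * (1 - f)[None, :] + rows[:, i1] * f[None, :]

def fractal_noise(size=256, octaves=4, seed=1):
    """Sum octaves of random value noise, normalised to roughly 0..1."""
    rng = np.random.default_rng(seed)
    out = np.zeros((size, size))
    amp, total = 1.0, 0.0
    for o in range(octaves):
        cells = 4 * 2 ** o                 # finer grid each octave
        out += amp * upsample(rng.random((cells, cells)), size)
        total += amp
        amp *= 0.5
    return out / total

smoke = fractal_noise()
```

In practice a layer like this would be graded, blurred, and animated before being comped over the plate as atmosphere.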

Sunday
Mar 28, 2010

VFX Quick Tip: Sharp Clone Strokes

A common mistake, especially among junior artists, is misunderstanding sub-pixel filtering.  While most experienced artists understand the importance of choosing an appropriate filtering algorithm when scaling an image, it is often overlooked with simple transformations.  My personal favorite, and perhaps the most often overlooked of all, is the transformation that takes place behind the scenes when performing cloning operations.  I can often be heard around the studio saying, "No mushy paint strokes!" to artists at all levels of experience.  Here are a couple of quick and simple tips for preserving sharpness and detail when transforming images in both Nuke and Fusion.

In any software, when you translate an image by sub-pixel values the software must interpolate the new position by sampling surrounding pixels.  While some algorithms do a better job of maintaining sharpness than others, the best way is to avoid re-sampling altogether.  In other words, move the image by a whole number of pixels.  If a transform is animated then the only recourse is to choose an appropriate filtering algorithm.  But if you're simply moving an image from one place to another, there is no better way to ensure a lossless transformation.  In Nuke this is as easy as using the Position node rather than the Transform node.  The Position node, by definition, "Moves the input by an integer number of pixels."

Reference size

In Fusion it's a little less obvious.  At the bottom of any transform tool you'll see a toggle labelled "Reference size."  Clicking this reveals width and height sliders, and a "Use Frame Format Settings" check box.  Assuming you've set your frame format preferences according to the resolution of your footage, you can simply check this option.  This gives you positional values based on the width and height of your image, rather than values normalized between zero and one.  Now it should be easy to translate by whole pixels by simply avoiding decimals.
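The softening is easy to demonstrate with a toy one-dimensional example in plain NumPy (no compositing package required): a half-pixel bilinear shift averages neighbouring pixels, while an integer shift just relabels them.

```python
import numpy as np

# Why whole-pixel moves are lossless: a sub-pixel bilinear shift averages
# neighbouring pixels and softens detail; an integer shift is a pure
# relabelling with no resampling at all.

img = np.zeros(8)
img[4] = 1.0                         # a one-pixel "detail", like film grain

# Integer translation: every output pixel is exactly one source pixel.
whole = np.roll(img, 1)

# 0.5-pixel translation with bilinear filtering: each output pixel is the
# average of its two nearest source pixels.
half = 0.5 * img + 0.5 * np.roll(img, 1)

assert whole.max() == 1.0            # detail preserved exactly
assert half.max() == 0.5             # detail smeared across two pixels
```

One half-pixel move halves the peak of a single-pixel detail; stack a few of these in a comp and fine grain visibly turns to mush.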

"Snap Offset" in Fusion

So what does any of this mean for clone painting?  If you think about it, a clone stroke is simply a transform masked by the extent of the brush stroke, so the same rules apply here.  Fortunately both Fusion and Nuke provide an easy solution.  In Fusion it's called "Snap Offset" and it does exactly what you would expect: it snaps the clone source offset to the nearest whole pixel.  In Nuke it's called "round" and the tool tip says it all.  It will "Round translation amount to the nearest whole integer pixel to avoid softening due to filtering."

"Round" in Nuke

These techniques are especially important when working with material which originated on film, or with any media where preserving the grain structure is critical to achieving seamless results.  A rig or wire removal which might otherwise be invisible will boil noticeably if the grain detail is softened.  To see the effect for yourself, be sure to view your work at full resolution.  Proxy scaling or viewer re-sizing will make it difficult to see the difference, as it introduces another layer of image re-sampling.
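Here is the same idea applied to cloning, as a toy one-dimensional NumPy sketch (a conceptual illustration, not either package's implementation): a fractional source offset forces interpolation, while a snapped offset copies pixels exactly.

```python
import numpy as np

# Toy 1-D clone: sample pixels from a source offset, as a clone brush does.
# A fractional offset forces interpolation; a snapped (rounded) offset is
# an exact copy, so grain survives intact.

def clone(img, offset):
    """Sample img at positions shifted by offset (linear interpolation)."""
    x = np.arange(img.size) - offset
    return np.interp(x, np.arange(img.size), img)

grain = np.array([0.0, 1.0, 0.0, 0.0])   # a single pixel of "grain"
soft = clone(grain, 1.5)                  # sub-pixel offset: mushy stroke
sharp = clone(grain, round(1.5))          # snapped offset: lossless copy
```

With the snapped offset the cloned grain keeps its full amplitude; with the fractional offset it is spread across two pixels at half strength, which is exactly the boil you see in a softened wire removal.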