VFX Interview: Kevin Margo, Director on the making of Construct
Blockbuster-quality CGI, action-packed fight sequences, robot rebels… It may sound like the work of a big studio, but today audiences will finally get to see the full version of CONSTRUCT, a 12-minute short that was created by a team of industry vets in their spare time. Already a Vimeo Staff Pick!
You may have heard of CONSTRUCT back in 2014, when the one-minute teaser from Kevin Margo, one of the people behind Deadpool’s infamous test footage, started sweeping sites like Engadget, Creators and Indiewire. It was hailed for its innovative virtual production techniques, which blended GPU rendering with live ray tracing and motion capture, and people threw around the word “game changer” without a hint of irony.
Fast forward to today, and we are pleased to offer you a first look at the completed short. It’s a big win for indie filmmakers everywhere, and it even earned Margo a feature development deal with Brian Kavanaugh-Jones of Automatik producing, Nightfox financing and Marcos Gabriel on the script.
The Interview: Kevin Margo, Director
Where did the idea for CONSTRUCT come from?
CONSTRUCT was born of two desires. First, to explore intimate themes of family set against the backdrop of a near future world where humans and sentient robots co-exist. Second, to prototype a virtual production workflow for the future.
As a director/supervisor, what prompted you to shoot a short that blends motion capture and live-action filmmaking techniques?
CONSTRUCT is 100% CGI. There are no live-action plates or elements in the short film. However, we strove to replicate the live-action process as closely as possible in the interest of attaining greater naturalism, both in performance (via motion capture) and in lighting and cinematography (via our V-Ray for MotionBuilder plugin, which interactively ray-traced those performances as they happened).
Work on CONSTRUCT began in 2014, and now it’s being released in 2018. How does that feel?
It’s great to finally share it with the world. CONSTRUCT has been a multi-faceted project, so not only can we share the short film, but we also have a multitude of ancillary content: behind-the-scenes videos that showcase our virtual production RnD, fight choreography, asset development, and virtual reality components.
CONSTRUCT took a long time from pre-production through post-production. How was this managed, and what was your approach to working with a small team?
We anchored the bulk of production out of my 300 sq. ft. studio apartment in Venice, CA. Much of the team was located around the world, so shared physical space wasn’t a necessity. There were many emails and Skype calls to work through technical and creative guidance, plus meetings at coffee shops or bars when larger groups needed to be involved in a discussion.
Can you share something about how ray tracing and virtual production were used on CONSTRUCT?
We embraced V-Ray GPU to enable ray tracing at interactive frame rates for use on the virtual production stage. This was predominantly exploratory RnD, but it was great to see the possibilities of a fully realized production pipeline using these methods. We also took advantage of ray tracing with our final frames, as V-Ray is a native ray tracer. Paired with the GPU, we were rendering HD final frames in less than 5 minutes.
Can you share something about the creation of CONSTRUCT’s 3D characters and assets?
CONSTRUCT, being GPU based and designed with the 2014 generation of NVIDIA cards in mind, had memory limitations that required economical design and execution of the assets. Texture memory was a big consideration; we kept the resolution of most textures under 2K. We also used instancing as much as possible. The robots were clones of each other, with materials defining their unique characteristics. The environment used instanced geometry everywhere: trees, grasses, and the wood beams of the house were all instanced by design to limit the amount of unique geometry required, again to keep memory under control.
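The memory win from instancing can be sketched with a toy model: many placed objects share a single mesh, so the heavy geometry/texture data is paid for once while each placement costs only a transform. All numbers here are illustrative, not CONSTRUCT’s actual asset sizes.

```python
# Sketch of geometry instancing: many placed objects reference one shared
# mesh, so memory holds the heavy vertex/texture data only once.
# All sizes are illustrative, not CONSTRUCT's actual asset data.

class Mesh:
    def __init__(self, name, size_mb):
        self.name = name
        self.size_mb = size_mb  # cost of the unique geometry/texture data

class Instance:
    """A placed copy: shares the mesh, stores only its own transform."""
    TRANSFORM_MB = 0.0001  # a 4x4 matrix is negligible next to a mesh

    def __init__(self, mesh, transform):
        self.mesh = mesh
        self.transform = transform

def scene_memory_mb(instances):
    # Pay for each unique mesh once, plus a tiny transform per instance.
    unique = {id(i.mesh): i.mesh.size_mb for i in instances}
    return sum(unique.values()) + len(instances) * Instance.TRANSFORM_MB

tree = Mesh("tree", 40.0)
forest = [Instance(tree, (x, 0.0, 0.0)) for x in range(500)]

print(scene_memory_mb(forest))  # 500 trees, but only one 40 MB mesh
print(500 * 40.0)               # cost if every tree were a unique copy
```

The same reasoning applies to the cloned robots: one rig and mesh in memory, with per-instance materials providing the unique look.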
What challenges did you face in terms of motion capture and VFX on CONSTRUCT?
Our goal was to capture extended performances, often up to 30 seconds in length, independent of camera, so that we could shoot as much coverage as needed. We wanted to maintain this idea for our fight sequences as well, but often had to shoot inserts or divide up a fight for wire rigging or specialized acrobatic stunts that were too difficult to include in a single longer action shot. In those instances, we still stitched the mocap clips together and cleaned up the results so that, again, we had extended performances that coverage could be shot against. It was a different approach than usual: we took these performances through a first pass of animation with loose attention to contact points and eye darts, and only then did we shoot the camera coverage. Nine out of ten shots didn’t require any additional animation attention once seen in shot context, which saved animation time and resources.
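Stitching separately captured mocap clips into one continuous performance is commonly done by crossfading the overlapping frames of the two takes. A minimal sketch of that weighting idea, reduced to a single scalar joint channel (not the team’s actual tooling, which would blend full poses and interpolate rotations properly):

```python
def crossfade(clip_a, clip_b, overlap):
    """Blend the tail of clip_a into the head of clip_b over `overlap` frames.
    Clips are lists of scalar joint values; real mocap stitching blends whole
    poses and handles rotations carefully, but the weighting is the same."""
    assert overlap <= min(len(clip_a), len(clip_b))
    out = clip_a[:-overlap]
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)  # blend weight ramps toward clip_b
        out.append((1 - w) * clip_a[len(clip_a) - overlap + i] + w * clip_b[i])
    return out + clip_b[overlap:]

a = [0.0, 0.0, 0.0, 0.0]  # end of the first take holds one pose value
b = [1.0, 1.0, 1.0, 1.0]  # second take starts from a slightly different value
print(crossfade(a, b, 2))
```

The stitched clip is shorter than the two takes combined by the overlap length, with a smooth ramp where the seam used to be.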
How did companies like Chaos Group and NVIDIA support you during this process?
Chaos Group was the first supporter, offering assistance with any ideas we had that took advantage of V-Ray. This was the perfect opportunity to explore the virtual production RnD we had been pondering a few years prior. With them aboard, they extended an invitation to NVIDIA to contribute GPU hardware support to the project. This laid the foundation for the next several years of RnD and production. Chaos Group developers prototyped a V-Ray GPU plugin for MotionBuilder and extended features in the 3ds Max GPU renderer to accommodate the visual fidelity we were targeting. NVIDIA also invited us to render a good portion of the short on their GPU cluster in Santa Clara, which helped tremendously.
How did Optitrack, Boxx, Itoosoft and Rogue Mocap support you on CONSTRUCT?
Optitrack lent us a Flex 13 mocap camera system and the Insight VCS virtual camera rig so we could RnD the virtual production goals outside of a large mocap stage; we would also shoot our coverage cameras with this setup. Boxx provided three towers that could each support 4x NVIDIA GPUs. This too was critical: render performance scales nearly linearly with the number of GPUs added to a task, so our gains on render times were incredible. Rogue Mocap kindly provided access to their stage on off hours and weekends when we needed larger mocap volumes for performances and stunt work.
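That linear scaling amounts to a very simple time model, because path-tracing samples are independent and split cleanly across cards. A sketch with hypothetical numbers (not measured CONSTRUCT benchmarks):

```python
def render_time_minutes(single_gpu_minutes, num_gpus):
    """Idealized linear scaling: N GPUs split a ray-tracing task N ways.
    Real-world scaling is near-linear for path tracing since samples are
    independent, though scheduling overhead keeps it from being perfect."""
    return single_gpu_minutes / num_gpus

# Hypothetical frame that takes 20 minutes on one GPU:
for gpus in (1, 4, 12):  # one card, one 4x-GPU tower, three towers
    print(gpus, "GPUs ->", render_time_minutes(20.0, gpus), "min")
```

Under this model a frame that takes 20 minutes on one card drops to 5 minutes on a single 4x-GPU tower, which is in line with the sub-5-minute HD final frames mentioned above.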
Creating realistic CG characters with effects is challenging. How did you manage that?
I’m a proponent of naturalistic lighting conditions. Embracing simplicity by using a single HDRI to light all 210 shots not only achieved this natural aesthetic but enabled incredibly efficient shot lighting; often we spent only a few hours per shot on lighting. Additionally, we avoided multiple render passes and AOVs, achieving as much as we could ‘in camera’, so our compositing was nonexistent aside from a few lens effects and color grading. This too streamlined the finishing process.
You have experience in CG production. What differences did you notice between CG and live-action filmmaking production?
Each has its benefits and drawbacks. With live action, the footage you capture on set is fully representative of the end result. All elements of performance, production design, lighting, and cinematography are in play simultaneously and influence each other to a large degree, and the possibility of discovery and happy accidents arises in those conditions. The downside is that you’re stuck with what you captured at the time, and revisiting it becomes cost-prohibitive. With CG, you have added flexibility to revisit various components of the scene, but never concurrently, and never by the same artist, which is a detriment to capturing a singular moment and vision. CONSTRUCT’s virtual production RnD is a quest to unify the best of each into a single workflow.
How many years did it take to develop CONSTRUCT? What software and plugins did you use?
Production on CONSTRUCT took almost two years to complete. We used 3ds Max, ZBrush, V-Ray, Forest Pack Pro, XSI, MotionBuilder, Digital Fusion and Adobe Premiere.
Can you share something about CONSTRUCT VR? What is the VR experience like? CONSTRUCT is also connected to V-Ray Cloud and Google Cloud. What’s the story there?
We partnered with Starbreeze and Nozon to re-interpret CONSTRUCT into an 8-minute VR experience using their PresenZ plugin for V-Ray. It’s a fantastic way to deliver cinematic-quality visuals in VR. How to handle editing has been an open discussion in VR, and one successful solution we found was to slow down the cutting rate and, at every cut, reorient the entire world so that the subsequent shot begins with the viewer looking in the direction the director intended. Very quickly the viewer acclimates to the idea that they are looking where they should at every cut. It’s simple, but we found it very effective.
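That reorientation can be sketched as a yaw correction: at each cut, rotate the whole world by the difference between where the viewer happens to be looking and where the director wants the shot to open. A simplified yaw-only sketch, not PresenZ’s actual implementation:

```python
import math

def reorient_world_yaw(viewer_yaw, intended_yaw):
    """Return the yaw (radians) to apply to the entire scene at a cut so the
    new shot's intended forward direction lands where the viewer is looking."""
    delta = viewer_yaw - intended_yaw
    # Wrap into (-pi, pi] so the world always rotates the short way around.
    return math.atan2(math.sin(delta), math.cos(delta))

# Viewer happens to face 170 deg; director wants the shot to open at -90 deg.
offset = reorient_world_yaw(math.radians(170), math.radians(-90))
new_intended = math.radians(-90) + offset  # intended direction after rotation
print(round(math.degrees(new_intended) % 360))  # coincides with viewer's gaze
```

Because the correction is applied in a single step at the cut, the viewer never perceives the world rotating; they simply find themselves already looking at the intended subject.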
With Google, their cloud division used CONSTRUCT assets as the flagship demo content for their newly launched GPU render farm. They performed live demonstrations at their Next conference interactively rendering on the cloud with 128 GPUs simultaneously processing a CONSTRUCT scene.
CONSTRUCT is currently in feature film development with Automatik. Can you share something about that?
CONSTRUCT is in development with Brian Kavanaugh-Jones of Automatik producing and a script by Marcos Gabriel. We are continuing to develop the project as we build out the creative team and work towards production. The story retains the core elements of the short but heads in a slightly different direction.
Many thanks to Kevin Margo for sharing with us his experiences on the making of his short film. We are eagerly awaiting the next one.
For more info: