This project concerns the construction of a mixed-media video combining live action and computer-generated imagery, with realism as the goal.
First, variations of an actor running from a window were filmed in raw format to retain linear data for post-processing.
inline_Image[image1.png|Actor jumping in front of window]
inline_Image[image2.png|High exposure]
inline_Image[image3.png|Low exposure]
Along with these plates, HDRIs were captured inside and outside the room using a RICOH THETA Z1, an industry-standard 360° camera recommended by Alex Pearce of Light Sail VR. It captures bracketed photos from low to high exposure, which can then be combined into a radiance file containing the full range of light information. The Theta was used instead of a camera-and-tripod setup because it is far quicker, so light levels would not change between captures.
inline_Image[image4.png|HDRI]
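As a rough sketch of what that merge involves (the file names and shutter times below are placeholders, not the project's actual captures), OpenCV's HDR module can combine bracketed exposures into a radiance file:

```python
import cv2
import numpy as np

# Placeholder bracketed captures and their shutter times (seconds).
paths = ["theta_low.jpg", "theta_mid.jpg", "theta_high.jpg"]
times = np.array([1 / 4000, 1 / 250, 1 / 15], dtype=np.float32)
exposures = [cv2.imread(p) for p in paths]

# Recover the camera response curve, then merge into linear radiance.
response = cv2.createCalibrateDebevec().process(exposures, times)
hdr = cv2.createMergeDebevec().process(exposures, times, response)

# Write a Radiance .hdr file containing the combined light information.
cv2.imwrite("room_interior.hdr", hdr)
```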
All the camera specifications were written down, and a distortion chart was filmed so the lens distortion could later be removed in Nuke. Distortion charts save a great deal of time in VFX and are good practice to capture on set, along with other data such as HDRIs.
inline_Image[image5.png|Distortion chart]
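In the project the chart was handled with Nuke's lens distortion tools; purely as an illustration of what a filmed chart makes possible, a comparable estimate can be sketched with OpenCV's checkerboard calibration (the paths, grid size, and frame below are placeholder assumptions):

```python
import cv2
import glob
import numpy as np

# Placeholder: frames of the filmed chart and the chart's inner-corner grid size.
frames = glob.glob("distortion_chart/*.png")
grid = (9, 6)

# 3D reference points for the chart corners (all on the z = 0 plane).
objp = np.zeros((grid[0] * grid[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:grid[0], 0:grid[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in frames:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, grid)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Solve for the camera matrix and distortion coefficients, then undistort a plate frame.
ret, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, gray.shape[::-1], None, None)
plate = cv2.imread("plate_frame.png")
undistorted = cv2.undistort(plate, K, dist)
cv2.imwrite("plate_frame_undistorted.png", undistorted)
```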
3D scans of both the actor and the scene itself were taken. These were used to create a digital double and to reconstruct the scene in 3D with the correct measurements. Digital doubles are a key part of having actors interact with a virtual scene.
inline_Image[image6.png|3D scan of actor]
inline_Image[image7.png|3D scan of the room]
To reconstruct the captured scene in 3D, a mix of methods was used. Firstly, the 3D scan was used as a baseline for the measurements and distances of parts of the room; however, the scan was too low in detail to use as a final output. The HDRI captured inside the room was therefore also used, projected onto geometry from its centre.
inline_Image[image8.png|Projection of HDRI onto geometry]
This allowed the geometry to be extruded to match the different wall planes and reconstruct the scene, with its centre aligned to the HDRI's capture point. Both reconstructions were imported into Houdini, a VFX package, and used for their respective strengths: the scan for organic shapes such as the sofas, and the HDRI projection for the flat walls.
Camera matching, the process of aligning a virtual camera with the physical camera, was the next stage. This was done in a program called fSpy, chosen because the camera is static, which makes motion tracking unusable for estimating the position or orientation of the lens. fSpy instead uses parallel lines in the image to work out where the camera is. Using data from the filming day, such as the focal length and sensor size, fSpy could create a virtual camera matching the exact specifications of the real camera, ready to export into Houdini.
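The relationship that lets the focal length and sensor size pin down the virtual camera is a simple one; a small sketch of the field-of-view calculation (the 35 mm focal length and 36 mm sensor width are placeholder values, not the project's actual camera data):

```python
import math

def horizontal_fov(focal_length_mm: float, sensor_width_mm: float) -> float:
    """Horizontal field of view (in degrees) of an ideal pinhole camera."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# Placeholder values: a 35 mm lens on a full-frame (36 mm wide) sensor.
print(horizontal_fov(35.0, 36.0))  # ~54.4 degrees
```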
Houdini's procedural tools were used to construct the walls, as this allowed flexibility later on. Bricks were instanced onto points generated along offset lines, so every aspect of the wall (height, width, length, brick size) could be controlled. The mortar was placed between the bricks using the wall's bounding box. The wall parameters were then tuned to line up with the walls in the original plate.
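As a rough sketch of the offset-line idea rather than the actual Houdini network, the instance points for a running-bond wall can be generated with every other row shifted by half a brick; all dimensions below are placeholder parameters:

```python
# Sketch: generate instance points for a running-bond brick wall.
# All dimensions are placeholder parameters, not the project's real values.
BRICK_W, BRICK_H, MORTAR = 0.215, 0.065, 0.01   # metres
WALL_COLS, WALL_ROWS = 12, 20

def brick_points():
    points = []
    step_x = BRICK_W + MORTAR
    step_y = BRICK_H + MORTAR
    for row in range(WALL_ROWS):
        # Offset every other row by half a brick, as in a real running bond.
        offset = step_x * 0.5 if row % 2 else 0.0
        for col in range(WALL_COLS):
            points.append((col * step_x + offset, row * step_y, 0.0))
    return points

print(len(brick_points()))  # 240 instance points for a 12 x 20 wall
```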
inline_Image[image9.png|Stage1]
inline_Image[image10.png|Stage2]
inline_Image[image11.png|Stage3]
inline_Image[image12.png|Stage4]
The blinds are also completely procedural, using lines to define their length and width. The blinds also have string connecting their ends, created by looping through the blinds, connecting a group of points between each one, and pulling down the centre points; the setup therefore updates automatically for however many blinds are needed.
Houdini's procedural approach thus proved to be a real time saver within the project.
inline_Image[image13.png|Line for length]
inline_Image[image14.png|line for thickness]
inline_Image[image15.png|Singular blind]
inline_Image[image16.png|array of blinds]
inline_Image[image17.png|Close-up of the blind connectors]
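A minimal sketch of the string logic described above, outside Houdini: one point per slat end joined into a polyline, with the in-between points pulled down to give a little sag (counts and sizes are placeholder parameters):

```python
# Sketch of the blind string: one point per slat end, plus a centre point
# between each pair of slats pulled down slightly to create sag.
# Counts and sizes are placeholder parameters.
NUM_BLINDS = 16
SPACING = 0.04   # vertical gap between slats (metres)
SAG = 0.005      # how far the in-between points hang down (metres)

def string_points():
    pts = []
    for i in range(NUM_BLINDS):
        y = -i * SPACING
        pts.append((0.0, y, 0.0))  # point on the slat end
        if i < NUM_BLINDS - 1:
            # Centre point between this slat and the next, pulled down slightly.
            pts.append((0.0, y - SPACING * 0.5 - SAG, 0.0))
    return pts

print(len(string_points()))  # 31 points for 16 blinds
```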
To break through the wall, the cyclops character was chosen as the collision object. A rigged base mesh was adapted, combining basic modelling techniques like extruding and merging to create a one-eyed cyclops model. This model was then animated by blocking out key poses and refining the movement with additional keyframes. Once the animation was complete, the cyclops model was exported to ZBrush.
inline_Image[image18.png|Cyclops model]
inline_Image[image19.png|original base mesh]
ZBrush is an industry-standard sculpting tool used across film and TV to create detailed characters; it was, for example, used heavily in the opening sequence of Westworld. The sculpting combined different techniques, such as creating scales with the mask brush and inflating them over the surface so that the scales sat at a consistent height across the skin. Other details were added with attention to anatomical structures such as the eyelids and lips.
inline_Image[image20.jpeg|Close-up of cyclops sculpt]
For texturing, a mood board was assembled to define the look being aimed for:
inline_Image[image21.png|Texture mood board]
Exporting from ZBrush to Substance Painter allowed the sculpted detail to be baked onto a lower-poly mesh, producing texture maps such as normal, curvature, ambient occlusion, and thickness. These maps can then be used to drive creative details in the texturing. A skin workflow was used to build up different layers of colour, creating a more realistic skin.
The curvature map was used to lighten the areas between the scales, and manual painting of different patterns added variation to the texturing.
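As a sketch of how a baked curvature map can drive this kind of detail (in the project this was done inside Substance Painter; the file names and strength below are placeholder assumptions):

```python
import numpy as np
from PIL import Image

# Placeholder file names for the base albedo and baked curvature map.
albedo = np.asarray(Image.open("cyclops_albedo.png"), dtype=np.float32) / 255.0
curvature = np.asarray(Image.open("cyclops_curvature.png").convert("L"), dtype=np.float32) / 255.0

# Use the bright (convex-edge) areas of the curvature map as a mask to
# lighten the skin between the scales.
lighten = 0.25
mask = np.clip((curvature - 0.5) * 2.0, 0.0, 1.0)[..., None]
result = np.clip(albedo + mask * lighten, 0.0, 1.0)

Image.fromarray((result * 255).astype(np.uint8)).save("cyclops_albedo_scales.png")
```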
inline_Image[image22.png|Cyclops texturing]
The next stage in creating the wall-destruction scene was simulation: RBD simulation for the brick wall and glass, Vellum simulation for the blinds, and ragdoll simulation for the digital double.
The wall fracturing began with Houdini's material fracture node on its concrete setting, breaking the bricks into pieces. However, the material fracture node treats all constraints equally, which prevents the pieces from breaking away in the grouped chunks of bricks typical of real brickwork:
inline_Image[image23.jpeg|Construction site with broken bricks]
This was addressed with the RBD cluster node set to 'group constraints' rather than 'combine pieces' (which would prevent the pieces from separating at all). This creates distinct constraints between groups of pieces, whose strengths were then configured through a chain of RBD constraint property nodes.
The wall is initially set to 'sleeping', which deactivates its physics until the collision geometry hits it.
inline_Image[image24.png|Constraints]
inline_Image[image25.png|Sleeping setting]
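A minimal sketch of wiring this fracture-and-cluster setup with Houdini's Python API; the node type names ('rbdmaterialfracture', 'rbdcluster') and the commented parameter are written from memory and should be treated as assumptions rather than the exact network used:

```python
import hou  # run inside Houdini

# Sketch: fracture the wall, then cluster the pieces so constraints are
# grouped rather than treated uniformly. Node and parameter names are
# assumptions and may differ between Houdini versions.
geo = hou.node("/obj").createNode("geo", "brick_wall_sim")
wall = geo.createNode("file", "wall_geo")

fracture = geo.createNode("rbdmaterialfracture", "fracture_bricks")
fracture.setInput(0, wall)

cluster = geo.createNode("rbdcluster", "group_bricks")
cluster.setInput(0, fracture, 0)   # fractured geometry
cluster.setInput(1, fracture, 1)   # constraint geometry
cluster.setInput(2, fracture, 2)   # proxy geometry

# Weaken the constraints between clusters relative to those inside a cluster
# (parameter name assumed, so left commented out):
# cluster.parm("interclusterstrength").set(0.2)
```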
The cyclops needed to act as collision geometry, as the hand is what gets pushed through the wall. Most approaches would require custom node graphs; however, the agent collision node (usually used in crowd simulation) provides a simple way of attaching collision shapes to each bone. Because every node in Houdini can be opened and edited, the node could be adapted so that the instanced shapes were physically simulated polygon spheres instead of capsule visualisers, which would not collide with anything.
For the wall itself, a less precise convex-hull collision was used, as it made the simulation more stable and meant only one extra collision object had to be calculated.
inline_Image[image26.png|Hand Collision for ragdoll]
inline_Image[image27.png|Simplified Hand Collision for wall]
After this, the wall would break as the hand pushed through it.
Simulation of the blinds needed proxy geometry without thickness. The proxy was long strips with string connecting the ends.
inline_Image[image28.png|Proxy blinds]
The string makes the simulation realistic by pulling up subsequent blinds, as it would in reality. Using Vellum as the solver, early tests without the string collided correctly, but once the string was added the cloth stretched indefinitely despite property adjustments. Separating the string and strips, then attaching them with Vellum glue constraints, solved this by allowing individual property control; the string's weight turned out to be one of the causes of the stretching.
However, the blinds would still stretch slightly at the start of every simulation, so a time offset node was used to skip ahead until they were in a stable position at the start of the timeline. Feeding the hand collision straight into the blinds worked well; the collision needed to be time-shifted inversely to the final simulation to get the timing right. After the simulation, the earlier procedural work for the blinds was reused to convert the simplified geometry back into the full version of the blinds.
The static collision used for the blinds meant that they passed through the wall. To handle this, a curve traced along the edge of the final hole was extruded and booleaned out of a VDB proxy of the wall and glass, creating a dense mesh collision proxy.
inline_Image[image29.png|Hole Collision for blinds]
The window comprised glass and a metal frame. The glass simulation used the same solver as the brick wall, but with the material fracture set to glass shatter. A challenge with transparent glass is making the internal fracture lines visible only after shattering. This was addressed with the RBD connected/disconnected faces nodes, which remove the internal faces until the pieces separate, though this required an unpack step because the RBD configure node packs the geometry.
inline_Image[image30.png|RBD disconnected faces]
For the metal frame, a bending workflow used soft constraints to keep pieces connected within a set distance, combined with plasticity to simulate bending. The RBD deform pieces node then constructed a single deforming mesh around the separated pieces.
inline_Image[image31.png|Window frame simulation pieces]
inline_Image[image32.png|Window frame simulation result]
Ragdoll simulation with the scanned digital double was the next stage. Ragdolls are easy to set up in the crowd-simulation workflow, which already has a prebuilt node called 'TestRagdoll'. The node was unpacked, and with all the joints and collisions set up, the hand colliders were added to the simulation so the hand would realistically pick up the digital double. A ragdoll was chosen instead of keyframed animation because it is driven by physics and can be more accurate.
There were a few issues with the ragdoll falling out of the hand, so guide colliders were used to make sure it stayed in place as it was picked up.
inline_Image[image33.png|Ragdoll Simulation]
inline_Image[image34.png|Ragdoll Collision]
The scan itself differed slightly from the filmed plate of the actor, as the scan included a hat; this was removed by sculpting and clone-stamping the hair. The material for the hair uses a Fresnel effect to approximate how light passes through the hair in the filmed plate.
inline_Image[image35.png|Digital double]
The mortar between the bricks was initially a challenge, as most Houdini guides for brick-wall destruction ignore the mortar entirely and pack the bricks as closely together as possible. To match the filmed plate, however, the mortar between the bricks needed to be visibly destroyed, and fracturing it along with the bricks would not preserve the original pattern. The solution was to create a separate simulation that used Houdini's particles to disintegrate the mortar as it broke.
inline_Image[image36.png|Brick wall with hole]
inline_Image[image37.png|particle simulation]
inline_Image[image38.png|particle simulation with wall]
This required a few steps. The mortar between the bricks needed to be cut away and deleted as the hand passed through it. For this, the convex-hull collision version of the hand was used to create a trail on every frame, so that geometry from all previous frames persisted throughout the animation. This trail could then be converted to a VDB and back to a mesh, giving a constantly updating object representing everywhere the hand had been.
inline_Image[image39.png|Hand Collision]
inline_Image[image40.png|Hand Trail]
This was cut out of the mortar object using a boolean. The trail object was also used as a mask, so that the parts of the mortar closest to it spawned particles. A POP network simulated them under gravity so they could also collide with the hand object as it came through the wall. Randomly scaled platonic solids were instanced onto every particle so they appeared as solid debris. Combined with the booleaned mortar, this makes the mortar appear to disintegrate.
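A minimal sketch of the trail-to-VDB-to-boolean part of this chain using Houdini's Python API; the node type names ('trail', 'vdbfrompolygons', 'convertvdb', 'boolean') are from memory and the commented parameters are assumptions, not the exact network used:

```python
import hou  # run inside Houdini

# Sketch: accumulate the hand's convex-hull collision into a persistent trail,
# convert it to a VDB and back to polygons, then boolean it out of the mortar.
geo = hou.node("/obj").createNode("geo", "mortar_disintegrate")

hand = geo.createNode("file", "hand_convex_hull")
trail = geo.createNode("trail", "hand_trail")
trail.setInput(0, hand)
# trail.parm("result").set(...)  # set Result to "Preserve Previous Positions" (token assumed)

to_vdb = geo.createNode("vdbfrompolygons", "trail_vdb")
to_vdb.setInput(0, trail)
to_mesh = geo.createNode("convertvdb", "trail_mesh")
to_mesh.setInput(0, to_vdb)

mortar = geo.createNode("file", "mortar_geo")
cut = geo.createNode("boolean", "cut_mortar")
cut.setInput(0, mortar)
cut.setInput(1, to_mesh)
# cut.parm("booleanop").set("subtract")  # parameter name/token assumed
```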
Finally, smoke was simulated using the particles as the pyro solver's source, with a VDB version of the brick-wall simulation also used to push the smoke in a convincing way.
inline_Image[image41.png|Voxel room approximation]
inline_Image[image42.png|Smoke simulation]
Rendering was done in the ACES colour space so that it would be compatible with the footage later on, and Cycles was chosen as the render engine.
When lighting the room, two separate HDRIs were needed: one to light the inside and one to light through the window. This is a problem because render engines usually only support a single environment light. To solve it, the outside HDRI was used as the main environment light, while the inside room's HDRI was projected onto geometry approximating the room, allowing a realistic lighting transition for CG objects moving inside.
Specific scene objects were set as 'shadow catchers', rendering only the shadows cast onto them while still contributing indirect lighting; the red sofas, for example, bounce their colour onto any objects sitting on them.
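A minimal sketch of the two-HDRI idea, assuming Cycles is being driven from Blender (this is not the project's actual scene setup, and the file paths are placeholders): the outside HDRI becomes the world light, and the inside HDRI is applied as an emissive texture on the room-proxy geometry.

```python
import bpy  # run inside Blender

# Outside HDRI as the main environment light.
world = bpy.context.scene.world
world.use_nodes = True
wnodes, wlinks = world.node_tree.nodes, world.node_tree.links
env = wnodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("//outside.hdr")        # placeholder path
wlinks.new(env.outputs["Color"], wnodes["Background"].inputs["Color"])

# Inside HDRI applied as an emissive texture on geometry approximating the
# room (assumes the projection described earlier is already baked into the
# proxy geometry's UVs), so CG objects pick up the interior light inside.
mat = bpy.data.materials.new("interior_hdri")
mat.use_nodes = True
mnodes, mlinks = mat.node_tree.nodes, mat.node_tree.links
tex = mnodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("//inside.hdr")          # placeholder path
emit = mnodes.new("ShaderNodeEmission")
mlinks.new(tex.outputs["Color"], emit.inputs["Color"])
mlinks.new(emit.outputs["Emission"], mnodes["Material Output"].inputs["Surface"])
```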
Compositing is the final stage, where every element is put together. For this, an outside background needed to be created: the wall gets broken, revealing what is behind it, but this view was never filmed, so it had to be reconstructed in Photoshop. The final reconstruction was created from the original footage and the 360° HDRI taken outside, cloning different parts of each; because both the original plate and the 360° HDRI were in raw formats, the image retains a wide dynamic range.
inline_Video[background-reconstruction.mp4|Background Reconstruction Process]
The issue with the 360° HDRI was its curved (equirectangular) projection, so it needed to be reprojected onto a flat surface and rendered out.
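In the project this reprojection was done by rendering the projected HDRI; as a sketch of the underlying math, an equirectangular panorama can be resampled into a flat, pinhole-style view (the file names and field of view below are placeholders):

```python
import math
import numpy as np
import cv2

def latlong_to_rectilinear(pano, out_w, out_h, fov_deg):
    """Sample a flat, pinhole-style view out of an equirectangular panorama."""
    pano_h, pano_w = pano.shape[:2]
    f = (out_w / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

    xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2.0,
                         np.arange(out_h) - out_h / 2.0)
    # Ray direction for every output pixel (camera looking down +Z).
    dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Convert ray directions to longitude/latitude, then to panorama pixels.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))
    map_x = ((lon / (2 * math.pi) + 0.5) * pano_w).astype(np.float32)
    map_y = ((lat / math.pi + 0.5) * pano_h).astype(np.float32)
    return cv2.remap(pano, map_x, map_y, cv2.INTER_LINEAR)

# Placeholder file name and a 60 degree horizontal field of view.
pano = cv2.imread("outside.hdr", cv2.IMREAD_UNCHANGED)
flat = latlong_to_rectilinear(pano, 1920, 1080, 60.0)
cv2.imwrite("outside_flat.hdr", flat)
```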
With the different AOVs rendered out, they were taken into Nuke. Starting with the original plate, it was undistorted and reformatted. Parts of it, such as the sofas, were rotoscoped out so the CG render could be merged into the composition. Colour correction was used on the bricks to closely match the colour of the real shot over time.
inline_Image[image43.png|Rotoscoping]
inline_Image[image44.png|Color Matching]
The actor needed to be rotoscoped to separate them from the background, a tedious process that involved masking off each body part. One of the strategies used was to place each spline right on the edge, just before the pixels fade into the background; Nuke can then calculate the motion blur the matte should have.
inline_Image[image45.png|Rotoscoping Result]
inline_Image[image46.png|Rotoscoping Shapes]
The constructed background was set behind the CG wall so it would be uncovered when the wall is smashed in.
The shadow pass AOV was multiplied over the existing composition. This places the shadows from the CG objects over the plate.
inline_Image[image47.png|Shadow pass]
To have the actor appear in front of part of the hand but behind the fingers and rubble that should be in front of them, the depth pass was used to mask off only the closer objects.
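In the project this was a Nuke node setup; as a sketch of the idea with the passes treated as arrays (the resolution, depth values, and actor distance are placeholder assumptions):

```python
import numpy as np

# Sketch of depth-pass masking: composite only the CG pixels that are closer
# to camera than the actor, so the fingers and rubble can pass in front.
# Arrays stand in for the rendered AOVs; all values are placeholders.
h, w = 1080, 1920
cg_rgb = np.zeros((h, w, 3), dtype=np.float32)      # CG beauty pass
cg_depth = np.full((h, w), 10.0, dtype=np.float32)  # CG depth pass (metres)
plate = np.zeros((h, w, 3), dtype=np.float32)       # plate with the actor already over it

actor_distance = 3.5
in_front = (cg_depth < actor_distance)[..., None]   # mask of CG closer than the actor

comp = np.where(in_front, cg_rgb, plate)
```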
Matching the digital double and the actor required layers of colour correction to create the transition in a more subtle way. A grid warp aligned parts of the digital double with the actor, and a freeze frame of the hair, the most noticeable difference between the two, was attached to the digital double.
inline_Image[image48.png|Color correction layers]
The glass was improved by layering stock photos to create a realistic texture; this was multiplied over the plate and masked by the cryptomatte of the glass.
inline_Image[image49.png|Broken glass texture]
inline_Image[image50.png|Glass compositing process]
Finally, dust was merged over the whole shot, with the actor masked out at the beginning so they would show through the smoke, and the original lens distortion was applied to re-distort the composition.
To get advice on how to use ACES, an industry professional, Adan Currey, was contacted. He advised on how to get everything into the same colour space and provided the ACES primer, which demonstrates how to use ACES properly.
The composite was taken into DaVinci Resolve as an ACES sequence, where some colour grading was done and a previous shot was added to the start for build-up. The sound design was built from layered sounds from freesound.org.
inline_Video[final-breakdown.mp4|Final VFX Breakdown]
This project successfully demonstrated a complete visual effects pipeline, from live-action filming and photogrammetry capture to physics simulations, look development, rendering, and final compositing. Embracing Gareth Edwards' organic filmmaking approach allowed for a more flexible workflow.
A key aspect was utilising industry-standard techniques and software: Houdini, with its procedural tools and physics solvers like RBD and Vellum; PBR texturing in Substance Painter; the ACES rendering pipeline; and Nuke's node-based compositor. Capturing HDRIs of the location provided realistic lighting conditions to seamlessly blend CG and live-action elements.
The decision to use the ACES colour management framework from filming through post-production ensured a coherent linear colour pipeline, reducing the need for extensive colour correction.
While there are always opportunities to iterate further, such as adding extra fracture detail or more variation to the debris, the overall approach succeeded. The final composite demonstrates how utilising technology like procedural workflows and simulations, when combined with live-action filmmaking, can produce a high-quality mixed-media VFX shot.