Project Breakdown
This project was one of my main projects in the final year of university, and this breakdown covers the research and work that went into creating the face.
The idea is to create a digital version of an actor's face that can be used in a visual effects shot, in which the face opens to reveal robotics underneath. The project aims to incorporate multiple aspects of 3D, including scanning, sculpting, simulation and hard surface modelling, while achieving photorealism in as many areas as possible. Scanning is used to capture the highest level of detail in the face, and the PBR workflow is used to create realistic textures baked from high poly versions. Houdini handles the simulation, and Substance Painter provides the PBR texturing tools.
The eye is made up of several components. These include the sclera, “the white outer coating of the eye that gives the eye its shape. It is a tough, fibrous tissue consisting of highly compacted flat bands of collagen bundles which scatter light”; the cornea, “a dehydrated and avascularized transparent tissue that serves as the first and strongest convex element of the human eye lens system”; the iris, a “colored diaphragm which serves as an aperture controlling the amount of light entering the eye. Made up of circular and radial muscles”; and the limbus, the material transition from the sclera to the cornea (François, et al., 2009).
inline_Image[TheSecondLetterisO.png|Figure 1]
Eyes are one of the main features of the face, so it is very important for them to look correct: “Creating photo-realistic digital humans is a long-standing grand challenge in computer graphics. The eye, arguably the most important facial feature, has so far only received very little attention, especially its shape.” (Berard, et al., 2014) To create the eyes, certain features had to be considered. The lens, for example, would make little noticeable difference to the final image apart from perhaps the red-eye effect: “When light impinges on the human eye at an angle not too far from the optical axis, it may propagate through the pupil, be reflected back from the fundus, and exit the pupil at approximately the same angle it entered. The fine blood vessels in the fundus color the reflected light red.” (Safonov, 2007)
This effect is primarily caused by caustics, with the lens of the eye focusing the incoming light onto the back of the eye. Caustics are tricky to render using traditional methods because they are focused patterns of light and therefore need far more samples in a path tracer to resolve correctly: “Caustics are interesting patterns caused by the light being focused when reflecting off glossy materials. Rendering them in computer graphics is still challenging: they correspond to high luminous intensity focused over a small area. Finding the paths that contribute to this small area is difficult, and even more difficult when using camera-based path tracing instead of bidirectional approaches.” (Li, et al., 2022) The red-eye effect is not desirable in video or photos, so it was not included in the eye model. The sclera, cornea, iris and limbus, however, contribute significantly to the realism of the eye and how it interacts with light, so all four were modelled. The sclera is made up of a vein texture, a bump texture created from procedural noise, and subsurface scattering so that light passes slightly through the surface before leaving, as it would in real life. This part of the eye transitions through the limbus to the cornea. The veins were replicated from a self-taken photo of the eye and hand-drawn in Substance Painter to get the desired effect. A rough material sketch follows the figures below.
inline_Image[figure2.png|Figure 2]
inline_Image[figure3.png|Figure 3]
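To make the sclera setup concrete, here is a minimal sketch in Blender's Python API of an equivalent material: a Principled BSDF with subsurface scattering and a procedural noise texture driving a bump input. The node and socket names are Blender's standard ones, but the values are illustrative placeholders rather than the project's exact settings.

```python
import bpy

# Minimal sketch of a sclera-style material: subsurface scattering
# plus a procedural-noise bump. Values are illustrative placeholders.
mat = bpy.data.materials.new(name="Sclera")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

bsdf = nodes["Principled BSDF"]
# Socket is "Subsurface Weight" in Blender 4.x ("Subsurface" in 3.x).
bsdf.inputs["Subsurface Weight"].default_value = 0.1

noise = nodes.new("ShaderNodeTexNoise")   # procedural bump source
noise.inputs["Scale"].default_value = 150.0

bump = nodes.new("ShaderNodeBump")
bump.inputs["Strength"].default_value = 0.05

links.new(noise.outputs["Fac"], bump.inputs["Height"])
links.new(bump.outputs["Normal"], bsdf.inputs["Normal"])
```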
The cornea is a simple glass material with a refractive index of 1.376 (Palanker, 2013). The limbus in this model of the eye is made up of two parts: one mimicking the edge highlight that appears as a caustic effect when light hits at grazing angles (François, et al., 2009), and the other forming the edge around the transition from sclera to cornea. The caustic effect is not created through actual caustics; instead a translucent shader picks up light from behind when light is at grazing angles, creating an edge glow. The iris is a sculpt created in ZBrush (Maxon, 2023). The individual fibres that make it up are all hand-sculpted, and this high poly sculpt is baked down into displacement and normal maps for optimisation while keeping the detail. It is then procedurally coloured to match the actor's eye colour. The advantage of having real displacement drive the iris detail is that it looks realistic at most levels of zoom, because the geometry actually deforms.
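The cornea and limbus glow can be sketched the same way, under the same caveat that this is an assumed minimal version rather than the project's exact node graph: a Glass BSDF at the measured IOR of 1.376, mixed with a Translucent BSDF by a Layer Weight facing term so light from behind only shows through at grazing angles.

```python
import bpy

mat = bpy.data.materials.new(name="Cornea_Limbus")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
out = nodes["Material Output"]

glass = nodes.new("ShaderNodeBsdfGlass")
glass.inputs["IOR"].default_value = 1.376   # cornea IOR (Palanker, 2013)

# Translucent shader picks up light from behind; the Layer Weight
# "Facing" output pushes it towards grazing angles only.
translucent = nodes.new("ShaderNodeBsdfTranslucent")
facing = nodes.new("ShaderNodeLayerWeight")
facing.inputs["Blend"].default_value = 0.2  # placeholder falloff

mix = nodes.new("ShaderNodeMixShader")
links.new(facing.outputs["Facing"], mix.inputs["Fac"])
links.new(glass.outputs["BSDF"], mix.inputs[1])
links.new(translucent.outputs["BSDF"], mix.inputs[2])
links.new(mix.outputs["Shader"], out.inputs["Surface"])
```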
The eye also includes features that are integral to it looking correct in the wider context of the face: the lacrimal caruncle (tear duct) and the wetness at the bottom of the eye. These features are essential for connecting the eye to the rest of the face realistically.
inline_Image[figure4.jpg|Figure 4]
inline_Image[figure5.png|Figure 5]
The aim of the face scanning was to get the highest possible level of detail. Two scanners were used: the Artec Eva (Artec, 2023), and the HandySCAN BLACK Elite (Creaform, 2023), which has an accuracy of 0.025 mm. Two scans were taken, one with the Artec Eva to get a colour version of the face and one with the HandySCAN to get a high-resolution version with skin details; these were then combined using the colour scan as a base. Another piece of software, Wrap (Faceform, 2023), takes a scan and wraps a lower poly version of a head to it, so the result has correct topology while following the shape of the head. The higher detail scan was then used to add extra detail to the lower poly head so it could be taken into ZBrush (Maxon, 2023).

In ZBrush the goal was to enhance the detail the scanners captured using an extremely detailed head model scanned in a full scanning studio, a level of detail that simply would not be possible with the handheld scanners (3dscanstore, 2023). This mirrors industry workflows, where scanned details are the best way to get the right detail into the correct areas of the face. It is similar to the work done on the film Logan, where the actor's face was recreated: “Working from the base scans provided by ICT – which offered a mask-like high-resolution version of the actor's face and skin – Image Engine fleshed out the remainder of the head and neck, cleaning up renders to align them with plate photography captured on set.” (Image Engine, 2017) In ZBrush the downloaded face was fitted to match the actor's scanned face, and the highest level of geometry was then projected onto the actor's face, creating an extremely high-resolution mesh with all the skin pores of a real human face. This was then exported as 8K displacement, normal and cavity maps.
inline_Image[figure6.png|Figure 6]
inline_Image[figure7.png|Figure 7]
Rendering skin is hard because “Most lighting from skin comes from sub-surface scattering” (Gosselin, 2004). Light interacts quite differently with the different layers of the skin, and subsurface scattering also smooths out a lot of the displacement and micro detail, so balancing the different parts can be tricky. The other difficult aspect of skin is its glossiness: there is an overall general shininess, but each micro detail of the skin also catches the light in its own way, so a specular map can be used to make each fleck of skin catch the light in a specific manner. The final detail that gives skin its realism is “peach fuzz”, the extremely small, thin hairs that grow all over the skin. This seemingly small detail adds a natural sheen, catching light at angles the skin alone would not, and creates a natural Fresnel effect. A sketch of the map hookup follows the figures below.
inline_Image[figure8.png|Figure 8]
inline_Image[figure9.png|Figure 9]
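As a hedged sketch of how the baked maps plug in, the snippet below wires a specular map and a displacement map into a Blender skin material. The file paths are placeholders, not the project's assets, and the specular socket name varies between Blender versions.

```python
import bpy

# Sketch: wiring baked maps into a skin material.
# File paths and values are placeholders.
mat = bpy.data.materials.new(name="Skin")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
bsdf = nodes["Principled BSDF"]
out = nodes["Material Output"]

spec_tex = nodes.new("ShaderNodeTexImage")   # per-pore specular breakup
spec_tex.image = bpy.data.images.load("//textures/skin_spec.png")
spec_tex.image.colorspace_settings.name = "Non-Color"

disp_tex = nodes.new("ShaderNodeTexImage")   # baked displacement map
disp_tex.image = bpy.data.images.load("//textures/skin_disp.png")
disp_tex.image.colorspace_settings.name = "Non-Color"

disp = nodes.new("ShaderNodeDisplacement")
disp.inputs["Scale"].default_value = 0.001   # keep pores subtle

# Socket is "Specular IOR Level" in Blender 4.x ("Specular" in 3.x).
links.new(spec_tex.outputs["Color"], bsdf.inputs["Specular IOR Level"])
links.new(disp_tex.outputs["Color"], disp.inputs["Height"])
links.new(disp.outputs["Displacement"], out.inputs["Displacement"])
```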
The hard surface modelling part of the project mostly comes down to the internal robot structure underneath the face. The process relied on gathering lots of reference of animatronics, as they are physical robots designed to look human, which is the main goal of the project. Most robotics are made up of a hard plastic or metal casing which contains all the smaller, more intricate parts. This is useful as it creates natural primary and secondary levels of detail, giving the viewer a balance of unity and variety: the smaller details add variety while the larger forms define the overall shape and balance the level of detail (Marder, 2023). Fitting the organic with the robotic turned out to be a challenge, as the outside of the android had to look as close to a human as possible while still functioning as a robotic version that could move underneath. This was especially tricky when fitting the teeth into the hard surface model, because in a real human the teeth and gums are directly connected to the inside of the mouth, which wraps around to connect with the lips and the rest of the skin.
inline_Image[figure10.png|Figure 10]
(Coutsoukis, 2020) This essentially left no room for the face to open up. It was solved by modelling the mouth only up to a specific point and adding mechanical pieces to the outside to cover the exposed parts. The lips therefore open directly onto the teeth, which looks good from the camera's view but is not strictly anatomically correct. The main face plates are modelled to resemble a skull, suggesting a human-like structure realised as a robotic version of it. The smaller details add to the illusion that the robotics are a detailed, functional mechanism; many of the parts are modelled after real components such as servos, which are used a lot in animatronics (animatronicsworkshop, n.d.). The eyes were at first simply fitted into the eye sockets of the skull portion of the robotics, but this made them stand out and look very strange on their own. This was solved by adding eyelid-like structures over the top of them to create an eye shape that would be more readable for human expression (Guarnera, et al., 2015).
The eyebrows and eyelashes are an often overlooked part of the face, yet they define its look: eyelashes add a natural edge to the eye that highlights its shape from a distance, while the eyebrows add expression. CG faces need both; without them they look unnatural. The challenge of creating eyelashes comes from their unique properties: they grow shorter on the lower portion of the eye, and they clump together towards the tips, forming uneven coverage there. This can be recreated using a few guide hairs that are interpolated and then clumped at the tips, as sketched below.
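A rough outline of that interpolate-then-clump setup in Houdini's Python API is below. The node type names (guidegroom, hairgen, hairclump) are Houdini's standard hair SOPs, but the wiring shown, the input ordering and the placeholder skin geometry are assumptions, not the project's exact network.

```python
import hou  # Houdini's Python API

geo = hou.node("/obj").createNode("geo", "eyelashes")
skin = geo.createNode("file", "eyelid_skin")   # placeholder skin geometry

guides = geo.createNode("guidegroom", "lash_guides")  # hand-placed guides
guides.setInput(0, skin)

# Interpolate full lash coverage from the sparse guides
# (hairgen is assumed to take skin first, guides second).
gen = geo.createNode("hairgen", "lash_gen")
gen.setInput(0, skin)
gen.setInput(1, guides)

# Pull the lash tips together so coverage is uneven at the ends.
clump = geo.createNode("hairclump", "lash_clump")
clump.setInput(0, gen)
clump.setDisplayFlag(True)
```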
First, a few tests were made to prove the method could work and that the hair would simulate properly over the deforming face mesh. These were done before the scans were completed, so an older scan of myself was used as the base, with a similar hairstyle on top.
inline_Image[figure11.gif|Figure 11]
This provided a basis for understanding Houdini's hair system and Vellum simulation. However, the actual workflow would need to change, as it was not well optimised and worked less well for longer hair. This test also used far fewer strands than a real human has, which is roughly anywhere from 80,000 to 120,000 according to the National Library of Medicine (Murphrey, et al., 2023). This meant the simulation would have to be interpolated before and after, reducing the calculation the solver has to do without simulating every individual hair strand. After this, tests were done with the newly scanned model. This brought its own challenges, because the final version of the new face had holes in the mesh, whereas the hair tools in Houdini needed a solid mesh. The test made it obvious that the hairstyle and the hair simulation needed to be built separately, from variations of the mesh processed differently in Houdini's nodes: a solid, subdivided version, which could also feed the VDB version that most of the hair groom nodes need, and a version for the simulation with thickness applied so it matched the thickness of the faceplates. A sketch of the two variants follows the figures below.
inline_Image[figure12.png|Figure 12]
inline_Image[figure13.png|Figure 13]
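Here is a minimal sketch of preparing those two mesh variants with Houdini's Python API. The SOP types used (subdivide, vdbfrompolygons, polyextrude) are standard nodes, but the network shown is an assumed simplification of the project's actual setup.

```python
import hou

geo = hou.node("/obj").createNode("geo", "face_prep")
face = geo.createNode("file", "scanned_face")  # placeholder face mesh

# Variant 1: solid subdivided mesh for grooming, which also feeds
# the VDB that most of the hair groom nodes need.
subdiv = geo.createNode("subdivide", "groom_mesh")
subdiv.setInput(0, face)
vdb = geo.createNode("vdbfrompolygons", "groom_vdb")
vdb.setInput(0, subdiv)

# Variant 2: thickened mesh for the Vellum simulation, so its
# thickness matches the robotic faceplates.
thick = geo.createNode("polyextrude", "sim_mesh")
thick.setInput(0, face)
```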
For the main project, hair guides are created first to form the underlying structure of the hair; this base layer defines the flow of the hair and is used for interpolation. The main consideration when creating the guide hairs is how the interpolation follows them from everywhere on the scalp. There need to be enough guide strands to define the interpolation; if there are not, hairs will usually pass through the head and end up in places they should not be. This makes the initial hair guides probably the most important part of the groom. For visualisation purposes, a node network was used to display the strands with thickness and individual colours; a minimal scripted version follows the figure below.
inline_Image[figure14.png|Figure 14]
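That visualisation network reduces to roughly a single wrangle. The sketch below sets one up from Python, giving each guide strand a random colour and a width attribute so strands read clearly in the viewport; the node path is a placeholder, and Cd and width are Houdini's standard colour and curve-thickness attributes.

```python
import hou

guides = hou.node("/obj/hair/guides")  # placeholder path to the guide SOP
viz = guides.parent().createNode("attribwrangle", "guide_viz")
viz.setInput(0, guides)
# Default run-over is Points; colour each point by the curve it belongs to.
viz.parm("snippet").set('''
// Random colour per strand, plus a viewport thickness.
int prim = pointprims(0, @ptnum)[0];   // the curve this point belongs to
v@Cd = set(rand(prim), rand(prim + 1), rand(prim + 2));
f@width = 0.002;
''')
viz.setDisplayFlag(True)
```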
These hair guides are then interpolated using a Hair Generate node, creating a full flow of hair that can be clumped and deformed in realistic ways to make it seem more natural. One problem with long hair that has a visible parting is that interpolation between the two sides will occur, meaning hair generates through the mesh and the parting does not separate correctly. This was solved in Houdini using attribute paint nodes: each side of the hair was painted separately, and that attribute drives the hair generation to ignore guides from the opposite side. This process is adapted from the workflow of Jesus Fernandez, a groom artist who has worked for ILM (Fernandez, 2024). A hedged sketch of the side-masking idea follows.
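One way to express that side masking, assuming the painted attribute (here called side, 0 on one half and 1 on the other) has been promoted onto the generated curve primitives, is a wrangle after the Hair Generate node that deletes any strand whose inherited value is ambiguous, i.e. interpolated across the parting. This is an illustrative stand-in for the attribute-driven setup described above, not the exact network.

```python
import hou

hairgen = hou.node("/obj/hair/hairgen1")  # placeholder path
mask = hairgen.parent().createNode("attribwrangle", "split_sides")
mask.setInput(0, hairgen)
mask.parm("class").set(1)  # run over primitives, one per generated curve
mask.parm("snippet").set('''
// "side" is painted 0/1 on the scalp and assumed to be promoted to
// the curve prims; strands interpolated across the parting inherit
// an in-between value, so remove them.
float s = prim(0, "side", @primnum);
if (s > 0.1 && s < 0.9)
    removeprim(0, @primnum, 1);
''')
```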
Hair simulation was a challenging step in the groom process, as it required moving parts of the process around so the solver was not calculating thousands of strands of hair individually. However, the original guide hairs alone did not provide enough resolution for the simulation, which meant a Hair Generate step was needed both before and after the Vellum solver.
inline_Image[figure15.png|Figure 15]
Many hair simulation workflows were tested, as not everything works correctly alongside the rest of the groom. The main challenge was getting an attribute called skinprim working; this attribute tells the root of each hair which part of the mesh it is attached to and is crucial for all hair grooming and simulation (SideFX, 2024). The last step in the hair process was moving the data out of Houdini and into Blender to render it with the rest of the face. This is where things became complicated: Alembic files are the usual way to take full simulation data out of Houdini, but Blender did not accept the animated curve data, so the files imported blank. The solution was to use procedural methods on both sides. In Houdini, the hair strands were converted to a mesh using the Ends node, which fills in a mesh polygon for each hair strand primitive. In Blender these were imported via Alembic and converted back into curves using geometry nodes, and an empty hair object was then created referencing the original imported simulation, defining its thickness and removing the last edge left over from the filled-in polygon. A scripted sketch of the Blender side follows the figure below.
inline_Image[figure16.png|Figure 16]
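The Blender half of that round trip can also be scripted. The sketch below uses Blender's Alembic import operator and the built-in mesh-to-curve conversion as a simplified stand-in for the geometry-node setup described above; the file path is a placeholder.

```python
import bpy

# Import the meshed hair from Houdini (path is a placeholder).
bpy.ops.wm.alembic_import(filepath="/tmp/hair_sim.abc")

# The imported objects are left selected after import; convert the
# filled-in polygons back into curves. This is a simplified stand-in
# for the geometry-node Mesh to Curve setup used in the project.
bpy.ops.object.convert(target='CURVE')
```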
One more noticeable issue was that importing a lot of hair was extremely slow. Importing a lower amount of hair and then interpolating it up inside Blender ran faster (still pretty slow, but a lot more manageable). This required the hair to be separated into two halves so the problem of interpolation across the middle of the hair would not come up again, meaning two Alembic files were exported. This also creates a bald spot in the middle of the hair, so two separate selections were made which interpolate slightly over the middle from each side to fill the gap. The bald spot was still an issue, however, as the skin was showing through, and since the hair simulation does not really affect the actual roots of the hair, Blender's own hair system was used to add two separate hair curve pieces that create a natural hair parting.
inline_Image[figure17.png|Figure 17]
Overall, this project was a very complex process and required a lot of problem solving throughout, involving multiple aspects of 3D from scanning to simulation. Given more time, the main change would be matching the hair more closely to the actor's actual hair; not enough reference photos were captured, so it was tricky to recreate precisely. Achieving photorealism was the goal, and although true photorealism is an almost impossible task, the project comes very close and hopefully, for most people, crosses the uncanny valley (Mori, 2012). The goal is to use this asset for filming a short film, and the processes used were flexible enough that things can easily be changed for specific shots, with the flexibility needed to match filmed plates of the real actor. The hair, for example, is fully procedural with no destructive elements, so it can very easily be re-simulated per shot, and the same is true of much of the project, such as the materials for the eyes and skin. This project was definitely a challenge to make, but it created a lot of interesting problems to solve.
inline_Image[figure18.jpg|Moodboard]
References
3dscanstore, 2023. 3D Scan Store. [Online] Available at: https://www.3dscanstore.com/blog/Free-3D-Head-Model [Accessed 07 12 2023].
animatronicsworkshop, n.d. Servos (animatronics). [Online] Available at: http://animatronicsworkshop.com/?page_id=306 [Accessed 11 12 2023].
Artec, 2023. Artec 3D. [Online] Available at: https://www.artec3d.com/ [Accessed 07 12 2023].
Berard, P. et al., 2014. High-Quality Capture of Eyes. ACM SIGGRAPH.
Coutsoukis, P., 2020. Anatomy: The Mouth. [Online] Available at: https://theodora.com/anatomy/the_mouth.html [Accessed 11 12 2023].
Creaform, 2023. Creaform 3D. [Online] Available at: https://www.creaform3d.com/ [Accessed 07 12 2023].
Faceform, 2023. Faceform. [Online] Available at: https://faceform.com [Accessed 07 12 2023].
Fernandez, J., 2024. Jesus FC Patreon. [Online] Available at: https://www.patreon.com/jesusfc [Accessed 17 12 2024].
François, G., Gautron, P., Breton, G. & Bouatouch, K., 2009. Image-Based Modeling of the Human Eye. IEEE Transactions on Visualization and Computer Graphics, pp. 815-827.
Gosselin, D., 2004. Real Time Skin Rendering. In: Game Developers Conference, Volume 9.
Guarnera, M., Hichy, Z., Cascio, M. & Carrubba, S., 2015. Facial Expressions and Ability to Recognize Emotions From Eyes or Mouth in Children. Europe's Journal of Psychology, 11(2), pp. 183-196.
Image Engine, 2017. Logan case study. [Online] Available at: https://image-engine.com/case-studies/logan/ [Accessed 07 12 2023].
Li, H. et al., 2022. Unbiased Caustics Rendering Guided by Representative Specular Paths. SIGGRAPH Asia, pp. 1-8.
Marder, L., 2023. The 7 Principles of Art and Design. [Online] Available at: https://www.thoughtco.com/principles-of-art-and-design-2578740 [Accessed 11 12 2023].
Maxon, 2023. ZBrush. [Online] Available at: https://www.maxon.net/en/zbrush [Accessed 02 12 2023].
Mori, M., 2012. The Uncanny Valley [From the Field]. IEEE Robotics & Automation Magazine, 19(2), pp. 98-100.
Murphrey, M., Agarwal, S. & Zito, P., 2023. Anatomy, Hair. In: StatPearls [Internet]. s.l.: StatPearls Publishing.
Palanker, D., 2013. Optical Properties of the Eye. AAO One Network, p. 48.
Safonov, I. V., 2007. Automatic Red Eye Detection. International Conference on Computer Graphics and Vision.
SideFX, 2024. Houdini guideskinattriblookup. [Online] Available at: https://www.sidefx.com/docs/houdini/nodes/sop/guideskinattriblookup.html [Accessed 05 01 2024].