Digital Human League: first free data

Posted by Mike Seymour | May 29, 2015

This year at FMX, the group of artists known as the Digital Human League, started by Chaos Group’s Christopher Nichols, presented the team’s first research, and we interviewed Chris (see below).

Based on Lightstage data from USC ICT, the group is exploring photorealistic human faces and how to construct and render them. The work is open and the files are fully published in an effort to advance the state of the art and our understanding of how to produce realistic faces. This first publication of the findings is only the start: the group is already working on the next stage, and it sees this presentation as the beginning of what will hopefully be a worldwide collaboration.

USC ICT is known for its pioneering work in facial scanning and has kindly donated time and effort to allow the core digital files to be released for anyone to research and explore further. Over a dozen other artists have contributed to the eyes, the hair and the shader models that support the fundamental scientific work.

Seven years after the original Digital Emily was created, Emily O’Brien was generous enough to have herself scanned again, this time at much higher resolution and fidelity. This scan is the first subject used in the Wikihuman project. This data, along with work done by the Digital Human League, is now available for download.

After presenting at FMX, Chris Nichols sat down with Mike Seymour (also a member of the DHL) and they discussed the project, the first results and how they hope the community can benefit. This interview formed part of fxphd’s BKD series and we are happy to also offer it free to everyone here via our blog.

The digital human pipeline aims to be informative and true to the maths of what actually happens. Of course, in any real situation one may want to tweak the lighting or adjust for artistic interpretation, but our aim is to offer a clean solution that illustrates how the physics of light really works inside a Physically Plausible Lighting setup, as you will hear in the interview.

The files are available to download here.

The model is provided in Alembic format. The supplied Maya scene file contains the model and shaders. Note: it requires Maya 2015 or above and V-Ray 3.0 for Maya or above. fxphd is working on a RenderMan version, and you are encouraged to try the data in any suitable 3D renderer. Under the license agreement the data is free for non-commercial use; you may use it but not resell it.
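If you prefer to script the import rather than open the supplied scene, a minimal Maya Python sketch along these lines should work. The file path is a placeholder, and the snippet only brings in the Alembic geometry; the shaders live in the supplied Maya scene and are not set up here.

```python
# Hypothetical example: import the downloaded Alembic model into Maya via Python.
# The path below is a placeholder; point it at wherever you unpacked the data.
import maya.cmds as cmds

# The Alembic importer ships with Maya as the AbcImport plug-in;
# load it if it is not already active.
if not cmds.pluginInfo('AbcImport', query=True, loaded=True):
    cmds.loadPlugin('AbcImport')

# Bring the geometry into the current scene (geometry only, no shaders).
cmds.AbcImport('/path/to/digitalEmily.abc', mode='import')
```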

For more information visit the DHL web site: wikihuman.org

 

Special thanks to Emily O’Brien, who gave generously of her time and allowed ICT to provide her data for this project. Her contribution and generosity are greatly appreciated. Special thanks also to Paul Debevec and the incredible team at USC ICT for their support, and to Chaos Group for its leadership in the DHL.

5 Responses to “Digital Human League: first free data”

  1. Lonzo5

    Thanks so very much for this. A wealth of great technical info from Chris, with well-placed, informed commentary and pertinent questions by Mike. I admire what you’ve accomplished so far. Keep it up!

  2. Thibault

    Wow, that’s really nice, thanks. It’s just a bit unfortunate that the Alembic file does not contain any texture coordinates; I guess they’re just in the Maya project.

    Really nice initiative though.

  3. Daniel najera

    Here are some tests I put through Corona:
    https://corona-renderer.com/forum/index.php/topic,6927.15.html

  4. Donald

    Excellent work by the Digital Human League. A priceless contribution.

  5. Max Liani

    Nice article and work by the Digital Human League. Congratulations.

    I would like to elaborate a bit on the reason why a naive implementation of Fresnel does not look right on skin. I had the feeling the explanation given was either wrong or oversimplified.

    The application of the Fresnel equation in some render engines is based only on the inclination of the incoming ray relative to the surface normal. The equation assumes the surface is locally flat. At a glancing angle the Fresnel reflection term approaches 1, and it falls off to low values very quickly as the inclination decreases and the ray becomes perpendicular to the surface.

    Rough surfaces are, by definition, surfaces with a microscopic geometry where the surface at any given point has normals facing a variety of directions, and therefore inclinations, but those features are so small that we treat the effect as a smooth statistical distribution rather than actual geometry. This means that inside each pixel, inside each sample, we have a whole range of inclinations for the surface we see. The surface is therefore not “locally flat” and a simple implementation of Fresnel won’t work.
    Rough surfaces seen at a glancing angle have only a small portion of their micro-surface normals actually at a glancing angle, where the Fresnel term would be at its peak. A large portion of the normal distribution will be pointing forward, making the reflection a lot less intense in that region of space.

    From a mathematical point of view, if you integrate the Fresnel equation over the statistical distribution of normals of the rough surface at a given point, you get a completely different-looking function in which reflections at glancing angles are much more attenuated. To get an approximate visual cue without writing any code, render a black plastic sphere illuminated by a flat white dome light on a black background, so that you get just the “intensity of the Fresnel function” in the form of an image. You should see a bright line of light around the sphere which quickly fades to black toward the middle. Take that image and blur it. This spreads the energy of the Fresnel reflection over a larger surface area while maintaining the overall “energy”: the bright peak of light is replaced by a softer, more spread-out distribution. This is just a visual cue, not an actual trick you can use.
    What I describe here is the part that is often done wrong in many shaders and renderers, including some of the major commercial products.
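    A minimal numerical sketch of that integration, assuming a Schlick Fresnel term with a skin-like F0 and a Beckmann-style microfacet distribution (illustrative choices, not taken from the comment above), shows how averaging Fresnel over the micro-normals pulls the glancing-angle peak down:

    ```python
    # Illustrative sketch (Python + NumPy), not production shader code.
    # Assumptions: Schlick Fresnel with F0 ~ 0.028 (skin-like) and a
    # Beckmann microfacet distribution with an arbitrary roughness of 0.4.
    import numpy as np

    def fresnel_schlick(cos_theta, f0=0.028):
        # Schlick's approximation of the Fresnel reflectance.
        return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

    def sample_beckmann_normals(roughness, n=200000, seed=0):
        # Sample microfacet normals around +Z from a Beckmann distribution.
        rng = np.random.default_rng(seed)
        u1, u2 = rng.random(n), rng.random(n)
        tan2_theta = -(roughness ** 2) * np.log(1.0 - u1)
        cos_theta = 1.0 / np.sqrt(1.0 + tan2_theta)
        sin_theta = np.sqrt(np.maximum(0.0, 1.0 - cos_theta ** 2))
        phi = 2.0 * np.pi * u2
        return np.stack([sin_theta * np.cos(phi),
                         sin_theta * np.sin(phi),
                         cos_theta], axis=1)

    def rough_fresnel(view_dir, roughness):
        # Average Fresnel over the micro-normals that face the viewer,
        # weighted by projected area (no shadowing/masking term here).
        m = sample_beckmann_normals(roughness)
        cos_vm = np.clip(m @ view_dir, 0.0, None)
        return np.sum(fresnel_schlick(cos_vm) * cos_vm) / np.sum(cos_vm)

    # Viewer at a grazing 85 degrees from the macro surface normal (+Z).
    theta = np.radians(85.0)
    view = np.array([np.sin(theta), 0.0, np.cos(theta)])

    print("naive Fresnel (macro normal only):", round(fresnel_schlick(view[2]), 3))
    print("Fresnel averaged over rough micro-normals:", round(rough_fresnel(view, 0.4), 3))
    ```

    The second number comes out well below the first: the grazing-angle peak of the flat-surface Fresnel term is spread across the whole range of micro-normal inclinations, which is exactly the attenuation described above.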

