Contributors to this article are Matt Leonard, John Montgomery, and Mike Seymour.
The 7th iteration of DigiPro took place on Saturday and it was once again very well-attended. The technical conference is one of our favorites, bringing together engineers, scientists, artists and more with a variety of presentations. The day kicked off with programming presentations from DNEG and DreamWorks, included a keynote address from SideFX’s Mark Elendt covering the top ten bugs in Houdini, and also touched on pipelines, the cloud, and more. For a flavor of the networking aspect of the event, be sure to check out our DigiPro photo gallery at the bottom of this article. Here we take a quick look at two of the talks from the day.
Firefly Detection with Half Buffers
Keith Jeffery’s (DreamWorks) presentation, Firefly Detection with Half Buffers, came in the last group of presentations of the day and was really well done. It covered noise spikes (fireflies): overly-bright pixels that stand out from their neighbors and are a common artifact in Monte Carlo ray traced images. In order to solve the problem, one must first detect the bright pixels (referred to as ‘outliers’) and then fix them. He did an outstanding job of explaining the issue as well as working through the solution in a detailed and clear manner.
One of the main issues in detecting the outliers was determining what area to sample. They use each half buffer to analyze the image data, as these are already readily available in the pipeline. Jeffery used an example image from How to Train Your Dragon 3, a dark scene of Hiccup carrying a torch, to demonstrate one of the issues with sampling. If one were to examine the entire image to determine fireflies, it would be impossible to do so because of the large range of pixel values from dark to light. The pixel values for the flame could easily be considered outliers compared to the darker areas.
Because of this, a much smaller area in each half buffer is examined at a time: 5×5 pixels. Within each 5×5 region, any pixel whose value deviates strongly from its neighbors is flagged as a possible outlier. With overlapping sample areas, an individual pixel will be checked against its neighbors 25 total times. If, and only if, it is flagged as a possible outlier in every one of those sample squares is it marked as an outlier.
This strictness matters because even within a small area there might be a strong specular highlight or (as Jeffery showed) glitter or another shiny object; a pixel is only marked if it is flagged all 25 times. Furthermore, if a detected outlier exists at the same pixel location in both half buffers, it can safely be assumed that the pixel contains a legitimate highlight (and not random noise) and can be added back into the image.
The color values of the remaining outliers are then reconstructed through an iterative process of filtering neighboring pixels through various routines, which were also covered as part of the talk.
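The detection logic as described can be sketched as follows. This is a minimal illustration, assuming a single-channel luminance buffer; the per-window outlier test (here a simple median-ratio check), the repair filter (a single median pass rather than the paper’s iterative reconstruction), and all names are our assumptions, as the exact statistics were not detailed above:

```python
import numpy as np

def detect_outliers(half_buffer, threshold=10.0, window=5):
    """Mark a pixel as an outlier only if it is flagged as a possible
    outlier in every 5x5 window that contains it (25 windows for an
    interior pixel)."""
    h, w = half_buffer.shape
    votes = np.zeros((h, w), dtype=int)   # times flagged in a window
    seen = np.zeros((h, w), dtype=int)    # windows that examined the pixel
    for y in range(h - window + 1):
        for x in range(w - window + 1):
            patch = half_buffer[y:y + window, x:x + window]
            # hypothetical per-window test: far brighter than the local median
            flagged = patch > threshold * max(np.median(patch), 1e-6)
            votes[y:y + window, x:x + window] += flagged
            seen[y:y + window, x:x + window] += 1
    return (seen > 0) & (votes == seen)

def fireflies(half_a, half_b):
    out_a, out_b = detect_outliers(half_a), detect_outliers(half_b)
    # an outlier present in BOTH half buffers is a legitimate highlight;
    # only outliers seen in one buffer are treated as fireflies
    return (out_a | out_b) & ~(out_a & out_b)

def repair(image, bad):
    # crude stand-in for the iterative reconstruction: local 3x3 median
    fixed = image.copy()
    for y, x in zip(*np.nonzero(bad)):
        fixed[y, x] = np.median(image[max(0, y - 1):y + 2, max(0, x - 1):x + 2])
    return fixed
```

In this sketch, a spike appearing in only one half buffer is repaired, while a bright pixel present in both is kept as a real highlight.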
Keynote: Mark Elendt’s Top 10 Houdini Bugs of All Time
After the lunch break, Elendt’s address was a fun way to kick off the afternoon, with a quick rundown of the top bugs ever found in Houdini. It was a fantastic presentation, with plenty of amusing code jokes and anecdotes as well as interesting insights into the bug reporting process at SideFX.
Side Effects, formed by Kim Davidson and Greg Hermanovic in 1987, has its roots in production. They moved out of production and first started selling software in 1989, when Elendt joined the company. In 1992, they had their first booth at SIGGRAPH, exhibiting their PRISMS software. Houdini 1.0 was released in 1997. For a more complete history of SideFX, check out our past featured story on fxguide.
While we won’t dive into the specifics of the top ten bugs, there were some really interesting stats and facts regarding the bug reporting process at SideFX. The company has a reputation for great support, and the root of that likely comes from the founders’ background in production — they know the demands artists and facilities face on a daily basis. Contacts through the support system are broken down into the common categories of bugs, requests for enhancements (RFEs), and questions, with the understanding that answers are needed as soon as possible for the bugs and questions categories. They aim to respond within 24 hours to any question, even, Elendt says (with a laugh), if the answer is “well, I don’t know.”
The support team at SideFX is made up of three individuals — Silvina, Jenny, and Hector — and they answer every email that comes into support. Elendt shared a few interesting stats about the number of support requests they’ve received:
- Since January 1st, the team has processed over 6000 requests, most of them questions and RFEs
- An issue is rarely resolved with a single reply; often two or three emails go back and forth about the request
- On average, a support request entails seven emails back and forth about the issue — sometimes as few as two or three, and occasionally as many as 50
- That means that since January 1st, the three-member team has processed over 40,000 emails, or one every two minutes of the day, which is “amazing” says Elendt (and we’d agree)
Every member of the development team — even including Elendt — takes a turn spending two weeks each year as the main contact point/bridge between support and dev. While Elendt admits that it’s an additional workload and no one really likes doing the job (because they’re often “unable to do what they want to do, which is write code”), it does give the dev team insight into the broader picture. They get to learn what each of the other developers is doing and also view things from the end user perspective — which in turn leads to a better product for the artists and facilities.
When a bug gets fixed it gets rolled into the nightly builds, entered into the bug database, and an email is sent to the client who reported the bug. With the nightly builds, Elendt says this means that Houdini always needs to be in a state where it’s “ready to ship”, so it changes the way they code. “We can’t say: ‘oh I’m gonna commit this code now and I’ll fix it a week from now’,” says Elendt. “We always have to make sure that Houdini is in a really good state to ship.” They have regression tests that run all the time to make sure it’s in a stable state.
There’s a corporate culture rule at SideFX that any software developer who “breaks the build is responsible for bringing in donuts for the rest of the team,” says Elendt. The internal wiki page actually has “Donut Rules” that lay out the guidelines. “What’s really nice,” jokes Elendt, “is that this actually seems to work…we haven’t had donuts for about six months.”
SIGGRAPH session recap: Well Worn
The first main talk at SIGGRAPH on cloth was a collection of four sessions entitled Well Worn which focused on the outstanding work found in Pixar’s Incredibles 2 and Coco. In this recap we take a look at two of the sessions.
Collaborative Costume Design and Construction on Incredibles 2
The first session was called ‘Collaborative Costume Design and Construction on Incredibles 2’ and the speakers included Aimei Kutt (Technical Director), Fran Kalal (Technical Director), Trent Crow (Character Shading Technical Artist) and Beth Albright (Shading TD).
Pixar started the project knowing there were a large number of costumes to work with for the show and that creating the most realistic look and feel was a key goal in selling the characters. A specific focus was put on the tailoring of the costumes along with finding the right initial design for each character. Although we’d met some of the characters before in the original movie back in 2004, Pixar knew they needed to update the costumes not just in design, but also due to changes in what was needed for the story, and the advancements in technology over the last 14 years.
The Pixar pipeline, like that of many studios, works in a linear way. Models are created, textures painted, characters rigged, animated, then simulated before fx, lighting and compositing finish off the shot. However, in some instances characters were still being adjusted even during the cloth simulation stages, which led to a more creative pipeline approach. Another key area which needed to dovetail into the simulation setup was texturing, and how the stretching of the geometry would affect the look of the shaders and textures. Something that came to light early on in the process was the requirement to shape and hold specific forms in the characters’ costumes, such as folds and creases. How the cloth fitted the character was key to design choices made earlier in production, and having those hold during the character’s performance was another important requirement for the team.
The world of the Incredibles spanned two areas of costume design: that of the Supers and that of their counterpart civilian world. Bob / Mr Incredible, voiced by Craig T. Nelson, was the first Super to have his costume built. The cloth was initially tied to the character’s rig, with a second layer of performance then layered on top through the simulation itself. Helen / Elastigirl, played by Holly Hunter, was a more complicated costume to construct and shade. It required a specific anisotropic look to the shading which needed to stand up during complex and more extreme deformations.
When the Incredibles were living their civilian lives, the character models needed to be slightly adjusted to make them look less ‘super’. Each civilian costume was specially tailored to help sell the emotional state of the character at any given time in the story, whether it be smart and business-like or tired and dishevelled. Lots of real world reference was used to help tie in the final look of each costume, including iconic movie stars such as Audrey Hepburn and Marilyn Monroe for Elastigirl.
A number of new tools were written to help the simulation artist, including the constraint system ‘Slide on Surface’. Another key technique used was an Override tool, which enabled the simulation team to remove certain areas of animation, such as around the neck of Mr. Incredible. This enabled the simulated necktie to better fit the character without bobbing about. Another technique the simulation team could utilize was the ability to break the cloth into patches and then simulate or adjust each region separately. The simulation could be adjusted so the cloth would hit the right position even in extreme poses.
Alongside the old characters, the story introduced a selection of new heroes and villains, each of whom had their own unique challenges. These included Winston Deavor, Evelyn Deavor, Voyd, Reflux and The Ambassador.
Dressed for Saving the Day: Finer Details for Garment Shading
The next part of the talk continued with Pixar and The Incredibles 2, titled ‘Dressed for Saving the Day: Finer Details for Garment Shading’. This more technical talk focused on the shading component of the costume design, specifically covering the use of Pixar’s standard shader PxrSurface, a layerable, physically plausible illumination model with Beckmann or GGX specular lobes. A new Bump to Roughness system was created which used a microfacet BRDF. This rendered at a quality similar to their existing super sampling approach, but at a speed closer to that of bump mapping. For more information, see the white paper ‘Geometry into Shading’ by Christophe Hery, Michael Kass, and Junyi Ling.
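The core idea of bump-to-roughness can be illustrated with a Toksvig-style sketch: when filtering averages away high-frequency bumps, the average normal shortens, and that lost detail can be re-expressed as a wider specular lobe. This is our illustration of the general technique, not Pixar’s actual mapping; the function name and variance formula are assumptions:

```python
import numpy as np

def bump_to_roughness(avg_normal, base_alpha):
    """Convert filtered-out bump detail into extra specular roughness.
    A shortened average normal signals variance among the original bumps."""
    n_len = min(max(np.linalg.norm(avg_normal), 1e-4), 1.0)
    variance = (1.0 - n_len) / n_len           # Toksvig-style variance estimate
    return float(np.sqrt(base_alpha ** 2 + variance))  # widened specular alpha
```

A unit-length average normal (no bump detail lost) leaves the roughness unchanged, while a shortened one widens the lobe.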
The talk then moved into a more technical discussion of the creation of procedural fuzz, which was used not only for facial peach fuzz but also for the more general micro fibres found on clothing. A new macro was created for Foundry’s Katana, adapted from their existing Maya plugin, outputting to the standard RIB format.
The next section of the presentation was entitled ‘Coco AnimSim: Increasing Quality and Efficiency’ and was focused on a mix of cloth and more general rigid body dynamics.
Due to the large amount of character animation that required simulations, a new system called ‘AnimSim’ was created at Pixar. This was designed to give the animation team a robust and stable method for running simulations based on their animations. Due to the design of many of the characters, specifically the skeletons, usual cloth techniques would prove problematic. Cloth could easily get snagged between bones or caught up in the more complex geometry.
To solve this problem, Pixar created a force field tool which could be used to help the cloth sim work with collision objects. One example which was highlighted was the complicated setup of the main character, Miguel, being able to put his hands in his pockets. Traditionally this is very hard not only to animate but also to simulate. The force field tool was used to open the pocket based on the proximity of the hand as it entered the pocket.
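The pocket-opening idea can be sketched as a proximity-based force field that pushes cloth points away from an approaching collider, ramping up as it gets closer. This is a rough illustration of the general concept, not Pixar’s implementation; the names, radius, and falloff are assumptions:

```python
import numpy as np

def proximity_force(cloth_points, hand_pos, radius=0.2, strength=5.0):
    """Push cloth points radially away from the hand as it approaches,
    opening the pocket before actual contact occurs."""
    forces = np.zeros_like(cloth_points)
    offsets = cloth_points - hand_pos
    dists = np.linalg.norm(offsets, axis=1)
    near = (dists < radius) & (dists > 1e-6)
    # linear falloff: full strength at the hand, zero at the radius boundary
    falloff = 1.0 - dists[near] / radius
    direction = offsets[near] / dists[near][:, None]
    forces[near] = direction * (strength * falloff)[:, None]
    return forces
```

Points inside the influence radius receive an outward push; points beyond it are untouched, so the force field only activates as the hand enters the pocket.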
Another new tool, Simulation Grab, was written to constrain the positions of CVs on a cloth object during the simulation process. One use of this would be a character needing to push up the long sleeves on a jacket or shirt. One of the more complex sequences in the movie was a Mexican dance sequence which involved multiple dancers with long, complex dresses. These new tools and techniques were instrumental in enabling this to be animated and simulated in a much more cost-effective way.
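Conceptually, grabbing CVs can be thought of as blending each simulated point toward an animator-chosen target, with a weight of 1 pinning it outright. A minimal sketch of that behavior (our assumption of how such a constraint works, not Pixar’s tool):

```python
import numpy as np

def grab_cvs(sim_positions, target_positions, weights):
    """Blend simulated cloth CVs toward grabbed target positions.
    weight 0 leaves the sim untouched; weight 1 pins the CV to the target."""
    w = np.asarray(weights, dtype=float)[:, None]
    return (1.0 - w) * sim_positions + w * target_positions
```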
Another key sequence which was covered was the collection of Chicharrón’s guitar. This complex sequence involved multiple simulations, including a character with cloth, a hammock and multiple rigid body objects. To achieve this, a proxy object was first used to add volume to the cloth simulation of the hammock, and then the rigid bodies were added and simmed. The animators had special tools for shaping the hammock and could then simulate on top of that initial shape. This enabled the character of Chicharrón to interact directly with the cloth while still keeping a simulated look.
One issue that had to be solved was being able to dial in an initial state for the cloth position, which enabled consistency across shots. As for the rigid bodies, these could be cached out following the sim, and those curves then simplified and adjusted by the animators to hit specific positions.
DigiPro 2018 Paparazzi
Photography credit to Mike Seymour