This course is broken down into a number of key areas: Colour: Theory and Pipeline; Lighting: Concepts and Stages; HDR Lighting: Preparation and Workflow; Look Dev: Concepts and Development; Optics: Cameras and Lenses; and finally Compositing: Single and Multi-Sample. We will draw on real-world examples, examine white papers and dig deep into the software tools of today's VFX artists.
The course is aimed at anyone interested in Look Development and Lighting. We will use Maya 2016 as the backbone of everything we’ll do. We’ll cover the tools in greater detail than ever before and include the techniques used in both large and small VFX facilities around the world. Alongside Maya we’ll be using RenderMan and Mental Ray as our renderers, Photoshop and AutoPano for HDR work, and Nuke for compositing. This will be an excellent course for anyone who already knows Maya but wants to expand their knowledge, or for someone just getting into the business.
The course is taught by Matt Leonard, who has been in the 3D and Visual Effects industry since 1990. He has been using Maya since its release in 1998, and before that 3D Studio (DOS), 3ds Max and Softimage. He is a member of the Visual Effects Society and has worked as a beta tester not only for Maya but also for Katana, Arnold, RenderMan, Mari, Nuke and Fusion. He currently works for MPC (Vancouver) as Lead Lighting Instructor, and before that ran his own training company, Sphere VFX, for nine years.
We start with a key foundation of Look Dev and Lighting: colour theory. We’ll look at various methodologies for dealing with colour in a VFX facility and gain an understanding of what it means to work in a physically based pipeline. From there we move on to Scene-Referred vs. Display-Referred imagery, Gamma, Gamut, ACES, OpenColorIO, and how to practically work in a linear colour pipeline.
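As a small illustration of what "working linear" involves, here is the standard sRGB decode that converts a display-referred pixel value back to linear light (a generic sketch, not tied to any of the packages used in the course):

```python
def srgb_to_linear(c):
    """Convert an sRGB-encoded value (0..1) to linear light.

    sRGB uses a short linear segment near black and a 2.4 power
    curve elsewhere; lighting and compositing maths should happen
    on the linear result, not on the encoded value.
    """
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4
```

Note that a mid-grey of 0.5 in sRGB corresponds to only about 0.214 in linear light, which is why mixing encoded and linear values silently breaks lighting maths.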
This begins a two-stage look at the fundamentals of lighting. We begin by discussing the main aims of lighting before looking at the various stages of lighting a scene. We’ll start by understanding how to read the lighting in a background plate and in lighting spheres shot on-set. Next we’ll look at how to light and match CG spheres to the ones shot on-set, and how this forms a basis for setting up your overall shot lighting.
We continue to look at the fundamentals of lighting. Having matched our chrome and grey spheres, we’ll progress to our main model, which we’ll start lighting with a simple grey material. From there we’ll add the main shaders and textures and finalize the lighting, matching it as closely as possible to the background plate. We finish up with a quick look at basic integration techniques using Nuke.
We begin to look at High Dynamic Range Images and their use in lighting. We start with a set of bracketed photos taken with an 8mm fisheye lens and convert these into multiple HDRIs. From there we use Autopano Giga to stitch them into a single LatLong image, and finally Nuke to colour balance it to the background plate we'll be using.
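The core idea behind turning brackets into an HDRI can be sketched in a few lines: the linearised pixel value from each exposure is divided by its shutter time to estimate scene radiance, and the estimates are blended with a weight that distrusts clipped or noisy values (a simplified, Debevec-style merge; the function and variable names are illustrative):

```python
def merge_brackets(brackets):
    """Merge bracketed exposures into one HDR radiance value per pixel.

    brackets: list of (pixel_value, shutter_seconds) pairs, with
    pixel_value already linearised into 0..1. A hat-shaped weight
    trusts mid-range values and ignores fully clipped ones.
    """
    num = 0.0
    den = 0.0
    for value, shutter in brackets:
        w = 1.0 - abs(2.0 * value - 1.0)  # peaks at 0.5, zero at 0 and 1
        num += w * (value / shutter)      # radiance estimate from this bracket
        den += w
    return num / den if den else 0.0
```

A well-exposed pixel then contributes the same radiance from every bracket, and the HDR result is simply their weighted agreement.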
We continue looking at HDRIs. Having colour balanced our LatLong, we now look at extracting the key light sources from the image and cleaning up what remains. From there we take all our image material into Maya and use the data to light a scene, based on the localised placement of the key light sources and the cleaned-up original LatLong.
We switch gears to Look Dev, focusing entirely on Maya’s new system for shader building introduced in the 2016 version, along with the latest version of Mental Ray. We'll use this to look at the basics of Look Development and how best to approach projects.
We start a two-stage project based around 'Digital Emily 2', developed by the Digital Human League. Initially this shot was generated using Chaos Group's V-Ray, but we will re-LookDev it using RenderMan.
We continue to work on our Digital Emily project, refining the Look Development and discussing the process of complex shader setups, the use of various texture maps, and how best to light the model.
We move from Look Dev to Cameras. We start by covering the basics of photography and how f-stop, shutter speed and ISO affect the overall image. We then move into the 3D realm and examine the workings of our digital camera, and how we can create a more plausible camera output that doesn’t break all the rules.
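The relationship between these three settings is often captured with the ISO-100-referenced exposure value, a convention widely used when building physically plausible CG cameras (a minimal sketch; the EV100 formula is standard, the function name is ours):

```python
import math

def ev100(f_number, shutter_seconds, iso):
    """ISO-100-referenced exposure value.

    EV100 = log2(N^2 / t) - log2(S / 100): closing the aperture one
    stop, halving the shutter time, or halving the ISO each raises
    the required exposure by exactly one EV.
    """
    return math.log2(f_number ** 2 / shutter_seconds) - math.log2(iso / 100)
```

For example, the classic "Sunny 16" settings of f/16 at 1/125s and ISO 100 land at roughly EV 15, and doubling the ISO shifts the result down by one stop.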
We look at the creation of Slap Comps using both single- and multi-sample images, i.e. Deep. We start with a general overview of what a Slap Comp is and isn’t. From there we’ll look in depth at Deep Image Data before covering how to output it in both Mental Ray 2016 and RenderMan. Then we’ll move into Nuke to build our own Slap Comp.
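As a taste of what Deep data means in practice, flattening a deep pixel is just the familiar premultiplied "over" operation applied to its samples, sorted front to back (a minimal sketch; names are illustrative, and real deep samples also carry per-sample depth ranges):

```python
def flatten_deep(samples):
    """Collapse deep samples (depth, premultiplied RGBA) into one flat pixel.

    Samples are sorted front-to-back, then accumulated with the
    standard premultiplied 'over': result = acc + sample * (1 - acc_a).
    """
    r = g = b = a = 0.0
    for _, (sr, sg, sb, sa) in sorted(samples):
        inv = 1.0 - a                  # transparency remaining in front
        r += sr * inv
        g += sg * inv
        b += sb * inv
        a += sa * inv
    return (r, g, b, a)
```

Because the per-sample data is kept until this final merge, deep renders let the comp re-order and intersect elements without re-rendering.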